# 🧠 Cortex Hub: Autonomous AI Agent Mesh & Orchestrator

Cortex Hub is a state-of-the-art, modular AI orchestration platform that bridges the gap between Large Language Models (LLMs) and local execution via a distributed **Agent Node Mesh**. It features a modern React-based workspace, a powerful **Skill System**, and advanced **RAG** (Retrieval-Augmented Generation) capabilities.

## ✨ Key Features

*   **🌐 Distributed Agent Mesh**: Connect multiple local or remote nodes (Linux, macOS, Windows) to your Hub. Each node can execute tasks, manage files, and provide terminal access.
*   **🌍 Dedicated Browser Service**: High-performance browser automation (Playwright) running as a dedicated system service. Centralized execution reduces latency and keeps the per-node footprint small.
*   **🛠️ Extensible Skill System**: Orchestrate AI capabilities via "Skills" (Terminal Control, File Management, System Analysis). Dynamic permissioning allows granular control over which users or groups can access specific nodes and skills.
*   **📂 Private RAG Pipeline**: Securely ingest documents into a FAISS vector store to ground AI responses in factual, local data.
*   **🔐 Industrial-Grade Security**: Integrated with OIDC (OpenID Connect) for secure user authentication and Role-Based Access Control (RBAC). 
*   **🖥️ Unified Command Center**: A sleek, premium React frontend for managing sessions, configuring nodes, and monitoring the swarm in real-time.
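To make the RAG feature concrete, here is a minimal sketch of the flat-index retrieval it relies on. The corpus and 2-D "embeddings" below are invented for illustration (real deployments embed document chunks with a model and store them in FAISS); the brute-force L2 search mirrors what a FAISS `IndexFlatL2` does, just without the optimized index.

```python
import math

# Toy corpus with hand-made 2-D "embeddings" (invented for illustration;
# Cortex Hub would embed real document chunks before indexing them).
docs = {
    "reset a node token":    (0.9, 0.1),
    "rotate admin password": (0.7, 0.4),
    "mount an NFS volume":   (0.1, 0.95),
}

def search(query, k=2):
    # Brute-force L2 nearest-neighbor search over every stored vector --
    # the same behavior a FAISS IndexFlatL2 provides.
    ranked = sorted(docs, key=lambda d: math.dist(query, docs[d]))
    return ranked[:k]

# The top hits ground the LLM's answer in local documents.
hits = search((0.85, 0.2))
```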

---

## 🆕 Recent Updates (v2 Architecture)
*   **WebMCP Integration**: Comprehensive Model Context Protocol (MCP) server integration, mapping 30+ tools (Sessions, Security Groups, Access Controls, and Swarm Operations) directly to AI clients via JSON-RPC over HTTP/SSE.
*   **Enhanced Node Stability**: Implemented deadlock-free PTY process execution using native Python `process_group` bindings, eliminating gRPC stream drops during heavy remote task execution.
*   **Backend Refactoring**: Split monolithic API files into cleanly decoupled services, routing modules, and a flexible plugin-based tool repository for robust scaling.
*   **Frontend Modularization**: Componentized sprawling React files into dedicated views, reusable UI components, and distinct feature directories for significantly enhanced maintainability.
*   **Browser Service Concurrency**: Integrated a dynamic worker pool with a `ParallelFetch` gRPC pipeline and clean Markdown Document Extraction for high-speed AI web scraping.
*   **Mesh Stability & GC**: Advanced swarm-level file synchronization and background garbage collection now guarantee prompt removal of zombie nodes, orphaned `ffmpeg`/`playwright` processes, and stale temporary directories across all worker drives.
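The process-group change above can be illustrated with a minimal sketch: spawn each task in its own session (and therefore its own process group) so a runaway child can be signalled as a group without touching the agent itself. The command and timeout here are placeholders, not the agent's actual internals.

```python
import os
import signal
import subprocess

def run_task(cmd, timeout=30):
    # start_new_session=True calls setsid() in the child, placing the task
    # in its own process group so it cannot hold the agent's stream hostage.
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        start_new_session=True,
    )
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Because of setsid(), the child's pid is also its pgid: signal the
        # whole group to reap any grandchildren (e.g. stray ffmpeg workers).
        os.killpg(proc.pid, signal.SIGKILL)
        out, _ = proc.communicate()
    return proc.returncode, out.decode()

rc, out = run_task(["echo", "node ok"])
```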

---

## 🚀 Quick Start (Deploy & Run)

Follow these steps to pull the code, launch the system locally, and configure your mesh:

### 1. Pull the Code and Launch
```bash
git clone https://gitbucket.jerxie.com/git/yangyangxie/cortex-hub.git
cd cortex-hub
# Launch the interactive setup wizard
bash setup.sh
```

**The Setup Wizard will:**
1. Prompt you for an Admin Email address.
2. Automatically generate secure cryptographic keys and an initial `CORTEX_ADMIN_PASSWORD`.
3. Spin up the entire system via Docker Compose in the background.
4. Output your generated login credentials directly in the terminal—**save them!**
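Under the hood, the wizard's credential step amounts to something like the following sketch. Only `CORTEX_ADMIN_PASSWORD` is taken from the wizard's description above; the `JWT_SECRET` name and the 32-byte lengths are assumptions for illustration.

```python
import secrets

# URL-safe random secrets, roughly what a setup wizard writes into a .env
# file. The 32-byte lengths and the JWT_SECRET name are assumptions.
admin_password = secrets.token_urlsafe(32)
jwt_secret = secrets.token_hex(32)

env_lines = [
    f"CORTEX_ADMIN_PASSWORD={admin_password}",
    f"JWT_SECRET={jwt_secret}",
]
```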

**What components are spawned?**
When the Docker stack boots, the following services initialize:
* **`ai-hub` (API)**: The core Python API (FastAPI) handling orchestration, websocket streaming, RAG logic, and the SQLite Database (`cortex.db`).
* **`ai-frontend` (UI)**: The React-based dashboard served directly to your browser on port `8002`.
* **`browser-service`**: A dedicated high-performance Playwright container handling asynchronous heavy web scraping and visual rendering.

*   **Frontend UI**: [http://localhost:8002](http://localhost:8002)

---

## ⚙️ Initial Configuration

Open the Frontend UI in your browser and sign in using the **Admin Email** and the auto-generated **Initial Password** from the setup script. Once authenticated, follow these steps to bootstrap your mesh:

### 1. Set up your AI Provider
Navigate to the **Configuration** page via the sidebar. In the Settings pane, input and save your API Keys (e.g., Gemini, DeepSeek). The system requires a valid LLM provider to reason and execute tasks.

### 2. Configure Access Control
Access to nodes and skills is governed by granular Role-Based Access Control (RBAC). 
1. Navigate to the **Groups** or **Admin** section.
2. Define a **User Group** and assign your user account to it.
3. Assign the system resources this group needs (Skills, Node access mappings, restricted Models) so its members can use them.

### 3. Add a New Node into the Agent Mesh
To give the AI hands-on execution power, you must register and deploy a new "Node". Nodes now run as dependency-free compiled Python binaries.
1. Navigate to the **Swarm Nodes** page in the UI.
2. Click **Register Node** to generate a unique Node Slug and an Invite Auth Token.
3. In your terminal (on the machine you want to use as a node), copy and paste the one-liner command provided in the UI! For example:
   ```bash
   curl -sSL 'http://localhost:8000/api/v1/nodes/provision/sh/YOUR_NODE_ID?token=YOUR_TOKEN' | bash
   ```
4. Watch the node's pulsing indicator turn green on your Swarm dashboard!
*(Note: Prebuilt agent binaries target **Linux AMD64** and **Linux ARM64**. On **macOS**, download the source bundle and build the darwin binary locally if native execution is required.)*

### 4. Start Using Swarm Control
With your LLM configured and your local node connected, you can now enter **Swarm Control** or start a Chat session! You can attach specific nodes to your chat, granting the AI agent secure hands-on access to the file systems and terminals of those environments to complete workflows autonomously.

---

## 🏗️ Deployment Architecture

Cortex Hub uses a layered deployment strategy to keep the core codebase clean while supporting specific production environments.

### 📂 Folder Structure
*   **`ai-hub/`**: The Core Python (FastAPI) backend.
*   **`frontend/`**: The React-based unified dashboard.
*   **`agent-node/`**: The lightweight client software for distributed nodes.
*   **`skills/`**: Source for AI capabilities (Shell, Browser, etc.).
*   **`deployment/`**: Example environment-specific overrides (e.g., production environments with NFS support).
*   **`scripts/`**: General helper scripts for local orchestration and testing.

### 🚢 Production Deployment
Because Cortex Hub runs as isolated containers communicating over Docker networks, it deploys cleanly with standard orchestration tooling (`docker compose` overrides, Kubernetes, or Nomad). Mount your storage volumes at `/app/data` and expose the frontend proxy.

---

## 🏛️ Project Layout

```text
.
├── ai-hub/             # Backend API & Orchestrator
├── frontend/           # Frontend Workspace (React)
├── agent-node/         # Distributed Node Client (Lightweight)
├── browser-service/    # Dedicated Browser Automation Service (Playwright)
├── skills/             # AI Skill Definitions
├── deployment/         # Env Overrides (NFS, SSL, OIDC)
├── scripts/            # CI/CD & Maintenance Scripts
├── cortex.db           # Local SQLite Database
└── docker-compose.yml  # Generic Development Entrypoint
```

## 📚 API Documentation

Cortex Hub APIs are self-documenting.
* **Interactive Swagger UI**: When running the backend locally, navigate to [http://localhost:8000/docs](http://localhost:8000/docs).
* **Static Markdown Reference**: The full API definition (including detailed endpoints, schemas, and `curl` examples) is generated statically from the FastAPI OpenAPI schema. You can view the grouped documentation at `ai-hub/docs/api_reference/index.md`. 

If you make any changes to the backend APIs, regenerate the Markdown documentation by running:
```bash
cd ai-hub && python3 ../.agent/utils/generate_api_docs.py
```
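The static reference generation is essentially a walk over the FastAPI OpenAPI schema, grouping each operation under its tag before rendering Markdown. The sketch below shows the grouping step on an invented sample schema; the endpoints and summaries are placeholders, not the real API surface.

```python
from collections import defaultdict

# A tiny stand-in for the schema FastAPI serves at /openapi.json.
# Paths, tags, and summaries are invented for illustration.
sample_schema = {
    "paths": {
        "/api/v1/nodes": {
            "get": {"tags": ["Swarm"], "summary": "List nodes"},
            "post": {"tags": ["Swarm"], "summary": "Register node"},
        },
        "/api/v1/sessions": {
            "get": {"tags": ["Sessions"], "summary": "List sessions"},
        },
    }
}

def group_by_tag(schema):
    # Build {tag: ["METHOD path -- summary", ...]}, the shape a Markdown
    # renderer would turn into one section per tag.
    groups = defaultdict(list)
    for path, ops in schema["paths"].items():
        for method, op in ops.items():
            for tag in op.get("tags", ["Untagged"]):
                groups[tag].append(f"{method.upper()} {path} -- {op['summary']}")
    return dict(groups)

grouped = group_by_tag(sample_schema)
```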

## 🧪 Testing

*   **Backend**: `pytest ai-hub/tests/`
*   **Frontend Health**: `scripts/frontend_tester`
*   **Connectivity**: `scripts/test_ws.js`

---

## ⚖️ License
Distributed under the MIT License. See `LICENSE` for more information.
