Cortex Hub is a state-of-the-art, modular AI orchestration platform that bridges the gap between Large Language Models (LLMs) and local execution via a distributed Agent Node Mesh. It features a modern React-based workspace, a powerful Skill System, and advanced RAG (Retrieval-Augmented Generation) capabilities.
Additional highlights: a ParallelFetch gRPC pipeline with clean Markdown document extraction for high-speed AI web scraping, plus automatic cleanup of orphaned ffmpeg/playwright processes and stale temporary directories across all worker drives.

Follow these steps to pull the code, launch the system locally, and configure your mesh:
```bash
git clone https://gitbucket.jerxie.com/git/yangyangxie/cortex-hub.git
cd cortex-hub

# Launch the interactive setup wizard
bash setup.sh
```
The Setup Wizard will generate your initial credentials, including CORTEX_ADMIN_PASSWORD.

What components are spawned? When the Docker stack boots, the following services initialize:
- ai-hub (API): The core Python API (FastAPI) handling orchestration, websocket streaming, RAG logic, and the SQLite database (cortex.db).
- ai-frontend (UI): The React-based dashboard served directly to your browser on port 8002.
- browser-service: A dedicated high-performance Playwright container handling asynchronous heavy web scraping and visual rendering.
Frontend UI: http://localhost:8002
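To verify the stack actually came up, you can probe the published ports. A minimal sketch — only the frontend's port 8002 is documented above; any other port you add to the map is an assumption to be checked against your docker-compose.yml:

```python
import socket

def service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener answers on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Only the frontend port (8002) is documented; look up the API and
# browser-service ports in docker-compose.yml before adding them here.
for name, port in {"ai-frontend": 8002}.items():
    status = "up" if service_up("localhost", port) else "down"
    print(f"{name}: {status}")
```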
Open the Frontend UI in your browser and sign in using the Admin Email and the auto-generated Initial Password from the setup script. Once authenticated, follow these steps to bootstrap your mesh:
Navigate to the Configuration page via the sidebar. In the Settings pane, input and save your API Keys (e.g., Gemini, DeepSeek). The system requires a valid LLM provider to reason and execute tasks.
Access to nodes and skills is governed by granular Role-Based Access Control (RBAC).
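Conceptually, RBAC boils down to a mapping from roles to permission strings. The sketch below is illustrative only — the role and permission names are made up, not Cortex Hub's actual schema:

```python
# Illustrative only: role and permission names are assumptions,
# not Cortex Hub's actual RBAC schema.
ROLE_GRANTS = {
    "admin":    {"nodes:deploy", "skills:edit", "chat:attach-node"},
    "operator": {"chat:attach-node"},
}

def allowed(role: str, permission: str) -> bool:
    """Check whether a role carries a given permission."""
    return permission in ROLE_GRANTS.get(role, set())

print(allowed("admin", "nodes:deploy"))     # True
print(allowed("operator", "nodes:deploy"))  # False
```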
To give the AI hands-on execution power, you must register and deploy a new "Node":
```bash
bash add_node.sh
# It will prompt you for the Node ID and Auth Token you just generated!
```
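A node would typically prove possession of its Auth Token without sending it in the clear. Here is a hedged sketch using an HMAC challenge-response; the actual handshake performed by add_node.sh is not documented here, so treat this purely as an illustration:

```python
import hashlib
import hmac

def sign_challenge(auth_token: str, challenge: str) -> str:
    """Illustrative HMAC-SHA256 response to a hub-issued challenge."""
    return hmac.new(auth_token.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

# The hub side would recompute the digest with its stored token and
# compare using hmac.compare_digest() to avoid timing leaks.
response = sign_challenge("node-secret-token", "nonce-1234")
print(len(response))  # 64 hex characters
```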
With your LLM configured and your local node connected, you can now enter Swarm Control or start a Chat session! You can attach specific nodes to your chat, granting the AI agent secure hands-on access to the file systems and terminals of those environments to complete workflows autonomously.
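Granting an agent hands-on terminal access usually comes with guardrails. A minimal allowlist sketch — Cortex Hub's real execution policy mechanism is an assumption here, and the allowed binaries are made up:

```python
import shlex

# Hypothetical allowlist; Cortex Hub's actual execution policy may differ.
ALLOWED_BINARIES = {"ls", "cat", "git", "pytest"}

def vet_command(cmdline: str) -> bool:
    """Permit a shell command only if its binary is on the allowlist."""
    parts = shlex.split(cmdline)
    return bool(parts) and parts[0] in ALLOWED_BINARIES

print(vet_command("git status"))  # True
print(vet_command("rm -rf /"))    # False
```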
Cortex Hub uses a layered deployment strategy to keep the core codebase clean while supporting specific production environments.
- ai-hub/: The core Python (FastAPI) backend.
- frontend/: The React-based unified dashboard.
- agent-node/: The lightweight client software for distributed nodes.
- skills/: Source for AI capabilities (Shell, Browser, etc.).
- deployment/: Example environment-specific overrides (e.g., production environments with NFS support).
- scripts/: General helper scripts for local orchestration and testing.

Because Cortex Hub runs as isolated containers communicating over Docker networks, standard orchestration tools (generic docker compose overrides, Kubernetes, or Nomad) can stand up production environments securely without code changes. Simply mount your necessary storage volumes to /app/data and expose the frontend proxy.
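For example, a production override could look like the following. This file is not shipped with the repo; the NFS path and port mapping are assumptions to adapt, and only the service names and the /app/data mount point come from the description above:

```yaml
# docker-compose.prod.yml -- illustrative override, not part of the repo
services:
  ai-hub:
    volumes:
      - /mnt/nfs/cortex-data:/app/data   # durable storage mounted at /app/data
  ai-frontend:
    ports:
      - "443:8002"                       # expose the frontend behind your SSL proxy
```

Launch it by layering the override on top of the generic entrypoint: `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d`.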
```
.
├── ai-hub/             # Backend API & Orchestrator
├── frontend/           # Frontend Workspace (React)
├── agent-node/         # Distributed Node Client (Lightweight)
├── browser-service/    # Dedicated Browser Automation Service (Playwright)
├── skills/             # AI Skill Definitions
├── deployment/         # Env Overrides (NFS, SSL, OIDC)
├── scripts/            # CI/CD & Maintenance Scripts
├── cortex.db           # Local SQLite Cache
└── docker-compose.yml  # Generic Development Entrypoint
```
Cortex Hub APIs are self-documenting. The API reference (including curl examples) is generated statically from the FastAPI OpenAPI schema, and you can view the grouped documentation at ai-hub/docs/api_reference/index.md. If you make any changes to the backend APIs, regenerate the Markdown documentation by running:
```bash
cd ai-hub && python3 ../.agent/utils/generate_api_docs.py
```
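To illustrate what such a generator does, here is a minimal sketch that renders an OpenAPI schema to Markdown. It is not the actual generate_api_docs.py, and the demo schema is made up:

```python
# Minimal sketch of an OpenAPI -> Markdown renderer; the real
# .agent/utils/generate_api_docs.py may work quite differently.
def openapi_to_markdown(schema: dict) -> str:
    """Render each path/method pair as a Markdown section."""
    lines = [f"# {schema['info']['title']}"]
    for path, methods in sorted(schema.get("paths", {}).items()):
        for method, op in methods.items():
            lines.append(f"## {method.upper()} {path}")
            if "summary" in op:
                lines.append(op["summary"])
    return "\n\n".join(lines)

# Hypothetical demo schema, shaped like FastAPI's /openapi.json output.
demo = {
    "info": {"title": "Cortex Hub API"},
    "paths": {"/nodes": {"get": {"summary": "List registered nodes"}}},
}
print(openapi_to_markdown(demo))
```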
Run the backend test suite with pytest:

```bash
pytest ai-hub/tests/
```

Frontend and websocket smoke-test helpers live in scripts/frontend_tester and scripts/test_ws.js.

Distributed under the MIT License. See LICENSE for more information.