# 🧠 Cortex Hub: Autonomous AI Agent Mesh & Orchestrator

Cortex Hub is a state-of-the-art, modular AI orchestration platform that bridges the gap between Large Language Models (LLMs) and local execution via a distributed Agent Node Mesh. It features a modern React-based workspace, a powerful Skill System, and advanced RAG (Retrieval-Augmented Generation) capabilities.

## ✨ Key Features

- 🌐 Distributed Agent Mesh: Connect multiple local or remote nodes (Linux, macOS, Windows) to your Hub. Each node can execute tasks, manage files, and provide terminal access.
- 🌍 Dedicated Browser Service: High-performance browser automation (Playwright) running as a dedicated system service. Centralized execution reduces latency and keeps the per-node footprint small.
- 🛠️ Extensible Skill System: Orchestrate AI capabilities via "Skills" (Terminal Control, File Management, System Analysis). Dynamic permissioning allows granular control over which users or groups can access specific nodes and skills.
- 📂 Private RAG Pipeline: Securely ingest documents into a FAISS vector store to ground AI responses in factual, local data.
- 🔐 Industrial-Grade Security: Integrated with OIDC (OpenID Connect) for secure user authentication and Role-Based Access Control (RBAC).
- 🖥️ Unified Command Center: A sleek React frontend for managing sessions, configuring nodes, and monitoring the swarm in real time.
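The RAG pipeline's core idea is retrieve-then-ground: embed document chunks, embed the query, and feed the closest chunks to the LLM. The sketch below illustrates that retrieval step with toy 3-d vectors and plain cosine similarity; it is not the actual pipeline, which uses model embeddings stored in FAISS.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, index, k=2):
    """Return the k chunk texts whose embeddings best match the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "embeddings" for illustration; real embeddings are high-dimensional.
index = [
    ("node setup guide",    [1.0, 0.1, 0.0]),
    ("billing policy",      [0.0, 1.0, 0.0]),
    ("terminal skill docs", [0.9, 0.2, 0.1]),
]
# A query vector near the two node/terminal chunks ranks them first.
hits = retrieve([1.0, 0.0, 0.1], index)
```

The retrieved `hits` are then prepended to the LLM prompt so answers stay grounded in local data.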

## 🆕 Recent Updates (v2 Architecture)

- Backend Refactoring: Split monolithic API files into cleanly decoupled services, routing modules, and a flexible plugin-based tool repository for robust scaling.
- Frontend Modularization: Componentized sprawling React files into dedicated views, reusable UI components, and distinct feature directories for significantly easier maintenance.
- Browser Service Concurrency: Integrated a dynamic worker pool with a ParallelFetch gRPC pipeline and clean Markdown document extraction for high-speed AI web scraping.
- Mesh Stability & GC: Advanced swarm-level file synchronization and background garbage collection now guarantee instant removal of zombie nodes, orphaned ffmpeg/playwright processes, and stale temporary directories across all worker drives.
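The bounded worker-pool pattern behind the browser service can be sketched with an asyncio semaphore. This is a simplified illustration of the concurrency model, not the service's actual gRPC pipeline; `fetch` here is a placeholder rather than a real Playwright call.

```python
import asyncio

async def fetch(url: str) -> str:
    # Placeholder for a real Playwright page fetch.
    await asyncio.sleep(0)
    return f"<content of {url}>"

async def parallel_fetch(urls, max_workers=4):
    """Fetch all URLs concurrently, never running more than max_workers at once."""
    sem = asyncio.Semaphore(max_workers)

    async def bounded(url):
        async with sem:            # a worker slot is held only while fetching
            return await fetch(url)

    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(parallel_fetch([f"https://example.com/{i}" for i in range(8)]))
```

Capping in-flight browser pages this way keeps memory bounded while still saturating the pool.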

## 🚀 Quick Start (Deploy & Run)

Follow these steps to pull the code, launch the system locally, and configure your mesh:

### 1. Pull the Code and Launch

```bash
git clone https://gitbucket.jerxie.com/git/yangyangxie/cortex-hub.git
cd cortex-hub
# Launch the interactive setup wizard
bash setup.sh
```

The Setup Wizard will:

1. Prompt you for an Admin Email address.
2. Automatically generate secure cryptographic keys and an initial CORTEX_ADMIN_PASSWORD.
3. Spin up the entire system via Docker Compose in the background.
4. Output your generated login credentials directly in the terminal; save them!
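Step 2's secrets can be produced with Python's standard `secrets` module. The snippet below is an illustrative sketch of that approach, not the wizard's actual implementation; the password length and alphabet are assumptions.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random alphanumeric password using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

admin_password = generate_password()
jwt_secret = secrets.token_hex(32)  # 64 hex chars, suitable for signing keys
```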

What components are spawned? When the Docker stack boots, the following services initialize:

- ai-hub (API): The core Python API (FastAPI) handling orchestration, websocket streaming, RAG logic, and the SQLite database (cortex.db).
- ai-frontend (UI): The React-based dashboard served directly to your browser on port 8002.
- browser-service: A dedicated high-performance Playwright container handling asynchronous heavy web scraping and visual rendering.

Once the stack is up, open the Frontend UI at http://localhost:8002.


βš™οΈ Initial Configuration

Open the Frontend UI in your browser and sign in using the Admin Email and the auto-generated Initial Password from the setup script. Once authenticated, follow these steps to bootstrap your mesh:

### 1. Set up your AI Provider

Navigate to the Configuration page via the sidebar. In the Settings pane, input and save your API Keys (e.g., Gemini, DeepSeek). The system requires a valid LLM provider to reason and execute tasks.

### 2. Configure Access Control

Access to nodes and skills is governed by granular Role-Based Access Control (RBAC).

1. Navigate to the Groups or Admin section.
2. Define a User Group and assign your user account to it.
3. Assign necessary system resources (Skills, Node access mappings, restricted Models) to this group so its members can utilize them.
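Conceptually, an RBAC check reduces to: does any group the user belongs to grant both the skill and the node? The sketch below is a hypothetical in-memory model of that check; the names and structure are illustrative, and the real mappings live in the Hub's database.

```python
# Hypothetical grant tables -- illustrative only, not the Hub's actual schema.
GROUP_GRANTS = {
    "ops-team": {"skills": {"terminal", "file_manager"}, "nodes": {"build-01"}},
}
USER_GROUPS = {"alice": {"ops-team"}}

def can_use(user: str, skill: str, node: str) -> bool:
    """A user may run a skill on a node only if some group grants both."""
    return any(
        skill in GROUP_GRANTS[g]["skills"] and node in GROUP_GRANTS[g]["nodes"]
        for g in USER_GROUPS.get(user, set())
        if g in GROUP_GRANTS
    )
```

Under this model, granting a skill to a group immediately applies to every member, which is why step 2 (membership) comes before step 3 (grants).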

### 3. Add a New Node into the Agent Mesh

To give the AI hands-on execution power, you must register and deploy a new "Node":

1. Navigate to the Swarm Nodes page in the UI.
2. Click Register Node to generate a unique Node Slug and an Invite Auth Token.
3. In your terminal (on the machine you want to use as a node), run the node deployment script:

   ```bash
   bash add_node.sh
   # It will prompt you for the Node Slug and Auth Token you just generated!
   ```

4. Watch the node's pulsing indicator turn green on your Swarm dashboard! (Note: agent nodes are extensively tested on macOS and Linux; Windows deployment is not supported yet.)

### 4. Start Using Swarm Control

With your LLM configured and your local node connected, you can now enter Swarm Control or start a Chat session! You can attach specific nodes to your chat, granting the AI agent secure hands-on access to the file systems and terminals of those environments to complete workflows autonomously.


πŸ—οΈ Deployment Architecture

Cortex Hub uses a layered deployment strategy to keep the core codebase clean while supporting specific production environments.

### 📂 Folder Structure

- `ai-hub/`: The core Python (FastAPI) backend.
- `frontend/`: The React-based unified dashboard.
- `agent-node/`: The lightweight client software for distributed nodes.
- `skills/`: Source for AI capabilities (Shell, Browser, etc.).
- `deployment/`: Example environment-specific overrides (e.g., production environments with NFS support).
- `scripts/`: General helper scripts for local orchestration and testing.

### 🚢 Production Deployment

Because Cortex Hub runs as isolated containers communicating over Docker networks, you can stand up production environments securely with standard orchestration tools (generic docker compose overrides, Kubernetes, or Nomad). Mount your persistent storage volumes at /app/data and expose the frontend proxy.
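A compose-based production override might look like the following sketch. The file name `docker-compose.prod.yml` and the NFS mount path are assumptions; adapt both to your environment.

```yaml
# docker-compose.prod.yml -- hypothetical override layered on docker-compose.yml
services:
  ai-hub:
    volumes:
      - /mnt/nfs/cortex-data:/app/data   # persistent storage for cortex.db and RAG indexes
    restart: always
  ai-frontend:
    restart: always
```

Apply it with `docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d`, then put your TLS-terminating proxy in front of the frontend.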


πŸ›οΈ Project Layout

```text
.
├── ai-hub/             # Backend API & Orchestrator
├── frontend/           # Frontend Workspace (React)
├── agent-node/         # Distributed Node Client (Lightweight)
├── browser-service/    # Dedicated Browser Automation Service (Playwright)
├── skills/             # AI Skill Definitions
├── deployment/         # Env Overrides (NFS, SSL, OIDC)
├── scripts/            # CI/CD & Maintenance Scripts
├── cortex.db           # Local SQLite Cache
└── docker-compose.yml  # Generic Development Entrypoint
```

## 📚 API Documentation

Cortex Hub APIs are self-documenting.

- Interactive Swagger UI: When running the backend locally, navigate to http://localhost:8000/docs.
- Static Markdown Reference: The full API definition (including detailed endpoints, schemas, and curl examples) is generated statically from the FastAPI OpenAPI schema. You can view the grouped documentation at ai-hub/docs/api_reference/index.md.

If you make any changes to the backend APIs, regenerate the Markdown documentation by running:

```bash
cd ai-hub && python3 ../.agent/utils/generate_api_docs.py
```
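The generator's internals aren't documented here, but the core idea of turning a FastAPI OpenAPI schema into tag-grouped Markdown can be sketched as follows. This is a simplified illustration, not the actual `generate_api_docs.py`.

```python
def openapi_to_markdown(schema: dict) -> str:
    """Render an OpenAPI `paths` mapping as Markdown grouped by tag."""
    groups: dict[str, list[str]] = {}
    for path, ops in schema.get("paths", {}).items():
        for method, op in ops.items():
            tag = (op.get("tags") or ["untagged"])[0]
            line = f"- `{method.upper()} {path}` - {op.get('summary', '')}"
            groups.setdefault(tag, []).append(line)
    out = []
    for tag in sorted(groups):
        out.append(f"## {tag}")
        out.extend(groups[tag])
    return "\n".join(out)

# Minimal example schema (hypothetical endpoint, for illustration only).
schema = {"paths": {"/sessions": {"get": {"tags": ["sessions"], "summary": "List sessions"}}}}
md = openapi_to_markdown(schema)
```

In the real flow, the schema dict would come from the running FastAPI app rather than a literal.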

## 🧪 Testing

- Backend: `pytest ai-hub/tests/`
- Frontend Health: `scripts/frontend_tester`
- Connectivity: `scripts/test_ws.js`

βš–οΈ License

Distributed under the MIT License. See LICENSE for more information.