diff --git a/KickoffPlan.md b/KickoffPlan.md deleted file mode 100644 index 0e212f3..0000000 --- a/KickoffPlan.md +++ /dev/null @@ -1,91 +0,0 @@ -# **Kickoff Plan: AI Model Hub Service** - -This document outlines the plan for developing a central **"hub" service** that routes requests to various Large Language Models (LLMs) and uses **PostgreSQL** for metadata storage alongside **FAISS** for similarity search on vector data. - ---- - -### **1. High-Level Architecture** - -The service will consist of three main components: - -1. **API Server**: - A web server that exposes endpoints to receive user prompts and return model responses. This will be the main entry point for all client applications. - -2. **LLM Router/Orchestrator**: - A core logic layer responsible for deciding which LLM (Gemini, DeepSeek, etc.) should handle a given request. It will also manage interactions with **PostgreSQL** and **FAISS**. - -3. **Vector Database (FAISS + PostgreSQL)**: - A two-layered database system: - - * **FAISS**: Stores vectors (numerical representations of text). Handles high-performance similarity search. - * **PostgreSQL**: Stores metadata such as conversation IDs, document titles, timestamps, and other relational data. - ---- - -### **2. Technology Stack** - -* **API Framework**: - **FastAPI (Python)** – High-performance, easy to learn, with automatic interactive documentation, ideal for testing and development. - -* **LLM Interaction**: - **LangChain** (or a similar abstraction library) – Simplifies communication with different LLM APIs by providing a unified interface. - -* **Vector Database**: - - * **FAISS**: High-performance similarity search for vectors. - * **PostgreSQL**: Stores metadata for vectors, such as document IDs, user data, timestamps, etc. Used for filtering, organizing, and managing relational data. - -* **Deployment**: - **Docker** – Containerizing the application for portability, ensuring easy deployment across any machine within the local network. 
- ---- - -### **3. Development Roadmap** - -#### **Phase 1: Core API and Model Integration** *(1-2 weeks)* - -* [X] Set up a basic **FastAPI server**. -* [X] Create a `/chat` endpoint that accepts user prompts. -* [X] Implement basic **routing logic** to forward requests to one hardcoded LLM (e.g., Gemini). -* [X] Connect to the LLM's API and return the response to the user. - -#### **Phase 2: PostgreSQL and FAISS Integration** *(2-3 weeks)* - -* [ ] Integrate **PostgreSQL** for metadata storage (document IDs, timestamps, etc.). -* [ ] Integrate **FAISS** for vector storage and similarity search. -* [ ] On each API call, **embed the user prompt** and the model's response into vectors. -* [ ] Store the vectors in **FAISS** and store associated metadata in **PostgreSQL** (such as document title, conversation ID). -* [ ] Perform a **similarity search** using **FAISS** before sending a new prompt to the LLM, and include relevant history stored in **PostgreSQL** as context. - -#### **Phase 3: Multi-Model Routing & RAG** *(1-2 weeks)* - -* [ ] Abstract LLM connections to easily support multiple models (Gemini, DeepSeek, etc.). -* [ ] Add logic to the `/chat` endpoint to allow clients to specify which model to use. -* [ ] Create a separate endpoint (e.g., `/add-document`) to upload text files. -* [ ] Implement a **RAG pipeline**: - - * When a prompt is received, search **FAISS** for relevant vector matches and retrieve metadata from **PostgreSQL**. - * Pass the relevant document chunks along with the prompt to the selected LLM. - -#### **Phase 4: Refinement and Deployment** *(1 week)* - -* [ ] Develop a simple **UI** (optional, could use FastAPI's built-in docs). -* [ ] Write **Dockerfiles** for the application. -* [ ] Add **configuration management** for API keys and other settings. -* [ ] Implement basic **logging** and **error handling**. - ---- - -### **4. 
PostgreSQL + FAISS Workflow** - -* **Storing Vectors**: - When a document is added, its vector representation is stored in **FAISS**. - Metadata such as document titles, timestamps, and user IDs are stored in **PostgreSQL**. - -* **Querying**: - For a user query, embed the query into a vector. - Use **FAISS** to perform a similarity search and retrieve the nearest vectors. - Query **PostgreSQL** for metadata (e.g., title, author) related to the relevant vectors. - -* **Syncing Data**: - Ensure that metadata in **PostgreSQL** is synchronized with vectors in **FAISS** for accurate and consistent retrieval. diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..63cc7ac --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2017 Fullstory, Inc + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
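As a side note on the "PostgreSQL + FAISS Workflow" section of the relocated `KickoffPlan.md` above: the two-layer pattern it describes (vectors in FAISS, relational metadata in PostgreSQL, both keyed by the same IDs) can be sketched in a few lines. The class below is a hypothetical in-memory stand-in, not code from this repo; brute-force NumPy distance plays the FAISS role and a plain dict plays the PostgreSQL role.

```python
import numpy as np

class TwoLayerStore:
    """Hypothetical in-memory stand-in for the FAISS + PostgreSQL split."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)  # FAISS role: raw vectors
        self.metadata = {}  # PostgreSQL role: id -> relational metadata

    def add(self, vec, meta: dict) -> int:
        vec_id = len(self.metadata)
        row = np.asarray(vec, dtype=np.float32).reshape(1, self.dim)
        self.vectors = np.vstack([self.vectors, row])
        self.metadata[vec_id] = meta  # both layers keyed by the same id
        return vec_id

    def search(self, query, k: int = 1):
        # Brute-force L2 distance; faiss.IndexFlatL2 uses the same metric at scale.
        q = np.asarray(query, dtype=np.float32)
        dists = np.linalg.norm(self.vectors - q, axis=1)
        order = np.argsort(dists)[:k]
        return [(int(i), float(dists[i]), self.metadata[int(i)]) for i in order]

store = TwoLayerStore(dim=2)
store.add([1.0, 0.0], {"title": "doc-a", "conversation_id": "c1"})
store.add([0.0, 1.0], {"title": "doc-b", "conversation_id": "c2"})
print(store.search([0.9, 0.1], k=1))  # nearest vector id plus its metadata row
```

Keeping the two layers synchronized on one shared ID (the "Syncing Data" bullet in the plan) is the key design point: a vector without a metadata row, or vice versa, yields inconsistent retrieval.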
diff --git a/agent-node/Dockerfile b/agent-node/Dockerfile index 44b7e50..8fee39c 100644 --- a/agent-node/Dockerfile +++ b/agent-node/Dockerfile @@ -31,8 +31,8 @@ COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt -# Install Playwright browsers (optional, depending on if you use browser skill) -# RUN playwright install --with-deps chromium +# Install Playwright browsers +RUN playwright install chromium # Copy the rest of the node code COPY . . diff --git a/agent-node/agent_node/node.py b/agent-node/agent_node/node.py index f07165b..8ef16a0 100644 --- a/agent-node/agent_node/node.py +++ b/agent-node/agent_node/node.py @@ -29,14 +29,50 @@ self.task_queue = queue.Queue() self.stub = get_secure_stub() + def _collect_capabilities(self) -> dict: + """Collect hardware metadata to advertise at registration.""" + import platform + import subprocess + + caps = { + "shell": "v1", + "browser": "playwright-sync-bridge", + "arch": platform.machine(), # e.g. x86_64, arm64, aarch64 + "os": platform.system().lower(), # linux, darwin, windows + "os_release": platform.release(), + } + + # GPU Detection — try nvidia-smi first, then check for Apple GPU + try: + result = subprocess.run( + ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader,nounits"], + capture_output=True, text=True, timeout=5 + ) + if result.returncode == 0 and result.stdout.strip(): + gpu_lines = result.stdout.strip().split("\n") + caps["gpu"] = gpu_lines[0].strip() # e.g. 
"NVIDIA GeForce RTX 3080, 10240" + caps["gpu_count"] = str(len(gpu_lines)) + else: + caps["gpu"] = "none" + except Exception: + # No nvidia-smi — check if Apple Silicon (arm64 + darwin) + if caps["os"] == "darwin" and "arm" in caps["arch"].lower(): + caps["gpu"] = "apple-silicon" + else: + caps["gpu"] = "none" + + return caps + def sync_configuration(self): """Initial handshake to retrieve policy and metadata.""" print(f"[*] Handshake with Orchestrator: {self.node_id}") + caps = self._collect_capabilities() + print(f"[*] Capabilities: {caps}") reg_req = agent_pb2.RegistrationRequest( node_id=self.node_id, auth_token=AUTH_TOKEN, node_description=NODE_DESC, - capabilities={"shell": "v1", "browser": "playwright-sync-bridge"} + capabilities=caps ) diff --git a/agent-node/agent_node/utils/network.py b/agent-node/agent_node/utils/network.py index 3eac1c6..3ef8bda 100644 --- a/agent-node/agent_node/utils/network.py +++ b/agent-node/agent_node/utils/network.py @@ -6,9 +6,16 @@ def get_secure_stub(): """Initializes a gRPC channel (Secure or Insecure) and returns the orchestrator stub.""" + options = [ + ('grpc.keepalive_time_ms', 30000), # Send keepalive ping every 30s + ('grpc.keepalive_timeout_ms', 10000), # Wait 10s for pong + ('grpc.keepalive_permit_without_calls', True), + ('grpc.http2.max_pings_without_data', 0) # Allow infinite pings + ] + if not TLS_ENABLED: print(f"[!] TLS is disabled. 
Connecting via insecure channel to {SERVER_HOST_PORT}") - channel = grpc.insecure_channel(SERVER_HOST_PORT) + channel = grpc.insecure_channel(SERVER_HOST_PORT, options=options) return agent_pb2_grpc.AgentOrchestratorStub(channel) print(f"[*] Connecting via secure (mTLS) channel to {SERVER_HOST_PORT}") @@ -18,10 +25,12 @@ with open(CERT_CA, 'rb') as f: ca = f.read() creds = grpc.ssl_channel_credentials(ca, pkey, cert) - channel = grpc.secure_channel(SERVER_HOST_PORT, creds) + channel = grpc.secure_channel(SERVER_HOST_PORT, creds, options=options) return agent_pb2_grpc.AgentOrchestratorStub(channel) except FileNotFoundError as e: - print(f"[!] Certificate files not found: {e}. Falling back to insecure channel...") - channel = grpc.insecure_channel(SERVER_HOST_PORT) + print(f"[!] mTLS Certificate files not found: {e}. Falling back to standard TLS (Server Verify)...") + # Fallback to standard TLS (uses system CA roots by default) + creds = grpc.ssl_channel_credentials() + channel = grpc.secure_channel(SERVER_HOST_PORT, creds, options=options) return agent_pb2_grpc.AgentOrchestratorStub(channel) diff --git a/agent-node/docker-compose.yml b/agent-node/docker-compose.yml index 44d3b52..a7ddaf3 100644 --- a/agent-node/docker-compose.yml +++ b/agent-node/docker-compose.yml @@ -1,21 +1,19 @@ # agent-node/docker-compose.yml +# This is a template for deploying a single Agent Node. +# Usage: +# docker-compose up -d services: agent-node: build: . 
-    container_name: cortex-local-agent
+    container_name: cortex-agent
     environment:
-      - AGENT_NODE_ID=${AGENT_NODE_ID:-test-prod-node}
+      - AGENT_NODE_ID=${AGENT_NODE_ID:-agent-node-001}
       - AGENT_NODE_DESC=${AGENT_NODE_DESC:-Modular Stateful Node}
-      - GRPC_ENDPOINT=${GRPC_ENDPOINT:-ai_hub_service:50051}
-      - AGENT_SECRET_KEY=${AGENT_SECRET_KEY:-aYc2j1lYUUZXkBFFUndnleZI}
-      - AGENT_TLS_ENABLED=${AGENT_TLS_ENABLED:-false}
+      - GRPC_ENDPOINT=${GRPC_ENDPOINT:-ai.jerxie.com:443}
+      - AGENT_AUTH_TOKEN=${AGENT_AUTH_TOKEN}
+      - AGENT_SECRET_KEY=${AGENT_SECRET_KEY}
+      - AGENT_TLS_ENABLED=${AGENT_TLS_ENABLED:-true}
       - CORTEX_SYNC_DIR=/app/sync
     volumes:
       - ./sync:/app/sync
     restart: unless-stopped
-    networks:
-      - cortex-hub_default
-
-networks:
-  cortex-hub_default:
-    external: true
diff --git a/ai-hub/app/api/routes/nodes.py b/ai-hub/app/api/routes/nodes.py
index c1c561c..371207f 100644
--- a/ai-hub/app/api/routes/nodes.py
+++ b/ai-hub/app/api/routes/nodes.py
@@ -26,11 +26,8 @@
 import uuid
 import secrets
 import logging
-import io
-import zipfile
 import os
 from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect, Depends
-from fastapi.responses import StreamingResponse
 from sqlalchemy.orm import Session

@@ -144,8 +140,9 @@
     _require_admin(admin_id, db)
     node = _get_node_or_404(node_id, db)

-    # Access grants are usually cascade-deleted by SQLAlchemy if configured,
-    # but let's be explicit if needed or let the DB handle it.
+ # Deregister from live memory if online + _registry().deregister(node_id) + db.delete(node) db.commit() return {"status": "success", "message": f"Node {node_id} deleted"} @@ -361,6 +358,12 @@ hub_grpc = os.getenv("HUB_GRPC_ENDPOINT", "ai.jerxie.com:50051") secret_key = os.getenv("SECRET_KEY", "dev-secret-key-1337") skill_cfg = node.skill_config or {} + if isinstance(skill_cfg, str): + import json + try: + skill_cfg = json.loads(skill_cfg) + except Exception: + skill_cfg = {} lines = [ "# Cortex Hub — Agent Node Configuration", @@ -384,6 +387,8 @@ "skills:", ] for skill, cfg in skill_cfg.items(): + if not isinstance(cfg, dict): + continue enabled = cfg.get("enabled", True) lines.append(f" {skill}:") lines.append(f" enabled: {str(enabled).lower()}") @@ -403,71 +408,6 @@ config_yaml = "\n".join(lines) return schemas.NodeConfigYamlResponse(node_id=node_id, config_yaml=config_yaml) - @router.get( - "/admin/{node_id}/download", - summary="[Admin] Download Pre-configured Agent Node Bundle", - ) - def admin_download_node_bundle(node_id: str, admin_id: str, db: Session = Depends(get_db)): - """ - Bundle everything needed to run an Agent Node into a single ZIP: - - agent_node (source) - - protos (schemas) - - shared_core (ignore rules) - - requirements.txt - - agent_config.yaml (pre-signed invite token) - - Admins download this, unzip on their target machine, and run: - pip install -r requirements.txt - python3 -m agent_node.main - """ - _require_admin(admin_id, db) - node = _get_node_or_404(node_id, db) - - # 1. Generate the same config YAML as the other endpoint - # (Internal DRY: calling the helper logic) - config_resp = download_node_config_yaml(node_id, admin_id, db) - config_yaml = config_resp.config_yaml - - # 2. 
Build the ZIP in-memory
-    buf = io.BytesIO()
-    source_root = "/app/agent-node"
-
-    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zip_file:
-        # Helper to add a directory
-        def add_dir(dir_name):
-            path = os.path.join(source_root, dir_name)
-            for root, dirs, files in os.walk(path):
-                for file in files:
-                    if "__pycache__" in root: continue
-                    abs_file = os.path.join(root, file)
-                    rel_path = os.path.relpath(abs_file, source_root)
-                    zip_file.write(abs_file, rel_path)
-
-        # Add source folders
-        add_dir("agent_node")
-        add_dir("protos")
-        add_dir("shared_core")
-
-        # Add requirements
-        req_path = os.path.join(source_root, "requirements.txt")
-        if os.path.exists(req_path):
-            zip_file.write(req_path, "requirements.txt")
-
-        # Add the CUSTOM config YAML
-        zip_file.writestr("agent_config.yaml", config_yaml)
-
-        # Add a README for quick start
-        readme = f"# Cortex Agent Node: {node_id}\n\n1. Install deps: pip install -r requirements.txt\n2. Run node: python3 -m agent_node.main\n"
-        zip_file.writestr("README.md", readme)
-
-    buf.seek(0)
-    filename = f"cortex-node-{node_id}.zip"
-    return StreamingResponse(
-        buf,
-        media_type="application/zip",
-        headers={"Content-Disposition": f"attachment; filename={filename}"}
-    )
-

 # ==================================================================
 # M4: Invite Token Validation (called internally by gRPC server)
@@ -810,7 +750,10 @@
 def _node_to_user_view(node: models.AgentNode, registry) -> schemas.AgentNodeUserView:
     live = registry.get_node(node.node_id)
-    status = live._compute_status() if live else node.last_status or "offline"
+    # The record should only show online if it's currently connected and in the live gRPC registry map.
+    # We default back to "offline" even if the DB record says "online" (zombie fix).
+ status = live._compute_status() if live else "offline" + skill_cfg = node.skill_config or {} if isinstance(skill_cfg, str): import json diff --git a/ai-hub/app/app.py b/ai-hub/app/app.py index 418648e..ca13a01 100644 --- a/ai-hub/app/app.py +++ b/ai-hub/app/app.py @@ -43,6 +43,12 @@ create_db_and_tables() run_migrations() + # --- Reset Node Statuses (Zombie Fix) --- + try: + app.state.services.node_registry_service.reset_all_statuses() + except Exception as e: + logger.warning(f"Failed to reset node statuses: {e}") + # --- Start gRPC Orchestrator (M6) --- try: from app.core.grpc.services.grpc_server import serve_grpc diff --git a/ai-hub/app/core/grpc/services/grpc_server.py b/ai-hub/app/core/grpc/services/grpc_server.py index 57022a4..0f92fd6 100644 --- a/ai-hub/app/core/grpc/services/grpc_server.py +++ b/ai-hub/app/core/grpc/services/grpc_server.py @@ -86,9 +86,9 @@ params={"node_id": node_id, "token": invite_token}, timeout=5, ) - payload = resp.json() + payload = resp.json() or {} if not payload.get("valid"): - reason = payload.get("reason", "Token rejected") + reason = payload.get("reason", "Token rejected") or "Token rejected" logger.warning(f"[🔒] SyncConfiguration REJECTED {node_id}: {reason}") return agent_pb2.RegistrationResponse(success=False, error_message=reason) @@ -100,15 +100,18 @@ # Build allowed_commands from skill_config # Build Sandbox Policy from skill_config['shell'] - shell_cfg = skill_cfg.get("shell", {}) + shell_cfg = skill_cfg.get("shell", {}) if isinstance(skill_cfg, dict) else {} + if shell_cfg is None: shell_cfg = {} + sandbox_cfg = shell_cfg.get("sandbox", {}) if isinstance(shell_cfg, dict) else {} + if sandbox_cfg is None: sandbox_cfg = {} # 1. Resolve Mode mode_str = sandbox_cfg.get("mode", "STRICT").upper() grpc_mode = agent_pb2.SandboxPolicy.STRICT if mode_str == "STRICT" else agent_pb2.SandboxPolicy.PERMISSIVE # 2. 
Resolve Command Lists (fallback to some safe defaults if enabled but empty) - allowed = sandbox_cfg.get("allowed_commands", []) + allowed = sandbox_cfg.get("allowed_commands", []) or [] if not allowed and shell_cfg.get("enabled", True): allowed = ["ls", "cat", "echo", "pwd", "uname", "curl", "python3", "git"] @@ -348,6 +351,14 @@ addr = f"[::]:{port}" server.add_insecure_port(addr) + # --- Enable Reflection (M6 Debugging) --- + from grpc_reflection.v1alpha import reflection + SERVICE_NAMES = ( + agent_pb2.DESCRIPTOR.services_by_name['AgentOrchestrator'].full_name, + reflection.SERVICE_NAME, + ) + reflection.enable_server_reflection(SERVICE_NAMES, server) + logger.info(f"🚀 CORTEX gRPC Orchestrator starting on {addr}") server.start() return server, orchestrator diff --git a/ai-hub/app/core/services/node_registry.py b/ai-hub/app/core/services/node_registry.py index a83cb81..86537bf 100644 --- a/ai-hub/app/core/services/node_registry.py +++ b/ai-hub/app/core/services/node_registry.py @@ -78,6 +78,10 @@ if delta > 30: return "stale" return "online" + + def is_healthy(self) -> bool: + """True if the node has reported metrics recently and has an active stream.""" + return self._compute_status() == "online" class NodeRegistryService: @@ -147,9 +151,22 @@ if record: record.last_seen_at = datetime.utcnow() record.last_status = "online" + db.commit() except Exception as e: print(f"[NodeRegistry] DB heartbeat update failed for {node_id}: {e}") + def reset_all_statuses(self): + """Reset all nodes in the DB to offline (call on Hub startup).""" + from app.db.models import AgentNode + from app.db.session import get_db_session + try: + with get_db_session() as db: + db.query(AgentNode).update({"last_status": "offline"}) + db.commit() + logger.info("[NodeRegistry] Reset all DB node statuses to 'offline'.") + except Exception as e: + logger.error(f"[NodeRegistry] Failed to reset DB statuses: {e}") + # ------------------------------------------------------------------ # # Registration 
# diff --git a/ai-hub/requirements.txt b/ai-hub/requirements.txt index 7a74f93..cf3f567 100644 --- a/ai-hub/requirements.txt +++ b/ai-hub/requirements.txt @@ -21,4 +21,5 @@ litellm tiktoken grpcio==1.62.1 -grpcio-tools==1.62.1 \ No newline at end of file +grpcio-tools==1.62.1 +grpcio-reflection==1.62.1 \ No newline at end of file diff --git a/deploy_local.sh b/deploy_local.sh deleted file mode 100644 index 76b63cc..0000000 --- a/deploy_local.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/bin/bash - -# --- Deployment Script for AI Hub --- -# This script is designed to automate the deployment of the AI Hub application -# using Docker Compose. It's intended to be run on the production server. -# -# The script performs the following actions: -# 1. Defines project-specific configuration variables. -# 2. **Installs Docker Compose if it's not found on the system.** -# 3. Navigates to the correct project directory. -# 4. Stops and removes any currently running Docker containers for the project. -# 5. Pulls the latest Docker images from a registry (if applicable). -# 6. Builds the new Docker images from the source code. -# 7. Starts the new containers in detached mode, with production settings. -# 8. Performs cleanup of old, unused Docker images. - -# --- Configuration --- -# Set the project directory to the directory where this script is located. -PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)" - - -# --- Helper Function --- -# Find the correct docker-compose command (modern plugin or standalone v1) -if docker compose version &> /dev/null; then - DOCKER_CMD="docker compose" -else - DOCKER_CMD="docker-compose" -fi - -# --- Script Execution --- -echo "🚀 Starting AI Hub deployment process..." - -# Navigate to the project directory. Exit if the directory doesn't exist. -cd "$PROJECT_DIR" || { echo "Error: Project directory '$PROJECT_DIR' not found. Exiting."; exit 1; } - -# Stop and remove any existing containers to ensure a clean deployment. 
-echo "🛑 Stopping and removing old Docker containers and networks..." -sudo $DOCKER_CMD down || true - -# Pull the latest images if they are hosted on a registry. -# echo "📥 Pulling latest Docker images..." -# sudo $DOCKER_CMD pull - -# Build new images and start the services. The `--build` flag ensures -# the images are re-built from their respective Dockerfiles. -# The `--remove-orphans` flag ensures old service containers are cleaned up. -echo "🏗️ Building and starting new containers (Hub & Frontend)..." -sudo $DOCKER_CMD up -d --build --remove-orphans - -if [ -d "agent-node" ]; then - echo "🏗️ Building and starting Agent Node..." - cd agent-node - sudo $DOCKER_CMD up -d --build --remove-orphans - cd .. -fi - -echo "✅ Containers started! Checking status..." -sudo docker ps --filter "name=ai_" -sudo docker ps --filter "name=cortex-local-agent" - -echo "✅ Deployment complete! The AI Hub application is now running." - -# --- Post-Deployment Cleanup --- -echo "🧹 Cleaning up unused Docker resources..." - -# Remove dangling images (images without a tag). -sudo docker image prune -f || true - -echo "✨ Cleanup finished." \ No newline at end of file diff --git a/deploy_prod.sh b/deploy_prod.sh new file mode 100755 index 0000000..11403b1 --- /dev/null +++ b/deploy_prod.sh @@ -0,0 +1,95 @@ +#!/bin/bash +# Description: Automates deployment from the local environment to the production host 192.168.68.113 + +# Credentials - Can be set via ENV or fetched from GitBucket +HOST="${REMOTE_HOST}" +USER="${REMOTE_USER}" +PASS="${REMOTE_PASS}" + +# If credentials are missing, try to fetch from GitBucket Private Snippet +if [ -z "$PASS" ] || [ -z "$HOST" ]; then + # Load token/id from local env if present + if [ -f "/app/.env.gitbucket" ]; then + source "/app/.env.gitbucket" + fi + + GITBUCKET_TOKEN="${GITBUCKET_TOKEN}" + SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}" + + if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then + echo "Secrets not provided in environment. 
Attempting to fetch from GitBucket..." + + TMP_SECRETS=$(mktemp -d) + if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then + if [ -f "$TMP_SECRETS/.env.production" ]; then + source "$TMP_SECRETS/.env.production" + HOST="${REMOTE_HOST:-$HOST}" + USER="${REMOTE_USER:-$USER}" + PASS="${REMOTE_PASSWORD:-$PASS}" + echo "Successfully loaded credentials from GitBucket." + # Strip potential carriage returns + HOST=$(echo "$HOST" | tr -d '\r') + USER=$(echo "$USER" | tr -d '\r') + PASS=$(echo "$PASS" | tr -d '\r') + fi + else + echo "Failed to fetch secrets from GitBucket." + fi + rm -rf "$TMP_SECRETS" + fi +fi + +# Fallback defaults if still not set +HOST="${HOST:-192.168.68.113}" +USER="${USER:-axieyangb}" + +# System Paths +REMOTE_TMP="/tmp/cortex-hub/" +REMOTE_PROJ="/home/coder/project/cortex-hub" + +if [ -z "$PASS" ]; then + echo "Error: REMOTE_PASS not found and could not be fetched from GitBucket." + echo "Please set REMOTE_PASS or GITBUCKET_TOKEN environment variables." + exit 1 +fi + +echo "Checking if sshpass is installed..." +if ! command -v sshpass &> /dev/null; then + echo "sshpass could not be found, installing..." + sudo apt-get update && sudo apt-get install -y sshpass +fi + +# 1. Sync local codebase to temporary directory on remote server +echo "Syncing local files to production [USER: $USER, HOST: $HOST]..." +sshpass -p "$PASS" rsync -avz \ + --exclude '.git' \ + --exclude 'node_modules' \ + --exclude 'ui/client-app/node_modules' \ + --exclude 'ui/client-app/build' \ + --exclude 'ai-hub/__pycache__' \ + --exclude '.venv' \ + -e "ssh -o StrictHostKeyChecking=no" /app/ "$USER@$HOST:$REMOTE_TMP" + +if [ $? -ne 0 ]; then + echo "Rsync failed! Exiting." + exit 1 +fi + +# 2. Copy the synced files into the actual project directory replacing the old ones +echo "Overwriting production project files..." 
+sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF + echo '$PASS' | sudo -S rm -rf $REMOTE_PROJ/nginx.conf + echo '$PASS' | sudo -S cp -r ${REMOTE_TMP}* $REMOTE_PROJ/ + echo '$PASS' | sudo -S chown -R $USER:$USER $REMOTE_PROJ +EOF + +# 3. Rebuild and restart services remotely +echo "Deploying on production server..." +sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF + cd $REMOTE_PROJ + echo '$PASS' | sudo -S bash local_rebuild.sh +EOF + +echo "Done! The new code is deployed to $HOST." +echo "CRITICAL: Run the automated Frontend Health Check now to verify production stability." +echo "Command: /frontend_tester" diff --git a/deploy_remote.sh b/deploy_remote.sh deleted file mode 100755 index 2e3411d..0000000 --- a/deploy_remote.sh +++ /dev/null @@ -1,95 +0,0 @@ -#!/bin/bash -# Description: Automates deployment from the local environment to the production host 192.168.68.113 - -# Credentials - Can be set via ENV or fetched from GitBucket -HOST="${REMOTE_HOST}" -USER="${REMOTE_USER}" -PASS="${REMOTE_PASS}" - -# If credentials are missing, try to fetch from GitBucket Private Snippet -if [ -z "$PASS" ] || [ -z "$HOST" ]; then - # Load token/id from local env if present - if [ -f "/app/.env.gitbucket" ]; then - source "/app/.env.gitbucket" - fi - - GITBUCKET_TOKEN="${GITBUCKET_TOKEN}" - SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}" - - if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then - echo "Secrets not provided in environment. Attempting to fetch from GitBucket..." - - TMP_SECRETS=$(mktemp -d) - if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then - if [ -f "$TMP_SECRETS/.env.production" ]; then - source "$TMP_SECRETS/.env.production" - HOST="${REMOTE_HOST:-$HOST}" - USER="${REMOTE_USER:-$USER}" - PASS="${REMOTE_PASSWORD:-$PASS}" - echo "Successfully loaded credentials from GitBucket." 
- # Strip potential carriage returns - HOST=$(echo "$HOST" | tr -d '\r') - USER=$(echo "$USER" | tr -d '\r') - PASS=$(echo "$PASS" | tr -d '\r') - fi - else - echo "Failed to fetch secrets from GitBucket." - fi - rm -rf "$TMP_SECRETS" - fi -fi - -# Fallback defaults if still not set -HOST="${HOST:-192.168.68.113}" -USER="${USER:-axieyangb}" - -# System Paths -REMOTE_TMP="/tmp/cortex-hub/" -REMOTE_PROJ="/home/coder/project/cortex-hub" - -if [ -z "$PASS" ]; then - echo "Error: REMOTE_PASS not found and could not be fetched from GitBucket." - echo "Please set REMOTE_PASS or GITBUCKET_TOKEN environment variables." - exit 1 -fi - -echo "Checking if sshpass is installed..." -if ! command -v sshpass &> /dev/null; then - echo "sshpass could not be found, installing..." - sudo apt-get update && sudo apt-get install -y sshpass -fi - -# 1. Sync local codebase to temporary directory on remote server -echo "Syncing local files to production [USER: $USER, HOST: $HOST]..." -sshpass -p "$PASS" rsync -avz \ - --exclude '.git' \ - --exclude 'node_modules' \ - --exclude 'ui/client-app/node_modules' \ - --exclude 'ui/client-app/build' \ - --exclude 'ai-hub/__pycache__' \ - --exclude '.venv' \ - -e "ssh -o StrictHostKeyChecking=no" /app/ "$USER@$HOST:$REMOTE_TMP" - -if [ $? -ne 0 ]; then - echo "Rsync failed! Exiting." - exit 1 -fi - -# 2. Copy the synced files into the actual project directory replacing the old ones -echo "Overwriting production project files..." -sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF - echo '$PASS' | sudo -S rm -rf $REMOTE_PROJ/nginx.conf - echo '$PASS' | sudo -S cp -r ${REMOTE_TMP}* $REMOTE_PROJ/ - echo '$PASS' | sudo -S chown -R $USER:$USER $REMOTE_PROJ -EOF - -# 3. Rebuild and restart services remotely -echo "Deploying on production server..." -sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF - cd $REMOTE_PROJ - echo '$PASS' | sudo -S bash deploy_local.sh -EOF - -echo "Done! 
The new code is deployed to $HOST." -echo "CRITICAL: Run the automated Frontend Health Check now to verify production stability." -echo "Command: /frontend_tester" diff --git a/deploy_test_nodes.sh b/deploy_test_nodes.sh new file mode 100755 index 0000000..ad5d8f8 --- /dev/null +++ b/deploy_test_nodes.sh @@ -0,0 +1,66 @@ +#!/bin/bash +# deploy_test_nodes.sh: Spawns N test agent nodes on the production host for mesh testing. +# Usage: ./deploy_test_nodes.sh [COUNT] (default 2) + +COUNT=${1:-2} +HOST="${REMOTE_HOST:-192.168.68.113}" +USER="${REMOTE_USER:-axieyangb}" +PASS="${REMOTE_PASS}" + +# Load credentials from GitBucket if not in environment +if [ -z "$PASS" ]; then + if [ -f "/app/.env.gitbucket" ]; then source "/app/.env.gitbucket"; fi + GITBUCKET_TOKEN="${GITBUCKET_TOKEN}" + SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}" + if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then + TMP_SECRETS=$(mktemp -d) + if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then + source "$TMP_SECRETS/.env.production" + PASS="${REMOTE_PASSWORD:-$PASS}" + fi + rm -rf "$TMP_SECRETS" + fi +fi + +if [ -z "$PASS" ]; then echo "Error: REMOTE_PASS not found."; exit 1; fi + +REMOTE_PROJ="/home/coder/project/cortex-hub" +AGENT_DIR="$REMOTE_PROJ/agent-node" + +echo "🚀 Deploying $COUNT test nodes to $HOST..." + +# We use docker run instead of compose to allow scaling with unique names/IDs easily +# without modifying the persistent docker-compose.yml on the server. + +sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF + # 1. Ensure the base image is built + cd $AGENT_DIR + echo '$PASS' | sudo -S docker build -t agent-node-base . + + # 2. Cleanup any previous test nodes + echo "Cleaning up old test nodes..." + echo '$PASS' | sudo -S docker ps -a --filter "name=cortex-test-node-" -q | xargs -r sudo docker rm -f + + # 3. 
Spawn N nodes + for i in \$(seq 1 $COUNT); do + NODE_ID="test-node-\$i" + CONTAINER_NAME="cortex-test-node-\$i" + + echo "[+] Starting \$CONTAINER_NAME..." + + echo '$PASS' | sudo -S docker run -d \\ + --name "\$CONTAINER_NAME" \\ + --network cortex-hub_default \\ + -e AGENT_NODE_ID="\$NODE_ID" \\ + -e AGENT_NODE_DESC="Scalable Test Node #\$i" \\ + -e GRPC_ENDPOINT="ai_hub_service:50051" \\ + -e AGENT_SECRET_KEY="cortex-secret-shared-key" \\ + -e AGENT_TLS_ENABLED="false" \\ + agent-node-base + done + + echo "✅ Spawning complete. Currently running test nodes:" + echo '$PASS' | sudo -S docker ps --filter "name=cortex-test-node-" +EOF + +echo "✨ Done! Check https://ai.jerxie.com/nodes to see the new nodes join the mesh." diff --git a/docker-compose.test-nodes.yml b/docker-compose.test-nodes.yml new file mode 100644 index 0000000..cb7dbc5 --- /dev/null +++ b/docker-compose.test-nodes.yml @@ -0,0 +1,37 @@ +# docker-compose.test-nodes.yml +# Internal testing setup for multiple Agent Nodes (e.g. Test Node 1, Test Node 2). +# This is NOT meant for end-user deployment. 
+services: + test-node-1: + build: + context: ./agent-node + container_name: cortex-test-1 + environment: + - AGENT_NODE_ID=test-node-1 + - AGENT_NODE_DESC=Primary Test Node + - GRPC_ENDPOINT=ai_hub_service:50051 + - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + - AGENT_AUTH_TOKEN=cortex-secret-shared-key + - AGENT_TLS_ENABLED=false + networks: + - cortex-hub_default + restart: unless-stopped + + test-node-2: + build: + context: ./agent-node + container_name: cortex-test-2 + environment: + - AGENT_NODE_ID=test-node-2 + - AGENT_NODE_DESC=Secondary Test Node + - GRPC_ENDPOINT=ai_hub_service:50051 + - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + - AGENT_AUTH_TOKEN=ysHjZIRXeWo-YYK6EWtBsIgJ4uNBihSnZMtt0BQW3eI + - AGENT_TLS_ENABLED=false + networks: + - cortex-hub_default + restart: unless-stopped + +networks: + cortex-hub_default: + external: true diff --git a/docker-compose.yml b/docker-compose.yml index 16a9eb1..d474455 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -41,6 +41,7 @@ - SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI volumes: - ai_hub_data:/app/data:rw + - ./agent-node:/app/agent-node-source:ro deploy: resources: limits: diff --git a/docs/KickoffPlan.md b/docs/KickoffPlan.md new file mode 100644 index 0000000..0e212f3 --- /dev/null +++ b/docs/KickoffPlan.md @@ -0,0 +1,91 @@ +# **Kickoff Plan: AI Model Hub Service** + +This document outlines the plan for developing a central **"hub" service** that routes requests to various Large Language Models (LLMs) and uses **PostgreSQL** for metadata storage alongside **FAISS** for similarity search on vector data. + +--- + +### **1. High-Level Architecture** + +The service will consist of three main components: + +1. **API Server**: + A web server that exposes endpoints to receive user prompts and return model responses. This will be the main entry point for all client applications. + +2. **LLM Router/Orchestrator**: + A core logic layer responsible for deciding which LLM (Gemini, DeepSeek, etc.) 
should handle a given request. It will also manage interactions with **PostgreSQL** and **FAISS**.
+
+3. **Vector Database (FAISS + PostgreSQL)**:
+   A two-layered database system:
+
+   * **FAISS**: Stores vectors (numerical representations of text). Handles high-performance similarity search.
+   * **PostgreSQL**: Stores metadata such as conversation IDs, document titles, timestamps, and other relational data.
+
+---
+
+### **2. Technology Stack**
+
+* **API Framework**:
+  **FastAPI (Python)** – High-performance, easy to learn, with automatic interactive documentation, ideal for testing and development.
+
+* **LLM Interaction**:
+  **LangChain** (or a similar abstraction library) – Simplifies communication with different LLM APIs by providing a unified interface.
+
+* **Vector Database**:
+
+  * **FAISS**: High-performance similarity search for vectors.
+  * **PostgreSQL**: Stores metadata for vectors, such as document IDs, user data, timestamps, etc. Used for filtering, organizing, and managing relational data.
+
+* **Deployment**:
+  **Docker** – Containerizing the application for portability, ensuring easy deployment across any machine within the local network.
+
+---
+
+### **3. Development Roadmap**
+
+#### **Phase 1: Core API and Model Integration** *(1-2 weeks)*
+
+* [X] Set up a basic **FastAPI server**.
+* [X] Create a `/chat` endpoint that accepts user prompts.
+* [X] Implement basic **routing logic** to forward requests to one hardcoded LLM (e.g., Gemini).
+* [X] Connect to the LLM's API and return the response to the user.
+
+#### **Phase 2: PostgreSQL and FAISS Integration** *(2-3 weeks)*
+
+* [ ] Integrate **PostgreSQL** for metadata storage (document IDs, timestamps, etc.).
+* [ ] Integrate **FAISS** for vector storage and similarity search.
+* [ ] On each API call, **embed the user prompt** and the model's response into vectors.
+* [ ] Store the vectors in **FAISS** and store associated metadata in **PostgreSQL** (such as document title, conversation ID).
+* [ ] Perform a **similarity search** using **FAISS** before sending a new prompt to the LLM, and include relevant history stored in **PostgreSQL** as context.
+
+#### **Phase 3: Multi-Model Routing & RAG** *(1-2 weeks)*
+
+* [ ] Abstract LLM connections to easily support multiple models (Gemini, DeepSeek, etc.).
+* [ ] Add logic to the `/chat` endpoint to allow clients to specify which model to use.
+* [ ] Create a separate endpoint (e.g., `/add-document`) to upload text files.
+* [ ] Implement a **RAG pipeline**:
+
+  * When a prompt is received, search **FAISS** for relevant vector matches and retrieve metadata from **PostgreSQL**.
+  * Pass the relevant document chunks along with the prompt to the selected LLM.
+
+#### **Phase 4: Refinement and Deployment** *(1 week)*
+
+* [ ] Develop a simple **UI** (optional, could use FastAPI's built-in docs).
+* [ ] Write **Dockerfiles** for the application.
+* [ ] Add **configuration management** for API keys and other settings.
+* [ ] Implement basic **logging** and **error handling**.
+
+---
+
+### **4. PostgreSQL + FAISS Workflow**
+
+* **Storing Vectors**:
+  When a document is added, its vector representation is stored in **FAISS**.
+  Metadata such as document titles, timestamps, and user IDs are stored in **PostgreSQL**.
+
+* **Querying**:
+  For a user query, embed the query into a vector.
+  Use **FAISS** to perform a similarity search and retrieve the nearest vectors.
+  Query **PostgreSQL** for metadata (e.g., title, author) related to the relevant vectors.
+
+* **Syncing Data**:
+  Ensure that metadata in **PostgreSQL** is synchronized with vectors in **FAISS** for accurate and consistent retrieval.
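The store/query workflow in the kickoff plan above can be sketched in a few lines. This is illustrative only: the `docs` table, its columns, and the toy `embed()` function are assumptions, and the real FAISS index and PostgreSQL connection are stood in for by a plain list of unit vectors and an in-memory sqlite3 database so the sketch stays self-contained and runnable.

```python
import hashlib
import math
import random
import sqlite3

DIM = 8  # toy embedding size; a real model would use e.g. 384 or 1536 dims

def embed(text: str) -> list[float]:
    """Deterministic toy 'embedding' (stand-in for a real embedding model)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(DIM)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Metadata store (PostgreSQL in the plan; sqlite3 here to keep the sketch runnable).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (vec_id INTEGER PRIMARY KEY, title TEXT, ts TEXT)")

# Vector store (a FAISS index in the plan; a plain list of unit vectors here).
vectors: list[list[float]] = []

def add_document(title: str, text: str, ts: str) -> int:
    """Store the vector in the 'index' and its metadata in the 'relational' DB."""
    vec_id = len(vectors)
    vectors.append(embed(text))
    db.execute("INSERT INTO docs VALUES (?, ?, ?)", (vec_id, title, ts))
    return vec_id

def query(text: str, k: int = 2) -> list[tuple]:
    """Similarity search over vectors, then metadata lookup by vector id."""
    q = embed(text)
    sims = [sum(a * b for a, b in zip(v, q)) for v in vectors]  # cosine similarity
    top = sorted(range(len(sims)), key=lambda i: -sims[i])[:k]  # nearest first
    return [
        db.execute("SELECT title, ts FROM docs WHERE vec_id = ?", (i,)).fetchone()
        for i in top
    ]

add_document("Kickoff Plan", "hub service routing LLM requests", "2024-01-01")
add_document("FAISS Notes", "similarity search over vectors", "2024-01-02")

print(query("similarity search over vectors", k=1))  # nearest match first
```

The key design point the plan relies on is the shared `vec_id`: FAISS only returns positions and distances, so the relational side must key its metadata on the same ids to join results back together (the "Syncing Data" bullet).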
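Phase 3's client-selectable routing ("allow clients to specify which model to use") can be sketched with a simple registry. The model names and stub backends below are placeholders, not real SDK calls; real entries would wrap the Gemini/DeepSeek clients (e.g. via LangChain) behind the same callable signature.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChatRequest:
    prompt: str
    model: str = "gemini"  # client may pick a model; the hub supplies a default

# Registry mapping model names to backends. Stubs here so the sketch runs;
# a real hub would register one adapter per provider SDK.
MODELS: dict[str, Callable[[str], str]] = {
    "gemini":   lambda p: f"[gemini] {p}",
    "deepseek": lambda p: f"[deepseek] {p}",
}

def chat(req: ChatRequest) -> str:
    """Route the prompt to the requested backend, rejecting unknown models."""
    backend = MODELS.get(req.model)
    if backend is None:
        raise ValueError(f"unknown model: {req.model}")
    return backend(req.prompt)

print(chat(ChatRequest("hello", model="deepseek")))  # → [deepseek] hello
```

Keeping the registry as plain data means adding a model is a one-line change and the `/chat` endpoint itself never grows provider-specific branches.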
diff --git a/get_user.py b/get_user.py
deleted file mode 100644
index 1d7a304..0000000
--- a/get_user.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import os
-import sqlite3
-
-def get_db():
-    db = sqlite3.connect("ai-hub/data/ai_hub.db")
-    cur = db.cursor()
-    cur.execute("SELECT id, email, role, group_id FROM users")
-    for row in cur.fetchall():
-        print(row)
-    db.close()
-
-get_db()
-
diff --git a/local_rebuild.sh b/local_rebuild.sh
new file mode 100755
index 0000000..d5dca87
--- /dev/null
+++ b/local_rebuild.sh
@@ -0,0 +1,68 @@
+#!/bin/bash
+
+# --- Deployment Script for AI Hub ---
+# This script automates the deployment of the AI Hub application
+# using Docker Compose. It's intended to be run on the production server.
+#
+# The script performs the following actions:
+# 1. Defines project-specific configuration variables.
+# 2. Detects the available Docker Compose command (plugin or standalone).
+# 3. Navigates to the correct project directory.
+# 4. Stops and removes any currently running Docker containers for the project.
+# 5. Pulls the latest Docker images from a registry (if applicable).
+# 6. Builds the new Docker images from the source code.
+# 7. Starts the new containers in detached mode, with production settings.
+# 8. Performs cleanup of old, unused Docker images.
+
+# --- Configuration ---
+# Set the project directory to the directory where this script is located.
+PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)"
+
+# --- Helper ---
+# Find the correct docker-compose command (modern plugin or standalone v1).
+if docker compose version &> /dev/null; then
+  DOCKER_CMD="docker compose"
+else
+  DOCKER_CMD="docker-compose"
+fi
+
+# --- Script Execution ---
+echo "🚀 Starting AI Hub deployment process..."
+
+# Navigate to the project directory. Exit if the directory doesn't exist.
+cd "$PROJECT_DIR" || { echo "Error: Project directory '$PROJECT_DIR' not found.
Exiting."; exit 1; }
+
+# Stop and remove any existing containers to ensure a clean deployment.
+echo "🛑 Stopping and removing old Docker containers and networks..."
+sudo $DOCKER_CMD down || true
+
+# Pull the latest images if they are hosted on a registry.
+# echo "📥 Pulling latest Docker images..."
+# sudo $DOCKER_CMD pull
+
+# Build new images and start the services. The `--build` flag ensures
+# the images are re-built from their respective Dockerfiles.
+# The `--remove-orphans` flag ensures old service containers are cleaned up.
+echo "🏗️ Building and starting new containers (Hub & Frontend)..."
+sudo $DOCKER_CMD up -d --build --remove-orphans
+
+if [ -f "docker-compose.test-nodes.yml" ]; then
+  echo "🏗️ Building and starting Internal Test Nodes..."
+  # IMPORTANT: DO NOT use --remove-orphans here, as it will kill the Hub/Frontend
+  sudo $DOCKER_CMD -f docker-compose.test-nodes.yml up -d --build
+fi
+
+echo "✅ Containers started! Checking status..."
+sudo docker ps --filter "name=ai_"
+sudo docker ps --filter "name=cortex-"
+
+echo "✅ Deployment complete! The AI Hub application is now running."
+
+# --- Post-Deployment Cleanup ---
+echo "🧹 Cleaning up unused Docker resources..."
+
+# Remove dangling images (images without a tag).
+sudo docker image prune -f || true
+
+echo "✨ Cleanup finished."
\ No newline at end of file diff --git a/node_modules/.package-lock.json b/node_modules/.package-lock.json deleted file mode 100644 index 33eb05f..0000000 --- a/node_modules/.package-lock.json +++ /dev/null @@ -1,27 +0,0 @@ -{ - "name": "app", - "lockfileVersion": 3, - "requires": true, - "packages": { - "node_modules/ws": { - "version": "8.19.0", - "resolved": "https://registry.npmjs.org/ws/-/ws-8.19.0.tgz", - "integrity": "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg==", - "engines": { - "node": ">=10.0.0" - }, - "peerDependencies": { - "bufferutil": "^4.0.1", - "utf-8-validate": ">=5.0.2" - }, - "peerDependenciesMeta": { - "bufferutil": { - "optional": true - }, - "utf-8-validate": { - "optional": true - } - } - } - } -} diff --git a/node_modules/ws/LICENSE b/node_modules/ws/LICENSE deleted file mode 100644 index 1da5b96..0000000 --- a/node_modules/ws/LICENSE +++ /dev/null @@ -1,20 +0,0 @@ -Copyright (c) 2011 Einar Otto Stangvik -Copyright (c) 2013 Arnout Kazemier and contributors -Copyright (c) 2016 Luigi Pinca and contributors - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of -the Software, and to permit persons to whom the Software is furnished to do so, -subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS -FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR -COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER -IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/node_modules/ws/README.md b/node_modules/ws/README.md deleted file mode 100644 index 21f10df..0000000 --- a/node_modules/ws/README.md +++ /dev/null @@ -1,548 +0,0 @@ -# ws: a Node.js WebSocket library - -[![Version npm](https://img.shields.io/npm/v/ws.svg?logo=npm)](https://www.npmjs.com/package/ws) -[![CI](https://img.shields.io/github/actions/workflow/status/websockets/ws/ci.yml?branch=master&label=CI&logo=github)](https://github.com/websockets/ws/actions?query=workflow%3ACI+branch%3Amaster) -[![Coverage Status](https://img.shields.io/coveralls/websockets/ws/master.svg?logo=coveralls)](https://coveralls.io/github/websockets/ws) - -ws is a simple to use, blazing fast, and thoroughly tested WebSocket client and -server implementation. - -Passes the quite extensive Autobahn test suite: [server][server-report], -[client][client-report]. - -**Note**: This module does not work in the browser. The client in the docs is a -reference to a backend with the role of a client in the WebSocket communication. -Browser clients must use the native -[`WebSocket`](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) -object. To make the same code work seamlessly on Node.js and the browser, you -can use one of the many wrappers available on npm, like -[isomorphic-ws](https://github.com/heineiuo/isomorphic-ws). 
- -## Table of Contents - -- [Protocol support](#protocol-support) -- [Installing](#installing) - - [Opt-in for performance](#opt-in-for-performance) - - [Legacy opt-in for performance](#legacy-opt-in-for-performance) -- [API docs](#api-docs) -- [WebSocket compression](#websocket-compression) -- [Usage examples](#usage-examples) - - [Sending and receiving text data](#sending-and-receiving-text-data) - - [Sending binary data](#sending-binary-data) - - [Simple server](#simple-server) - - [External HTTP/S server](#external-https-server) - - [Multiple servers sharing a single HTTP/S server](#multiple-servers-sharing-a-single-https-server) - - [Client authentication](#client-authentication) - - [Server broadcast](#server-broadcast) - - [Round-trip time](#round-trip-time) - - [Use the Node.js streams API](#use-the-nodejs-streams-api) - - [Other examples](#other-examples) -- [FAQ](#faq) - - [How to get the IP address of the client?](#how-to-get-the-ip-address-of-the-client) - - [How to detect and close broken connections?](#how-to-detect-and-close-broken-connections) - - [How to connect via a proxy?](#how-to-connect-via-a-proxy) -- [Changelog](#changelog) -- [License](#license) - -## Protocol support - -- **HyBi drafts 07-12** (Use the option `protocolVersion: 8`) -- **HyBi drafts 13-17** (Current default, alternatively option - `protocolVersion: 13`) - -## Installing - -``` -npm install ws -``` - -### Opt-in for performance - -[bufferutil][] is an optional module that can be installed alongside the ws -module: - -``` -npm install --save-optional bufferutil -``` - -This is a binary addon that improves the performance of certain operations such -as masking and unmasking the data payload of the WebSocket frames. Prebuilt -binaries are available for the most popular platforms, so you don't necessarily -need to have a C++ compiler installed on your machine. 
- -To force ws to not use bufferutil, use the -[`WS_NO_BUFFER_UTIL`](./doc/ws.md#ws_no_buffer_util) environment variable. This -can be useful to enhance security in systems where a user can put a package in -the package search path of an application of another user, due to how the -Node.js resolver algorithm works. - -#### Legacy opt-in for performance - -If you are running on an old version of Node.js (prior to v18.14.0), ws also -supports the [utf-8-validate][] module: - -``` -npm install --save-optional utf-8-validate -``` - -This contains a binary polyfill for [`buffer.isUtf8()`][]. - -To force ws not to use utf-8-validate, use the -[`WS_NO_UTF_8_VALIDATE`](./doc/ws.md#ws_no_utf_8_validate) environment variable. - -## API docs - -See [`/doc/ws.md`](./doc/ws.md) for Node.js-like documentation of ws classes and -utility functions. - -## WebSocket compression - -ws supports the [permessage-deflate extension][permessage-deflate] which enables -the client and server to negotiate a compression algorithm and its parameters, -and then selectively apply it to the data payloads of each WebSocket message. - -The extension is disabled by default on the server and enabled by default on the -client. It adds a significant overhead in terms of performance and memory -consumption so we suggest to enable it only if it is really needed. - -Note that Node.js has a variety of issues with high-performance compression, -where increased concurrency, especially on Linux, can lead to [catastrophic -memory fragmentation][node-zlib-bug] and slow performance. If you intend to use -permessage-deflate in production, it is worthwhile to set up a test -representative of your workload and ensure Node.js/zlib will handle it with -acceptable performance and memory usage. - -Tuning of permessage-deflate can be done via the options defined below. 
You can -also use `zlibDeflateOptions` and `zlibInflateOptions`, which is passed directly -into the creation of [raw deflate/inflate streams][node-zlib-deflaterawdocs]. - -See [the docs][ws-server-options] for more options. - -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ - port: 8080, - perMessageDeflate: { - zlibDeflateOptions: { - // See zlib defaults. - chunkSize: 1024, - memLevel: 7, - level: 3 - }, - zlibInflateOptions: { - chunkSize: 10 * 1024 - }, - // Other options settable: - clientNoContextTakeover: true, // Defaults to negotiated value. - serverNoContextTakeover: true, // Defaults to negotiated value. - serverMaxWindowBits: 10, // Defaults to negotiated value. - // Below options specified as default values. - concurrencyLimit: 10, // Limits zlib concurrency for perf. - threshold: 1024 // Size (in bytes) below which messages - // should not be compressed if context takeover is disabled. - } -}); -``` - -The client will only use the extension if it is supported and enabled on the -server. To always disable the extension on the client, set the -`perMessageDeflate` option to `false`. 
- -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path', { - perMessageDeflate: false -}); -``` - -## Usage examples - -### Sending and receiving text data - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path'); - -ws.on('error', console.error); - -ws.on('open', function open() { - ws.send('something'); -}); - -ws.on('message', function message(data) { - console.log('received: %s', data); -}); -``` - -### Sending binary data - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('ws://www.host.com/path'); - -ws.on('error', console.error); - -ws.on('open', function open() { - const array = new Float32Array(5); - - for (var i = 0; i < array.length; ++i) { - array[i] = i / 2; - } - - ws.send(array); -}); -``` - -### Simple server - -```js -import { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('error', console.error); - - ws.on('message', function message(data) { - console.log('received: %s', data); - }); - - ws.send('something'); -}); -``` - -### External HTTP/S server - -```js -import { createServer } from 'https'; -import { readFileSync } from 'fs'; -import { WebSocketServer } from 'ws'; - -const server = createServer({ - cert: readFileSync('/path/to/cert.pem'), - key: readFileSync('/path/to/key.pem') -}); -const wss = new WebSocketServer({ server }); - -wss.on('connection', function connection(ws) { - ws.on('error', console.error); - - ws.on('message', function message(data) { - console.log('received: %s', data); - }); - - ws.send('something'); -}); - -server.listen(8080); -``` - -### Multiple servers sharing a single HTTP/S server - -```js -import { createServer } from 'http'; -import { WebSocketServer } from 'ws'; - -const server = createServer(); -const wss1 = new WebSocketServer({ noServer: true }); -const wss2 = new WebSocketServer({ noServer: true }); - -wss1.on('connection', function 
connection(ws) { - ws.on('error', console.error); - - // ... -}); - -wss2.on('connection', function connection(ws) { - ws.on('error', console.error); - - // ... -}); - -server.on('upgrade', function upgrade(request, socket, head) { - const { pathname } = new URL(request.url, 'wss://base.url'); - - if (pathname === '/foo') { - wss1.handleUpgrade(request, socket, head, function done(ws) { - wss1.emit('connection', ws, request); - }); - } else if (pathname === '/bar') { - wss2.handleUpgrade(request, socket, head, function done(ws) { - wss2.emit('connection', ws, request); - }); - } else { - socket.destroy(); - } -}); - -server.listen(8080); -``` - -### Client authentication - -```js -import { createServer } from 'http'; -import { WebSocketServer } from 'ws'; - -function onSocketError(err) { - console.error(err); -} - -const server = createServer(); -const wss = new WebSocketServer({ noServer: true }); - -wss.on('connection', function connection(ws, request, client) { - ws.on('error', console.error); - - ws.on('message', function message(data) { - console.log(`Received message ${data} from user ${client}`); - }); -}); - -server.on('upgrade', function upgrade(request, socket, head) { - socket.on('error', onSocketError); - - // This function is not defined on purpose. Implement it with your own logic. - authenticate(request, function next(err, client) { - if (err || !client) { - socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n'); - socket.destroy(); - return; - } - - socket.removeListener('error', onSocketError); - - wss.handleUpgrade(request, socket, head, function done(ws) { - wss.emit('connection', ws, request, client); - }); - }); -}); - -server.listen(8080); -``` - -Also see the provided [example][session-parse-example] using `express-session`. - -### Server broadcast - -A client WebSocket broadcasting to all connected WebSocket clients, including -itself. 
- -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('error', console.error); - - ws.on('message', function message(data, isBinary) { - wss.clients.forEach(function each(client) { - if (client.readyState === WebSocket.OPEN) { - client.send(data, { binary: isBinary }); - } - }); - }); -}); -``` - -A client WebSocket broadcasting to every other connected WebSocket clients, -excluding itself. - -```js -import WebSocket, { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.on('error', console.error); - - ws.on('message', function message(data, isBinary) { - wss.clients.forEach(function each(client) { - if (client !== ws && client.readyState === WebSocket.OPEN) { - client.send(data, { binary: isBinary }); - } - }); - }); -}); -``` - -### Round-trip time - -```js -import WebSocket from 'ws'; - -const ws = new WebSocket('wss://websocket-echo.com/'); - -ws.on('error', console.error); - -ws.on('open', function open() { - console.log('connected'); - ws.send(Date.now()); -}); - -ws.on('close', function close() { - console.log('disconnected'); -}); - -ws.on('message', function message(data) { - console.log(`Round-trip time: ${Date.now() - data} ms`); - - setTimeout(function timeout() { - ws.send(Date.now()); - }, 500); -}); -``` - -### Use the Node.js streams API - -```js -import WebSocket, { createWebSocketStream } from 'ws'; - -const ws = new WebSocket('wss://websocket-echo.com/'); - -const duplex = createWebSocketStream(ws, { encoding: 'utf8' }); - -duplex.on('error', console.error); - -duplex.pipe(process.stdout); -process.stdin.pipe(duplex); -``` - -### Other examples - -For a full example with a browser client communicating with a ws server, see the -examples folder. - -Otherwise, see the test cases. - -## FAQ - -### How to get the IP address of the client? 
- -The remote IP address can be obtained from the raw socket. - -```js -import { WebSocketServer } from 'ws'; - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws, req) { - const ip = req.socket.remoteAddress; - - ws.on('error', console.error); -}); -``` - -When the server runs behind a proxy like NGINX, the de-facto standard is to use -the `X-Forwarded-For` header. - -```js -wss.on('connection', function connection(ws, req) { - const ip = req.headers['x-forwarded-for'].split(',')[0].trim(); - - ws.on('error', console.error); -}); -``` - -### How to detect and close broken connections? - -Sometimes, the link between the server and the client can be interrupted in a -way that keeps both the server and the client unaware of the broken state of the -connection (e.g. when pulling the cord). - -In these cases, ping messages can be used as a means to verify that the remote -endpoint is still responsive. - -```js -import { WebSocketServer } from 'ws'; - -function heartbeat() { - this.isAlive = true; -} - -const wss = new WebSocketServer({ port: 8080 }); - -wss.on('connection', function connection(ws) { - ws.isAlive = true; - ws.on('error', console.error); - ws.on('pong', heartbeat); -}); - -const interval = setInterval(function ping() { - wss.clients.forEach(function each(ws) { - if (ws.isAlive === false) return ws.terminate(); - - ws.isAlive = false; - ws.ping(); - }); -}, 30000); - -wss.on('close', function close() { - clearInterval(interval); -}); -``` - -Pong messages are automatically sent in response to ping messages as required by -the spec. - -Just like the server example above, your clients might as well lose connection -without knowing it. You might want to add a ping listener on your clients to -prevent that. 
A simple implementation would be: - -```js -import WebSocket from 'ws'; - -function heartbeat() { - clearTimeout(this.pingTimeout); - - // Use `WebSocket#terminate()`, which immediately destroys the connection, - // instead of `WebSocket#close()`, which waits for the close timer. - // Delay should be equal to the interval at which your server - // sends out pings plus a conservative assumption of the latency. - this.pingTimeout = setTimeout(() => { - this.terminate(); - }, 30000 + 1000); -} - -const client = new WebSocket('wss://websocket-echo.com/'); - -client.on('error', console.error); -client.on('open', heartbeat); -client.on('ping', heartbeat); -client.on('close', function clear() { - clearTimeout(this.pingTimeout); -}); -``` - -### How to connect via a proxy? - -Use a custom `http.Agent` implementation like [https-proxy-agent][] or -[socks-proxy-agent][]. - -## Changelog - -We're using the GitHub [releases][changelog] for changelog entries. - -## License - -[MIT](LICENSE) - -[`buffer.isutf8()`]: https://nodejs.org/api/buffer.html#bufferisutf8input -[bufferutil]: https://github.com/websockets/bufferutil -[changelog]: https://github.com/websockets/ws/releases -[client-report]: http://websockets.github.io/ws/autobahn/clients/ -[https-proxy-agent]: https://github.com/TooTallNate/node-https-proxy-agent -[node-zlib-bug]: https://github.com/nodejs/node/issues/8871 -[node-zlib-deflaterawdocs]: - https://nodejs.org/api/zlib.html#zlib_zlib_createdeflateraw_options -[permessage-deflate]: https://tools.ietf.org/html/rfc7692 -[server-report]: http://websockets.github.io/ws/autobahn/servers/ -[session-parse-example]: ./examples/express-session-parse -[socks-proxy-agent]: https://github.com/TooTallNate/node-socks-proxy-agent -[utf-8-validate]: https://github.com/websockets/utf-8-validate -[ws-server-options]: ./doc/ws.md#new-websocketserveroptions-callback diff --git a/node_modules/ws/browser.js b/node_modules/ws/browser.js deleted file mode 100644 index ca4f628..0000000 
--- a/node_modules/ws/browser.js +++ /dev/null @@ -1,8 +0,0 @@ -'use strict'; - -module.exports = function () { - throw new Error( - 'ws does not work in the browser. Browser clients must use the native ' + - 'WebSocket object' - ); -}; diff --git a/node_modules/ws/index.js b/node_modules/ws/index.js deleted file mode 100644 index 41edb3b..0000000 --- a/node_modules/ws/index.js +++ /dev/null @@ -1,13 +0,0 @@ -'use strict'; - -const WebSocket = require('./lib/websocket'); - -WebSocket.createWebSocketStream = require('./lib/stream'); -WebSocket.Server = require('./lib/websocket-server'); -WebSocket.Receiver = require('./lib/receiver'); -WebSocket.Sender = require('./lib/sender'); - -WebSocket.WebSocket = WebSocket; -WebSocket.WebSocketServer = WebSocket.Server; - -module.exports = WebSocket; diff --git a/node_modules/ws/lib/buffer-util.js b/node_modules/ws/lib/buffer-util.js deleted file mode 100644 index f7536e2..0000000 --- a/node_modules/ws/lib/buffer-util.js +++ /dev/null @@ -1,131 +0,0 @@ -'use strict'; - -const { EMPTY_BUFFER } = require('./constants'); - -const FastBuffer = Buffer[Symbol.species]; - -/** - * Merges an array of buffers into a new buffer. - * - * @param {Buffer[]} list The array of buffers to concat - * @param {Number} totalLength The total length of buffers in the list - * @return {Buffer} The resulting buffer - * @public - */ -function concat(list, totalLength) { - if (list.length === 0) return EMPTY_BUFFER; - if (list.length === 1) return list[0]; - - const target = Buffer.allocUnsafe(totalLength); - let offset = 0; - - for (let i = 0; i < list.length; i++) { - const buf = list[i]; - target.set(buf, offset); - offset += buf.length; - } - - if (offset < totalLength) { - return new FastBuffer(target.buffer, target.byteOffset, offset); - } - - return target; -} - -/** - * Masks a buffer using the given mask. 
- * - * @param {Buffer} source The buffer to mask - * @param {Buffer} mask The mask to use - * @param {Buffer} output The buffer where to store the result - * @param {Number} offset The offset at which to start writing - * @param {Number} length The number of bytes to mask. - * @public - */ -function _mask(source, mask, output, offset, length) { - for (let i = 0; i < length; i++) { - output[offset + i] = source[i] ^ mask[i & 3]; - } -} - -/** - * Unmasks a buffer using the given mask. - * - * @param {Buffer} buffer The buffer to unmask - * @param {Buffer} mask The mask to use - * @public - */ -function _unmask(buffer, mask) { - for (let i = 0; i < buffer.length; i++) { - buffer[i] ^= mask[i & 3]; - } -} - -/** - * Converts a buffer to an `ArrayBuffer`. - * - * @param {Buffer} buf The buffer to convert - * @return {ArrayBuffer} Converted buffer - * @public - */ -function toArrayBuffer(buf) { - if (buf.length === buf.buffer.byteLength) { - return buf.buffer; - } - - return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.length); -} - -/** - * Converts `data` to a `Buffer`. 
- * - * @param {*} data The data to convert - * @return {Buffer} The buffer - * @throws {TypeError} - * @public - */ -function toBuffer(data) { - toBuffer.readOnly = true; - - if (Buffer.isBuffer(data)) return data; - - let buf; - - if (data instanceof ArrayBuffer) { - buf = new FastBuffer(data); - } else if (ArrayBuffer.isView(data)) { - buf = new FastBuffer(data.buffer, data.byteOffset, data.byteLength); - } else { - buf = Buffer.from(data); - toBuffer.readOnly = false; - } - - return buf; -} - -module.exports = { - concat, - mask: _mask, - toArrayBuffer, - toBuffer, - unmask: _unmask -}; - -/* istanbul ignore else */ -if (!process.env.WS_NO_BUFFER_UTIL) { - try { - const bufferUtil = require('bufferutil'); - - module.exports.mask = function (source, mask, output, offset, length) { - if (length < 48) _mask(source, mask, output, offset, length); - else bufferUtil.mask(source, mask, output, offset, length); - }; - - module.exports.unmask = function (buffer, mask) { - if (buffer.length < 32) _unmask(buffer, mask); - else bufferUtil.unmask(buffer, mask); - }; - } catch (e) { - // Continue regardless of the error. 
- } -} diff --git a/node_modules/ws/lib/constants.js b/node_modules/ws/lib/constants.js deleted file mode 100644 index 69b2fe3..0000000 --- a/node_modules/ws/lib/constants.js +++ /dev/null @@ -1,19 +0,0 @@ -'use strict'; - -const BINARY_TYPES = ['nodebuffer', 'arraybuffer', 'fragments']; -const hasBlob = typeof Blob !== 'undefined'; - -if (hasBlob) BINARY_TYPES.push('blob'); - -module.exports = { - BINARY_TYPES, - CLOSE_TIMEOUT: 30000, - EMPTY_BUFFER: Buffer.alloc(0), - GUID: '258EAFA5-E914-47DA-95CA-C5AB0DC85B11', - hasBlob, - kForOnEventAttribute: Symbol('kIsForOnEventAttribute'), - kListener: Symbol('kListener'), - kStatusCode: Symbol('status-code'), - kWebSocket: Symbol('websocket'), - NOOP: () => {} -}; diff --git a/node_modules/ws/lib/event-target.js b/node_modules/ws/lib/event-target.js deleted file mode 100644 index fea4cbc..0000000 --- a/node_modules/ws/lib/event-target.js +++ /dev/null @@ -1,292 +0,0 @@ -'use strict'; - -const { kForOnEventAttribute, kListener } = require('./constants'); - -const kCode = Symbol('kCode'); -const kData = Symbol('kData'); -const kError = Symbol('kError'); -const kMessage = Symbol('kMessage'); -const kReason = Symbol('kReason'); -const kTarget = Symbol('kTarget'); -const kType = Symbol('kType'); -const kWasClean = Symbol('kWasClean'); - -/** - * Class representing an event. - */ -class Event { - /** - * Create a new `Event`. - * - * @param {String} type The name of the event - * @throws {TypeError} If the `type` argument is not specified - */ - constructor(type) { - this[kTarget] = null; - this[kType] = type; - } - - /** - * @type {*} - */ - get target() { - return this[kTarget]; - } - - /** - * @type {String} - */ - get type() { - return this[kType]; - } -} - -Object.defineProperty(Event.prototype, 'target', { enumerable: true }); -Object.defineProperty(Event.prototype, 'type', { enumerable: true }); - -/** - * Class representing a close event. 
- * - * @extends Event - */ -class CloseEvent extends Event { - /** - * Create a new `CloseEvent`. - * - * @param {String} type The name of the event - * @param {Object} [options] A dictionary object that allows for setting - * attributes via object members of the same name - * @param {Number} [options.code=0] The status code explaining why the - * connection was closed - * @param {String} [options.reason=''] A human-readable string explaining why - * the connection was closed - * @param {Boolean} [options.wasClean=false] Indicates whether or not the - * connection was cleanly closed - */ - constructor(type, options = {}) { - super(type); - - this[kCode] = options.code === undefined ? 0 : options.code; - this[kReason] = options.reason === undefined ? '' : options.reason; - this[kWasClean] = options.wasClean === undefined ? false : options.wasClean; - } - - /** - * @type {Number} - */ - get code() { - return this[kCode]; - } - - /** - * @type {String} - */ - get reason() { - return this[kReason]; - } - - /** - * @type {Boolean} - */ - get wasClean() { - return this[kWasClean]; - } -} - -Object.defineProperty(CloseEvent.prototype, 'code', { enumerable: true }); -Object.defineProperty(CloseEvent.prototype, 'reason', { enumerable: true }); -Object.defineProperty(CloseEvent.prototype, 'wasClean', { enumerable: true }); - -/** - * Class representing an error event. - * - * @extends Event - */ -class ErrorEvent extends Event { - /** - * Create a new `ErrorEvent`. - * - * @param {String} type The name of the event - * @param {Object} [options] A dictionary object that allows for setting - * attributes via object members of the same name - * @param {*} [options.error=null] The error that generated this event - * @param {String} [options.message=''] The error message - */ - constructor(type, options = {}) { - super(type); - - this[kError] = options.error === undefined ? null : options.error; - this[kMessage] = options.message === undefined ? 
'' : options.message; - } - - /** - * @type {*} - */ - get error() { - return this[kError]; - } - - /** - * @type {String} - */ - get message() { - return this[kMessage]; - } -} - -Object.defineProperty(ErrorEvent.prototype, 'error', { enumerable: true }); -Object.defineProperty(ErrorEvent.prototype, 'message', { enumerable: true }); - -/** - * Class representing a message event. - * - * @extends Event - */ -class MessageEvent extends Event { - /** - * Create a new `MessageEvent`. - * - * @param {String} type The name of the event - * @param {Object} [options] A dictionary object that allows for setting - * attributes via object members of the same name - * @param {*} [options.data=null] The message content - */ - constructor(type, options = {}) { - super(type); - - this[kData] = options.data === undefined ? null : options.data; - } - - /** - * @type {*} - */ - get data() { - return this[kData]; - } -} - -Object.defineProperty(MessageEvent.prototype, 'data', { enumerable: true }); - -/** - * This provides methods for emulating the `EventTarget` interface. It's not - * meant to be used directly. - * - * @mixin - */ -const EventTarget = { - /** - * Register an event listener. - * - * @param {String} type A string representing the event type to listen for - * @param {(Function|Object)} handler The listener to add - * @param {Object} [options] An options object specifies characteristics about - * the event listener - * @param {Boolean} [options.once=false] A `Boolean` indicating that the - * listener should be invoked at most once after being added. If `true`, - * the listener would be automatically removed when invoked. 
- * @public - */ - addEventListener(type, handler, options = {}) { - for (const listener of this.listeners(type)) { - if ( - !options[kForOnEventAttribute] && - listener[kListener] === handler && - !listener[kForOnEventAttribute] - ) { - return; - } - } - - let wrapper; - - if (type === 'message') { - wrapper = function onMessage(data, isBinary) { - const event = new MessageEvent('message', { - data: isBinary ? data : data.toString() - }); - - event[kTarget] = this; - callListener(handler, this, event); - }; - } else if (type === 'close') { - wrapper = function onClose(code, message) { - const event = new CloseEvent('close', { - code, - reason: message.toString(), - wasClean: this._closeFrameReceived && this._closeFrameSent - }); - - event[kTarget] = this; - callListener(handler, this, event); - }; - } else if (type === 'error') { - wrapper = function onError(error) { - const event = new ErrorEvent('error', { - error, - message: error.message - }); - - event[kTarget] = this; - callListener(handler, this, event); - }; - } else if (type === 'open') { - wrapper = function onOpen() { - const event = new Event('open'); - - event[kTarget] = this; - callListener(handler, this, event); - }; - } else { - return; - } - - wrapper[kForOnEventAttribute] = !!options[kForOnEventAttribute]; - wrapper[kListener] = handler; - - if (options.once) { - this.once(type, wrapper); - } else { - this.on(type, wrapper); - } - }, - - /** - * Remove an event listener. 
- * - * @param {String} type A string representing the event type to remove - * @param {(Function|Object)} handler The listener to remove - * @public - */ - removeEventListener(type, handler) { - for (const listener of this.listeners(type)) { - if (listener[kListener] === handler && !listener[kForOnEventAttribute]) { - this.removeListener(type, listener); - break; - } - } - } -}; - -module.exports = { - CloseEvent, - ErrorEvent, - Event, - EventTarget, - MessageEvent -}; - -/** - * Call an event listener - * - * @param {(Function|Object)} listener The listener to call - * @param {*} thisArg The value to use as `this`` when calling the listener - * @param {Event} event The event to pass to the listener - * @private - */ -function callListener(listener, thisArg, event) { - if (typeof listener === 'object' && listener.handleEvent) { - listener.handleEvent.call(listener, event); - } else { - listener.call(thisArg, event); - } -} diff --git a/node_modules/ws/lib/extension.js b/node_modules/ws/lib/extension.js deleted file mode 100644 index 3d7895c..0000000 --- a/node_modules/ws/lib/extension.js +++ /dev/null @@ -1,203 +0,0 @@ -'use strict'; - -const { tokenChars } = require('./validation'); - -/** - * Adds an offer to the map of extension offers or a parameter to the map of - * parameters. - * - * @param {Object} dest The map of extension offers or parameters - * @param {String} name The extension or parameter name - * @param {(Object|Boolean|String)} elem The extension parameters or the - * parameter value - * @private - */ -function push(dest, name, elem) { - if (dest[name] === undefined) dest[name] = [elem]; - else dest[name].push(elem); -} - -/** - * Parses the `Sec-WebSocket-Extensions` header into an object. 
- * - * @param {String} header The field value of the header - * @return {Object} The parsed object - * @public - */ -function parse(header) { - const offers = Object.create(null); - let params = Object.create(null); - let mustUnescape = false; - let isEscaping = false; - let inQuotes = false; - let extensionName; - let paramName; - let start = -1; - let code = -1; - let end = -1; - let i = 0; - - for (; i < header.length; i++) { - code = header.charCodeAt(i); - - if (extensionName === undefined) { - if (end === -1 && tokenChars[code] === 1) { - if (start === -1) start = i; - } else if ( - i !== 0 && - (code === 0x20 /* ' ' */ || code === 0x09) /* '\t' */ - ) { - if (end === -1 && start !== -1) end = i; - } else if (code === 0x3b /* ';' */ || code === 0x2c /* ',' */) { - if (start === -1) { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - - if (end === -1) end = i; - const name = header.slice(start, end); - if (code === 0x2c) { - push(offers, name, params); - params = Object.create(null); - } else { - extensionName = name; - } - - start = end = -1; - } else { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - } else if (paramName === undefined) { - if (end === -1 && tokenChars[code] === 1) { - if (start === -1) start = i; - } else if (code === 0x20 || code === 0x09) { - if (end === -1 && start !== -1) end = i; - } else if (code === 0x3b || code === 0x2c) { - if (start === -1) { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - - if (end === -1) end = i; - push(params, header.slice(start, end), true); - if (code === 0x2c) { - push(offers, extensionName, params); - params = Object.create(null); - extensionName = undefined; - } - - start = end = -1; - } else if (code === 0x3d /* '=' */ && start !== -1 && end === -1) { - paramName = header.slice(start, i); - start = end = -1; - } else { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - } else { - // - // The value of a quoted-string after 
unescaping must conform to the - // token ABNF, so only token characters are valid. - // Ref: https://tools.ietf.org/html/rfc6455#section-9.1 - // - if (isEscaping) { - if (tokenChars[code] !== 1) { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - if (start === -1) start = i; - else if (!mustUnescape) mustUnescape = true; - isEscaping = false; - } else if (inQuotes) { - if (tokenChars[code] === 1) { - if (start === -1) start = i; - } else if (code === 0x22 /* '"' */ && start !== -1) { - inQuotes = false; - end = i; - } else if (code === 0x5c /* '\' */) { - isEscaping = true; - } else { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - } else if (code === 0x22 && header.charCodeAt(i - 1) === 0x3d) { - inQuotes = true; - } else if (end === -1 && tokenChars[code] === 1) { - if (start === -1) start = i; - } else if (start !== -1 && (code === 0x20 || code === 0x09)) { - if (end === -1) end = i; - } else if (code === 0x3b || code === 0x2c) { - if (start === -1) { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - - if (end === -1) end = i; - let value = header.slice(start, end); - if (mustUnescape) { - value = value.replace(/\\/g, ''); - mustUnescape = false; - } - push(params, paramName, value); - if (code === 0x2c) { - push(offers, extensionName, params); - params = Object.create(null); - extensionName = undefined; - } - - paramName = undefined; - start = end = -1; - } else { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - } - } - - if (start === -1 || inQuotes || code === 0x20 || code === 0x09) { - throw new SyntaxError('Unexpected end of input'); - } - - if (end === -1) end = i; - const token = header.slice(start, end); - if (extensionName === undefined) { - push(offers, token, params); - } else { - if (paramName === undefined) { - push(params, token, true); - } else if (mustUnescape) { - push(params, paramName, token.replace(/\\/g, '')); - } else { - push(params, paramName, token); - } - 
push(offers, extensionName, params); - } - - return offers; -} - -/** - * Builds the `Sec-WebSocket-Extensions` header field value. - * - * @param {Object} extensions The map of extensions and parameters to format - * @return {String} A string representing the given object - * @public - */ -function format(extensions) { - return Object.keys(extensions) - .map((extension) => { - let configurations = extensions[extension]; - if (!Array.isArray(configurations)) configurations = [configurations]; - return configurations - .map((params) => { - return [extension] - .concat( - Object.keys(params).map((k) => { - let values = params[k]; - if (!Array.isArray(values)) values = [values]; - return values - .map((v) => (v === true ? k : `${k}=${v}`)) - .join('; '); - }) - ) - .join('; '); - }) - .join(', '); - }) - .join(', '); -} - -module.exports = { format, parse }; diff --git a/node_modules/ws/lib/limiter.js b/node_modules/ws/lib/limiter.js deleted file mode 100644 index 3fd3578..0000000 --- a/node_modules/ws/lib/limiter.js +++ /dev/null @@ -1,55 +0,0 @@ -'use strict'; - -const kDone = Symbol('kDone'); -const kRun = Symbol('kRun'); - -/** - * A very simple job queue with adjustable concurrency. Adapted from - * https://github.com/STRML/async-limiter - */ -class Limiter { - /** - * Creates a new `Limiter`. - * - * @param {Number} [concurrency=Infinity] The maximum number of jobs allowed - * to run concurrently - */ - constructor(concurrency) { - this[kDone] = () => { - this.pending--; - this[kRun](); - }; - this.concurrency = concurrency || Infinity; - this.jobs = []; - this.pending = 0; - } - - /** - * Adds a job to the queue. - * - * @param {Function} job The job to run - * @public - */ - add(job) { - this.jobs.push(job); - this[kRun](); - } - - /** - * Removes a job from the queue and runs it if possible. 
- * - * @private - */ - [kRun]() { - if (this.pending === this.concurrency) return; - - if (this.jobs.length) { - const job = this.jobs.shift(); - - this.pending++; - job(this[kDone]); - } - } -} - -module.exports = Limiter; diff --git a/node_modules/ws/lib/permessage-deflate.js b/node_modules/ws/lib/permessage-deflate.js deleted file mode 100644 index 41ff70e..0000000 --- a/node_modules/ws/lib/permessage-deflate.js +++ /dev/null @@ -1,528 +0,0 @@ -'use strict'; - -const zlib = require('zlib'); - -const bufferUtil = require('./buffer-util'); -const Limiter = require('./limiter'); -const { kStatusCode } = require('./constants'); - -const FastBuffer = Buffer[Symbol.species]; -const TRAILER = Buffer.from([0x00, 0x00, 0xff, 0xff]); -const kPerMessageDeflate = Symbol('permessage-deflate'); -const kTotalLength = Symbol('total-length'); -const kCallback = Symbol('callback'); -const kBuffers = Symbol('buffers'); -const kError = Symbol('error'); - -// -// We limit zlib concurrency, which prevents severe memory fragmentation -// as documented in https://github.com/nodejs/node/issues/8871#issuecomment-250915913 -// and https://github.com/websockets/ws/issues/1202 -// -// Intentionally global; it's the global thread pool that's an issue. -// -let zlibLimiter; - -/** - * permessage-deflate implementation. - */ -class PerMessageDeflate { - /** - * Creates a PerMessageDeflate instance. 
- * - * @param {Object} [options] Configuration options - * @param {(Boolean|Number)} [options.clientMaxWindowBits] Advertise support - * for, or request, a custom client window size - * @param {Boolean} [options.clientNoContextTakeover=false] Advertise/ - * acknowledge disabling of client context takeover - * @param {Number} [options.concurrencyLimit=10] The number of concurrent - * calls to zlib - * @param {(Boolean|Number)} [options.serverMaxWindowBits] Request/confirm the - * use of a custom server window size - * @param {Boolean} [options.serverNoContextTakeover=false] Request/accept - * disabling of server context takeover - * @param {Number} [options.threshold=1024] Size (in bytes) below which - * messages should not be compressed if context takeover is disabled - * @param {Object} [options.zlibDeflateOptions] Options to pass to zlib on - * deflate - * @param {Object} [options.zlibInflateOptions] Options to pass to zlib on - * inflate - * @param {Boolean} [isServer=false] Create the instance in either server or - * client mode - * @param {Number} [maxPayload=0] The maximum allowed message length - */ - constructor(options, isServer, maxPayload) { - this._maxPayload = maxPayload | 0; - this._options = options || {}; - this._threshold = - this._options.threshold !== undefined ? this._options.threshold : 1024; - this._isServer = !!isServer; - this._deflate = null; - this._inflate = null; - - this.params = null; - - if (!zlibLimiter) { - const concurrency = - this._options.concurrencyLimit !== undefined - ? this._options.concurrencyLimit - : 10; - zlibLimiter = new Limiter(concurrency); - } - } - - /** - * @type {String} - */ - static get extensionName() { - return 'permessage-deflate'; - } - - /** - * Create an extension negotiation offer. 
- * - * @return {Object} Extension parameters - * @public - */ - offer() { - const params = {}; - - if (this._options.serverNoContextTakeover) { - params.server_no_context_takeover = true; - } - if (this._options.clientNoContextTakeover) { - params.client_no_context_takeover = true; - } - if (this._options.serverMaxWindowBits) { - params.server_max_window_bits = this._options.serverMaxWindowBits; - } - if (this._options.clientMaxWindowBits) { - params.client_max_window_bits = this._options.clientMaxWindowBits; - } else if (this._options.clientMaxWindowBits == null) { - params.client_max_window_bits = true; - } - - return params; - } - - /** - * Accept an extension negotiation offer/response. - * - * @param {Array} configurations The extension negotiation offers/reponse - * @return {Object} Accepted configuration - * @public - */ - accept(configurations) { - configurations = this.normalizeParams(configurations); - - this.params = this._isServer - ? this.acceptAsServer(configurations) - : this.acceptAsClient(configurations); - - return this.params; - } - - /** - * Releases all resources used by the extension. - * - * @public - */ - cleanup() { - if (this._inflate) { - this._inflate.close(); - this._inflate = null; - } - - if (this._deflate) { - const callback = this._deflate[kCallback]; - - this._deflate.close(); - this._deflate = null; - - if (callback) { - callback( - new Error( - 'The deflate stream was closed while data was being processed' - ) - ); - } - } - } - - /** - * Accept an extension negotiation offer. 
- * - * @param {Array} offers The extension negotiation offers - * @return {Object} Accepted configuration - * @private - */ - acceptAsServer(offers) { - const opts = this._options; - const accepted = offers.find((params) => { - if ( - (opts.serverNoContextTakeover === false && - params.server_no_context_takeover) || - (params.server_max_window_bits && - (opts.serverMaxWindowBits === false || - (typeof opts.serverMaxWindowBits === 'number' && - opts.serverMaxWindowBits > params.server_max_window_bits))) || - (typeof opts.clientMaxWindowBits === 'number' && - !params.client_max_window_bits) - ) { - return false; - } - - return true; - }); - - if (!accepted) { - throw new Error('None of the extension offers can be accepted'); - } - - if (opts.serverNoContextTakeover) { - accepted.server_no_context_takeover = true; - } - if (opts.clientNoContextTakeover) { - accepted.client_no_context_takeover = true; - } - if (typeof opts.serverMaxWindowBits === 'number') { - accepted.server_max_window_bits = opts.serverMaxWindowBits; - } - if (typeof opts.clientMaxWindowBits === 'number') { - accepted.client_max_window_bits = opts.clientMaxWindowBits; - } else if ( - accepted.client_max_window_bits === true || - opts.clientMaxWindowBits === false - ) { - delete accepted.client_max_window_bits; - } - - return accepted; - } - - /** - * Accept the extension negotiation response. 
- * - * @param {Array} response The extension negotiation response - * @return {Object} Accepted configuration - * @private - */ - acceptAsClient(response) { - const params = response[0]; - - if ( - this._options.clientNoContextTakeover === false && - params.client_no_context_takeover - ) { - throw new Error('Unexpected parameter "client_no_context_takeover"'); - } - - if (!params.client_max_window_bits) { - if (typeof this._options.clientMaxWindowBits === 'number') { - params.client_max_window_bits = this._options.clientMaxWindowBits; - } - } else if ( - this._options.clientMaxWindowBits === false || - (typeof this._options.clientMaxWindowBits === 'number' && - params.client_max_window_bits > this._options.clientMaxWindowBits) - ) { - throw new Error( - 'Unexpected or invalid parameter "client_max_window_bits"' - ); - } - - return params; - } - - /** - * Normalize parameters. - * - * @param {Array} configurations The extension negotiation offers/reponse - * @return {Array} The offers/response with normalized parameters - * @private - */ - normalizeParams(configurations) { - configurations.forEach((params) => { - Object.keys(params).forEach((key) => { - let value = params[key]; - - if (value.length > 1) { - throw new Error(`Parameter "${key}" must have only a single value`); - } - - value = value[0]; - - if (key === 'client_max_window_bits') { - if (value !== true) { - const num = +value; - if (!Number.isInteger(num) || num < 8 || num > 15) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - value = num; - } else if (!this._isServer) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - } else if (key === 'server_max_window_bits') { - const num = +value; - if (!Number.isInteger(num) || num < 8 || num > 15) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - value = num; - } else if ( - key === 'client_no_context_takeover' || - key === 'server_no_context_takeover' 
- ) { - if (value !== true) { - throw new TypeError( - `Invalid value for parameter "${key}": ${value}` - ); - } - } else { - throw new Error(`Unknown parameter "${key}"`); - } - - params[key] = value; - }); - }); - - return configurations; - } - - /** - * Decompress data. Concurrency limited. - * - * @param {Buffer} data Compressed data - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @public - */ - decompress(data, fin, callback) { - zlibLimiter.add((done) => { - this._decompress(data, fin, (err, result) => { - done(); - callback(err, result); - }); - }); - } - - /** - * Compress data. Concurrency limited. - * - * @param {(Buffer|String)} data Data to compress - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @public - */ - compress(data, fin, callback) { - zlibLimiter.add((done) => { - this._compress(data, fin, (err, result) => { - done(); - callback(err, result); - }); - }); - } - - /** - * Decompress data. - * - * @param {Buffer} data Compressed data - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @private - */ - _decompress(data, fin, callback) { - const endpoint = this._isServer ? 'client' : 'server'; - - if (!this._inflate) { - const key = `${endpoint}_max_window_bits`; - const windowBits = - typeof this.params[key] !== 'number' - ? 
zlib.Z_DEFAULT_WINDOWBITS - : this.params[key]; - - this._inflate = zlib.createInflateRaw({ - ...this._options.zlibInflateOptions, - windowBits - }); - this._inflate[kPerMessageDeflate] = this; - this._inflate[kTotalLength] = 0; - this._inflate[kBuffers] = []; - this._inflate.on('error', inflateOnError); - this._inflate.on('data', inflateOnData); - } - - this._inflate[kCallback] = callback; - - this._inflate.write(data); - if (fin) this._inflate.write(TRAILER); - - this._inflate.flush(() => { - const err = this._inflate[kError]; - - if (err) { - this._inflate.close(); - this._inflate = null; - callback(err); - return; - } - - const data = bufferUtil.concat( - this._inflate[kBuffers], - this._inflate[kTotalLength] - ); - - if (this._inflate._readableState.endEmitted) { - this._inflate.close(); - this._inflate = null; - } else { - this._inflate[kTotalLength] = 0; - this._inflate[kBuffers] = []; - - if (fin && this.params[`${endpoint}_no_context_takeover`]) { - this._inflate.reset(); - } - } - - callback(null, data); - }); - } - - /** - * Compress data. - * - * @param {(Buffer|String)} data Data to compress - * @param {Boolean} fin Specifies whether or not this is the last fragment - * @param {Function} callback Callback - * @private - */ - _compress(data, fin, callback) { - const endpoint = this._isServer ? 'server' : 'client'; - - if (!this._deflate) { - const key = `${endpoint}_max_window_bits`; - const windowBits = - typeof this.params[key] !== 'number' - ? zlib.Z_DEFAULT_WINDOWBITS - : this.params[key]; - - this._deflate = zlib.createDeflateRaw({ - ...this._options.zlibDeflateOptions, - windowBits - }); - - this._deflate[kTotalLength] = 0; - this._deflate[kBuffers] = []; - - this._deflate.on('data', deflateOnData); - } - - this._deflate[kCallback] = callback; - - this._deflate.write(data); - this._deflate.flush(zlib.Z_SYNC_FLUSH, () => { - if (!this._deflate) { - // - // The deflate stream was closed while data was being processed. 
- // - return; - } - - let data = bufferUtil.concat( - this._deflate[kBuffers], - this._deflate[kTotalLength] - ); - - if (fin) { - data = new FastBuffer(data.buffer, data.byteOffset, data.length - 4); - } - - // - // Ensure that the callback will not be called again in - // `PerMessageDeflate#cleanup()`. - // - this._deflate[kCallback] = null; - - this._deflate[kTotalLength] = 0; - this._deflate[kBuffers] = []; - - if (fin && this.params[`${endpoint}_no_context_takeover`]) { - this._deflate.reset(); - } - - callback(null, data); - }); - } -} - -module.exports = PerMessageDeflate; - -/** - * The listener of the `zlib.DeflateRaw` stream `'data'` event. - * - * @param {Buffer} chunk A chunk of data - * @private - */ -function deflateOnData(chunk) { - this[kBuffers].push(chunk); - this[kTotalLength] += chunk.length; -} - -/** - * The listener of the `zlib.InflateRaw` stream `'data'` event. - * - * @param {Buffer} chunk A chunk of data - * @private - */ -function inflateOnData(chunk) { - this[kTotalLength] += chunk.length; - - if ( - this[kPerMessageDeflate]._maxPayload < 1 || - this[kTotalLength] <= this[kPerMessageDeflate]._maxPayload - ) { - this[kBuffers].push(chunk); - return; - } - - this[kError] = new RangeError('Max payload size exceeded'); - this[kError].code = 'WS_ERR_UNSUPPORTED_MESSAGE_LENGTH'; - this[kError][kStatusCode] = 1009; - this.removeListener('data', inflateOnData); - - // - // The choice to employ `zlib.reset()` over `zlib.close()` is dictated by the - // fact that in Node.js versions prior to 13.10.0, the callback for - // `zlib.flush()` is not called if `zlib.close()` is used. Utilizing - // `zlib.reset()` ensures that either the callback is invoked or an error is - // emitted. - // - this.reset(); -} - -/** - * The listener of the `zlib.InflateRaw` stream `'error'` event. 
- * - * @param {Error} err The emitted error - * @private - */ -function inflateOnError(err) { - // - // There is no need to call `Zlib#close()` as the handle is automatically - // closed when an error is emitted. - // - this[kPerMessageDeflate]._inflate = null; - - if (this[kError]) { - this[kCallback](this[kError]); - return; - } - - err[kStatusCode] = 1007; - this[kCallback](err); -} diff --git a/node_modules/ws/lib/receiver.js b/node_modules/ws/lib/receiver.js deleted file mode 100644 index 54d9b4f..0000000 --- a/node_modules/ws/lib/receiver.js +++ /dev/null @@ -1,706 +0,0 @@ -'use strict'; - -const { Writable } = require('stream'); - -const PerMessageDeflate = require('./permessage-deflate'); -const { - BINARY_TYPES, - EMPTY_BUFFER, - kStatusCode, - kWebSocket -} = require('./constants'); -const { concat, toArrayBuffer, unmask } = require('./buffer-util'); -const { isValidStatusCode, isValidUTF8 } = require('./validation'); - -const FastBuffer = Buffer[Symbol.species]; - -const GET_INFO = 0; -const GET_PAYLOAD_LENGTH_16 = 1; -const GET_PAYLOAD_LENGTH_64 = 2; -const GET_MASK = 3; -const GET_DATA = 4; -const INFLATING = 5; -const DEFER_EVENT = 6; - -/** - * HyBi Receiver implementation. - * - * @extends Writable - */ -class Receiver extends Writable { - /** - * Creates a Receiver instance. 
- * - * @param {Object} [options] Options object - * @param {Boolean} [options.allowSynchronousEvents=true] Specifies whether - * any of the `'message'`, `'ping'`, and `'pong'` events can be emitted - * multiple times in the same tick - * @param {String} [options.binaryType=nodebuffer] The type for binary data - * @param {Object} [options.extensions] An object containing the negotiated - * extensions - * @param {Boolean} [options.isServer=false] Specifies whether to operate in - * client or server mode - * @param {Number} [options.maxPayload=0] The maximum allowed message length - * @param {Boolean} [options.skipUTF8Validation=false] Specifies whether or - * not to skip UTF-8 validation for text and close messages - */ - constructor(options = {}) { - super(); - - this._allowSynchronousEvents = - options.allowSynchronousEvents !== undefined - ? options.allowSynchronousEvents - : true; - this._binaryType = options.binaryType || BINARY_TYPES[0]; - this._extensions = options.extensions || {}; - this._isServer = !!options.isServer; - this._maxPayload = options.maxPayload | 0; - this._skipUTF8Validation = !!options.skipUTF8Validation; - this[kWebSocket] = undefined; - - this._bufferedBytes = 0; - this._buffers = []; - - this._compressed = false; - this._payloadLength = 0; - this._mask = undefined; - this._fragmented = 0; - this._masked = false; - this._fin = false; - this._opcode = 0; - - this._totalPayloadLength = 0; - this._messageLength = 0; - this._fragments = []; - - this._errored = false; - this._loop = false; - this._state = GET_INFO; - } - - /** - * Implements `Writable.prototype._write()`. 
- * - * @param {Buffer} chunk The chunk of data to write - * @param {String} encoding The character encoding of `chunk` - * @param {Function} cb Callback - * @private - */ - _write(chunk, encoding, cb) { - if (this._opcode === 0x08 && this._state == GET_INFO) return cb(); - - this._bufferedBytes += chunk.length; - this._buffers.push(chunk); - this.startLoop(cb); - } - - /** - * Consumes `n` bytes from the buffered data. - * - * @param {Number} n The number of bytes to consume - * @return {Buffer} The consumed bytes - * @private - */ - consume(n) { - this._bufferedBytes -= n; - - if (n === this._buffers[0].length) return this._buffers.shift(); - - if (n < this._buffers[0].length) { - const buf = this._buffers[0]; - this._buffers[0] = new FastBuffer( - buf.buffer, - buf.byteOffset + n, - buf.length - n - ); - - return new FastBuffer(buf.buffer, buf.byteOffset, n); - } - - const dst = Buffer.allocUnsafe(n); - - do { - const buf = this._buffers[0]; - const offset = dst.length - n; - - if (n >= buf.length) { - dst.set(this._buffers.shift(), offset); - } else { - dst.set(new Uint8Array(buf.buffer, buf.byteOffset, n), offset); - this._buffers[0] = new FastBuffer( - buf.buffer, - buf.byteOffset + n, - buf.length - n - ); - } - - n -= buf.length; - } while (n > 0); - - return dst; - } - - /** - * Starts the parsing loop. - * - * @param {Function} cb Callback - * @private - */ - startLoop(cb) { - this._loop = true; - - do { - switch (this._state) { - case GET_INFO: - this.getInfo(cb); - break; - case GET_PAYLOAD_LENGTH_16: - this.getPayloadLength16(cb); - break; - case GET_PAYLOAD_LENGTH_64: - this.getPayloadLength64(cb); - break; - case GET_MASK: - this.getMask(); - break; - case GET_DATA: - this.getData(cb); - break; - case INFLATING: - case DEFER_EVENT: - this._loop = false; - return; - } - } while (this._loop); - - if (!this._errored) cb(); - } - - /** - * Reads the first two bytes of a frame. 
- * - * @param {Function} cb Callback - * @private - */ - getInfo(cb) { - if (this._bufferedBytes < 2) { - this._loop = false; - return; - } - - const buf = this.consume(2); - - if ((buf[0] & 0x30) !== 0x00) { - const error = this.createError( - RangeError, - 'RSV2 and RSV3 must be clear', - true, - 1002, - 'WS_ERR_UNEXPECTED_RSV_2_3' - ); - - cb(error); - return; - } - - const compressed = (buf[0] & 0x40) === 0x40; - - if (compressed && !this._extensions[PerMessageDeflate.extensionName]) { - const error = this.createError( - RangeError, - 'RSV1 must be clear', - true, - 1002, - 'WS_ERR_UNEXPECTED_RSV_1' - ); - - cb(error); - return; - } - - this._fin = (buf[0] & 0x80) === 0x80; - this._opcode = buf[0] & 0x0f; - this._payloadLength = buf[1] & 0x7f; - - if (this._opcode === 0x00) { - if (compressed) { - const error = this.createError( - RangeError, - 'RSV1 must be clear', - true, - 1002, - 'WS_ERR_UNEXPECTED_RSV_1' - ); - - cb(error); - return; - } - - if (!this._fragmented) { - const error = this.createError( - RangeError, - 'invalid opcode 0', - true, - 1002, - 'WS_ERR_INVALID_OPCODE' - ); - - cb(error); - return; - } - - this._opcode = this._fragmented; - } else if (this._opcode === 0x01 || this._opcode === 0x02) { - if (this._fragmented) { - const error = this.createError( - RangeError, - `invalid opcode ${this._opcode}`, - true, - 1002, - 'WS_ERR_INVALID_OPCODE' - ); - - cb(error); - return; - } - - this._compressed = compressed; - } else if (this._opcode > 0x07 && this._opcode < 0x0b) { - if (!this._fin) { - const error = this.createError( - RangeError, - 'FIN must be set', - true, - 1002, - 'WS_ERR_EXPECTED_FIN' - ); - - cb(error); - return; - } - - if (compressed) { - const error = this.createError( - RangeError, - 'RSV1 must be clear', - true, - 1002, - 'WS_ERR_UNEXPECTED_RSV_1' - ); - - cb(error); - return; - } - - if ( - this._payloadLength > 0x7d || - (this._opcode === 0x08 && this._payloadLength === 1) - ) { - const error = this.createError( - 
RangeError, - `invalid payload length ${this._payloadLength}`, - true, - 1002, - 'WS_ERR_INVALID_CONTROL_PAYLOAD_LENGTH' - ); - - cb(error); - return; - } - } else { - const error = this.createError( - RangeError, - `invalid opcode ${this._opcode}`, - true, - 1002, - 'WS_ERR_INVALID_OPCODE' - ); - - cb(error); - return; - } - - if (!this._fin && !this._fragmented) this._fragmented = this._opcode; - this._masked = (buf[1] & 0x80) === 0x80; - - if (this._isServer) { - if (!this._masked) { - const error = this.createError( - RangeError, - 'MASK must be set', - true, - 1002, - 'WS_ERR_EXPECTED_MASK' - ); - - cb(error); - return; - } - } else if (this._masked) { - const error = this.createError( - RangeError, - 'MASK must be clear', - true, - 1002, - 'WS_ERR_UNEXPECTED_MASK' - ); - - cb(error); - return; - } - - if (this._payloadLength === 126) this._state = GET_PAYLOAD_LENGTH_16; - else if (this._payloadLength === 127) this._state = GET_PAYLOAD_LENGTH_64; - else this.haveLength(cb); - } - - /** - * Gets extended payload length (7+16). - * - * @param {Function} cb Callback - * @private - */ - getPayloadLength16(cb) { - if (this._bufferedBytes < 2) { - this._loop = false; - return; - } - - this._payloadLength = this.consume(2).readUInt16BE(0); - this.haveLength(cb); - } - - /** - * Gets extended payload length (7+64). - * - * @param {Function} cb Callback - * @private - */ - getPayloadLength64(cb) { - if (this._bufferedBytes < 8) { - this._loop = false; - return; - } - - const buf = this.consume(8); - const num = buf.readUInt32BE(0); - - // - // The maximum safe integer in JavaScript is 2^53 - 1. An error is returned - // if payload length is greater than this number. 
- // - if (num > Math.pow(2, 53 - 32) - 1) { - const error = this.createError( - RangeError, - 'Unsupported WebSocket frame: payload length > 2^53 - 1', - false, - 1009, - 'WS_ERR_UNSUPPORTED_DATA_PAYLOAD_LENGTH' - ); - - cb(error); - return; - } - - this._payloadLength = num * Math.pow(2, 32) + buf.readUInt32BE(4); - this.haveLength(cb); - } - - /** - * Payload length has been read. - * - * @param {Function} cb Callback - * @private - */ - haveLength(cb) { - if (this._payloadLength && this._opcode < 0x08) { - this._totalPayloadLength += this._payloadLength; - if (this._totalPayloadLength > this._maxPayload && this._maxPayload > 0) { - const error = this.createError( - RangeError, - 'Max payload size exceeded', - false, - 1009, - 'WS_ERR_UNSUPPORTED_MESSAGE_LENGTH' - ); - - cb(error); - return; - } - } - - if (this._masked) this._state = GET_MASK; - else this._state = GET_DATA; - } - - /** - * Reads mask bytes. - * - * @private - */ - getMask() { - if (this._bufferedBytes < 4) { - this._loop = false; - return; - } - - this._mask = this.consume(4); - this._state = GET_DATA; - } - - /** - * Reads data bytes. - * - * @param {Function} cb Callback - * @private - */ - getData(cb) { - let data = EMPTY_BUFFER; - - if (this._payloadLength) { - if (this._bufferedBytes < this._payloadLength) { - this._loop = false; - return; - } - - data = this.consume(this._payloadLength); - - if ( - this._masked && - (this._mask[0] | this._mask[1] | this._mask[2] | this._mask[3]) !== 0 - ) { - unmask(data, this._mask); - } - } - - if (this._opcode > 0x07) { - this.controlMessage(data, cb); - return; - } - - if (this._compressed) { - this._state = INFLATING; - this.decompress(data, cb); - return; - } - - if (data.length) { - // - // This message is not compressed so its length is the sum of the payload - // length of all fragments. - // - this._messageLength = this._totalPayloadLength; - this._fragments.push(data); - } - - this.dataMessage(cb); - } - - /** - * Decompresses data. 
- * - * @param {Buffer} data Compressed data - * @param {Function} cb Callback - * @private - */ - decompress(data, cb) { - const perMessageDeflate = this._extensions[PerMessageDeflate.extensionName]; - - perMessageDeflate.decompress(data, this._fin, (err, buf) => { - if (err) return cb(err); - - if (buf.length) { - this._messageLength += buf.length; - if (this._messageLength > this._maxPayload && this._maxPayload > 0) { - const error = this.createError( - RangeError, - 'Max payload size exceeded', - false, - 1009, - 'WS_ERR_UNSUPPORTED_MESSAGE_LENGTH' - ); - - cb(error); - return; - } - - this._fragments.push(buf); - } - - this.dataMessage(cb); - if (this._state === GET_INFO) this.startLoop(cb); - }); - } - - /** - * Handles a data message. - * - * @param {Function} cb Callback - * @private - */ - dataMessage(cb) { - if (!this._fin) { - this._state = GET_INFO; - return; - } - - const messageLength = this._messageLength; - const fragments = this._fragments; - - this._totalPayloadLength = 0; - this._messageLength = 0; - this._fragmented = 0; - this._fragments = []; - - if (this._opcode === 2) { - let data; - - if (this._binaryType === 'nodebuffer') { - data = concat(fragments, messageLength); - } else if (this._binaryType === 'arraybuffer') { - data = toArrayBuffer(concat(fragments, messageLength)); - } else if (this._binaryType === 'blob') { - data = new Blob(fragments); - } else { - data = fragments; - } - - if (this._allowSynchronousEvents) { - this.emit('message', data, true); - this._state = GET_INFO; - } else { - this._state = DEFER_EVENT; - setImmediate(() => { - this.emit('message', data, true); - this._state = GET_INFO; - this.startLoop(cb); - }); - } - } else { - const buf = concat(fragments, messageLength); - - if (!this._skipUTF8Validation && !isValidUTF8(buf)) { - const error = this.createError( - Error, - 'invalid UTF-8 sequence', - true, - 1007, - 'WS_ERR_INVALID_UTF8' - ); - - cb(error); - return; - } - - if (this._state === INFLATING || 
this._allowSynchronousEvents) { - this.emit('message', buf, false); - this._state = GET_INFO; - } else { - this._state = DEFER_EVENT; - setImmediate(() => { - this.emit('message', buf, false); - this._state = GET_INFO; - this.startLoop(cb); - }); - } - } - } - - /** - * Handles a control message. - * - * @param {Buffer} data Data to handle - * @return {(Error|RangeError|undefined)} A possible error - * @private - */ - controlMessage(data, cb) { - if (this._opcode === 0x08) { - if (data.length === 0) { - this._loop = false; - this.emit('conclude', 1005, EMPTY_BUFFER); - this.end(); - } else { - const code = data.readUInt16BE(0); - - if (!isValidStatusCode(code)) { - const error = this.createError( - RangeError, - `invalid status code ${code}`, - true, - 1002, - 'WS_ERR_INVALID_CLOSE_CODE' - ); - - cb(error); - return; - } - - const buf = new FastBuffer( - data.buffer, - data.byteOffset + 2, - data.length - 2 - ); - - if (!this._skipUTF8Validation && !isValidUTF8(buf)) { - const error = this.createError( - Error, - 'invalid UTF-8 sequence', - true, - 1007, - 'WS_ERR_INVALID_UTF8' - ); - - cb(error); - return; - } - - this._loop = false; - this.emit('conclude', code, buf); - this.end(); - } - - this._state = GET_INFO; - return; - } - - if (this._allowSynchronousEvents) { - this.emit(this._opcode === 0x09 ? 'ping' : 'pong', data); - this._state = GET_INFO; - } else { - this._state = DEFER_EVENT; - setImmediate(() => { - this.emit(this._opcode === 0x09 ? 'ping' : 'pong', data); - this._state = GET_INFO; - this.startLoop(cb); - }); - } - } - - /** - * Builds an error object. 
- * - * @param {function(new:Error|RangeError)} ErrorCtor The error constructor - * @param {String} message The error message - * @param {Boolean} prefix Specifies whether or not to add a default prefix to - * `message` - * @param {Number} statusCode The status code - * @param {String} errorCode The exposed error code - * @return {(Error|RangeError)} The error - * @private - */ - createError(ErrorCtor, message, prefix, statusCode, errorCode) { - this._loop = false; - this._errored = true; - - const err = new ErrorCtor( - prefix ? `Invalid WebSocket frame: ${message}` : message - ); - - Error.captureStackTrace(err, this.createError); - err.code = errorCode; - err[kStatusCode] = statusCode; - return err; - } -} - -module.exports = Receiver; diff --git a/node_modules/ws/lib/sender.js b/node_modules/ws/lib/sender.js deleted file mode 100644 index a8b1da3..0000000 --- a/node_modules/ws/lib/sender.js +++ /dev/null @@ -1,602 +0,0 @@ -/* eslint no-unused-vars: ["error", { "varsIgnorePattern": "^Duplex" }] */ - -'use strict'; - -const { Duplex } = require('stream'); -const { randomFillSync } = require('crypto'); - -const PerMessageDeflate = require('./permessage-deflate'); -const { EMPTY_BUFFER, kWebSocket, NOOP } = require('./constants'); -const { isBlob, isValidStatusCode } = require('./validation'); -const { mask: applyMask, toBuffer } = require('./buffer-util'); - -const kByteLength = Symbol('kByteLength'); -const maskBuffer = Buffer.alloc(4); -const RANDOM_POOL_SIZE = 8 * 1024; -let randomPool; -let randomPoolPointer = RANDOM_POOL_SIZE; - -const DEFAULT = 0; -const DEFLATING = 1; -const GET_BLOB_DATA = 2; - -/** - * HyBi Sender implementation. - */ -class Sender { - /** - * Creates a Sender instance. 
- * - * @param {Duplex} socket The connection socket - * @param {Object} [extensions] An object containing the negotiated extensions - * @param {Function} [generateMask] The function used to generate the masking - * key - */ - constructor(socket, extensions, generateMask) { - this._extensions = extensions || {}; - - if (generateMask) { - this._generateMask = generateMask; - this._maskBuffer = Buffer.alloc(4); - } - - this._socket = socket; - - this._firstFragment = true; - this._compress = false; - - this._bufferedBytes = 0; - this._queue = []; - this._state = DEFAULT; - this.onerror = NOOP; - this[kWebSocket] = undefined; - } - - /** - * Frames a piece of data according to the HyBi WebSocket protocol. - * - * @param {(Buffer|String)} data The data to frame - * @param {Object} options Options object - * @param {Boolean} [options.fin=false] Specifies whether or not to set the - * FIN bit - * @param {Function} [options.generateMask] The function used to generate the - * masking key - * @param {Boolean} [options.mask=false] Specifies whether or not to mask - * `data` - * @param {Buffer} [options.maskBuffer] The buffer used to store the masking - * key - * @param {Number} options.opcode The opcode - * @param {Boolean} [options.readOnly=false] Specifies whether `data` can be - * modified - * @param {Boolean} [options.rsv1=false] Specifies whether or not to set the - * RSV1 bit - * @return {(Buffer|String)[]} The framed data - * @public - */ - static frame(data, options) { - let mask; - let merge = false; - let offset = 2; - let skipMasking = false; - - if (options.mask) { - mask = options.maskBuffer || maskBuffer; - - if (options.generateMask) { - options.generateMask(mask); - } else { - if (randomPoolPointer === RANDOM_POOL_SIZE) { - /* istanbul ignore else */ - if (randomPool === undefined) { - // - // This is lazily initialized because server-sent frames must not - // be masked so it may never be used. 
- // - randomPool = Buffer.alloc(RANDOM_POOL_SIZE); - } - - randomFillSync(randomPool, 0, RANDOM_POOL_SIZE); - randomPoolPointer = 0; - } - - mask[0] = randomPool[randomPoolPointer++]; - mask[1] = randomPool[randomPoolPointer++]; - mask[2] = randomPool[randomPoolPointer++]; - mask[3] = randomPool[randomPoolPointer++]; - } - - skipMasking = (mask[0] | mask[1] | mask[2] | mask[3]) === 0; - offset = 6; - } - - let dataLength; - - if (typeof data === 'string') { - if ( - (!options.mask || skipMasking) && - options[kByteLength] !== undefined - ) { - dataLength = options[kByteLength]; - } else { - data = Buffer.from(data); - dataLength = data.length; - } - } else { - dataLength = data.length; - merge = options.mask && options.readOnly && !skipMasking; - } - - let payloadLength = dataLength; - - if (dataLength >= 65536) { - offset += 8; - payloadLength = 127; - } else if (dataLength > 125) { - offset += 2; - payloadLength = 126; - } - - const target = Buffer.allocUnsafe(merge ? dataLength + offset : offset); - - target[0] = options.fin ? options.opcode | 0x80 : options.opcode; - if (options.rsv1) target[0] |= 0x40; - - target[1] = payloadLength; - - if (payloadLength === 126) { - target.writeUInt16BE(dataLength, 2); - } else if (payloadLength === 127) { - target[2] = target[3] = 0; - target.writeUIntBE(dataLength, 4, 6); - } - - if (!options.mask) return [target, data]; - - target[1] |= 0x80; - target[offset - 4] = mask[0]; - target[offset - 3] = mask[1]; - target[offset - 2] = mask[2]; - target[offset - 1] = mask[3]; - - if (skipMasking) return [target, data]; - - if (merge) { - applyMask(data, mask, target, offset, dataLength); - return [target]; - } - - applyMask(data, mask, data, 0, dataLength); - return [target, data]; - } - - /** - * Sends a close message to the other peer. 
- * - * @param {Number} [code] The status code component of the body - * @param {(String|Buffer)} [data] The message component of the body - * @param {Boolean} [mask=false] Specifies whether or not to mask the message - * @param {Function} [cb] Callback - * @public - */ - close(code, data, mask, cb) { - let buf; - - if (code === undefined) { - buf = EMPTY_BUFFER; - } else if (typeof code !== 'number' || !isValidStatusCode(code)) { - throw new TypeError('First argument must be a valid error code number'); - } else if (data === undefined || !data.length) { - buf = Buffer.allocUnsafe(2); - buf.writeUInt16BE(code, 0); - } else { - const length = Buffer.byteLength(data); - - if (length > 123) { - throw new RangeError('The message must not be greater than 123 bytes'); - } - - buf = Buffer.allocUnsafe(2 + length); - buf.writeUInt16BE(code, 0); - - if (typeof data === 'string') { - buf.write(data, 2); - } else { - buf.set(data, 2); - } - } - - const options = { - [kByteLength]: buf.length, - fin: true, - generateMask: this._generateMask, - mask, - maskBuffer: this._maskBuffer, - opcode: 0x08, - readOnly: false, - rsv1: false - }; - - if (this._state !== DEFAULT) { - this.enqueue([this.dispatch, buf, false, options, cb]); - } else { - this.sendFrame(Sender.frame(buf, options), cb); - } - } - - /** - * Sends a ping message to the other peer. 
- * - * @param {*} data The message to send - * @param {Boolean} [mask=false] Specifies whether or not to mask `data` - * @param {Function} [cb] Callback - * @public - */ - ping(data, mask, cb) { - let byteLength; - let readOnly; - - if (typeof data === 'string') { - byteLength = Buffer.byteLength(data); - readOnly = false; - } else if (isBlob(data)) { - byteLength = data.size; - readOnly = false; - } else { - data = toBuffer(data); - byteLength = data.length; - readOnly = toBuffer.readOnly; - } - - if (byteLength > 125) { - throw new RangeError('The data size must not be greater than 125 bytes'); - } - - const options = { - [kByteLength]: byteLength, - fin: true, - generateMask: this._generateMask, - mask, - maskBuffer: this._maskBuffer, - opcode: 0x09, - readOnly, - rsv1: false - }; - - if (isBlob(data)) { - if (this._state !== DEFAULT) { - this.enqueue([this.getBlobData, data, false, options, cb]); - } else { - this.getBlobData(data, false, options, cb); - } - } else if (this._state !== DEFAULT) { - this.enqueue([this.dispatch, data, false, options, cb]); - } else { - this.sendFrame(Sender.frame(data, options), cb); - } - } - - /** - * Sends a pong message to the other peer. 
- * - * @param {*} data The message to send - * @param {Boolean} [mask=false] Specifies whether or not to mask `data` - * @param {Function} [cb] Callback - * @public - */ - pong(data, mask, cb) { - let byteLength; - let readOnly; - - if (typeof data === 'string') { - byteLength = Buffer.byteLength(data); - readOnly = false; - } else if (isBlob(data)) { - byteLength = data.size; - readOnly = false; - } else { - data = toBuffer(data); - byteLength = data.length; - readOnly = toBuffer.readOnly; - } - - if (byteLength > 125) { - throw new RangeError('The data size must not be greater than 125 bytes'); - } - - const options = { - [kByteLength]: byteLength, - fin: true, - generateMask: this._generateMask, - mask, - maskBuffer: this._maskBuffer, - opcode: 0x0a, - readOnly, - rsv1: false - }; - - if (isBlob(data)) { - if (this._state !== DEFAULT) { - this.enqueue([this.getBlobData, data, false, options, cb]); - } else { - this.getBlobData(data, false, options, cb); - } - } else if (this._state !== DEFAULT) { - this.enqueue([this.dispatch, data, false, options, cb]); - } else { - this.sendFrame(Sender.frame(data, options), cb); - } - } - - /** - * Sends a data message to the other peer. - * - * @param {*} data The message to send - * @param {Object} options Options object - * @param {Boolean} [options.binary=false] Specifies whether `data` is binary - * or text - * @param {Boolean} [options.compress=false] Specifies whether or not to - * compress `data` - * @param {Boolean} [options.fin=false] Specifies whether the fragment is the - * last one - * @param {Boolean} [options.mask=false] Specifies whether or not to mask - * `data` - * @param {Function} [cb] Callback - * @public - */ - send(data, options, cb) { - const perMessageDeflate = this._extensions[PerMessageDeflate.extensionName]; - let opcode = options.binary ? 
2 : 1; - let rsv1 = options.compress; - - let byteLength; - let readOnly; - - if (typeof data === 'string') { - byteLength = Buffer.byteLength(data); - readOnly = false; - } else if (isBlob(data)) { - byteLength = data.size; - readOnly = false; - } else { - data = toBuffer(data); - byteLength = data.length; - readOnly = toBuffer.readOnly; - } - - if (this._firstFragment) { - this._firstFragment = false; - if ( - rsv1 && - perMessageDeflate && - perMessageDeflate.params[ - perMessageDeflate._isServer - ? 'server_no_context_takeover' - : 'client_no_context_takeover' - ] - ) { - rsv1 = byteLength >= perMessageDeflate._threshold; - } - this._compress = rsv1; - } else { - rsv1 = false; - opcode = 0; - } - - if (options.fin) this._firstFragment = true; - - const opts = { - [kByteLength]: byteLength, - fin: options.fin, - generateMask: this._generateMask, - mask: options.mask, - maskBuffer: this._maskBuffer, - opcode, - readOnly, - rsv1 - }; - - if (isBlob(data)) { - if (this._state !== DEFAULT) { - this.enqueue([this.getBlobData, data, this._compress, opts, cb]); - } else { - this.getBlobData(data, this._compress, opts, cb); - } - } else if (this._state !== DEFAULT) { - this.enqueue([this.dispatch, data, this._compress, opts, cb]); - } else { - this.dispatch(data, this._compress, opts, cb); - } - } - - /** - * Gets the contents of a blob as binary data. 
- * - * @param {Blob} blob The blob - * @param {Boolean} [compress=false] Specifies whether or not to compress - * the data - * @param {Object} options Options object - * @param {Boolean} [options.fin=false] Specifies whether or not to set the - * FIN bit - * @param {Function} [options.generateMask] The function used to generate the - * masking key - * @param {Boolean} [options.mask=false] Specifies whether or not to mask - * `data` - * @param {Buffer} [options.maskBuffer] The buffer used to store the masking - * key - * @param {Number} options.opcode The opcode - * @param {Boolean} [options.readOnly=false] Specifies whether `data` can be - * modified - * @param {Boolean} [options.rsv1=false] Specifies whether or not to set the - * RSV1 bit - * @param {Function} [cb] Callback - * @private - */ - getBlobData(blob, compress, options, cb) { - this._bufferedBytes += options[kByteLength]; - this._state = GET_BLOB_DATA; - - blob - .arrayBuffer() - .then((arrayBuffer) => { - if (this._socket.destroyed) { - const err = new Error( - 'The socket was closed while the blob was being read' - ); - - // - // `callCallbacks` is called in the next tick to ensure that errors - // that might be thrown in the callbacks behave like errors thrown - // outside the promise chain. - // - process.nextTick(callCallbacks, this, err, cb); - return; - } - - this._bufferedBytes -= options[kByteLength]; - const data = toBuffer(arrayBuffer); - - if (!compress) { - this._state = DEFAULT; - this.sendFrame(Sender.frame(data, options), cb); - this.dequeue(); - } else { - this.dispatch(data, compress, options, cb); - } - }) - .catch((err) => { - // - // `onError` is called in the next tick for the same reason that - // `callCallbacks` above is. - // - process.nextTick(onError, this, err, cb); - }); - } - - /** - * Dispatches a message. 
- * - * @param {(Buffer|String)} data The message to send - * @param {Boolean} [compress=false] Specifies whether or not to compress - * `data` - * @param {Object} options Options object - * @param {Boolean} [options.fin=false] Specifies whether or not to set the - * FIN bit - * @param {Function} [options.generateMask] The function used to generate the - * masking key - * @param {Boolean} [options.mask=false] Specifies whether or not to mask - * `data` - * @param {Buffer} [options.maskBuffer] The buffer used to store the masking - * key - * @param {Number} options.opcode The opcode - * @param {Boolean} [options.readOnly=false] Specifies whether `data` can be - * modified - * @param {Boolean} [options.rsv1=false] Specifies whether or not to set the - * RSV1 bit - * @param {Function} [cb] Callback - * @private - */ - dispatch(data, compress, options, cb) { - if (!compress) { - this.sendFrame(Sender.frame(data, options), cb); - return; - } - - const perMessageDeflate = this._extensions[PerMessageDeflate.extensionName]; - - this._bufferedBytes += options[kByteLength]; - this._state = DEFLATING; - perMessageDeflate.compress(data, options.fin, (_, buf) => { - if (this._socket.destroyed) { - const err = new Error( - 'The socket was closed while data was being compressed' - ); - - callCallbacks(this, err, cb); - return; - } - - this._bufferedBytes -= options[kByteLength]; - this._state = DEFAULT; - options.readOnly = false; - this.sendFrame(Sender.frame(buf, options), cb); - this.dequeue(); - }); - } - - /** - * Executes queued send operations. - * - * @private - */ - dequeue() { - while (this._state === DEFAULT && this._queue.length) { - const params = this._queue.shift(); - - this._bufferedBytes -= params[3][kByteLength]; - Reflect.apply(params[0], this, params.slice(1)); - } - } - - /** - * Enqueues a send operation. - * - * @param {Array} params Send operation parameters. 
- * @private - */ - enqueue(params) { - this._bufferedBytes += params[3][kByteLength]; - this._queue.push(params); - } - - /** - * Sends a frame. - * - * @param {(Buffer | String)[]} list The frame to send - * @param {Function} [cb] Callback - * @private - */ - sendFrame(list, cb) { - if (list.length === 2) { - this._socket.cork(); - this._socket.write(list[0]); - this._socket.write(list[1], cb); - this._socket.uncork(); - } else { - this._socket.write(list[0], cb); - } - } -} - -module.exports = Sender; - -/** - * Calls queued callbacks with an error. - * - * @param {Sender} sender The `Sender` instance - * @param {Error} err The error to call the callbacks with - * @param {Function} [cb] The first callback - * @private - */ -function callCallbacks(sender, err, cb) { - if (typeof cb === 'function') cb(err); - - for (let i = 0; i < sender._queue.length; i++) { - const params = sender._queue[i]; - const callback = params[params.length - 1]; - - if (typeof callback === 'function') callback(err); - } -} - -/** - * Handles a `Sender` error. - * - * @param {Sender} sender The `Sender` instance - * @param {Error} err The error - * @param {Function} [cb] The first pending callback - * @private - */ -function onError(sender, err, cb) { - callCallbacks(sender, err, cb); - sender.onerror(err); -} diff --git a/node_modules/ws/lib/stream.js b/node_modules/ws/lib/stream.js deleted file mode 100644 index 4c58c91..0000000 --- a/node_modules/ws/lib/stream.js +++ /dev/null @@ -1,161 +0,0 @@ -/* eslint no-unused-vars: ["error", { "varsIgnorePattern": "^WebSocket$" }] */ -'use strict'; - -const WebSocket = require('./websocket'); -const { Duplex } = require('stream'); - -/** - * Emits the `'close'` event on a stream. - * - * @param {Duplex} stream The stream. - * @private - */ -function emitClose(stream) { - stream.emit('close'); -} - -/** - * The listener of the `'end'` event. 
- * - * @private - */ -function duplexOnEnd() { - if (!this.destroyed && this._writableState.finished) { - this.destroy(); - } -} - -/** - * The listener of the `'error'` event. - * - * @param {Error} err The error - * @private - */ -function duplexOnError(err) { - this.removeListener('error', duplexOnError); - this.destroy(); - if (this.listenerCount('error') === 0) { - // Do not suppress the throwing behavior. - this.emit('error', err); - } -} - -/** - * Wraps a `WebSocket` in a duplex stream. - * - * @param {WebSocket} ws The `WebSocket` to wrap - * @param {Object} [options] The options for the `Duplex` constructor - * @return {Duplex} The duplex stream - * @public - */ -function createWebSocketStream(ws, options) { - let terminateOnDestroy = true; - - const duplex = new Duplex({ - ...options, - autoDestroy: false, - emitClose: false, - objectMode: false, - writableObjectMode: false - }); - - ws.on('message', function message(msg, isBinary) { - const data = - !isBinary && duplex._readableState.objectMode ? msg.toString() : msg; - - if (!duplex.push(data)) ws.pause(); - }); - - ws.once('error', function error(err) { - if (duplex.destroyed) return; - - // Prevent `ws.terminate()` from being called by `duplex._destroy()`. - // - // - If the `'error'` event is emitted before the `'open'` event, then - // `ws.terminate()` is a noop as no socket is assigned. - // - Otherwise, the error is re-emitted by the listener of the `'error'` - // event of the `Receiver` object. The listener already closes the - // connection by calling `ws.close()`. This allows a close frame to be - // sent to the other peer. If `ws.terminate()` is called right after this, - // then the close frame might not be sent. 
- terminateOnDestroy = false; - duplex.destroy(err); - }); - - ws.once('close', function close() { - if (duplex.destroyed) return; - - duplex.push(null); - }); - - duplex._destroy = function (err, callback) { - if (ws.readyState === ws.CLOSED) { - callback(err); - process.nextTick(emitClose, duplex); - return; - } - - let called = false; - - ws.once('error', function error(err) { - called = true; - callback(err); - }); - - ws.once('close', function close() { - if (!called) callback(err); - process.nextTick(emitClose, duplex); - }); - - if (terminateOnDestroy) ws.terminate(); - }; - - duplex._final = function (callback) { - if (ws.readyState === ws.CONNECTING) { - ws.once('open', function open() { - duplex._final(callback); - }); - return; - } - - // If the value of the `_socket` property is `null` it means that `ws` is a - // client websocket and the handshake failed. In fact, when this happens, a - // socket is never assigned to the websocket. Wait for the `'error'` event - // that will be emitted by the websocket. - if (ws._socket === null) return; - - if (ws._socket._writableState.finished) { - callback(); - if (duplex._readableState.endEmitted) duplex.destroy(); - } else { - ws._socket.once('finish', function finish() { - // `duplex` is not destroyed here because the `'end'` event will be - // emitted on `duplex` after this `'finish'` event. The EOF signaling - // `null` chunk is, in fact, pushed when the websocket emits `'close'`. 
- callback(); - }); - ws.close(); - } - }; - - duplex._read = function () { - if (ws.isPaused) ws.resume(); - }; - - duplex._write = function (chunk, encoding, callback) { - if (ws.readyState === ws.CONNECTING) { - ws.once('open', function open() { - duplex._write(chunk, encoding, callback); - }); - return; - } - - ws.send(chunk, callback); - }; - - duplex.on('end', duplexOnEnd); - duplex.on('error', duplexOnError); - return duplex; -} - -module.exports = createWebSocketStream; diff --git a/node_modules/ws/lib/subprotocol.js b/node_modules/ws/lib/subprotocol.js deleted file mode 100644 index d4381e8..0000000 --- a/node_modules/ws/lib/subprotocol.js +++ /dev/null @@ -1,62 +0,0 @@ -'use strict'; - -const { tokenChars } = require('./validation'); - -/** - * Parses the `Sec-WebSocket-Protocol` header into a set of subprotocol names. - * - * @param {String} header The field value of the header - * @return {Set} The subprotocol names - * @public - */ -function parse(header) { - const protocols = new Set(); - let start = -1; - let end = -1; - let i = 0; - - for (i; i < header.length; i++) { - const code = header.charCodeAt(i); - - if (end === -1 && tokenChars[code] === 1) { - if (start === -1) start = i; - } else if ( - i !== 0 && - (code === 0x20 /* ' ' */ || code === 0x09) /* '\t' */ - ) { - if (end === -1 && start !== -1) end = i; - } else if (code === 0x2c /* ',' */) { - if (start === -1) { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - - if (end === -1) end = i; - - const protocol = header.slice(start, end); - - if (protocols.has(protocol)) { - throw new SyntaxError(`The "${protocol}" subprotocol is duplicated`); - } - - protocols.add(protocol); - start = end = -1; - } else { - throw new SyntaxError(`Unexpected character at index ${i}`); - } - } - - if (start === -1 || end !== -1) { - throw new SyntaxError('Unexpected end of input'); - } - - const protocol = header.slice(start, i); - - if (protocols.has(protocol)) { - throw new SyntaxError(`The 
"${protocol}" subprotocol is duplicated`); - } - - protocols.add(protocol); - return protocols; -} - -module.exports = { parse }; diff --git a/node_modules/ws/lib/validation.js b/node_modules/ws/lib/validation.js deleted file mode 100644 index 4a2e68d..0000000 --- a/node_modules/ws/lib/validation.js +++ /dev/null @@ -1,152 +0,0 @@ -'use strict'; - -const { isUtf8 } = require('buffer'); - -const { hasBlob } = require('./constants'); - -// -// Allowed token characters: -// -// '!', '#', '$', '%', '&', ''', '*', '+', '-', -// '.', 0-9, A-Z, '^', '_', '`', a-z, '|', '~' -// -// tokenChars[32] === 0 // ' ' -// tokenChars[33] === 1 // '!' -// tokenChars[34] === 0 // '"' -// ... -// -// prettier-ignore -const tokenChars = [ - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, // 0 - 15 - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, // 16 - 31 - 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, // 32 - 47 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, // 48 - 63 - 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 64 - 79 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, // 80 - 95 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 96 - 111 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0 // 112 - 127 -]; - -/** - * Checks if a status code is allowed in a close frame. - * - * @param {Number} code The status code - * @return {Boolean} `true` if the status code is valid, else `false` - * @public - */ -function isValidStatusCode(code) { - return ( - (code >= 1000 && - code <= 1014 && - code !== 1004 && - code !== 1005 && - code !== 1006) || - (code >= 3000 && code <= 4999) - ); -} - -/** - * Checks if a given buffer contains only correct UTF-8. - * Ported from https://www.cl.cam.ac.uk/%7Emgk25/ucs/utf8_check.c by - * Markus Kuhn. 
- * - * @param {Buffer} buf The buffer to check - * @return {Boolean} `true` if `buf` contains only correct UTF-8, else `false` - * @public - */ -function _isValidUTF8(buf) { - const len = buf.length; - let i = 0; - - while (i < len) { - if ((buf[i] & 0x80) === 0) { - // 0xxxxxxx - i++; - } else if ((buf[i] & 0xe0) === 0xc0) { - // 110xxxxx 10xxxxxx - if ( - i + 1 === len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i] & 0xfe) === 0xc0 // Overlong - ) { - return false; - } - - i += 2; - } else if ((buf[i] & 0xf0) === 0xe0) { - // 1110xxxx 10xxxxxx 10xxxxxx - if ( - i + 2 >= len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i + 2] & 0xc0) !== 0x80 || - (buf[i] === 0xe0 && (buf[i + 1] & 0xe0) === 0x80) || // Overlong - (buf[i] === 0xed && (buf[i + 1] & 0xe0) === 0xa0) // Surrogate (U+D800 - U+DFFF) - ) { - return false; - } - - i += 3; - } else if ((buf[i] & 0xf8) === 0xf0) { - // 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx - if ( - i + 3 >= len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i + 2] & 0xc0) !== 0x80 || - (buf[i + 3] & 0xc0) !== 0x80 || - (buf[i] === 0xf0 && (buf[i + 1] & 0xf0) === 0x80) || // Overlong - (buf[i] === 0xf4 && buf[i + 1] > 0x8f) || - buf[i] > 0xf4 // > U+10FFFF - ) { - return false; - } - - i += 4; - } else { - return false; - } - } - - return true; -} - -/** - * Determines whether a value is a `Blob`. - * - * @param {*} value The value to be tested - * @return {Boolean} `true` if `value` is a `Blob`, else `false` - * @private - */ -function isBlob(value) { - return ( - hasBlob && - typeof value === 'object' && - typeof value.arrayBuffer === 'function' && - typeof value.type === 'string' && - typeof value.stream === 'function' && - (value[Symbol.toStringTag] === 'Blob' || - value[Symbol.toStringTag] === 'File') - ); -} - -module.exports = { - isBlob, - isValidStatusCode, - isValidUTF8: _isValidUTF8, - tokenChars -}; - -if (isUtf8) { - module.exports.isValidUTF8 = function (buf) { - return buf.length < 24 ? 
_isValidUTF8(buf) : isUtf8(buf); - }; -} /* istanbul ignore else */ else if (!process.env.WS_NO_UTF_8_VALIDATE) { - try { - const isValidUTF8 = require('utf-8-validate'); - - module.exports.isValidUTF8 = function (buf) { - return buf.length < 32 ? _isValidUTF8(buf) : isValidUTF8(buf); - }; - } catch (e) { - // Continue regardless of the error. - } -} diff --git a/node_modules/ws/lib/websocket-server.js b/node_modules/ws/lib/websocket-server.js deleted file mode 100644 index 75e04c1..0000000 --- a/node_modules/ws/lib/websocket-server.js +++ /dev/null @@ -1,554 +0,0 @@ -/* eslint no-unused-vars: ["error", { "varsIgnorePattern": "^Duplex$", "caughtErrors": "none" }] */ - -'use strict'; - -const EventEmitter = require('events'); -const http = require('http'); -const { Duplex } = require('stream'); -const { createHash } = require('crypto'); - -const extension = require('./extension'); -const PerMessageDeflate = require('./permessage-deflate'); -const subprotocol = require('./subprotocol'); -const WebSocket = require('./websocket'); -const { CLOSE_TIMEOUT, GUID, kWebSocket } = require('./constants'); - -const keyRegex = /^[+/0-9A-Za-z]{22}==$/; - -const RUNNING = 0; -const CLOSING = 1; -const CLOSED = 2; - -/** - * Class representing a WebSocket server. - * - * @extends EventEmitter - */ -class WebSocketServer extends EventEmitter { - /** - * Create a `WebSocketServer` instance. 
- * - * @param {Object} options Configuration options - * @param {Boolean} [options.allowSynchronousEvents=true] Specifies whether - * any of the `'message'`, `'ping'`, and `'pong'` events can be emitted - * multiple times in the same tick - * @param {Boolean} [options.autoPong=true] Specifies whether or not to - * automatically send a pong in response to a ping - * @param {Number} [options.backlog=511] The maximum length of the queue of - * pending connections - * @param {Boolean} [options.clientTracking=true] Specifies whether or not to - * track clients - * @param {Number} [options.closeTimeout=30000] Duration in milliseconds to - * wait for the closing handshake to finish after `websocket.close()` is - * called - * @param {Function} [options.handleProtocols] A hook to handle protocols - * @param {String} [options.host] The hostname where to bind the server - * @param {Number} [options.maxPayload=104857600] The maximum allowed message - * size - * @param {Boolean} [options.noServer=false] Enable no server mode - * @param {String} [options.path] Accept only connections matching this path - * @param {(Boolean|Object)} [options.perMessageDeflate=false] Enable/disable - * permessage-deflate - * @param {Number} [options.port] The port where to bind the server - * @param {(http.Server|https.Server)} [options.server] A pre-created HTTP/S - * server to use - * @param {Boolean} [options.skipUTF8Validation=false] Specifies whether or - * not to skip UTF-8 validation for text and close messages - * @param {Function} [options.verifyClient] A hook to reject connections - * @param {Function} [options.WebSocket=WebSocket] Specifies the `WebSocket` - * class to use. 
It must be the `WebSocket` class or class that extends it - * @param {Function} [callback] A listener for the `listening` event - */ - constructor(options, callback) { - super(); - - options = { - allowSynchronousEvents: true, - autoPong: true, - maxPayload: 100 * 1024 * 1024, - skipUTF8Validation: false, - perMessageDeflate: false, - handleProtocols: null, - clientTracking: true, - closeTimeout: CLOSE_TIMEOUT, - verifyClient: null, - noServer: false, - backlog: null, // use default (511 as implemented in net.js) - server: null, - host: null, - path: null, - port: null, - WebSocket, - ...options - }; - - if ( - (options.port == null && !options.server && !options.noServer) || - (options.port != null && (options.server || options.noServer)) || - (options.server && options.noServer) - ) { - throw new TypeError( - 'One and only one of the "port", "server", or "noServer" options ' + - 'must be specified' - ); - } - - if (options.port != null) { - this._server = http.createServer((req, res) => { - const body = http.STATUS_CODES[426]; - - res.writeHead(426, { - 'Content-Length': body.length, - 'Content-Type': 'text/plain' - }); - res.end(body); - }); - this._server.listen( - options.port, - options.host, - options.backlog, - callback - ); - } else if (options.server) { - this._server = options.server; - } - - if (this._server) { - const emitConnection = this.emit.bind(this, 'connection'); - - this._removeListeners = addListeners(this._server, { - listening: this.emit.bind(this, 'listening'), - error: this.emit.bind(this, 'error'), - upgrade: (req, socket, head) => { - this.handleUpgrade(req, socket, head, emitConnection); - } - }); - } - - if (options.perMessageDeflate === true) options.perMessageDeflate = {}; - if (options.clientTracking) { - this.clients = new Set(); - this._shouldEmitClose = false; - } - - this.options = options; - this._state = RUNNING; - } - - /** - * Returns the bound address, the address family name, and port of the server - * as reported by the 
operating system if listening on an IP socket. - * If the server is listening on a pipe or UNIX domain socket, the name is - * returned as a string. - * - * @return {(Object|String|null)} The address of the server - * @public - */ - address() { - if (this.options.noServer) { - throw new Error('The server is operating in "noServer" mode'); - } - - if (!this._server) return null; - return this._server.address(); - } - - /** - * Stop the server from accepting new connections and emit the `'close'` event - * when all existing connections are closed. - * - * @param {Function} [cb] A one-time listener for the `'close'` event - * @public - */ - close(cb) { - if (this._state === CLOSED) { - if (cb) { - this.once('close', () => { - cb(new Error('The server is not running')); - }); - } - - process.nextTick(emitClose, this); - return; - } - - if (cb) this.once('close', cb); - - if (this._state === CLOSING) return; - this._state = CLOSING; - - if (this.options.noServer || this.options.server) { - if (this._server) { - this._removeListeners(); - this._removeListeners = this._server = null; - } - - if (this.clients) { - if (!this.clients.size) { - process.nextTick(emitClose, this); - } else { - this._shouldEmitClose = true; - } - } else { - process.nextTick(emitClose, this); - } - } else { - const server = this._server; - - this._removeListeners(); - this._removeListeners = this._server = null; - - // - // The HTTP/S server was created internally. Close it, and rely on its - // `'close'` event. - // - server.close(() => { - emitClose(this); - }); - } - } - - /** - * See if a given request should be handled by this server instance. - * - * @param {http.IncomingMessage} req Request object to inspect - * @return {Boolean} `true` if the request is valid, else `false` - * @public - */ - shouldHandle(req) { - if (this.options.path) { - const index = req.url.indexOf('?'); - const pathname = index !== -1 ? 
req.url.slice(0, index) : req.url; - - if (pathname !== this.options.path) return false; - } - - return true; - } - - /** - * Handle a HTTP Upgrade request. - * - * @param {http.IncomingMessage} req The request object - * @param {Duplex} socket The network socket between the server and client - * @param {Buffer} head The first packet of the upgraded stream - * @param {Function} cb Callback - * @public - */ - handleUpgrade(req, socket, head, cb) { - socket.on('error', socketOnError); - - const key = req.headers['sec-websocket-key']; - const upgrade = req.headers.upgrade; - const version = +req.headers['sec-websocket-version']; - - if (req.method !== 'GET') { - const message = 'Invalid HTTP method'; - abortHandshakeOrEmitwsClientError(this, req, socket, 405, message); - return; - } - - if (upgrade === undefined || upgrade.toLowerCase() !== 'websocket') { - const message = 'Invalid Upgrade header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - - if (key === undefined || !keyRegex.test(key)) { - const message = 'Missing or invalid Sec-WebSocket-Key header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - - if (version !== 13 && version !== 8) { - const message = 'Missing or invalid Sec-WebSocket-Version header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message, { - 'Sec-WebSocket-Version': '13, 8' - }); - return; - } - - if (!this.shouldHandle(req)) { - abortHandshake(socket, 400); - return; - } - - const secWebSocketProtocol = req.headers['sec-websocket-protocol']; - let protocols = new Set(); - - if (secWebSocketProtocol !== undefined) { - try { - protocols = subprotocol.parse(secWebSocketProtocol); - } catch (err) { - const message = 'Invalid Sec-WebSocket-Protocol header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - } - - const secWebSocketExtensions = req.headers['sec-websocket-extensions']; - const extensions = {}; - - if ( - 
this.options.perMessageDeflate && - secWebSocketExtensions !== undefined - ) { - const perMessageDeflate = new PerMessageDeflate( - this.options.perMessageDeflate, - true, - this.options.maxPayload - ); - - try { - const offers = extension.parse(secWebSocketExtensions); - - if (offers[PerMessageDeflate.extensionName]) { - perMessageDeflate.accept(offers[PerMessageDeflate.extensionName]); - extensions[PerMessageDeflate.extensionName] = perMessageDeflate; - } - } catch (err) { - const message = - 'Invalid or unacceptable Sec-WebSocket-Extensions header'; - abortHandshakeOrEmitwsClientError(this, req, socket, 400, message); - return; - } - } - - // - // Optionally call external client verification handler. - // - if (this.options.verifyClient) { - const info = { - origin: - req.headers[`${version === 8 ? 'sec-websocket-origin' : 'origin'}`], - secure: !!(req.socket.authorized || req.socket.encrypted), - req - }; - - if (this.options.verifyClient.length === 2) { - this.options.verifyClient(info, (verified, code, message, headers) => { - if (!verified) { - return abortHandshake(socket, code || 401, message, headers); - } - - this.completeUpgrade( - extensions, - key, - protocols, - req, - socket, - head, - cb - ); - }); - return; - } - - if (!this.options.verifyClient(info)) return abortHandshake(socket, 401); - } - - this.completeUpgrade(extensions, key, protocols, req, socket, head, cb); - } - - /** - * Upgrade the connection to WebSocket. 
- * - * @param {Object} extensions The accepted extensions - * @param {String} key The value of the `Sec-WebSocket-Key` header - * @param {Set} protocols The subprotocols - * @param {http.IncomingMessage} req The request object - * @param {Duplex} socket The network socket between the server and client - * @param {Buffer} head The first packet of the upgraded stream - * @param {Function} cb Callback - * @throws {Error} If called more than once with the same socket - * @private - */ - completeUpgrade(extensions, key, protocols, req, socket, head, cb) { - // - // Destroy the socket if the client has already sent a FIN packet. - // - if (!socket.readable || !socket.writable) return socket.destroy(); - - if (socket[kWebSocket]) { - throw new Error( - 'server.handleUpgrade() was called more than once with the same ' + - 'socket, possibly due to a misconfiguration' - ); - } - - if (this._state > RUNNING) return abortHandshake(socket, 503); - - const digest = createHash('sha1') - .update(key + GUID) - .digest('base64'); - - const headers = [ - 'HTTP/1.1 101 Switching Protocols', - 'Upgrade: websocket', - 'Connection: Upgrade', - `Sec-WebSocket-Accept: ${digest}` - ]; - - const ws = new this.options.WebSocket(null, undefined, this.options); - - if (protocols.size) { - // - // Optionally call external protocol selection handler. - // - const protocol = this.options.handleProtocols - ? this.options.handleProtocols(protocols, req) - : protocols.values().next().value; - - if (protocol) { - headers.push(`Sec-WebSocket-Protocol: ${protocol}`); - ws._protocol = protocol; - } - } - - if (extensions[PerMessageDeflate.extensionName]) { - const params = extensions[PerMessageDeflate.extensionName].params; - const value = extension.format({ - [PerMessageDeflate.extensionName]: [params] - }); - headers.push(`Sec-WebSocket-Extensions: ${value}`); - ws._extensions = extensions; - } - - // - // Allow external modification/inspection of handshake headers. 
- // - this.emit('headers', headers, req); - - socket.write(headers.concat('\r\n').join('\r\n')); - socket.removeListener('error', socketOnError); - - ws.setSocket(socket, head, { - allowSynchronousEvents: this.options.allowSynchronousEvents, - maxPayload: this.options.maxPayload, - skipUTF8Validation: this.options.skipUTF8Validation - }); - - if (this.clients) { - this.clients.add(ws); - ws.on('close', () => { - this.clients.delete(ws); - - if (this._shouldEmitClose && !this.clients.size) { - process.nextTick(emitClose, this); - } - }); - } - - cb(ws, req); - } -} - -module.exports = WebSocketServer; - -/** - * Add event listeners on an `EventEmitter` using a map of <event, listener> - * pairs. - * - * @param {EventEmitter} server The event emitter - * @param {Object.<String, Function>} map The listeners to add - * @return {Function} A function that will remove the added listeners when - * called - * @private - */ -function addListeners(server, map) { - for (const event of Object.keys(map)) server.on(event, map[event]); - - return function removeListeners() { - for (const event of Object.keys(map)) { - server.removeListener(event, map[event]); - } - }; -} - -/** - * Emit a `'close'` event on an `EventEmitter`. - * - * @param {EventEmitter} server The event emitter - * @private - */ -function emitClose(server) { - server._state = CLOSED; - server.emit('close'); -} - -/** - * Handle socket errors. - * - * @private - */ -function socketOnError() { - this.destroy(); -} - -/** - * Close the connection when preconditions are not fulfilled. 
- * - * @param {Duplex} socket The socket of the upgrade request - * @param {Number} code The HTTP response status code - * @param {String} [message] The HTTP response body - * @param {Object} [headers] Additional HTTP response headers - * @private - */ -function abortHandshake(socket, code, message, headers) { - // - // The socket is writable unless the user destroyed or ended it before calling - // `server.handleUpgrade()` or in the `verifyClient` function, which is a user - // error. Handling this does not make much sense as the worst that can happen - // is that some of the data written by the user might be discarded due to the - // call to `socket.end()` below, which triggers an `'error'` event that in - // turn causes the socket to be destroyed. - // - message = message || http.STATUS_CODES[code]; - headers = { - Connection: 'close', - 'Content-Type': 'text/html', - 'Content-Length': Buffer.byteLength(message), - ...headers - }; - - socket.once('finish', socket.destroy); - - socket.end( - `HTTP/1.1 ${code} ${http.STATUS_CODES[code]}\r\n` + - Object.keys(headers) - .map((h) => `${h}: ${headers[h]}`) - .join('\r\n') + - '\r\n\r\n' + - message - ); -} - -/** - * Emit a `'wsClientError'` event on a `WebSocketServer` if there is at least - * one listener for it, otherwise call `abortHandshake()`. 
- * - * @param {WebSocketServer} server The WebSocket server - * @param {http.IncomingMessage} req The request object - * @param {Duplex} socket The socket of the upgrade request - * @param {Number} code The HTTP response status code - * @param {String} message The HTTP response body - * @param {Object} [headers] The HTTP response headers - * @private - */ -function abortHandshakeOrEmitwsClientError( - server, - req, - socket, - code, - message, - headers -) { - if (server.listenerCount('wsClientError')) { - const err = new Error(message); - Error.captureStackTrace(err, abortHandshakeOrEmitwsClientError); - - server.emit('wsClientError', err, socket, req); - } else { - abortHandshake(socket, code, message, headers); - } -} diff --git a/node_modules/ws/lib/websocket.js b/node_modules/ws/lib/websocket.js deleted file mode 100644 index 0da2949..0000000 --- a/node_modules/ws/lib/websocket.js +++ /dev/null @@ -1,1393 +0,0 @@ -/* eslint no-unused-vars: ["error", { "varsIgnorePattern": "^Duplex|Readable$", "caughtErrors": "none" }] */ - -'use strict'; - -const EventEmitter = require('events'); -const https = require('https'); -const http = require('http'); -const net = require('net'); -const tls = require('tls'); -const { randomBytes, createHash } = require('crypto'); -const { Duplex, Readable } = require('stream'); -const { URL } = require('url'); - -const PerMessageDeflate = require('./permessage-deflate'); -const Receiver = require('./receiver'); -const Sender = require('./sender'); -const { isBlob } = require('./validation'); - -const { - BINARY_TYPES, - CLOSE_TIMEOUT, - EMPTY_BUFFER, - GUID, - kForOnEventAttribute, - kListener, - kStatusCode, - kWebSocket, - NOOP -} = require('./constants'); -const { - EventTarget: { addEventListener, removeEventListener } -} = require('./event-target'); -const { format, parse } = require('./extension'); -const { toBuffer } = require('./buffer-util'); - -const kAborted = Symbol('kAborted'); -const protocolVersions = [8, 13]; -const 
readyStates = ['CONNECTING', 'OPEN', 'CLOSING', 'CLOSED']; -const subprotocolRegex = /^[!#$%&'*+\-.0-9A-Z^_`|a-z~]+$/; - -/** - * Class representing a WebSocket. - * - * @extends EventEmitter - */ -class WebSocket extends EventEmitter { - /** - * Create a new `WebSocket`. - * - * @param {(String|URL)} address The URL to which to connect - * @param {(String|String[])} [protocols] The subprotocols - * @param {Object} [options] Connection options - */ - constructor(address, protocols, options) { - super(); - - this._binaryType = BINARY_TYPES[0]; - this._closeCode = 1006; - this._closeFrameReceived = false; - this._closeFrameSent = false; - this._closeMessage = EMPTY_BUFFER; - this._closeTimer = null; - this._errorEmitted = false; - this._extensions = {}; - this._paused = false; - this._protocol = ''; - this._readyState = WebSocket.CONNECTING; - this._receiver = null; - this._sender = null; - this._socket = null; - - if (address !== null) { - this._bufferedAmount = 0; - this._isServer = false; - this._redirects = 0; - - if (protocols === undefined) { - protocols = []; - } else if (!Array.isArray(protocols)) { - if (typeof protocols === 'object' && protocols !== null) { - options = protocols; - protocols = []; - } else { - protocols = [protocols]; - } - } - - initAsClient(this, address, protocols, options); - } else { - this._autoPong = options.autoPong; - this._closeTimeout = options.closeTimeout; - this._isServer = true; - } - } - - /** - * For historical reasons, the custom "nodebuffer" type is used by the default - * instead of "blob". - * - * @type {String} - */ - get binaryType() { - return this._binaryType; - } - - set binaryType(type) { - if (!BINARY_TYPES.includes(type)) return; - - this._binaryType = type; - - // - // Allow to change `binaryType` on the fly. 
- // - if (this._receiver) this._receiver._binaryType = type; - } - - /** - * @type {Number} - */ - get bufferedAmount() { - if (!this._socket) return this._bufferedAmount; - - return this._socket._writableState.length + this._sender._bufferedBytes; - } - - /** - * @type {String} - */ - get extensions() { - return Object.keys(this._extensions).join(); - } - - /** - * @type {Boolean} - */ - get isPaused() { - return this._paused; - } - - /** - * @type {Function} - */ - /* istanbul ignore next */ - get onclose() { - return null; - } - - /** - * @type {Function} - */ - /* istanbul ignore next */ - get onerror() { - return null; - } - - /** - * @type {Function} - */ - /* istanbul ignore next */ - get onopen() { - return null; - } - - /** - * @type {Function} - */ - /* istanbul ignore next */ - get onmessage() { - return null; - } - - /** - * @type {String} - */ - get protocol() { - return this._protocol; - } - - /** - * @type {Number} - */ - get readyState() { - return this._readyState; - } - - /** - * @type {String} - */ - get url() { - return this._url; - } - - /** - * Set up the socket and the internal resources. 
- * - * @param {Duplex} socket The network socket between the server and client - * @param {Buffer} head The first packet of the upgraded stream - * @param {Object} options Options object - * @param {Boolean} [options.allowSynchronousEvents=false] Specifies whether - * any of the `'message'`, `'ping'`, and `'pong'` events can be emitted - * multiple times in the same tick - * @param {Function} [options.generateMask] The function used to generate the - * masking key - * @param {Number} [options.maxPayload=0] The maximum allowed message size - * @param {Boolean} [options.skipUTF8Validation=false] Specifies whether or - * not to skip UTF-8 validation for text and close messages - * @private - */ - setSocket(socket, head, options) { - const receiver = new Receiver({ - allowSynchronousEvents: options.allowSynchronousEvents, - binaryType: this.binaryType, - extensions: this._extensions, - isServer: this._isServer, - maxPayload: options.maxPayload, - skipUTF8Validation: options.skipUTF8Validation - }); - - const sender = new Sender(socket, this._extensions, options.generateMask); - - this._receiver = receiver; - this._sender = sender; - this._socket = socket; - - receiver[kWebSocket] = this; - sender[kWebSocket] = this; - socket[kWebSocket] = this; - - receiver.on('conclude', receiverOnConclude); - receiver.on('drain', receiverOnDrain); - receiver.on('error', receiverOnError); - receiver.on('message', receiverOnMessage); - receiver.on('ping', receiverOnPing); - receiver.on('pong', receiverOnPong); - - sender.onerror = senderOnError; - - // - // These methods may not be available if `socket` is just a `Duplex`. 
- // - if (socket.setTimeout) socket.setTimeout(0); - if (socket.setNoDelay) socket.setNoDelay(); - - if (head.length > 0) socket.unshift(head); - - socket.on('close', socketOnClose); - socket.on('data', socketOnData); - socket.on('end', socketOnEnd); - socket.on('error', socketOnError); - - this._readyState = WebSocket.OPEN; - this.emit('open'); - } - - /** - * Emit the `'close'` event. - * - * @private - */ - emitClose() { - if (!this._socket) { - this._readyState = WebSocket.CLOSED; - this.emit('close', this._closeCode, this._closeMessage); - return; - } - - if (this._extensions[PerMessageDeflate.extensionName]) { - this._extensions[PerMessageDeflate.extensionName].cleanup(); - } - - this._receiver.removeAllListeners(); - this._readyState = WebSocket.CLOSED; - this.emit('close', this._closeCode, this._closeMessage); - } - - /** - * Start a closing handshake. - * - * +----------+ +-----------+ +----------+ - * - - -|ws.close()|-->|close frame|-->|ws.close()|- - - - * | +----------+ +-----------+ +----------+ | - * +----------+ +-----------+ | - * CLOSING |ws.close()|<--|close frame|<--+-----+ CLOSING - * +----------+ +-----------+ | - * | | | +---+ | - * +------------------------+-->|fin| - - - - - * | +---+ | +---+ - * - - - - -|fin|<---------------------+ - * +---+ - * - * @param {Number} [code] Status code explaining why the connection is closing - * @param {(String|Buffer)} [data] The reason why the connection is - * closing - * @public - */ - close(code, data) { - if (this.readyState === WebSocket.CLOSED) return; - if (this.readyState === WebSocket.CONNECTING) { - const msg = 'WebSocket was closed before the connection was established'; - abortHandshake(this, this._req, msg); - return; - } - - if (this.readyState === WebSocket.CLOSING) { - if ( - this._closeFrameSent && - (this._closeFrameReceived || this._receiver._writableState.errorEmitted) - ) { - this._socket.end(); - } - - return; - } - - this._readyState = WebSocket.CLOSING; - this._sender.close(code, 
data, !this._isServer, (err) => { - // - // This error is handled by the `'error'` listener on the socket. We only - // want to know if the close frame has been sent here. - // - if (err) return; - - this._closeFrameSent = true; - - if ( - this._closeFrameReceived || - this._receiver._writableState.errorEmitted - ) { - this._socket.end(); - } - }); - - setCloseTimer(this); - } - - /** - * Pause the socket. - * - * @public - */ - pause() { - if ( - this.readyState === WebSocket.CONNECTING || - this.readyState === WebSocket.CLOSED - ) { - return; - } - - this._paused = true; - this._socket.pause(); - } - - /** - * Send a ping. - * - * @param {*} [data] The data to send - * @param {Boolean} [mask] Indicates whether or not to mask `data` - * @param {Function} [cb] Callback which is executed when the ping is sent - * @public - */ - ping(data, mask, cb) { - if (this.readyState === WebSocket.CONNECTING) { - throw new Error('WebSocket is not open: readyState 0 (CONNECTING)'); - } - - if (typeof data === 'function') { - cb = data; - data = mask = undefined; - } else if (typeof mask === 'function') { - cb = mask; - mask = undefined; - } - - if (typeof data === 'number') data = data.toString(); - - if (this.readyState !== WebSocket.OPEN) { - sendAfterClose(this, data, cb); - return; - } - - if (mask === undefined) mask = !this._isServer; - this._sender.ping(data || EMPTY_BUFFER, mask, cb); - } - - /** - * Send a pong. 
- * - * @param {*} [data] The data to send - * @param {Boolean} [mask] Indicates whether or not to mask `data` - * @param {Function} [cb] Callback which is executed when the pong is sent - * @public - */ - pong(data, mask, cb) { - if (this.readyState === WebSocket.CONNECTING) { - throw new Error('WebSocket is not open: readyState 0 (CONNECTING)'); - } - - if (typeof data === 'function') { - cb = data; - data = mask = undefined; - } else if (typeof mask === 'function') { - cb = mask; - mask = undefined; - } - - if (typeof data === 'number') data = data.toString(); - - if (this.readyState !== WebSocket.OPEN) { - sendAfterClose(this, data, cb); - return; - } - - if (mask === undefined) mask = !this._isServer; - this._sender.pong(data || EMPTY_BUFFER, mask, cb); - } - - /** - * Resume the socket. - * - * @public - */ - resume() { - if ( - this.readyState === WebSocket.CONNECTING || - this.readyState === WebSocket.CLOSED - ) { - return; - } - - this._paused = false; - if (!this._receiver._writableState.needDrain) this._socket.resume(); - } - - /** - * Send a data message. 
- * - * @param {*} data The message to send - * @param {Object} [options] Options object - * @param {Boolean} [options.binary] Specifies whether `data` is binary or - * text - * @param {Boolean} [options.compress] Specifies whether or not to compress - * `data` - * @param {Boolean} [options.fin=true] Specifies whether the fragment is the - * last one - * @param {Boolean} [options.mask] Specifies whether or not to mask `data` - * @param {Function} [cb] Callback which is executed when data is written out - * @public - */ - send(data, options, cb) { - if (this.readyState === WebSocket.CONNECTING) { - throw new Error('WebSocket is not open: readyState 0 (CONNECTING)'); - } - - if (typeof options === 'function') { - cb = options; - options = {}; - } - - if (typeof data === 'number') data = data.toString(); - - if (this.readyState !== WebSocket.OPEN) { - sendAfterClose(this, data, cb); - return; - } - - const opts = { - binary: typeof data !== 'string', - mask: !this._isServer, - compress: true, - fin: true, - ...options - }; - - if (!this._extensions[PerMessageDeflate.extensionName]) { - opts.compress = false; - } - - this._sender.send(data || EMPTY_BUFFER, opts, cb); - } - - /** - * Forcibly close the connection. 
- * - * @public - */ - terminate() { - if (this.readyState === WebSocket.CLOSED) return; - if (this.readyState === WebSocket.CONNECTING) { - const msg = 'WebSocket was closed before the connection was established'; - abortHandshake(this, this._req, msg); - return; - } - - if (this._socket) { - this._readyState = WebSocket.CLOSING; - this._socket.destroy(); - } - } -} - -/** - * @constant {Number} CONNECTING - * @memberof WebSocket - */ -Object.defineProperty(WebSocket, 'CONNECTING', { - enumerable: true, - value: readyStates.indexOf('CONNECTING') -}); - -/** - * @constant {Number} CONNECTING - * @memberof WebSocket.prototype - */ -Object.defineProperty(WebSocket.prototype, 'CONNECTING', { - enumerable: true, - value: readyStates.indexOf('CONNECTING') -}); - -/** - * @constant {Number} OPEN - * @memberof WebSocket - */ -Object.defineProperty(WebSocket, 'OPEN', { - enumerable: true, - value: readyStates.indexOf('OPEN') -}); - -/** - * @constant {Number} OPEN - * @memberof WebSocket.prototype - */ -Object.defineProperty(WebSocket.prototype, 'OPEN', { - enumerable: true, - value: readyStates.indexOf('OPEN') -}); - -/** - * @constant {Number} CLOSING - * @memberof WebSocket - */ -Object.defineProperty(WebSocket, 'CLOSING', { - enumerable: true, - value: readyStates.indexOf('CLOSING') -}); - -/** - * @constant {Number} CLOSING - * @memberof WebSocket.prototype - */ -Object.defineProperty(WebSocket.prototype, 'CLOSING', { - enumerable: true, - value: readyStates.indexOf('CLOSING') -}); - -/** - * @constant {Number} CLOSED - * @memberof WebSocket - */ -Object.defineProperty(WebSocket, 'CLOSED', { - enumerable: true, - value: readyStates.indexOf('CLOSED') -}); - -/** - * @constant {Number} CLOSED - * @memberof WebSocket.prototype - */ -Object.defineProperty(WebSocket.prototype, 'CLOSED', { - enumerable: true, - value: readyStates.indexOf('CLOSED') -}); - -[ - 'binaryType', - 'bufferedAmount', - 'extensions', - 'isPaused', - 'protocol', - 'readyState', - 'url' 
-].forEach((property) => { - Object.defineProperty(WebSocket.prototype, property, { enumerable: true }); -}); - -// -// Add the `onopen`, `onerror`, `onclose`, and `onmessage` attributes. -// See https://html.spec.whatwg.org/multipage/comms.html#the-websocket-interface -// -['open', 'error', 'close', 'message'].forEach((method) => { - Object.defineProperty(WebSocket.prototype, `on${method}`, { - enumerable: true, - get() { - for (const listener of this.listeners(method)) { - if (listener[kForOnEventAttribute]) return listener[kListener]; - } - - return null; - }, - set(handler) { - for (const listener of this.listeners(method)) { - if (listener[kForOnEventAttribute]) { - this.removeListener(method, listener); - break; - } - } - - if (typeof handler !== 'function') return; - - this.addEventListener(method, handler, { - [kForOnEventAttribute]: true - }); - } - }); -}); - -WebSocket.prototype.addEventListener = addEventListener; -WebSocket.prototype.removeEventListener = removeEventListener; - -module.exports = WebSocket; - -/** - * Initialize a WebSocket client. 
- * - * @param {WebSocket} websocket The client to initialize - * @param {(String|URL)} address The URL to which to connect - * @param {Array} protocols The subprotocols - * @param {Object} [options] Connection options - * @param {Boolean} [options.allowSynchronousEvents=true] Specifies whether any - * of the `'message'`, `'ping'`, and `'pong'` events can be emitted multiple - * times in the same tick - * @param {Boolean} [options.autoPong=true] Specifies whether or not to - * automatically send a pong in response to a ping - * @param {Number} [options.closeTimeout=30000] Duration in milliseconds to wait - * for the closing handshake to finish after `websocket.close()` is called - * @param {Function} [options.finishRequest] A function which can be used to - * customize the headers of each http request before it is sent - * @param {Boolean} [options.followRedirects=false] Whether or not to follow - * redirects - * @param {Function} [options.generateMask] The function used to generate the - * masking key - * @param {Number} [options.handshakeTimeout] Timeout in milliseconds for the - * handshake request - * @param {Number} [options.maxPayload=104857600] The maximum allowed message - * size - * @param {Number} [options.maxRedirects=10] The maximum number of redirects - * allowed - * @param {String} [options.origin] Value of the `Origin` or - * `Sec-WebSocket-Origin` header - * @param {(Boolean|Object)} [options.perMessageDeflate=true] Enable/disable - * permessage-deflate - * @param {Number} [options.protocolVersion=13] Value of the - * `Sec-WebSocket-Version` header - * @param {Boolean} [options.skipUTF8Validation=false] Specifies whether or - * not to skip UTF-8 validation for text and close messages - * @private - */ -function initAsClient(websocket, address, protocols, options) { - const opts = { - allowSynchronousEvents: true, - autoPong: true, - closeTimeout: CLOSE_TIMEOUT, - protocolVersion: protocolVersions[1], - maxPayload: 100 * 1024 * 1024, - 
skipUTF8Validation: false, - perMessageDeflate: true, - followRedirects: false, - maxRedirects: 10, - ...options, - socketPath: undefined, - hostname: undefined, - protocol: undefined, - timeout: undefined, - method: 'GET', - host: undefined, - path: undefined, - port: undefined - }; - - websocket._autoPong = opts.autoPong; - websocket._closeTimeout = opts.closeTimeout; - - if (!protocolVersions.includes(opts.protocolVersion)) { - throw new RangeError( - `Unsupported protocol version: ${opts.protocolVersion} ` + - `(supported versions: ${protocolVersions.join(', ')})` - ); - } - - let parsedUrl; - - if (address instanceof URL) { - parsedUrl = address; - } else { - try { - parsedUrl = new URL(address); - } catch (e) { - throw new SyntaxError(`Invalid URL: ${address}`); - } - } - - if (parsedUrl.protocol === 'http:') { - parsedUrl.protocol = 'ws:'; - } else if (parsedUrl.protocol === 'https:') { - parsedUrl.protocol = 'wss:'; - } - - websocket._url = parsedUrl.href; - - const isSecure = parsedUrl.protocol === 'wss:'; - const isIpcUrl = parsedUrl.protocol === 'ws+unix:'; - let invalidUrlMessage; - - if (parsedUrl.protocol !== 'ws:' && !isSecure && !isIpcUrl) { - invalidUrlMessage = - 'The URL\'s protocol must be one of "ws:", "wss:", ' + - '"http:", "https:", or "ws+unix:"'; - } else if (isIpcUrl && !parsedUrl.pathname) { - invalidUrlMessage = "The URL's pathname is empty"; - } else if (parsedUrl.hash) { - invalidUrlMessage = 'The URL contains a fragment identifier'; - } - - if (invalidUrlMessage) { - const err = new SyntaxError(invalidUrlMessage); - - if (websocket._redirects === 0) { - throw err; - } else { - emitErrorAndClose(websocket, err); - return; - } - } - - const defaultPort = isSecure ? 443 : 80; - const key = randomBytes(16).toString('base64'); - const request = isSecure ? https.request : http.request; - const protocolSet = new Set(); - let perMessageDeflate; - - opts.createConnection = - opts.createConnection || (isSecure ? 
-      tlsConnect : netConnect);
-  opts.defaultPort = opts.defaultPort || defaultPort;
-  opts.port = parsedUrl.port || defaultPort;
-  opts.host = parsedUrl.hostname.startsWith('[')
-    ? parsedUrl.hostname.slice(1, -1)
-    : parsedUrl.hostname;
-  opts.headers = {
-    ...opts.headers,
-    'Sec-WebSocket-Version': opts.protocolVersion,
-    'Sec-WebSocket-Key': key,
-    Connection: 'Upgrade',
-    Upgrade: 'websocket'
-  };
-  opts.path = parsedUrl.pathname + parsedUrl.search;
-  opts.timeout = opts.handshakeTimeout;
-
-  if (opts.perMessageDeflate) {
-    perMessageDeflate = new PerMessageDeflate(
-      opts.perMessageDeflate !== true ? opts.perMessageDeflate : {},
-      false,
-      opts.maxPayload
-    );
-    opts.headers['Sec-WebSocket-Extensions'] = format({
-      [PerMessageDeflate.extensionName]: perMessageDeflate.offer()
-    });
-  }
-  if (protocols.length) {
-    for (const protocol of protocols) {
-      if (
-        typeof protocol !== 'string' ||
-        !subprotocolRegex.test(protocol) ||
-        protocolSet.has(protocol)
-      ) {
-        throw new SyntaxError(
-          'An invalid or duplicated subprotocol was specified'
-        );
-      }
-
-      protocolSet.add(protocol);
-    }
-
-    opts.headers['Sec-WebSocket-Protocol'] = protocols.join(',');
-  }
-  if (opts.origin) {
-    if (opts.protocolVersion < 13) {
-      opts.headers['Sec-WebSocket-Origin'] = opts.origin;
-    } else {
-      opts.headers.Origin = opts.origin;
-    }
-  }
-  if (parsedUrl.username || parsedUrl.password) {
-    opts.auth = `${parsedUrl.username}:${parsedUrl.password}`;
-  }
-
-  if (isIpcUrl) {
-    const parts = opts.path.split(':');
-
-    opts.socketPath = parts[0];
-    opts.path = parts[1];
-  }
-
-  let req;
-
-  if (opts.followRedirects) {
-    if (websocket._redirects === 0) {
-      websocket._originalIpc = isIpcUrl;
-      websocket._originalSecure = isSecure;
-      websocket._originalHostOrSocketPath = isIpcUrl
-        ? opts.socketPath
-        : parsedUrl.host;
-
-      const headers = options && options.headers;
-
-      //
-      // Shallow copy the user provided options so that headers can be changed
-      // without mutating the original object.
-      //
-      options = { ...options, headers: {} };
-
-      if (headers) {
-        for (const [key, value] of Object.entries(headers)) {
-          options.headers[key.toLowerCase()] = value;
-        }
-      }
-    } else if (websocket.listenerCount('redirect') === 0) {
-      const isSameHost = isIpcUrl
-        ? websocket._originalIpc
-          ? opts.socketPath === websocket._originalHostOrSocketPath
-          : false
-        : websocket._originalIpc
-          ? false
-          : parsedUrl.host === websocket._originalHostOrSocketPath;
-
-      if (!isSameHost || (websocket._originalSecure && !isSecure)) {
-        //
-        // Match curl 7.77.0 behavior and drop the following headers. These
-        // headers are also dropped when following a redirect to a subdomain.
-        //
-        delete opts.headers.authorization;
-        delete opts.headers.cookie;
-
-        if (!isSameHost) delete opts.headers.host;
-
-        opts.auth = undefined;
-      }
-    }
-
-    //
-    // Match curl 7.77.0 behavior and make the first `Authorization` header win.
-    // If the `Authorization` header is set, then there is nothing to do as it
-    // will take precedence.
-    //
-    if (opts.auth && !options.headers.authorization) {
-      options.headers.authorization =
-        'Basic ' + Buffer.from(opts.auth).toString('base64');
-    }
-
-    req = websocket._req = request(opts);
-
-    if (websocket._redirects) {
-      //
-      // Unlike what is done for the `'upgrade'` event, no early exit is
-      // triggered here if the user calls `websocket.close()` or
-      // `websocket.terminate()` from a listener of the `'redirect'` event. This
-      // is because the user can also call `request.destroy()` with an error
-      // before calling `websocket.close()` or `websocket.terminate()` and this
-      // would result in an error being emitted on the `request` object with no
-      // `'error'` event listeners attached.
-      //
-      websocket.emit('redirect', websocket.url, req);
-    }
-  } else {
-    req = websocket._req = request(opts);
-  }
-
-  if (opts.timeout) {
-    req.on('timeout', () => {
-      abortHandshake(websocket, req, 'Opening handshake has timed out');
-    });
-  }
-
-  req.on('error', (err) => {
-    if (req === null || req[kAborted]) return;
-
-    req = websocket._req = null;
-    emitErrorAndClose(websocket, err);
-  });
-
-  req.on('response', (res) => {
-    const location = res.headers.location;
-    const statusCode = res.statusCode;
-
-    if (
-      location &&
-      opts.followRedirects &&
-      statusCode >= 300 &&
-      statusCode < 400
-    ) {
-      if (++websocket._redirects > opts.maxRedirects) {
-        abortHandshake(websocket, req, 'Maximum redirects exceeded');
-        return;
-      }
-
-      req.abort();
-
-      let addr;
-
-      try {
-        addr = new URL(location, address);
-      } catch (e) {
-        const err = new SyntaxError(`Invalid URL: ${location}`);
-        emitErrorAndClose(websocket, err);
-        return;
-      }
-
-      initAsClient(websocket, addr, protocols, options);
-    } else if (!websocket.emit('unexpected-response', req, res)) {
-      abortHandshake(
-        websocket,
-        req,
-        `Unexpected server response: ${res.statusCode}`
-      );
-    }
-  });
-
-  req.on('upgrade', (res, socket, head) => {
-    websocket.emit('upgrade', res);
-
-    //
-    // The user may have closed the connection from a listener of the
-    // `'upgrade'` event.
-    //
-    if (websocket.readyState !== WebSocket.CONNECTING) return;
-
-    req = websocket._req = null;
-
-    const upgrade = res.headers.upgrade;
-
-    if (upgrade === undefined || upgrade.toLowerCase() !== 'websocket') {
-      abortHandshake(websocket, socket, 'Invalid Upgrade header');
-      return;
-    }
-
-    const digest = createHash('sha1')
-      .update(key + GUID)
-      .digest('base64');
-
-    if (res.headers['sec-websocket-accept'] !== digest) {
-      abortHandshake(websocket, socket, 'Invalid Sec-WebSocket-Accept header');
-      return;
-    }
-
-    const serverProt = res.headers['sec-websocket-protocol'];
-    let protError;
-
-    if (serverProt !== undefined) {
-      if (!protocolSet.size) {
-        protError = 'Server sent a subprotocol but none was requested';
-      } else if (!protocolSet.has(serverProt)) {
-        protError = 'Server sent an invalid subprotocol';
-      }
-    } else if (protocolSet.size) {
-      protError = 'Server sent no subprotocol';
-    }
-
-    if (protError) {
-      abortHandshake(websocket, socket, protError);
-      return;
-    }
-
-    if (serverProt) websocket._protocol = serverProt;
-
-    const secWebSocketExtensions = res.headers['sec-websocket-extensions'];
-
-    if (secWebSocketExtensions !== undefined) {
-      if (!perMessageDeflate) {
-        const message =
-          'Server sent a Sec-WebSocket-Extensions header but no extension ' +
-          'was requested';
-        abortHandshake(websocket, socket, message);
-        return;
-      }
-
-      let extensions;
-
-      try {
-        extensions = parse(secWebSocketExtensions);
-      } catch (err) {
-        const message = 'Invalid Sec-WebSocket-Extensions header';
-        abortHandshake(websocket, socket, message);
-        return;
-      }
-
-      const extensionNames = Object.keys(extensions);
-
-      if (
-        extensionNames.length !== 1 ||
-        extensionNames[0] !== PerMessageDeflate.extensionName
-      ) {
-        const message = 'Server indicated an extension that was not requested';
-        abortHandshake(websocket, socket, message);
-        return;
-      }
-
-      try {
-        perMessageDeflate.accept(extensions[PerMessageDeflate.extensionName]);
-      } catch (err) {
-        const message = 'Invalid Sec-WebSocket-Extensions header';
-        abortHandshake(websocket, socket, message);
-        return;
-      }
-
-      websocket._extensions[PerMessageDeflate.extensionName] =
-        perMessageDeflate;
-    }
-
-    websocket.setSocket(socket, head, {
-      allowSynchronousEvents: opts.allowSynchronousEvents,
-      generateMask: opts.generateMask,
-      maxPayload: opts.maxPayload,
-      skipUTF8Validation: opts.skipUTF8Validation
-    });
-  });
-
-  if (opts.finishRequest) {
-    opts.finishRequest(req, websocket);
-  } else {
-    req.end();
-  }
-}
-
-/**
- * Emit the `'error'` and `'close'` events.
- *
- * @param {WebSocket} websocket The WebSocket instance
- * @param {Error} err The error to emit
- * @private
- */
-function emitErrorAndClose(websocket, err) {
-  websocket._readyState = WebSocket.CLOSING;
-  //
-  // The following assignment is practically useless and is done only for
-  // consistency.
-  //
-  websocket._errorEmitted = true;
-  websocket.emit('error', err);
-  websocket.emitClose();
-}
-
-/**
- * Create a `net.Socket` and initiate a connection.
- *
- * @param {Object} options Connection options
- * @return {net.Socket} The newly created socket used to start the connection
- * @private
- */
-function netConnect(options) {
-  options.path = options.socketPath;
-  return net.connect(options);
-}
-
-/**
- * Create a `tls.TLSSocket` and initiate a connection.
- *
- * @param {Object} options Connection options
- * @return {tls.TLSSocket} The newly created socket used to start the connection
- * @private
- */
-function tlsConnect(options) {
-  options.path = undefined;
-
-  if (!options.servername && options.servername !== '') {
-    options.servername = net.isIP(options.host) ? '' : options.host;
-  }
-
-  return tls.connect(options);
-}
-
-/**
- * Abort the handshake and emit an error.
- *
- * @param {WebSocket} websocket The WebSocket instance
- * @param {(http.ClientRequest|net.Socket|tls.Socket)} stream The request to
- *     abort or the socket to destroy
- * @param {String} message The error message
- * @private
- */
-function abortHandshake(websocket, stream, message) {
-  websocket._readyState = WebSocket.CLOSING;
-
-  const err = new Error(message);
-  Error.captureStackTrace(err, abortHandshake);
-
-  if (stream.setHeader) {
-    stream[kAborted] = true;
-    stream.abort();
-
-    if (stream.socket && !stream.socket.destroyed) {
-      //
-      // On Node.js >= 14.3.0 `request.abort()` does not destroy the socket if
-      // called after the request completed. See
-      // https://github.com/websockets/ws/issues/1869.
-      //
-      stream.socket.destroy();
-    }
-
-    process.nextTick(emitErrorAndClose, websocket, err);
-  } else {
-    stream.destroy(err);
-    stream.once('error', websocket.emit.bind(websocket, 'error'));
-    stream.once('close', websocket.emitClose.bind(websocket));
-  }
-}
-
-/**
- * Handle cases where the `ping()`, `pong()`, or `send()` methods are called
- * when the `readyState` attribute is `CLOSING` or `CLOSED`.
- *
- * @param {WebSocket} websocket The WebSocket instance
- * @param {*} [data] The data to send
- * @param {Function} [cb] Callback
- * @private
- */
-function sendAfterClose(websocket, data, cb) {
-  if (data) {
-    const length = isBlob(data) ? data.size : toBuffer(data).length;
-
-    //
-    // The `_bufferedAmount` property is used only when the peer is a client and
-    // the opening handshake fails. Under these circumstances, in fact, the
-    // `setSocket()` method is not called, so the `_socket` and `_sender`
-    // properties are set to `null`.
-    //
-    if (websocket._socket) websocket._sender._bufferedBytes += length;
-    else websocket._bufferedAmount += length;
-  }
-
-  if (cb) {
-    const err = new Error(
-      `WebSocket is not open: readyState ${websocket.readyState} ` +
-        `(${readyStates[websocket.readyState]})`
-    );
-    process.nextTick(cb, err);
-  }
-}
-
-/**
- * The listener of the `Receiver` `'conclude'` event.
- *
- * @param {Number} code The status code
- * @param {Buffer} reason The reason for closing
- * @private
- */
-function receiverOnConclude(code, reason) {
-  const websocket = this[kWebSocket];
-
-  websocket._closeFrameReceived = true;
-  websocket._closeMessage = reason;
-  websocket._closeCode = code;
-
-  if (websocket._socket[kWebSocket] === undefined) return;
-
-  websocket._socket.removeListener('data', socketOnData);
-  process.nextTick(resume, websocket._socket);
-
-  if (code === 1005) websocket.close();
-  else websocket.close(code, reason);
-}
-
-/**
- * The listener of the `Receiver` `'drain'` event.
- *
- * @private
- */
-function receiverOnDrain() {
-  const websocket = this[kWebSocket];
-
-  if (!websocket.isPaused) websocket._socket.resume();
-}
-
-/**
- * The listener of the `Receiver` `'error'` event.
- *
- * @param {(RangeError|Error)} err The emitted error
- * @private
- */
-function receiverOnError(err) {
-  const websocket = this[kWebSocket];
-
-  if (websocket._socket[kWebSocket] !== undefined) {
-    websocket._socket.removeListener('data', socketOnData);
-
-    //
-    // On Node.js < 14.0.0 the `'error'` event is emitted synchronously. See
-    // https://github.com/websockets/ws/issues/1940.
-    //
-    process.nextTick(resume, websocket._socket);
-
-    websocket.close(err[kStatusCode]);
-  }
-
-  if (!websocket._errorEmitted) {
-    websocket._errorEmitted = true;
-    websocket.emit('error', err);
-  }
-}
-
-/**
- * The listener of the `Receiver` `'finish'` event.
- *
- * @private
- */
-function receiverOnFinish() {
-  this[kWebSocket].emitClose();
-}
-
-/**
- * The listener of the `Receiver` `'message'` event.
- *
- * @param {(Buffer|ArrayBuffer|Buffer[])} data The message
- * @param {Boolean} isBinary Specifies whether the message is binary or not
- * @private
- */
-function receiverOnMessage(data, isBinary) {
-  this[kWebSocket].emit('message', data, isBinary);
-}
-
-/**
- * The listener of the `Receiver` `'ping'` event.
- *
- * @param {Buffer} data The data included in the ping frame
- * @private
- */
-function receiverOnPing(data) {
-  const websocket = this[kWebSocket];
-
-  if (websocket._autoPong) websocket.pong(data, !this._isServer, NOOP);
-  websocket.emit('ping', data);
-}
-
-/**
- * The listener of the `Receiver` `'pong'` event.
- *
- * @param {Buffer} data The data included in the pong frame
- * @private
- */
-function receiverOnPong(data) {
-  this[kWebSocket].emit('pong', data);
-}
-
-/**
- * Resume a readable stream.
- *
- * @param {Readable} stream The readable stream
- * @private
- */
-function resume(stream) {
-  stream.resume();
-}
-
-/**
- * The `Sender` error event handler.
- *
- * @param {Error} err The error
- * @private
- */
-function senderOnError(err) {
-  const websocket = this[kWebSocket];
-
-  if (websocket.readyState === WebSocket.CLOSED) return;
-  if (websocket.readyState === WebSocket.OPEN) {
-    websocket._readyState = WebSocket.CLOSING;
-    setCloseTimer(websocket);
-  }
-
-  //
-  // `socket.end()` is used instead of `socket.destroy()` to allow the other
-  // peer to finish sending queued data. There is no need to set a timer here
-  // because `CLOSING` means that it is already set or not needed.
-  //
-  this._socket.end();
-
-  if (!websocket._errorEmitted) {
-    websocket._errorEmitted = true;
-    websocket.emit('error', err);
-  }
-}
-
-/**
- * Set a timer to destroy the underlying raw socket of a WebSocket.
- *
- * @param {WebSocket} websocket The WebSocket instance
- * @private
- */
-function setCloseTimer(websocket) {
-  websocket._closeTimer = setTimeout(
-    websocket._socket.destroy.bind(websocket._socket),
-    websocket._closeTimeout
-  );
-}
-
-/**
- * The listener of the socket `'close'` event.
- *
- * @private
- */
-function socketOnClose() {
-  const websocket = this[kWebSocket];
-
-  this.removeListener('close', socketOnClose);
-  this.removeListener('data', socketOnData);
-  this.removeListener('end', socketOnEnd);
-
-  websocket._readyState = WebSocket.CLOSING;
-
-  //
-  // The close frame might not have been received or the `'end'` event emitted,
-  // for example, if the socket was destroyed due to an error. Ensure that the
-  // `receiver` stream is closed after writing any remaining buffered data to
-  // it. If the readable side of the socket is in flowing mode then there is no
-  // buffered data as everything has been already written. If instead, the
-  // socket is paused, any possible buffered data will be read as a single
-  // chunk.
-  //
-  if (
-    !this._readableState.endEmitted &&
-    !websocket._closeFrameReceived &&
-    !websocket._receiver._writableState.errorEmitted &&
-    this._readableState.length !== 0
-  ) {
-    const chunk = this.read(this._readableState.length);
-
-    websocket._receiver.write(chunk);
-  }
-
-  websocket._receiver.end();
-
-  this[kWebSocket] = undefined;
-
-  clearTimeout(websocket._closeTimer);
-
-  if (
-    websocket._receiver._writableState.finished ||
-    websocket._receiver._writableState.errorEmitted
-  ) {
-    websocket.emitClose();
-  } else {
-    websocket._receiver.on('error', receiverOnFinish);
-    websocket._receiver.on('finish', receiverOnFinish);
-  }
-}
-
-/**
- * The listener of the socket `'data'` event.
- *
- * @param {Buffer} chunk A chunk of data
- * @private
- */
-function socketOnData(chunk) {
-  if (!this[kWebSocket]._receiver.write(chunk)) {
-    this.pause();
-  }
-}
-
-/**
- * The listener of the socket `'end'` event.
- *
- * @private
- */
-function socketOnEnd() {
-  const websocket = this[kWebSocket];
-
-  websocket._readyState = WebSocket.CLOSING;
-  websocket._receiver.end();
-  this.end();
-}
-
-/**
- * The listener of the socket `'error'` event.
- *
- * @private
- */
-function socketOnError() {
-  const websocket = this[kWebSocket];
-
-  this.removeListener('error', socketOnError);
-  this.on('error', NOOP);
-
-  if (websocket) {
-    websocket._readyState = WebSocket.CLOSING;
-    this.destroy();
-  }
-}
diff --git a/node_modules/ws/package.json b/node_modules/ws/package.json
deleted file mode 100644
index 91b8269..0000000
--- a/node_modules/ws/package.json
+++ /dev/null
@@ -1,69 +0,0 @@
-{
-  "name": "ws",
-  "version": "8.19.0",
-  "description": "Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js",
-  "keywords": [
-    "HyBi",
-    "Push",
-    "RFC-6455",
-    "WebSocket",
-    "WebSockets",
-    "real-time"
-  ],
-  "homepage": "https://github.com/websockets/ws",
-  "bugs": "https://github.com/websockets/ws/issues",
-  "repository": {
-    "type": "git",
-    "url": "git+https://github.com/websockets/ws.git"
-  },
-  "author": "Einar Otto Stangvik (http://2x.io)",
-  "license": "MIT",
-  "main": "index.js",
-  "exports": {
-    ".": {
-      "browser": "./browser.js",
-      "import": "./wrapper.mjs",
-      "require": "./index.js"
-    },
-    "./package.json": "./package.json"
-  },
-  "browser": "browser.js",
-  "engines": {
-    "node": ">=10.0.0"
-  },
-  "files": [
-    "browser.js",
-    "index.js",
-    "lib/*.js",
-    "wrapper.mjs"
-  ],
-  "scripts": {
-    "test": "nyc --reporter=lcov --reporter=text mocha --throw-deprecation test/*.test.js",
-    "integration": "mocha --throw-deprecation test/*.integration.js",
-    "lint": "eslint . && prettier --check --ignore-path .gitignore \"**/*.{json,md,yaml,yml}\""
-  },
-  "peerDependencies": {
-    "bufferutil": "^4.0.1",
-    "utf-8-validate": ">=5.0.2"
-  },
-  "peerDependenciesMeta": {
-    "bufferutil": {
-      "optional": true
-    },
-    "utf-8-validate": {
-      "optional": true
-    }
-  },
-  "devDependencies": {
-    "benchmark": "^2.1.4",
-    "bufferutil": "^4.0.1",
-    "eslint": "^9.0.0",
-    "eslint-config-prettier": "^10.0.1",
-    "eslint-plugin-prettier": "^5.0.0",
-    "globals": "^16.0.0",
-    "mocha": "^8.4.0",
-    "nyc": "^15.0.0",
-    "prettier": "^3.0.0",
-    "utf-8-validate": "^6.0.0"
-  }
-}
diff --git a/node_modules/ws/wrapper.mjs b/node_modules/ws/wrapper.mjs
deleted file mode 100644
index 7245ad1..0000000
--- a/node_modules/ws/wrapper.mjs
+++ /dev/null
@@ -1,8 +0,0 @@
-import createWebSocketStream from './lib/stream.js';
-import Receiver from './lib/receiver.js';
-import Sender from './lib/sender.js';
-import WebSocket from './lib/websocket.js';
-import WebSocketServer from './lib/websocket-server.js';
-
-export { createWebSocketStream, Receiver, Sender, WebSocket, WebSocketServer };
-export default WebSocket;
diff --git a/package-lock.json b/package-lock.json
deleted file mode 100644
index e96d07e..0000000
--- a/package-lock.json
+++ /dev/null
@@ -1,32 +0,0 @@
-{
-  "name": "app",
-  "lockfileVersion": 3,
-  "requires": true,
-  "packages": {
-    "": {
-      "dependencies": {
-        "ws": "^8.19.0"
-      }
-    },
-    "node_modules/ws": {
-      "version": "8.19.0",
-      "resolved": "https://registry.npmjs.org/ws/-/ws-8.19.0.tgz",
-      "integrity": "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg==",
-      "engines": {
-        "node": ">=10.0.0"
-      },
-      "peerDependencies": {
-        "bufferutil": "^4.0.1",
-        "utf-8-validate": ">=5.0.2"
-      },
-      "peerDependenciesMeta": {
-        "bufferutil": {
-          "optional": true
-        },
-        "utf-8-validate": {
-          "optional": true
-        }
-      }
-    }
-  }
-}
diff --git a/package.json b/package.json
deleted file mode 100644
index 4414d1d..0000000
--- a/package.json
+++ /dev/null
@@ -1,5 +0,0 @@
-{
-  "dependencies": {
-    "ws": "^8.19.0"
-  }
-}
diff --git a/poc-grpc-agent/agent_node/__init__.py b/poc-grpc-agent/agent_node/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/poc-grpc-agent/agent_node/__init__.py
+++ /dev/null
diff --git a/poc-grpc-agent/agent_node/config.py b/poc-grpc-agent/agent_node/config.py
deleted file mode 100644
index cdc367c..0000000
--- a/poc-grpc-agent/agent_node/config.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-import platform
-import yaml
-
-# Path to the generated config file in the bundled distribution
-CONFIG_PATH = "agent_config.yaml"
-
-# Default values
-_defaults = {
-    "node_id": "agent-node-007",
-    "node_description": "Modular Stateful Node",
-    "hub_url": "https://ai.jerxie.com",
-    "grpc_endpoint": "localhost:50051",
-    "auth_token": os.getenv("AGENT_AUTH_TOKEN", "cortex-secret-shared-key"),
-    "sync_root": "/tmp/cortex-sync",
-    "tls": True,
-    "max_skill_workers": 5,
-    "health_report_interval": 10,
-}
-
-# 1. Load from YAML if present
-_config = _defaults.copy()
-if os.path.exists(CONFIG_PATH):
-    try:
-        with open(CONFIG_PATH, 'r') as f:
-            yaml_config = yaml.safe_load(f) or {}
-        _config.update(yaml_config)
-        print(f"[*] Loaded node configuration from {CONFIG_PATH}")
-    except Exception as e:
-        print(f"[!] Error loading {CONFIG_PATH}: {e}")
-
-# 2. Override with Environment Variables (12-Factor style)
-NODE_ID = os.getenv("AGENT_NODE_ID", _config["node_id"])
-NODE_DESC = os.getenv("AGENT_NODE_DESC", _config["node_description"])
-SERVER_HOST_PORT = os.getenv("GRPC_ENDPOINT", _config["grpc_endpoint"])  # e.g. "ai.jerxie.com:50051"
-AUTH_TOKEN = os.getenv("AGENT_AUTH_TOKEN", _config["auth_token"])
-SYNC_DIR = os.getenv("CORTEX_SYNC_DIR", _config["sync_root"])
-TLS_ENABLED = os.getenv("AGENT_TLS_ENABLED", str(_config["tls"])).lower() == 'true'
-
-HEALTH_REPORT_INTERVAL = int(os.getenv("HEALTH_REPORT_INTERVAL", _config["health_report_interval"]))
-MAX_SKILL_WORKERS = int(os.getenv("MAX_SKILL_WORKERS", _config["max_skill_workers"]))
-
-SECRET_KEY = os.getenv("AGENT_SECRET_KEY", _config.get("secret_key", "dev-secret-key-1337"))
-
-# These are still available but likely replaced by AUTH_TOKEN / TLS_ENABLED logic
-CERT_CA = os.getenv("CERT_CA", "certs/ca.crt")
-CERT_CLIENT_CRT = os.getenv("CERT_CLIENT_CRT", "certs/client.crt")
-CERT_CLIENT_KEY = os.getenv("CERT_CLIENT_KEY", "certs/client.key")
-
diff --git a/poc-grpc-agent/agent_node/core/__init__.py b/poc-grpc-agent/agent_node/core/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/poc-grpc-agent/agent_node/core/__init__.py
+++ /dev/null
diff --git a/poc-grpc-agent/agent_node/core/sandbox.py b/poc-grpc-agent/agent_node/core/sandbox.py
deleted file mode 100644
index 9f9390c..0000000
--- a/poc-grpc-agent/agent_node/core/sandbox.py
+++ /dev/null
@@ -1,31 +0,0 @@
-from protos import agent_pb2
-
-class SandboxEngine:
-    """Core Security Engine for Local Command Verification."""
-    def __init__(self):
-        self.policy = None
-
-    def sync(self, p):
-        """Syncs the latest policy from the Orchestrator."""
-        self.policy = {
-            "MODE": "STRICT" if p.mode == agent_pb2.SandboxPolicy.STRICT else "PERMISSIVE",
-            "ALLOWED": list(p.allowed_commands),
-            "DENIED": list(p.denied_commands),
-            "SENSITIVE": list(p.sensitive_commands)
-        }
-
-    def verify(self, command_str):
-        """Verifies if a command string is allowed under the current policy."""
-        if not self.policy: return False, "No Policy"
-
-        parts = (command_str or "").strip().split()
-        if not parts: return False, "Empty"
-
-        base_cmd = parts[0]
-        if base_cmd in self.policy["DENIED"]:
-            return False, f"Forbidden command: {base_cmd}"
-
-        if self.policy["MODE"] == "STRICT" and base_cmd not in self.policy["ALLOWED"]:
-            return False, f"Command '{base_cmd}' not whitelisted"
-
-        return True, "OK"
diff --git a/poc-grpc-agent/agent_node/core/sync.py b/poc-grpc-agent/agent_node/core/sync.py
deleted file mode 100644
index 887c284..0000000
--- a/poc-grpc-agent/agent_node/core/sync.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import os
-import hashlib
-from agent_node.config import SYNC_DIR
-from protos import agent_pb2
-
-class NodeSyncManager:
-    """Handles local filesystem synchronization on the Agent Node."""
-    def __init__(self, base_sync_dir=SYNC_DIR):
-        self.base_sync_dir = base_sync_dir
-        if not os.path.exists(self.base_sync_dir):
-            os.makedirs(self.base_sync_dir, exist_ok=True)
-
-    def get_session_dir(self, session_id: str) -> str:
-        """Returns the unique identifier directory for this session's sync."""
-        path = os.path.join(self.base_sync_dir, session_id)
-        os.makedirs(path, exist_ok=True)
-        return path
-
-    def handle_manifest(self, session_id: str, manifest: agent_pb2.DirectoryManifest) -> list:
-        """Compares local files with the server manifest and returns paths needing update."""
-        session_dir = self.get_session_dir(session_id)
-        print(f"[📁] Reconciling Sync Directory: {session_dir}")
-
-        needs_update = []
-        for file_info in manifest.files:
-            target_path = os.path.join(session_dir, file_info.path)
-
-            if file_info.is_dir:
-                os.makedirs(target_path, exist_ok=True)
-                continue
-
-            # File Check
-            if not os.path.exists(target_path):
-                needs_update.append(file_info.path)
-            else:
-                # Hash comparison
-                with open(target_path, "rb") as f:
-                    actual_hash = hashlib.sha256(f.read()).hexdigest()
-                if actual_hash != file_info.hash:
-                    print(f" [⚠️] Drift Detected: {file_info.path} (Local: {actual_hash[:8]} vs Remote: {file_info.hash[:8]})")
-                    needs_update.append(file_info.path)
-
-        return needs_update
-
-    def write_chunk(self, session_id: str, payload:
-                    agent_pb2.FilePayload) -> bool:
-        """Writes a file chunk to the local session directory."""
-        session_dir = self.get_session_dir(session_id)
-        target_path = os.path.normpath(os.path.join(session_dir, payload.path))
-
-        if not target_path.startswith(session_dir):
-            return False  # Path traversal guard
-
-        os.makedirs(os.path.dirname(target_path), exist_ok=True)
-
-        mode = "ab" if payload.chunk_index > 0 else "wb"
-        with open(target_path, mode) as f:
-            f.write(payload.chunk)
-
-        if payload.is_final and payload.hash:
-            return self._verify(target_path, payload.hash)
-        return True
-
-    def _verify(self, path, expected_hash):
-        with open(path, "rb") as f:
-            actual = hashlib.sha256(f.read()).hexdigest()
-        if actual != expected_hash:
-            print(f"[⚠️] Sync Hash Mismatch for {path}")
-            return False
-        return True
diff --git a/poc-grpc-agent/agent_node/core/watcher.py b/poc-grpc-agent/agent_node/core/watcher.py
deleted file mode 100644
index 1e0207c..0000000
--- a/poc-grpc-agent/agent_node/core/watcher.py
+++ /dev/null
@@ -1,105 +0,0 @@
-
-import time
-import os
-import hashlib
-from watchdog.observers import Observer
-from watchdog.events import FileSystemEventHandler
-from shared_core.ignore import CortexIgnore
-from protos import agent_pb2
-
-class SyncEventHandler(FileSystemEventHandler):
-    """Listens for FS events and triggers gRPC delta pushes."""
-    def __init__(self, session_id, root_path, callback):
-        self.session_id = session_id
-        self.root_path = root_path
-        self.callback = callback
-        self.ignore_filter = CortexIgnore(root_path)
-        self.last_sync = {}  # path -> last_hash
-        self.locked = False
-
-    def on_modified(self, event):
-        if not event.is_directory:
-            self._process_change(event.src_path)
-
-    def on_created(self, event):
-        if not event.is_directory:
-            self._process_change(event.src_path)
-
-    def on_moved(self, event):
-        # Simplification: treat move as a delete and create, or just process the dest
-        self._process_change(event.dest_path)
-
-    def _process_change(self,
-                        abs_path):
-        if self.locked:
-            return  # Block all user edits when session is locked
-
-        rel_path = os.path.normpath(os.path.relpath(abs_path, self.root_path))
-
-        # Phase 3: Dynamic reload if .cortexignore / .gitignore changed
-        if rel_path in [".cortexignore", ".gitignore"]:
-            print(f" [*] Reloading Ignore Filter for {self.session_id}")
-            self.ignore_filter = CortexIgnore(self.root_path)
-
-        if self.ignore_filter.is_ignored(rel_path):
-            return
-
-        try:
-            with open(abs_path, "rb") as f:
-                content = f.read()
-            file_hash = hashlib.sha256(content).hexdigest()
-
-            if self.last_sync.get(rel_path) == file_hash:
-                return  # No actual change
-
-            self.last_sync[rel_path] = file_hash
-            print(f" [📁📤] Detected Change: {rel_path}")
-
-            # Chunk and Send
-            chunk_size = 64 * 1024
-            for i in range(0, len(content), chunk_size):
-                chunk = content[i:i + chunk_size]
-                is_final = i + chunk_size >= len(content)
-                payload = agent_pb2.FilePayload(
-                    path=rel_path,
-                    chunk=chunk,
-                    chunk_index=i // chunk_size,
-                    is_final=is_final,
-                    hash=file_hash if is_final else ""
-                )
-                self.callback(self.session_id, payload)
-        except Exception as e:
-            print(f" [!] Watcher Error for {rel_path}: {e}")
-
-class WorkspaceWatcher:
-    """Manages FS observers for active synchronization."""
-    def __init__(self, callback):
-        self.callback = callback
-        self.observers = {}  # session_id -> (observer, handler)
-
-    def set_lock(self, session_id, locked=True):
-        if session_id in self.observers:
-            print(f"[*] Workspace LOCK for {session_id}: {locked}")
-            self.observers[session_id][1].locked = locked
-
-    def start_watching(self, session_id, root_path):
-        if session_id in self.observers:
-            self.stop_watching(session_id)
-
-        print(f"[*] Starting Watcher for Session {session_id} at {root_path}")
-        handler = SyncEventHandler(session_id, root_path, self.callback)
-        observer = Observer()
-        observer.schedule(handler, root_path, recursive=True)
-        observer.start()
-        self.observers[session_id] = (observer, handler)
-
-    def stop_watching(self, session_id):
-        if session_id in self.observers:
-            print(f"[*] Stopping Watcher for Session {session_id}")
-            obs, _ = self.observers[session_id]
-            obs.stop()
-            obs.join()
-            del self.observers[session_id]
-
-    def shutdown(self):
-        for sid in list(self.observers.keys()):
-            self.stop_watching(sid)
diff --git a/poc-grpc-agent/agent_node/main.py b/poc-grpc-agent/agent_node/main.py
deleted file mode 100644
index 6fa4eb0..0000000
--- a/poc-grpc-agent/agent_node/main.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import sys
-import os
-
-# Add root to path to find protos and other packages
-sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
-
-import signal
-from agent_node.node import AgentNode
-from agent_node.config import NODE_ID
-
-def main():
-    print(f"[*] Starting Antigravity Agent Node: {NODE_ID}...")
-
-    # 1. Initialization
-    node = AgentNode()
-
-    # 2. Signal Handling for Graceful Shutdown
-    def handle_exit(sig, frame):
-        node.stop()
-        sys.exit(0)
-
-    signal.signal(signal.SIGINT, handle_exit)
-    signal.signal(signal.SIGTERM, handle_exit)
-
-    # Handshake: Sync configuration and Sandbox Policy
-    node.sync_configuration()
-
-    # 3. Background: Start health reporting (Heartbeats)
-    node.start_health_reporting()
-
-    # 4. Foreground: Run Persistent Task Stream
-    node.run_task_stream()
-
-if __name__ == '__main__':
-    main()
diff --git a/poc-grpc-agent/agent_node/node.py b/poc-grpc-agent/agent_node/node.py
deleted file mode 100644
index 6b04881..0000000
--- a/poc-grpc-agent/agent_node/node.py
+++ /dev/null
@@ -1,256 +0,0 @@
-import threading
-import queue
-import time
-import sys
-import os
-import hashlib
-from protos import agent_pb2, agent_pb2_grpc
-from agent_node.skills.manager import SkillManager
-from agent_node.core.sandbox import SandboxEngine
-from agent_node.core.sync import NodeSyncManager
-from agent_node.core.watcher import WorkspaceWatcher
-from agent_node.utils.auth import verify_task_signature
-from agent_node.utils.network import get_secure_stub
-from agent_node.config import NODE_ID, NODE_DESC, AUTH_TOKEN, HEALTH_REPORT_INTERVAL, MAX_SKILL_WORKERS
-
-
-class AgentNode:
-    """The 'Agent Core': Orchestrates Local Skills and Maintains gRPC Connection."""
-    def __init__(self, node_id=NODE_ID):
-        self.node_id = node_id
-        self.sandbox = SandboxEngine()
-        self.sync_mgr = NodeSyncManager()
-        self.skills = SkillManager(max_workers=MAX_SKILL_WORKERS, sync_mgr=self.sync_mgr)
-        self.watcher = WorkspaceWatcher(self._on_sync_delta)
-        self.task_queue = queue.Queue()
-        self.stub = get_secure_stub()
-
-    def sync_configuration(self):
-        """Initial handshake to retrieve policy and metadata."""
-        print(f"[*] Handshake with Orchestrator: {self.node_id}")
-        reg_req = agent_pb2.RegistrationRequest(
-            node_id=self.node_id,
-            auth_token=AUTH_TOKEN,
-            node_description=NODE_DESC,
-            capabilities={"shell": "v1", "browser":
"playwright-sync-bridge"} - ) - - - try: - res = self.stub.SyncConfiguration(reg_req) - if res.success: - self.sandbox.sync(res.policy) - print("[OK] Sandbox Policy Synced.") - else: - print(f"[!] Rejection: {res.error_message}") - sys.exit(1) - except Exception as e: - print(f"[!] Connection Fail: {e}") - sys.exit(1) - - def start_health_reporting(self): - """Streaming node metrics to the orchestrator for load balancing.""" - def _gen(): - while True: - ids = self.skills.get_active_ids() - yield agent_pb2.Heartbeat( - node_id=self.node_id, cpu_usage_percent=1.0, - active_worker_count=len(ids), - max_worker_capacity=MAX_SKILL_WORKERS, - running_task_ids=ids - ) - time.sleep(HEALTH_REPORT_INTERVAL) - - # Non-blocking thread for health heartbeat - threading.Thread( - target=lambda: list(self.stub.ReportHealth(_gen())), - daemon=True, name=f"Health-{self.node_id}" - ).start() - - def run_task_stream(self): - """Main Persistent Bi-directional Stream for Task Management.""" - def _gen(): - # Initial announcement for routing identity - yield agent_pb2.ClientTaskMessage( - announce=agent_pb2.NodeAnnounce(node_id=self.node_id) - ) - while True: - yield self.task_queue.get() - - responses = self.stub.TaskStream(_gen()) - print(f"[*] Task Stream Online: {self.node_id}", flush=True) - - try: - for msg in responses: - kind = msg.WhichOneof('payload') - print(f" [📥] Received from Stream: {kind}", flush=True) - self._process_server_message(msg) - except Exception as e: - print(f"[!] 
Task Stream Failure: {e}", flush=True) - - def _process_server_message(self, msg): - kind = msg.WhichOneof('payload') - print(f"[*] Inbound: {kind}", flush=True) - - if kind == 'task_request': - self._handle_task(msg.task_request) - - elif kind == 'task_cancel': - if self.skills.cancel(msg.task_cancel.task_id): - self._send_response(msg.task_cancel.task_id, None, agent_pb2.TaskResponse.CANCELLED) - - elif kind == 'work_pool_update': - # Claim idle tasks from the global pool with slight randomized jitter - # to prevent a thundering herd where every node claims the same task at the exact same ms. - if len(self.skills.get_active_ids()) < MAX_SKILL_WORKERS: - import random - for tid in msg.work_pool_update.available_task_ids: - # Randomized jitter to spread claims across nodes - time.sleep(random.uniform(0.1, 0.5)) - - self.task_queue.put(agent_pb2.ClientTaskMessage( - task_claim=agent_pb2.TaskClaimRequest(task_id=tid, node_id=self.node_id) - )) - - elif kind == 'claim_status': - status = "GRANTED" if msg.claim_status.granted else "DENIED" - print(f" [📦] Claim {msg.claim_status.task_id}: {status} ({msg.claim_status.reason})", flush=True) - - elif kind == 'file_sync': - self._handle_file_sync(msg.file_sync) - - def _on_sync_delta(self, session_id, file_payload): - """Callback from watcher to push local changes to server.""" - self.task_queue.put(agent_pb2.ClientTaskMessage( - file_sync=agent_pb2.FileSyncMessage( - session_id=session_id, - file_data=file_payload - ) - )) - - def _handle_file_sync(self, fs): - """Processes inbound file synchronization messages from the Orchestrator.""" - sid = fs.session_id - if fs.HasField("manifest"): - needs_update = self.sync_mgr.handle_manifest(sid, fs.manifest) - if needs_update: - print(f" [📁⚠️] Drift Detected for {sid}: {len(needs_update)} files need sync") - self.task_queue.put(agent_pb2.ClientTaskMessage( - file_sync=agent_pb2.FileSyncMessage( - session_id=sid, - status=agent_pb2.SyncStatus( - 
code=agent_pb2.SyncStatus.RECONCILE_REQUIRED, - message=f"Drift detected in {len(needs_update)} files", - reconcile_paths=needs_update - ) - ) - )) - else: - self.task_queue.put(agent_pb2.ClientTaskMessage( - file_sync=agent_pb2.FileSyncMessage( - session_id=sid, - status=agent_pb2.SyncStatus(code=agent_pb2.SyncStatus.OK, message="Synchronized") - ) - )) - elif fs.HasField("file_data"): - success = self.sync_mgr.write_chunk(sid, fs.file_data) - if fs.file_data.is_final: - print(f" [📁] File Received: {fs.file_data.path} (Verified: {success})") - status = agent_pb2.SyncStatus.OK if success else agent_pb2.SyncStatus.ERROR - self.task_queue.put(agent_pb2.ClientTaskMessage( - file_sync=agent_pb2.FileSyncMessage( - session_id=sid, - status=agent_pb2.SyncStatus(code=status, message=f"File {fs.file_data.path} synced") - ) - )) - elif fs.HasField("control"): - ctrl = fs.control - if ctrl.action == agent_pb2.SyncControl.START_WATCHING: - # Path relative to sync dir or absolute - watch_path = ctrl.path if os.path.isabs(ctrl.path) else os.path.join(self.sync_mgr.get_session_dir(sid), ctrl.path) - self.watcher.start_watching(sid, watch_path) - elif ctrl.action == agent_pb2.SyncControl.STOP_WATCHING: - self.watcher.stop_watching(sid) - elif ctrl.action == agent_pb2.SyncControl.LOCK: - self.watcher.set_lock(sid, True) - elif ctrl.action == agent_pb2.SyncControl.UNLOCK: - self.watcher.set_lock(sid, False) - elif ctrl.action == agent_pb2.SyncControl.REFRESH_MANIFEST: - # Node -> Server Manifest Push - self._push_full_manifest(sid, ctrl.path) - elif ctrl.action == agent_pb2.SyncControl.RESYNC: - # Server -> Node asks for a check, but Node only has its own manifest? 
- # In practice RESYNC means "send me your manifest so I can check", - # so respond by pushing the node's current manifest. - self._push_full_manifest(sid, ctrl.path) - - def _push_full_manifest(self, session_id, rel_path="."): - """Pushes the current local manifest back to the server.""" - print(f" [📁📤] Pushing Full Manifest for {session_id}") - watch_path = rel_path if os.path.isabs(rel_path) else os.path.join(self.sync_mgr.get_session_dir(session_id), rel_path) - - # We need a manifest generator similar to GhostMirrorManager but on the node; - # for Phase 3 we implement a simple one here. - files = [] - for root, dirs, filenames in os.walk(watch_path): - for filename in filenames: - abs_path = os.path.join(root, filename) - r_path = os.path.relpath(abs_path, watch_path) - with open(abs_path, "rb") as f: - h = hashlib.sha256(f.read()).hexdigest() - files.append(agent_pb2.FileInfo(path=r_path, size=os.path.getsize(abs_path), hash=h)) - - self.task_queue.put(agent_pb2.ClientTaskMessage( - file_sync=agent_pb2.FileSyncMessage( - session_id=session_id, - manifest=agent_pb2.DirectoryManifest(root_path=rel_path, files=files) - ) - )) - - def _handle_task(self, task): - print(f"[*] Task Launch: {task.task_id}", flush=True) - # 1. Cryptographic Signature Verification - if not verify_task_signature(task): - print(f"[!] Signature Validation Failed for {task.task_id}", flush=True) - return - - print(f"[✅] Validated task {task.task_id}", flush=True) - - # 2. Skill Manager Submission - success, reason = self.skills.submit(task, self.sandbox, self._on_finish, self._on_event) - if not success: - print(f"[!] 
Execution Rejected: {reason}", flush=True) - - def _on_event(self, event): - """Live Event Tunneler: Routes browser/skill events into the main stream.""" - self.task_queue.put(agent_pb2.ClientTaskMessage(browser_event=event)) - - def _on_finish(self, tid, res, trace): - """Final Completion Callback: Routes task results back to server.""" - print(f"[*] Completion: {tid}", flush=True) - status = agent_pb2.TaskResponse.SUCCESS if res['status'] == 1 else agent_pb2.TaskResponse.ERROR - - tr = agent_pb2.TaskResponse( - task_id=tid, status=status, - stdout=res.get('stdout',''), - stderr=res.get('stderr',''), - trace_id=trace, - browser_result=res.get("browser_result") - ) - self._send_response(tid, tr) - - def _send_response(self, tid, tr=None, status=None): - """Utility for placing response messages into the gRPC outbound queue.""" - if tr: - self.task_queue.put(agent_pb2.ClientTaskMessage(task_response=tr)) - else: - self.task_queue.put(agent_pb2.ClientTaskMessage( - task_response=agent_pb2.TaskResponse(task_id=tid, status=status) - )) - - def stop(self): - """Gracefully stops all background services and skills.""" - print(f"\n[🛑] Stopping Agent Node: {self.node_id}") - self.skills.shutdown() - # Optionally close gRPC channel if we want to be very clean - # self.channel.close() diff --git a/poc-grpc-agent/agent_node/skills/__init__.py b/poc-grpc-agent/agent_node/skills/__init__.py deleted file mode 100644 index e69de29..0000000 --- a/poc-grpc-agent/agent_node/skills/__init__.py +++ /dev/null diff --git a/poc-grpc-agent/agent_node/skills/base.py b/poc-grpc-agent/agent_node/skills/base.py deleted file mode 100644 index 33c88ec..0000000 --- a/poc-grpc-agent/agent_node/skills/base.py +++ /dev/null @@ -1,13 +0,0 @@ -class BaseSkill: - """Abstract interface for all Node capabilities (Shell, Browser, etc.).""" - def execute(self, task, sandbox, on_complete, on_event=None): - """Processes the given task and notifies results via callbacks.""" - raise NotImplementedError - - def 
cancel(self, task_id: str) -> bool: - """Attempts to cancel the task and returns success status.""" - return False - - def shutdown(self): - """Cleanup resources on node exit.""" - pass diff --git a/poc-grpc-agent/agent_node/skills/browser.py b/poc-grpc-agent/agent_node/skills/browser.py deleted file mode 100644 index 3205b7d..0000000 --- a/poc-grpc-agent/agent_node/skills/browser.py +++ /dev/null @@ -1,149 +0,0 @@ -import threading -import queue -import time -import json -import os -from playwright.sync_api import sync_playwright -from agent_node.skills.base import BaseSkill -from protos import agent_pb2 - -class BrowserSkill(BaseSkill): - """The 'Antigravity Bridge': Persistent Browser Skill using a dedicated Actor thread.""" - def __init__(self, sync_mgr=None): - self.task_queue = queue.Queue() - self.sessions = {} # session_id -> { "context": Context, "page": Page } - self.sync_mgr = sync_mgr - self.lock = threading.Lock() - threading.Thread(target=self._browser_actor, daemon=True, name="BrowserActor").start() - - def _setup_listeners(self, sid, page, on_event): - """Tunnels browser internal events back to the Orchestrator.""" - if not on_event: return - - # Live Console Redirector - page.on("console", lambda msg: on_event(agent_pb2.BrowserEvent( - session_id=sid, console_msg=agent_pb2.ConsoleMessage( - level=msg.type, text=msg.text, timestamp_ms=int(time.time()*1000) - ) - ))) - - # Live Network Redirector - page.on("requestfinished", lambda req: on_event(agent_pb2.BrowserEvent( - session_id=sid, network_req=agent_pb2.NetworkRequest( - method=req.method, url=req.url, status=req.response().status if req.response() else 0, - resource_type=req.resource_type, latency_ms=0 - ) - ))) - - # Live Download Redirector - page.on("download", lambda download: self._handle_download(sid, download)) - - def _handle_download(self, sid, download): - """Saves browser downloads directly into the synchronized session workspace.""" - with self.lock: - sess = self.sessions.get(sid) - if sess 
and sess.get("download_dir"): - target = os.path.join(sess["download_dir"], download.suggested_filename) - print(f" [🌐📥] Browser Download Sync: {download.suggested_filename} -> {target}") - download.save_as(target) - - def _browser_actor(self): - """Serializes all Playwright operations on a single dedicated thread.""" - print("[🌐] Browser Actor Starting...", flush=True) - pw = None - browser = None - try: - pw = sync_playwright().start() - # 12-Factor/Container Optimization: Standard non-sandbox arguments - browser = pw.chromium.launch(headless=True, args=[ - '--no-sandbox', '--disable-setuid-sandbox', '--disable-dev-shm-usage', '--disable-gpu' - ]) - print("[🌐] Browser Engine Online.", flush=True) - except Exception as e: - print(f"[!] Browser Actor Startup Fail: {e}", flush=True) - if pw: pw.stop() - return - - while True: - try: - item = self.task_queue.get() - if item is None: # Sentinel for shutdown - print("[🌐] Browser Actor Shutting Down...", flush=True) - break - - task, sandbox, on_complete, on_event = item - action = task.browser_action - sid = action.session_id or "default" - - with self.lock: - if sid not in self.sessions: - # Phase 4: Mount workspace for downloads/uploads - download_dir = None - if self.sync_mgr and task.session_id: - download_dir = self.sync_mgr.get_session_dir(task.session_id) - print(f" [🌐📁] Mapping Browser Context to: {download_dir}") - - ctx = browser.new_context(accept_downloads=True) - pg = ctx.new_page() - self._setup_listeners(sid, pg, on_event) - self.sessions[sid] = {"context": ctx, "page": pg, "download_dir": download_dir} - - page = self.sessions[sid]["page"] - print(f" [🌐] Browser Actor Processing: {agent_pb2.BrowserAction.ActionType.Name(action.action)} | Session: {sid}", flush=True) - - res_data = {} - # State-Machine Logic for Actions - if action.action == agent_pb2.BrowserAction.NAVIGATE: - page.goto(action.url, wait_until="commit") - elif action.action == agent_pb2.BrowserAction.CLICK: - page.click(action.selector) - 
elif action.action == agent_pb2.BrowserAction.TYPE: - page.fill(action.selector, action.text) - elif action.action == agent_pb2.BrowserAction.SCREENSHOT: - res_data["snapshot"] = page.screenshot() - elif action.action == agent_pb2.BrowserAction.GET_DOM: - res_data["dom_content"] = page.content() - elif action.action == agent_pb2.BrowserAction.HOVER: - page.hover(action.selector) - elif action.action == agent_pb2.BrowserAction.SCROLL: - page.mouse.wheel(x=0, y=action.y) - elif action.action == agent_pb2.BrowserAction.EVAL: - res_data["eval_result"] = str(page.evaluate(action.text)) - elif action.action == agent_pb2.BrowserAction.GET_A11Y: - res_data["a11y_tree"] = json.dumps(page.accessibility.snapshot()) - elif action.action == agent_pb2.BrowserAction.CLOSE: - with self.lock: - sess = self.sessions.pop(sid, None) - if sess: sess["context"].close() - page = None # The page is closed; skip live reads when building the result - - # Results Construction - br_res = agent_pb2.BrowserResponse( - url=page.url if page else "", title=page.title() if page else "", - snapshot=res_data.get("snapshot", b""), - dom_content=res_data.get("dom_content", ""), - a11y_tree=res_data.get("a11y_tree", ""), - eval_result=res_data.get("eval_result", "") - ) - on_complete(task.task_id, {"status": 1, "browser_result": br_res}, task.trace_id) - except Exception as e: - print(f" [!] 
Browser Actor Error: {e}", flush=True) - on_complete(task.task_id, {"stderr": str(e), "status": 2}, task.trace_id) - - # Cleanup on loop exit - print("[🌐] Cleaning up Browser Engine...", flush=True) - with self.lock: - for s in self.sessions.values(): - try: s["context"].close() - except Exception: pass - self.sessions.clear() - if browser: browser.close() - if pw: pw.stop() - - def execute(self, task, sandbox, on_complete, on_event=None): - self.task_queue.put((task, sandbox, on_complete, on_event)) - - def cancel(self, task_id): return False - - def shutdown(self): - """Triggers graceful shutdown of the browser engine.""" - self.task_queue.put(None) diff --git a/poc-grpc-agent/agent_node/skills/manager.py b/poc-grpc-agent/agent_node/skills/manager.py deleted file mode 100644 index f5a85b8..0000000 --- a/poc-grpc-agent/agent_node/skills/manager.py +++ /dev/null @@ -1,64 +0,0 @@ -import threading -from concurrent import futures -from agent_node.skills.shell import ShellSkill -from agent_node.skills.browser import BrowserSkill -from agent_node.config import MAX_SKILL_WORKERS - -class SkillManager: - """Orchestrates multiple modular skills and manages the task worker pool.""" - def __init__(self, max_workers=MAX_SKILL_WORKERS, sync_mgr=None): - self.executor = futures.ThreadPoolExecutor(max_workers=max_workers, thread_name_prefix="skill-worker") - self.active_tasks = {} # task_id -> future - self.sync_mgr = sync_mgr - self.skills = { - "shell": ShellSkill(sync_mgr=sync_mgr), - "browser": BrowserSkill(sync_mgr=sync_mgr) - } - self.max_workers = max_workers - self.lock = threading.Lock() - - def submit(self, task, sandbox, on_complete, on_event=None): - """Routes a task to the appropriate skill and submits it to the thread pool.""" - with self.lock: - if len(self.active_tasks) >= self.max_workers: - return False, "Node Capacity Reached" - - # 1. Routing Engine - if task.HasField("browser_action"): - skill = self.skills["browser"] - else: - skill = self.skills["shell"] - - # 2. 
Execution submission - future = self.executor.submit(skill.execute, task, sandbox, on_complete, on_event) - self.active_tasks[task.task_id] = future - - # Cleanup hook - future.add_done_callback(lambda f: self._cleanup(task.task_id)) - return True, "Accepted" - - def cancel(self, task_id): - """Attempts to cancel an active task through all registered skills.""" - with self.lock: - cancelled = any(s.cancel(task_id) for s in self.skills.values()) - return cancelled - - def get_active_ids(self): - """Returns the list of currently running task IDs.""" - with self.lock: - return list(self.active_tasks.keys()) - - def _cleanup(self, task_id): - """Internal callback to release capacity when a task finishes.""" - with self.lock: - self.active_tasks.pop(task_id, None) - - def shutdown(self): - """Triggers shutdown for all skills and the worker pool.""" - print("[🔧] Shutting down Skill Manager...") - with self.lock: - for name, skill in self.skills.items(): - print(f" [🔧] Shutting down skill: {name}") - skill.shutdown() - # Shutdown thread pool - self.executor.shutdown(wait=True) diff --git a/poc-grpc-agent/agent_node/skills/shell.py b/poc-grpc-agent/agent_node/skills/shell.py deleted file mode 100644 index 9d17464..0000000 --- a/poc-grpc-agent/agent_node/skills/shell.py +++ /dev/null @@ -1,72 +0,0 @@ -import subprocess -import threading -from .base import BaseSkill - -class ShellSkill(BaseSkill): - """Default Skill: Executing shell commands with sandbox safety.""" - def __init__(self, sync_mgr=None): - self.processes = {} # task_id -> Popen - self.sync_mgr = sync_mgr - self.lock = threading.Lock() - - def execute(self, task, sandbox, on_complete, on_event=None): - """Processes shell-based commands for the Node.""" - try: - cmd = task.payload_json - - # 1. Verification Logic - allowed, status_msg = sandbox.verify(cmd) - if not allowed: - err_msg = f"SANDBOX_VIOLATION: {status_msg}" - return on_complete(task.task_id, {"stderr": err_msg, "status": 2}, task.trace_id) - - # 2. 
Sequential Execution - print(f" [🐚] Executing Shell: {cmd}", flush=True) - - # Resolve CWD for the skill based on session_id - cwd = None - if self.sync_mgr and task.session_id: - cwd = self.sync_mgr.get_session_dir(task.session_id) - print(f" [📁] Setting CWD to {cwd}") - - p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, cwd=cwd) - - with self.lock: - self.processes[task.task_id] = p - - # 3. Timeout Handling - timeout = task.timeout_ms / 1000.0 if task.timeout_ms > 0 else None - stdout, stderr = p.communicate(timeout=timeout) - - print(f" [🐚] Shell Done: {cmd} | Stdout Size: {len(stdout)}", flush=True) - on_complete(task.task_id, { - "stdout": stdout, "stderr": stderr, - "status": 1 if p.returncode == 0 else 2 - }, task.trace_id) - - except subprocess.TimeoutExpired: - self.cancel(task.task_id) - on_complete(task.task_id, {"stderr": "TASK_TIMEOUT", "status": 2}, task.trace_id) - except Exception as e: - on_complete(task.task_id, {"stderr": str(e), "status": 2}, task.trace_id) - finally: - with self.lock: - self.processes.pop(task.task_id, None) - - def cancel(self, task_id: str): - """Standard process termination for shell tasks.""" - with self.lock: - p = self.processes.get(task_id) - if p: - print(f"[🛑] Killing Shell Task: {task_id}") - p.kill() - return True - return False - def shutdown(self): - """Standard cleanup: Terminates all active shell processes.""" - with self.lock: - for tid, p in list(self.processes.items()): - print(f"[🛑] Killing Orphan Shell Task: {tid}") - try: p.kill() - except Exception: pass - self.processes.clear() diff --git a/poc-grpc-agent/agent_node/utils/__init__.py b/poc-grpc-agent/agent_node/utils/__init__.py deleted file mode 100644 index e69de29..0000000 --- a/poc-grpc-agent/agent_node/utils/__init__.py +++ /dev/null diff --git a/poc-grpc-agent/agent_node/utils/auth.py b/poc-grpc-agent/agent_node/utils/auth.py deleted file mode 100644 index 202fd4c..0000000 --- 
a/poc-grpc-agent/agent_node/utils/auth.py +++ /dev/null @@ -1,28 +0,0 @@ -import jwt -import datetime -import hmac -import hashlib -from protos import agent_pb2 -from agent_node.config import SECRET_KEY - -def create_auth_token(node_id: str) -> str: - """Creates a JWT for node authentication.""" - payload = { - "sub": node_id, - "iat": datetime.datetime.now(datetime.timezone.utc), - "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=10) - } - return jwt.encode(payload, SECRET_KEY, algorithm="HS256") - -def verify_task_signature(task, secret=SECRET_KEY) -> bool: - """Verifies HMAC signature for shell or browser tasks.""" - if task.HasField("browser_action"): - a = task.browser_action - # Aligned with orchestrator's sign_browser_action using the string Name - kind = agent_pb2.BrowserAction.ActionType.Name(a.action) - sign_base = f"{kind}:{a.url}:{a.session_id}" - else: - sign_base = task.payload_json - - expected_sig = hmac.new(secret.encode(), sign_base.encode(), hashlib.sha256).hexdigest() - return hmac.compare_digest(task.signature, expected_sig) diff --git a/poc-grpc-agent/agent_node/utils/network.py b/poc-grpc-agent/agent_node/utils/network.py deleted file mode 100644 index 3eac1c6..0000000 --- a/poc-grpc-agent/agent_node/utils/network.py +++ /dev/null @@ -1,27 +0,0 @@ -import grpc -import os -from protos import agent_pb2_grpc -from agent_node.config import SERVER_HOST_PORT, TLS_ENABLED, CERT_CA, CERT_CLIENT_CRT, CERT_CLIENT_KEY - -def get_secure_stub(): - """Initializes a gRPC channel (Secure or Insecure) and returns the orchestrator stub.""" - - if not TLS_ENABLED: - print(f"[!] TLS is disabled. 
Connecting via insecure channel to {SERVER_HOST_PORT}") - channel = grpc.insecure_channel(SERVER_HOST_PORT) - return agent_pb2_grpc.AgentOrchestratorStub(channel) - - print(f"[*] Connecting via secure (mTLS) channel to {SERVER_HOST_PORT}") - try: - with open(CERT_CLIENT_KEY, 'rb') as f: pkey = f.read() - with open(CERT_CLIENT_CRT, 'rb') as f: cert = f.read() - with open(CERT_CA, 'rb') as f: ca = f.read() - - creds = grpc.ssl_channel_credentials(ca, pkey, cert) - channel = grpc.secure_channel(SERVER_HOST_PORT, creds) - return agent_pb2_grpc.AgentOrchestratorStub(channel) - except FileNotFoundError as e: - print(f"[!] Certificate files not found: {e}. Falling back to insecure channel...") - channel = grpc.insecure_channel(SERVER_HOST_PORT) - return agent_pb2_grpc.AgentOrchestratorStub(channel) - diff --git a/poc-grpc-agent/certs/ca.crt b/poc-grpc-agent/certs/ca.crt deleted file mode 100644 index 5102a17..0000000 --- a/poc-grpc-agent/certs/ca.crt +++ /dev/null @@ -1,32 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIFfzCCA2egAwIBAgIUbCHgWz7k+WP8AqSDeDRN4jsSTHcwDQYJKoZIhvcNAQEL -BQAwTzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQswCQYDVQQHDAJTRjEPMA0G -A1UECgwGQ29ydGV4MRUwEwYDVQQDDAxDb3J0ZXhSb290Q0EwHhcNMjYwMzAyMDgz -OTEzWhcNMjcwMzAyMDgzOTEzWjBPMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0Ex -CzAJBgNVBAcMAlNGMQ8wDQYDVQQKDAZDb3J0ZXgxFTATBgNVBAMMDENvcnRleFJv -b3RDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM9DT7DZGxfu+7tI -I6sStQF1ERMeh9F5E01cby5GrE1J8EfOSIViaaHwHpLXP4icBzF6fHAoDBQ4JdC3 -pO1HpcjAbxcIb/AkL//B7hNLQ2My6x/Z8jDxLt1vLidH77rTcf+bOSApYkEWTO1G -2w4veUIbqF7G9jemT/KMzIjJpjnR9911EQ5yUcrKZMtTfceEqPrQl9gThTQhQbmS -cNQoSdCirHju14f3u1tzYyiaJqi8Vo07GbWlWnqp5Zt20Bdhvqzo4WQLfy9b9AHO -LZMSanTN/S9EKFogHIYb8+XxaGjDSH7mb/VPHPdsCovQUcSIVgC37fEIZjnfalgC -SWhrVB1L6NjSnWf1xcQpVt4kl5FhOGLbuim/1ACvitAfPCTlTr+bdpzYOnZLrXGQ -OKgmum8YAYIpotnXpg9M1/CEf+LUfG1d5HfLpPZwpwkeNVdMCi7o+Zix7vcId+Wd -WBrQ8wa9YoMQAWKYNfcdj3kXhYE3uq1naoFrVWzX2cGEK5JwdvdqZ24MPqLISbGr 
-AeOUVQOXFLP82LKVG7KAAf7znp0+xsot0gYyQiPi3PUONtU6j8qR7uZOBjZdA2Re -0XGfoPuzJCmVf1SofRFhySKKcxxf0mX/c6dqhjRZIK2X56dH3+pXFbRg009AyLAg -GKSBp5dVdTz2QY3ZoX9a5VFSgezFAgMBAAGjUzBRMB0GA1UdDgQWBBTdPMqMdMZV -qWUSUupSXOjKYVip8zAfBgNVHSMEGDAWgBTdPMqMdMZVqWUSUupSXOjKYVip8zAP -BgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQCGQPdVAJxAtV+OTM4k -ZyyZEGoXaaTXYAUc6uPCNiP1vnsQKByhApZyvpEl2X2BiLYBfLVVU7fjRR4drWLv -n4eR7JaWs5Roczq7isygIguRrxbB927KIsFe3FW2qCTs0j8drTYz40CklvTo3paB -l05AF1h4KW57161UCdw6KrK3P1RSp2ow1cPgrBUFjCUS1VnBw1N+TS7sMeTa03CJ -r/x7riO/Z4P9kW+0pX1sM6Oo7eYeo8bqL1G24Y3/1B2MhA7Y+Kc2kaexgPiJ+ZGl -xeGkmBEMWdMI70i4AXttY/ayWJQz37zicVuPXrfQQA7M82eZvoAaw18giyCMX0Na -7TPVknXj2mrBvIydVm7Ik/AdUjwpnLC2jQ6gDOhcWY4fUjueMvJqMsTrl5cxI4mN -OHgGao0MA0ETjBGFhe8dfzNm5njwiUU+6gFae+kOmsE6JYaDAHxVLJderrV1iAnt -R80e5zRqXiY8WoayCHfdt9hHeYJSmOsHJUkYH5MqoZ1iIhSYrfKYJ4SKi8Z+mjwp -FVN2WVMeC1IKB59Q3IHHrVXerp7SID9nGjEL8GJ+Cm58ZZp/1WOJf1PXr/Ajw+H9 -ZJ3QS3x3vyxt/sMHPWzJ/EY15yzVmOrn20Cw+nsK3p3ZawwMzAsUbq0i8/SmC91M -UmvP7jZ8sdQp3gFsXWZ9ymSpvg== ------END CERTIFICATE----- diff --git a/poc-grpc-agent/certs/ca.key b/poc-grpc-agent/certs/ca.key deleted file mode 100644 index 89cebc4..0000000 --- a/poc-grpc-agent/certs/ca.key +++ /dev/null @@ -1,52 +0,0 @@ ------BEGIN PRIVATE KEY----- -MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQDPQ0+w2RsX7vu7 -SCOrErUBdRETHofReRNNXG8uRqxNSfBHzkiFYmmh8B6S1z+InAcxenxwKAwUOCXQ -t6TtR6XIwG8XCG/wJC//we4TS0NjMusf2fIw8S7dby4nR++603H/mzkgKWJBFkzt -RtsOL3lCG6hexvY3pk/yjMyIyaY50ffddREOclHKymTLU33HhKj60JfYE4U0IUG5 -knDUKEnQoqx47teH97tbc2MomiaovFaNOxm1pVp6qeWbdtAXYb6s6OFkC38vW/QB -zi2TEmp0zf0vRChaIByGG/Pl8Whow0h+5m/1Txz3bAqL0FHEiFYAt+3xCGY532pY -Akloa1QdS+jY0p1n9cXEKVbeJJeRYThi27opv9QAr4rQHzwk5U6/m3ac2Dp2S61x -kDioJrpvGAGCKaLZ16YPTNfwhH/i1HxtXeR3y6T2cKcJHjVXTAou6PmYse73CHfl -nVga0PMGvWKDEAFimDX3HY95F4WBN7qtZ2qBa1Vs19nBhCuScHb3amduDD6iyEmx -qwHjlFUDlxSz/NiylRuygAH+856dPsbKLdIGMkIj4tz1DjbVOo/Kke7mTgY2XQNk -XtFxn6D7syQplX9UqH0RYckiinMcX9Jl/3OnaoY0WSCtl+enR9/qVxW0YNNPQMiw 
-IBikgaeXVXU89kGN2aF/WuVRUoHsxQIDAQABAoICAEPaJmmf+bWxICokqMClpCow -+AEJWq9h8sa9vwwoSNoYnZf0WVuJZ0mDgY7S9tKzOcuh7MEO6z1nUEHvDQg9D3IU -RYoF0heM0UXqaBVa61m7XqwTvqz1GEGX10U20K2Z8VUbrOzxf2ANe+ul6arQMeNJ -iKpWel6njL68B220jj2Zloqie43+MPaxoaPK1n+N14Ac78jmQxJY3NpyrYtXEStD -RjFlB5xUprp+oPS22ncdCTy9H2KPGnrTyf5GPEObVT/oEXmeJeoMMWqx48ulGMLa -eMuThZ5TquLgnc0mZeb+H2qj5/0oBDSf4yf4b/xmIbmkfToOZOEHWhorzXpowKUr -AniOoh9GhwAJbMxciuVwk/u/1m64zO+N3815qZ0w8YkfymId6fM1vPJ8PRE4WW7c -KKt+m34ZwsTQVOH8uvcwM501j4AsK7f10CUDTGrRu69KnDs+h+GZRjYj6YJMAEXM -JOzfIH6zB8X2moTPhXkVGzcm07fPYkxlHmoyVhnKxRkHnP1APVeT2xdH/APeWU4u -J3jFZ/iYUmb5fbT1WpyVtWHtZC9k2Cbe5TkmamPnnRZJ6qGZm+sMSKpMKWplH8P9 -rCwmVWBPUi4HB8EoMwIArRSBHWX8h/Ii233HK+8Xdn9BwNBwaeA9Gs2kKLaCf9zd -/ZxTBecpV6RdxYPNd3l1AoIBAQD/vKCVUdgLGpxbaHmgJu2C469NPd/TG61PiyGX -QFnpNQ6ZADdJ0bepOENUfvBYjCNngO+vGnGOjtJc9JBD7oAarp0DKlVV9N5RkPaB -XzZAp4N5iOgQKrAfVghoTod5UtJD9/hX5DIHQ0YDPK/+uh0QTF4RJ3mvPxxepJwI -CghlIUbfidIcyE2nwhSeAcZWFu1TB7gIyUEL1q4R7JMD8y76Ta5/45zf2RnJ7H8r -r+fDoebcFaaO9TuKdJIAlMOjbTv1gUK5DnSRaWw0Pz99Qov9oY3fEa+ZsozUd4cI -nz2GAF+RU7SJzvyCaUJj+sNuF63uz32J3Vq3ZjCVUgCoVEcLAoIBAQDPeentxcUy -b5u67POQwC2/md3z8+BDtUFDuksVssJZ77zm4ba5zguinGkCBBWY9OBqBpugjFYi -TTMLVdXbrw+2la5bsnESQp30rccQ8O1FGy9gE9lk76XpN9Nr55pQ/yx8oRIZNjyo -r9xsWYvHMEG5MzIpr4UUIblTk5WEVi50x1gF0v3GUFTZq3qyouE1isRuV3nroKS1 -dzAH17RHZgLpcdJxnStY1cpnc05DX0w6YvqYhVZd0JNAMSxA6WZgBd6IKtUsAkIs -pNQ8KGme4HluJUewGqxlC9iMh0GdCzkR+jCBGbuTEpYHNHPqgQB6kPUVh0IC06RX -VMkwS9X7DL1vAoIBAAmwaNkfb7MEABaKf8kskGUcIUEo7fj+nHNeDxi+7GkkhHgR -hQa79lxn8E0cPhjsvk6mmO4mb1T6Xkf9UBXyzFG2eeZrzS3jiCTI/D3skI6kihup -rzkllOSrCsiA6SsUkzjWBUe3MpoJ13Y572UUQhOjARFfUIHuPzHqxKqdTrIeL6Q6 -gYZrpF2NweA2qwAKAFXb/gH/NgKv0IqHTw6gQRBkrw7TXdcxT4PR/QN3t602zhta -iqPx8J6PShTRjhP8CICFtDR0sr/roZjdKJejVNB4NXrVHbUSCbnnCWuvNNKF4xkL -ddSezfxW5pgJISxjo0hf/h6iD1TRf1e48qNuBf8CggEAWfchassxQTeILbwFuaS7 -sbOEvP3pJzL3g+jKGjSTdfAw12TUmSkxfmeYWRlwTA0TKqaG4U05JFKZabbkrwfw -JlotavGreiGM4MZh5YSzPh4VovG4eL46ETD16npZPfoITlqBwJD2KKdpS0phBBR2 
-y1nZzJ2hdSNSe10pnmLIbjbqgkwFYvL+eAyVfdSHF3J+zuH7qiLUiSOPnjb4o2Um -qheDC2T9oN3DkKw9KZWvNjopM+3Nj4yb7V/lMpiCneytnBoGqbio/TbUGOnlMtFf -llVwCnrmekJyui0EVJbDPnpggfqojZOnnqQuB2e8z2j//T/Tbepb/spzGxAnT18s -3QKCAQAq87DBIeHVDsaWDJFt9omBF0Jj9+Lkx49/fv3cZ0HEjtK9yowoRmdURKEY -rpAQWOtpmnkbg/ye6aDkBLj+E9JqTW/rlGQHjhgQWKLP/f5Ifxv6kXPddsy6w0ia -ubF4AjTPtSQsbBoiXRZZ51ivZc4+DG2qCTGyP2UtnSeSnROFF77yVfeI5y5yOF4x -zndlUrC/Nee/BaUEkSsuLUmLSIHWfqrc0fP5bUaYQfd1z1OXEV2qR3xP7ZchfVr6 -s9zKecm/VnVMNJKqFST/hmo8HNy/g70u4lB1zcrvU4LazuAgRPqHHvQLPri02D73 -Xui8h1oRAmoqI4VjtPZr6VK0KkkI ------END PRIVATE KEY----- diff --git a/poc-grpc-agent/certs/client.crt b/poc-grpc-agent/certs/client.crt deleted file mode 100644 index 4d1f5dd..0000000 --- a/poc-grpc-agent/certs/client.crt +++ /dev/null @@ -1,26 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIEXTCCAkWgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBPMQswCQYDVQQGEwJVUzEL -MAkGA1UECAwCQ0ExCzAJBgNVBAcMAlNGMQ8wDQYDVQQKDAZDb3J0ZXgxFTATBgNV -BAMMDENvcnRleFJvb3RDQTAeFw0yNjAzMDIwODM5MTRaFw0yNzAzMDIwODM5MTRa -MFExCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTELMAkGA1UEBwwCU0YxDzANBgNV -BAoMBkNvcnRleDEXMBUGA1UEAwwOYWdlbnQtbm9kZS0wMDcwggEiMA0GCSqGSIb3 -DQEBAQUAA4IBDwAwggEKAoIBAQC+i4wP9bE2Iw9L/W1kCB1Z8xUzvPvRxUtlZk6P -qUggbi+hGhiQVbdZcuF4vh0wnFSR/dnjnIWwcHFSBGsDs4ReySX22SKNMY5ceuGt -CmeabhaVhIzRgXiK14/vncxcKKaEvko8d2pp42VWzj9nyvIMz1Ow2HS0JVeDSSkD -XDv5QSvaFkWUJYBRS+/2rieLB5/g2TP6ZQV2MNh54FXAnZwVnUQQEWe8uxsR5u99 -WoPaZLxCxuf0CbGmrfIWFmZ38UkHM00XX8Fbn2QQ++fUc6+wQy/OWhX1vkApHeWV -R4U8MHUrdZT9pot8LKcqxJupQfwJxGX+W617Fl5elp7HDVvlAgMBAAGjQjBAMB0G -A1UdDgQWBBQE4dDLQFNpgqmbZvhHisbhfevvBDAfBgNVHSMEGDAWgBTdPMqMdMZV -qWUSUupSXOjKYVip8zANBgkqhkiG9w0BAQsFAAOCAgEAjOu9M11TtPgaCK6IFx0H -7loGds+eYi6N/xf5gyXvH0WEuvmd6j4FR1lmNNr4XwK4eL+2mEuv1AQhpK3GTx2r -Qxna02ME0GGIMulsNytr9S55tPQMHBg/Qu7745p2xqEzbR9TQnzvq4PSyiOGhYSc -IgLBOgVUdaG1pel/V3ZSXstVuiBOvS4rrBduolkwDOI8S/q2ClBr0k8/RFqezeB5 -0NGwraIP3BxCMyaEzUFQXwAWqPD6RbdrvJ9u2B/IuMi/8xzdgwqgQQniq8V2w7dq -Dl7iGpOzZ9TnGud2sRoE02o/uNZnZ/xb4vZWJzzwZSsjvT1GLTVOCQwySp9KKn5e 
-1S+Ahe7VxMByAUzrdDK4CwMAVVp8+J5UasxV8iPZKZIA4U+pPSkDclPOp9kqGwTU -rvwsqA0SRhxvSuR3H/hYxGa/KA3P+ALW3+SI1Gx2AhX2yy/7Upnzx3hS1r60heAD -aRMtkOA/7UwhHryFpeYqreQlsK7b58yjxIkG5cuRp5J8eHC+FFIYD7IIrGltPTId -+Wwhkx3IN3y4ABRilkuQisUudF31IyIdQC6NBIfP1PJFOW0mC8lijcimSgjpsXXH -V2KwsNdur7uS+TgZzvGprAKFuyQMSxZ2BFZkL/L0rANyWqZvLLyjk7cOhoVEhnkY -o1R4SYn2w3pOHYJP493OnMQ= ------END CERTIFICATE----- diff --git a/poc-grpc-agent/certs/client.key b/poc-grpc-agent/certs/client.key deleted file mode 100644 index eb0ea90..0000000 --- a/poc-grpc-agent/certs/client.key +++ /dev/null @@ -1,28 +0,0 @@ ------BEGIN PRIVATE KEY----- -MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC+i4wP9bE2Iw9L -/W1kCB1Z8xUzvPvRxUtlZk6PqUggbi+hGhiQVbdZcuF4vh0wnFSR/dnjnIWwcHFS -BGsDs4ReySX22SKNMY5ceuGtCmeabhaVhIzRgXiK14/vncxcKKaEvko8d2pp42VW -zj9nyvIMz1Ow2HS0JVeDSSkDXDv5QSvaFkWUJYBRS+/2rieLB5/g2TP6ZQV2MNh5 -4FXAnZwVnUQQEWe8uxsR5u99WoPaZLxCxuf0CbGmrfIWFmZ38UkHM00XX8Fbn2QQ -++fUc6+wQy/OWhX1vkApHeWVR4U8MHUrdZT9pot8LKcqxJupQfwJxGX+W617Fl5e -lp7HDVvlAgMBAAECggEAEmtcdoV+ZDiW57Zfnvoe1j1mmQoFgMV08KAna4FGeOYV -4hmq8rbqgrHVhG3CVhrinPtAVx2gGcqA1dgZ/TFbFCuXKSnbynDWLW/uhWL6WWYX -dkwqLa15mNhWMGhdYzJFyJK5i+dSSNqjxvSokfC/Hchj83Y1L93lPAp0NcAyhvlg -XavNEGtlA1/eiV5m8p+2tPe2+FD3zIKyU0y6ktAgITquQ1S4PCLgdezPawIH0udE -sS5Czp7OVNfT99+iXlnCNtNRlHFliXqlsHkv1dP+Jb391WHBTHDFMtCgl6Ylqbdw -AmDz9VqSwhFl//ANtF43inNlMQCAKC3TGj1KU7uJCQKBgQDhOBaR4xdfnZPPXFwp -+HxpO9pFD20fh8Jveri+9sx5CXvQ+PO8oHW9kEkBoiYeNuFD8eSNmVmiQ4r8AoHx -i7Ku77UTixQ3/aljgkKihBsuVpmPYLOHMDyXqYBmVgosernf8W9mxxw5NpDF2Fv0 -AacEGzrmiFfrvvBlF9SmXC9oDQKBgQDYlksyQNTZ2Cmjq1hP0tPlQ5AvcHtK/A8x -gWPdKJul5aOJNrIyzyBBhWeGZZY2oC+rF3mqDitHVlxmX+NI2PbiPQLmoO9QefeH -6X89xumDaWVvCJ1obPz4X4mjzkwiQoPMLhMjXGmGbIQq5I4mBsBHlLX7danXOaYW -sQHF8ia1OQKBgEHcYhVFgIdQkHH6Q2VuqgsoGptJeJLY444wKCiICaF3mYKx2q0V -i3jk4cSdg2IgkF2LNlgGOUUPVWx+2zskrBsmNCDD8iSxhEB6TjwyP7ScVImuMLHe -9Ekxoz/J922scgDAHODEZ0d/4nRI4hMIDKxRvja+Nl/VVX1qq5/+o0pdAoGBAIr7 -zt89mRj9zKKZhn8atBzvwSugC44vt3Q2KqY1s8O+W7XmYm2WWoWRHMCymbUOD+jD 
-lLAajY0mjv6m04vgpnTBYAYtCcTjr4MIxD0ZUqmgTZX1ukTTg3XCoOl7rYFim36/ -pkpPt+up4RpBNjKSrHqCpFDrzYQuGzV+ervSSyKJAoGActtXHBiosnrzM3Z8Kyvr -nswYclzKYLUAKc204Ml/6umg8FfP42S61BCm+be0V0B6WziI3I+UGGQ76dG/zgUA -XHRSrunCsPQHmmjco4yBIoEnDv0d5BIPH6mLk7VAGq/vs9el+qtbnJ/cvGgUiPwu -mfIRLgBX2r8XMhsFt3k0Tv4= ------END PRIVATE KEY----- diff --git a/poc-grpc-agent/certs/server.crt b/poc-grpc-agent/certs/server.crt deleted file mode 100644 index 20ccfa3..0000000 --- a/poc-grpc-agent/certs/server.crt +++ /dev/null @@ -1,26 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIEdDCCAlygAwIBAgIBATANBgkqhkiG9w0BAQsFADBPMQswCQYDVQQGEwJVUzEL -MAkGA1UECAwCQ0ExCzAJBgNVBAcMAlNGMQ8wDQYDVQQKDAZDb3J0ZXgxFTATBgNV -BAMMDENvcnRleFJvb3RDQTAeFw0yNjAzMDIwODM5MTNaFw0yNzAzMDIwODM5MTNa -MEwxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTELMAkGA1UEBwwCU0YxDzANBgNV -BAoMBkNvcnRleDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEF -AAOCAQ8AMIIBCgKCAQEA2E/WBtD24rTo+OAMoGRMDoa985v2Oz4macDee+y6OPud -VQXYZ3Tje9bjj9RN01wZI59Cze0o2uq6QVPtvndwGUkXn8+zAQ644RrkxBUewU6Q -PYwQ73Fqeq+NNkygu1yxn1sndVu6lSlrG5RaTq6qDv+N0bxQFkqojFck4wwxcH9K -XRzFVWWHNU7jzcR9hLlPI5ohF744kGTlfVYOYtJgb4D7nykuX+ZuksKS9AIoA6Zm -jo7OJCoPeH09fbDqw01S74BEOvnazu29RVrQtPB2EQFbxjI073gCD3zdMYXuEOpn -yVg1hTv9T4dJxZ/ueiceVdb0Lh7o82MyMMn3+1XiewIDAQABo14wXDAaBgNVHREE -EzARgglsb2NhbGhvc3SHBH8AAAEwHQYDVR0OBBYEFH/6Lt++kf3MwCV4juQacuR0 -jGJjMB8GA1UdIwQYMBaAFN08yox0xlWpZRJS6lJc6MphWKnzMA0GCSqGSIb3DQEB -CwUAA4ICAQBxkBwlfEYbWl+JQe0NbFzQdFNAUbzRs/H7O4y36w91sPOaUdifyzV+ -ZAMIEDrV+9YLW8BNq6u9ADbZmhQ0QYWw4Awudvu3/IJpR3ItsBY/byciGg4eXK9G -oF2Pu9oA7m7Ca6bqSqU0j5uNMDF46xH99uiAJ/w4VhFDhmy3oTG1P5/ryayeNuDi -+3t7fTgDOOKNrbIWQHDwzhTL2Q40Gl7uOzqrdKGgkn6e7wbJmPdpzi4Aw45zhTbw -ujcoywYhAdULsIXEhEY92SwiV/yLhHWQ0PyeafczvjSWbXpm9y3yLTf4Jh299wJT -ECJDMntwEBCfHBKhkCcDVBWjYrlZrKuDFUnJcyklRySVrf0KHrKCu3trYx6GyJvn -VLpHra4ZjAiH461gsVCvZFesvTBzWa1JldGuNU77TUM4viLsy0Y8nyQuEQEDze0i -7Pit1GqmxR6j5vJNxkPcz7iypMrbWa7KI+t95OvkVYzW2swgdne78YsY4W3YNcdD -5y6dYW+TuDNhn7UbArMev39VKZfazgyHUNRp8PZWU5f9xCMmWl8jeJFjCPacv4J3 
-JFGCg/QmamVEYuk6/xn5UogM0NHL5DpMwSNEGf+BPSUz0bPKFuKFtQxUjfkFdomS
-MI4q2dGTMNBRsRYB8O6gyDlq7PPqnXJ1Iy+fXgBwaWAk2nXibRCkvg==
------END CERTIFICATE-----
diff --git a/poc-grpc-agent/certs/server.key b/poc-grpc-agent/certs/server.key
deleted file mode 100644
index 4b011f4..0000000
--- a/poc-grpc-agent/certs/server.key
+++ /dev/null
@@ -1,28 +0,0 @@
------BEGIN PRIVATE KEY-----
-MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDYT9YG0PbitOj4
-4AygZEwOhr3zm/Y7PiZpwN577Lo4+51VBdhndON71uOP1E3TXBkjn0LN7Sja6rpB
-U+2+d3AZSRefz7MBDrjhGuTEFR7BTpA9jBDvcWp6r402TKC7XLGfWyd1W7qVKWsb
-lFpOrqoO/43RvFAWSqiMVyTjDDFwf0pdHMVVZYc1TuPNxH2EuU8jmiEXvjiQZOV9
-Vg5i0mBvgPufKS5f5m6SwpL0AigDpmaOjs4kKg94fT19sOrDTVLvgEQ6+drO7b1F
-WtC08HYRAVvGMjTveAIPfN0xhe4Q6mfJWDWFO/1Ph0nFn+56Jx5V1vQuHujzYzIw
-yff7VeJ7AgMBAAECggEABTSuqn8bYfYHhN+VuN/J8pOMrvgWeJ3bvFx66OqX5oGd
-62B2v1zfkYxouH5qS0rmA/wCOeh9yxtnRwIZFBfVxECvSvndODT2YsAMKg089XeQ
-rOGhEdIlDG6ZXjCdsag0NbAkeGgFg9okGvl6snZJ3wCIlVRhbQmw7y7RRP/MdMx8
-iYENSj1WHV+GvU8MbA20KD3QsTEYEakIwBr716LwGlXtx2ju77+JG+fqeFOx7XuQ
-nqBTgurWsFPJ92K0KlszCfskHU1LALLRlgJpbw0Ez0GVVFpGkXCLmeguADnGcqcc
-L6bRNODDPgt7NOba9L6fifTwbwusotroaqVOyzTJgQKBgQD6UFchQ52Gswp8p8kz
-hwHj/m8x3TccY1fbTDxB3JTtl0gzU7eguSBr66fYwz/hPxV6o5EJVYSIHIxzZcT1
-rINjAhRUUiyGfPHtrg3RW+BL5UGcdCUunxQxuS8mzZSAXo/8GzUCQirBWXXXTKU6
-VilG5Av+kJobcQuV5hl5h+ufQQKBgQDdOcNB9+0htBUrls3LqSZ1fORgTc4o6YNQ
-z3NwYCH4YcQUmQ6GDnesbiEax65X0VOkRZZj/pRjQ19GfDqpJ2laMZAZUR6m1F26
-bajPtzzy9Wtb43ki7Uoyu3DN98W2Mrj3N0V9JRAYt4iBnQ5rona8LB64cdbykwap
-3z+ZQcMOuwKBgQDvOOLcVotw1SFbmtruFMPYyiwowprN1Z98ZPJdm1r1ahRFgWfI
-AcUbfr8NqSQet7RmXXXaLtGXZ3lPO96tT+7NK4qUP2hwK27m0OZBxIWq4vH+fP2f
-/cZF8w4+DlEzEayXqsTRYL0NxdqaJZTvGLMgHgfchQPS4AnLe3mzLRQhQQKBgBHr
-xPqKGAab7P8b903hRQFNfb6jbuj3ibC5LXPUBcx2NwkoIPoRH/ay8TGXLXNlvK3Z
-CUbOb7zez1AJbkMXszwgObkjTiVbnMAmc/9nq6NO6ESIV97RdCpJ7uhwgu6wizVT
-n+h0YSpva7p8O5fSkGXL+S0d47jA2lBWinNi1WdTAoGBAIrrAO/BSznJ9VIiw35e
-EExZuUfSavdEpmTaMtn/oo7a936JIxYAsaUALXXKKMfoAiFHyM6cRpTO6+E+JL3E
-vXXrjW0Z3A2/n95QsatNd8fOOsGedw0JtjyFbFCuYjGIgAK1vm9lSvqyZ6OvjMuY
-9GEr4qj2iUmyJgJs+c/3h3o4
------END PRIVATE KEY-----
diff --git a/poc-grpc-agent/compile_protos.sh b/poc-grpc-agent/compile_protos.sh
deleted file mode 100755
index 1af7aa3..0000000
--- a/poc-grpc-agent/compile_protos.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/bash
-python -m grpc_tools.protoc -I./protos --python_out=. --grpc_python_out=. ./protos/agent.proto
-echo "Protobuf compiled successfully."
diff --git a/poc-grpc-agent/orchestrator/__init__.py b/poc-grpc-agent/orchestrator/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/poc-grpc-agent/orchestrator/__init__.py
+++ /dev/null
diff --git a/poc-grpc-agent/orchestrator/app.py b/poc-grpc-agent/orchestrator/app.py
deleted file mode 100644
index d8ecd77..0000000
--- a/poc-grpc-agent/orchestrator/app.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import grpc
-import time
-import os
-import sys
-from concurrent import futures
-
-# Add root to path to find protos
-sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
-
-from protos import agent_pb2, agent_pb2_grpc
-from orchestrator.config import (
-    CERT_CA, CERT_SERVER_CRT, CERT_SERVER_KEY,
-    GRPC_HOST, GRPC_PORT, SIMULATION_DELAY_SEC, MAX_WORKERS
-)
-from orchestrator.services.grpc_server import AgentOrchestrator
-
-def serve():
-    print(f"[🛡️] Boss Plane Orchestrator Starting on {GRPC_HOST}:{GRPC_PORT}...")
-
-    # 1. SSL/TLS Setup
-    with open(CERT_SERVER_KEY, 'rb') as f: pkey = f.read()
-    with open(CERT_SERVER_CRT, 'rb') as f: cert = f.read()
-    with open(CERT_CA, 'rb') as f: ca = f.read()
-    creds = grpc.ssl_server_credentials([(pkey, cert)], ca, True)
-
-    # 2. Server Initialization
-    server = grpc.server(futures.ThreadPoolExecutor(max_workers=MAX_WORKERS))
-    orch = AgentOrchestrator()
-    agent_pb2_grpc.add_AgentOrchestratorServicer_to_server(orch, server)
-
-    server.add_secure_port(f'{GRPC_HOST}:{GRPC_PORT}', creds)
-
-    # 3. Start
-    server.start()
-    print("[🛡️] Boss Plane Refactored & Online.", flush=True)
-
-    # 4. Simulation Launcher
-    # (In Production, this would be an API interface or Webhook handler)
-    _run_simulation(orch)
-
-    server.wait_for_termination()
-
-def _run_simulation(orch):
-    """Refactored AI Simulation logic using the TaskAssistant service."""
-    time.sleep(SIMULATION_DELAY_SEC)
-    print("\n[🧠] AI Simulation Start...", flush=True)
-
-    # Collaborative Mesh Test: Pushing Shared Work
-    print("[📦📤] Pushing shared tasks to Global Work Pool...")
-    orch.pool.push_work("shared-mesh-001", "uname -a")
-    time.sleep(2)
-    orch.pool.push_work("shared-mesh-002", "uptime")
-    time.sleep(5) # Let nodes claim
-
-    active_nodes = orch.registry.list_nodes()
-    if not active_nodes:
-        print("[!] No nodes available for direct task simulation.")
-        return
-    target_node = active_nodes[0]
-
-    # Ghost Mirror Sync Phase 1 & 2
-    print("\n[🧠] AI Phase: Ghost Mirror Workspace Sync (Multi-Node Broadcast)...")
-    for node_id in active_nodes:
-        orch.assistant.push_workspace(node_id, "test-session-001")
-        # Ensure Phase 5 recovery tracking works
-        orch.assistant.push_workspace(node_id, "recovery-session")
-
-    time.sleep(2)
-    # Start watching only on the first node to test broadcast to others
-    orch.assistant.control_sync(active_nodes[0], "test-session-001", action="START")
-
-    # Phase 3: Context-Aware Skills (Shell + Browser)
-    print("\n[🧠] AI Phase 3: Executing Context-Aware Shell Task...")
-    res_single = orch.assistant.dispatch_single(target_node, 'ls -la', session_id="test-session-001")
-    print(f"  CWD Listing Output: {res_single}", flush=True)
-
-    # Phase 3: LOCK Test (Simulate an AI edit phase where user edits are blocked)
-    time.sleep(10)
-    print("\n[🔒] Orchestrator: Locking Node 0 to prevent user interference (Phase 3)...")
-    orch.assistant.lock_workspace(active_nodes[0], "test-session-001")
-
-    # Phase 4: Browser Bridge
-    print("\n[🧠] AI Phase 4: Navigating Browser (Antigravity Bridge)...")
-    nav_action = agent_pb2.BrowserAction(
-        action=agent_pb2.BrowserAction.NAVIGATE,
-        url="https://google.com",
-        session_id="test-session-001"
-    )
-    res_browser = orch.assistant.dispatch_browser(target_node, nav_action, session_id="test-session-001")
-    print(f"  Browser Result: {res_browser}", flush=True)
-
-    # Stay alive for diagnostics
-    time.sleep(55)
-
-    # Phase 5: Distributed Drift Recovery
-    print("\n[🧠] AI Phase 5: Re-triggering Sync for Drift Recovery (Phase 5)...")
-    orch.assistant.push_workspace(target_node, "test-session-001")
-
-    time.sleep(10)
-    # Phase 4 Pro: Perception & Evaluation
-    print("\n[🧠] AI Phase 4 Pro: Perception & Advanced Logic...")
-    a11y_action = agent_pb2.BrowserAction(
-        action=agent_pb2.BrowserAction.GET_A11Y,
-        session_id="antigravity-session-1"
-    )
-    res_a11y = orch.assistant.dispatch_browser(target_node, a11y_action)
-    print(f"  A11y Result: {res_a11y.get('browser', {}).get('a11y')}")
-
-    eval_action = agent_pb2.BrowserAction(
-        action=agent_pb2.BrowserAction.EVAL,
-        text="window.performance.now()",
-        session_id="antigravity-session-1"
-    )
-    res_eval = orch.assistant.dispatch_browser(target_node, eval_action)
-    print(f"  Eval Result: {res_eval.get('browser', {}).get('eval')}")
-
-    # Real-time Events
-    print("\n[🧠] AI Phase 4 Pro: Triggering Real-time Events (Tunneling)...")
-    trigger_action = agent_pb2.BrowserAction(
-        action=agent_pb2.BrowserAction.EVAL,
-        text="console.log('Refactored Hello!'); fetch('https://example.com/api/ping');",
-        session_id="antigravity-session-1"
-    )
-    orch.assistant.dispatch_browser(target_node, trigger_action)
-
-if __name__ == '__main__':
-    serve()
diff --git a/poc-grpc-agent/orchestrator/config.py b/poc-grpc-agent/orchestrator/config.py
deleted file mode 100644
index 2493988..0000000
--- a/poc-grpc-agent/orchestrator/config.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import os
-
-# 12-Factor Config: Load from environment variables with defaults
-SECRET_KEY = os.getenv("ORCHESTRATOR_SECRET_KEY", "cortex-secret-shared-key")
-
-# Network Settings
-GRPC_HOST = os.getenv("GRPC_HOST", "[::]")
-GRPC_PORT = os.getenv("GRPC_PORT", "50051")
-
-# Certificate Paths
-CERT_CA = os.getenv("CERT_CA", "certs/ca.crt")
-CERT_SERVER_CRT = os.getenv("CERT_SERVER_CRT", "certs/server.crt")
-CERT_SERVER_KEY = os.getenv("CERT_SERVER_KEY", "certs/server.key")
-
-# Operational Settings
-SIMULATION_DELAY_SEC = int(os.getenv("SIMULATION_DELAY_SEC", "10"))
-MAX_WORKERS = int(os.getenv("MAX_WORKERS", "10"))
diff --git a/poc-grpc-agent/orchestrator/core/__init__.py b/poc-grpc-agent/orchestrator/core/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/poc-grpc-agent/orchestrator/core/__init__.py
+++ /dev/null
diff --git a/poc-grpc-agent/orchestrator/core/journal.py b/poc-grpc-agent/orchestrator/core/journal.py
deleted file mode 100644
index b223f2f..0000000
--- a/poc-grpc-agent/orchestrator/core/journal.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import threading
-
-class TaskJournal:
-    """State machine for tracking tasks through their asynchronous lifecycle."""
-    def __init__(self):
-        self.lock = threading.Lock()
-        self.tasks = {} # task_id -> { "event": Event, "result": None, "node_id": str }
-
-    def register(self, task_id, node_id=None):
-        """Initializes state for a new task and returns its notification event."""
-        event = threading.Event()
-        with self.lock:
-            self.tasks[task_id] = {"event": event, "result": None, "node_id": node_id}
-        return event
-
-    def fulfill(self, task_id, result):
-        """Processes a result from a node and triggers the waiting thread."""
-        with self.lock:
-            if task_id in self.tasks:
-                self.tasks[task_id]["result"] = result
-                self.tasks[task_id]["event"].set()
-                return True
-            return False
-
-    def get_result(self, task_id):
-        """Returns the result associated with the given task ID."""
-        with self.lock:
-            data = self.tasks.get(task_id)
-            return data["result"] if data else None
-
-    def pop(self, task_id):
-        """Removes the task's state from the journal."""
-        with self.lock:
-            return self.tasks.pop(task_id, None)
diff --git a/poc-grpc-agent/orchestrator/core/mirror.py b/poc-grpc-agent/orchestrator/core/mirror.py
deleted file mode 100644
index 0a60bb3..0000000
--- a/poc-grpc-agent/orchestrator/core/mirror.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import os
-import shutil
-import hashlib
-from typing import Dict, List
-from shared_core.ignore import CortexIgnore
-from protos import agent_pb2
-
-class GhostMirrorManager:
-    """Manages local server-side copies of node workspaces."""
-    def __init__(self, storage_root="/app/data/mirrors"):
-        self.storage_root = storage_root
-        if not os.path.exists(self.storage_root):
-            os.makedirs(self.storage_root, exist_ok=True)
-
-    def get_ignore_filter(self, session_id: str) -> CortexIgnore:
-        """Returns a CortexIgnore instance for a session."""
-        return CortexIgnore(self.get_workspace_path(session_id))
-
-    def get_workspace_path(self, session_id: str) -> str:
-        """Returns the local absolute path for a session's mirror."""
-        path = os.path.join(self.storage_root, session_id)
-        os.makedirs(path, exist_ok=True)
-        return path
-
-    def write_file_chunk(self, session_id: str, file_payload: agent_pb2.FilePayload):
-        """Writes a chunk of data to the local mirror."""
-        workspace = self.get_workspace_path(session_id)
-
-        # Phase 3 ignore filter
-        ignore_filter = self.get_ignore_filter(session_id)
-        if ignore_filter.is_ignored(file_payload.path):
-            print(f"  [📁🚷] Ignoring write to {file_payload.path}")
-            return
-
-        # Prevent path traversal
-        safe_path = os.path.normpath(os.path.join(workspace, file_payload.path))
-        if not safe_path.startswith(workspace):
-            raise ValueError(f"Malicious path detected: {file_payload.path}")
-
-        os.makedirs(os.path.dirname(safe_path), exist_ok=True)
-
-        mode = "ab" if file_payload.chunk_index > 0 else "wb"
-        with open(safe_path, mode) as f:
-            f.write(file_payload.chunk)
-
-        if file_payload.is_final and file_payload.hash:
-            self._verify_hash(safe_path, file_payload.hash)
-
-    def generate_manifest(self, session_id: str) -> agent_pb2.DirectoryManifest:
-        """Generates a manifest of the current local mirror state."""
-        workspace = self.get_workspace_path(session_id)
-        ignore_filter = self.get_ignore_filter(session_id)
-        files = []
-        for root, dirs, filenames in os.walk(workspace):
-            # Efficiently prune skipped directories
-            dirs[:] = [d for d in dirs if not ignore_filter.is_ignored(os.path.relpath(os.path.join(root, d), workspace))]
-
-            for filename in filenames:
-                abs_path = os.path.join(root, filename)
-                rel_path = os.path.relpath(abs_path, workspace)
-
-                if ignore_filter.is_ignored(rel_path):
-                    continue
-
-                with open(abs_path, "rb") as f:
-                    file_hash = hashlib.sha256(f.read()).hexdigest()
-
-                files.append(agent_pb2.FileInfo(
-                    path=rel_path,
-                    size=os.path.getsize(abs_path),
-                    hash=file_hash,
-                    is_dir=False
-                ))
-        return agent_pb2.DirectoryManifest(root_path=workspace, files=files)
-
-    def _verify_hash(self, path: str, expected_hash: str):
-        # Read via a context manager so the handle is always closed
-        with open(path, "rb") as f:
-            content = f.read()
-        actual_hash = hashlib.sha256(content).hexdigest()
-        if actual_hash != expected_hash:
-            print(f"[⚠️] Hash Mismatch for {path}: expected {expected_hash}, got {actual_hash}")
-            # In a real system, we'd trigger a re-download/re-sync
diff --git a/poc-grpc-agent/orchestrator/core/pool.py b/poc-grpc-agent/orchestrator/core/pool.py
deleted file mode 100644
index f53a1db..0000000
--- a/poc-grpc-agent/orchestrator/core/pool.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import threading
-
-class GlobalWorkPool:
-    """Thread-safe pool of unassigned tasks that can be claimed by any node."""
-    def __init__(self):
-        self.lock = threading.Lock()
-        self.available = {} # task_id -> payload
-        self.on_new_work = None # Callback to notify nodes
-
-    def push_work(self, task_id, payload):
-        """Adds new task to global discovery pool."""
-        with self.lock:
-            self.available[task_id] = payload
-        print(f"  [📦] New Shared Task: {task_id}")
-        # Invoke the callback outside the lock: it re-enters the pool via
-        # list_available(), which would deadlock on this non-reentrant Lock.
-        if self.on_new_work:
-            self.on_new_work(task_id)
-
-    def claim(self, task_id, node_id):
-        """Allows a node to pull a specific task from the pool."""
-        with self.lock:
-            if task_id in self.available:
-                print(f"  [📦] Task {task_id} Claimed by {node_id}")
-                return True, self.available.pop(task_id)
-            return False, None
-
-    def list_available(self):
-        """Returns IDs of all currently available unclaimed tasks."""
-        with self.lock:
-            return list(self.available.keys())
diff --git a/poc-grpc-agent/orchestrator/core/registry.py b/poc-grpc-agent/orchestrator/core/registry.py
deleted file mode 100644
index 509f412..0000000
--- a/poc-grpc-agent/orchestrator/core/registry.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import threading
-import queue
-import time
-
-class AbstractNodeRegistry:
-    """Interface for finding and tracking Agent Nodes."""
-    def register(self, node_id: str, q: queue.Queue, metadata: dict): raise NotImplementedError
-    def update_stats(self, node_id: str, stats: dict): raise NotImplementedError
-    def get_best(self) -> str: raise NotImplementedError
-    def get_node(self, node_id: str) -> dict: raise NotImplementedError
-
-class MemoryNodeRegistry(AbstractNodeRegistry):
-    """In-memory implementation of the Node Registry."""
-    def __init__(self):
-        self.lock = threading.Lock()
-        self.nodes = {} # node_id -> { stats: {}, queue: queue, metadata: {} }
-        self.subscribers = set() # WebSocket connection objects
-
-    def register(self, node_id, q, metadata):
-        with self.lock:
-            self.nodes[node_id] = {"stats": {}, "queue": q, "metadata": metadata}
-        print(f"[📋] Registered Agent Node: {node_id}")
-
-    def update_stats(self, node_id, stats):
-        with self.lock:
-            if node_id in self.nodes:
-                self.nodes[node_id]["stats"].update(stats)
-
-    def get_best(self):
-        """Picks the agent with the lowest active worker count."""
-        with self.lock:
-            if not self.nodes: return None
-            # Simple heuristic: sort by active worker count
-            return sorted(self.nodes.items(), key=lambda x: x[1]["stats"].get("active_worker_count", 999))[0][0]
-
-    def get_node(self, node_id):
-        with self.lock:
-            return self.nodes.get(node_id)
-
-    def list_nodes(self):
-        with self.lock:
-            return list(self.nodes.keys())
-
-    def subscribe(self, websocket):
-        with self.lock:
-            self.subscribers.add(websocket)
-
-    def unsubscribe(self, websocket):
-        with self.lock:
-            if websocket in self.subscribers:
-                self.subscribers.remove(websocket)
-
-    def emit(self, node_id, event, data, task_id=None):
-        """Broadcasts an event to all attached UI clients."""
-        msg = {
-            "node_id": node_id,
-            "event": event,
-            "data": data,
-            "task_id": task_id,
-            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
-        }
-        # In a real app, this would use an async-friendly event loop or Redis PUB/SUB.
-        # Here we just iterate. Note: caller is usually the gRPC thread.
-        import json
-        payload = json.dumps(msg)
-
-        # We need to be careful with async WebSockets from a sync gRPC thread.
-        # This implementation assumes the WebSocket handler will poll a queue
-        # or we use a separate 'Bridge' to push from sync to async.
-        # For the POC, we'll log it; the AI Hub bridge will handle the actual WS push.
-        print(f"[📡 EventBus] {node_id} -> {event}: {payload[:100]}...")
-
-        # Internal registry record (optional): keep the 50 most recent events
-        if node_id not in self.nodes:
-            return
-        self.nodes[node_id].setdefault("events", [])
-        self.nodes[node_id]["events"] = ([msg] + self.nodes[node_id]["events"])[:50]
diff --git a/poc-grpc-agent/orchestrator/services/__init__.py b/poc-grpc-agent/orchestrator/services/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/poc-grpc-agent/orchestrator/services/__init__.py
+++ /dev/null
diff --git a/poc-grpc-agent/orchestrator/services/assistant.py b/poc-grpc-agent/orchestrator/services/assistant.py
deleted file mode 100644
index 245ada6..0000000
--- a/poc-grpc-agent/orchestrator/services/assistant.py
+++ /dev/null
@@ -1,199 +0,0 @@
-import time
-import json
-import os
-import hashlib
-from orchestrator.utils.crypto import sign_payload, sign_browser_action
-from protos import agent_pb2
-
-class TaskAssistant:
-    """The 'Brain' of the Orchestrator: High-Level AI API for Dispatching Tasks."""
-    def __init__(self, registry, journal, pool, mirror=None):
-        self.registry = registry
-        self.journal = journal
-        self.pool = pool
-        self.mirror = mirror
-        self.memberships = {} # session_id -> list(node_id)
-
-    def push_workspace(self, node_id, session_id):
-        """Initial unidirectional push from server ghost mirror to a node."""
-        node = self.registry.get_node(node_id)
-        if not node or not self.mirror: return
-
-        print(f"[📁📤] Initiating Workspace Push for Session {session_id} to {node_id}")
-
-        # Track for recovery
-        if session_id not in self.memberships:
-            self.memberships[session_id] = []
-        if node_id not in self.memberships[session_id]:
-            self.memberships[session_id].append(node_id)
-
-        manifest = self.mirror.generate_manifest(session_id)
-
-        # 1. Send Manifest
-        node["queue"].put(agent_pb2.ServerTaskMessage(
-            file_sync=agent_pb2.FileSyncMessage(
-                session_id=session_id,
-                manifest=manifest
-            )
-        ))
-
-        # 2. Send File Data
-        for file_info in manifest.files:
-            if not file_info.is_dir:
-                self.push_file(node_id, session_id, file_info.path)
-
-    def push_file(self, node_id, session_id, rel_path):
-        """Pushes a specific file to a node (used for drift recovery)."""
-        node = self.registry.get_node(node_id)
-        if not node: return
-
-        workspace = self.mirror.get_workspace_path(session_id)
-        abs_path = os.path.join(workspace, rel_path)
-
-        if not os.path.exists(abs_path):
-            print(f"  [📁❓] Requested file {rel_path} not found in mirror")
-            return
-
-        # Keep the chunk loop inside the context manager: reading from the
-        # handle after the with-block exits would raise on a closed file.
-        with open(abs_path, "rb") as f:
-            full_hash = hashlib.sha256(f.read()).hexdigest()
-            f.seek(0)
-
-            index = 0
-            while True:
-                chunk = f.read(1024 * 1024) # 1MB chunks
-                is_final = len(chunk) < 1024 * 1024
-
-                node["queue"].put(agent_pb2.ServerTaskMessage(
-                    file_sync=agent_pb2.FileSyncMessage(
-                        session_id=session_id,
-                        file_data=agent_pb2.FilePayload(
-                            path=rel_path,
-                            chunk=chunk,
-                            chunk_index=index,
-                            is_final=is_final,
-                            hash=full_hash if is_final else ""
-                        )
-                    )
-                ))
-
-                if is_final or not chunk:
-                    break
-                index += 1
-
-    def reconcile_node(self, node_id):
-        """Forces a re-sync check for all sessions this node belongs to."""
-        print(f"  [📁🔄] Triggering Resync Check for {node_id}...")
-        for sid, nodes in self.memberships.items():
-            if node_id in nodes:
-                # Re-push manifest to trigger node-side drift check
-                self.push_workspace(node_id, sid)
-
-    def broadcast_file_chunk(self, session_id: str, sender_node_id: str, file_payload):
-        """Broadcasts a file chunk received from one node to all other nodes in the mesh."""
-        print(f"  [📁📢] Broadcasting {file_payload.path} from {sender_node_id} to other nodes...")
-        for node_id in self.registry.list_nodes():
-            if node_id == sender_node_id:
-                continue
-
-            node = self.registry.get_node(node_id)
-            if not node:
-                continue
-
-            # Forward the exact same FileSyncMessage
-            node["queue"].put(agent_pb2.ServerTaskMessage(
-                file_sync=agent_pb2.FileSyncMessage(
-                    session_id=session_id,
-                    file_data=file_payload
-                )
-            ))
-
-    def lock_workspace(self, node_id, session_id):
-        """Disables user-side synchronization from a node during AI refactors."""
-        self.control_sync(node_id, session_id, action="LOCK")
-
-    def unlock_workspace(self, node_id, session_id):
-        """Re-enables user-side synchronization from a node."""
-        self.control_sync(node_id, session_id, action="UNLOCK")
-
-    def request_manifest(self, node_id, session_id, path="."):
-        """Requests a full directory manifest from a node for drift checking."""
-        node = self.registry.get_node(node_id)
-        if not node: return
-        node["queue"].put(agent_pb2.ServerTaskMessage(
-            file_sync=agent_pb2.FileSyncMessage(
-                session_id=session_id,
-                control=agent_pb2.SyncControl(action=agent_pb2.SyncControl.REFRESH_MANIFEST, path=path)
-            )
-        ))
-
-    def control_sync(self, node_id, session_id, action="START", path="."):
-        """Sends a SyncControl command to a node (e.g. START_WATCHING, LOCK)."""
-        node = self.registry.get_node(node_id)
-        if not node: return
-
-        action_map = {
-            "START": agent_pb2.SyncControl.START_WATCHING,
-            "STOP": agent_pb2.SyncControl.STOP_WATCHING,
-            "LOCK": agent_pb2.SyncControl.LOCK,
-            "UNLOCK": agent_pb2.SyncControl.UNLOCK
-        }
-        proto_action = action_map.get(action, agent_pb2.SyncControl.START_WATCHING)
-
-        node["queue"].put(agent_pb2.ServerTaskMessage(
-            file_sync=agent_pb2.FileSyncMessage(
-                session_id=session_id,
-                control=agent_pb2.SyncControl(action=proto_action, path=path)
-            )
-        ))
-
-    def dispatch_single(self, node_id, cmd, timeout=30, session_id=None):
-        """Dispatches a shell command to a specific node."""
-        node = self.registry.get_node(node_id)
-        if not node: return {"error": f"Node {node_id} Offline"}
-
-        tid = f"task-{int(time.time()*1000)}"
-        event = self.journal.register(tid, node_id)
-
-        # 12-Factor Signing Logic
-        sig = sign_payload(cmd)
-        req = agent_pb2.ServerTaskMessage(task_request=agent_pb2.TaskRequest(
-            task_id=tid, payload_json=cmd, signature=sig, session_id=session_id))
-
-        print(f"[📤] Dispatching shell {tid} to {node_id}")
-        node["queue"].put(req)
-
-        if event.wait(timeout):
-            res = self.journal.get_result(tid)
-            self.journal.pop(tid)
-            return res
-        self.journal.pop(tid)
-        return {"error": "Timeout"}
-
-    def dispatch_browser(self, node_id, action, timeout=60, session_id=None):
-        """Dispatches a browser action to a directed session node."""
-        node = self.registry.get_node(node_id)
-        if not node: return {"error": f"Node {node_id} Offline"}
-
-        tid = f"br-{int(time.time()*1000)}"
-        event = self.journal.register(tid, node_id)
-
-        # Secure Browser Signing
-        sig = sign_browser_action(
-            agent_pb2.BrowserAction.ActionType.Name(action.action),
-            action.url,
-            action.session_id
-        )
-
-        req = agent_pb2.ServerTaskMessage(task_request=agent_pb2.TaskRequest(
-            task_id=tid, browser_action=action, signature=sig, session_id=session_id))
-
-        print(f"[🌐📤] Dispatching browser {tid} to {node_id}")
-        node["queue"].put(req)
-
-        if event.wait(timeout):
-            res = self.journal.get_result(tid)
-            self.journal.pop(tid)
-            return res
-        self.journal.pop(tid)
-        return {"error": "Timeout"}
diff --git a/poc-grpc-agent/orchestrator/services/grpc_server.py b/poc-grpc-agent/orchestrator/services/grpc_server.py
deleted file mode 100644
index 023599b..0000000
--- a/poc-grpc-agent/orchestrator/services/grpc_server.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import threading
-import queue
-import time
-import os
-try:
-    import requests as _requests # optional; only needed for M4 token validation
-except ImportError:
-    _requests = None
-from protos import agent_pb2, agent_pb2_grpc
-from orchestrator.core.registry import MemoryNodeRegistry
-from orchestrator.core.journal import TaskJournal
-from orchestrator.core.pool import GlobalWorkPool
-from orchestrator.core.mirror import GhostMirrorManager
-from orchestrator.services.assistant import TaskAssistant
-from orchestrator.utils.crypto import sign_payload
-
-# M4: Hub HTTP API for invite-token validation
-# Calls POST /nodes/validate-token before accepting any SyncConfiguration.
-# Set HUB_API_URL=http://localhost:8000 (leave it unset to skip validation in dev mode).
-HUB_API_URL = os.getenv("HUB_API_URL", "") # empty = skip validation (dev)
-HUB_API_PATH = "/nodes/validate-token"
-
-class AgentOrchestrator(agent_pb2_grpc.AgentOrchestratorServicer):
-    """Refactored gRPC Servicer for Agent Orchestration."""
-    def __init__(self):
-        self.registry = MemoryNodeRegistry()
-        self.journal = TaskJournal()
-        self.pool = GlobalWorkPool()
-        self.mirror = GhostMirrorManager()
-        self.assistant = TaskAssistant(self.registry, self.journal, self.pool, self.mirror)
-        self.pool.on_new_work = self._broadcast_work
-
-        # 4. Mesh Observation (Aggregated Health Dashboard)
-        threading.Thread(target=self._monitor_mesh, daemon=True, name="MeshMonitor").start()
-
-    def _monitor_mesh(self):
-        """Periodically prints status of all nodes in the mesh."""
-        while True:
-            time.sleep(10)
-            active_nodes = self.registry.list_nodes()
-            print("\n" + "="*50)
-            print(f"📡 CORTEX MESH DASHBOARD | {len(active_nodes)} Nodes Online")
-            print("-" * 50)
-            if not active_nodes:
-                print("  No nodes currently connected.")
-            for nid in active_nodes:
-                node = self.registry.get_node(nid)
-                stats = node.get("stats", {})
-                tasks = stats.get("running", [])
-                capability = node.get("metadata", {}).get("caps", {})
-                print(f"  🟢 {nid:15} | Workers: {stats.get('active_worker_count', 0)} | Running: {len(tasks)} tasks")
-                print(f"     Capabilities: {capability}")
-            print("="*50 + "\n", flush=True)
-
-    def _broadcast_work(self, _):
-        """Pushes work notifications to all active nodes."""
-        with self.registry.lock:
-            for node_id, node in self.registry.nodes.items():
-                print(f"  [📢] Broadcasting availability to {node_id}")
-                node["queue"].put(agent_pb2.ServerTaskMessage(
-                    work_pool_update=agent_pb2.WorkPoolUpdate(available_task_ids=self.pool.list_available())
-                ))
-
-    def SyncConfiguration(self, request, context):
-        """M4 Authenticated Handshake: Validate invite_token, then send policy."""
-        node_id = request.node_id
-        invite_token = request.auth_token # field in RegistrationRequest proto
-
-        # --- M4: Token validation via Hub API ---
-        if HUB_API_URL and _requests:
-            try:
-                resp = _requests.post(
-                    f"{HUB_API_URL}{HUB_API_PATH}",
-                    params={"node_id": node_id, "token": invite_token},
-                    timeout=5,
-                )
-                payload = resp.json()
-                if not payload.get("valid"):
-                    reason = payload.get("reason", "Token rejected")
-                    print(f"[🔒] SyncConfiguration REJECTED {node_id}: {reason}")
-                    return agent_pb2.RegistrationResponse(
-                        success=False,
-                        message=reason,
-                    )
-                skill_cfg = payload.get("skill_config", {})
-                print(f"[🔑] Token validated for {node_id} (display: {payload.get('display_name')})")
-            except Exception as e:
-                # If Hub is unreachable in dev, fall through with a warning
-                print(f"[⚠️] Hub token validation unavailable ({e}); proceeding without auth.")
-                skill_cfg = {}
-        else:
-            # Dev mode: skip validation
-            skill_cfg = {}
-            print(f"[⚠️] HUB_API_URL not set — skipping invite_token validation for {node_id}")
-
-        # Build allowed_commands from skill_config (shell skill)
-        shell_cfg = skill_cfg.get("shell", {})
-        if shell_cfg.get("enabled", True):
-            allowed_commands = ["ls", "cat", "echo", "pwd", "uname", "curl", "python3", "git"]
-        else:
-            allowed_commands = [] # Shell disabled by admin
-
-        # Register the node in the local in-memory registry
-        self.registry.register(request.node_id, queue.Queue(), {
-            "desc": request.node_description,
-            "caps": dict(request.capabilities),
-        })
-
-        return agent_pb2.RegistrationResponse(
-            success=True,
-            policy=agent_pb2.SandboxPolicy(
-                mode=agent_pb2.SandboxPolicy.STRICT,
-                allowed_commands=allowed_commands,
-            )
-        )
-
-    def TaskStream(self, request_iterator, context):
-        """Persistent Bi-directional Stream for Command & Control."""
-        try:
-            # 1. Blocking wait for Node Identity
-            first_msg = next(request_iterator)
-            if first_msg.WhichOneof('payload') != 'announce':
-                print("[!] Stream rejected: No NodeAnnounce")
-                return
-
-            node_id = first_msg.announce.node_id
-            node = self.registry.get_node(node_id)
-            if not node:
-                print(f"[!] Stream rejected: Node {node_id} not registered")
-                return
-
-            print(f"[📶] Stream Online for {node_id}")
-
-            # Phase 5: Automatic Reconciliation on Reconnect
-            self.assistant.reconcile_node(node_id)
-
-            # 2. Results Listener (Read Thread)
-            def _read_results():
-                for msg in request_iterator:
-                    self._handle_client_message(msg, node_id, node)
-
-            threading.Thread(target=_read_results, daemon=True, name=f"Results-{node_id}").start()
-
-            # 3. Work Dispatcher (Main Stream)
-            last_keepalive = 0
-            while context.is_active():
-                try:
-                    # Non-blocking wait to check context periodically
-                    msg = node["queue"].get(timeout=1.0)
-                    yield msg
-                except queue.Empty:
-                    # Occasional broadcast to nodes to ensure pool sync
-                    now = time.time()
-                    if (now - last_keepalive) > 10.0:
-                        last_keepalive = now
-                        if self.pool.available:
-                            yield agent_pb2.ServerTaskMessage(
-                                work_pool_update=agent_pb2.WorkPoolUpdate(available_task_ids=self.pool.list_available())
-                            )
-                    continue
-
-        except StopIteration: pass
-        except Exception as e:
-            print(f"[!] TaskStream Error for {node_id}: {e}")
-
-    def _handle_client_message(self, msg, node_id, node):
-        kind = msg.WhichOneof('payload')
-        if kind == 'task_claim':
-            task_id = msg.task_claim.task_id
-            success, payload = self.pool.claim(task_id, node_id)
-
-            # Send status response back to the node first
-            node["queue"].put(agent_pb2.ServerTaskMessage(
-                claim_status=agent_pb2.TaskClaimResponse(
-                    task_id=task_id,
-                    granted=success,
-                    reason="Task successfully claimed" if success else "Task already claimed by another node"
-                )
-            ))
-            # M6: Notify UI that a node is claiming a global task
-            self.registry.emit(node_id, "task_claim", {"task_id": task_id, "granted": success})
-
-            if success:
-                sig = sign_payload(payload)
-                node["queue"].put(agent_pb2.ServerTaskMessage(
-                    task_request=agent_pb2.TaskRequest(
-                        task_id=task_id,
-                        payload_json=payload,
-                        signature=sig
-                    )
-                ))
-
-        elif kind == 'task_response':
-            tr = msg.task_response
-            res_obj = {"stdout": tr.stdout, "status": tr.status}
-            if tr.HasField("browser_result"):
-                br = tr.browser_result
-                res_obj["browser"] = {
-                    "url": br.url, "title": br.title, "has_snapshot": len(br.snapshot) > 0,
-                    "eval": br.eval_result
-                }
-            self.journal.fulfill(tr.task_id, res_obj)
-
-            # M6: Emit to EventBus for UI streaming
-            event_type = "task_complete" if tr.status == agent_pb2.TaskResponse.SUCCESS else "task_error"
-            self.registry.emit(node_id, event_type, res_obj, task_id=tr.task_id)
-
-        elif kind == 'browser_event':
-            e = msg.browser_event
-            event_data = {}
-            if e.HasField("console_msg"):
-                event_data = {"type": "console", "text": e.console_msg.text, "level": e.console_msg.level}
-            elif e.HasField("network_req"):
-                event_data = {"type": "network", "method": e.network_req.method, "url": e.network_req.url}
-
-            # M6: Stream live browser logs to UI
-            self.registry.emit(node_id, "browser_event", event_data)
-
-        elif kind == 'file_sync':
-            fs = msg.file_sync
-            if fs.HasField("file_data"):
fs.file_data) - self.assistant.broadcast_file_chunk(fs.session_id, node_id, fs.file_data) - # M6: Emit sync progress (rarely to avoid flood, but good for large pushes) - if fs.file_data.chunk_index % 10 == 0: - self.registry.emit(node_id, "sync_progress", {"path": fs.file_data.path, "chunk": fs.file_data.chunk_index}) - elif fs.HasField("status"): - print(f" [📁] Sync Status from {node_id}: {fs.status.message}") - self.registry.emit(node_id, "sync_status", {"message": fs.status.message, "code": fs.status.code}) - if fs.status.code == agent_pb2.SyncStatus.RECONCILE_REQUIRED: - for path in fs.status.reconcile_paths: - self.assistant.push_file(node_id, fs.session_id, path) - - def ReportHealth(self, request_iterator, context): - """Collect Health Metrics and Feed Policy Updates.""" - for hb in request_iterator: - self.registry.update_stats(hb.node_id, { - "active_worker_count": hb.active_worker_count, - "running": list(hb.running_task_ids) - }) - yield agent_pb2.HealthCheckResponse(server_time_ms=int(time.time()*1000)) diff --git a/poc-grpc-agent/orchestrator/utils/__init__.py b/poc-grpc-agent/orchestrator/utils/__init__.py deleted file mode 100644 index e69de29..0000000 --- a/poc-grpc-agent/orchestrator/utils/__init__.py +++ /dev/null diff --git a/poc-grpc-agent/orchestrator/utils/crypto.py b/poc-grpc-agent/orchestrator/utils/crypto.py deleted file mode 100644 index c34a495..0000000 --- a/poc-grpc-agent/orchestrator/utils/crypto.py +++ /dev/null @@ -1,17 +0,0 @@ -import hmac -import hashlib -from orchestrator.config import SECRET_KEY - -def sign_payload(payload: str) -> str: - """Signs a string payload using HMAC-SHA256.""" - return hmac.new(SECRET_KEY.encode(), payload.encode(), hashlib.sha256).hexdigest() - -def sign_browser_action(action_type: str, url: str, session_id: str) -> str: - """Signs a browser action based on its key identify fields.""" - sign_base = f"{action_type}:{url}:{session_id}" - return sign_payload(sign_base) - -def verify_signature(payload: str, 
signature: str) -> bool: - """Verifies a signature against a payload using HMAC-SHA256.""" - expected = sign_payload(payload) - return hmac.compare_digest(signature, expected) diff --git a/poc-grpc-agent/protos/__init__.py b/poc-grpc-agent/protos/__init__.py deleted file mode 100644 index e69de29..0000000 --- a/poc-grpc-agent/protos/__init__.py +++ /dev/null diff --git a/poc-grpc-agent/protos/agent.proto b/poc-grpc-agent/protos/agent.proto deleted file mode 100644 index 5e3932d..0000000 --- a/poc-grpc-agent/protos/agent.proto +++ /dev/null @@ -1,246 +0,0 @@ -syntax = "proto3"; - -package agent; - -// The Cortex Server exposes this service -service AgentOrchestrator { - // 1. Control Channel: Sync policies and settings (Unary) - rpc SyncConfiguration(RegistrationRequest) returns (RegistrationResponse); - - // 2. Task Channel: Bidirectional work dispatch and reporting (Persistent) - rpc TaskStream(stream ClientTaskMessage) returns (stream ServerTaskMessage); - - // 3. Health Channel: Dedicated Ping-Pong / Heartbeat (Persistent) - rpc ReportHealth(stream Heartbeat) returns (stream HealthCheckResponse); -} - -// --- Channel 1: Registration & Policy --- -message RegistrationRequest { - string node_id = 1; - string version = 2; - string auth_token = 3; - string node_description = 4; // AI-readable description of this node's role - map capabilities = 5; // e.g. 
"gpu": "nvidia-3080", "os": "ubuntu-22.04" -} - -message SandboxPolicy { - enum Mode { - STRICT = 0; - PERMISSIVE = 1; - } - Mode mode = 1; - repeated string allowed_commands = 2; - repeated string denied_commands = 3; - repeated string sensitive_commands = 4; - string working_dir_jail = 5; -} - -message RegistrationResponse { - bool success = 1; - string error_message = 2; - string session_id = 3; - SandboxPolicy policy = 4; -} - -// --- Channel 2: Tasks & Collaboration --- -message ClientTaskMessage { - oneof payload { - TaskResponse task_response = 1; - TaskClaimRequest task_claim = 2; - BrowserEvent browser_event = 3; - NodeAnnounce announce = 4; // NEW: Identification on stream connect - FileSyncMessage file_sync = 5; // NEW: Ghost Mirror Sync - } -} - -message NodeAnnounce { - string node_id = 1; -} - -message BrowserEvent { - string session_id = 1; - oneof event { - ConsoleMessage console_msg = 2; - NetworkRequest network_req = 3; - } -} - -message ServerTaskMessage { - oneof payload { - TaskRequest task_request = 1; - WorkPoolUpdate work_pool_update = 2; - TaskClaimResponse claim_status = 3; - TaskCancelRequest task_cancel = 4; - FileSyncMessage file_sync = 5; // NEW: Ghost Mirror Sync - } -} - -message TaskCancelRequest { - string task_id = 1; -} - -message TaskRequest { - string task_id = 1; - string task_type = 2; - oneof payload { - string payload_json = 3; // For legacy shell/fallback - BrowserAction browser_action = 7; // NEW: Structured Browser Skill - } - int32 timeout_ms = 4; - string trace_id = 5; - string signature = 6; - string session_id = 8; // NEW: Map execution to a sync workspace -} - -message BrowserAction { - enum ActionType { - NAVIGATE = 0; - CLICK = 1; - TYPE = 2; - SCREENSHOT = 3; - GET_DOM = 4; - HOVER = 5; - SCROLL = 6; - CLOSE = 7; - EVAL = 8; - GET_A11Y = 9; - } - ActionType action = 1; - string url = 2; - string selector = 3; - string text = 4; - string session_id = 5; - int32 x = 6; - int32 y = 7; -} - -message TaskResponse { - 
string task_id = 1; - enum Status { - SUCCESS = 0; - ERROR = 1; - TIMEOUT = 2; - CANCELLED = 3; - } - Status status = 2; - string stdout = 3; - string stderr = 4; - string trace_id = 5; - map artifacts = 6; - - // NEW: Structured Skill Results - oneof result { - BrowserResponse browser_result = 7; - } -} - -message BrowserResponse { - string url = 1; - string title = 2; - bytes snapshot = 3; - string dom_content = 4; - string a11y_tree = 5; - string eval_result = 6; - repeated ConsoleMessage console_history = 7; - repeated NetworkRequest network_history = 8; -} - -message ConsoleMessage { - string level = 1; - string text = 2; - int64 timestamp_ms = 3; -} - -message NetworkRequest { - string method = 1; - string url = 2; - int32 status = 3; - string resource_type = 4; - int64 latency_ms = 5; -} - -message WorkPoolUpdate { - repeated string available_task_ids = 1; -} - -message TaskClaimRequest { - string task_id = 1; - string node_id = 2; -} - -message TaskClaimResponse { - string task_id = 1; - bool granted = 2; - string reason = 3; -} - -// --- Channel 3: Health & Observation --- -message Heartbeat { - string node_id = 1; - float cpu_usage_percent = 2; - float memory_usage_percent = 3; - int32 active_worker_count = 4; - int32 max_worker_capacity = 5; - string status_message = 6; - repeated string running_task_ids = 7; -} - -message HealthCheckResponse { - int64 server_time_ms = 1; -} - -// --- Channel 4: Ghost Mirror File Sync --- -message FileSyncMessage { - string session_id = 1; - oneof payload { - DirectoryManifest manifest = 2; - FilePayload file_data = 3; - SyncStatus status = 4; - SyncControl control = 5; - } -} - -message SyncControl { - enum Action { - START_WATCHING = 0; - STOP_WATCHING = 1; - LOCK = 2; // Server -> Node: Disable user-side edits - UNLOCK = 3; // Server -> Node: Enable user-side edits - REFRESH_MANIFEST = 4; // Server -> Node: Request a full manifest from node - RESYNC = 5; // Server -> Node: Force a hash-based reconciliation - } - 
Action action = 1; - string path = 2; -} - -message DirectoryManifest { - string root_path = 1; - repeated FileInfo files = 2; -} - -message FileInfo { - string path = 1; - int64 size = 2; - string hash = 3; // For drift detection - bool is_dir = 4; -} - -message FilePayload { - string path = 1; - bytes chunk = 2; - int32 chunk_index = 3; - bool is_final = 4; - string hash = 5; // Full file hash for verification on final chunk -} - -message SyncStatus { - enum Code { - OK = 0; - ERROR = 1; - RECONCILE_REQUIRED = 2; - IN_PROGRESS = 3; - } - Code code = 1; - string message = 2; - repeated string reconcile_paths = 3; // NEW: Files needing immediate re-sync -} diff --git a/poc-grpc-agent/protos/agent_pb2.py b/poc-grpc-agent/protos/agent_pb2.py deleted file mode 100644 index 3472d01..0000000 --- a/poc-grpc-agent/protos/agent_pb2.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: protos/agent.proto -# Protobuf Python Version: 4.25.1 -"""Generated protocol buffer code.""" -from google.protobuf import descriptor as _descriptor -from google.protobuf import descriptor_pool as _descriptor_pool -from google.protobuf import symbol_database as _symbol_database -from google.protobuf.internal import builder as _builder -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - - - -DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x12protos/agent.proto\x12\x05\x61gent\"\xde\x01\n\x13RegistrationRequest\x12\x0f\n\x07node_id\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12\x12\n\nauth_token\x18\x03 \x01(\t\x12\x18\n\x10node_description\x18\x04 \x01(\t\x12\x42\n\x0c\x63\x61pabilities\x18\x05 \x03(\x0b\x32,.agent.RegistrationRequest.CapabilitiesEntry\x1a\x33\n\x11\x43\x61pabilitiesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xc5\x01\n\rSandboxPolicy\x12\'\n\x04mode\x18\x01 
\x01(\x0e\x32\x19.agent.SandboxPolicy.Mode\x12\x18\n\x10\x61llowed_commands\x18\x02 \x03(\t\x12\x17\n\x0f\x64\x65nied_commands\x18\x03 \x03(\t\x12\x1a\n\x12sensitive_commands\x18\x04 \x03(\t\x12\x18\n\x10working_dir_jail\x18\x05 \x01(\t\"\"\n\x04Mode\x12\n\n\x06STRICT\x10\x00\x12\x0e\n\nPERMISSIVE\x10\x01\"x\n\x14RegistrationResponse\x12\x0f\n\x07success\x18\x01 \x01(\x08\x12\x15\n\rerror_message\x18\x02 \x01(\t\x12\x12\n\nsession_id\x18\x03 \x01(\t\x12$\n\x06policy\x18\x04 \x01(\x0b\x32\x14.agent.SandboxPolicy\"\xff\x01\n\x11\x43lientTaskMessage\x12,\n\rtask_response\x18\x01 \x01(\x0b\x32\x13.agent.TaskResponseH\x00\x12-\n\ntask_claim\x18\x02 \x01(\x0b\x32\x17.agent.TaskClaimRequestH\x00\x12,\n\rbrowser_event\x18\x03 \x01(\x0b\x32\x13.agent.BrowserEventH\x00\x12\'\n\x08\x61nnounce\x18\x04 \x01(\x0b\x32\x13.agent.NodeAnnounceH\x00\x12+\n\tfile_sync\x18\x05 \x01(\x0b\x32\x16.agent.FileSyncMessageH\x00\x42\t\n\x07payload\"\x1f\n\x0cNodeAnnounce\x12\x0f\n\x07node_id\x18\x01 \x01(\t\"\x87\x01\n\x0c\x42rowserEvent\x12\x12\n\nsession_id\x18\x01 \x01(\t\x12,\n\x0b\x63onsole_msg\x18\x02 \x01(\x0b\x32\x15.agent.ConsoleMessageH\x00\x12,\n\x0bnetwork_req\x18\x03 \x01(\x0b\x32\x15.agent.NetworkRequestH\x00\x42\x07\n\x05\x65vent\"\x8d\x02\n\x11ServerTaskMessage\x12*\n\x0ctask_request\x18\x01 \x01(\x0b\x32\x12.agent.TaskRequestH\x00\x12\x31\n\x10work_pool_update\x18\x02 \x01(\x0b\x32\x15.agent.WorkPoolUpdateH\x00\x12\x30\n\x0c\x63laim_status\x18\x03 \x01(\x0b\x32\x18.agent.TaskClaimResponseH\x00\x12/\n\x0btask_cancel\x18\x04 \x01(\x0b\x32\x18.agent.TaskCancelRequestH\x00\x12+\n\tfile_sync\x18\x05 \x01(\x0b\x32\x16.agent.FileSyncMessageH\x00\x42\t\n\x07payload\"$\n\x11TaskCancelRequest\x12\x0f\n\x07task_id\x18\x01 \x01(\t\"\xd1\x01\n\x0bTaskRequest\x12\x0f\n\x07task_id\x18\x01 \x01(\t\x12\x11\n\ttask_type\x18\x02 \x01(\t\x12\x16\n\x0cpayload_json\x18\x03 \x01(\tH\x00\x12.\n\x0e\x62rowser_action\x18\x07 \x01(\x0b\x32\x14.agent.BrowserActionH\x00\x12\x12\n\ntimeout_ms\x18\x04 
\x01(\x05\x12\x10\n\x08trace_id\x18\x05 \x01(\t\x12\x11\n\tsignature\x18\x06 \x01(\t\x12\x12\n\nsession_id\x18\x08 \x01(\tB\t\n\x07payload\"\xa0\x02\n\rBrowserAction\x12/\n\x06\x61\x63tion\x18\x01 \x01(\x0e\x32\x1f.agent.BrowserAction.ActionType\x12\x0b\n\x03url\x18\x02 \x01(\t\x12\x10\n\x08selector\x18\x03 \x01(\t\x12\x0c\n\x04text\x18\x04 \x01(\t\x12\x12\n\nsession_id\x18\x05 \x01(\t\x12\t\n\x01x\x18\x06 \x01(\x05\x12\t\n\x01y\x18\x07 \x01(\x05\"\x86\x01\n\nActionType\x12\x0c\n\x08NAVIGATE\x10\x00\x12\t\n\x05\x43LICK\x10\x01\x12\x08\n\x04TYPE\x10\x02\x12\x0e\n\nSCREENSHOT\x10\x03\x12\x0b\n\x07GET_DOM\x10\x04\x12\t\n\x05HOVER\x10\x05\x12\n\n\x06SCROLL\x10\x06\x12\t\n\x05\x43LOSE\x10\x07\x12\x08\n\x04\x45VAL\x10\x08\x12\x0c\n\x08GET_A11Y\x10\t\"\xe0\x02\n\x0cTaskResponse\x12\x0f\n\x07task_id\x18\x01 \x01(\t\x12*\n\x06status\x18\x02 \x01(\x0e\x32\x1a.agent.TaskResponse.Status\x12\x0e\n\x06stdout\x18\x03 \x01(\t\x12\x0e\n\x06stderr\x18\x04 \x01(\t\x12\x10\n\x08trace_id\x18\x05 \x01(\t\x12\x35\n\tartifacts\x18\x06 \x03(\x0b\x32\".agent.TaskResponse.ArtifactsEntry\x12\x30\n\x0e\x62rowser_result\x18\x07 \x01(\x0b\x32\x16.agent.BrowserResponseH\x00\x1a\x30\n\x0e\x41rtifactsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c:\x02\x38\x01\"<\n\x06Status\x12\x0b\n\x07SUCCESS\x10\x00\x12\t\n\x05\x45RROR\x10\x01\x12\x0b\n\x07TIMEOUT\x10\x02\x12\r\n\tCANCELLED\x10\x03\x42\x08\n\x06result\"\xdc\x01\n\x0f\x42rowserResponse\x12\x0b\n\x03url\x18\x01 \x01(\t\x12\r\n\x05title\x18\x02 \x01(\t\x12\x10\n\x08snapshot\x18\x03 \x01(\x0c\x12\x13\n\x0b\x64om_content\x18\x04 \x01(\t\x12\x11\n\ta11y_tree\x18\x05 \x01(\t\x12\x13\n\x0b\x65val_result\x18\x06 \x01(\t\x12.\n\x0f\x63onsole_history\x18\x07 \x03(\x0b\x32\x15.agent.ConsoleMessage\x12.\n\x0fnetwork_history\x18\x08 \x03(\x0b\x32\x15.agent.NetworkRequest\"C\n\x0e\x43onsoleMessage\x12\r\n\x05level\x18\x01 \x01(\t\x12\x0c\n\x04text\x18\x02 \x01(\t\x12\x14\n\x0ctimestamp_ms\x18\x03 
\x01(\x03\"h\n\x0eNetworkRequest\x12\x0e\n\x06method\x18\x01 \x01(\t\x12\x0b\n\x03url\x18\x02 \x01(\t\x12\x0e\n\x06status\x18\x03 \x01(\x05\x12\x15\n\rresource_type\x18\x04 \x01(\t\x12\x12\n\nlatency_ms\x18\x05 \x01(\x03\",\n\x0eWorkPoolUpdate\x12\x1a\n\x12\x61vailable_task_ids\x18\x01 \x03(\t\"4\n\x10TaskClaimRequest\x12\x0f\n\x07task_id\x18\x01 \x01(\t\x12\x0f\n\x07node_id\x18\x02 \x01(\t\"E\n\x11TaskClaimResponse\x12\x0f\n\x07task_id\x18\x01 \x01(\t\x12\x0f\n\x07granted\x18\x02 \x01(\x08\x12\x0e\n\x06reason\x18\x03 \x01(\t\"\xc1\x01\n\tHeartbeat\x12\x0f\n\x07node_id\x18\x01 \x01(\t\x12\x19\n\x11\x63pu_usage_percent\x18\x02 \x01(\x02\x12\x1c\n\x14memory_usage_percent\x18\x03 \x01(\x02\x12\x1b\n\x13\x61\x63tive_worker_count\x18\x04 \x01(\x05\x12\x1b\n\x13max_worker_capacity\x18\x05 \x01(\x05\x12\x16\n\x0estatus_message\x18\x06 \x01(\t\x12\x18\n\x10running_task_ids\x18\x07 \x03(\t\"-\n\x13HealthCheckResponse\x12\x16\n\x0eserver_time_ms\x18\x01 \x01(\x03\"\xd3\x01\n\x0f\x46ileSyncMessage\x12\x12\n\nsession_id\x18\x01 \x01(\t\x12,\n\x08manifest\x18\x02 \x01(\x0b\x32\x18.agent.DirectoryManifestH\x00\x12\'\n\tfile_data\x18\x03 \x01(\x0b\x32\x12.agent.FilePayloadH\x00\x12#\n\x06status\x18\x04 \x01(\x0b\x32\x11.agent.SyncStatusH\x00\x12%\n\x07\x63ontrol\x18\x05 \x01(\x0b\x32\x12.agent.SyncControlH\x00\x42\t\n\x07payload\"\xaf\x01\n\x0bSyncControl\x12)\n\x06\x61\x63tion\x18\x01 \x01(\x0e\x32\x19.agent.SyncControl.Action\x12\x0c\n\x04path\x18\x02 \x01(\t\"g\n\x06\x41\x63tion\x12\x12\n\x0eSTART_WATCHING\x10\x00\x12\x11\n\rSTOP_WATCHING\x10\x01\x12\x08\n\x04LOCK\x10\x02\x12\n\n\x06UNLOCK\x10\x03\x12\x14\n\x10REFRESH_MANIFEST\x10\x04\x12\n\n\x06RESYNC\x10\x05\"F\n\x11\x44irectoryManifest\x12\x11\n\troot_path\x18\x01 \x01(\t\x12\x1e\n\x05\x66iles\x18\x02 \x03(\x0b\x32\x0f.agent.FileInfo\"D\n\x08\x46ileInfo\x12\x0c\n\x04path\x18\x01 \x01(\t\x12\x0c\n\x04size\x18\x02 \x01(\x03\x12\x0c\n\x04hash\x18\x03 \x01(\t\x12\x0e\n\x06is_dir\x18\x04 
\x01(\x08\"_\n\x0b\x46ilePayload\x12\x0c\n\x04path\x18\x01 \x01(\t\x12\r\n\x05\x63hunk\x18\x02 \x01(\x0c\x12\x13\n\x0b\x63hunk_index\x18\x03 \x01(\x05\x12\x10\n\x08is_final\x18\x04 \x01(\x08\x12\x0c\n\x04hash\x18\x05 \x01(\t\"\xa0\x01\n\nSyncStatus\x12$\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x16.agent.SyncStatus.Code\x12\x0f\n\x07message\x18\x02 \x01(\t\x12\x17\n\x0freconcile_paths\x18\x03 \x03(\t\"B\n\x04\x43ode\x12\x06\n\x02OK\x10\x00\x12\t\n\x05\x45RROR\x10\x01\x12\x16\n\x12RECONCILE_REQUIRED\x10\x02\x12\x0f\n\x0bIN_PROGRESS\x10\x03\x32\xe9\x01\n\x11\x41gentOrchestrator\x12L\n\x11SyncConfiguration\x12\x1a.agent.RegistrationRequest\x1a\x1b.agent.RegistrationResponse\x12\x44\n\nTaskStream\x12\x18.agent.ClientTaskMessage\x1a\x18.agent.ServerTaskMessage(\x01\x30\x01\x12@\n\x0cReportHealth\x12\x10.agent.Heartbeat\x1a\x1a.agent.HealthCheckResponse(\x01\x30\x01\x62\x06proto3') - -_globals = globals() -_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) -_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'protos.agent_pb2', _globals) -if _descriptor._USE_C_DESCRIPTORS == False: - DESCRIPTOR._options = None - _globals['_REGISTRATIONREQUEST_CAPABILITIESENTRY']._options = None - _globals['_REGISTRATIONREQUEST_CAPABILITIESENTRY']._serialized_options = b'8\001' - _globals['_TASKRESPONSE_ARTIFACTSENTRY']._options = None - _globals['_TASKRESPONSE_ARTIFACTSENTRY']._serialized_options = b'8\001' - _globals['_REGISTRATIONREQUEST']._serialized_start=30 - _globals['_REGISTRATIONREQUEST']._serialized_end=252 - _globals['_REGISTRATIONREQUEST_CAPABILITIESENTRY']._serialized_start=201 - _globals['_REGISTRATIONREQUEST_CAPABILITIESENTRY']._serialized_end=252 - _globals['_SANDBOXPOLICY']._serialized_start=255 - _globals['_SANDBOXPOLICY']._serialized_end=452 - _globals['_SANDBOXPOLICY_MODE']._serialized_start=418 - _globals['_SANDBOXPOLICY_MODE']._serialized_end=452 - _globals['_REGISTRATIONRESPONSE']._serialized_start=454 - _globals['_REGISTRATIONRESPONSE']._serialized_end=574 - 
_globals['_CLIENTTASKMESSAGE']._serialized_start=577 - _globals['_CLIENTTASKMESSAGE']._serialized_end=832 - _globals['_NODEANNOUNCE']._serialized_start=834 - _globals['_NODEANNOUNCE']._serialized_end=865 - _globals['_BROWSEREVENT']._serialized_start=868 - _globals['_BROWSEREVENT']._serialized_end=1003 - _globals['_SERVERTASKMESSAGE']._serialized_start=1006 - _globals['_SERVERTASKMESSAGE']._serialized_end=1275 - _globals['_TASKCANCELREQUEST']._serialized_start=1277 - _globals['_TASKCANCELREQUEST']._serialized_end=1313 - _globals['_TASKREQUEST']._serialized_start=1316 - _globals['_TASKREQUEST']._serialized_end=1525 - _globals['_BROWSERACTION']._serialized_start=1528 - _globals['_BROWSERACTION']._serialized_end=1816 - _globals['_BROWSERACTION_ACTIONTYPE']._serialized_start=1682 - _globals['_BROWSERACTION_ACTIONTYPE']._serialized_end=1816 - _globals['_TASKRESPONSE']._serialized_start=1819 - _globals['_TASKRESPONSE']._serialized_end=2171 - _globals['_TASKRESPONSE_ARTIFACTSENTRY']._serialized_start=2051 - _globals['_TASKRESPONSE_ARTIFACTSENTRY']._serialized_end=2099 - _globals['_TASKRESPONSE_STATUS']._serialized_start=2101 - _globals['_TASKRESPONSE_STATUS']._serialized_end=2161 - _globals['_BROWSERRESPONSE']._serialized_start=2174 - _globals['_BROWSERRESPONSE']._serialized_end=2394 - _globals['_CONSOLEMESSAGE']._serialized_start=2396 - _globals['_CONSOLEMESSAGE']._serialized_end=2463 - _globals['_NETWORKREQUEST']._serialized_start=2465 - _globals['_NETWORKREQUEST']._serialized_end=2569 - _globals['_WORKPOOLUPDATE']._serialized_start=2571 - _globals['_WORKPOOLUPDATE']._serialized_end=2615 - _globals['_TASKCLAIMREQUEST']._serialized_start=2617 - _globals['_TASKCLAIMREQUEST']._serialized_end=2669 - _globals['_TASKCLAIMRESPONSE']._serialized_start=2671 - _globals['_TASKCLAIMRESPONSE']._serialized_end=2740 - _globals['_HEARTBEAT']._serialized_start=2743 - _globals['_HEARTBEAT']._serialized_end=2936 - _globals['_HEALTHCHECKRESPONSE']._serialized_start=2938 - 
_globals['_HEALTHCHECKRESPONSE']._serialized_end=2983 - _globals['_FILESYNCMESSAGE']._serialized_start=2986 - _globals['_FILESYNCMESSAGE']._serialized_end=3197 - _globals['_SYNCCONTROL']._serialized_start=3200 - _globals['_SYNCCONTROL']._serialized_end=3375 - _globals['_SYNCCONTROL_ACTION']._serialized_start=3272 - _globals['_SYNCCONTROL_ACTION']._serialized_end=3375 - _globals['_DIRECTORYMANIFEST']._serialized_start=3377 - _globals['_DIRECTORYMANIFEST']._serialized_end=3447 - _globals['_FILEINFO']._serialized_start=3449 - _globals['_FILEINFO']._serialized_end=3517 - _globals['_FILEPAYLOAD']._serialized_start=3519 - _globals['_FILEPAYLOAD']._serialized_end=3614 - _globals['_SYNCSTATUS']._serialized_start=3617 - _globals['_SYNCSTATUS']._serialized_end=3777 - _globals['_SYNCSTATUS_CODE']._serialized_start=3711 - _globals['_SYNCSTATUS_CODE']._serialized_end=3777 - _globals['_AGENTORCHESTRATOR']._serialized_start=3780 - _globals['_AGENTORCHESTRATOR']._serialized_end=4013 -# @@protoc_insertion_point(module_scope) diff --git a/poc-grpc-agent/protos/agent_pb2_grpc.py b/poc-grpc-agent/protos/agent_pb2_grpc.py deleted file mode 100644 index f551b0b..0000000 --- a/poc-grpc-agent/protos/agent_pb2_grpc.py +++ /dev/null @@ -1,138 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from protos import agent_pb2 as protos_dot_agent__pb2 - - -class AgentOrchestratorStub(object): - """The Cortex Server exposes this service - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. 
- """ - self.SyncConfiguration = channel.unary_unary( - '/agent.AgentOrchestrator/SyncConfiguration', - request_serializer=protos_dot_agent__pb2.RegistrationRequest.SerializeToString, - response_deserializer=protos_dot_agent__pb2.RegistrationResponse.FromString, - ) - self.TaskStream = channel.stream_stream( - '/agent.AgentOrchestrator/TaskStream', - request_serializer=protos_dot_agent__pb2.ClientTaskMessage.SerializeToString, - response_deserializer=protos_dot_agent__pb2.ServerTaskMessage.FromString, - ) - self.ReportHealth = channel.stream_stream( - '/agent.AgentOrchestrator/ReportHealth', - request_serializer=protos_dot_agent__pb2.Heartbeat.SerializeToString, - response_deserializer=protos_dot_agent__pb2.HealthCheckResponse.FromString, - ) - - -class AgentOrchestratorServicer(object): - """The Cortex Server exposes this service - """ - - def SyncConfiguration(self, request, context): - """1. Control Channel: Sync policies and settings (Unary) - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details('Method not implemented!') - raise NotImplementedError('Method not implemented!') - - def TaskStream(self, request_iterator, context): - """2. Task Channel: Bidirectional work dispatch and reporting (Persistent) - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details('Method not implemented!') - raise NotImplementedError('Method not implemented!') - - def ReportHealth(self, request_iterator, context): - """3. 
Health Channel: Dedicated Ping-Pong / Heartbeat (Persistent) - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details('Method not implemented!') - raise NotImplementedError('Method not implemented!') - - -def add_AgentOrchestratorServicer_to_server(servicer, server): - rpc_method_handlers = { - 'SyncConfiguration': grpc.unary_unary_rpc_method_handler( - servicer.SyncConfiguration, - request_deserializer=protos_dot_agent__pb2.RegistrationRequest.FromString, - response_serializer=protos_dot_agent__pb2.RegistrationResponse.SerializeToString, - ), - 'TaskStream': grpc.stream_stream_rpc_method_handler( - servicer.TaskStream, - request_deserializer=protos_dot_agent__pb2.ClientTaskMessage.FromString, - response_serializer=protos_dot_agent__pb2.ServerTaskMessage.SerializeToString, - ), - 'ReportHealth': grpc.stream_stream_rpc_method_handler( - servicer.ReportHealth, - request_deserializer=protos_dot_agent__pb2.Heartbeat.FromString, - response_serializer=protos_dot_agent__pb2.HealthCheckResponse.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - 'agent.AgentOrchestrator', rpc_method_handlers) - server.add_generic_rpc_handlers((generic_handler,)) - - - # This class is part of an EXPERIMENTAL API. 
-class AgentOrchestrator(object): - """The Cortex Server exposes this service - """ - - @staticmethod - def SyncConfiguration(request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None): - return grpc.experimental.unary_unary(request, target, '/agent.AgentOrchestrator/SyncConfiguration', - protos_dot_agent__pb2.RegistrationRequest.SerializeToString, - protos_dot_agent__pb2.RegistrationResponse.FromString, - options, channel_credentials, - insecure, call_credentials, compression, wait_for_ready, timeout, metadata) - - @staticmethod - def TaskStream(request_iterator, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None): - return grpc.experimental.stream_stream(request_iterator, target, '/agent.AgentOrchestrator/TaskStream', - protos_dot_agent__pb2.ClientTaskMessage.SerializeToString, - protos_dot_agent__pb2.ServerTaskMessage.FromString, - options, channel_credentials, - insecure, call_credentials, compression, wait_for_ready, timeout, metadata) - - @staticmethod - def ReportHealth(request_iterator, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None): - return grpc.experimental.stream_stream(request_iterator, target, '/agent.AgentOrchestrator/ReportHealth', - protos_dot_agent__pb2.Heartbeat.SerializeToString, - protos_dot_agent__pb2.HealthCheckResponse.FromString, - options, channel_credentials, - insecure, call_credentials, compression, wait_for_ready, timeout, metadata) diff --git a/poc-grpc-agent/requirements.txt b/poc-grpc-agent/requirements.txt deleted file mode 100644 index 5644c64..0000000 --- a/poc-grpc-agent/requirements.txt +++ /dev/null @@ -1,6 +0,0 @@ -grpcio==1.62.1 -grpcio-tools==1.62.1 -PyJWT==2.8.0 
-playwright==1.42.0 -watchdog==4.0.0 -PyYAML==6.0.1 diff --git a/poc-grpc-agent/scripts/generate_certs.sh b/poc-grpc-agent/scripts/generate_certs.sh deleted file mode 100755 index 2b3eb38..0000000 --- a/poc-grpc-agent/scripts/generate_certs.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash -# Exit on any error -set -e - -CERT_DIR="./certs" -mkdir -p "$CERT_DIR" - -echo "🔐 Generating Root CA..." -# 1. Generate Root CA Key -openssl genrsa -out "$CERT_DIR/ca.key" 4096 -# 2. Generate Root CA Certificate (Self-signed) -openssl req -new -x509 -days 365 -key "$CERT_DIR/ca.key" -out "$CERT_DIR/ca.crt" \ - -subj "/C=US/ST=CA/L=SF/O=Cortex/CN=CortexRootCA" - -echo "🖥️ Generating Server Certificate..." -# 3. Generate Server Private Key -openssl genrsa -out "$CERT_DIR/server.key" 2048 -# 4. Generate Server Certificate Signing Request (CSR) -openssl req -new -key "$CERT_DIR/server.key" -out "$CERT_DIR/server.csr" \ - -subj "/C=US/ST=CA/L=SF/O=Cortex/CN=localhost" -# 5. Sign Server CSR with Root CA -# Adding SAN (Subject Alternative Name) for localhost to prevent SSL verification errors -echo "subjectAltName = DNS:localhost, IP:127.0.0.1" > "$CERT_DIR/server.ext" -openssl x509 -req -days 365 -in "$CERT_DIR/server.csr" -CA "$CERT_DIR/ca.crt" -CAkey "$CERT_DIR/ca.key" -set_serial 01 -out "$CERT_DIR/server.crt" -extfile "$CERT_DIR/server.ext" - -echo "🤖 Generating Client Certificate..." -# 6. Generate Client Private Key -openssl genrsa -out "$CERT_DIR/client.key" 2048 -# 7. Generate Client CSR -openssl req -new -key "$CERT_DIR/client.key" -out "$CERT_DIR/client.csr" \ - -subj "/C=US/ST=CA/L=SF/O=Cortex/CN=agent-node-007" -# 8. 
Sign Client CSR with Root CA -openssl x509 -req -days 365 -in "$CERT_DIR/client.csr" -CA "$CERT_DIR/ca.crt" -CAkey "$CERT_DIR/ca.key" -set_serial 02 -out "$CERT_DIR/client.crt" - -echo "✅ Certificates and keys generated in $CERT_DIR" -# Clean up temporary CSR/EXT files -rm "$CERT_DIR"/*.csr "$CERT_DIR"/*.ext diff --git a/poc-grpc-agent/scripts/multi_node_test.sh b/poc-grpc-agent/scripts/multi_node_test.sh deleted file mode 100755 index 1396986..0000000 --- a/poc-grpc-agent/scripts/multi_node_test.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash -# 🚀 Collaborative Multi-Agent Mesh Test Script - -echo "--- 🛠️ Antigravity Mesh Bootstrap ---" - -# 1. Start Orchestrator in Background -export PYTHONPATH=. -export PYTHONUNBUFFERED=1 -export SIMULATION_DELAY_SEC=10 -python3 orchestrator/app.py > server_mesh.log 2>&1 & -ORCH_PID=$! -echo "[🛡️] Orchestrator PID: $ORCH_PID" - -sleep 3 - -# 2. Start Agent Node 001 -export AGENT_NODE_ID="agent-node-001" -python3 agent_node/main.py > node_001.log 2>&1 & -NODE_001_PID=$! -echo "[🤖] Agent 001 PID: $NODE_001_PID" - -# 3. Start Agent Node 002 -export AGENT_NODE_ID="agent-node-002" -python3 agent_node/main.py > node_002.log 2>&1 & -NODE_002_PID=$! -echo "[🤖] Agent 002 PID: $NODE_002_PID" - -echo "[⏳] Simulation Running (30s)..." -sleep 40 - -# 4. 
Cleanup -echo "\n--- 🛑 Shutdown Mesh ---" -kill -TERM $NODE_001_PID -kill -TERM $NODE_002_PID -kill -TERM $ORCH_PID -sleep 3 - -echo "\n--- 📊 Mesh Observations ---" -echo "--- SERVER LOG ---" -tail -n 20 server_mesh.log -echo "\n--- NODE 001 LOG ---" -tail -n 15 node_001.log -echo "\n--- NODE 002 LOG ---" -tail -n 15 node_002.log - -# Cleanup logs if needed -# rm server_mesh.log node_001.log node_002.log diff --git a/poc-grpc-agent/shared_core/__init__.py b/poc-grpc-agent/shared_core/__init__.py deleted file mode 100644 index e69de29..0000000 --- a/poc-grpc-agent/shared_core/__init__.py +++ /dev/null diff --git a/poc-grpc-agent/shared_core/ignore.py b/poc-grpc-agent/shared_core/ignore.py deleted file mode 100644 index c3f0cb5..0000000 --- a/poc-grpc-agent/shared_core/ignore.py +++ /dev/null @@ -1,38 +0,0 @@ -import os -import fnmatch - -class CortexIgnore: - """Handles .cortexignore (and .gitignore) pattern matching.""" - def __init__(self, root_path): - self.root_path = root_path - self.patterns = self._load_patterns() - - def _load_patterns(self): - patterns = [".git", "node_modules", ".cortex_sync", "__pycache__", "*.pyc"] # Default ignores - ignore_file = os.path.join(self.root_path, ".cortexignore") - if not os.path.exists(ignore_file): - ignore_file = os.path.join(self.root_path, ".gitignore") - - if os.path.exists(ignore_file): - with open(ignore_file, "r") as f: - for line in f: - line = line.strip() - if line and not line.startswith("#"): - patterns.append(line) - return patterns - - def is_ignored(self, rel_path): - """Returns True if the path matches any ignore pattern.""" - for pattern in self.patterns: - # Handle directory patterns - if pattern.endswith("/"): - if rel_path.startswith(pattern) or f"/{pattern}" in f"/{rel_path}": - return True - # Standard glob matching - if fnmatch.fnmatch(rel_path, pattern) or fnmatch.fnmatch(os.path.basename(rel_path), pattern): - return True - # Handle nested matches - for part in rel_path.split(os.sep): - if 
fnmatch.fnmatch(part, pattern): - return True - return False diff --git a/poc-grpc-agent/test_mesh.py b/poc-grpc-agent/test_mesh.py deleted file mode 100644 index 260a82d..0000000 --- a/poc-grpc-agent/test_mesh.py +++ /dev/null @@ -1,112 +0,0 @@ - -import time -import subprocess -import os -import signal - -def run_mesh_test(): - print("[🚀] Starting Collaborative Mesh Test...") - print("[🛡️] Orchestrator: Starting...") - - # 1. Start Orchestrator - orchestrator = subprocess.Popen( - ["python3", "-m", "orchestrator.app"], - cwd="/app/poc-grpc-agent", - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT, - text=True, - bufsize=1 - ) - - time.sleep(3) # Wait for start - - print("[🤖] Node Alpha: Starting...") - # 2. Start Agent Node 1 - node1 = subprocess.Popen( - ["python3", "-m", "agent_node.main"], - cwd="/app/poc-grpc-agent", - env={**os.environ, "AGENT_NODE_ID": "node-alpha", "CORTEX_SYNC_DIR": "/tmp/cortex-sync-alpha"}, - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT, - text=True, - bufsize=1 - ) - - print("[🤖] Node Beta: Starting...") - # 3. 
Start Agent Node 2 - node2 = subprocess.Popen( - ["python3", "-m", "agent_node.main"], - cwd="/app/poc-grpc-agent", - env={**os.environ, "AGENT_NODE_ID": "node-beta", "CORTEX_SYNC_DIR": "/tmp/cortex-sync-beta"}, - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT, - text=True, - bufsize=1 - ) - - print("[⏳] Running simulation for 60 seconds...") - start_time = time.time() - - # Simple thread to print outputs in real-time - import threading - def pipe_output(name, pipe): - for line in pipe: - print(f"[{name}] {line.strip()}") - - threading.Thread(target=pipe_output, args=("ORCH", orchestrator.stdout), daemon=True).start() - threading.Thread(target=pipe_output, args=("N1", node1.stdout), daemon=True).start() - threading.Thread(target=pipe_output, args=("N2", node2.stdout), daemon=True).start() - - # Simulate a local edit on Node Alpha (N1) after a delay to test real-time sync - def simulate_local_edit(): - time.sleep(22) - root_alpha = "/tmp/cortex-sync-alpha/test-session-001" - os.makedirs(root_alpha, exist_ok=True) - - # 1. Create .cortexignore - print(f"\n[📝] User Sim: Creating .cortexignore on Node Alpha...") - with open(os.path.join(root_alpha, ".cortexignore"), "w") as f: - f.write("*.tmp\nsecret.txt\n") - - # 2. Edit hello.py (Should Sync) - sync_file = os.path.join(root_alpha, "hello.py") - print(f"[📝] User Sim: Editing {sync_file} (Should Sync)...") - with open(sync_file, "a") as f: - f.write("\n# Phase 3: Regular edit\n") - - # 3. Create secret.txt (Should be IGNORED) - secret_file = os.path.join(root_alpha, "secret.txt") - print(f"[📝] User Sim: Creating {secret_file} (Should be IGNORED)...") - with open(secret_file, "w") as f: - f.write("THIS SHOULD NOT SYNC") - - time.sleep(20) # Wait for Lock reliably - # 4. Workspace LOCK Test - print(f"\n[🔒] User Sim: Node Alpha should be LOCKED by Orchestrator now...") - with open(locked_file, "a") as f: - f.write("\n# USER TRYING TO EDIT WHILE LOCKED\n") - - time.sleep(15) - # 5. 
Drift Recovery Test - print(f"\n[💥] User Sim: Corrupting {sync_file} on Node Alpha (Simulating Sync Drift)...") - with open(sync_file, "w") as f: - f.write("CORRUPTED CONTENT") - - threading.Thread(target=simulate_local_edit, daemon=True).start() - - time.sleep(60) - - # 4. Cleanup - print("\n[🛑] Test Finished. Terminating processes...") - orchestrator.terminate() - node1.terminate() - node2.terminate() - - time.sleep(2) - orchestrator.kill() - node1.kill() - node2.kill() - print("[✅] Done.") - -if __name__ == "__main__": - run_mesh_test() diff --git a/poc-grpc-agent/test_recovery.py b/poc-grpc-agent/test_recovery.py deleted file mode 100644 index 8f96bd8..0000000 --- a/poc-grpc-agent/test_recovery.py +++ /dev/null @@ -1,80 +0,0 @@ -import time -import subprocess -import os -import shutil - -def run_recovery_test(): - print("[🚀] Starting Ghost Mirror Recovery Test...") - - # 0. Prep Mirror (Server side) - mirror_dir = "/app/data/mirrors/recovery-session" - os.makedirs(mirror_dir, exist_ok=True) - with open(os.path.join(mirror_dir, "app.py"), "w") as f: - f.write("print('v1')") - - # 1. Start Orchestrator - print("[🛡️] Orchestrator: Starting...") - orchestrator = subprocess.Popen( - ["python3", "-m", "orchestrator.app"], - cwd="/app/poc-grpc-agent", - stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1 - ) - time.sleep(3) - - # 2. Start Node Alpha - print("[🤖] Node Alpha: Starting...") - node_alpha_sync = "/tmp/cortex-sync-recovery" - if os.path.exists(node_alpha_sync): shutil.rmtree(node_alpha_sync) - - node1 = subprocess.Popen( - ["python3", "-m", "agent_node.main"], - cwd="/app/poc-grpc-agent", - env={**os.environ, "AGENT_NODE_ID": "node-alpha", "CORTEX_SYNC_DIR": node_alpha_sync}, - stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1 - ) - - time.sleep(10) # Wait for initial sync - print("[✅] Initial Sync should be done.") - - # 3. 
Stop Node Alpha - print("[🛑] Stopping Node Alpha...") - node1.terminate() - node1.wait() - - # 4. Modify Server Mirror (Simulate updates while node is offline) - print("[📝] Updating Server Mirror to v2...") - with open(os.path.join(mirror_dir, "app.py"), "w") as f: - f.write("print('v2')") - - time.sleep(2) - - # 5. Restart Node Alpha - print("[🤖] Node Alpha: Restarting...") - node1_v2 = subprocess.Popen( - ["python3", "-m", "agent_node.main"], - cwd="/app/poc-grpc-agent", - env={**os.environ, "AGENT_NODE_ID": "node-alpha", "CORTEX_SYNC_DIR": node_alpha_sync}, - stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True, bufsize=1 - ) - - # 6. Wait and Check - time.sleep(15) - print("\n[📊] Checking Results...") - node_file = os.path.join(node_alpha_sync, "recovery-session", "app.py") - if os.path.exists(node_file): - with open(node_file, "r") as f: - content = f.read() - print(f"Node Content: {content}") - if content == "print('v2')": - print("[🏆] RECOVERY SUCCESSFUL!") - else: - print("[❌] RECOVERY FAILED - Content mismatch") - else: - print("[❌] RECOVERY FAILED - File not found") - - orchestrator.terminate() - node1_v2.terminate() - print("[✅] Done.") - -if __name__ == "__main__": - run_recovery_test() diff --git a/run_web.sh b/run_web.sh deleted file mode 100644 index d12c5f8..0000000 --- a/run_web.sh +++ /dev/null @@ -1,88 +0,0 @@ -#!/bin/bash - -# Enable strict mode -set -euo pipefail - -# Default to HTTP -USE_HTTPS=false - - - -# Parse arguments -for arg in "$@"; do - if [[ "$arg" == "--https" ]]; then - USE_HTTPS=true - fi -done - -# Resolve script directory -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -AI_HUB_DIR="$(realpath "$SCRIPT_DIR/ai-hub")" -CLIENT_DIR="$(realpath "$SCRIPT_DIR/ui/client-app")" - -AI_HUB_HOST="0.0.0.0" -AI_HUB_PORT="8001" -APP_MODULE="app.main:app" - -echo "--- Cleaning up existing processes ---" -# Kill existing uvicorn processes on the expected port -EXISTING_UVICORN_PID=$(lsof -ti tcp:${AI_HUB_PORT} || true) -if [ 
-n "$EXISTING_UVICORN_PID" ]; then - echo "Killing existing process on port ${AI_HUB_PORT} (PID: $EXISTING_UVICORN_PID)" - kill -9 "$EXISTING_UVICORN_PID" -fi - -# Kill existing React frontend on port 8000 -EXISTING_REACT_PID=$(lsof -ti tcp:8000 || true) -if [ -n "$EXISTING_REACT_PID" ]; then - echo "Killing existing frontend process on port 8000 (PID: $EXISTING_REACT_PID)" - kill -9 "$EXISTING_REACT_PID" -fi -pushd "$AI_HUB_DIR" > /dev/null - -pip install -e . - -SSL_ARGS="" -FRONTEND_ENV="" - -if [ "$USE_HTTPS" = true ]; then - echo "--- Generating self-signed SSL certificates ---" - - # Create a temporary directory for certs - SSL_TEMP_DIR=$(mktemp -d) - SSL_KEYFILE="${SSL_TEMP_DIR}/key.pem" - SSL_CERTFILE="${SSL_TEMP_DIR}/cert.pem" - - # Generate self-signed certificate - openssl req -x509 -nodes -days 1 -newkey rsa:2048 \ - -keyout "$SSL_KEYFILE" \ - -out "$SSL_CERTFILE" \ - -subj "/CN=localhost" - - # Cleanup function to remove certs on exit - cleanup() { - echo "--- Cleaning up SSL certificates ---" - rm -rf "$SSL_TEMP_DIR" - } - trap cleanup EXIT - - SSL_ARGS="--ssl-keyfile $SSL_KEYFILE --ssl-certfile $SSL_CERTFILE" - FRONTEND_ENV="HTTPS=true" -fi - -# New step: Install frontend dependencies -echo "--- Installing frontend dependencies ---" -pushd "$CLIENT_DIR" > /dev/null -npm install -popd > /dev/null - -echo "--- Starting AI Hub Server, React frontend, and backend proxy ---" - -# Run backend and frontend concurrently -npx concurrently \ - --prefix "[{name}]" \ - --names "aihub,tts-frontend" \ - "LOG_LEVEL=DEBUG uvicorn $APP_MODULE --host $AI_HUB_HOST --log-level debug --port $AI_HUB_PORT $SSL_ARGS --reload" \ - "cd $CLIENT_DIR && $FRONTEND_ENV GENERATE_SOURCEMAP=false HOST=0.0.0.0 PORT=8000 npm start" - -popd > /dev/null diff --git a/scripts/get_user.py b/scripts/get_user.py new file mode 100644 index 0000000..1d7a304 --- /dev/null +++ b/scripts/get_user.py @@ -0,0 +1,13 @@ +import os +import sqlite3 + +def get_db(): + db = 
sqlite3.connect("ai-hub/data/ai_hub.db") + cur = db.cursor() + cur.execute("SELECT id, email, role, group_id FROM users") + for row in cur.fetchall(): + print(row) + db.close() + +get_db() + diff --git a/scripts/register_test_nodes.py b/scripts/register_test_nodes.py new file mode 100644 index 0000000..867819a --- /dev/null +++ b/scripts/register_test_nodes.py @@ -0,0 +1,85 @@ +import sys +import os + +# Ensure we can import from app +sys.path.append("/app") + +from app.db.database import SessionLocal +from app.db.models import AgentNode, NodeGroupAccess + +db = SessionLocal() + +nodes = [ + { + "node_id": "test-node-1", + "display_name": "Test Node 1", + "description": "Scaled Pod 1", + "registered_by": "9a333ccd-9c3f-432f-a030-7b1e1284a436", + "invite_token": "cortex-secret-shared-key", + "is_active": True, + "skill_config": { + "shell": {"enabled": True}, + "browser": {"enabled": True}, + "sync": {"enabled": True} + }, + "capabilities": {"shell": "v1", "browser": "playwright-sync-bridge"} + }, + { + "node_id": "test-node-2", + "display_name": "Test Node 2", + "description": "Scaled Pod 2", + "registered_by": "9a333ccd-9c3f-432f-a030-7b1e1284a436", + "invite_token": "cortex-secret-shared-key", + "is_active": True, + "skill_config": { + "shell": {"enabled": True}, + "browser": {"enabled": True}, + "sync": {"enabled": True} + }, + "capabilities": {"shell": "v1", "browser": "playwright-sync-bridge"} + } +] + +group_id = None +# Fetch the user's group to assign access +from app.db.models import User +user = db.query(User).filter(User.id == "9a333ccd-9c3f-432f-a030-7b1e1284a436").first() +if user: + group_id = user.group_id + +try: + for node_data in nodes: + # Check if exists + existing = db.query(AgentNode).filter(AgentNode.node_id == node_data['node_id']).first() + if existing: + # Update + for k, v in node_data.items(): + setattr(existing, k, v) + else: + new_node = AgentNode(**node_data) + db.add(new_node) + + db.commit() + + # Grant Group Access + if group_id: + 
access = db.query(NodeGroupAccess).filter( + NodeGroupAccess.node_id == node_data['node_id'], + NodeGroupAccess.group_id == group_id + ).first() + if not access: + new_access = NodeGroupAccess( + node_id=node_data['node_id'], + group_id=group_id, + access_level="admin", + granted_by="9a333ccd-9c3f-432f-a030-7b1e1284a436" + ) + db.add(new_access) + db.commit() + + print("Successfully registered test nodes directly via ORM!") +except Exception as e: + db.rollback() + print(f"Error registering nodes: {e}") +finally: + db.close() diff --git a/scripts/run_web.sh b/scripts/run_web.sh new file mode 100644 index 0000000..d12c5f8 --- /dev/null +++ b/scripts/run_web.sh @@ -0,0 +1,88 @@ +#!/bin/bash + +# Enable strict mode +set -euo pipefail + +# Default to HTTP +USE_HTTPS=false + + + +# Parse arguments +for arg in "$@"; do + if [[ "$arg" == "--https" ]]; then + USE_HTTPS=true + fi +done + +# Resolve script directory (script now lives in scripts/, so project dirs are one level up) +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +AI_HUB_DIR="$(realpath "$SCRIPT_DIR/../ai-hub")" +CLIENT_DIR="$(realpath "$SCRIPT_DIR/../ui/client-app")" + +AI_HUB_HOST="0.0.0.0" +AI_HUB_PORT="8001" +APP_MODULE="app.main:app" + +echo "--- Cleaning up existing processes ---" +# Kill existing uvicorn processes on the expected port +EXISTING_UVICORN_PID=$(lsof -ti tcp:${AI_HUB_PORT} || true) +if [ -n "$EXISTING_UVICORN_PID" ]; then + echo "Killing existing process on port ${AI_HUB_PORT} (PID: $EXISTING_UVICORN_PID)" + kill -9 "$EXISTING_UVICORN_PID" +fi + +# Kill existing React frontend on port 8000 +EXISTING_REACT_PID=$(lsof -ti tcp:8000 || true) +if [ -n "$EXISTING_REACT_PID" ]; then + echo "Killing existing frontend process on port 8000 (PID: $EXISTING_REACT_PID)" + kill -9 "$EXISTING_REACT_PID" +fi +pushd "$AI_HUB_DIR" > /dev/null + +pip install -e .
+ +SSL_ARGS="" +FRONTEND_ENV="" + +if [ "$USE_HTTPS" = true ]; then + echo "--- Generating self-signed SSL certificates ---" + + # Create a temporary directory for certs + SSL_TEMP_DIR=$(mktemp -d) + SSL_KEYFILE="${SSL_TEMP_DIR}/key.pem" + SSL_CERTFILE="${SSL_TEMP_DIR}/cert.pem" + + # Generate self-signed certificate + openssl req -x509 -nodes -days 1 -newkey rsa:2048 \ + -keyout "$SSL_KEYFILE" \ + -out "$SSL_CERTFILE" \ + -subj "/CN=localhost" + + # Cleanup function to remove certs on exit + cleanup() { + echo "--- Cleaning up SSL certificates ---" + rm -rf "$SSL_TEMP_DIR" + } + trap cleanup EXIT + + SSL_ARGS="--ssl-keyfile $SSL_KEYFILE --ssl-certfile $SSL_CERTFILE" + FRONTEND_ENV="HTTPS=true" +fi + +# New step: Install frontend dependencies +echo "--- Installing frontend dependencies ---" +pushd "$CLIENT_DIR" > /dev/null +npm install +popd > /dev/null + +echo "--- Starting AI Hub Server, React frontend, and backend proxy ---" + +# Run backend and frontend concurrently +npx concurrently \ + --prefix "[{name}]" \ + --names "aihub,tts-frontend" \ + "LOG_LEVEL=DEBUG uvicorn $APP_MODULE --host $AI_HUB_HOST --log-level debug --port $AI_HUB_PORT $SSL_ARGS --reload" \ + "cd $CLIENT_DIR && $FRONTEND_ENV GENERATE_SOURCEMAP=false HOST=0.0.0.0 PORT=8000 npm start" + +popd > /dev/null diff --git a/scripts/system_setup.sh b/scripts/system_setup.sh new file mode 100644 index 0000000..fc2529f --- /dev/null +++ b/scripts/system_setup.sh @@ -0,0 +1,56 @@ +#!/bin/bash + +# Check the operating system +OS="$(uname -s)" + +# --- Setup for macOS --- +if [ "$OS" == "Darwin" ]; then + echo "Detected macOS. Using Homebrew for setup." + + # Check for Homebrew, install if not present + if ! command -v brew &> /dev/null; then + echo "Homebrew not found. Please install it first:" + echo '/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"' + exit 1 + fi + + echo "Updating Homebrew..." + brew update + + echo "Installing Node.js..." 
+ brew install node + +# --- Setup for Linux (Debian/Ubuntu) --- +elif [ "$OS" == "Linux" ]; then + echo "Detected Linux. Assuming Debian/Ubuntu-based system for setup." + + # Update package list + sudo apt-get update + + # Install curl if not installed + if ! command -v curl &> /dev/null; then + sudo apt-get install -y curl + fi + + # Download and run NodeSource setup script for Node.js 18.x + curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash - + + # Install Node.js + sudo apt-get install -y nodejs + +else + echo "Unsupported operating system: $OS" + exit 1 +fi + +# --- Common Steps --- +# Check versions installed +node -v +npm -v +pip install -e ./ai-hub + +# Install concurrently globally +echo "Installing concurrently globally..." +npm install -g concurrently + +echo "Setup complete!" \ No newline at end of file diff --git a/scripts/test_node.py b/scripts/test_node.py new file mode 100644 index 0000000..70fb64e --- /dev/null +++ b/scripts/test_node.py @@ -0,0 +1,36 @@ +import sys +import os +sys.path.append("/app/agent-node") +from agent_node.skills.shell import ShellSkill + +class DummySync: + def get_session_dir(self, sid): + return "/tmp" + +class DummySandbox: + def verify(self, cmd): + return True, "OK" + @property + def policy(self): + return {} + +def main(): + s = ShellSkill(sync_mgr=DummySync()) + class Task: + task_id = "t1" + payload_json = "pwd\n" + session_id = "s1" + trace_id = "tr1" + + def on_event(msg): + print("ON_EVENT:", msg) + + def on_complete(tid, res, tr): + print("ON_COMPLETE:", res) + + s.execute(Task(), DummySandbox(), on_complete, on_event) + import time + time.sleep(2) + +if __name__ == '__main__': + main() diff --git a/scripts/test_proto.py b/scripts/test_proto.py new file mode 100644 index 0000000..e6dc34a --- /dev/null +++ b/scripts/test_proto.py @@ -0,0 +1,11 @@ +import sys +sys.path.append("/app/agent-node") +from protos import agent_pb2 + +event = agent_pb2.ClientTaskMessage(skill_event=agent_pb2.SkillEvent()) 
+print("Created msg") +try: + wrapped = agent_pb2.ClientTaskMessage(browser_event=event) + print("Wrapped!") +except Exception as e: + print("Exception:", type(e), e) diff --git a/scripts/test_ws.js b/scripts/test_ws.js new file mode 100644 index 0000000..08e4269 --- /dev/null +++ b/scripts/test_ws.js @@ -0,0 +1,31 @@ +const WebSocket = require('ws'); + +const ws = new WebSocket('wss://ai.jerxie.com/api/v1/nodes/test-prod-node/stream'); + +ws.on('open', function open() { + console.log('connected, waiting for events...'); +}); + +ws.on('message', function incoming(data) { + const msg = JSON.parse(data); + if (msg.event !== 'heartbeat') { + console.log('[EVENT]', msg.event, JSON.stringify(msg.data)); + } + if (msg.event === 'task_complete' || msg.event === 'task_error') { + process.exit(0); + } +}); + +// Also dispatch a command +setTimeout(async () => { + const http = require('https'); + const body = JSON.stringify({ command: 'uname -a; id; pwd', timeout_ms: 15000 }); + const req = http.request({ + hostname: 'ai.jerxie.com', path: '/api/v1/nodes/test-prod-node/dispatch', + method: 'POST', headers: {'Content-Type': 'application/json', 'X-User-ID': '9a333ccd-9c3f-432f-a030-7b1e1284a436', 'Content-Length': body.length} + }, res => { let data = ''; res.on('data', d => data += d); res.on('end', () => console.log('[DISPATCH]', data)); }); + req.write(body); + req.end(); +}, 1000); + +setTimeout(() => { console.log('timeout'); process.exit(1); }, 20000); diff --git a/setup.sh b/setup.sh deleted file mode 100644 index fc2529f..0000000 --- a/setup.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/bash - -# Check the operating system -OS="$(uname -s)" - -# --- Setup for macOS --- -if [ "$OS" == "Darwin" ]; then - echo "Detected macOS. Using Homebrew for setup." - - # Check for Homebrew, install if not present - if ! command -v brew &> /dev/null; then - echo "Homebrew not found. 
Please install it first:" - echo '/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"' - exit 1 - fi - - echo "Updating Homebrew..." - brew update - - echo "Installing Node.js..." - brew install node - -# --- Setup for Linux (Debian/Ubuntu) --- -elif [ "$OS" == "Linux" ]; then - echo "Detected Linux. Assuming Debian/Ubuntu-based system for setup." - - # Update package list - sudo apt-get update - - # Install curl if not installed - if ! command -v curl &> /dev/null; then - sudo apt-get install -y curl - fi - - # Download and run NodeSource setup script for Node.js 18.x - curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash - - - # Install Node.js - sudo apt-get install -y nodejs - -else - echo "Unsupported operating system: $OS" - exit 1 -fi - -# --- Common Steps --- -# Check versions installed -node -v -npm -v -pip install -e ./ai-hub - -# Install concurrently globally -echo "Installing concurrently globally..." -npm install -g concurrently - -echo "Setup complete!" 
\ No newline at end of file diff --git a/test_node.py b/test_node.py deleted file mode 100644 index 70fb64e..0000000 --- a/test_node.py +++ /dev/null @@ -1,36 +0,0 @@ -import sys -import os -sys.path.append("/app/agent-node") -from agent_node.skills.shell import ShellSkill - -class DummySync: - def get_session_dir(self, sid): - return "/tmp" - -class DummySandbox: - def verify(self, cmd): - return True, "OK" - @property - def policy(self): - return {} - -def main(): - s = ShellSkill(sync_mgr=DummySync()) - class Task: - task_id = "t1" - payload_json = "pwd\n" - session_id = "s1" - trace_id = "tr1" - - def on_event(msg): - print("ON_EVENT:", msg) - - def on_complete(tid, res, tr): - print("ON_COMPLETE:", res) - - s.execute(Task(), DummySandbox(), on_complete, on_event) - import time - time.sleep(2) - -if __name__ == '__main__': - main() diff --git a/test_proto.py b/test_proto.py deleted file mode 100644 index e6dc34a..0000000 --- a/test_proto.py +++ /dev/null @@ -1,11 +0,0 @@ -import sys -sys.path.append("/app/agent-node") -from protos import agent_pb2 - -event = agent_pb2.ClientTaskMessage(skill_event=agent_pb2.SkillEvent()) -print("Created msg") -try: - wrapped = agent_pb2.ClientTaskMessage(browser_event=event) - print("Wrapped!") -except Exception as e: - print("Exception:", type(e), e) diff --git a/test_ws.js b/test_ws.js deleted file mode 100644 index 08e4269..0000000 --- a/test_ws.js +++ /dev/null @@ -1,31 +0,0 @@ -const WebSocket = require('ws'); - -const ws = new WebSocket('wss://ai.jerxie.com/api/v1/nodes/test-prod-node/stream'); - -ws.on('open', function open() { - console.log('connected, waiting for events...'); -}); - -ws.on('message', function incoming(data) { - const msg = JSON.parse(data); - if (msg.event !== 'heartbeat') { - console.log('[EVENT]', msg.event, JSON.stringify(msg.data)); - } - if (msg.event === 'task_complete' || msg.event === 'task_error') { - process.exit(0); - } -}); - -// Also dispatch a command -setTimeout(async () => { - const 
http = require('https'); - const body = JSON.stringify({ command: 'uname -a; id; pwd', timeout_ms: 15000 }); - const req = http.request({ - hostname: 'ai.jerxie.com', path: '/api/v1/nodes/test-prod-node/dispatch', - method: 'POST', headers: {'Content-Type': 'application/json', 'X-User-ID': '9a333ccd-9c3f-432f-a030-7b1e1284a436', 'Content-Length': body.length} - }, res => { let data = ''; res.on('data', d => data += d); res.on('end', () => console.log('[DISPATCH]', data)); }); - req.write(body); - req.end(); -}, 1000); - -setTimeout(() => { console.log('timeout'); process.exit(1); }, 20000); diff --git a/ui/client-app/src/pages/NodesPage.js b/ui/client-app/src/pages/NodesPage.js index a7be194..4d12dd5 100644 --- a/ui/client-app/src/pages/NodesPage.js +++ b/ui/client-app/src/pages/NodesPage.js @@ -13,6 +13,7 @@ const [loading, setLoading] = useState(true); const [error, setError] = useState(null); const [showCreateModal, setShowCreateModal] = useState(false); + const [nodeToDelete, setNodeToDelete] = useState(null); const [newNode, setNewNode] = useState({ node_id: '', display_name: '', description: '', skill_config: { shell: { enabled: true }, browser: { enabled: true }, sync: { enabled: true } } }); const [expandedTerminals, setExpandedTerminals] = useState({}); // node_id -> boolean const [expandedNodes, setExpandedNodes] = useState({}); // node_id -> boolean @@ -111,10 +112,11 @@ } }; - const handleDeleteNode = async (nodeId) => { - if (!window.confirm(`Are you sure you want to deregister node ${nodeId}? 
This will remove all access grants.`)) return; + const confirmDeleteNode = async () => { + if (!nodeToDelete) return; try { - await adminDeleteNode(nodeId); + await adminDeleteNode(nodeToDelete); + setNodeToDelete(null); fetchData(); } catch (err) { alert(err.message); @@ -140,6 +142,91 @@ } }; + // ─── Node Name Hover Tooltip ───────────────────────────────────────────── + const NodeNameWithTooltip = ({ node }) => { + const liveCaps = meshStatus[node.node_id]?.caps || {}; + const caps = { ...(node.capabilities || {}), ...liveCaps }; + + const gpu = caps.gpu; + const arch = caps.arch === 'aarch64' ? 'arm64' : caps.arch; + const os = caps.os; + const osRelease = caps.os_release; + const hasCaps = caps && Object.keys(caps).length > 0; + + const gpuColor = !gpu || gpu === 'none' + ? 'bg-gray-100 dark:bg-gray-700 text-gray-500' + : gpu === 'apple-silicon' + ? 'bg-purple-100 dark:bg-purple-900/50 text-purple-600 dark:text-purple-300' + : 'bg-green-100 dark:bg-green-900/50 text-green-700 dark:text-green-300'; + const gpuIcon = (!gpu || gpu === 'none') ? '—' : gpu === 'apple-silicon' ? '🍎' : '🟢'; + const gpuText = !gpu || gpu === 'none' ? 'No GPU' : gpu === 'apple-silicon' ? 'Apple GPU (Metal)' : gpu.split(',')[0].trim(); + + const osIcon = os === 'linux' ? '🐧' : os === 'darwin' ? '🍏' : os === 'windows' ? '🪟' : '💻'; + const archIsArm = arch && (arch.startsWith('arm') || arch === 'arm64'); + + return ( +
+ {/* Node Name — the hover trigger */} +

+ {node.display_name} +

+ + {/* Tooltip — appears below the name */} +
+ {/* Arrow */} +
+ + {/* Header */} +
+
{node.display_name}
+
{node.node_id}
+ {node.description && ( +
{node.description}
+ )} +
+ + {hasCaps ? ( +
+ {/* GPU Row */} +
+ GPU + + {gpuIcon} {gpuText} + +
+ {/* OS Row */} + {os && ( +
+ OS + + {osIcon} {os === 'darwin' ? 'macOS' : os === 'linux' ? `Linux ${osRelease || ''}`.trim() : os} + +
+ )} + {/* Arch Row */} + {arch && ( +
+ Arch + + {archIsArm ? '🔩' : '🔲'} {arch} + +
+ )} + {/* Registered Owner */} +
+ Owner + {node.registered_by || 'system'} +
+
+ ) : ( +
Capabilities available when node connects
+ )} +
+
+ ); + }; + // ───────────────────────────────────────────────────────────────────────── + const NodeHealthMetrics = ({ node, compact = false }) => { const live = meshStatus[node.node_id]; const stats = live?.stats || node.stats; @@ -278,13 +365,13 @@ return (
{/* Header */} -
+
-

+

🚀 Agent Node Mesh

-

+

{isAdmin ? "Manage distributed execution nodes and monitor live health." : "Monitor the health and availability of your accessible agent nodes."} @@ -316,7 +403,7 @@

{/* Main Content */} -
+
{loading ? (
@@ -329,75 +416,88 @@
{nodes.map(node => ( -
- {/* Top Row: Basic Info & Actions */} -
-
-
-
-
-
- - {meshStatus[node.node_id]?.status || node.last_status || 'offline'} - -
-

{node.display_name}

- -
-
- ID: {node.node_id} -
+
+ {/* ── Top Row (Mobile-friendly) ─────────────────────────── */} +
+ + {/* Row 1: Status pill + Name + (desktop) CPU/RAM */} +
+ {/* Status dot */} +
+
+ + {meshStatus[node.node_id]?.status || node.last_status || 'offline'} + +
+ + {/* Name (tooltip) — takes remaining width */} +
+ +
ID: {node.node_id}
+
+ + {/* Desktop-only CPU/RAM inline */} +
+
-
- {/* Active Toggle Switch */} -
- - -
+ {/* Mobile-only: CPU/RAM row below name */} +
+ +
+ {/* Row 2: Action toolbar */} +
+ {/* Active/Disabled pill — slim on mobile */} - - -
- + + {/* Icon action buttons */} +
+ + + + {isAdmin && ( + <> +
+ + + )} +
@@ -506,31 +606,37 @@ ))}
- {/* Event Timeline (Execution Live Bus) */} -
-
-

- - Execution Live Bus -

- Subscribed to task stream... + {/* Event Timeline (Execution Live Bus) — Debug/Tracing, collapsed by default */} +
+ +
+ + Execution Live Bus + Debug +
+ + + +
+ +
+
+ {recentEvents.length === 0 && ( +
Listening for mesh events...
+ )} + {recentEvents.map((evt, i) => ( +
+ [{evt.timestamp?.split('T')[1].split('.')[0]}] + {evt.node_id?.slice(0, 8)} + + {evt.label || evt.event}: + {JSON.stringify(evt.data)} + +
+ ))} +
-
- {recentEvents.length === 0 && ( -
Listening for mesh events...
- )} - {recentEvents.map((evt, i) => ( -
- [{evt.timestamp?.split('T')[1].split('.')[0]}] - {evt.node_id?.slice(0, 8)} - - {evt.label || evt.event}: - {JSON.stringify(evt.data)} - -
- ))} -
-
+
)}
@@ -586,6 +692,39 @@
)} + + {/* DELETE NODE MODAL */} + {nodeToDelete && ( +
+
+
+
+ + + +
+

Deregister Node?

+

+ Are you sure you want to completely deregister node {nodeToDelete}? This will permanently remove all access grants for this node. +

+
+ + +
+
+
+
+ )} ); };