diff --git a/.agent/workflows/deploy_to_production.md b/.agent/workflows/deploy_to_production.md index e1d0a9b..3d52b12 100644 --- a/.agent/workflows/deploy_to_production.md +++ b/.agent/workflows/deploy_to_production.md @@ -7,7 +7,7 @@ **MAIN KNOWLEDGE POINT:** Agents and Users should refer to `.agent/workflows/deployment_reference.md` to understand the full proxy and architecture layout prior to running production debugging. -1. **Automated Secret Fetching**: The `deploy_prod.sh` script will automatically pull the production password from the GitBucket Secret Vault if the `GITBUCKET_TOKEN` is available in `/app/.env.gitbucket`. +1. **Automated Secret Fetching**: The `scripts/remote_deploy.sh` script will automatically pull the production password from the GitBucket Secret Vault if the `GITBUCKET_TOKEN` is available in `/app/.env.gitbucket`. 2. **Sync**: Sync local codebase to `/tmp/cortex-hub/` on the server. 3. **Proto Regeneration**: If `ai-hub/app/protos/agent.proto` has changed, the agent must regenerate the Python stubs: ```bash @@ -16,13 +16,13 @@ cd /app/agent-node && python3 -m grpc_tools.protoc -Iprotos --python_out=. --grpc_python_out=. protos/agent.proto ``` > **CAVEAT**: Because you changed the protobuf interface, all clients must be updated as well! Remember to rebuild and restart all agent nodes (like `cortex-test-1` and `cortex-test-2`) whenever the `.proto` files change so they have the latest interface stubs. -4. **Migrate & Rebuild**: Overwrite production files and run `bash local_rebuild.sh` on the server. +4. **Migrate & Rebuild**: Overwrite production files and run `bash scripts/local_rebuild.sh` on the server. 5. **Post-Deployment Health Check**: Perform a backend connectivity check (Python Trick). Only use `/frontend_tester` as a last resort if UI behavior is suspect. ### Automated Command ```bash # This script handles authentication, syncing, and remote rebuilding. 
-bash /app/deploy_prod.sh +bash /app/scripts/remote_deploy.sh ``` ### Post-Deployment (MANDATORY) diff --git a/.agent/workflows/deployment_reference.md b/.agent/workflows/deployment_reference.md index 7991b2d..f8d5926 100644 --- a/.agent/workflows/deployment_reference.md +++ b/.agent/workflows/deployment_reference.md @@ -42,30 +42,40 @@ ### The Scripts -1. **`deploy_remote.sh` (The Triggger)** +1. **`scripts/remote_deploy.sh` (The Trigger)** * **Where it runs**: *Locally on your dev machine* * **What it does**: 1. Uses `rsync` over SSH (`sshpass`) to securely copy local workspace (`/app/`) changes onto the production server `192.168.68.113` under a temporary `/tmp/` directory. 2. It specifically excludes massive or unnecessary folders (`.git`, `node_modules`, `__pycache__`). 3. Overwrites the destination project folder (`/home/coder/project/cortex-hub`) taking care to retain system permissions. - 4. SSH triggers the `deploy_local.sh` script centrally on the production server. + 4. Triggers the `scripts/local_rebuild.sh` script on the production server over SSH. -2. **`deploy_local.sh` (The Builder)** +2. **`scripts/local_rebuild.sh` (The Builder)** * **Where it runs**: *Server 192.168.68.113* * **What it does**: - 1. Destroys the old running containers. - 2. Triggers Docker Compose (`docker compose up -d --build --remove-orphans`) to rebuild the application context and discard deprecated container setups (e.g., when the UI shifted into Nginx). - 3. Performs automated database migrations running parallel idempotent logic (`app/db/migrate.py`) via the `Uvicorn` startup lifecycle. + 1. Detects the deployment environment by checking for override files in `deployment/`. + 2. Destroys the old running containers. + 3.
Triggers Docker Compose with multiple layers: + - `-f docker-compose.yml` (Generic common config) + - `-f deployment/jerxie-prod/docker-compose.production.yml` (Jerxie-specific overrides) + - `-f deployment/test-nodes/docker-compose.test-nodes.yml` (Optional test nodes) + 4. Rebuilds and starts the containers. + 5. Performs automated database migrations by running parallel, idempotent migration logic via the `Uvicorn` startup lifecycle. ### How to Release -You or an Agent must safely pass the authentication key into the script from the command line. An Agent should **always prompt the USER** for this password before running it: +You or an Agent must safely pass the authentication key into the script from the command line: ```bash -REMOTE_PASS='' bash /app/deploy_remote.sh +REMOTE_PASS='' bash scripts/remote_deploy.sh ``` ---- +## 3. Decoupled Folder Structure -## 3. Automation Logic for Agents (.agent Workflows) +To maintain a clean repository and allow for generic onboarding while keeping Jerxie's specific production requirements separate, the following layout is enforced: + +* **`docker-compose.yml`**: Baseline configuration. Uses local Docker volumes and generic localhost endpoints. Default for new users. +* **`deployment/jerxie-prod/`**: Contains Jerry's specific production overrides (NFS volume on `192.168.68.90`, SSL/OIDC endpoints for `ai.jerxie.com`). +* **`deployment/test-nodes/`**: Contains internal test node definitions for simulation. +* **`scripts/`**: Centralized automation scripts for remote sync, local rebuilding, and node registration. For any automated AI attempting to debug or push changes: * **Primary Source of Truth**: This file (`.agent/workflows/deployment_reference.md`) defines the architecture rules. * **Subflow - Deployment:** Look at `.agent/workflows/deploy_to_production.md` for explicit directory movement maps and scripting.
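The layered Compose invocation that `scripts/local_rebuild.sh` performs can be sketched as follows. This is a minimal illustration of the override-detection step, not the script's actual code: the helper name `build_compose_args` is hypothetical, and the file paths are taken from the reference doc above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the compose-file layering described for
# scripts/local_rebuild.sh. build_compose_args is an illustrative
# helper name, not the real script's API.
build_compose_args() {
  # The generic baseline config is always included.
  local args="-f docker-compose.yml"
  # Jerxie-specific production overrides, when present on the host.
  if [ -f "deployment/jerxie-prod/docker-compose.production.yml" ]; then
    args="$args -f deployment/jerxie-prod/docker-compose.production.yml"
  fi
  # Optional internal test nodes for mesh simulation.
  if [ -f "deployment/test-nodes/docker-compose.test-nodes.yml" ]; then
    args="$args -f deployment/test-nodes/docker-compose.test-nodes.yml"
  fi
  printf '%s\n' "$args"
}

# Usage on the server would then look like:
#   docker compose $(build_compose_args) up -d --build --remove-orphans
```

Because later `-f` files override earlier ones key-by-key, the baseline `docker-compose.yml` stays generic while `deployment/jerxie-prod/` re-targets the same services at the production endpoints and NFS volume.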
diff --git a/deploy_prod.sh b/deploy_prod.sh deleted file mode 100755 index f146992..0000000 --- a/deploy_prod.sh +++ /dev/null @@ -1,93 +0,0 @@ -#!/bin/bash -# Description: Automates deployment from the local environment to the production host 192.168.68.113 - -# Credentials - Can be set via ENV or fetched from GitBucket -HOST="${REMOTE_HOST}" -USER="${REMOTE_USER}" -PASS="${REMOTE_PASS}" - -# If credentials are missing, try to fetch from GitBucket Private Snippet -if [ -z "$PASS" ] || [ -z "$HOST" ]; then - # Load token/id from local env if present - if [ -f "/app/.env.gitbucket" ]; then - source "/app/.env.gitbucket" - fi - - GITBUCKET_TOKEN="${GITBUCKET_TOKEN}" - SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}" - - if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then - echo "Secrets not provided in environment. Attempting to fetch from GitBucket..." - - TMP_SECRETS=$(mktemp -d) - if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then - if [ -f "$TMP_SECRETS/.env.production" ]; then - source "$TMP_SECRETS/.env.production" - HOST="${REMOTE_HOST:-$HOST}" - USER="${REMOTE_USER:-$USER}" - PASS="${REMOTE_PASSWORD:-$PASS}" - echo "Successfully loaded credentials from GitBucket." - # Strip potential carriage returns - HOST=$(echo "$HOST" | tr -d '\r') - USER=$(echo "$USER" | tr -d '\r') - PASS=$(echo "$PASS" | tr -d '\r') - fi - else - echo "Failed to fetch secrets from GitBucket." - fi - rm -rf "$TMP_SECRETS" - fi -fi - -# Fallback defaults if still not set -HOST="${HOST:-192.168.68.113}" -USER="${USER:-axieyangb}" - -# System Paths -REMOTE_TMP="/tmp/cortex-hub/" -REMOTE_PROJ="/home/coder/project/cortex-hub" - -if [ -z "$PASS" ]; then - echo "Error: REMOTE_PASS not found and could not be fetched from GitBucket." - echo "Please set REMOTE_PASS or GITBUCKET_TOKEN environment variables." - exit 1 -fi - -echo "Checking if sshpass is installed..." -if ! 
command -v sshpass &> /dev/null; then - echo "sshpass could not be found, installing..." - sudo apt-get update && sudo apt-get install -y sshpass -fi - -# 1. Sync local codebase to temporary directory on remote server -echo "Syncing local files to production [USER: $USER, HOST: $HOST]..." -sshpass -p "$PASS" rsync -avz \ - --exclude '.git' \ - --exclude 'node_modules' \ - --exclude 'ui/client-app/node_modules' \ - --exclude 'ui/client-app/build' \ - --exclude 'ai-hub/__pycache__' \ - --exclude '.venv' \ - -e "ssh -o StrictHostKeyChecking=no" /app/ "$USER@$HOST:$REMOTE_TMP" - -if [ $? -ne 0 ]; then - echo "Rsync failed! Exiting." - exit 1 -fi - -# 2. Copy the synced files into the actual project directory replacing the old ones -echo "Overwriting production project files..." -sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF - echo '$PASS' | sudo -S rm -rf $REMOTE_PROJ/nginx.conf - echo '$PASS' | sudo -S cp -r ${REMOTE_TMP}* $REMOTE_PROJ/ - echo '$PASS' | sudo -S chown -R $USER:$USER $REMOTE_PROJ -EOF - -# 3. Rebuild and restart services remotely -echo "Deploying on production server..." -sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF - cd $REMOTE_PROJ - echo '$PASS' | sudo -S bash local_rebuild.sh -EOF - -echo "Done! The new code is deployed to $HOST." diff --git a/deploy_test_nodes.sh b/deploy_test_nodes.sh deleted file mode 100755 index ad5d8f8..0000000 --- a/deploy_test_nodes.sh +++ /dev/null @@ -1,66 +0,0 @@ -#!/bin/bash -# deploy_test_nodes.sh: Spawns N test agent nodes on the production host for mesh testing. 
-# Usage: ./deploy_test_nodes.sh [COUNT] (default 2) - -COUNT=${1:-2} -HOST="${REMOTE_HOST:-192.168.68.113}" -USER="${REMOTE_USER:-axieyangb}" -PASS="${REMOTE_PASS}" - -# Load credentials from GitBucket if not in environment -if [ -z "$PASS" ]; then - if [ -f "/app/.env.gitbucket" ]; then source "/app/.env.gitbucket"; fi - GITBUCKET_TOKEN="${GITBUCKET_TOKEN}" - SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}" - if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then - TMP_SECRETS=$(mktemp -d) - if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then - source "$TMP_SECRETS/.env.production" - PASS="${REMOTE_PASSWORD:-$PASS}" - fi - rm -rf "$TMP_SECRETS" - fi -fi - -if [ -z "$PASS" ]; then echo "Error: REMOTE_PASS not found."; exit 1; fi - -REMOTE_PROJ="/home/coder/project/cortex-hub" -AGENT_DIR="$REMOTE_PROJ/agent-node" - -echo "๐Ÿš€ Deploying $COUNT test nodes to $HOST..." - -# We use docker run instead of compose to allow scaling with unique names/IDs easily -# without modifying the persistent docker-compose.yml on the server. - -sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF - # 1. Ensure the base image is built - cd $AGENT_DIR - echo '$PASS' | sudo -S docker build -t agent-node-base . - - # 2. Cleanup any previous test nodes - echo "Cleaning up old test nodes..." - echo '$PASS' | sudo -S docker ps -a --filter "name=cortex-test-node-" -q | xargs -r sudo docker rm -f - - # 3. Spawn N nodes - for i in \$(seq 1 $COUNT); do - NODE_ID="test-node-\$i" - CONTAINER_NAME="cortex-test-node-\$i" - - echo "[+] Starting \$CONTAINER_NAME..." 
- - echo '$PASS' | sudo -S docker run -d \\ - --name "\$CONTAINER_NAME" \\ - --network cortex-hub_default \\ - -e AGENT_NODE_ID="\$NODE_ID" \\ - -e AGENT_NODE_DESC="Scalable Test Node #\$i" \\ - -e GRPC_ENDPOINT="ai_hub_service:50051" \\ - -e AGENT_SECRET_KEY="cortex-secret-shared-key" \\ - -e AGENT_TLS_ENABLED="false" \\ - agent-node-base - done - - echo "โœ… Spawning complete. Currently running test nodes:" - echo '$PASS' | sudo -S docker ps --filter "name=cortex-test-node-" -EOF - -echo "โœจ Done! Check https://ai.jerxie.com/nodes to see the new nodes join the mesh." diff --git a/deployment/jerxie-prod/docker-compose.production.yml b/deployment/jerxie-prod/docker-compose.production.yml new file mode 100644 index 0000000..b5f137c --- /dev/null +++ b/deployment/jerxie-prod/docker-compose.production.yml @@ -0,0 +1,25 @@ +# Production Override for Jerxie AI Cortex Hub +# Specific to 192.168.68.113 environment with NFS storage on 192.168.68.90 + +version: '3.8' + +services: + ai-hub: + environment: + - HUB_PUBLIC_URL=https://ai.jerxie.com + - HUB_GRPC_ENDPOINT=ai.jerxie.com:443 + - OIDC_CLIENT_ID=cortex-server + - OIDC_CLIENT_SECRET=aYc2j1lYUUZXkBFFUndnleZI + - OIDC_SERVER_URL=https://auth.jerxie.com + - OIDC_REDIRECT_URI=https://ai.jerxie.com/api/v1/users/login/callback + - SUPER_ADMINS=axieyangb@gmail.com,jerxie.app@gmail.com + - SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + +# Redirect the persistent data to the NFS volume +volumes: + ai_hub_data: + driver: local + driver_opts: + type: "nfs" + o: "addr=192.168.68.90,rw" + device: ":/volume1/docker/ai-hub/data" diff --git a/deployment/test-nodes/docker-compose.test-nodes.yml b/deployment/test-nodes/docker-compose.test-nodes.yml new file mode 100644 index 0000000..45ad0d0 --- /dev/null +++ b/deployment/test-nodes/docker-compose.test-nodes.yml @@ -0,0 +1,41 @@ +# docker-compose.test-nodes.yml +# Internal testing setup for multiple Agent Nodes (e.g. Test Node 1, Test Node 2). +# This is NOT meant for end-user deployment. 
+services: + test-node-1: + build: + context: ./agent-node + container_name: cortex-test-1 + environment: + - AGENT_NODE_ID=test-node-1 + - AGENT_NODE_DESC=Primary Test Node + - GRPC_ENDPOINT=ai_hub_service:50051 + - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + - AGENT_AUTH_TOKEN=cortex-secret-shared-key + - AGENT_TLS_ENABLED=false + - DEBUG_GRPC=true + restart: always + cap_add: + - NET_ADMIN + privileged: true + volumes: + - ./skills:/app/node_skills:ro + + test-node-2: + build: + context: ./agent-node + container_name: cortex-test-2 + environment: + - AGENT_NODE_ID=test-node-2 + - AGENT_NODE_DESC=Secondary Test Node + - GRPC_ENDPOINT=ai_hub_service:50051 + - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + - AGENT_AUTH_TOKEN=ysHjZIRXeWo-YYK6EWtBsIgJ4uNBihSnZMtt0BQW3eI + - AGENT_TLS_ENABLED=false + - DEBUG_GRPC=true + restart: always + cap_add: + - NET_ADMIN + privileged: true + volumes: + - ./skills:/app/node_skills:ro diff --git a/docker-compose.test-nodes.yml b/docker-compose.test-nodes.yml deleted file mode 100644 index 45ad0d0..0000000 --- a/docker-compose.test-nodes.yml +++ /dev/null @@ -1,41 +0,0 @@ -# docker-compose.test-nodes.yml -# Internal testing setup for multiple Agent Nodes (e.g. Test Node 1, Test Node 2). -# This is NOT meant for end-user deployment. 
-services: - test-node-1: - build: - context: ./agent-node - container_name: cortex-test-1 - environment: - - AGENT_NODE_ID=test-node-1 - - AGENT_NODE_DESC=Primary Test Node - - GRPC_ENDPOINT=ai_hub_service:50051 - - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI - - AGENT_AUTH_TOKEN=cortex-secret-shared-key - - AGENT_TLS_ENABLED=false - - DEBUG_GRPC=true - restart: always - cap_add: - - NET_ADMIN - privileged: true - volumes: - - ./skills:/app/node_skills:ro - - test-node-2: - build: - context: ./agent-node - container_name: cortex-test-2 - environment: - - AGENT_NODE_ID=test-node-2 - - AGENT_NODE_DESC=Secondary Test Node - - GRPC_ENDPOINT=ai_hub_service:50051 - - AGENT_SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI - - AGENT_AUTH_TOKEN=ysHjZIRXeWo-YYK6EWtBsIgJ4uNBihSnZMtt0BQW3eI - - AGENT_TLS_ENABLED=false - - DEBUG_GRPC=true - restart: always - cap_add: - - NET_ADMIN - privileged: true - volumes: - - ./skills:/app/node_skills:ro diff --git a/docker-compose.yml b/docker-compose.yml index a380d0a..30cf167 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,7 +1,7 @@ version: '3.8' services: - # Unified Frontend and Nginx Gateway (Production Build) + # Unified Frontend and Nginx Gateway ai-frontend: build: ./ui/client-app container_name: ai_unified_frontend @@ -15,8 +15,6 @@ limits: cpus: '0.50' memory: 512M - reservations: - memory: 128M depends_on: - ai-hub @@ -30,15 +28,14 @@ environment: - PATH_PREFIX=/api/v1 - HUB_API_URL=http://localhost:8000 - - HUB_PUBLIC_URL=https://ai.jerxie.com - - HUB_GRPC_ENDPOINT=ai.jerxie.com:443 + - HUB_PUBLIC_URL=http://localhost:8002 + - HUB_GRPC_ENDPOINT=localhost:50051 - OIDC_CLIENT_ID=cortex-server - - OIDC_CLIENT_SECRET=aYc2j1lYUUZXkBFFUndnleZI - - OIDC_SERVER_URL=https://auth.jerxie.com - - OIDC_REDIRECT_URI=https://ai.jerxie.com/api/v1/users/login/callback - - SUPER_ADMINS=axieyangb@gmail.com,jerxie.app@gmail.com - # IMPORTANT: Agent nodes must have AGENT_SECRET_KEY set to this same value - - SECRET_KEY=aYc2j1lYUUZXkBFFUndnleZI + 
- OIDC_CLIENT_SECRET=change-me-at-runtime + - OIDC_SERVER_URL=http://localhost:8080 # Placeholder for generic setup + - OIDC_REDIRECT_URI=http://localhost:8002/api/v1/users/login/callback + - SUPER_ADMINS=admin@example.com + - SECRET_KEY=default-insecure-key - DEBUG_GRPC=true volumes: - ai_hub_data:/app/data:rw @@ -49,16 +46,8 @@ limits: cpus: '1.0' memory: 1G - reservations: - memory: 256M -# Define the named volume for the AI hub's data +# Generic named volume using local driver volumes: ai_hub_data: driver: local - driver_opts: - type: "nfs" - # IMPORTANT: Replace the IP address below with your NFS server's actual IP - o: "addr=192.168.68.90,rw" - # IMPORTANT: Replace this path with the correct directory on your NFS server - device: ":/volume1/docker/ai-hub/data" \ No newline at end of file diff --git a/local_rebuild.sh b/local_rebuild.sh deleted file mode 100755 index f965134..0000000 --- a/local_rebuild.sh +++ /dev/null @@ -1,68 +0,0 @@ -#!/bin/bash - -# --- Deployment Script for AI Hub --- -# This script is designed to automate the deployment of the AI Hub application -# using Docker Compose. It's intended to be run on the production server. -# -# The script performs the following actions: -# 1. Defines project-specific configuration variables. -# 2. **Installs Docker Compose if it's not found on the system.** -# 3. Navigates to the correct project directory. -# 4. Stops and removes any currently running Docker containers for the project. -# 5. Pulls the latest Docker images from a registry (if applicable). -# 6. Builds the new Docker images from the source code. -# 7. Starts the new containers in detached mode, with production settings. -# 8. Performs cleanup of old, unused Docker images. - -# --- Configuration --- -# Set the project directory to the directory where this script is located. 
-PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)" - - -# --- Helper Function --- -# Find the correct docker-compose command (modern plugin or standalone v1) -if docker compose version &> /dev/null; then - DOCKER_CMD="docker compose" -else - DOCKER_CMD="docker-compose" -fi - -# --- Script Execution --- -echo "๐Ÿš€ Starting AI Hub deployment process..." - -# Navigate to the project directory. Exit if the directory doesn't exist. -cd "$PROJECT_DIR" || { echo "Error: Project directory '$PROJECT_DIR' not found. Exiting."; exit 1; } - -# Stop and remove any existing containers to ensure a clean deployment. -echo "๐Ÿ›‘ Stopping and removing old Docker containers and networks..." -sudo $DOCKER_CMD down || true - -# Pull the latest images if they are hosted on a registry. -# echo "๐Ÿ“ฅ Pulling latest Docker images..." -# sudo $DOCKER_CMD pull - -# Build new images and start the services. -echo "๐Ÿ—๏ธ Building and starting new containers..." - -COMPOSE_FILES="-f docker-compose.yml" -if [ -f "docker-compose.test-nodes.yml" ]; then - echo "๐Ÿ”— Including Internal Test Nodes in deployment..." - COMPOSE_FILES="$COMPOSE_FILES -f docker-compose.test-nodes.yml" -fi - -# We use --remove-orphans only if we are SURE we want to clean up everything not in these files. -sudo $DOCKER_CMD $COMPOSE_FILES up -d --build --remove-orphans > /tmp/deploy_log.txt 2>&1 -echo "โœ… Containers started! Checking status..." -cat /tmp/deploy_log.txt -sudo docker ps --filter "name=ai_" -sudo docker ps --filter "name=cortex-" - -echo "โœ… Deployment complete! The AI Hub application is now running." - -# --- Post-Deployment Cleanup --- -echo "๐Ÿงน Cleaning up unused Docker resources..." - -# Remove dangling images (images without a tag). -sudo docker image prune -f || true - -echo "โœจ Cleanup finished." 
\ No newline at end of file diff --git a/request.json b/request.json deleted file mode 100644 index e8de3d6..0000000 --- a/request.json +++ /dev/null @@ -1,5 +0,0 @@ -{ - "name": "https_listener", - "yaml": "'@type': type.googleapis.com/envoy.config.listener.v3.Listener\naddress:\n socketAddress:\n address: 0.0.0.0\n portValue: 10001\nfilterChains:\n- filterChainMatch:\n serverNames:\n - pcb.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n name: pcb_service\n virtualHosts:\n - domains:\n - pcb.jerxie.com\n name: pcb_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _pcb_server\n timeout: 0s\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: pcb_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - monitor.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - monitor.jerxie.com\n name: monitor_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _monitor_server\n timeout: 0s\n statPrefix: ingress_http\n 
upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: monitor_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - ai.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n name: ai_unified_service\n virtualHosts:\n - domains:\n - ai.jerxie.com\n name: ai_service\n routes:\n - match:\n prefix: /agent.\n route:\n cluster: _ai_agent_orchestrator\n maxStreamDuration:\n grpcTimeoutHeaderMax: 0s\n timeout: 0s\n - match:\n prefix: /\n route:\n cluster: _ai_unified_server\n timeout: 0s\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n alpnProtocols:\n - h2\n - http/1.1\n tlsCertificateSdsSecretConfigs:\n - name: ai_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - container.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n 
'@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - container.jerxie.com\n name: container_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _portainer_ui\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: container_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - password.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - password.jerxie.com\n name: password_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _bitwarden_service\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: password_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - docker.jerxie.com\n - docker.local\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n 
- name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - docker.jerxie.com\n name: docker_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _docker_registry\n timeout: 0s\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: docker_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - video.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - video.jerxie.com\n name: docker_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _nas_video\n timeout: 0s\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: video_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - audio.jerxie.com\n - audio.local\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n 
httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - audio.jerxie.com\n - audio.local\n name: docker_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _nas_audio\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: audio_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - gitbucket.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - gitbucket.jerxie.com\n name: gitbucket_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _git_bucket\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: gitbucket_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - photo.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n 
httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - photo.jerxie.com\n name: photo_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _nas_photo\n timeout: 0s\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: photo_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - note.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - note.jerxie.com\n name: note_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _nas_note\n statPrefix: ingress_http\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: note_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - home.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: 
envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n mergeSlashes: true\n normalizePath: true\n requestTimeout: 300s\n routeConfig:\n virtualHosts:\n - domains:\n - home.jerxie.com\n name: home_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _homeassistant_service\n statPrefix: ingress_http\n streamIdleTimeout: 300s\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: home_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - auth.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - auth.jerxie.com\n name: auth_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _auth_server\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: auth_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - nas\n - nas.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n 
typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n maxRequestHeadersKb: 96\n routeConfig:\n virtualHosts:\n - domains:\n - nas.jerxie.com\n - nas:10001\n name: docker_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _nas_service\n timeout: 0s\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: nas_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - code.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2\n name: oidc_oauth2_config_code-server\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication\n name: oidc_jwt_authn_config\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\n name: oidc_authz_lua\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - code.jerxie.com\n name: code_service\n 
routes:\n - match:\n prefix: /\n route:\n cluster: _code_server\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n name: code_server_filter_chain\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: code_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - envoy.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2\n name: oidc_oauth2_config_envoy-server\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication\n name: oidc_jwt_authn_config\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\n name: oidc_authz_lua\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - envoy.jerxie.com\n name: envoy_service\n routes:\n - match:\n prefix: /\n route:\n cluster: _envoy_server\n statPrefix: ingress_http_envoy\n upgradeConfigs:\n - upgradeType: websocket\n name: envoy_server_filter_chain\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n 
tlsCertificateSdsSecretConfigs:\n - name: envoy_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\n- filterChainMatch:\n serverNames:\n - openclaw.jerxie.com\n filters:\n - name: envoy.filters.network.http_connection_manager\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n httpFilters:\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2\n name: oidc_oauth2_config_openclaw\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication\n name: oidc_jwt_authn_config\n - configDiscovery:\n configSource:\n ads: {}\n resourceApiVersion: V3\n typeUrls:\n - type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua\n name: oidc_authz_lua\n - name: envoy.filters.http.router\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router\n routeConfig:\n virtualHosts:\n - domains:\n - openclaw.jerxie.com\n name: openclaw_service\n routes:\n - match:\n prefix: /webhooks\n route:\n cluster: _openclaw_server\n typedPerFilterConfig:\n oidc_authz_lua:\n '@type': type.googleapis.com/envoy.config.route.v3.FilterConfig\n disabled: true\n oidc_jwt_authn_config:\n '@type': type.googleapis.com/envoy.config.route.v3.FilterConfig\n disabled: true\n oidc_oauth2_config_openclaw:\n '@type': type.googleapis.com/envoy.config.route.v3.FilterConfig\n disabled: true\n - match:\n prefix: /\n route:\n cluster: _openclaw_server\n statPrefix: ingress_http\n upgradeConfigs:\n - upgradeType: websocket\n name: openclaw_server_filter_chain\n transportSocket:\n name: envoy.transport_sockets.tls\n typedConfig:\n '@type': 
type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext\n commonTlsContext:\n tlsCertificateSdsSecretConfigs:\n - name: openclaw_jerxie_com\n sdsConfig:\n apiConfigSource:\n apiType: GRPC\n grpcServices:\n - envoyGrpc:\n clusterName: xds_cluster\n transportApiVersion: V3\n resourceApiVersion: V3\nlistenerFilters:\n- name: envoy.filters.listener.tls_inspector\n typedConfig:\n '@type': type.googleapis.com/envoy.extensions.filters.listener.tls_inspector.v3.TlsInspector\nname: https_listener",
-  "upsert": true
-}
diff --git a/scripts/deploy_test_nodes.sh b/scripts/deploy_test_nodes.sh
new file mode 100755
index 0000000..ad5d8f8
--- /dev/null
+++ b/scripts/deploy_test_nodes.sh
@@ -0,0 +1,66 @@
+#!/bin/bash
+# deploy_test_nodes.sh: Spawns N test agent nodes on the production host for mesh testing.
+# Usage: ./deploy_test_nodes.sh [COUNT] (default 2)
+
+COUNT=${1:-2}
+HOST="${REMOTE_HOST:-192.168.68.113}"
+USER="${REMOTE_USER:-axieyangb}"
+PASS="${REMOTE_PASS}"
+
+# Load credentials from GitBucket if not in environment
+if [ -z "$PASS" ]; then
+  if [ -f "/app/.env.gitbucket" ]; then source "/app/.env.gitbucket"; fi
+  GITBUCKET_TOKEN="${GITBUCKET_TOKEN}"
+  SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}"
+  if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then
+    TMP_SECRETS=$(mktemp -d)
+    if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then
+      source "$TMP_SECRETS/.env.production"
+      PASS="${REMOTE_PASSWORD:-$PASS}"
+    fi
+    rm -rf "$TMP_SECRETS"
+  fi
+fi
+
+if [ -z "$PASS" ]; then echo "Error: REMOTE_PASS not found."; exit 1; fi
+
+REMOTE_PROJ="/home/coder/project/cortex-hub"
+AGENT_DIR="$REMOTE_PROJ/agent-node"
+
+echo "🚀 Deploying $COUNT test nodes to $HOST..."
+
+# We use docker run instead of compose to allow scaling with unique names/IDs easily
+# without modifying the persistent docker-compose.yml on the server.
+
+sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF
+  # 1. Ensure the base image is built
+  cd $AGENT_DIR
+  echo '$PASS' | sudo -S docker build -t agent-node-base .
+
+  # 2. Cleanup any previous test nodes
+  echo "Cleaning up old test nodes..."
+  echo '$PASS' | sudo -S docker ps -a --filter "name=cortex-test-node-" -q | xargs -r sudo docker rm -f
+
+  # 3. Spawn N nodes
+  for i in \$(seq 1 $COUNT); do
+    NODE_ID="test-node-\$i"
+    CONTAINER_NAME="cortex-test-node-\$i"
+
+    echo "[+] Starting \$CONTAINER_NAME..."
+
+    echo '$PASS' | sudo -S docker run -d \\
+      --name "\$CONTAINER_NAME" \\
+      --network cortex-hub_default \\
+      -e AGENT_NODE_ID="\$NODE_ID" \\
+      -e AGENT_NODE_DESC="Scalable Test Node #\$i" \\
+      -e GRPC_ENDPOINT="ai_hub_service:50051" \\
+      -e AGENT_SECRET_KEY="cortex-secret-shared-key" \\
+      -e AGENT_TLS_ENABLED="false" \\
+      agent-node-base
+  done
+
+  echo "✅ Spawning complete. Currently running test nodes:"
+  echo '$PASS' | sudo -S docker ps --filter "name=cortex-test-node-"
+EOF
+
+echo "✨ Done! Check https://ai.jerxie.com/nodes to see the new nodes join the mesh."
diff --git a/scripts/local_rebuild.sh b/scripts/local_rebuild.sh
new file mode 100755
index 0000000..f8f714a
--- /dev/null
+++ b/scripts/local_rebuild.sh
@@ -0,0 +1,57 @@
+#!/bin/bash
+
+# --- Deployment Script for AI Hub (Server Side) ---
+# This script is designed to automate the deployment of the AI Hub application
+# using Docker Compose. It's intended to be run on the production server.
+
+# Set the project directory to the directory where this script is located.
+# We expect this script to be in 'scripts/' or root.
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &> /dev/null && pwd)"
+PROJECT_DIR="$(dirname "$SCRIPT_DIR")" # Assuming it's in scripts/
+if [ ! -f "$PROJECT_DIR/docker-compose.yml" ]; then
+  PROJECT_DIR="$SCRIPT_DIR" # Fallback if run from root
+fi
+
+cd "$PROJECT_DIR" || { echo "Error: Project directory '$PROJECT_DIR' not found. Exiting."; exit 1; }
+
+# Find the correct docker-compose command
+if docker compose version &> /dev/null; then
+  DOCKER_CMD="docker compose"
+else
+  DOCKER_CMD="docker-compose"
+fi
+
+echo "🚀 Starting AI Hub deployment process..."
+
+# 1. Base compose file
+COMPOSE_FILES="-f docker-compose.yml"
+
+# 2. Check for production overrides (Jerxie specific)
+if [ -f "deployment/jerxie-prod/docker-compose.production.yml" ]; then
+  echo "🏗️ Applying Jerxie Production overrides..."
+  COMPOSE_FILES="$COMPOSE_FILES -f deployment/jerxie-prod/docker-compose.production.yml"
+fi
+
+# 3. Check for test nodes
+if [ -f "deployment/test-nodes/docker-compose.test-nodes.yml" ]; then
+  echo "🔗 Including Internal Test Nodes in deployment..."
+  COMPOSE_FILES="$COMPOSE_FILES -f deployment/test-nodes/docker-compose.test-nodes.yml"
+fi
+
+echo "🛑 Stopping and removing old Docker containers and networks..."
+sudo $DOCKER_CMD $COMPOSE_FILES down --remove-orphans || true
+
+echo "🏗️ Building and starting new containers..."
+sudo $DOCKER_CMD $COMPOSE_FILES up -d --build > /tmp/deploy_log.txt 2>&1
+
+echo "✅ Containers started! Checking status..."
+cat /tmp/deploy_log.txt
+sudo docker ps --filter "name=ai_"
+sudo docker ps --filter "name=cortex-"
+
+echo "✅ Deployment complete! The AI Hub application is now running."
+
+echo "🧹 Cleaning up unused Docker resources..."
+sudo docker image prune -f || true
+
+echo "✨ Cleanup finished."
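The override detection in `local_rebuild.sh` simply appends an extra `-f` flag for each optional compose file that exists on the host; Docker Compose then merges the files left to right, with later files overriding earlier ones. A minimal standalone sketch of that accumulation logic (the function name is illustrative, not part of the script):

```bash
#!/usr/bin/env bash
# Sketch of the optional-override detection used by local_rebuild.sh:
# start from the base compose file and append each override only if it exists.
compose_files() {
  local files="-f docker-compose.yml"
  local override
  for override in "$@"; do
    # Missing overrides are silently skipped, so the same script works on
    # hosts that do not ship the jerxie-prod or test-nodes overlays.
    [ -f "$override" ] && files="$files -f $override"
  done
  printf '%s\n' "$files"
}

# Example: in a scratch directory where only the production override exists,
# the test-nodes file is skipped from the flag list.
demo=$(mktemp -d)
touch "$demo/docker-compose.production.yml"
compose_files "$demo/docker-compose.production.yml" "$demo/docker-compose.test-nodes.yml"
rm -rf "$demo"
```

The resulting string is what gets passed straight to Compose, e.g. `sudo $DOCKER_CMD $COMPOSE_FILES up -d --build`.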
diff --git a/scripts/remote_deploy.sh b/scripts/remote_deploy.sh
new file mode 100755
index 0000000..66b2eae
--- /dev/null
+++ b/scripts/remote_deploy.sh
@@ -0,0 +1,93 @@
+#!/bin/bash
+# Description: Automates deployment from the local environment to the production host 192.168.68.113
+
+# Credentials - Can be set via ENV or fetched from GitBucket
+HOST="${REMOTE_HOST}"
+USER="${REMOTE_USER}"
+PASS="${REMOTE_PASS}"
+
+# If credentials are missing, try to fetch from GitBucket Private Snippet
+if [ -z "$PASS" ] || [ -z "$HOST" ]; then
+  # Load token/id from local env if present
+  if [ -f "/app/.env.gitbucket" ]; then
+    source "/app/.env.gitbucket"
+  fi
+
+  GITBUCKET_TOKEN="${GITBUCKET_TOKEN}"
+  SNIPPET_ID="${DEPLOYMENT_SNIPPET_ID}"
+
+  if [ -n "$GITBUCKET_TOKEN" ] && [ -n "$SNIPPET_ID" ]; then
+    echo "Secrets not provided in environment. Attempting to fetch from GitBucket..."
+
+    TMP_SECRETS=$(mktemp -d)
+    if git clone "https://yangyangxie:${GITBUCKET_TOKEN}@gitbucket.jerxie.com/git/gist/yangyangxie/${SNIPPET_ID}.git" "$TMP_SECRETS" &> /dev/null; then
+      if [ -f "$TMP_SECRETS/.env.production" ]; then
+        source "$TMP_SECRETS/.env.production"
+        HOST="${REMOTE_HOST:-$HOST}"
+        USER="${REMOTE_USER:-$USER}"
+        PASS="${REMOTE_PASSWORD:-$PASS}"
+        echo "Successfully loaded credentials from GitBucket."
+        # Strip potential carriage returns
+        HOST=$(echo "$HOST" | tr -d '\r')
+        USER=$(echo "$USER" | tr -d '\r')
+        PASS=$(echo "$PASS" | tr -d '\r')
+      fi
+    else
+      echo "Failed to fetch secrets from GitBucket."
+    fi
+    rm -rf "$TMP_SECRETS"
+  fi
+fi
+
+# Fallback defaults if still not set
+HOST="${HOST:-192.168.68.113}"
+USER="${USER:-axieyangb}"
+
+# System Paths
+REMOTE_TMP="/tmp/cortex-hub/"
+REMOTE_PROJ="/home/coder/project/cortex-hub"
+
+if [ -z "$PASS" ]; then
+  echo "Error: REMOTE_PASS not found and could not be fetched from GitBucket."
+  echo "Please set REMOTE_PASS or GITBUCKET_TOKEN environment variables."
+  exit 1
+fi
+
+echo "Checking if sshpass is installed..."
+if ! command -v sshpass &> /dev/null; then
+  echo "sshpass could not be found, installing..."
+  sudo apt-get update && sudo apt-get install -y sshpass
+fi
+
+# 1. Sync local codebase to temporary directory on remote server
+echo "Syncing local files to production [USER: $USER, HOST: $HOST]..."
+sshpass -p "$PASS" rsync -avz \
+  --exclude '.git' \
+  --exclude 'node_modules' \
+  --exclude 'ui/client-app/node_modules' \
+  --exclude 'ui/client-app/build' \
+  --exclude 'ai-hub/__pycache__' \
+  --exclude '.venv' \
+  -e "ssh -o StrictHostKeyChecking=no" /app/ "$USER@$HOST:$REMOTE_TMP"
+
+if [ $? -ne 0 ]; then
+  echo "Rsync failed! Exiting."
+  exit 1
+fi
+
+# 2. Copy the synced files into the actual project directory replacing the old ones
+echo "Overwriting production project files..."
+sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF
+  echo '$PASS' | sudo -S rm -rf $REMOTE_PROJ/nginx.conf
+  echo '$PASS' | sudo -S cp -r ${REMOTE_TMP}* $REMOTE_PROJ/
+  echo '$PASS' | sudo -S chown -R $USER:$USER $REMOTE_PROJ
+EOF
+
+# 3. Rebuild and restart services remotely
+echo "Deploying on production server..."
+sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no "$USER@$HOST" << EOF
+  cd $REMOTE_PROJ
+  echo '$PASS' | sudo -S bash scripts/local_rebuild.sh
+EOF
+
+echo "Done! The new code is deployed to $HOST."
diff --git a/ui/client-app/src/App.js b/ui/client-app/src/App.js
index 6fc1202..685c734 100644
--- a/ui/client-app/src/App.js
+++ b/ui/client-app/src/App.js
@@ -159,7 +159,7 @@
 switch (currentPage) {
 case "home":
 // Pass both isLoggedIn and handleLogout to HomePage
- return ;
+ return ;
 case "voice-chat":
 return ;
 case "swarm-control":
@@ -167,11 +167,11 @@
 case "settings":
 // Only admins can see global settings
 if (userProfile?.role !== "admin") {
- return ;
+ return ;
 }
 return ;
 case "profile":
- return ;
+ return ;
 case "nodes":
 return ;
 case "skills":
@@ -179,7 +179,7 @@
 case "login":
 return ;
 default:
- return ;
+ return ;
 }
 };
diff --git a/ui/client-app/src/components/Navbar.js b/ui/client-app/src/components/Navbar.js
index 5279542..73d8717 100644
--- a/ui/client-app/src/components/Navbar.js
+++ b/ui/client-app/src/components/Navbar.js
@@ -116,16 +116,8 @@
 )}
- {/* Conditional Login/Logout Button */}
- {isLoggedIn ? (
-
- {isOpen && Logout}
-
- ) : (
+ {/* Login Button for non-authenticated users */}
+ {!isLoggedIn && (
 onNavigate("login")}
 className="flex items-center space-x-4 p-2 rounded-lg cursor-pointer bg-blue-500 text-white hover:bg-blue-600 transition-colors duration-200"
diff --git a/ui/client-app/src/pages/HomePage.js b/ui/client-app/src/pages/HomePage.js
index 53cfaa2..81e7fed 100644
--- a/ui/client-app/src/pages/HomePage.js
+++ b/ui/client-app/src/pages/HomePage.js
@@ -1,7 +1,7 @@
 // HomePage.js
 import React from 'react';
-const HomePage = ({ onNavigate, isLoggedIn, onLogout }) => {
+const HomePage = ({ onNavigate, isLoggedIn }) => {
 const buttonStyle = (enabled) =>
 enabled
 ? "w-full sm:w-auto bg-blue-600 hover:bg-blue-700 text-white font-bold py-3 px-6 rounded-lg transition-colors duration-200 focus:outline-none focus:ring-2 focus:ring-blue-500 focus:ring-opacity-50"
@@ -54,14 +54,7 @@
 >
 Swarm Control 💻
- {isLoggedIn ? (
-
- ) : (
+ {!isLoggedIn && (
-
-

- {profile.full_name || profile.username || 'Citizen'} -

-

{profile.email}

-

Member since {new Date(profile.created_at).toLocaleDateString()}

-
- - {profile.role} - - {profile.group_name && ( - - {profile.group_name} Group +
+
+

+ {profile.full_name || profile.username || 'Citizen'} +

+

{profile.email}

+

Member since {new Date(profile.created_at).toLocaleDateString()}

+
+ + {profile.role} - )} + {profile.group_name && ( + + {profile.group_name} Group + + )} +
+ {onLogout && ( + + )}
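The profile-page hunk above gates the logout control on the `onLogout` prop, mirroring the Navbar and HomePage changes. Stripped of JSX, the guard reduces to a tiny pattern, sketched here with an assumed return shape (this is an illustration, not the component's actual markup):

```javascript
// Illustration of the `{onLogout && (...)}` guard from the diff: a control is
// produced only when a handler is actually supplied, so pages rendered without
// an onLogout prop simply omit the button instead of wiring a dead click handler.
function logoutControl(onLogout) {
  if (!onLogout) return null; // nothing to render, matching `{undefined && ...}` in JSX
  return { type: "button", label: "Logout", onClick: onLogout };
}

// A page that receives a handler gets the control; one that does not gets null.
const withHandler = logoutControl(() => console.log("logged out"));
const withoutHandler = logoutControl(undefined);
console.log(withHandler !== null, withoutHandler === null); // true true
```

This also explains why `onLogout` was removed from HomePage's props earlier in the diff: components that never render the control no longer need the handler threaded through.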