diff --git a/.agent/workflows/deploy_to_production.md b/.agent/workflows/deploy_to_production.md index a79db0e..1ea9eaf 100644 --- a/.agent/workflows/deploy_to_production.md +++ b/.agent/workflows/deploy_to_production.md @@ -2,20 +2,35 @@ description: How to deploy the AI Hub application to the production server (192.168.68.113) --- -This workflow automates the deployment of the Cortex Hub to the production server located at `192.168.68.113`. -The production server runs the application out of `/home/coder/project/cortex-hub`. +## Environment Context -**MAIN KNOWLEDGE POINT:** Agents and Users should refer to `.agent/workflows/deployment_reference.md` to understand the full proxy and architecture layout prior to running production debugging. +| Feature | Local Development (Mac Mini) | Production (Alpine Linux Server) | +| :--- | :--- | :--- | +| **Hostname** | `jerxie-macbookpro` | `192.168.68.113` | +| **Root Path** | `/Users/axieyangb/Project/CortexAI` | `/home/coder/project/cortex-hub` | +| **Venv** | `source cortex-ai/bin/activate` | Internal Docker Venv (`/app/venv`) | +| **Execution** | Native Shell (Zsh) | Docker Compose | +| **Database** | `sqlite:///./data/ai-hub.db` | PostgreSQL (Docker) | --- ## ⚠️ MANDATORY: Run verification BEFORE committing / deploying ### 1. Sanity Check Backend Startup -Before committing or deploying, ensure the backend can start without immediate crashes (e.g., NameErrors or import issues): +Before committing or deploying, ensure the backend can start without immediate crashes. 
+**Local Development (Native):** ```bash -cd /app/ai-hub && export DEVELOPER_MODE=true && export DATABASE_URL="sqlite:///./test_cortex.db" && uvicorn app.main:app --host 0.0.0.0 --port 8000 --log-level debug +source cortex-ai/bin/activate +cd ai-hub && export DEVELOPER_MODE=true && export DATABASE_URL="sqlite:///./test_cortex.db" +uvicorn app.main:app --host 0.0.0.0 --port 8010 --log-level debug +``` + +**Remote Production Environment Check:** +```bash +# This command runs inside the container on the remote server +sshpass -p "$REMOTE_PASSWORD" ssh ${REMOTE_USER}@${REMOTE_HOST} \ + "docker exec ai_hub_service uvicorn app.main:app --host 0.0.0.0 --port 8000 --log-level debug" ``` Ensure the logs show `Started server process [...]` and `Waiting for application startup.` without errors. diff --git a/.agent/workflows/local_env.md b/.agent/workflows/local_env.md new file mode 100644 index 0000000..5181bf9 --- /dev/null +++ b/.agent/workflows/local_env.md @@ -0,0 +1,27 @@ +# Local Environment Skill (Cortex AI) + +This skill provides context about the local development environment for the agent. + +## Environment Overview +- **OS**: macOS (Darwin) +- **Hostname**: `jerxie-macbookpro` +- **User**: `axieyangb` +- **Filesystem**: NFS Mount (Note: Permissions and ownership are often fixed; use `sudo -S` for restricted operations). + +## Python Environment +- **Venv Name**: `cortex-ai` +- **Location**: Project Root (`/Users/axieyangb/Project/CortexAI/cortex-ai`) +- **Activation**: `source cortex-ai/bin/activate` +- **Base Python**: `/opt/anaconda3/bin/python3.12` + +## Rules for the Agent +1. **Always activate the venv**: Before running any Python, build, or test command, ensure the venv is activated. + - Example: `source cortex-ai/bin/activate && pytest` +2. **Handle NFS Permissions**: If a file operation fails with "Permission Denied", check if it's due to the NFS mount. + - The sudo password can be found in the `.env` file as `REMOTE_PASS`. 
+ - Use `echo $REMOTE_PASS | sudo -S ...` if necessary. +3. **Internal Paths**: Several scripts (like `run_integration_tests.sh`) may expect `/tmp/venv`. A compatible venv is also maintained at `/tmp/venv` if the local one fails. + +## Common Operations +- **Startup Backend (Native)**: `source cortex-ai/bin/activate && ./run_integration_tests.sh --native` +- **Compile Protos**: `source cortex-ai/bin/activate && bash agent-node/scripts/compile_protos.sh` diff --git a/.dockerignore b/.dockerignore new file mode 100644 index 0000000..ba004fa --- /dev/null +++ b/.dockerignore @@ -0,0 +1,7 @@ +**/__pycache__ +**/.venv +**/cortex-ai +**/.git +**/.pytest_cache +**/data +ai-hub/native_hub.log diff --git a/.gitignore b/.gitignore index 520f051..ef13550 100644 --- a/.gitignore +++ b/.gitignore @@ -4,6 +4,14 @@ .cortex/ blueprints/ docs/plans/ +# Python & Virtual Environments +.venv/ +venv/ +cortex-ai/ +ENV/ +__pycache__/ +*.py[cod] +*$py.class **/__pycache__/ # Environment & Secrets @@ -30,4 +38,7 @@ **/@eaDir/ # Temporary / Source Backups -CaudeCodeSourceCode/ \ No newline at end of file +CaudeCodeSourceCode/ +scratch/ +._* +**/*.nfs* \ No newline at end of file diff --git a/ai-hub/.dockerignore b/ai-hub/.dockerignore new file mode 100644 index 0000000..b7749fb --- /dev/null +++ b/ai-hub/.dockerignore @@ -0,0 +1,2 @@ +**/__pycache__ +**/._* diff --git a/ai-hub/app/api/routes/sessions.py b/ai-hub/app/api/routes/sessions.py index 242c369..d5de67a 100644 --- a/ai-hub/app/api/routes/sessions.py +++ b/ai-hub/app/api/routes/sessions.py @@ -324,7 +324,7 @@ from app.config import settings mirror_path = os.path.join(settings.DATA_DIR, "mirrors", sync_workspace_id) if os.path.exists(mirror_path): - shutil.rmtree(mirror_path) + shutil.rmtree(mirror_path, ignore_errors=True) except Exception as e: import logging logging.exception(f"[📁⚠️] Fast local purge failed for {sync_workspace_id}: {e}") @@ -376,7 +376,7 @@ for wid in workspaces_to_purge: mirror_path = 
os.path.join(settings.DATA_DIR, "mirrors", wid) if os.path.exists(mirror_path): - shutil.rmtree(mirror_path) + shutil.rmtree(mirror_path, ignore_errors=True) except Exception as e: import logging logging.exception(f"[📁⚠️] Fast local bulk purge failed: {e}") diff --git a/ai-hub/app/config.py b/ai-hub/app/config.py index 678d540..97a8068 100644 --- a/ai-hub/app/config.py +++ b/ai-hub/app/config.py @@ -354,12 +354,6 @@ "redirect_uri": self.OIDC_REDIRECT_URI, "allow_oidc_login": self.ALLOW_OIDC_LOGIN }, - "llm_providers": self.LLM_PROVIDERS, - "active_llm_provider": getattr(self, "ACTIVE_LLM_PROVIDER", list(self.LLM_PROVIDERS.keys())[0] if self.LLM_PROVIDERS else None), - "tts_providers": self.TTS_PROVIDERS, - "active_tts_provider": self.TTS_PROVIDER, - "stt_providers": self.STT_PROVIDERS, - "active_stt_provider": self.STT_PROVIDER, "swarm": { "external_endpoint": self.GRPC_EXTERNAL_ENDPOINT }, diff --git a/ai-hub/app/core/grpc/services/grpc_server.py b/ai-hub/app/core/grpc/services/grpc_server.py index f13588d..77b66af 100644 --- a/ai-hub/app/core/grpc/services/grpc_server.py +++ b/ai-hub/app/core/grpc/services/grpc_server.py @@ -29,7 +29,7 @@ self.journal = TaskJournal() self.pool = GlobalWorkPool() self.mirror = GhostMirrorManager(storage_root=os.path.join(settings.DATA_DIR, "mirrors")) - self.io_locks = WeakValueDictionary() # key -> threading.Lock (weakly referenced) + self.io_locks = {} # key -> threading.Lock self.io_locks_lock = threading.Lock() self.assistant = TaskAssistant(self.registry, self.journal, self.pool, self.mirror) self.pool.on_new_work = self._broadcast_work diff --git a/ai-hub/app/core/services/preference.py b/ai-hub/app/core/services/preference.py index 30d0e04..6352f05 100644 --- a/ai-hub/app/core/services/preference.py +++ b/ai-hub/app/core/services/preference.py @@ -228,11 +228,6 @@ global_settings.STT_MODEL_NAME = p_data.get("model") or global_settings.STT_MODEL_NAME global_settings.STT_API_KEY = p_data.get("api_key") or 
global_settings.STT_API_KEY - try: - global_settings.save_to_yaml() - except Exception as ey: - logger.error(f"Failed to sync settings to YAML: {ey}") - logger.info(f"Saving updated global preferences via admin {user.id}") else: user.preferences["llm"]["active_provider"] = prefs.llm.get("active_provider") diff --git a/ai-hub/app/core/services/tool.py b/ai-hub/app/core/services/tool.py index 3fdf57e..cc05c92 100644 --- a/ai-hub/app/core/services/tool.py +++ b/ai-hub/app/core/services/tool.py @@ -77,6 +77,11 @@ # User preference override m_name = user.preferences.get("llm_model", m_name) + # M6: Fix litellm mapping by ensuring provider prefix + if "/" not in m_name: + provider = user.preferences.get("llm_provider", settings.ACTIVE_LLM_PROVIDER) if user and user.preferences else settings.ACTIVE_LLM_PROVIDER + m_name = f"{provider}/{m_name}" + model_info = litellm.get_model_info(m_name) if model_info: max_tokens = model_info.get("max_input_tokens", 8192) diff --git a/ai-hub/integration_tests/conftest.py b/ai-hub/integration_tests/conftest.py index 4398f47..cac24c4 100644 --- a/ai-hub/integration_tests/conftest.py +++ b/ai-hub/integration_tests/conftest.py @@ -143,7 +143,7 @@ if not is_docker_disabled: print("[conftest] Starting local docker node containers...") - network = "cortex-hub_default" + network = "cortexai_default" subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True) image_id = image_proc.stdout.strip() diff --git a/docker-compose.no_vols_clean.yml b/docker-compose.no_vols_clean.yml new file mode 100644 index 0000000..d7fff1d --- /dev/null +++ b/docker-compose.no_vols_clean.yml @@ -0,0 +1,55 @@ + +services: + # Unified Frontend and Nginx Gateway + ai-frontend: + build: ./frontend + container_name: ai_unified_frontend + restart: always + ports: + - "8002:80" + deploy: + resources: + limits: + cpus: '0.50' + memory: 
512M + depends_on: + - ai-hub + + # AI Hub Backend Service + ai-hub: + build: ./ai-hub + container_name: ai_hub_service + restart: always + ports: + - "50051:50051" + environment: + - PATH_PREFIX=/api/v1 + - HUB_API_URL=http://localhost:8000 + - HUB_PUBLIC_URL=http://localhost:8002 + - HUB_GRPC_ENDPOINT=localhost:50051 + - SUPER_ADMINS=${SUPER_ADMINS:-admin@example.com} + - CORTEX_ADMIN_PASSWORD=${CORTEX_ADMIN_PASSWORD} + - SECRET_KEY=${SECRET_KEY:-default-insecure-key} + - DEBUG_GRPC=true + deploy: + resources: + limits: + cpus: '1.0' + memory: 1G + + + # Dedicated Browser Service (M6 Refactor) + browser-service: + build: ./browser-service + container_name: cortex_browser_service + restart: always + ports: + - "50053:50052" + environment: + - SHM_PATH=/dev/shm/cortex_browser + - PYTHONPATH=/app:/app/protos + deploy: + resources: + limits: + cpus: '2.0' + memory: 2G + + +# Generic named volume using local driver diff --git a/docker-compose.test_no_vols.yml b/docker-compose.test_no_vols.yml new file mode 100644 index 0000000..3b43bf2 --- /dev/null +++ b/docker-compose.test_no_vols.yml @@ -0,0 +1,84 @@ + +services: + # Unified Frontend and Nginx Gateway + ai-frontend: + build: ./frontend + container_name: ai_unified_frontend + restart: always + ports: + - "8002:80" + volumes: + - ./nginx.conf:/etc/nginx/nginx.conf:ro + deploy: + resources: + limits: + cpus: '0.50' + memory: 512M + depends_on: + - ai-hub + + # AI Hub Backend Service + ai-hub: + build: ./ai-hub + container_name: ai_hub_service + restart: always + ports: + - "50051:50051" + environment: + - PATH_PREFIX=/api/v1 + - HUB_API_URL=http://localhost:8000 + - HUB_PUBLIC_URL=http://localhost:8002 + - HUB_GRPC_ENDPOINT=localhost:50051 + - SUPER_ADMINS=${SUPER_ADMINS:-admin@example.com} + - CORTEX_ADMIN_PASSWORD=${CORTEX_ADMIN_PASSWORD} + - SECRET_KEY=${SECRET_KEY:-default-insecure-key} + - DEBUG_GRPC=true + volumes: + - ai_hub_data:/app/data:rw + - ./config.yaml:/app/config.yaml:rw + - ./ai-hub/app:/app/app:rw + - 
./agent-node:/app/agent-node-source:ro + - ./skills:/app/skills:ro + - ./docs:/app/docs:ro + - ./blueprints:/app/blueprints:ro + - browser_shm:/dev/shm:rw + deploy: + resources: + limits: + cpus: '1.0' + memory: 1G + + + # Dedicated Browser Service (M6 Refactor) + browser-service: + build: ./browser-service + container_name: cortex_browser_service + restart: always + ports: + - "50053:50052" + environment: + - SHM_PATH=/dev/shm/cortex_browser + - PYTHONPATH=/app:/app/protos + volumes: + - ./browser-service:/app + - browser_shm:/dev/shm:rw + working_dir: /app + command: python3 main.py + deploy: + resources: + limits: + cpus: '2.0' + memory: 2G + + +# Generic named volume using local driver +volumes: + ai_hub_data: + driver: local + + browser_shm: + driver: local + driver_opts: + type: tmpfs + device: tmpfs + o: "size=1g,uid=1000" diff --git a/docker-compose.yml b/docker-compose.yml index 3b43bf2..d35b263 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -6,7 +6,7 @@ container_name: ai_unified_frontend restart: always ports: - - "8002:80" + - "${FRONTEND_PORT:-8002}:80" volumes: - ./nginx.conf:/etc/nginx/nginx.conf:ro deploy: @@ -33,8 +33,9 @@ - CORTEX_ADMIN_PASSWORD=${CORTEX_ADMIN_PASSWORD} - SECRET_KEY=${SECRET_KEY:-default-insecure-key} - DEBUG_GRPC=true + - DATABASE_URL=sqlite:////tmp/ai-hub.db volumes: - - ai_hub_data:/app/data:rw +# - ai_hub_data:/app/data:rw - ./config.yaml:/app/config.yaml:rw - ./ai-hub/app:/app/app:rw - ./agent-node:/app/agent-node-source:ro diff --git a/frontend/Dockerfile b/frontend/Dockerfile index 6ee3227..3bd8a75 100644 --- a/frontend/Dockerfile +++ b/frontend/Dockerfile @@ -17,6 +17,7 @@ ENV REACT_APP_API_BASE_URL=/api/v1 # Build the production-ready static files +RUN chmod +x node_modules/.bin/* || true RUN npm run build # Stage 2: Serve the static files using Nginx diff --git a/frontend/src/features/agents/components/AgentDrillDown.js b/frontend/src/features/agents/components/AgentDrillDown.js index 8cae428..bf2d48d 100644 
--- a/frontend/src/features/agents/components/AgentDrillDown.js +++ b/frontend/src/features/agents/components/AgentDrillDown.js @@ -20,7 +20,8 @@ runningSeconds, lastTotalConsumption, currentAction, lastAction, lastActionDuration, handleAction, handleClearHistory, handleSaveConfig, handleSaveGroundTruth, fetchData, handleAddTrigger, handleDeleteTrigger, handleFireTrigger, handleFireWebhook, - handleResetMetrics, overrideText, setOverrideText, handleInjectOverride + handleResetMetrics, overrideText, setOverrideText, handleInjectOverride, + availableModels, fetchingModels, allProviders } = hookData; if (loading && !agent) return ( @@ -136,6 +137,9 @@ setOverrideText={setOverrideText} handleInjectOverride={handleInjectOverride} agentId={agentId} + availableModels={availableModels} + fetchingModels={fetchingModels} + allProviders={allProviders} /> )} diff --git a/frontend/src/features/agents/components/drilldown/ChatTracker.js b/frontend/src/features/agents/components/drilldown/ChatTracker.js index 83a4a44..ead8c49 100644 --- a/frontend/src/features/agents/components/drilldown/ChatTracker.js +++ b/frontend/src/features/agents/components/drilldown/ChatTracker.js @@ -69,16 +69,25 @@ {/* Tracking HUD Overlay */}
{currentAction && ( -
+
-
+ {currentAction.isDone ? ( +
+ +
+ ) : ( +
70 ? 'border-amber-500/30 border-t-amber-500' : 'border-indigo-500/30 border-t-indigo-500'} animate-spin rounded-full shrink-0`}>
+ )}
- + {currentAction.display}
-
-
+
+
{runningSeconds}s
diff --git a/frontend/src/features/agents/components/drilldown/ConfigPanel.js b/frontend/src/features/agents/components/drilldown/ConfigPanel.js index 64914db..b0d266e 100644 --- a/frontend/src/features/agents/components/drilldown/ConfigPanel.js +++ b/frontend/src/features/agents/components/drilldown/ConfigPanel.js @@ -24,7 +24,10 @@ overrideText, setOverrideText, handleInjectOverride, - agentId + agentId, + availableModels, + fetchingModels, + allProviders }) => { return (
@@ -93,6 +96,38 @@
+ + +
+
+ + +
+
+ +
+
{ const isRunning = agent?.status === 'active' || agent?.status === 'starting'; + + // If it was running and now it's idle, we show a 'Completed' state + if (!isRunning && (previousStatus === 'active' || previousStatus === 'starting')) { + setCurrentAction({ + display: '✅ Process Complete: Workflow finished successfully', + raw: 'completed', + progress: 100, + isDone: true + }); + return; + } + if (!isRunning) return; if (previousStatus !== 'active' && previousStatus !== 'starting') { @@ -234,8 +246,18 @@ const rawStatus = agent?.evaluation_status || 'Orchestrating task payload...'; + // Calculate a rough progress percentage based on keywords + let calculatedProgress = 10; + const lowStatus = rawStatus.toLowerCase(); + + if (lowStatus.includes('orchestrating')) calculatedProgress = 15; + else if (lowStatus.includes('thinking')) calculatedProgress = 30; + else if (lowStatus.includes('executing')) calculatedProgress = 50; + else if (lowStatus.includes('audit') || lowStatus.includes('coworker')) calculatedProgress = 75; + else if (lowStatus.includes('passed') || lowStatus.includes('finished')) calculatedProgress = 100; + else if (lowStatus.includes('reworking')) calculatedProgress = 40; + if (rawStatus !== (currentAction?.raw || '')) { - const lowStatus = rawStatus.toLowerCase(); const hasPrefix = lowStatus.includes('agent:') || lowStatus.includes('audit:') || lowStatus.includes('co-worker:'); let cleanStatus = rawStatus; @@ -245,13 +267,20 @@ : `🤖 Main Agent: ${rawStatus}`; } - if (currentAction) { + if (currentAction && !currentAction.isDone) { setLastAction(currentAction); setLastActionDuration(runningSeconds - actionStartTime); } - setCurrentAction({ display: cleanStatus, raw: rawStatus }); + setCurrentAction({ + display: cleanStatus, + raw: rawStatus, + progress: calculatedProgress, + isDone: false + }); setActionStartTime(runningSeconds); + } else if (currentAction && currentAction.progress !== calculatedProgress) { + setCurrentAction(prev => ({ ...prev, 
progress: calculatedProgress })); } }, [agent?.status, agent?.evaluation_status, previousStatus, currentAction, runningSeconds, actionStartTime]); diff --git a/frontend/src/features/settings/components/cards/AIConfigurationCard.js b/frontend/src/features/settings/components/cards/AIConfigurationCard.js index 1881a33..0bff453 100644 --- a/frontend/src/features/settings/components/cards/AIConfigurationCard.js +++ b/frontend/src/features/settings/components/cards/AIConfigurationCard.js @@ -26,7 +26,8 @@ setExpandedProvider, verifying, handleVerifyProvider, - handleDeleteProvider + handleDeleteProvider, + handleRenameProvider } = context; const renderConfigSection = (sectionKey, title, description) => { @@ -57,6 +58,7 @@ verifying={verifying} handleVerifyProvider={handleVerifyProvider} handleDeleteProvider={handleDeleteProvider} + handleRenameProvider={handleRenameProvider} handleConfigChange={handleConfigChange} labelClass={labelClass} inputClass={inputClass} diff --git a/frontend/src/features/settings/components/shared/ProviderPanel.js b/frontend/src/features/settings/components/shared/ProviderPanel.js index 4a3aa14..55c1346 100644 --- a/frontend/src/features/settings/components/shared/ProviderPanel.js +++ b/frontend/src/features/settings/components/shared/ProviderPanel.js @@ -14,7 +14,8 @@ handleConfigChange, labelClass, inputClass, - fetchedModels + fetchedModels, + handleRenameProvider }) => { const isExpanded = expandedProvider === `${sectionKey}_${id}`; const providerType = prefs.provider_type || id.split('_')[0]; @@ -99,6 +100,19 @@
+ +
+ handleRenameProvider(sectionKey, id, e.target.value)} + placeholder="e.g. prod, dev" + className={inputClass} + /> +
+

Changing this will rename the resource instance ID.

+
+
diff --git a/frontend/src/features/settings/pages/SettingsPage.js b/frontend/src/features/settings/pages/SettingsPage.js index 538c41a..a7285dd 100644 --- a/frontend/src/features/settings/pages/SettingsPage.js +++ b/frontend/src/features/settings/pages/SettingsPage.js @@ -497,6 +497,52 @@ setExpandedProvider(`${sectionKey}_${newId}`); }; + const handleRenameProvider = (sectionKey, oldId, newSuffix) => { + if (!config[sectionKey]?.providers?.[oldId]) return; + + const providerData = config[sectionKey].providers[oldId]; + const providerType = providerData.provider_type || oldId.split('_')[0]; + const newId = newSuffix ? `${providerType}_${newSuffix.toLowerCase().replace(/\s+/g, '_')}` : providerType; + + if (newId === oldId) return; + + if (config[sectionKey].providers[newId]) { + setMessage({ type: 'error', text: `Instance "${newId}" already exists.` }); + return; + } + + const newProviders = { ...config[sectionKey].providers }; + delete newProviders[oldId]; + newProviders[newId] = providerData; + + let newActiveProvider = config[sectionKey].active_provider; + if (newActiveProvider === oldId) { + newActiveProvider = newId; + } + + setConfig(prev => ({ + ...prev, + [sectionKey]: { + ...prev[sectionKey], + active_provider: newActiveProvider, + providers: newProviders + } + })); + + setProviderStatuses(prev => { + const updated = { ...prev }; + if (updated[`${sectionKey}_${oldId}`]) { + updated[`${sectionKey}_${newId}`] = updated[`${sectionKey}_${oldId}`]; + delete updated[`${sectionKey}_${oldId}`]; + } + return updated; + }); + + if (expandedProvider === `${sectionKey}_${oldId}`) { + setExpandedProvider(`${sectionKey}_${newId}`); + } + }; + const loadConfig = async () => { try { @@ -568,6 +614,7 @@ handleTestConnection, handleDeleteProvider: handleDeleteProviderAction, handleAddInstance, + handleRenameProvider, confirmAction, setConfirmAction, executeConfirmAction, diff --git a/frontend/src/services/api/userService.js b/frontend/src/services/api/userService.js index 
c9a4272..75d6f5a 100644 --- a/frontend/src/services/api/userService.js +++ b/frontend/src/services/api/userService.js @@ -119,17 +119,3 @@ body: formData }); }; - -/** - * Fetches available providers for a section (llm, tts, stt). - */ -export const getProviders = async (section = "llm") => { - return await fetchWithAuth(`/users/me/config/providers?section=${section}&configured_only=true`); -}; - -/** - * Fetches models for a specific provider. - */ -export const getProviderModels = async (providerName, section = "llm") => { - return await fetchWithAuth(`/users/me/config/models?provider_name=${providerName}&section=${section}`); -}; diff --git a/run_integration_tests.sh b/run_integration_tests.sh index a964fae..98c2af2 100755 --- a/run_integration_tests.sh +++ b/run_integration_tests.sh @@ -49,17 +49,19 @@ echo "Skipping rebuild and starting tests directly..." else if [ "$DOCKER_AVAILABLE" = true ] && [ "$NATIVE_MODE" = false ]; then - # 2. Clean start: purge the database / volumes + # 2. Clean start: purge the database / volumes (Crucial for integration tests) echo "Purging database and old containers..." docker compose down -v --remove-orphans docker kill test-node-1 test-node-2 2>/dev/null || true docker rm test-node-1 test-node-2 2>/dev/null || true - # 3. Build & start the Hub stack - echo "Starting AI Hub mesh..." - mkdir -p data - docker compose build ai-hub ai-frontend - docker compose up -d + # 3. Build & start the Hub stack via the unified start_server.sh script + echo "Starting AI Hub mesh via ./start_server.sh..." + if [ "$NO_REBUILD" = true ]; then + bash ./start_server.sh + else + bash ./start_server.sh --rebuild + fi # Wait for healthy echo "Waiting for AI Hub to be ready..." 
@@ -89,7 +91,7 @@ elif [ -d "/app/ai-hub" ]; then cd /app/ai-hub fi - source ../venv/bin/activate || source venv/bin/activate || source /tmp/venv2/bin/activate || source /tmp/venv/bin/activate || echo "No venv found for uvicorn" + if [ -f "../venv/bin/activate" ]; then source ../venv/bin/activate; elif [ -f "venv/bin/activate" ]; then source venv/bin/activate; elif [ -f "../cortex-ai/bin/activate" ]; then source ../cortex-ai/bin/activate; elif [ -f "/tmp/venv2/bin/activate" ]; then source /tmp/venv2/bin/activate; elif [ -f "/tmp/venv/bin/activate" ]; then source /tmp/venv/bin/activate; else echo "No venv found for uvicorn"; fi PYTHONDONTWRITEBYTECODE=1 GRPC_PORT=50055 DATA_DIR=./data DATABASE_URL=sqlite:////tmp/cortex_hub_test.db AGENT_NODE_SRC_DIR=../agent-node SKILLS_SRC_DIR=../skills uvicorn app.main:app --host 0.0.0.0 --port 8010 > native_hub.log 2>&1 & HUB_PID=$! cd - > /dev/null @@ -115,7 +117,7 @@ export TEST_HUB_URL="http://127.0.0.1:8010" export TEST_GRPC_ENDPOINT="127.0.0.1:50055" fi -source /tmp/venv2/bin/activate || source venv/bin/activate || source /tmp/venv/bin/activate || echo "No venv found, hoping pytest is in global PATH." +if [ -f "cortex-ai/bin/activate" ]; then source cortex-ai/bin/activate; elif [ -f "/tmp/venv2/bin/activate" ]; then source /tmp/venv2/bin/activate; elif [ -f "venv/bin/activate" ]; then source venv/bin/activate; elif [ -f "/tmp/venv/bin/activate" ]; then source /tmp/venv/bin/activate; else echo "No venv found, hoping pytest is in global PATH."; fi TEST_TARGETS=() diff --git a/start_server.sh b/start_server.sh new file mode 100755 index 0000000..a77eec5 --- /dev/null +++ b/start_server.sh @@ -0,0 +1,49 @@ +#!/bin/bash + +# Configuration +FRONTEND_PORT=8002 +CONTAINER_NAME="ai_unified_frontend" + +# Load environment +export PATH=$PATH:/usr/local/bin:/opt/homebrew/bin + +# Parse flags +REBUILD=false +for arg in "$@"; do + if [[ "$arg" == "--rebuild" ]]; then + REBUILD=true + fi +done + +# Check if docker is available +if ! 
command -v docker &> /dev/null; then + echo "❌ Error: 'docker' command not found. Please ensure Docker is installed and running." + exit 1 +fi + +# Check if containers are already running +if [ "$REBUILD" = false ] && docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then + echo "✅ CortexAI is already running on port ${FRONTEND_PORT}." + echo " Use './start_server.sh --rebuild' to force a restart/rebuild." + exit 0 +fi + +if [ "$REBUILD" = true ]; then + echo "🔄 Rebuilding and restarting services..." + if ! FRONTEND_PORT=$FRONTEND_PORT docker compose up -d --build --remove-orphans; then + echo "❌ Error: Docker Compose failed to rebuild/start." + exit 1 + fi +else + echo "🚀 Starting CortexAI on port ${FRONTEND_PORT}..." + if ! FRONTEND_PORT=$FRONTEND_PORT docker compose up -d; then + echo "❌ Error: Docker Compose failed to start." + exit 1 + fi +fi + +echo "" +echo "==========================================" +echo "🌐 CortexAI is locally available at:" +echo " http://localhost:${FRONTEND_PORT}" +echo "==========================================" diff --git a/stop_server.sh b/stop_server.sh new file mode 100755 index 0000000..be17ff0 --- /dev/null +++ b/stop_server.sh @@ -0,0 +1,18 @@ +#!/bin/bash + +# Load environment +export PATH=$PATH:/usr/local/bin:/opt/homebrew/bin:/opt/anaconda3/bin + +# Check if docker is available +if ! command -v docker &> /dev/null; then + echo "❌ Error: 'docker' command not found." + exit 1 +fi + +echo "🛑 Stopping CortexAI services..." +if docker compose down; then + echo "✅ CortexAI has been stopped successfully." +else + echo "⚠️ Some services might still be running. Checking..." + docker ps | grep "ai_" +fi