==========================================
CORTEX HUB INTEGRATION TESTS SETUP
==========================================
Purging existing test/dev environment...
Container cortex_browser_service Stopping
Container ai_unified_frontend Stopping
Container cortex_browser_service Stopped
Container cortex_browser_service Removing
Container ai_unified_frontend Stopped
Container ai_unified_frontend Removing
Container cortex_browser_service Removed
Container ai_unified_frontend Removed
Container ai_hub_service Stopping
Container ai_hub_service Stopped
Container ai_hub_service Removing
Container ai_hub_service Removed
Volume cortexai_browser_shm Removing
Network cortexai_default Removing
Volume cortexai_browser_shm Removed
Network cortexai_default Removed
Purging database and old containers...
time="2026-04-18T17:58:18-07:00" level=warning msg="The \"DEEPSEEK_API_KEY\" variable is not set. Defaulting to a blank string."
time="2026-04-18T17:58:18-07:00" level=warning msg="The \"OPENAI_API_KEY\" variable is not set. Defaulting to a blank string."
Volume cortexai_browser_shm Removing
Volume cortexai_browser_shm Removed
Starting AI Hub mesh via ./start_server.sh...
🚀 Starting CortexAI on port 8002...
time="2026-04-18T17:58:18-07:00" level=warning msg="The \"DEEPSEEK_API_KEY\" variable is not set. Defaulting to a blank string."
time="2026-04-18T17:58:18-07:00" level=warning msg="The \"OPENAI_API_KEY\" variable is not set. Defaulting to a blank string."
Network cortexai_default Creating
Network cortexai_default Created
Volume "cortexai_browser_shm" Creating
Volume "cortexai_browser_shm" Created
Container cortex_browser_service Creating
Container ai_hub_service Creating
Container cortex_browser_service Created
Container ai_hub_service Created
Container ai_unified_frontend Creating
Container ai_unified_frontend Created
Container cortex_browser_service Starting
Container ai_hub_service Starting
Container ai_hub_service Started
Container ai_unified_frontend Starting
Container cortex_browser_service Started
Container ai_unified_frontend Started
==========================================
🌐 CortexAI is locally available at:
http://localhost:8002
==========================================
Waiting for AI Hub to be ready...
AI Hub Backend is online.
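The "Waiting for AI Hub to be ready..." step is a readiness poll. A minimal sketch of such a poll, with a generic `probe` callable (the helper name is ours; in this setup the probe would hit http://localhost:8002, and the exact endpoint path is an assumption):

```python
# Poll a readiness probe until it succeeds or a deadline passes.
# Connection errors while the hub is still booting are swallowed
# and retried; only the final timeout is reported as failure.
import time

def wait_until(probe, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Return True once probe() is truthy, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if probe():
                return True
        except Exception:
            pass  # hub not accepting connections yet; retry
        time.sleep(interval)
    return False
```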
==========================================
EXECUTING E2E INTEGRATION SUITE
==========================================
============================= test session starts ==============================
platform darwin -- Python 3.12.7, pytest-9.0.3, pluggy-1.6.0 -- /Users/axieyangb/Project/CortexAI/cortex-ai/bin/python3
cachedir: .pytest_cache
rootdir: /Users/axieyangb/Project/CortexAI/ai-hub
configfile: pytest.ini (WARNING: ignoring pytest config in pyproject.toml!)
plugins: anyio-4.13.0, tornasync-0.6.0.post2, mock-3.15.1, asyncio-1.3.0, trio-0.8.0
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collecting ... collected 41 items
ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_move_atomic ERROR [ 2%]
ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_copy_atomic ERROR [ 4%]
ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_stat_speed ERROR [ 7%]
ai-hub/integration_tests/test_agents.py::test_agent_lifecycle_and_api_coverage ERROR [ 9%]
ai-hub/integration_tests/test_agents.py::test_agent_webhook_trigger ERROR [ 12%]
ai-hub/integration_tests/test_agents.py::test_agent_metrics_reset ERROR [ 14%]
ai-hub/integration_tests/test_audio.py::test_tts_voices ERROR [ 17%]
ai-hub/integration_tests/test_audio.py::test_tts_to_stt_lifecycle ERROR [ 19%]
ai-hub/integration_tests/test_browser_llm.py::test_browser_skill_weather ERROR [ 21%]
ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc1_mirror_check ERROR [ 24%]
ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc3_limit_check ERROR [ 26%]
ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc2_rework_loop ERROR [ 29%]
ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc4_context_compaction ERROR [ 31%]
ai-hub/integration_tests/test_coworker_full_journey.py::test_coworker_full_journey ERROR [ 34%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case1_write_from_node1_visible_on_node2_and_server ERROR [ 36%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case2_write_from_server_visible_on_all_nodes ERROR [ 39%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case3_delete_from_server_purges_client_nodes ERROR [ 41%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case4_delete_from_node2_purges_server_and_node1 ERROR [ 43%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case9_cat_deleted_file_returns_quickly_not_timeout ERROR [ 46%]
ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case11_hub_pseudo_node_write_visibility ERROR [ 48%]
ai-hub/integration_tests/test_file_sync.py::TestNodeResync::test_case10_node_resync_after_restart ERROR [ 51%]
ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case5_large_file_from_node1_to_server_and_node2 ERROR [ 53%]
ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case6_large_file_from_server_to_all_nodes ERROR [ 56%]
ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case7_delete_large_file_from_server_purges_nodes ERROR [ 58%]
ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case8_delete_large_file_from_node2_purges_server_and_node1 ERROR [ 60%]
ai-hub/integration_tests/test_file_sync.py::TestGigabyteFileSync::test_case_1gb_sync_from_client_to_server_and_node ERROR [ 63%]
ai-hub/integration_tests/test_file_sync.py::TestSessionAutoPurge::test_session_lifecycle_cleanup ERROR [ 65%]
ai-hub/integration_tests/test_llm_chat.py::test_create_session_and_chat_gemini ERROR [ 68%]
ai-hub/integration_tests/test_login.py::test_login_success ERROR [ 70%]
ai-hub/integration_tests/test_login.py::test_login_failure_invalid_password ERROR [ 73%]
ai-hub/integration_tests/test_login.py::test_login_failure_invalid_user ERROR [ 75%]
ai-hub/integration_tests/test_node_registration.py::test_node_full_lifecycle_and_api_coverage ERROR [ 78%]
ai-hub/integration_tests/test_parallel_coworker.py::test_parallel_rubric_generation ERROR [ 80%]
ai-hub/integration_tests/test_provider_config.py::test_verify_llm_success_gemini ERROR [ 82%]
ai-hub/integration_tests/test_provider_config.py::test_verify_llm_failure_invalid_key ERROR [ 85%]
ai-hub/integration_tests/test_provider_config.py::test_update_user_llm_preferences ERROR [ 87%]
ai-hub/integration_tests/test_provider_config.py::test_verify_llm_success_gemini_masked_key_fallback ERROR [ 90%]
ai-hub/integration_tests/test_provider_config.py::test_verify_llm_unrecognized_provider ERROR [ 92%]
ai-hub/integration_tests/test_provider_config.py::test_get_provider_models ERROR [ 95%]
ai-hub/integration_tests/test_tools.py::test_mesh_file_explorer_none_path_and_session ERROR [ 97%]
ai-hub/integration_tests/test_tools.py::test_tool_service_node_id_validation ERROR [100%]
==================================== ERRORS ====================================
____________ ERROR at setup of TestAdvancedFS.test_mesh_move_atomic ____________
fixturedef = <FixtureDef argname='setup_mesh_environment' scope='session' baseid='integration_tests'>
request = <SubRequest 'setup_mesh_environment' for <Function test_mesh_move_atomic>>
@pytest.hookimpl(wrapper=True)
def pytest_fixture_setup(fixturedef: FixtureDef, request) -> object | None:
asyncio_mode = _get_asyncio_mode(request.config)
if not _is_asyncio_fixture_function(fixturedef.func):
if asyncio_mode == Mode.STRICT:
# Ignore async fixtures without explicit asyncio mark in strict mode
# This applies to pytest_trio fixtures, for example
> return (yield)
^^^^^
cortex-ai/lib/python3.12/site-packages/pytest_asyncio/plugin.py:728:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:167: in setup_mesh_environment
raise e
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
---------------------------- Captured stdout setup -----------------------------
[conftest] Starting Mesh Integration Setup...
[conftest] Logging in as root...
[conftest] Configuring LLM provider and grouping...
[conftest] Registering test nodes...
[conftest] Creating access group...
[conftest] Starting local docker node containers...
[conftest] Building agent-node image...
❌ [conftest] Docker build failed!
STDOUT:
STDERR: ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operation not permitted
View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b
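The captured stderr pinpoints the root cause: a macOS AppleDouble companion file (`._regex_patterns.py`) whose extended attributes the BuildKit context sender cannot read. A minimal cleanup sketch, assuming the build context is `./agent-node` as in the log (the helper name is ours, not part of the conftest):

```python
# Purge macOS AppleDouble companion files ('._*') from a Docker build
# context; their xattrs can abort BuildKit's context sender with the
# "failed to xattr ... operation not permitted" error seen above.
import pathlib

def purge_appledouble(context: str) -> list[str]:
    """Delete '._*' files under `context`; return the paths removed."""
    removed = []
    for path in sorted(pathlib.Path(context).rglob("._*")):
        if path.is_file():
            path.unlink()
            removed.append(str(path))
    return removed
```

Running this against the context directory before `docker build` should let the quiet build (`docker build -q ./agent-node`) proceed.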
____________ ERROR at setup of TestAdvancedFS.test_mesh_copy_atomic ____________
@pytest.fixture(scope="session")
def setup_mesh_environment():
"""
Simulates the CUJ:
1. Login as super admin.
2. Add API provider configurations (using env vars).
3. Create a group.
4. Register nodes and assign nodes to the group.
5. Spin up node docker containers with correct tokens.
"""
print("\n[conftest] Starting Mesh Integration Setup...")
client = httpx.Client(timeout=90.0)
# 1. Login
print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
# NOTE: The Hub uses /users/login/local
login_data = {
"email": ADMIN_EMAIL,
"password": ADMIN_PASSWORD
}
r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
assert r.status_code == 200, f"Login failed: {r.text}"
user_id = r.json().get("user_id")
assert user_id, "No user_id found in local login response."
os.environ["SYNC_TEST_USER_ID"] = user_id
client.headers.update({
"X-User-ID": user_id
})
# 2. Add API Providers and Configure LLM RBAC
print("[conftest] Configuring LLM provider and grouping...")
# Enable Gemini securely
prefs_payload = {
"llm": {
"active_provider": "gemini",
"providers": {
"gemini": {
"api_key": os.getenv("GEMINI_API_KEY", ""),
"model": "gemini/gemini-3-flash-preview"
}
}
},
"tts": {}, "stt": {}, "statuses": {}
}
r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"
# Establish a Group securely provisioned for AI Usage
group_payload = {
"name": "Integration Default Group",
"description": "Global RBAC group for all integration tasks",
"policy": {"llm": ["gemini"]}
}
r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
if r_group.status_code == 409:
r_groups = client.get(f"{BASE_URL}/users/admin/groups")
target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
group_id = target_group["id"]
# Update policy to ensure it remains clean
client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
else:
group_id = r_group.json().get("id")
# Bind the admin testing account into this fully capable RBAC group
client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})
# 3. Register Nodes
print("[conftest] Registering test nodes...")
tokens = {}
for node_id in [NODE_1, NODE_2]:
payload = {
"node_id": node_id,
"display_name": f"Integration {node_id}",
"is_active": True,
"skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
}
r_node = client.post(
f"{BASE_URL}/nodes/admin",
params={"admin_id": user_id},
json=payload
)
# If node already exists, delete it and recreate to obtain a fresh token
if r_node.status_code in (400, 409):
client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
tokens[node_id] = r_node.json().get("invite_token")
# 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
# but per CUJ we can mimic group creation)
print("[conftest] Creating access group...")
# Note: Using /users/admin/groups if it exists...
group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
"name": "Integration Test Group",
"description": "Integration Test Group"
})
if group_r.status_code == 200:
group_id = group_r.json().get("id")
# Give group access to nodes
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
____________ ERROR at setup of TestAdvancedFS.test_mesh_stat_speed _____________
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
___________ ERROR at setup of test_agent_lifecycle_and_api_coverage ____________
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
        input=None, capture_output=False, timeout=None, check=False, **kwargs):
    """Run command with arguments and return a CompletedProcess instance.

    The returned instance will have attributes args, returncode, stdout and
    stderr. By default, stdout and stderr are not captured, and those attributes
    will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
    or pass capture_output=True to capture both.

    If check is True and the exit code was non-zero, it raises a
    CalledProcessError. The CalledProcessError object will have the return code
    in the returncode attribute, and output & stderr attributes if those streams
    were captured.

    If timeout is given, and the process takes too long, a TimeoutExpired
    exception will be raised.

    There is an optional argument "input", allowing you to
    pass bytes or a string to the subprocess's stdin. If you use this argument
    you may not also use the Popen constructor's "stdin" argument, as
    it will be used internally.

    By default, all communication is in bytes, and therefore any "input" should
    be bytes, and the stdout and stderr will be bytes. If in text mode, any
    "input" should be a string, and stdout and stderr will be strings decoded
    according to locale encoding, or by "encoding" if set. Text mode is
    triggered by setting any of text, encoding, errors or universal_newlines.

    The other arguments are the same as for the Popen constructor.
    """
    if input is not None:
        if kwargs.get('stdin') is not None:
            raise ValueError('stdin and input arguments may not both be used.')
        kwargs['stdin'] = PIPE

    if capture_output:
        if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
            raise ValueError('stdout and stderr arguments may not be used '
                             'with capture_output.')
        kwargs['stdout'] = PIPE
        kwargs['stderr'] = PIPE

    with Popen(*popenargs, **kwargs) as process:
        try:
            stdout, stderr = process.communicate(input, timeout=timeout)
        except TimeoutExpired as exc:
            process.kill()
            if _mswindows:
                # Windows accumulates the output in a single blocking
                # read() call run on child threads, with the timeout
                # being done in a join() on those threads. communicate()
                # _after_ kill() is required to collect that and add it
                # to the exception.
                exc.stdout, exc.stderr = process.communicate()
            else:
                # POSIX _communicate already populated the output so
                # far into the TimeoutExpired exception.
                process.wait()
            raise
        except:  # Including KeyboardInterrupt, communicate handled that.
            process.kill()
            # We don't call process.wait() as .__exit__ does that for us.
            raise

        retcode = process.poll()
        if check and retcode:
>           raise CalledProcessError(retcode, process.args,
                                     output=stdout, stderr=stderr)
E       subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.

/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
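The stderr captured above names an AppleDouble file (`agent-node/src/agent_node/core/._regex_patterns.py`): macOS Finder copies and some archive tools create these `._*` resource-fork companions, and Docker's build-context sender fails on their extended attributes with "operation not permitted". A sketch of the cleanup, demonstrated on a scratch directory since running it against the real `./agent-node` context here would be destructive:

```shell
# Reproduce the problematic layout in a scratch build context, then
# strip the AppleDouble companions the same way one would for ./agent-node.
CTX="$(mktemp -d)"
mkdir -p "$CTX/src/agent_node/core"
touch "$CTX/src/agent_node/core/regex_patterns.py" \
      "$CTX/src/agent_node/core/._regex_patterns.py"

# Delete every ._* companion file before invoking `docker build`:
find "$CTX" -name '._*' -type f -delete

ls "$CTX/src/agent_node/core"   # only regex_patterns.py remains

# Belt and braces: keep such files out of the build context entirely.
printf '**/._*\n.DS_Store\n' >> "$CTX/.dockerignore"
```

For the real build, pointing the `find` at `./agent-node` before the fixture's `subprocess.run(["docker", "build", ...])` call should clear the error; on macOS, `dot_clean` is an alternative if the files keep reappearing.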
_________________ ERROR at setup of test_agent_webhook_trigger _________________
E       subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
__________________ ERROR at setup of test_agent_metrics_reset __________________
E       subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
______________________ ERROR at setup of test_tts_voices _______________________
E       subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
_________________ ERROR at setup of test_tts_to_stt_lifecycle __________________
@pytest.fixture(scope="session")
def setup_mesh_environment():
"""
Simulates the CUJ:
1. Login as super admin.
2. Add API provider configurations (using env vars).
3. Create a group.
4. Register nodes and assign nodes to the group.
5. Spin up node docker containers with correct tokens.
"""
print("\n[conftest] Starting Mesh Integration Setup...")
client = httpx.Client(timeout=90.0)
# 1. Login
print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
# NOTE: The Hub uses /users/login/local
login_data = {
"email": ADMIN_EMAIL,
"password": ADMIN_PASSWORD
}
r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
assert r.status_code == 200, f"Login failed: {r.text}"
user_id = r.json().get("user_id")
assert user_id, "No user_id found in local login response."
os.environ["SYNC_TEST_USER_ID"] = user_id
client.headers.update({
"X-User-ID": user_id
})
# 2. Add API Providers and Configure LLM RBAC
print("[conftest] Configuring LLM provider and grouping...")
# Enable Gemini securely
prefs_payload = {
"llm": {
"active_provider": "gemini",
"providers": {
"gemini": {
"api_key": os.getenv("GEMINI_API_KEY", ""),
"model": "gemini/gemini-3-flash-preview"
}
}
},
"tts": {}, "stt": {}, "statuses": {}
}
r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"
# Establish a Group securely provisioned for AI Usage
group_payload = {
"name": "Integration Default Group",
"description": "Global RBAC group for all integration tasks",
"policy": {"llm": ["gemini"]}
}
r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
if r_group.status_code == 409:
r_groups = client.get(f"{BASE_URL}/users/admin/groups")
target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
group_id = target_group["id"]
# Update policy to ensure it remains clean
client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
else:
group_id = r_group.json().get("id")
# Bind the admin testing account into this fully capable RBAC group
client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})
# 3. Register Nodes
print("[conftest] Registering test nodes...")
tokens = {}
for node_id in [NODE_1, NODE_2]:
payload = {
"node_id": node_id,
"display_name": f"Integration {node_id}",
"is_active": True,
"skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
}
r_node = client.post(
f"{BASE_URL}/nodes/admin",
params={"admin_id": user_id},
json=payload
)
# If node already exists, delete it and recreate to obtain a fresh token
if r_node.status_code in (400, 409):
client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
tokens[node_id] = r_node.json().get("invite_token")
# 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
# but per CUJ we can mimic group creation)
print("[conftest] Creating access group...")
# Note: Using /users/admin/groups if it exists...
group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
"name": "Integration Test Group",
"description": "Integration Test Group"
})
if group_r.status_code == 200:
group_id = group_r.json().get("id")
# Give group access to nodes
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_________________ ERROR at setup of test_browser_skill_weather _________________
E   subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_______________ ERROR at setup of test_coworker_sc1_mirror_check _______________
E   subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_______________ ERROR at setup of test_coworker_sc3_limit_check ________________
E   subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_______________ ERROR at setup of test_coworker_sc2_rework_loop ________________
@pytest.fixture(scope="session")
def setup_mesh_environment():
"""
Simulates the CUJ:
1. Login as super admin.
2. Add API provider configurations (using env vars).
3. Create a group.
4. Register nodes and assign nodes to the group.
5. Spin up node docker containers with correct tokens.
"""
print("\n[conftest] Starting Mesh Integration Setup...")
client = httpx.Client(timeout=90.0)
# 1. Login
print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
# NOTE: The Hub uses /users/login/local
login_data = {
"email": ADMIN_EMAIL,
"password": ADMIN_PASSWORD
}
r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
assert r.status_code == 200, f"Login failed: {r.text}"
user_id = r.json().get("user_id")
assert user_id, "No user_id found in local login response."
os.environ["SYNC_TEST_USER_ID"] = user_id
client.headers.update({
"X-User-ID": user_id
})
# 2. Add API Providers and Configure LLM RBAC
print("[conftest] Configuring LLM provider and grouping...")
# Enable Gemini securely
prefs_payload = {
"llm": {
"active_provider": "gemini",
"providers": {
"gemini": {
"api_key": os.getenv("GEMINI_API_KEY", ""),
"model": "gemini/gemini-3-flash-preview"
}
}
},
"tts": {}, "stt": {}, "statuses": {}
}
r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"
# Establish a Group securely provisioned for AI Usage
group_payload = {
"name": "Integration Default Group",
"description": "Global RBAC group for all integration tasks",
"policy": {"llm": ["gemini"]}
}
r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
if r_group.status_code == 409:
r_groups = client.get(f"{BASE_URL}/users/admin/groups")
target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
group_id = target_group["id"]
# Update policy to ensure it remains clean
client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
else:
group_id = r_group.json().get("id")
# Bind the admin testing account into this fully capable RBAC group
client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})
# 3. Register Nodes
print("[conftest] Registering test nodes...")
tokens = {}
for node_id in [NODE_1, NODE_2]:
payload = {
"node_id": node_id,
"display_name": f"Integration {node_id}",
"is_active": True,
"skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
}
r_node = client.post(
f"{BASE_URL}/nodes/admin",
params={"admin_id": user_id},
json=payload
)
# If node already exists, delete it and recreate to obtain a fresh token
if r_node.status_code in (400, 409):
client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
tokens[node_id] = r_node.json().get("invite_token")
# 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
# but per CUJ we can mimic group creation)
print("[conftest] Creating access group...")
# Note: Using /users/admin/groups if it exists...
group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
"name": "Integration Test Group",
"description": "Integration Test Group"
})
if group_r.status_code == 200:
group_id = group_r.json().get("id")
# Give group access to nodes
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
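The captured stderr above ("failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operation not permitted") points at macOS AppleDouble metadata files (`._*`) in the Docker build context, which BuildKit cannot read extended attributes from. A minimal sketch of a pre-build cleanup helper the fixture could run before `docker build -q` (the function name and its placement are assumptions, not part of the original conftest):

```python
import pathlib

def clean_appledouble(context_dir: str) -> int:
    """Delete macOS AppleDouble ('._*') files from a Docker build context.

    BuildKit fails with 'failed to xattr ... operation not permitted' when
    these Finder metadata files are present in the context directory.
    Removing them before the build is one workaround.
    Returns the number of files removed.
    """
    removed = 0
    for path in pathlib.Path(context_dir).rglob("._*"):
        if path.is_file():
            path.unlink()
            removed += 1
    return removed
```

Adding `**/._*` and `.DS_Store` to `agent-node/.dockerignore` would also keep these files out of future build contexts without touching the working tree.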
____________ ERROR at setup of test_coworker_sc4_context_compaction ____________
[setup traceback identical to test_coworker_sc2_rework_loop above: setup_mesh_environment failed at `docker build -q ./agent-node` with subprocess.CalledProcessError (exit status 1)]
_________________ ERROR at setup of test_coworker_full_journey _________________
[setup traceback identical to test_coworker_sc2_rework_loop above: setup_mesh_environment failed at `docker build -q ./agent-node` with subprocess.CalledProcessError (exit status 1)]
_ ERROR at setup of TestSmallFileSync.test_case1_write_from_node1_visible_on_node2_and_server _
[setup traceback identical to test_coworker_sc2_rework_loop above: setup_mesh_environment failed at `docker build -q ./agent-node` with subprocess.CalledProcessError (exit status 1); log truncated]
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_ ERROR at setup of TestSmallFileSync.test_case2_write_from_server_visible_on_all_nodes _
_ ERROR at setup of TestSmallFileSync.test_case3_delete_from_server_purges_client_nodes _
_ ERROR at setup of TestSmallFileSync.test_case4_delete_from_node2_purges_server_and_node1 _
_ ERROR at setup of TestSmallFileSync.test_case9_cat_deleted_file_returns_quickly_not_timeout _
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
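The captured stderr points at the real failure: BuildKit's sender cannot read extended attributes from a macOS AppleDouble sidecar file (`._regex_patterns.py`) inside the build context, so `docker build -q ./agent-node` exits non-zero before any image is produced. One possible remedy (alongside `dot_clean` or a `._*` entry in `.dockerignore`) is to strip the sidecar files before building. The helper below is a minimal sketch, not part of the fixture; `remove_appledouble` and the file names are illustrative:

```python
import pathlib
import tempfile

def remove_appledouble(root: str) -> int:
    """Delete macOS '._*' AppleDouble sidecar files under root.

    These files carry resource-fork/xattr data that Docker BuildKit's
    context sender may fail to read ("failed to xattr ... operation
    not permitted"), aborting the build before the first layer.
    """
    removed = 0
    for p in pathlib.Path(root).rglob("._*"):
        if p.is_file():
            p.unlink()
            removed += 1
    return removed

# Example on a scratch tree: the sidecar is removed, the module survives.
with tempfile.TemporaryDirectory() as d:
    src = pathlib.Path(d, "src")
    src.mkdir()
    (src / "._regex_patterns.py").write_text("")
    (src / "regex_patterns.py").write_text("PATTERNS = []\n")
    print(remove_appledouble(d))  # 1
```

If adopted, the call would belong in the fixture just before the `subprocess.run(["docker", "build", ...])` line, e.g. `remove_appledouble("./agent-node")`.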
_ ERROR at setup of TestSmallFileSync.test_case11_hub_pseudo_node_write_visibility _
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
____ ERROR at setup of TestNodeResync.test_case10_node_resync_after_restart ____
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_ ERROR at setup of TestLargeFileSync.test_case5_large_file_from_node1_to_server_and_node2 _
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_ ERROR at setup of TestLargeFileSync.test_case6_large_file_from_server_to_all_nodes _
@pytest.fixture(scope="session")
def setup_mesh_environment():
"""
Simulates the CUJ:
1. Login as super admin.
2. Add API provider configurations (using env vars).
3. Create a group.
4. Register nodes and assign nodes to the group.
5. Spin up node docker containers with correct tokens.
"""
print("\n[conftest] Starting Mesh Integration Setup...")
client = httpx.Client(timeout=90.0)
# 1. Login
print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
# NOTE: The Hub uses /users/login/local
login_data = {
"email": ADMIN_EMAIL,
"password": ADMIN_PASSWORD
}
r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
assert r.status_code == 200, f"Login failed: {r.text}"
user_id = r.json().get("user_id")
assert user_id, "No user_id found in local login response."
os.environ["SYNC_TEST_USER_ID"] = user_id
client.headers.update({
"X-User-ID": user_id
})
# 2. Add API Providers and Configure LLM RBAC
print("[conftest] Configuring LLM provider and grouping...")
# Enable Gemini securely
prefs_payload = {
"llm": {
"active_provider": "gemini",
"providers": {
"gemini": {
"api_key": os.getenv("GEMINI_API_KEY", ""),
"model": "gemini/gemini-3-flash-preview"
}
}
},
"tts": {}, "stt": {}, "statuses": {}
}
r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"
# Establish a Group securely provisioned for AI Usage
group_payload = {
"name": "Integration Default Group",
"description": "Global RBAC group for all integration tasks",
"policy": {"llm": ["gemini"]}
}
r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
if r_group.status_code == 409:
r_groups = client.get(f"{BASE_URL}/users/admin/groups")
target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
group_id = target_group["id"]
# Update policy to ensure it remains clean
client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
else:
group_id = r_group.json().get("id")
# Bind the admin testing account into this fully capable RBAC group
client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})
# 3. Register Nodes
print("[conftest] Registering test nodes...")
tokens = {}
for node_id in [NODE_1, NODE_2]:
payload = {
"node_id": node_id,
"display_name": f"Integration {node_id}",
"is_active": True,
"skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
}
r_node = client.post(
f"{BASE_URL}/nodes/admin",
params={"admin_id": user_id},
json=payload
)
# If node already exists, delete it and recreate to obtain a fresh token
if r_node.status_code in (400, 409):
client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
tokens[node_id] = r_node.json().get("invite_token")
# 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
# but per CUJ we can mimic group creation)
print("[conftest] Creating access group...")
# Note: Using /users/admin/groups if it exists...
group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
"name": "Integration Test Group",
"description": "Integration Test Group"
})
if group_r.status_code == 200:
group_id = group_r.json().get("id")
# Give group access to nodes
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
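The stderr captured above ("failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted") points at macOS AppleDouble metadata files ("._*") in the build context tripping BuildKit's file sender. A minimal pre-build cleanup sketch, assuming the fixture's ./agent-node context directory; `remove_appledouble_files` is a hypothetical helper, not part of conftest.py:

```python
from pathlib import Path

def remove_appledouble_files(context_dir: str) -> list[str]:
    """Delete macOS AppleDouble ('._*') files from a Docker build context.

    These Finder metadata files can make BuildKit's context sender fail
    with 'failed to xattr ...' errors like the one in the log above.
    Returns the paths that were removed.
    """
    removed = []
    for path in sorted(Path(context_dir).rglob("._*")):
        if path.is_file():
            path.unlink()           # drop the metadata file itself
            removed.append(str(path))
    return removed
```

In the fixture this would run just before `subprocess.run(["docker", "build", "-q", "./agent-node"], ...)`; when `-q` yields an empty image ID, re-running the build without `-q` would also surface the full BuildKit output instead of the truncated stderr seen here.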
_ ERROR at setup of TestLargeFileSync.test_case7_delete_large_file_from_server_purges_nodes _
[setup_mesh_environment fixture and traceback identical to the test_case6 setup error above: docker build -q ./agent-node failed, subprocess.CalledProcessError exit status 1]
_ ERROR at setup of TestLargeFileSync.test_case8_delete_large_file_from_node2_purges_server_and_node1 _
[setup_mesh_environment fixture and traceback identical to the test_case6 setup error above: docker build -q ./agent-node failed, subprocess.CalledProcessError exit status 1]
_ ERROR at setup of TestGigabyteFileSync.test_case_1gb_sync_from_client_to_server_and_node _
[setup_mesh_environment fixture and traceback identical to the test_case6 setup error above: docker build -q ./agent-node failed, subprocess.CalledProcessError exit status 1]
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
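Editor's note: the captured stderr points at a macOS AppleDouble metadata file (`._regex_patterns.py`) inside the docker build context; BuildKit's sender cannot read its extended attributes, so the build fails before any Dockerfile step runs. A likely remedy, inferred from the error text rather than stated anywhere in this log, is to delete the `._*` files from the context before building. Demonstrated here on a scratch directory instead of the real `./agent-node` context:

```shell
# Build a scratch directory that mimics the failing context layout.
ctx="$(mktemp -d)"
mkdir -p "$ctx/src"
touch "$ctx/src/regex_patterns.py" "$ctx/src/._regex_patterns.py"

# Delete macOS AppleDouble metadata files ("._*") that break the
# BuildKit context transfer. On the real tree this would be:
#   find agent-node -name '._*' -type f -delete
find "$ctx" -name '._*' -type f -delete

ls "$ctx/src"
```

Running this against the actual `agent-node` directory (or exporting `COPYFILE_DISABLE=1` before copying files on macOS) should let the `docker build -q ./agent-node` step in conftest.py proceed.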
____ ERROR at setup of TestSessionAutoPurge.test_session_lifecycle_cleanup _____
(setup traceback identical to the first occurrence above: `docker build -q ./agent-node` exits 1 with `failed to xattr agent-node/src/agent_node/core/._regex_patterns.py`, and `setup_mesh_environment` re-raises the CalledProcessError)
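Separately, the quoted fixture prints the build output and then re-raises, which is why pytest shows both the print and the full CalledProcessError chain for every test. The build step could attach stderr to a single exception instead. This is a hedged sketch, not the project's actual conftest code; `build_image` and its injectable `docker` parameter are hypothetical names introduced here to make the sketch testable without Docker:

```python
import subprocess

def build_image(context="./agent-node", docker=("docker",)):
    """Build an image quietly and return its ID.

    On failure, raise one RuntimeError carrying the build stderr,
    instead of printing and re-raising a CalledProcessError.
    The `docker` tuple lets tests substitute a stand-in command.
    """
    cmd = [*docker, "build", "-q", context]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(f"docker build failed:\n{proc.stderr}")
    image_id = proc.stdout.strip()
    if not image_id:
        raise RuntimeError("docker build -q returned an empty image ID")
    return image_id
```

With this shape, the session fixture fails once with a readable message rather than the doubled print-plus-traceback seen above.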
____________ ERROR at setup of test_create_session_and_chat_gemini _____________
(setup traceback identical to the first occurrence above: `docker build -q ./agent-node` exits 1 with `failed to xattr agent-node/src/agent_node/core/._regex_patterns.py`, and `setup_mesh_environment` re-raises the CalledProcessError)
_____________________ ERROR at setup of test_login_success _____________________
(setup traceback identical to the first occurrence above: `docker build -q ./agent-node` exits 1 with `failed to xattr agent-node/src/agent_node/core/._regex_patterns.py`, and `setup_mesh_environment` re-raises the CalledProcessError)
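As a more durable alternative to deleting the metadata files before each run, the build context may be able to exclude them entirely via a `.dockerignore` next to the `agent-node` Dockerfile. This is an assumption: the log does not show whether `agent-node` already has one, and whether the ignore file suppresses this particular sender error depends on the BuildKit version in use. A sketch:

```
# agent-node/.dockerignore (sketch): keep macOS metadata out of the context
._*
**/._*
.DS_Store
**/.DS_Store
```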
____________ ERROR at setup of test_login_failure_invalid_password _____________
(setup traceback identical to the first occurrence above, truncated in the captured log: `docker build -q ./agent-node` exits 1 with the same xattr error on `._regex_patterns.py`)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.
The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
or pass capture_output=True to capture both.
If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return code
in the returncode attribute, and output & stderr attributes if those streams
were captured.
If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.
There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin. If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.
By default, all communication is in bytes, and therefore any "input" should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or universal_newlines.
The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be used.')
kwargs['stdin'] = PIPE
if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
raise ValueError('stdout and stderr arguments may not be used '
'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE
with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads. communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except: # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
> raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
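The stderr above points at the root cause: the build context contains a macOS AppleDouble metadata file (`._regex_patterns.py`), and BuildKit fails with "failed to xattr … operation not permitted" when it tries to read its extended attributes. A plausible remedy, sketched here against an illustrative scratch directory (the paths and filenames are hypothetical stand-ins for `./agent-node`), is to delete the `._*` files before building, or exclude them via `.dockerignore`:

```shell
# Demo context with an AppleDouble file alongside the real module
# (illustrative paths; substitute the actual ./agent-node tree).
mkdir -p /tmp/agent-node-demo/src
touch /tmp/agent-node-demo/src/regex_patterns.py \
      /tmp/agent-node-demo/src/._regex_patterns.py

# Option 1: strip AppleDouble metadata files from the context.
find /tmp/agent-node-demo -name '._*' -type f -delete

# Option 2: keep them out of the Docker build context entirely.
printf '**/._*\n**/.DS_Store\n' > /tmp/agent-node-demo/.dockerignore
```

On macOS, `dot_clean <dir>` is another way to merge and remove `._*` files; either approach should let `docker build -q ./agent-node` proceed past this error, assuming no other context files are affected.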
______________ ERROR at setup of test_login_failure_invalid_user _______________
_________ ERROR at setup of test_node_full_lifecycle_and_api_coverage __________
______________ ERROR at setup of test_parallel_rubric_generation _______________
_______________ ERROR at setup of test_verify_llm_success_gemini _______________
@pytest.fixture(scope="session")
def setup_mesh_environment():
    """
    Simulates the CUJ:
    1. Login as super admin.
    2. Add API provider configurations (using env vars).
    3. Create a group.
    4. Register nodes and assign nodes to the group.
    5. Spin up node docker containers with correct tokens.
    """
    print("\n[conftest] Starting Mesh Integration Setup...")
    client = httpx.Client(timeout=90.0)

    # 1. Login
    print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
    # NOTE: The Hub uses /users/login/local
    login_data = {
        "email": ADMIN_EMAIL,
        "password": ADMIN_PASSWORD
    }
    r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
    assert r.status_code == 200, f"Login failed: {r.text}"
    user_id = r.json().get("user_id")
    assert user_id, "No user_id found in local login response."
    os.environ["SYNC_TEST_USER_ID"] = user_id
    client.headers.update({
        "X-User-ID": user_id
    })

    # 2. Add API Providers and Configure LLM RBAC
    print("[conftest] Configuring LLM provider and grouping...")
    # Enable Gemini securely
    prefs_payload = {
        "llm": {
            "active_provider": "gemini",
            "providers": {
                "gemini": {
                    "api_key": os.getenv("GEMINI_API_KEY", ""),
                    "model": "gemini/gemini-3-flash-preview"
                }
            }
        },
        "tts": {}, "stt": {}, "statuses": {}
    }
    r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
    assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"

    # Establish a Group securely provisioned for AI Usage
    group_payload = {
        "name": "Integration Default Group",
        "description": "Global RBAC group for all integration tasks",
        "policy": {"llm": ["gemini"]}
    }
    r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
    if r_group.status_code == 409:
        r_groups = client.get(f"{BASE_URL}/users/admin/groups")
        target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
        group_id = target_group["id"]
        # Update policy to ensure it remains clean
        client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
    else:
        group_id = r_group.json().get("id")
    # Bind the admin testing account into this fully capable RBAC group
    client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})

    # 3. Register Nodes
    print("[conftest] Registering test nodes...")
    tokens = {}
    for node_id in [NODE_1, NODE_2]:
        payload = {
            "node_id": node_id,
            "display_name": f"Integration {node_id}",
            "is_active": True,
            "skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
        }
        r_node = client.post(
            f"{BASE_URL}/nodes/admin",
            params={"admin_id": user_id},
            json=payload
        )
        # If node already exists, delete it and recreate to obtain a fresh token
        if r_node.status_code in (400, 409):
            client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
            r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
        assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
        tokens[node_id] = r_node.json().get("invite_token")

    # 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
    # but per CUJ we can mimic group creation)
    print("[conftest] Creating access group...")
    # Note: Using /users/admin/groups if it exists...
    group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
        "name": "Integration Test Group",
        "description": "Integration Test Group"
    })
    if group_r.status_code == 200:
        group_id = group_r.json().get("id")
        # Give group access to nodes
        for node_id in [NODE_1, NODE_2]:
            client.post(
                f"{BASE_URL}/nodes/admin/{node_id}/access",
                params={"admin_id": user_id},
                json={
                    "group_id": group_id,
                    "access_level": "use"
                }
            )

    # CRITICAL FIX: Ensure the user's DB preferences point to these fresh
    # nodes so that tools correctly route instead of using stale nodes from prior runs.
    updated_prefs = {
        "default_node_ids": [NODE_1]
    }
    client.patch(
        f"{BASE_URL}/nodes/preferences",
        params={"user_id": user_id},
        json=updated_prefs
    )

    # 5. Start Node Processes
    is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
    node_processes = []
    if not is_docker_disabled:
        print("[conftest] Starting local docker node containers...")
        network = "cortexai_default"
        subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

        print("[conftest] Building agent-node image...")
        try:
            image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
            image_id = image_proc.stdout.strip()
            if not image_id:
                raise Exception("Docker build -q returned empty image ID")
        except subprocess.CalledProcessError as e:
            print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
>           raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
input = None, capture_output = True, timeout = None, check = True
popenargs = (['docker', 'build', '-q', './agent-node'],)
kwargs = {'stderr': -1, 'stdout': -1, 'text': True}
process = <Popen: returncode: 1 args: ['docker', 'build', '-q', './agent-node']>
stdout = ''
stderr = 'ERROR: failed to solve: error from sender: failed to xattr agent-node/src/agent_node/core/._regex_patterns.py: operat...mitted\n\nView build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/z6ipo64eg08tve1o6b142h40b\n'
retcode = 1
def run(*popenargs,
        input=None, capture_output=False, timeout=None, check=False, **kwargs):
    """Run command with arguments and return a CompletedProcess instance.

    The returned instance will have attributes args, returncode, stdout and
    stderr. By default, stdout and stderr are not captured, and those attributes
    will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them,
    or pass capture_output=True to capture both.

    If check is True and the exit code was non-zero, it raises a
    CalledProcessError. The CalledProcessError object will have the return code
    in the returncode attribute, and output & stderr attributes if those streams
    were captured.

    If timeout is given, and the process takes too long, a TimeoutExpired
    exception will be raised.

    There is an optional argument "input", allowing you to
    pass bytes or a string to the subprocess's stdin. If you use this argument
    you may not also use the Popen constructor's "stdin" argument, as
    it will be used internally.

    By default, all communication is in bytes, and therefore any "input" should
    be bytes, and the stdout and stderr will be bytes. If in text mode, any
    "input" should be a string, and stdout and stderr will be strings decoded
    according to locale encoding, or by "encoding" if set. Text mode is
    triggered by setting any of text, encoding, errors or universal_newlines.

    The other arguments are the same as for the Popen constructor.
    """
    if input is not None:
        if kwargs.get('stdin') is not None:
            raise ValueError('stdin and input arguments may not both be used.')
        kwargs['stdin'] = PIPE

    if capture_output:
        if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
            raise ValueError('stdout and stderr arguments may not be used '
                             'with capture_output.')
        kwargs['stdout'] = PIPE
        kwargs['stderr'] = PIPE

    with Popen(*popenargs, **kwargs) as process:
        try:
            stdout, stderr = process.communicate(input, timeout=timeout)
        except TimeoutExpired as exc:
            process.kill()
            if _mswindows:
                # Windows accumulates the output in a single blocking
                # read() call run on child threads, with the timeout
                # being done in a join() on those threads. communicate()
                # _after_ kill() is required to collect that and add it
                # to the exception.
                exc.stdout, exc.stderr = process.communicate()
            else:
                # POSIX _communicate already populated the output so
                # far into the TimeoutExpired exception.
                process.wait()
            raise
        except:  # Including KeyboardInterrupt, communicate handled that.
            process.kill()
            # We don't call process.wait() as .__exit__ does that for us.
            raise

        retcode = process.poll()
        if check and retcode:
>           raise CalledProcessError(retcode, process.args,
                                     output=stdout, stderr=stderr)
E           subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.

/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
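The captured stderr pinpoints the real failure: a macOS AppleDouble sidecar file (`agent-node/src/agent_node/core/._regex_patterns.py`) in the build context trips BuildKit's extended-attribute handling ("failed to xattr ... operation not permitted"). A minimal sketch of a pre-build cleanup step, assuming the `./agent-node` layout from the log; the helper names here (`clean_appledouble`, `build_agent_node`) are illustrative, not from the repo:

```python
import pathlib
import subprocess


def clean_appledouble(context_dir: str) -> int:
    """Delete macOS AppleDouble sidecar files ('._*') under the build context.

    These files carry resource-fork/xattr metadata; BuildKit can fail with
    'failed to xattr ... operation not permitted' when the context sender
    tries to transmit them. Returns the number of files removed.
    """
    removed = 0
    for path in pathlib.Path(context_dir).rglob("._*"):
        if path.is_file():
            path.unlink()
            removed += 1
    return removed


def build_agent_node(context_dir: str = "./agent-node") -> str:
    """Clean the context, then build and return the image ID (docker build -q)."""
    clean_appledouble(context_dir)
    proc = subprocess.run(
        ["docker", "build", "-q", context_dir],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()
```

Alternatively, adding a `**/._*` pattern (and `.DS_Store`) to `agent-node/.dockerignore` would keep the metadata out of the build context without deleting anything from disk.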
_______________ ERROR at setup of test_verify_llm_failure_invalid_key _______________
[Same session-fixture failure as test_verify_llm_success_gemini above: setup_mesh_environment
raised subprocess.CalledProcessError because `docker build -q ./agent-node` exited with
status 1 (xattr error on agent-node/src/agent_node/core/._regex_patterns.py).
Duplicate traceback omitted.]
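Because the fixture is `scope="session"`, the single build failure above is re-reported at the setup of every collected test, each time with the full `subprocess.py` frames. A hedged sketch of surfacing it once with just the Docker stderr; the `build_or_fail_fast` name and the skip-when-no-docker behavior are our additions, not the repo's:

```python
import shutil
import subprocess

import pytest


def build_or_fail_fast(context_dir: str = "./agent-node") -> str:
    """Build the agent-node image, skipping or failing with a concise message."""
    if shutil.which("docker") is None:
        pytest.skip("docker CLI not available on this host")
    proc = subprocess.run(
        ["docker", "build", "-q", context_dir],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # pytrace=False suppresses the long subprocess.py traceback, so the
        # report shows only Docker's stderr (e.g. the xattr error) once.
        pytest.fail(f"docker build failed:\n{proc.stderr}", pytrace=False)
    image_id = proc.stdout.strip()
    if not image_id:
        pytest.fail("docker build -q returned an empty image ID", pytrace=False)
    return image_id
```

Calling this from the session fixture keeps the failure report to a single, readable entry instead of one identical traceback per test.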
______________ ERROR at setup of test_update_user_llm_preferences ______________
[Same session-fixture failure as test_verify_llm_success_gemini above: setup_mesh_environment
raised subprocess.CalledProcessError because `docker build -q ./agent-node` exited with
status 1 (xattr error on agent-node/src/agent_node/core/._regex_patterns.py).
Duplicate traceback omitted.]
_____ ERROR at setup of test_verify_llm_success_gemini_masked_key_fallback _____
[Same session-fixture failure as test_verify_llm_success_gemini above: setup_mesh_environment
raised subprocess.CalledProcessError because `docker build -q ./agent-node` exited with
status 1 (xattr error on agent-node/src/agent_node/core/._regex_patterns.py).
Duplicate traceback omitted.]
___________ ERROR at setup of test_verify_llm_unrecognized_provider ____________
@pytest.fixture(scope="session")
def setup_mesh_environment():
"""
Simulates the CUJ:
1. Login as super admin.
2. Add API provider configurations (using env vars).
3. Create a group.
4. Register nodes and assign nodes to the group.
5. Spin up node docker containers with correct tokens.
"""
print("\n[conftest] Starting Mesh Integration Setup...")
client = httpx.Client(timeout=90.0)
# 1. Login
print(f"[conftest] Logging in as {ADMIN_EMAIL}...")
# NOTE: The Hub uses /users/login/local
login_data = {
"email": ADMIN_EMAIL,
"password": ADMIN_PASSWORD
}
r = client.post(f"{BASE_URL}/users/login/local", json=login_data)
assert r.status_code == 200, f"Login failed: {r.text}"
user_id = r.json().get("user_id")
assert user_id, "No user_id found in local login response."
os.environ["SYNC_TEST_USER_ID"] = user_id
client.headers.update({
"X-User-ID": user_id
})
# 2. Add API Providers and Configure LLM RBAC
print("[conftest] Configuring LLM provider and grouping...")
# Enable Gemini securely
prefs_payload = {
"llm": {
"active_provider": "gemini",
"providers": {
"gemini": {
"api_key": os.getenv("GEMINI_API_KEY", ""),
"model": "gemini/gemini-3-flash-preview"
}
}
},
"tts": {}, "stt": {}, "statuses": {}
}
r_config = client.put(f"{BASE_URL}/users/me/config", json=prefs_payload)
assert r_config.status_code == 200, f"Failed to configure LLM provider: {r_config.text}"
# Establish a Group securely provisioned for AI Usage
group_payload = {
"name": "Integration Default Group",
"description": "Global RBAC group for all integration tasks",
"policy": {"llm": ["gemini"]}
}
r_group = client.post(f"{BASE_URL}/users/admin/groups", json=group_payload)
if r_group.status_code == 409:
r_groups = client.get(f"{BASE_URL}/users/admin/groups")
target_group = next(g for g in r_groups.json() if g["name"] == group_payload["name"])
group_id = target_group["id"]
# Update policy to ensure it remains clean
client.put(f"{BASE_URL}/users/admin/groups/{group_id}", json=group_payload)
else:
group_id = r_group.json().get("id")
# Bind the admin testing account into this fully capable RBAC group
client.put(f"{BASE_URL}/users/admin/users/{user_id}/group", json={"group_id": group_id})
# 3. Register Nodes
print("[conftest] Registering test nodes...")
tokens = {}
for node_id in [NODE_1, NODE_2]:
payload = {
"node_id": node_id,
"display_name": f"Integration {node_id}",
"is_active": True,
"skill_config": {"shell": {"enabled": True}, "sync": {"enabled": True}}
}
r_node = client.post(
f"{BASE_URL}/nodes/admin",
params={"admin_id": user_id},
json=payload
)
# If node already exists, delete it and recreate to obtain a fresh token
if r_node.status_code in (400, 409):
client.delete(f"{BASE_URL}/nodes/admin/{node_id}", params={"admin_id": user_id})
r_node = client.post(f"{BASE_URL}/nodes/admin", params={"admin_id": user_id}, json=payload)
assert r_node.status_code == 200, f"Node registration failed: {r_node.text}"
tokens[node_id] = r_node.json().get("invite_token")
# 4. Add Group & Assign Permission (optional - tests use the user_id that registered it for now,
# but per CUJ we can mimic group creation)
print("[conftest] Creating access group...")
# Note: Using /users/admin/groups if it exists...
group_r = client.post(f"{BASE_URL}/users/admin/groups", json={
"name": "Integration Test Group",
"description": "Integration Test Group"
})
if group_r.status_code == 200:
group_id = group_r.json().get("id")
# Give group access to nodes
for node_id in [NODE_1, NODE_2]:
client.post(
f"{BASE_URL}/nodes/admin/{node_id}/access",
params={"admin_id": user_id},
json={
"group_id": group_id,
"access_level": "use"
}
)
# CRITICAL FIX: Ensure the user's DB preferences points to these fresh
# nodes so that tools correctly route instead of using stale nodes from prior runs.
updated_prefs = {
"default_node_ids": [NODE_1]
}
client.patch(
f"{BASE_URL}/nodes/preferences",
params={"user_id": user_id},
json=updated_prefs
)
# 5. Start Node Processes
is_docker_disabled = os.getenv("SKIP_DOCKER_NODES", "true").lower() == "true"
node_processes = []
if not is_docker_disabled:
print("[conftest] Starting local docker node containers...")
network = "cortexai_default"
subprocess.run(["docker", "rm", "-f", NODE_1, NODE_2], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
print("[conftest] Building agent-node image...")
try:
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
image_id = image_proc.stdout.strip()
if not image_id:
raise Exception("Docker build -q returned empty image ID")
except subprocess.CalledProcessError as e:
print(f"❌ [conftest] Docker build failed!\nSTDOUT: {e.stdout}\nSTDERR: {e.stderr}")
> raise e
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
__________________ ERROR at setup of test_get_provider_models __________________
ai-hub/integration_tests/conftest.py:167:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
_______ ERROR at setup of test_mesh_file_explorer_none_path_and_session ________
self = <Coroutine test_mesh_file_explorer_none_path_and_session>
def setup(self) -> None:
runner_fixture_id = f"_{self._loop_scope}_scoped_runner"
if runner_fixture_id not in self.fixturenames:
self.fixturenames.append(runner_fixture_id)
> return super().setup()
^^^^^^^^^^^^^^^
cortex-ai/lib/python3.12/site-packages/pytest_asyncio/plugin.py:458:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ai-hub/integration_tests/conftest.py:167: in setup_mesh_environment
raise e
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
____________ ERROR at setup of test_tool_service_node_id_validation ____________
ai-hub/integration_tests/conftest.py:167: in setup_mesh_environment
raise e
ai-hub/integration_tests/conftest.py:161: in setup_mesh_environment
image_proc = subprocess.run(["docker", "build", "-q", "./agent-node"], capture_output=True, text=True, check=True)
E subprocess.CalledProcessError: Command '['docker', 'build', '-q', './agent-node']' returned non-zero exit status 1.
/opt/anaconda3/lib/python3.12/subprocess.py:571: CalledProcessError
=========================== short test summary info ============================
ERROR ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_move_atomic
ERROR ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_copy_atomic
ERROR ai-hub/integration_tests/test_advanced_fs.py::TestAdvancedFS::test_mesh_stat_speed
ERROR ai-hub/integration_tests/test_agents.py::test_agent_lifecycle_and_api_coverage
ERROR ai-hub/integration_tests/test_agents.py::test_agent_webhook_trigger - s...
ERROR ai-hub/integration_tests/test_agents.py::test_agent_metrics_reset - sub...
ERROR ai-hub/integration_tests/test_audio.py::test_tts_voices - subprocess.Ca...
ERROR ai-hub/integration_tests/test_audio.py::test_tts_to_stt_lifecycle - sub...
ERROR ai-hub/integration_tests/test_browser_llm.py::test_browser_skill_weather
ERROR ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc1_mirror_check
ERROR ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc3_limit_check
ERROR ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc2_rework_loop
ERROR ai-hub/integration_tests/test_coworker_flow.py::test_coworker_sc4_context_compaction
ERROR ai-hub/integration_tests/test_coworker_full_journey.py::test_coworker_full_journey
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case1_write_from_node1_visible_on_node2_and_server
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case2_write_from_server_visible_on_all_nodes
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case3_delete_from_server_purges_client_nodes
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case4_delete_from_node2_purges_server_and_node1
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case9_cat_deleted_file_returns_quickly_not_timeout
ERROR ai-hub/integration_tests/test_file_sync.py::TestSmallFileSync::test_case11_hub_pseudo_node_write_visibility
ERROR ai-hub/integration_tests/test_file_sync.py::TestNodeResync::test_case10_node_resync_after_restart
ERROR ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case5_large_file_from_node1_to_server_and_node2
ERROR ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case6_large_file_from_server_to_all_nodes
ERROR ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case7_delete_large_file_from_server_purges_nodes
ERROR ai-hub/integration_tests/test_file_sync.py::TestLargeFileSync::test_case8_delete_large_file_from_node2_purges_server_and_node1
ERROR ai-hub/integration_tests/test_file_sync.py::TestGigabyteFileSync::test_case_1gb_sync_from_client_to_server_and_node
ERROR ai-hub/integration_tests/test_file_sync.py::TestSessionAutoPurge::test_session_lifecycle_cleanup
ERROR ai-hub/integration_tests/test_llm_chat.py::test_create_session_and_chat_gemini
ERROR ai-hub/integration_tests/test_login.py::test_login_success - subprocess...
ERROR ai-hub/integration_tests/test_login.py::test_login_failure_invalid_password
ERROR ai-hub/integration_tests/test_login.py::test_login_failure_invalid_user
ERROR ai-hub/integration_tests/test_node_registration.py::test_node_full_lifecycle_and_api_coverage
ERROR ai-hub/integration_tests/test_parallel_coworker.py::test_parallel_rubric_generation
ERROR ai-hub/integration_tests/test_provider_config.py::test_verify_llm_success_gemini
ERROR ai-hub/integration_tests/test_provider_config.py::test_verify_llm_failure_invalid_key
ERROR ai-hub/integration_tests/test_provider_config.py::test_update_user_llm_preferences
ERROR ai-hub/integration_tests/test_provider_config.py::test_verify_llm_success_gemini_masked_key_fallback
ERROR ai-hub/integration_tests/test_provider_config.py::test_verify_llm_unrecognized_provider
ERROR ai-hub/integration_tests/test_provider_config.py::test_get_provider_models
ERROR ai-hub/integration_tests/test_tools.py::test_mesh_file_explorer_none_path_and_session
ERROR ai-hub/integration_tests/test_tools.py::test_tool_service_node_id_validation
======================== 41 errors in 66.99s (0:01:06) =========================
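[editor's note] A preventive alternative to cleaning the context on every run is to keep macOS metadata out of the build context entirely. Assuming `agent-node/` does not already carry one, a `.dockerignore` along these lines should stop BuildKit from sending the sidecar files at all (behavior hedged: excluded paths are skipped by the context sender, so their xattrs are never read):

```
# macOS Finder metadata (AppleDouble sidecars and .DS_Store)
._*
**/._*
.DS_Store
**/.DS_Store
```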