This workflow documents common issues encountered with AI Hub file synchronization and node management, along with the strategies and scripts used to diagnose and resolve them.
Issue 1: Workspace Files Not Syncing to Nodes

Symptoms:
- Files written to the session workspace (e.g., with dd or cp) do not reliably appear on the target nodes.

Troubleshooting Steps & Checklist:
- Read payload.offset directly rather than calling payload.HasField("offset") on primitive fields, as Proto3 does not reliably support .HasField for int64.
- Confirm the node .watcher and Server .mirror are actively ignoring files ending with .cortex_tmp and .cortex_lock to prevent recursion loops (Sync Echo).
- For source="empty" workspaces, verify that the AI Hub actually sends a START_WATCHING gRPC signal to the target nodes. Without this signal, nodes will create the empty directory, but the watchdog daemon will not attach to it, resulting in zero outbound file syncs.
- A single dd run triggers 60+ on_modified events, which can occasionally choke the stream. Implementing an on_closed forced sync is more reliable.
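The suffix filter and event coalescing described in the checklist can be sketched as follows. This is illustrative only: should_sync, Debouncer, and the 0.5-second quiet window are assumed names and values, not the shipped watcher implementation.

```python
import time

# Temp/lock suffixes that both the node watcher and the Server mirror
# must skip, or a synced write re-triggers the watcher (Sync Echo).
IGNORED_SUFFIXES = (".cortex_tmp", ".cortex_lock")

def should_sync(path: str, is_directory: bool = False) -> bool:
    """Filter applied to every filesystem event before it is queued."""
    return not is_directory and not path.endswith(IGNORED_SUFFIXES)

class Debouncer:
    """Collapse a burst of on_modified events (60+ from a single dd run)
    into one sync once the file has been quiet for `window` seconds.
    Where the platform delivers on_closed, flush immediately instead."""

    def __init__(self, window: float = 0.5):
        self.window = window
        self._last_event: dict[str, float] = {}

    def record(self, path: str) -> None:
        # Called on every on_modified event; only the newest timestamp matters.
        self._last_event[path] = time.monotonic()

    def ready(self, path: str) -> bool:
        # True once the file has been quiet for at least `window` seconds.
        last = self._last_event.get(path)
        return last is not None and time.monotonic() - last >= self.window
```

The debounce window only matters on platforms without a close-write notification; an on_closed event makes the "file is done being written" signal explicit and skips the guesswork.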
Issue 2: Synced Files Owned by root on the Host

Symptoms:
- The host user (UID 1000) is unable to manipulate or delete synced files directly from the host machine because the container writes them as root.

Troubleshooting Steps & Checklist:
- Use os.chown in the Mirror: during the atomic swap phase on the AI Hub (os.replace()), capture the os.stat() of the parent directory, then apply os.chown(tmp_path, parent_stat.st_uid, parent_stat.st_gid) to the .cortex_tmp file immediately before the final swap. This ensures the host user retains ownership of all synced data on NFS/mounted volumes.
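The chown-before-swap sequence can be sketched as below; atomic_mirror_write is a hypothetical helper name, not the actual Mirror code path.

```python
import os

def atomic_mirror_write(tmp_path: str, final_path: str) -> None:
    """Promote a fully written .cortex_tmp file into place, matching the
    ownership of the destination directory so the host user, not root,
    ends up owning the synced file."""
    parent_stat = os.stat(os.path.dirname(final_path))
    # Changing to an arbitrary uid/gid requires root (or CAP_CHOWN),
    # which the container process has.
    os.chown(tmp_path, parent_stat.st_uid, parent_stat.st_gid)
    os.replace(tmp_path, final_path)  # atomic swap on the same filesystem
```

Applying ownership to the temp file before os.replace() means the final path never exists in a root-owned state, even briefly.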
Issue 3: Stale Node Auto-Attaches to New Sessions

Symptoms:
- A stale or offline node (synology-nas) keeps automatically attaching itself to newly created sessions.

Troubleshooting Steps & Checklist:
- Inspect the user preferences schema. In the Cortex Hub, default node attachments are tied to the user profile, not just individual prior sessions.
- Remove the stale node ID from the user's default_node_ids array.

Example Surgical Database Script:
import sys
sys.path.append("/app")
from app.db.session import get_db_session
from app.db.models import User, Session
from sqlalchemy.orm.attributes import flag_modified
try:
    with get_db_session() as db:
        # Find the specific user and update their preferences dict
        users = db.query(User).all()
        for u in users:
            prefs = u.preferences or {}
            nodes = prefs.get("nodes", {})
            defaults = nodes.get("default_node_ids", [])
            if "synology-nas" in defaults:
                defaults.remove("synology-nas")
                nodes["default_node_ids"] = defaults
                prefs["nodes"] = nodes
                u.preferences = prefs
                # JSON columns don't track in-place mutation; mark dirty explicitly
                flag_modified(u, "preferences")

        # Clean up any already corrupted sessions
        sessions = db.query(Session).filter(Session.sync_workspace_id == "YOUR_SESSION_ID").all()
        for s in sessions:
            attached = s.attached_node_ids or []
            if "synology-nas" in attached:
                attached.remove("synology-nas")
                s.attached_node_ids = attached
                flag_modified(s, "attached_node_ids")

        db.commit()
    print("Surgical cleanup complete.")
except Exception as e:
    print(f"Error: {e}")
Useful Diagnostic Commands:

# Check watcher activity in the node container logs
docker logs cortex-test-1 2>&1 | grep "📁👁️"

# Trace a specific test file through the AI Hub sync pipeline
docker logs ai_hub_service | grep "dd_test_new_live.bin"

# Inspect ownership of the mirrored session data in the hub's Docker volume
echo 'your-password' | sudo -S ls -la /var/lib/docker/volumes/cortex-hub_ai_hub_data/_data/mirrors/session-YOUR_SESSION