cortex-hub / skills / mesh-terminal-control / SKILL.md

---
name: mesh_terminal_control
emoji: "🖥️"
description: Execute stateful shell commands and manage terminal sessions across the agent mesh (Swarm Control).
skill_type: remote_grpc
is_enabled: true
features:
  - swarm_control
config:
  service: TerminalService
  method: Execute
  capabilities:
    - shell
    - pty
    - interactive
  parameters:
    type: object
    properties:
      command:
        type: string
        description: 'Command to run. Use !RAW: prefix for REPL inputs.'
      node_id:
        type: string
        description: "Target node ID. Use 'hub' or 'server' for local server commands, but CRITICAL WARNING: NEVER use shell commands (rm, mkdir) to manipulate synchronized workspace files here; you MUST use mesh_file_explorer instead to avoid breaking the sync engine!"
      node_ids:
        type: array
        items:
          type: string
        description: List of node IDs for parallel swarm execution.
      timeout:
        type: integer
        description: Max seconds to wait. Default 30.
      no_abort:
        type: boolean
        description: 'Internal use: If true, don''t kill on timeout.'
      session_id:
        type: string
        description: Optional persistent session ID.
    required:
      - command
is_system: true
---
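To make the parameter schema concrete, here is a minimal sketch of some invocation payloads. Only the parameter names and types come from the spec above; the plain-dict call shape, node IDs, and session IDs are illustrative assumptions.

```python
# Hypothetical mesh_terminal_control payloads built from the parameter
# schema above. The dict shape is assumed; parameter names come from the spec.

# Basic one-shot command on a single node.
basic_call = {
    "command": "uname -a",   # required parameter
    "node_id": "node-7",     # hypothetical node ID
    "timeout": 30,           # default per the spec
}

# Stateful REPL usage: start python3 in a persistent session,
# then feed it input with the !RAW: prefix.
start_repl = {"command": "python3", "node_id": "node-7", "session_id": "py1"}
repl_input = {"command": "!RAW:print(2 + 2)", "node_id": "node-7", "session_id": "py1"}

# Parallel swarm sweep: node_ids (plural) fans the command out.
swarm_call = {"command": "uptime", "node_ids": ["node-1", "node-2", "node-3"]}
```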

## Documentation

This capability allows the orchestrator to interact with terminal sessions on remote nodes. It supports stateful REPLs, parallel execution across multiple nodes, and background task management.

### Important Note on Hub File Operations

CRITICAL WARNING for 'hub' node_id and File Operations: When node_id is set to 'hub' (or 'server'), mesh_terminal_control executes commands directly on the Hub's host operating system. For operations involving files within the synchronized Ghost Mirror workspace (/tmp/cortex-sync/{session_id}/), using native shell commands like rm, mkdir, cp, or mv will BYPASS the mesh synchronization engine. This can lead to file drift, inconsistencies, or unintended file restorations as the Hub's reconciliation logic may conflict with direct out-of-band modifications.

For ALL file creation, modification, or deletion actions intended to be synchronized across the mesh, you MUST use the mesh_file_explorer skill, even when targeting the 'hub' node. mesh_file_explorer is specifically designed to interact with the gRPC synchronization engine to ensure proper broadcast and consistency.
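A hedged sketch of the two paths side by side; the mesh_terminal_control parameters match the spec above, while the mesh_file_explorer parameter names (`action`, `path`) are hypothetical, since its schema is not shown here.

```python
# WRONG: a shell rm on the hub bypasses the sync engine and risks file
# drift or unintended restorations in the Ghost Mirror workspace.
wrong_call = {
    "skill": "mesh_terminal_control",
    "command": "rm /tmp/cortex-sync/{session_id}/report.txt",  # DO NOT do this
    "node_id": "hub",
}

# RIGHT: route workspace file changes through mesh_file_explorer so the
# gRPC sync engine can broadcast them. Parameter names here are hypothetical.
right_call = {
    "skill": "mesh_file_explorer",
    "action": "delete",      # hypothetical parameter
    "path": "report.txt",    # hypothetical parameter
}
```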

## AI Instructions

You are a high-level Mesh Orchestrator. When executing commands:

  1. Parallel Execution: Use 'node_ids' (plural) for simultaneous swarm sweeps.
  2. Immediate Returns: Calls return as soon as the task finishes, not when the timeout expires. If a task takes 1s but you set timeout=60, you get the result in 1s.
  3. Asynchronous Polling: For background tasks, set 'no_abort=True'. If it times out, you get 'TIMEOUT_PENDING'. You can then use 'mesh_wait_tasks' with 'timeout=0' to peek at progress without blocking your turn.
  4. Interactive Sub-shells: Once inside a REPL, every subsequent input MUST use the !RAW: prefix.
  5. Swarm Flow: To start a background server (e.g. iperf3 -s) and move to node 2 immediately, use 'no_abort=True' and a SMALL 'timeout' (e.g. 2s). Never block your planning turn waiting for a persistent service.
  6. Privilege-Aware Commands: Each node's 'Privilege Level' is shown in the mesh context. Use it to decide how to run privileged operations:
    • 'root': Run commands directly (no sudo prefix needed or available).
    • 'standard user with passwordless sudo': Prepend sudo to privileged commands.
    • 'standard user (sudo NOT available)': Avoid privileged ops or inform the user.
  7. iperf3 Speed Test Pattern: Step A: On the server node, run 'iperf3 -s -D' (daemon mode) with timeout=3, no_abort=True. Step B: On the client node, run 'iperf3 -c <server-ip> -t 10' with timeout=20. Node IPs are in the mesh context.
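The two-step iperf3 pattern above can be sketched as a pair of payloads. The dict shape and node IDs are assumptions; the timeout/no_abort values follow the instructions, and the IP is a placeholder standing in for the address from the mesh context.

```python
# Step A: start the iperf3 server as a daemon on the server node.
# A small timeout plus no_abort=True returns control quickly (expect
# TIMEOUT_PENDING) instead of blocking the planning turn on a service.
server_call = {
    "command": "iperf3 -s -D",
    "node_id": "node-server",  # hypothetical node ID
    "timeout": 3,
    "no_abort": True,
}

# Step B: run the client against the server's IP from the mesh context.
client_call = {
    "command": "iperf3 -c 10.0.0.5 -t 10",  # 10.0.0.5 is a placeholder IP
    "node_id": "node-client",  # hypothetical node ID
    "timeout": 20,             # comfortably above the 10s test duration
}
```

Progress on the backgrounded server can later be peeked at with mesh_wait_tasks and timeout=0, as item 3 describes.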