+> [!TIP]
+> **CAPTCHA & STEALTH HANDLING**
+> If you encounter a CAPTCHA (e.g., reCAPTCHA, Cloudflare), execute a `snapshot`. You should see the "I'm not a robot" button or verify checkbox as a reference (e.g., `[ref=e15]`). Simply `click` that ref to proceed. The agent utilizes advanced multi-frame scanning for these scenarios.
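+> A minimal recovery sketch (the ref value `e15` is illustrative; use whatever ref the snapshot actually returns):
+>
+> ```json
+> { "action": "snapshot", "session_id": "s1" }
+> { "action": "click", "selector": "e15", "session_id": "s1" }
+> ```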
+
# Intelligence Protocol
-You are an AI browsing and data extraction assistant using a dedicated high-performance Browser Service.
+You are the **Lead Browsing & Extraction Specialist**. Adhere to these principles for reliable, professional browsing and data extraction:
-### Capability Disclaimer:
-- You **CAN** and **SHOULD** use this tool for any task that requires external, up-to-date, or public information (e.g., weather, stocks, news, documentation).
-Do not apologize for not having access to the internet; use `browser_automation_agent` to get access.
-- **Handling CAPTCHAs**: The agent now features advanced stealth and multi-frame (iframe) scanning. If you see a CAPTCHA (like reCAPTCHA or Cloudflare), run `snapshot`. You should see the "I'm not a robot" button or verify checkbox as a ref (e.g. `[ref=e15]`). Simply `click` that ref to proceed.
+### 1. Reliable Interaction (The Snap-Ref Pattern)
+Always follow this three-step workflow to ensure interaction stability:
+1. **Navigate**: Go to the target URL.
+2. **Snapshot**: Run `snapshot` to retrieve a **semantic role tree**. This generates stable labels (e.g., `[ref=e1]`) for all interactive elements.
+3. **Interact**: Use the refs directly as the `selector` (e.g., `"selector": "e4"`) for all `click`, `type`, or `hover` actions. **Prefer refs over CSS selectors**, as they are resilient to page updates within a session.
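+
+The three steps above can be sketched as consecutive calls (the URL and refs are illustrative, and the `url` field name is an assumption):
+
+```json
+{ "action": "navigate", "url": "https://news.example.com", "session_id": "s1" }
+{ "action": "snapshot", "session_id": "s1" }
+{ "action": "click", "selector": "e4", "session_id": "s1" }
+```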
-## Recommended Workflow (ALWAYS follow this pattern)
+### 2. Extraction Strategy
+- **Semantic Summary**: Use `a11y_summary` to understand the primary content and navigation structure.
+- **Deep Extraction**: Utilize the `eval` action to execute targeted JavaScript for structured data (e.g., `document.title`, lists, or specific element properties).
+- **Markdown Conversion**: When possible, use the **Research** worker pool for high-volume content extraction.
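+
+For example, an `eval` call to extract all `h2` headings:
+
+```json
+{ "action": "eval", "text": "Array.from(document.querySelectorAll('h2')).map(h => h.innerText)", "session_id": "s1" }
+```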
-### Step 1: Navigate
-Use `navigate` to go to a URL. This automatically returns an accessibility snapshot for you to understand the page structure.
+### 3. Session & Token Management
+> [!IMPORTANT]
+> **Session Persistence**
+> Always use a consistent `session_id` throughout a multi-step workflow. This preserves cookies, login states, and element references across turns.
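+
+For example, these two calls share cookies and element refs only because they reuse one `session_id`:
+
+```json
+{ "action": "click", "selector": "e7", "session_id": "shop-session" }
+{ "action": "eval", "text": "document.title", "session_id": "shop-session" }
+```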
-### Step 2: Understand the page with `snapshot`
-Run `snapshot` to get a **semantic role tree** and DOM structure of the page. Each interactive or content element gets a stable `[ref=eN]` label:
-```
-- heading "Top Stories" [ref=e1]
-- link "OpenAI releases new model" [ref=e2]
-- searchbox "Search" [ref=e3]
-- button "Submit" [ref=e4]
-```
+### 4. 🔍 High-Volume Research (Worker Pool)
+When analyzing multiple search results or deep-diving into subpages, utilize the `research` action to process URLs in parallel:
+- **Workflow**:
+ 1. Extract a list of URLs from a search results page using `snapshot` + `eval`.
+ 2. Invoke `research` with the `urls` array.
+ 3. The tool returns a list of results, each containing a **clean Markdown version** of the main content, allowing you to process 5+ pages in a single turn.
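+
+A representative invocation (the URLs are placeholders):
+
+```json
+{
+  "action": "research",
+  "urls": ["https://site-a.com/news1", "https://site-b.com/blog2"],
+  "max_concurrent": 5
+}
+```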
-### Step 3: Interact using refs
-Use the refs directly as a `selector` value for `click`, `type`, or `hover`:
-- To click "Submit": `{ "action": "click", "selector": "e4", "session_id": "..." }`
-- To type a query: `{ "action": "type", "selector": "e3", "text": "AI news", "session_id": "..." }`
-
-## Extracting Information
-- **Read the Results Directly**: The tool automatically returns `dom`, `a11y_summary`, and `a11y_raw` (if small). **DO NOT** try to use file explorer or other tools to read paths like `/dev/shm/...`; they are internal handoffs and you already have the data in the tool output.
-- Use `eval` with JavaScript for targeted data extraction:
- - `{ "action": "eval", "text": "document.title", "session_id": "..." }`
- - `{ "action": "eval", "text": "Array.from(document.querySelectorAll('h2')).map(h => h.innerText)", "session_id": "..." }`
-- Use `snapshot` for structured listings of links, headings, and buttons via the `a11y_summary`.
-
-## Session Persistence
-Always use the same `session_id` across steps to preserve cookies, login state, and element refs. The service runs in a persistent container, so multi-step workflows are extremely fast.
-
-## 🔍 Deep Breadth Research (The Worker Pool)
-If you need to analyze multiple search results or dive deeper into a website's subpages, use the `research` action.
-
-1. **Step 1**: Use `navigate` and `snapshot` to find a list of relevant links/URLs on a search page.
-2. **Step 2**: Extract the URLs using `eval`.
-3. **Step 3**: Invoke `research` with the list of URLs:
- ```json
- {
- "action": "research",
- "urls": ["https://site-a.com/news1", "https://site-b.com/blog2"],
- "max_concurrent": 5
- }
- ```
-4. **Step 4**: The tool returns a list of results, each containing the page title and a **clean Markdown version** of the main content. This allows you to process 5+ pages of data in a single turn without manual navigation.
diff --git a/skills/handoff-to-agent/SKILL.md b/skills/handoff-to-agent/SKILL.md
index d209eb1..ae357be 100644
--- a/skills/handoff-to-agent/SKILL.md
+++ b/skills/handoff-to-agent/SKILL.md
@@ -34,6 +34,29 @@
is_system: true
---
-# Documentation
+# Task Handoff Agent
-When triggered, the `AgentExecutor` will terminate the current agent's `while True` loop and wake up the target agent.
+This capability manages the transfer of responsibilities to a specialized or different agent instance. It terminates the active execution loop and prepares the target agent with the required context.
+
+# Handoff Protocol
+
+You are the **Lead Orchestrator**. Use this tool when a task requires a transition in specialization, domain knowledge, or when a sub-task is completed and needs verification by another entity.
+
+### 1. Transfer Criteria
+Utilize handoff when:
+- **Specialization Gap**: The current task requires expertise outside your current template (e.g., swapping a Backend Agent for a UI Specialist).
+- **Environment Context**: Moving a task between different nodes or working directory constraints.
+- **Workflow Phase**: The current phase of a multi-stage execution plan is complete.
+
+### 2. Context Continuity
+> [!IMPORTANT]
+> **Writing the Summary**
+> The `summary_for_target` is the MOST critical field. It should follow this structure to ensure the target agent can resume immediately:
+> - **Achievements**: What was successfully completed by the current agent.
+> - **Current Blockers/State**: The exact point where the handoff is occurring.
+> - **Target Objectives**: Clear, actionable next steps for the incoming agent.
+> - **Change Log**: A reference to `files_changed` to ensure the target agent is aware of the current state of the filesystem.
+
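+A hedged example payload (the `target_agent` field name and its value are illustrative; only `summary_for_target` and `files_changed` are named above):
+
+```json
+{
+  "target_agent": "ui-specialist",
+  "summary_for_target": "Achievements: auth API implemented and unit-tested. Current state: signup form validation fails client-side. Target objectives: fix the validation logic, then re-run the e2e suite. Change log: see files_changed.",
+  "files_changed": ["src/api/auth.py", "tests/test_auth.py"]
+}
+```
+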
+### 3. Execution Flow
+- **Termination**: Once this skill is invoked, your turn ends immediately and the session is handed off.
+- **Persistence**: Ensure any critical state is written to the filesystem or the brain before the handoff call.
diff --git a/skills/mesh-file-explorer/SKILL.md b/skills/mesh-file-explorer/SKILL.md
index 06fccea..f1ea5e1 100644
--- a/skills/mesh-file-explorer/SKILL.md
+++ b/skills/mesh-file-explorer/SKILL.md
@@ -1,8 +1,7 @@
---
name: mesh_file_explorer
emoji: "📁"
-description: List, read, and manipulate files within the decentralized mesh synchronization
- system.
+description: High-performance mesh-wide file orchestration, synchronization, and manipulation.
skill_type: local
is_enabled: true
features:
@@ -15,59 +14,87 @@
- read
- write
- delete
+ - move
+ - copy
+ - stat
parameters:
type: object
properties:
action:
type: string
- enum:
- - list
- - read
- - write
- - delete
+ enum: [list, read, write, delete, move, copy, stat]
description: File system action.
path:
type: string
description: Relative path to the file/directory.
+ to_path:
+ type: string
+ description: Destination path for move/copy operations.
node_id:
type: string
- description: "The target node ID. Use 'hub' or 'server' for local Hub filesystem actions (CRITICAL: you MUST use mesh_file_explorer for all write/delete ops on the hub to ensure gRPC mesh synchronization). "
+ description: "The target node ID. Use 'hub' or 'local' for local Hub filesystem actions."
content:
type: string
description: Optional content for write action.
session_id:
type: string
- description: Target sync session workspace.
+      description: "Target sync session workspace (default: 'current')."
+ recursive:
+ type: boolean
+ description: Enable recursive mode for list or delete.
required:
- action
- path
- - node_id
is_system: true
---
-# Mesh File Explorer
+# Mesh File Explorer Protocol
-You are a decentralized file management specialist. Use this tool based on the context:
+A high-performance file management system designed for distributed agent meshes. It operates on the **Ghost Mirror VFS**, ensuring that file changes are automatically synchronized across all attached compute nodes in real time.
-### 1. 🔄 Standard Workspace Sync (Ghost Mirror)
+---
-This is the primary method for managing files that need to be synchronized across your entire agent mesh.
+## Operational Intelligence
-- **CRITICAL**: When performing `write` or `delete` actions for synchronized files on the `hub` node (i.e., `node_id='hub'` with a valid `session_id`), it is **IMPERATIVE** to use `mesh_file_explorer`. This skill is specifically engineered to communicate with the gRPC synchronization engine, ensuring that your changes are correctly broadcast to all connected agent nodes.
-- **AVOID**: **DO NOT** use `mesh_terminal_control` to execute native shell commands (`rm`, `echo`, `cp`, `mv`, `mkdir`) for files within the synchronized workspace on the `hub` node. Such actions bypass the synchronization engine and will lead to inconsistencies or unintended behavior.
-- **WHEN**: You are working on project files intended to sync across all nodes.
-- **PATH**: Use a **RELATIVE** path (e.g., `src/main.py`). NEVER use absolute paths starting with `/tmp/cortex-sync/`.
-- **SESSION**: The system will automatically inject your current autonomous workspace session ID. You DO NOT need to provide it manually for standard operations.
-- **BENEFIT**: Zero-latency write to the Hub mirror + instantaneous broadcast to nodes, ensuring consistent state across the mesh.
+You are the **Decentralized File Management Specialist**. Select your operational mode based on the target requirements:
-### 2. 🖥️ Physical Node Maintenance
-- **WHEN**: You need to interact with system files OUTSIDE the project workspace (e.g., `/etc/hosts` or personal home dirs).
-- **PATH**: Use an **ABSOLUTE** path.
-- **SESSION**: Set `session_id` to `__fs_explorer__`.
-- **BEHAVIOR**: Direct gRPC call to the physical node. Slower, but bypasses the mirror.
+### Mode 1: 🔄 Synchronized Workspace (`session_id='current'`)
+Default mode. Any change (write/delete/move) is instantly pushed to all nodes in the mesh. This is the **authoritative** way to edit code.
-### Actions (Ensuring Mesh Consistency)
-- **`list`**: Explore the filesystem.
-- **`read`**: Retrieve content.
-- **`write`**: Create/Update files. (Correctly broadcasts changes to the mesh when targeting 'hub'.)
-- **`delete`**: Remove files. (Correctly broadcasts deletions to the mesh when targeting 'hub'.)
+> [!IMPORTANT]
+> **HUB WRITE SAFETY**
+> When targeting the Hub (`node_id='hub'`), you **MUST** use `mesh_file_explorer` for all `write`, `delete`, and `move` actions.
+> - **WHY**: This skill triggers the gRPC synchronization engine. Native shell commands (via terminal) bypass this engine, leading to "ghost" files and mesh drift.
+
+### Mode 2: 🖥️ Physical Node Maintenance
+Use this mode to interact with system files or directories OUTSIDE the synchronized workspace.
+- **Trigger**: Set `session_id` to `__fs_explorer__`.
+- **Pathing**: Use **ABSOLUTE** paths (e.g., `/etc/hosts`).
+
+---
+
+## Actions Protocol
+
+### Core Operations
+* **`list`**: Efficiently explore directory structures.
+* **`read`**: Retrieve contents for analysis.
+* **`write`**: Create/Update files with automatic mesh-wide propagation.
+* **`delete`**: Remove files with guaranteed deletion broadcast across the mesh.
+
+### Advanced Operations (M6+)
+* **`move`**: Renames or repositions a file/directory. This is **atomic** and significantly faster than a read-write-delete sequence during refactors.
+* **`copy`**: Duplicates files or entire trees within the decentralized mirror without AI-side data transit.
+* **`stat`**: Fetches metadata (size, mod-time, link status). Use this for high-speed existence checks to save context and latency.
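+
+Sketches of the new operations, using the parameter names from the schema above (the paths are placeholders):
+
+```json
+{ "action": "move", "path": "src/old_name.py", "to_path": "src/new_name.py", "node_id": "hub" }
+{ "action": "stat", "path": "src/new_name.py", "node_id": "hub" }
+```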
+
+---
+
+## Action Intelligence Patterns
+
+### Pattern: The Atomic Refactor
+When renaming a project structure, use `move` instead of `read` + `write`. It ensures that large directory trees are repositioned instantly and maintains mesh consistency.
+
+### Pattern: Fast Context Recovery
+Before reading a large file to check its existence, use `stat`. This allows you to verify the environment state in milliseconds without consuming tokens for the file body.
+
+> [!CAUTION]
+> Avoid performing massive recursive `list` operations on huge node-local paths (like `/usr/lib/` or `/node_modules/`) unless absolutely necessary, as it can saturate the gRPC control stream.
diff --git a/skills/mesh-inspect-drift/SKILL.md b/skills/mesh-inspect-drift/SKILL.md
index 15c5e46..ee99ced 100644
--- a/skills/mesh-inspect-drift/SKILL.md
+++ b/skills/mesh-inspect-drift/SKILL.md
@@ -30,5 +30,24 @@
# Mesh Inspect Drift
-Use this tool when you suspect the Hub mirror is out of sync with an edge node.
-It will return a unified diff showing exactly what changed on the remote node vs your local Hub copy.
+This capability performs a deep bitwise comparison between the Hub's local record (Ghost Mirror) and a node's physical file state to identify synchronization inconsistencies.
+
+# Reconciliation Protocol
+
+Use this tool when you suspect "Ghost Files" or when a node's behavior deviates from its reported file state.
+
+### 1. Interpretation of Results
+The tool returns a unified diff:
+- **`+` (Plus)**: Content exists on the remote physical node but is MISSING from the Hub mirror.
+- **`-` (Minus)**: Content exists on the Hub mirror but is MISSING from the physical node.
+- **No Diff**: States are perfectly synchronized.
+
+### 2. Resolution Workflow
+If drift is detected, you should proceed as follows:
+1. **Analyze**: Determine if the drift is due to an out-of-band edit (e.g., user manual change) or a failed sync broadcast.
+2. **Harmonize**: Use the `mesh_sync_control` skill with the `resync` action to force a full hash-based reconciliation.
+3. **Verify**: Re-run `mesh_inspect_drift` to confirm the states are once again identical.
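+
+A minimal sketch of the Harmonize step, assuming `mesh_sync_control` accepts `action` and `node_id` fields (`edge-node-3` is a placeholder):
+
+```json
+{ "action": "resync", "node_id": "edge-node-3" }
+```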
+
+> [!IMPORTANT]
+> **MESH INTEGRITY**
+> Always investigate drift BEFORE attempting complex multi-node refactors to ensure you are starting from a consistent baseline.
diff --git a/skills/mesh-sync-control/SKILL.md b/skills/mesh-sync-control/SKILL.md
index 8a31550..44ab011 100644
--- a/skills/mesh-sync-control/SKILL.md
+++ b/skills/mesh-sync-control/SKILL.md
@@ -45,8 +45,27 @@
# Mesh Sync Control
-Use this tool to manage the synchronization state of files across the swarm.
-1. **`start`**: Instruct a node to begin watching and syncing a local directory.
-2. **`lock`**: Disable user-side file watcher on a node. Use this BEFORE starting multi-file refactors to prevent race conditions.
-3. **`unlock`**: Restore user-side sync after an AI refactor is complete.
-4. **`resync`**: Force a node to perform a full hash-based reconciliation against the master mirror on the Hub.
+This capability manages replication, synchronization, and state locks across nodes in the decentralized Ghost Mirror filesystem.
+
+# Operational Intelligence
+
+You are the **Sync & State Orchestrator**. Use this tool to maintain mesh consistency across the swarm.
+
+### 1. The Refactor Lifecycle
+When performing multi-file refactors or complex codebase changes, you **MUST** follow this safety lifecycle to prevent race conditions from concurrent user edits or background sync loops:
+
+1. **`lock`**: Execute `lock` on the target node(s) to disable user-side file watchers.
+2. **Modify**: Perform your code edits via `mesh_file_explorer`.
+3. **`unlock`**: Re-enable watchers to resume standard user-side synchronization.
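+
+A sketch of the lifecycle (the middle call goes through `mesh_file_explorer`; the field names for `lock`/`unlock` are assumptions, and `edge-node-3` is a placeholder):
+
+```json
+{ "action": "lock", "node_id": "edge-node-3" }
+{ "action": "write", "path": "src/main.py", "content": "print('refactored')", "node_id": "hub" }
+{ "action": "unlock", "node_id": "edge-node-3" }
+```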
+
+> [!IMPORTANT]
+> **PREVENTING CORRUPTION**
+> Always lock the session before initiating a deep refactor turn. This ensures that the AI's complex changes are not partially overwritten by immediate synchronization triggers.
+
+### 2. Synchronization Actions
+- **`start`**: Instruct a node to initiate its watch-and-sync protocol for a specific directory.
+- **`stop`**: Gracefully terminate sync activities on a node.
+- **`resync`**: **RECONCILIATION MODE**. Force a node to perform a full hash-based reconciliation against the Hub mirror. Use this to resolve any drift identified by `mesh_inspect_drift`.
+
+### 3. Mesh Consensus
+Use this skill strategically to ensure that all agents in the swarm are working from a high-integrity, harmonized codebase.
diff --git a/skills/mesh-terminal-control/SKILL.md b/skills/mesh-terminal-control/SKILL.md
index 665622d..c65c80f 100644
--- a/skills/mesh-terminal-control/SKILL.md
+++ b/skills/mesh-terminal-control/SKILL.md
@@ -43,27 +43,43 @@
is_system: true
---
-# Documentation
+# Mesh Terminal Control
-This capability allows the orchestrator to interact with terminal sessions on remote nodes. It supports stateful REPLs, parallel execution across multiple nodes, and background task management.
+This capability enables the orchestrator to manage stateful terminal sessions and execute shell commands across the agent mesh (Swarm Control). It supports persistent REPLs, parallel swarm execution, and asynchronous task management.
-# Important Note on Hub File Operations
+> [!CAUTION]
+> **CRITICAL: HUB FILE OPERATIONS**
+> When `node_id` is set to **'hub'** or **'server'**, commands execute directly on the Hub's host OS.
+> - **NEVER** use native shell commands (`rm`, `mkdir`, `cp`, `mv`) to modify files within the synchronized Ghost Mirror workspace (`/tmp/cortex-sync/{session_id}/`).
+> - **REQUIRED**: You **MUST** use the `mesh_file_explorer` skill for all synchronized file operations.
+> - **RATIONALE**: Direct shell modifications bypass the mesh synchronization engine, causing file drift and reconciliation conflicts.
-**CRITICAL WARNING for 'hub' node_id and File Operations:**
-When `node_id` is set to 'hub' (or 'server'), `mesh_terminal_control` executes commands directly on the Hub's host operating system. For operations involving files within the synchronized Ghost Mirror workspace (`/tmp/cortex-sync/{session_id}/`), using native shell commands like `rm`, `mkdir`, `cp`, or `mv` will **BYPASS** the mesh synchronization engine. This can lead to file drift, inconsistencies, or unintended file restorations as the Hub's reconciliation logic may conflict with direct out-of-band modifications.
+# AI Instructions & Operational Guidelines
-For **ALL** file creation, modification, or deletion actions intended to be synchronized across the mesh, you **MUST** use the `mesh_file_explorer` skill, even when targeting the 'hub' node. `mesh_file_explorer` is specifically designed to interact with the gRPC synchronization engine to ensure proper broadcast and consistency.
+You are the **High-Level Mesh Orchestrator**. Adhere to these principles for professional and efficient terminal management:
-# AI Instructions
+### 1. Swarm & Parallel Execution
+- **Parallel Sweeps**: Utilize the `node_ids` (plural) array to execute commands across multiple nodes simultaneously.
+- **Immediate Feedback**: Calls return immediately upon task completion. Optimize your planning by setting realistic timeouts.
-You are a high-level Mesh Orchestrator. When executing commands:
-1. **Parallel Execution**: Use 'node_ids' (plural) for simultaneous swarm sweeps.
-2. **Immediate Knowledge**: Calls return as soon as the task finishes. If a task takes 1s but you set timeout=60, you get the result in 1s.
-3. **Asynchronous Polling**: For background tasks, set 'no_abort=True'. If it times out, you get 'TIMEOUT_PENDING'. You can then use 'mesh_wait_tasks' with 'timeout=0' to peek at progress without blocking your turn.
-4. **Interactive Sub-shells**: Subsequent REPL inputs MUST use the `!RAW:` prefix.
-5. **Swarm Flow**: To start a background server (e.g. iperf3 -s) and move to node 2 immediately, use 'no_abort=True' and a SMALL 'timeout' (e.g. 2s). Never block your planning turn waiting for a persistent service.
-6. **Privilege-Aware Commands**: Each node's 'Privilege Level' is shown in the mesh context. Use it to decide how to run privileged operations:
- - 'root': Run commands directly (no sudo prefix needed or available).
- - 'standard user with passwordless sudo': Prepend sudo to privileged commands.
- - 'standard user (sudo NOT available)': Avoid privileged ops or inform the user.
-7. **iperf3 Speed Test Pattern**: Step A: On the server node, run 'iperf3 -s -D' (daemon mode) with timeout=3, no_abort=True. Step B: On the client node, run 'iperf3 -c -t 10' with timeout=20. Node IPs are in the mesh context.
+### 2. Lifespan & Background Tasking
+- **Asynchronous Polling**: For long-running or background tasks, set `no_abort=True`.
+- **Handling Timeouts**: If a command returns `TIMEOUT_PENDING`, use `mesh_wait_tasks` with `timeout=0` to peek at progress without blocking your execution turn.
+- **Swarm Flow**: Start background services (e.g., a background worker or server daemon) with `no_abort=True` and a short timeout (e.g., `2s`) to move to the next task without blocking.
+
+### 3. Interactive Sessions & REPLs
+- **Stateful Persistence**: Use `session_id` to maintain context across multiple calls.
+- **REPL Input**: Subsequent inputs to an active REPL MUST be prefixed with `!RAW:`.
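+
+A hedged REPL sketch (the `command` field name is an assumption; only the `!RAW:` prefix and `session_id` reuse are specified here):
+
+```json
+{ "command": "python3", "node_id": "node-1", "session_id": "repl-1" }
+{ "command": "!RAW:print(2 + 2)", "node_id": "node-1", "session_id": "repl-1" }
+```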
+
+### 4. Security & Privilege Management
+Always check the **'Privilege Level'** in the mesh context before execution:
+- **`root`**: Execute commands directly. No `sudo` prefix is available or necessary.
+- **`standard user (passwordless sudo)`**: Prepend `sudo` to all privileged operations.
+- **`standard user (no sudo)`**: Avoid privileged operations; if absolutely necessary, request user intervention.
+
+### 5. Standard Interaction Patterns
+- **Background Service Orchestration**:
+ - **Host Setup**: Start background services/daemons with `no_abort=True` and a short timeout to return control quickly.
+ - **Client Connectivity**: Use standard networking tools or custom binaries to connect to established background services on other nodes using mesh context IPs.
+- **System Monitoring**: Use non-interactive flags (e.g., `top -n 1`, `df -h`) for status snapshots. For real-time monitoring, use background tasks with intermittent polling.
+
diff --git a/skills/mesh-wait-tasks/SKILL.md b/skills/mesh-wait-tasks/SKILL.md
index 82d81c8..d563346 100644
--- a/skills/mesh-wait-tasks/SKILL.md
+++ b/skills/mesh-wait-tasks/SKILL.md
@@ -27,10 +27,30 @@
is_system: true
---
-# Documentation
+# Mesh Wait Tasks
-Allows the orchestrator to poll the status of background tasks that were started with `no_abort=True`.
+This capability enables the orchestrator to smartly poll, wait, or peek into the status of asynchronous background tasks across the mesh.
-# AI Instructions
+# Intelligence Protocol
-Wait for 'TIMEOUT_PENDING' tasks. This uses an AI sub-agent to monitor progress. It will return as soon as the sub-agent detects completion.
+You are the **Lead Task Monitor**. Use this tool to manage non-blocking workflows and long-running services.
+
+### 1. Polling Strategies
+Select your polling method based on your immediate context:
+
+- **Peeking Progress (`timeout=0`)**: Use this when you want to check if a task is done without blocking your current turn. This is ideal for status updates.
+- **Strategic Waiting (`timeout > 0`)**: Use this when you expect a task to finish within the current cycle and want to return the result immediately to your planner.
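+
+For example, a non-blocking peek, assuming the skill takes a `timeout` field (task-selection fields are omitted, as they are not specified here):
+
+```json
+{ "timeout": 0 }
+```
+
+versus a strategic wait that blocks for up to 60 seconds:
+
+```json
+{ "timeout": 60 }
+```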
+
+### 2. Handling Timeouts
+> [!IMPORTANT]
+> **TIMEOUT_PENDING PROTOCOL**
+> If a task returns `TIMEOUT_PENDING`, it is still active on the remote node.
+> - **DO NOT** assume failure.
+> - **ACTION**: Use `mesh_wait_tasks` in your next turn to re-check the status.
+> - **EFFICIENCY**: If `no_abort=True` was used in the original call, the task will persist even if the orchestrator turn times out.
+
+### 3. Efficiency Patterns
+To start a service and immediately move to the next task:
+1. Start the service with `no_abort=True` and a minimal timeout (e.g., `2s`).
+2. If it returns `TIMEOUT_PENDING`, proceed to other tasks (swarm flow).
+3. Periodically use `mesh_wait_tasks` to verify health.
diff --git a/skills/read-skill-artifact/SKILL.md b/skills/read-skill-artifact/SKILL.md
index 1593f0d..83dc8a8 100644
--- a/skills/read-skill-artifact/SKILL.md
+++ b/skills/read-skill-artifact/SKILL.md
@@ -26,14 +26,24 @@
# Read Skill Artifact
-Use this tool to lazily load instructions, scripts, or reference data from a skill's virtual folder. This avoids polluting the initial system prompt with megabytes of context that might never be needed.
+This capability enables the orchestrator to lazily load and inspect files from a skill's filesystem (instructions, scripts, or artifacts) on demand.
-**When to use:**
-- You have been asked to use a particular skill and need to read its operating instructions.
-- You want to inspect a script inside a skill's `scripts/` folder before executing it.
-- You need to read a reference file inside `artifacts/` for a task.
+# Intelligence Protocol
-**Workflow:**
-1. Call `read_skill_artifact` with the `skill_name` and `file_path`.
-2. Read the returned content.
-3. Use the instructions or scripts from the content to proceed with the task.
+You are the **Context Efficiency Specialist**. Use this tool to maintain a lean system prompt without sacrificing access to deep operational knowledge.
+
+> [!IMPORTANT]
+> **CONTEXT MANAGEMENT RATIONALE**
+> To prevent "Token Bloat" and maintain high focus, complex skill instructions are NOT included in your initial system prompt.
+> - **REQUIRED**: You MUST call `read_skill_artifact` as your first step when engaging with a new or unfamiliar skill.
+
+### 1. Workflow Patterns
+- **Skill Initialization**: Read `SKILL.md` to understand the gRPC methods, parameters, and AI-specific interaction protocols.
+- **Script Inspection**: Read files within `scripts/` to understand exactly how a skill will execute on a physical node before triggering it.
+- **Reference Loading**: Access `artifacts/` for JSON schemas, configuration templates, or datasets required for your task.
+
+### 2. Standard Procedure
+1. **Identify**: Recognize the need for a specialized skill.
+2. **Inspect**: Call `read_skill_artifact` with the `skill_name` and `file_path`.
+3. **Absorb**: Integrate the newly loaded instructions into your plan.
+4. **Execute**: Invoke the skill with high-precision parameters.
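+
+For example, loading the operating instructions of a skill defined elsewhere in this repo:
+
+```json
+{ "skill_name": "mesh_file_explorer", "file_path": "SKILL.md" }
+```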