A ready-to-run example is included at the end of this page.
ACPAgent lets you use any Agent Client Protocol server as the backend for an OpenHands conversation. Instead of calling an LLM directly, the agent spawns an ACP server subprocess and communicates with it over JSON-RPC. The server manages its own LLM, tools, and execution — your code just sends messages and collects responses.

Basic Usage

from openhands.sdk.agent import ACPAgent
from openhands.sdk.conversation import Conversation

# Point at any ACP-compatible server
agent = ACPAgent(acp_command=["npx", "-y", "@agentclientprotocol/claude-agent-acp"])

conversation = Conversation(agent=agent, workspace="./my-project")
conversation.send_message("Explain the architecture of this project.")
conversation.run()

agent.close()
The acp_command is the shell command used to spawn the server process. The SDK communicates with it over stdin/stdout JSON-RPC.
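As a rough illustration of the transport (not the SDK's actual wire code), a JSON-RPC request sent to a stdio-based server is a single serialized message written to the subprocess's stdin. The method name and params below are illustrative only:

```python
import json

def make_jsonrpc_request(request_id: int, method: str, params: dict) -> bytes:
    """Serialize a JSON-RPC 2.0 request as one newline-delimited line,
    suitable for a stdin/stdout transport."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")

# Hypothetical prompt request (method and param names are illustrative)
line = make_jsonrpc_request(1, "session/prompt", {"prompt": [{"type": "text", "text": "hi"}]})
print(json.loads(line)["method"])
```

The server's responses come back the same way on stdout, matched to requests by `id`.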
Key difference from standard agents: With ACPAgent, you don’t need an LLM_API_KEY in your code. The ACP server handles its own LLM authentication and API calls. This is delegation — your code sends messages to the ACP server, which manages all LLM interactions internally.

Prompt Context (AgentContext)

ACPAgent supports agent_context for prompt-only extensions — skills, repository context, current datetime, and system/user message suffixes are appended to the user message before it reaches the ACP server. This lets you inject the same skill catalog and repo-specific guidance that the built-in Agent receives, without interfering with the server’s own tools or execution model.
from openhands.sdk.agent import ACPAgent
from openhands.sdk import AgentContext
from openhands.sdk.context import Skill

context = AgentContext(
    skills=[
        Skill(
            name="code-style",
            content="Always use type hints in Python.",
            trigger=None,  # always active
        ),
    ],
    system_message_suffix="You are reviewing a Python project.",
)

agent = ACPAgent(
    acp_command=["npx", "-y", "@agentclientprotocol/claude-agent-acp"],
    agent_context=context,
)
The prompt assembly works as follows:
  1. The conversation layer builds the user MessageEvent, including any per-turn extended_content (e.g. triggered-skill injections).
  2. ACPAgent._build_acp_prompt() collects all text blocks from the message and appends the rendered AgentContext prompt (datetime, repo context, available skills, system suffix) via to_acp_prompt_context().
  3. The combined text is sent as a single user message to the ACP server.
user_message_suffix is an ACP-compatible field, but it is not duplicated in to_acp_prompt_context() because the conversation layer already applies it through MessageEvent.to_llm_message().

Compatible AgentContext Fields

Each AgentContext field is tagged as ACP-compatible or not. At initialization, validate_acp_compatibility() rejects any context that uses unsupported fields.
Field                  ACP Compatible  Notes
skills                 Yes             Skill catalog and trigger-based injections
system_message_suffix  Yes             Appended to the prompt context
user_message_suffix    Yes             Applied by the conversation layer
current_datetime       Yes             Included in the rendered prompt
load_user_skills       Yes             Load skills from ~/.openhands/skills/
load_public_skills     Yes             Load skills from the public extensions repo
marketplace_path       Yes             Filter public skills via marketplace JSON
secrets                No              ACP subprocesses do not use OpenHands secret injection
Passing secrets (or any future field marked acp_compatible: False) raises NotImplementedError.
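A minimal sketch of how such a compatibility gate can work. The field names match the table above, but the tagging mechanism is illustrative, not the SDK's actual implementation:

```python
# Illustrative compatibility tags, mirroring the table above
ACP_COMPATIBLE = {
    "skills": True,
    "system_message_suffix": True,
    "user_message_suffix": True,
    "current_datetime": True,
    "load_user_skills": True,
    "load_public_skills": True,
    "marketplace_path": True,
    "secrets": False,
}

def validate_acp_compatibility(used_fields: dict) -> None:
    """Reject any field that is set but tagged as not ACP-compatible."""
    for name, value in used_fields.items():
        if value is not None and not ACP_COMPATIBLE.get(name, False):
            raise NotImplementedError(f"AgentContext.{name} is not supported by ACPAgent")

validate_acp_compatibility({"skills": [], "secrets": None})  # OK: secrets is unset
try:
    validate_acp_compatibility({"secrets": {"TOKEN": "x"}})
except NotImplementedError as e:
    print(e)
```

Failing fast at initialization means an incompatible context is caught before the server subprocess is ever spawned.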

What ACPAgent Does Not Support

Because the ACP server manages its own tools, context window, and execution, these AgentBase features are not available on ACPAgent:
  • tools / include_default_tools — the server has its own tools
  • mcp_config — configure MCP on the server side
  • condenser — the server manages its own context window
  • critic — the server manages its own evaluation
Passing any of these raises NotImplementedError at initialization.

ACPAgent with RemoteConversation

ACPAgent also works with remote agent-server deployments such as APIRemoteWorkspace, DockerWorkspace, and other RemoteWorkspace-backed setups. When RemoteConversation detects an ACPAgent, it automatically uses the ACP-capable conversation routes for:
  • conversation creation
  • conversation info reads
  • conversation counting
The rest of the lifecycle, including events, runs, pauses, and secrets, continues to use the standard agent-server routes. This keeps the existing remote execution flow intact while isolating the schema-sensitive ACP contract under /api/acp/conversations.
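The route split can be sketched as a small dispatch rule. The `/api/acp/conversations` prefix comes from the docs above; the standard `/api/conversations` path and the helper itself are assumptions for illustration:

```python
# Operations the docs say are routed through the ACP-capable endpoints
ACP_ROUTED_OPS = {"create", "info", "count"}

def conversation_route(op: str, is_acp: bool, conversation_id: str = "") -> str:
    """ACP-backed conversations use /api/acp/conversations for creation,
    info reads, and counting; all other lifecycle operations stay on the
    standard routes (path assumed here as /api/conversations)."""
    base = "/api/acp/conversations" if (is_acp and op in ACP_ROUTED_OPS) else "/api/conversations"
    return f"{base}/{conversation_id}" if conversation_id else base

print(conversation_route("create", is_acp=True))                          # ACP route
print(conversation_route("events", is_acp=True, conversation_id="abc"))   # standard route
```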
When attaching to an existing conversation by conversation_id, use ACPAgent if the conversation is ACP-backed. Attaching a regular Agent to an ACP conversation ID is rejected explicitly, to avoid mixing the standard and ACP conversation contracts.

How It Works

  • Subprocess delegation: ACPAgent spawns the ACP server and communicates via JSON-RPC over stdin/stdout
  • Server-managed execution: The ACP server handles its own LLM calls, tools, and context — your code just sends messages
  • Auto-approval: Permission requests from the server are automatically granted, so ensure you trust the ACP server you’re running
  • Metrics collection: Token usage and costs from the server are captured into the agent’s LLM.metrics

Configuration

Server Command and Arguments

agent = ACPAgent(
    acp_command=["npx", "-y", "@agentclientprotocol/claude-agent-acp"],
    acp_args=["--profile", "my-profile"],      # extra CLI args
    acp_env={"ANTHROPIC_API_KEY": "sk-..."},   # extra env vars
)
Parameter    Description
acp_command  Command to start the ACP server (required)
acp_args     Additional arguments appended to the command
acp_env      Additional environment variables for the server process

Authentication

When the ACP server advertises authentication methods, ACPAgent automatically selects a credential source:
  1. ChatGPT subscription login — If the server supports a chatgpt auth method and ~/.codex/auth.json exists (created by LLM.subscription_login()), this is selected first. This enables ACP-backed workflows to use device-code login credentials without an explicit API key.
  2. API key environment variables — Falls back to checking for ANTHROPIC_API_KEY, OPENAI_API_KEY, or GEMINI_API_KEY depending on which auth methods the server supports.
If no supported credential source is found, the server may proceed without authentication (some servers don’t require it).

Metrics

Token usage and cost data are automatically captured from the ACP server’s responses. You can inspect them through the standard LLM.metrics interface:
metrics = agent.llm.metrics
print(f"Total cost: ${metrics.accumulated_cost:.6f}")

for usage in metrics.token_usages:
    print(f"  prompt={usage.prompt_tokens}  completion={usage.completion_tokens}")
Usage data comes from two ACP protocol sources:
  • PromptResponse.usage — per-turn token counts (input, output, cached, reasoning tokens)
  • UsageUpdate notifications — cumulative session cost and context window size
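A simplified model of how the two sources combine. The dataclasses below are illustrative stand-ins, not the SDK's actual `Metrics` classes; the key distinction is that per-turn counts are appended while cumulative cost is overwritten:

```python
from dataclasses import dataclass, field

@dataclass
class TokenUsage:
    prompt_tokens: int
    completion_tokens: int

@dataclass
class Metrics:
    accumulated_cost: float = 0.0
    token_usages: list[TokenUsage] = field(default_factory=list)

    def record_prompt_response(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Per-turn counts arrive via PromptResponse.usage
        self.token_usages.append(TokenUsage(prompt_tokens, completion_tokens))

    def record_usage_update(self, cumulative_cost: float) -> None:
        # UsageUpdate notifications carry the cumulative session cost,
        # so overwrite rather than add
        self.accumulated_cost = cumulative_cost

m = Metrics()
m.record_prompt_response(1200, 340)
m.record_usage_update(0.0042)
print(f"${m.accumulated_cost:.4f}")
```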

Cleanup

Always call agent.close() when you are done to terminate the ACP server subprocess. A try/finally block is recommended:
agent = ACPAgent(acp_command=["npx", "-y", "@agentclientprotocol/claude-agent-acp"])
try:
    conversation = Conversation(agent=agent, workspace=".")
    conversation.send_message("Hello!")
    conversation.run()
finally:
    agent.close()

Ready-to-run Example

This example is available on GitHub: examples/01_standalone_sdk/40_acp_agent_example.py
"""Example: Using ACPAgent with Claude Code ACP server.

This example shows how to use an ACP-compatible server (claude-agent-acp)
as the agent backend instead of direct LLM calls.  It also demonstrates
``ask_agent()`` — a stateless side-question that forks the ACP session
and leaves the main conversation untouched.

Prerequisites:
    - Node.js / npx available
    - ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY set (can point to LiteLLM proxy)

Usage:
    uv run python examples/01_standalone_sdk/40_acp_agent_example.py
"""

import os

from openhands.sdk.agent import ACPAgent
from openhands.sdk.conversation import Conversation


agent = ACPAgent(acp_command=["npx", "-y", "@agentclientprotocol/claude-agent-acp"])

try:
    cwd = os.getcwd()
    conversation = Conversation(agent=agent, workspace=cwd)

    # --- Main conversation turn ---
    conversation.send_message(
        "List the Python source files under openhands-sdk/openhands/sdk/agent/, "
        "then read the __init__.py and summarize what agent classes are exported."
    )
    conversation.run()

    # --- ask_agent: stateless side-question via fork_session ---
    print("\n--- ask_agent ---")
    response = conversation.ask_agent(
        "Based on what you just saw, which agent class is the newest addition?"
    )
    print(f"ask_agent response: {response}")
    # Report cost (ACP server reports usage via session_update notifications)
    cost = agent.llm.metrics.accumulated_cost
    print(f"EXAMPLE_COST: {cost:.4f}")
finally:
    # Clean up the ACP server subprocess
    agent.close()

cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
print(f"\nEXAMPLE_COST: {cost}")
print("Done!")
This example uses ANTHROPIC_BASE_URL and ANTHROPIC_API_KEY environment variables to configure the Claude Code ACP server.
Running the Example
# Set up environment variables (can point to LiteLLM proxy)
export ANTHROPIC_BASE_URL="https://your-proxy.example.com"
export ANTHROPIC_API_KEY="your-api-key"
cd software-agent-sdk
uv run python examples/01_standalone_sdk/40_acp_agent_example.py

Remote Runtime Example

This example shows how to run an ACPAgent in a remote sandboxed environment via the Runtime API, using APIRemoteWorkspace:
examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py
"""Example: ACPAgent with Remote Runtime via API.

This example demonstrates running an ACPAgent (Claude Code via ACP protocol)
in a remote sandboxed environment via Runtime API. It follows the same pattern
as 04_convo_with_api_sandboxed_server.py but uses ACPAgent instead of the
default LLM-based Agent.

Usage:
  uv run examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py

Requirements:
  - LLM_BASE_URL: LiteLLM proxy URL (routes Claude Code requests)
  - LLM_API_KEY: LiteLLM virtual API key
  - RUNTIME_API_KEY: API key for runtime API access
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.sdk.agent import ACPAgent
from openhands.workspace import APIRemoteWorkspace


logger = get_logger(__name__)


# ACP agents (Claude Code) route through LiteLLM proxy
llm_base_url = os.getenv("LLM_BASE_URL")
llm_api_key = os.getenv("LLM_API_KEY")
assert llm_base_url and llm_api_key, "LLM_BASE_URL and LLM_API_KEY required"

# Set ANTHROPIC_* vars so Claude Code routes through LiteLLM
os.environ["ANTHROPIC_BASE_URL"] = llm_base_url
os.environ["ANTHROPIC_API_KEY"] = llm_api_key

runtime_api_key = os.getenv("RUNTIME_API_KEY")
assert runtime_api_key, "RUNTIME_API_KEY required"

# If GITHUB_SHA is set (e.g. running in CI of a PR), use that to ensure consistency
# Otherwise, use the latest image from main
server_image_sha = os.getenv("GITHUB_SHA") or "main"
server_image = f"ghcr.io/openhands/agent-server:{server_image_sha[:7]}-python-amd64"
logger.info(f"Using server image: {server_image}")

with APIRemoteWorkspace(
    runtime_api_url=os.getenv("RUNTIME_API_URL", "https://runtime.eval.all-hands.dev"),
    runtime_api_key=runtime_api_key,
    server_image=server_image,
    image_pull_policy="Always",
    target_type="binary",  # CI builds binary target images
    forward_env=["ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"],
) as workspace:
    agent = ACPAgent(
        acp_command=["claude-agent-acp"],  # Pre-installed in Docker image
    )

    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    try:
        conversation.send_message(
            "List the files in /workspace and describe what you see."
        )
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        # Report cost
        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost:.4f}")
    finally:
        conversation.close()
Running the Example
export LLM_BASE_URL="https://your-litellm-proxy.example.com"
export LLM_API_KEY="your-litellm-api-key"
export RUNTIME_API_KEY="your-runtime-api-key"
export RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
cd software-agent-sdk
uv run python examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py
On the agent-server side, the ACP-capable REST surface lives under /api/acp/conversations, including POST, GET, search, batch get, and count.

Next Steps