Two MCP servers, one install pattern. Pick the surface you need, drop the snippet into your tool’s MCP config, restart, and run the verify prompt at the bottom. The Documentation MCP is hosted at https://docs.amps.ai/mcp. Connect via URL; nothing to install. A complete documentation index is also available at https://docs.amps.ai/llms.txt for discovering all available pages.
The Execution MCP is in development and will publish as @amps-ai/mcp on npm. The snippets below cover both, with the Execution MCP marked accordingly.
Documentation MCP
Claude Desktop
Edit the Claude Desktop config file. The location depends on the operating system. macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
Add the amps-docs entry under mcpServers:
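A sketch of the entry, assuming Claude Desktop reaches the hosted endpoint through the mcp-remote stdio bridge (the transport support in your Claude Desktop version may differ):

```json
{
  "mcpServers": {
    "amps-docs": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://docs.amps.ai/mcp"]
    }
  }
}
```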
If mcpServers already has other entries, add amps-docs alongside them. Do not replace the block.
Quit Claude Desktop fully (right-click the menubar icon, Quit) and reopen it. A hammer icon appears in the chat input bar. Click it to confirm both Documentation MCP tools are listed: search_amps_ai_documentation and query_docs_filesystem_amps_ai_documentation.
Cursor
Open Cursor settings: Cmd + , (macOS) or Ctrl + , (Windows / Linux), then Features > MCP. Click Add new MCP server. Or edit ~/.cursor/mcp.json directly:
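A sketch of the mcp.json entry, assuming your Cursor version accepts URL-based servers directly (older builds only accept command-based entries):

```json
{
  "mcpServers": {
    "amps-docs": {
      "url": "https://docs.amps.ai/mcp"
    }
  }
}
```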
Windsurf
Edit the Windsurf MCP config at ~/.codeium/windsurf/mcp_config.json:
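A sketch of the entry, assuming Windsurf connects to the hosted endpoint via the mcp-remote stdio bridge (newer Windsurf builds may also accept a direct server URL):

```json
{
  "mcpServers": {
    "amps-docs": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://docs.amps.ai/mcp"]
    }
  }
}
```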
Continue (VS Code extension)
Continue supports MCP through its experimental.modelContextProtocolServers config. Edit ~/.continue/config.json and add the experimental block:
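A sketch of the experimental block; Continue’s MCP config shape has changed across versions, and the "sse" transport type for a hosted endpoint is an assumption:

```json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "sse",
          "url": "https://docs.amps.ai/mcp"
        }
      }
    ]
  }
}
```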
Custom MCP clients
The Documentation MCP server is a standard JSON-RPC MCP endpoint over HTTP / streamable HTTP. Any client built against the Model Context Protocol specification can connect. The base URL is https://docs.amps.ai/mcp. Initialise with a standard initialize call, then tools/list to enumerate the two tools, or tools/call to invoke them.
A minimal probe with curl:
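A minimal sketch of the initialize handshake, assuming the endpoint accepts plain HTTP POST for JSON-RPC (streamable HTTP servers typically do); the protocolVersion value and clientInfo names are assumptions:

```shell
# Probe the hosted Documentation MCP with a JSON-RPC initialize call.
curl -s https://docs.amps.ai/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": { "name": "curl-probe", "version": "0.0.0" }
    }
  }'
```

A successful response echoes the server’s capabilities and name in a JSON-RPC result object.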
The same JSON-RPC shape works for tools/list and tools/call. See the tools reference for working examples.
Execution MCP
Coming soon. The Execution MCP is in development. The snippets below show the planned shape; the @amps-ai/mcp package is not yet published.
Claude Desktop (planned)
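A sketch of the planned entry; the server name amps, the AMPS_API_KEY variable name, and the placeholder values are assumptions until the package ships:

```json
{
  "mcpServers": {
    "amps": {
      "command": "npx",
      "args": ["-y", "@amps-ai/mcp"],
      "env": {
        "AMPS_API_KEY": "<your integrator API key>",
        "AMPS_USER_ID": "<end-user identifier>"
      }
    }
  }
}
```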
AMPS_USER_ID identifies which of your integrator account’s end-users the agent is acting on behalf of. For multi-tenant deployments, the user identity travels in a per-request header instead.
Cursor, Windsurf, Continue (planned)
Same shape: command: "npx", args: ["-y", "@amps-ai/mcp"], and env with the API key and user identifier. The exact config path in each tool is the same one used for the Documentation MCP.
Hosted HTTP (planned)
For Vercel AI SDK and other hosted-agent runtimes, run the Execution MCP as a long-lived HTTP service. One process serves many concurrent users. Per-request x-user-id headers carry user identity from the integrator’s app to the MCP server.
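A sketch of a per-request call in this deployment shape. The service URL is hypothetical; only the x-user-id header is from the planned design:

```shell
# Hypothetical hosted Execution MCP deployment; user identity travels
# per request in the x-user-id header rather than in server config.
curl -s https://your-amps-mcp.example.com/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -H 'x-user-id: user-123' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```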
Verify it works
Paste this prompt into your AI tool and watch it call the Documentation MCP server:

What modes does the Amps API support for batteries, and which one optimises for self-consumption?

A connected agent calls search_amps_ai_documentation for “battery modes”, reads the canonical actions page, and answers with the canonical names: charge, discharge, idle, auto.balanced, auto.reserve, auto.export. auto.balanced is the self-consumption answer.
If the agent answers without calling the tools, the server is not connected. Check the config file path, restart the client, and confirm the hammer icon (or the equivalent in your tool) is present.
What next
Tools reference
Input shapes and call patterns for every MCP tool.
Examples
Concrete agent workflows that exercise the tools.
Documentation MCP
What the docs MCP is and why it matters.
Battery cheat sheet
Canonical Amps vocabulary your agent will encounter in responses.