Introducing /agents — The robots.txt for AI Agents
We built the first agent-native platform, starting with a /agents endpoint that gives Claude Code, Cursor, and any other AI assistant instant instructions for browser automation.
The web has a 30-year history of machine-readable contracts
robots.txt told crawlers what not to crawl. sitemap.xml told them what to index. llms.txt tells LLMs what content means. Today we add the next entry in that lineage: /agents, which tells AI agents how to USE you as a tool.
We ship solaceagi.com/agents today — a structured endpoint that serves the right integration format for any AI assistant. Claude Code gets a CLAUDE.md snippet. Cursor gets .cursorrules. GitHub Copilot gets copilot-instructions.md. Every format is downloadable in one curl command.
One line. Any project. Immediate capability.
The one-liner is the whole onboarding: curl solaceagi.com/agents/claude.md >> CLAUDE.md. That command adds Solace browser automation to your Claude Code project. Navigate, click, fill forms, take screenshots — all from within your next coding session. No signup. No SDK. No configuration.
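The Claude Code command is the one from the announcement; the other two lines are hypothetical equivalents for the remaining formats, with paths inferred from the format names above rather than confirmed endpoint paths:

```shell
# Claude Code: append the Solace snippet to your project's CLAUDE.md
curl solaceagi.com/agents/claude.md >> CLAUDE.md

# Assumed paths for the other supported formats -- illustrative, not documented:
curl solaceagi.com/agents/cursorrules >> .cursorrules
curl solaceagi.com/agents/copilot-instructions.md >> .github/copilot-instructions.md
```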
We tested it this session. Four steps: check status, navigate, evaluate, screenshot. All four passed on the first try. That is the bar for agent-ready software.
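As a sketch, the four-step session might look like the commands below. The base path, endpoint names, and request parameters are all illustrative assumptions, not the documented Solace API:

```shell
BASE=solaceagi.com/api   # hypothetical base path -- an assumption for illustration

curl "$BASE/status"                                                  # 1. check status
curl -X POST "$BASE/navigate" -d '{"url": "https://example.com"}'    # 2. navigate
curl -X POST "$BASE/evaluate" -d '{"script": "document.title"}'      # 3. evaluate
curl -X POST "$BASE/screenshot" -o page.png                          # 4. screenshot
```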
The recipe flywheel makes agents smarter over time
Here is the insight Andrej Karpathy articulated best: LLMs are task machines limited by context. Give an agent structured context and it performs 10x better. The /agents endpoint is inference-time training data for tool use.
But Solace goes further. When your agent completes a task, it becomes a recipe. The next time your agent — or anyone's agent — tries the same task, it replays the recipe with no LLM tokens consumed. Agents using Solace get cheaper and more capable over time.
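The flywheel reduces to a cache-or-replay loop. Here is a minimal self-contained sketch, assuming a recipe is just a recorded list of deterministic steps: the first run "plans" (a stand-in for the LLM-driven exploration) and records the steps; every later run replays the recording without planning. The directory name, step names, and planning stub are illustrative, not Solace internals:

```shell
#!/bin/sh
# Replay a cached recipe if one exists; otherwise "plan" the task
# (standing in for the LLM-driven first run) and record the steps.
RECIPE_DIR=".recipes"
mkdir -p "$RECIPE_DIR"

run_task() {
  task="$1"
  recipe="$RECIPE_DIR/$task.recipe"
  if [ -f "$recipe" ]; then
    echo "replaying cached recipe for '$task' (no LLM tokens)"
  else
    echo "planning '$task' with the LLM (first run only)"
    # Illustrative steps; a real recipe would be recorded from the agent's actions.
    printf 'navigate\nclick\nscreenshot\n' > "$recipe"
  fi
  # Replay each recorded step.
  while read -r step; do
    echo "step: $step"
  done < "$recipe"
}

run_task "login-flow"   # first run: plans and records the recipe
run_task "login-flow"   # second run: pure replay, no planning
```

The second call never touches the planning branch, which is the whole economic argument: the marginal cost of a repeated task trends toward the cost of replaying its steps.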