The Solace Manifesto
Building Agents You Can Trust
We are at a crossroads.
Today, most AI automation platforms hide what the agent is doing. They deploy heuristics and hope they work. They fail silently and hope you don’t notice. They spend your tokens like it doesn’t matter. They ask for full access and blame you when something breaks.
This is not acceptable.
We Believe In
Every action your agent takes must be logged, timestamped, and auditable. You must be able to see what happened, why it happened, and who authorized it. “The agent did it” is not an explanation. Transparency is not optional. It is load-bearing.
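The logging discipline above can be sketched in a few lines. This is an illustrative example only, not Solace's actual schema or API: the field names (`actor`, `authorized_by`, `reason`) are assumptions chosen to mirror the three questions in the paragraph (what happened, why, and who authorized it).

```python
import datetime
import json

def log_action(log, actor, action, authorized_by, reason):
    """Append a timestamped, auditable record of an agent action.
    Field names are illustrative, not Solace's real schema."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,                   # which agent acted
        "action": action,                 # what happened
        "authorized_by": authorized_by,   # who approved it
        "reason": reason,                 # why it happened
    }
    log.append(entry)
    return entry

audit_log = []
log_action(audit_log, "agent-7", "send_email", "alice", "weekly report requested")
print(json.dumps(audit_log[0], indent=2))
```

Every entry answers "what, why, who" directly, so "the agent did it" never has to stand in for an explanation.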
Real autonomy means a human reviews before execution. Your agent previews the action, you approve it, then it executes. This is not slow. It’s safety. The agent learns from your feedback. Next time, it gets it closer to right. This is genuine autonomy.
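The preview-approve-execute loop described above can be expressed as a small control-flow sketch. All function names here are hypothetical illustrations of the pattern, not Solace's interface; the point is that execution is structurally impossible without an approval step.

```python
def run_with_approval(action, preview, approve, execute):
    """Human-in-the-loop execution: preview first, execute only on approval.
    A minimal sketch; names are illustrative, not a real API."""
    plan = preview(action)          # agent shows what it intends to do
    if not approve(plan):           # human (or policy) reviews the plan
        return {"status": "rejected", "plan": plan}
    return {"status": "executed", "plan": plan, "result": execute(action)}

# Usage: a policy that only approves plans explicitly marked low-risk.
outcome = run_with_approval(
    action={"type": "rename_file", "risk": "low"},
    preview=lambda a: f"Will perform {a['type']} (risk: {a['risk']})",
    approve=lambda plan: "risk: low" in plan,
    execute=lambda a: "done",
)
print(outcome["status"])  # → executed
```

In practice the `approve` callback would be an interactive prompt or a learned policy trained on prior feedback; the structure stays the same.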
If something breaks, the agent must stop, not keep going and hope for the best. Silent failures are betrayals of trust. If budget is exceeded, block. If evidence capture fails, stop execution. If authentication fails, deny. Better to miss an opportunity than to cause damage.
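The fail-closed rules above (block on budget, stop on evidence failure, deny on auth failure) amount to a gate where any failed precondition halts execution. A minimal sketch, with hypothetical check functions standing in for the platform's real ones:

```python
def fail_closed(authenticate, check_budget, capture_evidence, execute):
    """Run an action only if every precondition passes; otherwise halt.
    The three checks are illustrative stand-ins, not Solace's real checks."""
    if not authenticate():
        raise PermissionError("authentication failed: denied")
    if not check_budget():
        raise RuntimeError("budget exceeded: blocked")
    if not capture_evidence():
        raise RuntimeError("evidence capture failed: execution stopped")
    return execute()

# A budget overrun blocks the action instead of silently continuing.
try:
    fail_closed(lambda: True, lambda: False, lambda: True, lambda: "sent")
except RuntimeError as err:
    print(err)  # → budget exceeded: blocked
```

Raising instead of returning a default means a missed check can never be mistaken for success.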
By default, your agent runs on your computer. Your credentials stay encrypted locally. No automatic cloud sync. No profile building. No behavioral analysis. Cloud services are opt-in, encrypted, and under your control. We don’t monetize your data because we never see it.
We don’t ship 100 half-baked features. We ship 3 deeply verified features. Each can pass a rung test (641 → 274177 → 65537). Each is proven safe through adversarial testing. This is slow. This is right.
OAuth3 scopes limit what agents can do. Budgets enforce spending limits. Step-up gating requires re-approval for high-risk actions. Hash chains detect tampering. These aren’t bolted on. They’re architected from first principles.
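Of the mechanisms above, the hash chain is the easiest to show concretely: each log entry's hash covers the previous entry's hash, so editing any record breaks every link after it. A minimal sketch (not Solace's implementation) using SHA-256:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, payload):
    """Append a record whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    entry = {"prev": prev, "payload": payload,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every link; any tampered entry breaks verification."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"prev": prev, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"action": "send_email", "by": "alice"})
append_entry(chain, {"action": "pay_invoice", "by": "alice"})
assert verify_chain(chain)
chain[0]["payload"]["by"] = "mallory"  # tamper with history...
assert not verify_chain(chain)         # ...and the chain detects it
```

Because the chain is append-only and each hash depends on all prior entries, tampering cannot be hidden without rewriting the entire tail of the log.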
Your agent is not conscious. It is not infallible. It is a tool that executes recipes under your direction. When it fails (and it will), you need to understand why. This is why we log everything and make everything auditable.
We Reject
- Surveillance under the guise of safety. We don’t profile your behavior. We don’t monetize your attention. We don’t build advertising models on your data.
- Hype over substance. We don’t promise “AGI by next quarter.” We don’t claim our model is “10x smarter.” We ship verified features and proof.
- Complexity as a moat. Solace is designed to be understood. You should be able to read the code and understand what it does. Intentional obfuscation is a sign we’re hiding something.
- “Your tokens are ours.” We charge a 20% markup on LLM tokens, not 500%. We align incentives so you benefit when you use less, not more.
- Move fast and break things. When agents break, you bear the damage, so we move slow. Verification precedes shipping. Never-Worse is not a feature flag.
What We’re Building
Solace is not a product. It is a platform for trusted agency. It enables humans to delegate work to AI agents and have provable evidence that the delegation worked correctly.
We are building:
- the first open standard for AI authorization (OAuth3)
- the most verified AI agent platform (rung 65537)
- the only platform with built-in compliance (FDA Part 11 architected)
- the safest way to automate without throwing caution away (human-in-the-loop)
Why This Matters Now
AI is no longer a research project. Companies are deploying agents into production. Healthcare systems are using LLMs to triage patients. Financial firms are using agents to execute trades. Governments are using automation to make eligibility decisions.
Without transparency and safety, we will inevitably cause harm. A bug in a trade execution agent can bankrupt companies. A failure in medical triage can hurt patients. Errors in government automation disenfranchise people.
This is exactly the kind of moment where standards matter.
In the 1980s, we invented cryptography standards so everyone could communicate securely. In the 1990s, we invented web standards so anyone could build a website. In the 2010s, we invented accessibility standards so anyone could use the web.
In the 2020s, we need AI agency standards so agents can be trusted.
A Call to Action
If you believe in transparent, verifiable, human-centered AI:
- Try Solace. Download the free tier. Run a workflow. See the evidence trail. Verify for yourself.
- Join the Dragon Warriors. Become part of the community that shapes the future of AI agency.
- Build with us. We’re hiring engineers, auditors, and policy experts. If this manifesto resonates with you, let’s talk.
- Hold us accountable. If Solace ever loses this vision, speak up. Call us out. Our manifesto is a promise.
The Future We’re Building
In 5 years, every agent will log evidence. In 10 years, agents will require explicit user approval for high-risk actions. In 20 years, running an agent without an audit trail will be as shocking as websites without HTTPS.
We’re accelerating that timeline.
This is what we believe.
We’re building Solace to prove it works. Join us.
Signed,
The Solace Team
Version 1.0, February 2026