When Software Thinks, It Needs to Prove It
In The AI Backend, we argued that your backend needs a reasoning layer. Not a chatbot, not a copilot, but a layer that sits alongside your services, making decisions that used to be hardcoded.
We called the principle "guided autonomy": software that reasons freely within boundaries you define, predictable enough to trust, flexible enough to be useful.
But here's what we didn't address: how do you actually trust it?
When software executes predetermined logic, trust is straightforward. You wrote the code, you know what it does, and the behavior is bounded by the instructions you gave it. Logs work great.
When software reasons, trust becomes a problem. It makes decisions based on context that didn't exist when you wrote the code, delegates to other agents, and acts on behalf of users who never explicitly approved each action.
If you can't answer "who did this, and were they authorized?" then you don't have guided autonomy. You have unaccountable autonomy, and that's a liability, not a feature.
Beyond the API Key
The current conversation around AI security is too narrow, focused on prompt injection, jailbreaking, and model safety. These are real problems, and solving them has value, but they miss the larger shift.
Think about your identity infrastructure. It has layers: authentication to prove who you are, authorization to control what you can do, permissions to scope access, audit to track what happened. Each layer assumes something fundamental: the client is predictable.
Your payment service doesn't suddenly decide to call an unexpected endpoint. Your notification service doesn't reason its way into customer data it wasn't designed to touch. The code does what the code does.
API keys and OAuth were built for this world. One key equals one identity equals one set of permissions. The protocol doesn't need to distinguish between "the user wanted this" and "the software decided this" because deterministic code naturally enforces fixed workflows.
Something fundamental is absent from this stack.
An identity layer designed for software that reasons. One that doesn't assume predictable behavior. One where delegation is a first-class primitive. One where every action carries cryptographic proof of who authorized it, why, and under what constraints.
The Authorization Paradox
The two approaches dominating today both fail, and for related reasons.
The callback trap. Require authorization checks at every decision point. Every time an agent acts, call back to an auth server. Consider what happens when you ask an agent to book a trip. A human searches sequentially, checking one flight site, picking dates, checking one hotel site, comparing prices, and repeating the cycle. Maybe 20-30 requests over an hour.
An agent operates differently. It can fan out in parallel: query Kayak, Expedia, Google Flights, and Skyscanner simultaneously for every date combination in your window. Each flight result triggers parallel hotel searches across Booking.com, Hotels.com, Airbnb. One user request cascades into hundreds or thousands of parallel API calls. The power of autonomous software, and exactly where callback-based auth breaks down.
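The fan-out can be made concrete with rough arithmetic. All the counts below are illustrative assumptions, not measurements, but they show how one request becomes thousands of calls:

```python
# Rough fan-out arithmetic for the trip-booking example (all numbers illustrative).
flight_sites = 4        # Kayak, Expedia, Google Flights, Skyscanner
date_combos = 30        # departure/return pairs in the travel window
hotel_sites = 3         # Booking.com, Hotels.com, Airbnb
results_per_search = 5  # flight options worth pricing hotels against

flight_calls = flight_sites * date_combos                       # 120 parallel searches
hotel_calls = flight_calls * results_per_search * hotel_sites   # 1800 follow-up searches
total_calls = flight_calls + hotel_calls

print(total_calls)  # 1920 API calls from one user request
```

With a callback at every decision point, that is roughly 2,000 round-trips to the auth server for a single "book me a trip," which is where the callback model collapses.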
The broad permission fantasy. Give agents sweeping access to avoid the bottleneck. Issue tokens with wide scopes and trust the agent to stay within bounds. This is the pattern spreading fastest right now: it demos beautifully and fails catastrophically in production. You can't audit what you can't attribute, and you can't revoke what you can't scope. Accountability vanishes, and you're back to "trust us" security, which isn't security at all.
A better path exists: identity infrastructure that enables delegation, local verification, and cryptographic proof of every action. We call this accountable autonomy.
What Autonomous Software Actually Needs
Building software that can think requires new primitives for trust. Not features you bolt on, but infrastructure that exists from the first line of code.
Identity that is real, not assumed. When your payment service calls your notification service, you know who's calling because you deployed both. The identity is implicit in the architecture. When your reasoning agent spawns a sub-agent that delegates to another sub-agent that calls an external service, who is calling? Each agent needs its own identity. Not a shared service account, not a forwarded token, but a real, cryptographic identity that can be independently verified.
Authorization that can delegate. Traditional auth is binary: you have access or you don't. Agents need something else, the ability to grant scoped, time-limited authority to other agents, authority that attenuates with each delegation, each grant narrower than the last. When Agent A delegates to Agent B, Agent B shouldn't get Agent A's full permissions. It should get exactly what it needs for this specific task, nothing more.
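The attenuation rule can be sketched in a few lines. This is a minimal illustration, not a real protocol; the field names (`scopes`, `max_amount`, `expires_at`) are assumptions chosen for the refund example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class Grant:
    """A scoped, time-limited authority held by an agent (fields illustrative)."""
    scopes: frozenset      # actions this agent may perform
    max_amount: float      # e.g. refund ceiling in dollars
    expires_at: datetime

def attenuate(parent: Grant, scopes, max_amount: float, ttl: timedelta) -> Grant:
    """Issue a child grant that can only narrow the parent, never widen it."""
    requested = frozenset(scopes)
    if not requested <= parent.scopes:
        raise PermissionError("child grant requests scopes the parent lacks")
    return Grant(
        scopes=requested,
        max_amount=min(max_amount, parent.max_amount),
        expires_at=min(datetime.now(timezone.utc) + ttl, parent.expires_at),
    )

# Agent A holds broad refund authority; Agent B gets a narrower slice of it.
a = Grant(frozenset({"refund", "lookup"}), 10_000,
          datetime.now(timezone.utc) + timedelta(hours=8))
b = attenuate(a, {"refund"}, 5_000, timedelta(hours=1))
print(b.max_amount)  # 5000
```

The invariant is the point: every derived grant is a subset of its parent in scope, amount, and lifetime, so authority can only shrink as it moves down the chain.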
Permissions that travel with the request. OAuth permissions live in the auth server, and you check them by calling back. But what happens when the request crosses organizational boundaries, when your agent talks to a partner's agent? Permissions need to be portable, verifiable without callbacks, and cryptographically bound to the request itself.
Audit that is proof, not logs. Logs record what someone claims happened. They can be edited, they can be incomplete, and they're not proof. When an agent makes a decision, the audit trail needs to be cryptographic, tamper-proof, and independently verifiable, years later, on an air-gapped machine, without trusting anyone's servers.
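One building block for this is a hash-chained audit trail, where each entry commits to the hash of the previous one, so editing or deleting any record breaks every hash after it. The sketch below shows only the chaining; a real deployment would additionally sign each entry with the agent's private key so the trail is attributable, not just tamper-evident:

```python
import hashlib
import json

def append_record(chain: list, decision: dict) -> None:
    """Append a decision to a hash-chained audit trail.

    Each entry commits to the previous entry's hash, so modifying any
    record invalidates every record after it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"decision": decision, "prev": prev}, sort_keys=True)
    chain.append({"decision": decision, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    """Re-derive every hash locally: no server, no trust in the log's keeper."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"decision": entry["decision"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"agent": "refund-bot", "action": "refund", "amount": 120})
append_record(log, {"agent": "refund-bot", "action": "notify", "user": "u42"})
print(verify_chain(log))            # True
log[0]["decision"]["amount"] = 999  # tampering breaks every later hash
print(verify_chain(log))            # False
```

Verification needs nothing but the chain itself, which is what makes it work on an air-gapped machine years later.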
Why OAuth Can't Get You There
OAuth was designed for a different problem. The core assumption is that "the client's request embodies the resource owner's intent." The client and the user are indistinguishable for the token's lifetime.
This made sense for the world OAuth was built for. When you authorize a calendar app to access your Google Calendar, the app's requests are your requests because the app executes your explicit instructions. But agents don't execute explicit instructions. They reason, they branch, and they make decisions based on context that didn't exist when the token was issued.
Consider what happens in a multi-agent workflow: four agents, four delegation hops. OAuth wasn't designed for this. A resource server at the end of the chain must be able to cryptographically verify the entire delegation path back to the original user, not just the final sub-agent making the request.
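What that verification involves can be sketched structurally. The hop format below is hypothetical, and the signature and expiry checks a real verifier would perform are elided; what remains is the chain logic itself, which plain OAuth tokens cannot express:

```python
def verify_delegation_chain(chain: list, root_user: str,
                            requested_scope: str) -> bool:
    """Structurally verify a delegation chain (signature checks elided).

    `chain` is a list of hops like {"issuer", "subject", "scopes"}; a real
    verifier would also check each hop's cryptographic signature and expiry."""
    if not chain or chain[0]["issuer"] != root_user:
        return False  # the chain must originate with the actual user
    allowed = set(chain[0]["scopes"])
    for prev, hop in zip(chain, chain[1:]):
        if hop["issuer"] != prev["subject"]:
            return False  # each hop must be issued by the previous delegate
        if not set(hop["scopes"]) <= allowed:
            return False  # authority can only narrow down the chain
        allowed = set(hop["scopes"])
    return requested_scope in allowed

chain = [
    {"issuer": "user:alice", "subject": "agent:planner",
     "scopes": {"book", "search"}},
    {"issuer": "agent:planner", "subject": "agent:flights",
     "scopes": {"search"}},
]
print(verify_delegation_chain(chain, "user:alice", "search"))  # True
print(verify_delegation_chain(chain, "user:alice", "book"))    # False
```

The resource server walks the whole path: who issued each hop, whether each hop narrowed rather than widened, and whether the root really is the user, without ever calling back to an auth server.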
Identity Infrastructure for Reasoning Agents
What if identity worked the way autonomous software works?
Instead of tokens that represent "who holds this credential," imagine identity based on cryptographic key ownership. Instead of authorization requiring callbacks, imagine verification that happens locally. Instead of delegation meaning "forward the same token," imagine delegation meaning "issue a new, scoped credential."
The solution comes from identity standards designed for decentralized trust. Decentralized Identifiers (DIDs) let agents prove identity cryptographically, while Verifiable Credentials (VCs) encode scoped authority that anyone can verify offline.
Each agent gets a real identity. Not a service account, not a forwarded token, but a cryptographic key pair that belongs to this specific agent, verifiable without calling anyone. Delegation becomes explicit. When one agent delegates to another, it doesn't share a token. It issues a credential: "Agent B can perform refund transactions up to $5,000 for the next hour." The credential is signed by Agent A's private key, anyone can verify it, and no callbacks are required.
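The issue-and-verify-locally flow can be sketched as follows. Python's standard library has no asymmetric signatures, so this sketch uses HMAC as a stand-in: it demonstrates "sign a scoped claim, verify without callbacks," but a real system would sign with an asymmetric key (e.g. Ed25519) so verifiers need only the issuer's public key. All field names are illustrative:

```python
import hashlib
import hmac
import json
import time

def issue_credential(issuer_key: bytes, subject: str, scope: str,
                     limit: float, ttl_seconds: int) -> dict:
    """Agent A issues a scoped, expiring credential to Agent B.

    Stand-in signature: HMAC over the canonical claim. A real system
    would use an asymmetric signature verifiable with a public key."""
    claim = {"sub": subject, "scope": scope, "limit": limit,
             "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return claim

def verify_credential(issuer_key: bytes, cred: dict) -> bool:
    """Verify locally: no callback to an auth server."""
    claim = {k: v for k, v in cred.items() if k != "sig"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and cred["exp"] > time.time()

key = b"agent-a-signing-key"  # illustrative only
cred = issue_credential(key, "agent:B", "refund", 5_000, 3600)
print(verify_credential(key, cred))  # True
cred["limit"] = 50_000               # tampering invalidates the signature
print(verify_credential(key, cred))  # False
```

The shape is the point: the constraint ("refunds up to $5,000, for one hour") lives inside the signed artifact itself, so any holder can check it without asking the issuer.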
Every execution produces a credential. Not a log entry that someone promises is accurate, but a cryptographically signed record that can be verified offline, years later, without cooperation from any party.
Trust Without Borders
The real power emerges across organizational boundaries. Consider a customer's shopping agent negotiating with your inventory agent, which checks with a third-party logistics agent to guarantee same-day delivery. Three companies, three agent systems, one transaction.
Traditional cross-company trust requires federation: shared identity providers, SAML agreements, mutual API key exchange. This worked when companies had a handful of integration partners, but it fails when every company has agents that need to talk to every other company's agents. You can't pre-federate with the entire economy.
With DIDs and Verifiable Credentials, two agents from different organizations establish trust at dialogue initiation. Exchange DIDs, present relevant credentials, verify signatures against public keys. No shared IdP, no federation setup, no pre-shared secrets.
Guided autonomy requires accountable identity, accountable identity enables trust without borders, and trust without borders enables the agent economy.