From Prompts to Permissions: The Constitution of Agentic AI
Agents are moving from chat to action. The next platform layer is the permission fabric around them. Scopes, time-boxed rights, receipts, and revocation will build trust as AI acts on your behalf.

The week the prompt stopped being enough
Something subtle but historic happened this week. OpenAI launched AgentKit and chat-embedded apps, positioning agents as first-class citizens inside ChatGPT. Google showed Gemini’s browser-native Computer Use, an agent that clicks, drags, and types inside a real browser. IBM and Anthropic published enterprise guidance that reads more like a security architecture than a marketing deck. Taken together, the message is clear: the next platform layer is not the assistant itself. It is the permission fabric around it.
The last decade trained us to ask models for things with clever prompts. Prompts are not enough once software can act. Agents need explicit capabilities, time limits, and a paper trail. They need a constitution. This week’s releases feel like a first draft of that compact.
- OpenAI framed the build path with new tooling for agents and a way to embed them directly in chat. The company’s announcement of AgentKit formalizes multi-agent workflows, connectors, and guardrails, and it puts capability governance in the same pane as design and evaluation. See the OpenAI announcement on AgentKit.
- Google’s Gemini 2.5 Computer Use demonstrated a model operating a real browser with a small but potent set of predefined actions such as open, type, click, and drag. That matters because it acts where application programming interfaces do not exist. Capability governance is no longer an app-by-app permission problem. It is a capability-by-capability problem across the open web.
- IBM and Anthropic, meanwhile, published enterprise agent guidance grounded in the Model Context Protocol community and a secure agent development lifecycle. It treats capability grants, revocation, and audit as first-class design objects. IBM’s press release describes the trajectory in an IBM and Anthropic enterprise guide.
From asking to authorizing
Prompts are requests. Permissions are authorizations. Requests can be misread. Authorizations can be misused. The difference is accountability.
Think about how you already delegate. You do not tell a travel assistant to try its best. You hand it a corporate card with a spending limit, a route preference, and a time window. You also expect a receipt. Agents need the same structure.
What changes in the user experience:
- Scopes replace vague instructions. Instead of "please book a flight," you grant a scope like flight.booking:create with constraints such as airline preferences, a price ceiling, and no red-eyes.
- Rights become time-boxed. You grant the scope for 30 minutes or until a specific itinerary is booked. The grant auto-expires.
- Intent proofs accompany actions. The agent attaches a short, signed statement of intent to each step stating what it believes you asked, what constraints it applied, and why it chose a vendor.
- Revocation becomes routine. A persistent revoke button and a one-tap pull the plug action now live in the same place as the chat input.
If prompts were conversations, permissions are power of attorney letters with expiration dates and itemized limits.
What a capability grant looks like
Here is a concrete, human-readable example of a scope grant for an agent that can book travel and update your calendar. This is not a standard yet, but it illustrates the level of specificity users and admins will need.
subject: agent:travel-assistant@yourcompany
principal: user:alex@example.com
scopes:
  - id: flight.booking:create
    constraints:
      max_total_price_usd: 650
      cabin_class: economy
      depart_window: 2025-10-14T06:00Z..2025-10-14T18:00Z
      nonstop_only: true
      allowed_airlines: ["DL", "UA"]
    ttl_minutes: 45
    audit_level: summary+artifacts
  - id: calendar.event:write
    constraints:
      domains: ["work"]
      max_duration_minutes: 90
    ttl_minutes: 60
receipts:
  format: intent-proof-v1
  destination: user-vault://alex/activity-ledger
revocation:
  on_user_action: immediate
  on_anomaly: auto
Three things to notice:
- Constraints are the rulebook. They convert your preferences into enforceable policy.
- Time-to-live forces renewal. You do not want forgotten grants hanging around for months.
- Receipts and audit level declare what will be logged and where that log will live before anything happens.
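To see how such a grant becomes enforceable, consider a policy checker that evaluates every proposed action against the grant before execution. This is a minimal Python sketch, assuming a hypothetical grant and action shape modeled loosely on the example above; none of these names are part of any standard.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical grant, mirroring the flight-booking scope above.
GRANT = {
    "id": "flight.booking:create",
    "constraints": {
        "max_total_price_usd": 650,
        "nonstop_only": True,
        "allowed_airlines": {"DL", "UA"},
    },
    "granted_at": datetime.now(timezone.utc),
    "ttl_minutes": 45,
}

def check(grant: dict, action: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Deny by default on any violation."""
    expiry = grant["granted_at"] + timedelta(minutes=grant["ttl_minutes"])
    if datetime.now(timezone.utc) > expiry:
        return False, "grant expired"
    c = grant["constraints"]
    if action["total_price_usd"] > c["max_total_price_usd"]:
        return False, "price above ceiling"
    if c["nonstop_only"] and action["stops"] > 0:
        return False, "itinerary has stops"
    if action["airline"] not in c["allowed_airlines"]:
        return False, "airline not allowed"
    return True, "ok"

allowed, reason = check(GRANT, {"total_price_usd": 612, "stops": 0, "airline": "DL"})
```

The deny-by-default shape matters: the checker returns a reason with every refusal, which is exactly the material an intent proof or audit log needs.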
The audit trail is the product
Once agents can click and buy, the log is not a compliance afterthought. It is a core user feature. Think order history, but for every action an agent took on your behalf, down to the form field it typed and the page it clicked.
A good audit trail creates trust in three ways:
- Provenance. Each step has a signed checksum of inputs and proposed outputs. You can reconstruct what the agent saw and decided without exposing model internals.
- Accountability. There is a clear map from user authorization to agent action. Anomalies are explainable. Who granted the scope, when, and with what constraints is visible.
- Negotiated privacy. You can choose summary-only logs for sensitive tasks or full artifact capture for high-risk tasks. It is opt-in and readable.
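The provenance property above can be sketched concretely. The following Python fragment binds a checksum of a step's inputs and proposed outputs into a signed receipt; the key name and receipt shape are illustrative assumptions, and a real system would use asymmetric signatures rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

LEDGER_KEY = b"demo-key-held-by-the-user-vault"  # illustrative shared secret

def make_receipt(step: dict, key: bytes = LEDGER_KEY) -> dict:
    """Bind a checksum of a step's inputs and proposed outputs into a signed receipt."""
    payload = json.dumps(step, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"step": step, "sha256": digest, "sig": signature}

def verify_receipt(receipt: dict, key: bytes = LEDGER_KEY) -> bool:
    """Recompute the checksum and signature; any tampering breaks both."""
    payload = json.dumps(receipt["step"], sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == receipt["sha256"] and hmac.compare_digest(expected, receipt["sig"])

r = make_receipt({"intent": "book DL1234", "constraint": "max $650", "chosen": "DL1234 @ $612"})
```

Note that the receipt records what the agent saw and proposed, not model internals, which is what lets the log be both auditable and privacy-preserving.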
We explored the value of logs in shaping behavior in post-incident logs teach AI why. The coming permission fabric turns that philosophy into a practical product surface that users will open as often as their order history.
Revocation first, not last
Revocation needs to be a first-class flow. Right now, most agent demos open with a flashy capability and end on the happy path. Real life needs the opposite. People change their minds. Inputs change. Vendors fail.
Four revocation patterns to implement now:
- Manual revoke. One tap inside chat to pull active scopes. The model acknowledges the pull and receives a denial token.
- Timer revoke. Grants expire by default. Renewal must be explicit and explain why further control is needed.
- Policy revoke. Security detects an anomaly, such as an unexpected vendor domain, and automatically suspends the scope pending review.
- Cascading revoke. Pulling a high-level scope automatically invalidates all dependent capabilities. If calendar.write is revoked, the meeting scheduler cannot backdoor the calendar through a delegated sub-agent.
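The manual and cascading patterns combine naturally in a central revocation service. Here is a minimal in-memory sketch in Python; the scope names and deny-token format are invented for illustration, and a production version would be a networked endpoint, not module state.

```python
# Hypothetical dependency map: revoking a parent scope cascades to its children.
DEPENDENTS = {
    "calendar.write": ["meeting.schedule", "invite.send"],
    "meeting.schedule": [],
    "invite.send": [],
}
active = {"calendar.write", "meeting.schedule", "invite.send"}
deny_tokens: dict[str, str] = {}

def revoke(scope: str) -> None:
    """Revoke a scope and, recursively, everything delegated from it."""
    if scope in active:
        active.discard(scope)
        deny_tokens[scope] = f"deny:{scope}"  # agents must cache this and back off
    for child in DEPENDENTS.get(scope, []):
        revoke(child)

def is_allowed(scope: str) -> bool:
    """The check every agent must make before acting."""
    return scope in active

revoke("calendar.write")
```

After the revoke, no dependent capability survives: the meeting scheduler cannot reach the calendar through a delegated sub-agent because its own scope was invalidated in the same cascade.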
OAuth for autonomy is coming
When apps first asked for your data, we got OAuth, a system where you click to grant specific scopes to specific apps and can revoke them later. Agent ecosystems need a similar contract, but with richer semantics that include time, intent, provenance, and multi-agent delegation.
A credible standard will likely include:
- Rich scopes. Beyond read, write, and delete, we will see verbs like trade, reserve, sign, and deploy, each with structured constraints.
- Intent proofs. Short, signed summaries that bind a user instruction to a proposed action and its rationale without exposing thought tokens.
- Delegation chains. Agents that call other agents must pass along attenuated tokens, not blanket keys. Short chains reduce blast radius.
- Portable capability tokens. Tokens that travel across browser, operating system, and enterprise stacks, minted by the resource owner and verified by anyone.
- Machine-readable receipts. A standard format for receipts that are audit-ready and privacy-aware, stored in user-controlled vaults.
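Attenuated delegation, in particular, has a well-known cryptographic shape: chain an HMAC so that each delegate can add caveats but never remove them. This Python sketch shows the idea under invented caveat names; it is a toy, not a token standard.

```python
import hashlib
import hmac

ROOT_KEY = b"resource-owner-root-key"  # held only by the resource owner

def mint(key: bytes, caveat: str) -> tuple[bytes, list[str]]:
    """Mint a token carrying one caveat; the signature chains off the root key."""
    sig = hmac.new(key, caveat.encode(), hashlib.sha256).digest()
    return sig, [caveat]

def attenuate(sig: bytes, caveats: list[str], caveat: str) -> tuple[bytes, list[str]]:
    """Narrow a token: append a caveat and re-sign. Caveats can be added, never removed."""
    new_sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return new_sig, caveats + [caveat]

def verify(root: bytes, sig: bytes, caveats: list[str]) -> bool:
    """The resource owner replays the caveat chain and checks the final signature."""
    cur = root
    for c in caveats:
        cur = hmac.new(cur, c.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(cur, sig)

sig, caveats = mint(ROOT_KEY, "scope=calendar.event:write")
sig, caveats = attenuate(sig, caveats, "event_id=evt_123")  # narrowed for a sub-agent
```

Because each hop can only tighten the token, a calendar agent can hand a conferencing agent a capability bound to a single event id without ever sharing a blanket key, which is exactly the short-chain, small-blast-radius property described above.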
Anthropic’s Model Context Protocol community seeded part of this idea by standardizing how models discover and call tools. IBM’s work with Anthropic pushes that further into enterprise practice with governance patterns and policy enforcement. The market will converge on MCP-style interfaces that carry not only how to call a tool, but who is allowed to call it, for how long, and under what constraints.
This evolution also aligns with how platforms consolidate access. As we argued in assistants become native gateways, the assistant interface becomes the front door to everything else. A permission fabric is the access control vestibule that sits right behind that door.
The browser becomes the new kernel
Google’s Computer Use feature matters because it anchors capability governance at the interface layer. When a model can operate a browser, the primary primitive is not an application programming interface key. It is a click. Governing clicks looks different.
Instead of asking whether an agent can access your bank API, the better question is whether this agent can click the transfer button on your bank page, once, for up to 200 dollars, between 9 a.m. and noon. That is scope, constraint, and time in a place that never offered an application programming interface. The browser becomes a universal adapter. The permission fabric becomes the switchboard that routes and restricts power.
Expect three near-term patterns:
- Action whitelists. A small set of allowed actions on specified origins with rate limits.
- Visual attestations. The agent captures sanitized screenshots or element hashes to prove it clicked the intended control.
- Human checkpoints. For sensitive actions, the agent pauses and requests human approval inside the same chat, carrying forward the full intent proof.
Per-action escrow and liability
If an agent can act, someone is on the hook for mistakes. Today, platforms blend terms of service with bug bounties. That will not scale when agents move funds, sign contracts, or change infrastructure.
A workable model looks like per-action escrow and layered liability:
- For any action above a defined risk threshold, the platform sets aside a micro-escrow from either the user’s budget or the vendor’s bond. If the action causes quantifiable harm within a defined window, the escrow pays automatically.
- Vendors post standing bonds to access high-risk scopes. Bonds rise with risk tiers and fall with proven reliability.
- Insurers offer riders for agents that operate in enterprise settings, priced by observed behavior and verified by audit trails.
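The escrow math itself is mechanical once the risk tiers are defined. A small Python sketch, with entirely illustrative thresholds and rates, and a discount standing in for the "bonds fall with proven reliability" idea:

```python
# Illustrative risk tiers: (value threshold in USD, escrow rate). Made up for the sketch.
RISK_TIERS = [
    (10_000, 0.10),  # actions above $10k set aside 10%
    (1_000, 0.05),   # above $1k: 5%
    (100, 0.02),     # above $100: 2%
]

def escrow_for(action_value_usd: float, reliability_discount: float = 0.0) -> float:
    """Micro-escrow set aside before a risky action executes.

    reliability_discount models a vendor bond shrinking with a proven track record.
    """
    for threshold, rate in RISK_TIERS:
        if action_value_usd > threshold:
            return round(action_value_usd * rate * (1 - reliability_discount), 2)
    return 0.0  # below the lowest risk threshold: no escrow needed
```

Because the trigger inputs (action value, tier, discount) all appear in machine-readable receipts, the payout decision can be adjudicated from the audit trail alone.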
Because receipts and intent proofs are machine-readable, claims can be adjudicated programmatically. Enterprises will increasingly demand audit-ready disclosures, much like the framing in auditable model 10-Ks. The result is faster restitution and fewer arguments about who approved what.
A market for portable capability tokens
Portable capability tokens create a new market layer. Think of them as cryptographic permission slips that travel with the agent across platforms.
- Browser to operating system. A token that authorizes a browser agent to save a file to a specific folder for ten minutes can be presented to the operating system without handing out a global file system key.
- Enterprise to consumer. A company can grant a vendor’s agent a scope to read a specific dataset for one hour and then hand the user a summary. The vendor never holds a standing key to the database.
- Cross-vendor orchestration. A calendar agent can pass a narrowed token to a conferencing agent that only allows creating a meeting link tied to a single event id.
As these tokens become standardized, marketplaces will emerge around audit providers, escrow insurers, and capability brokers that mint and validate tokens with vetted policy templates. Procurement teams will buy not only agent functionality, but the guarantees that come with it.
What builders should do this quarter
If you are building or buying agents in the next 90 days, convert this theory into action.
Product teams
- Design scope-first flows. Make the permission grant the main screen, not a side drawer. Show constraints before model output. Offer presets for common grants and a custom builder for advanced users.
- Default to time-boxed rights. No grant without a timer. Ask for renewal with a clear explanation.
- Ship a receipt viewer. Treat receipts like order histories. Users should filter by agent, resource, and time, and export for compliance.
Security and platform teams
- Instrument an activity ledger. Capture intent proofs and artifact hashes. Sign them and store in a user-controlled vault with role-based access.
- Build revocation as a service. Offer a central revoke endpoint that all agents must consult before acting. Require agents to cache deny tokens and back off.
- Pilot portable capability tokens. Start with low-risk scopes such as calendar availability and document comments. Measure latency and failure modes.
Developers
- Adopt MCP-style tool interfaces. Keep tools declarative with capability descriptions and constraint schemas. Treat permission checks as part of the tool contract.
- Add anomaly hooks. Build plug points for policy engines that can pause or block actions based on domain, price, or content signals.
- Log at the boundary. Emit receipts when crossing system boundaries, not inside the model. Boundary logs are easier to audit and harder to tamper with.
Legal and risk
- Define harm thresholds. Enumerate what counts as an automatically compensable error. Tie thresholds to escrow triggers.
- Standardize consent receipts. Work with design to make them readable and consistent across agents. You will thank yourself during audits.
- Negotiate bonds, not blurbs. Ask vendors for standing bonds on high-risk scopes rather than more words in a support policy.
Why this is accelerationist and safe
Acceleration happens when the slow parts go away. The slow part of agents is not model latency. It is organizational friction such as approvals, audits, and clean rollback. A permission fabric speeds that up by making rights explicit, revocation routine, and accountability automatic. The result is more capability shipped with fewer meetings and safer defaults.
Safety is not a slogan. It is a contract you can enforce. When rights are scoped, time-boxed, and logged, you can experiment faster because you can cleanly undo and clearly explain. That is what this week’s launches are really about. OpenAI is making it easier to build agents. Google is making it easier to act where application programming interfaces do not exist. IBM and Anthropic are showing how to govern that power in enterprises. The platform layer that connects these moves is permissions.
The punch line
Prompts will not disappear. They just will not be the seat of power. The seat of power will be the grant, the timer, the receipt, and the revoke. That is the social contract of agentic AI. Write it once, make it machine-readable, and let it travel with your agents wherever they go. When we do, assistants stop being toys and start being teammates.
And just like that, we will look back on October 2025 as the month we stopped asking and started authorizing.








