Deep Dive: Enterprise Security with OBO Tokens & AI Agents
In our introductory article, we compared On-Behalf-Of (OBO) tokens to a school hall pass. Now, let's step into the real world of enterprise software.
We'll explore a complex scenario involving a Corporate Traveler App, an API Gateway, an AI Agent, and the Model Context Protocol (MCP).
The Scenario: Sally & Steve
The Characters:
- Sally (The Traveler): She just returned from a business trip and needs to submit her expenses. She has the basic expenses.submit scope.
- Steve (The Manager): He is Sally's leader. He needs to review and approve expenses. He has the elevated expenses.approve scope.
- Traveler Assistant (The AI Agent): A helpful bot inside the company chat app.
Step-by-Step Security Flow
Goal: Sally wants to ask the AI Agent to "Check the status of my latest expense report."
1. Authentication & Minting the User Token
When Sally logs into the Traveler App chat interface, the Identity Provider (like Okta or Google) verifies
who she is. It issues a standard User Access Token.
Token Claims: { User: "Sally", Scopes: ["expenses.submit", "expenses.read_own"] }
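In practice those claims live inside a JWT, whose payload is just base64url-encoded JSON. A minimal sketch of reading them back out (the toy token and field names here are illustrative; a real service must verify the signature before trusting anything):

```python
import base64
import json

def decode_claims(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.
    Real services must verify the signature first."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a toy token whose payload mirrors Sally's claims above.
claims = {"sub": "Sally", "scopes": ["expenses.submit", "expenses.read_own"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
toy_jwt = f"header.{body}.signature"

print(decode_claims(toy_jwt)["scopes"])
```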
Use Case: Why Not Just Use Sally's Token?
You might think, "Sally already has a token, why not just give that to the Agent?" This is a major security risk.
If we gave this raw token to the AI Agent:
- The AI Agent (and the LLM provider) would theoretically have access to all of Sally's data, not just expenses.
- If the Agent was compromised, the attacker could read Sally's emails.
The Best Practice: We follow the Principle of Least Privilege. We never share the raw user token. We exchange it for a narrow OBO token that can only do one thing: talk to the Expense Service.
2. The Handshake: Token Broker & OBO Exchange
Sally types her request to the AI Agent. The application doesn't just pass her raw User Token to the bot. Instead, it calls the Token Broker.
The Token Broker performs a secure exchange: "Here is Sally's User Token. Please give me an OBO Token
specifically for the Expense Service."
The resulting OBO Token is narrower. It is only valid for the Expense Service and only for a short time.
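OAuth 2.0 Token Exchange (RFC 8693) defines the wire format brokers commonly use for this kind of swap. A sketch of the request body the broker might send (endpoint, audience, and token values are hypothetical; the grant_type and token_type URNs are the real RFC 8693 constants):

```python
# Sketch of an RFC 8693 token exchange request body the Token Broker
# might POST to the identity provider (endpoint/audience hypothetical).
SALLY_USER_TOKEN = "eyJ...sally"  # placeholder for the raw user token

exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": SALLY_USER_TOKEN,
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    # Narrow the token: only the Expense Service, only the scope needed.
    "audience": "https://expense-service.internal",
    "scope": "expenses.read_own",
}

# In production this body is form-encoded and sent over TLS, e.g.:
# requests.post("https://idp.example.com/oauth/token", data=exchange_request)
print(exchange_request["audience"])
```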
3. The AI Agent Acts
The AI Agent receives the request along with this OBO Token. It analyzes the text ("Check status...") and decides it needs to call a tool.
4. MCP Scope Validation (The Gatekeeper)
This is where the Model Context Protocol (MCP) shines. The AI Agent connects to the "Expense MCP Server."
Before running any function, the MCP Server inspects the OBO Token:
- Check 1: Does this token belong to a valid user? (Yes, Sally).
- Check 2: Does this token have the scope to read all expenses? (No).
- Check 3: Does this token have the scope to read own expenses? (Yes, expenses.read_own).
The MCP Server executes the function get_expense_status(user="Sally") and returns the result.
Importantly, if Sally tried to ask "Approve my own expense," the MCP Server would see she lacks the expenses.approve scope and block the action immediately, before the tool even runs.
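A minimal sketch of that gatekeeper check (tool names, the scope table, and the token shape are illustrative, not the actual MCP wire format):

```python
# Hypothetical scope gate an Expense MCP Server might run before any tool.
REQUIRED_SCOPE = {
    "get_expense_status": "expenses.read_own",
    "approve_expense": "expenses.approve",
}

def call_tool(tool: str, obo_token: dict) -> str:
    """Reject the call unless the OBO token carries the tool's scope."""
    needed = REQUIRED_SCOPE[tool]
    if needed not in obo_token["scopes"]:
        raise PermissionError(f"{obo_token['sub']} lacks scope {needed}")
    return f"{tool} executed for {obo_token['sub']}"

sally = {"sub": "Sally", "scopes": ["expenses.submit", "expenses.read_own"]}
print(call_tool("get_expense_status", sally))  # allowed
try:
    call_tool("approve_expense", sally)        # blocked before the tool runs
except PermissionError as err:
    print(err)
```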
What About Steve?
If Steve asks the exact same AI Agent, "Approve Sally's expense," the flow is identical, but his OBO Token contains expenses.approve. The MCP Server sees this scope and allows the approve_expense function to execute.
The Beauty of this Architecture: The AI Agent logic doesn't need complex if/else statements for security. It simply passes the OBO Token, and the underlying services (via MCP) enforce the rules based on the original user's identity.
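To make that concrete (all names here are hypothetical), the agent-side code carries no permission logic at all; it only forwards the token as a bearer credential and lets the downstream service decide:

```python
# Hypothetical agent-side tool call: note there is no if/else security logic.
def agent_call_tool(tool_name: str, args: dict, obo_token: str) -> dict:
    """Forward the user's OBO token; the MCP server enforces scopes."""
    return {
        "tool": tool_name,
        "arguments": args,
        # The token rides along as a standard bearer credential.
        "headers": {"Authorization": f"Bearer {obo_token}"},
    }

request = agent_call_tool("get_expense_status", {"user": "Sally"}, "obo-abc123")
print(request["headers"]["Authorization"])
```

Because the agent is identical for Sally and Steve, adding a new role or scope never requires touching agent code, only the scope tables on the services that own the data.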