The Good Tech Companies - Delegating AI Permissions to Human Users with Permit.io’s Access Request MCP
Episode Date: June 25, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/delegating-ai-permissions-to-human-users-with-permitios-access-request-mcp. Learn how to build secure, human-in-the-loop AI agents using Permit.io’s Access Request MCP, LangGraph, and LangChain MCP Adapters. Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #mcp, #ai-agents, #access-control, #request-access-approval, #permit.io, #ai-permissions-permit.io, #access-request-mcp, #good-company, and more. This story was written by: @permit. Learn more about this writer by checking @permit's about page, and for more stories, please visit hackernoon.com.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Delegating AI Permissions to Human Users with Permit.io's Access Request MCP.
By Permit.io. As AI agents become more autonomous and capable,
their role is shifting from passive assistants to proactive actors.
Today's large language models (LLMs) don't just generate text; they execute tasks, access APIs, modify
databases, and even control infrastructure.
AI agents are taking actions that were once reserved strictly for human users, whether
it's scheduling a meeting, deploying a service, or accessing a sensitive document.
When agents operate without guardrails, they can inadvertently make harmful or unauthorized
decisions. A single hallucinated command, misunderstood prompt, or overly broad permission can result
in data leaks, compliance violations, or broken systems. That's why integrating human-in-the-loop
(HITL) workflows is essential for agent safety and accountability.
Permit.io's Access Request MCP is a framework designed to let AI agents
request sensitive actions while humans remain the final decision-makers.
Built on Permit.io and integrated into popular agent frameworks like LangChain and LangGraph,
this system lets you insert approval workflows directly into your LLM-powered applications.
In this tutorial, you'll learn: why delegating sensitive permissions to humans is critical for trustworthy AI,
how Permit.io's Model Context Protocol (MCP) enables access request workflows,
and how to build a real-world system that blends LLM intelligence with human oversight, using
LangGraph's interrupt feature.
Before we dive into our demo application and implementation steps,
let's briefly discuss the importance of delegating AI permissions to humans.
Why delegating AI permissions to humans is critical.
AI agents are powerful, but, as we all know, they're not infallible.
They follow instructions, but they don't understand context like humans do.
They generate responses, but they can't judge consequences.
And when those agents are integrated into real systems (banking tools, internal dashboards,
infrastructure controls), that's a dangerous gap.
In this context, what can go wrong is pretty clear.
Over-permissive agents: LLMs may be granted access to tools they shouldn't touch,
either by design or by accident.
Hallucinated tool calls: agents can fabricate commands, arguments, or IDs that never existed.
Lack of auditability: without human checkpoints, there's no clear record of who approved what, or why.
Delegation is the solution.
Instead of giving agents unchecked power, we give them a protocol:
the agent may ask, but a human decides.
By introducing human-in-the-loop (HITL) approval at key decision points, you get:
Safety: prevent irreversible actions before they happen.
Accountability: require explicit human sign-off for high-stakes operations.
Control: let people set the rules for who can approve, what can be approved, and when.
It's the difference between an agent doing something and an agent requesting to do something.
And that's exactly what Permit.io's Access Request MCP enables.
Permit.io's Access Request MCP.
The Access Request MCP is a core part of Permit.io's Model Context Protocol (MCP), a specification that gives AI agents safe,
policy-aware access to tools and resources. Think of it as a bridge between LLMs that want to act
and humans who need control. What it does: Permit's Access Request MCP enables AI agents to:
Request access to restricted resources, e.g., "Can I access this restaurant?"
Request approval to perform sensitive operations, e.g., "Can I order this restricted dish?"
Wait for human input before proceeding, via LangGraph's interrupt mechanism.
Log the request and decision for auditing and compliance.
Behind the scenes, it uses Permit.io's authorization capabilities, built to support ReBAC (relationship-based access control)
and other fine-grained authorization (FGA) policies, along with approval workflows: policy-backed elements
that work across UI, API, and LLM contexts. Plug and play with LangChain and LangGraph: Permit's MCP server
is integrated directly into the LangChain MCP Adapters and the LangGraph ecosystem.
You can expose Permit Elements as LangGraph-compatible tools.
You can pause the agent when sensitive actions occur.
You can resume execution based on real human decisions.
It's the easiest way to inject human judgment into AI behavior.
No custom backend needed.
Now that we understand the implementation and its benefits, let's get into our demo application.
What we'll build: demo application overview.
In this tutorial, we'll build a real-time approval workflow in which an AI agent
requests access or performs sensitive actions, but only a human can approve them.
Scenario: a family food ordering system.
To see how Permit's MCP can help enable an HITL workflow in a user application, we'll model a food ordering system for a family.
Parents can access and manage all restaurants and dishes. Children can view public items, but must request access to restricted restaurants or expensive dishes.
When a child submits a request, a parent receives it for review and must explicitly approve or deny it before the action proceeds. This use case reflects a common pattern. Agents can help, but humans
decide. Tech stack: we'll build this HITL-enabled agent using Permit.io, which handles authorization,
roles, policies, and approvals; the Permit MCP server, which exposes Permit workflows as tools that the agent can use;
LangChain MCP Adapters, which bridge Permit's MCP tools into LangGraph and LangChain;
LangGraph, which orchestrates the agent's workflow with human-in-the-loop support;
Gemini 2.0 Flash, a lightweight multimodal LLM used as the agent's reasoning engine;
Python, the glue holding it all together.
You'll end up with a working system where agents can collaborate with humans to ensure safe,
intentional behavior, using real policies, real tools, and real-time approvals.
A repository with the full code for this application is available here.
Step-by-step tutorial. In this section, we'll walk through how to implement a fully functional human-in-the-loop agent system using Permit.io and LangGraph. We'll cover: modeling
permissions with Permit, setting up the Permit MCP server, creating a LangGraph plus LangChain
MCP client, adding human-in-the-loop approval, and running the full workflow. Let's get into it.
Modeling permissions with Permit. We'll start by defining your system's access rules inside the Permit.io dashboard. This lets you model which
users can do what, and which actions should trigger an approval flow. Create a ReBAC resource:
navigate to the Policy page from the sidebar, then click the Resources tab. Click Create a
resource. Name the resource. Under ReBAC Options, define two roles. Click Save.
Now, go to the Policy Editor tab and assign permissions (e.g., full access for the parent role).
Set up Permit Elements: go to the Elements tab from the sidebar.
In the User Management section, click Create Element. Configure the element as follows:
Name: Restaurant Requests; Configure elements based on: ReBAC resource roles;
Resource type: restaurants; Role permission levels: Level 1 (workspace owner); plus the assignable roles.
Click Create. In the newly created element card, click Get Code and take note of the config ID.
We'll use this later in the file. Add operation approval: create a new Operation Approval element with Name: Dish Approval and
Resource type: restaurants, then click Create. Next, create an Approval Management element with Name:
Dish Requests, click Get Code, and copy the config ID. Add test users and resource instances:
navigate to Directory > Instances, click Add Instance, and set Resource type: restaurants, an instance key, and the tenant (the default tenant, or your working tenant).
Switch to the Users tab, click Add User, set a key and instance access, and click Save. Then create another user with a second key, but don't assign it a role.
Once Permit is configured, we're ready to clone the MCP server and connect your policies to a working agent.
Setting up the Permit MCP server.
With your policies modeled in the Permit dashboard,
it's time to bring them to life
by setting up the Permit MCP server:
a local service that exposes your access request
and approval flows as tools that an AI agent can use.
Clone and install the MCP server:
start by cloning the MCP server repository
and setting up a virtual environment.
Add environment configuration: create an environment file at the root of the project, based on the provided example,
and populate it with the correct values from your Permit setup.
You can retrieve these values using the following resources.
Warning note: we are using Permit's local PDP (Policy Decision Point) for this tutorial to support
ReBAC evaluation, low latency, and offline testing.
Start the server: with everything in place, you can now run the MCP server locally.
Once the server is running, it will expose your configured Permit Elements (access request,
approval management, etc.) as tools the agent can call through the MCP protocol.
Creating a LangGraph plus LangChain MCP client. Now that the Permit
MCP server is up and running, we'll build an AI agent client that can interact with it.
This client will use a Gemini-powered LLM to decide what actions to take, dynamically invoke
MCP tools, run entirely within a LangGraph workflow, and, in the next section, pause for human review
using LangGraph's interrupt. Let's connect the dots. Install required dependencies: inside your MCP project
directory, install the necessary packages: the LangChain MCP adapters (which automatically convert
Permit MCP tools into LangGraph-compatible tools), LangGraph (for orchestrating
graph-based workflows), and the Google GenAI integration (for interacting with Gemini 2.0 Flash). Add a Google API key:
You'll need an API key from Google AI Studio to use Gemini.
Add the key to your environment file.
Build the MCP client: create a client file in your project root.
We'll break this file down into logical blocks.
Imports and setup: start by importing dependencies and
loading environment variables,
then set up your Gemini LLM.
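The code itself isn't included in the audio, so here's a minimal sketch of what that setup block might look like, assuming a client.py file and the langchain-google-genai integration; the file and variable names are illustrative:

```python
# client.py -- illustrative imports and LLM setup (names are assumptions)
import os

from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

# Load GOOGLE_API_KEY (and any Permit-related settings) from the .env file
load_dotenv()

# Gemini 2.0 Flash acts as the agent's reasoning engine
llm = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    google_api_key=os.getenv("GOOGLE_API_KEY"),
)
```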
Configure MCP server parameters: tell LangGraph how to communicate with the running
MCP server.
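If the server is launched over stdio, the parameters might look like the following sketch; the command and path are placeholders, not the repository's actual entry point:

```python
from mcp import StdioServerParameters

# How to launch the local Permit MCP server -- command/args are placeholders,
# point them at the server entry point from the cloned repository
server_params = StdioServerParameters(
    command="python",
    args=["path/to/permit_mcp_server.py"],
    env=os.environ.copy(),  # forward the PERMIT_* settings loaded from .env
)
```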
Define the shared agent state.
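A common LangGraph pattern is a state that simply accumulates messages; the exact shape in the original article may differ, but a sketch could be:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    # Running conversation; add_messages appends rather than overwrites
    messages: Annotated[list[AnyMessage], add_messages]
```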
Define workflow nodes in the graph builder.
Here's the logic to route between calling the LLM and invoking tools.
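A sketch of what those nodes and the graph builder might look like, assuming the AgentState and llm defined above; the node names and the setup_graph helper are illustrative:

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.prebuilt import ToolNode


def setup_graph(tools):
    """Bind the MCP tools to the LLM, then build and compile the workflow graph."""
    llm_with_tools = llm.bind_tools(tools)

    async def call_llm(state: AgentState):
        # LLM node: let Gemini decide whether to answer directly or call an MCP tool
        response = await llm_with_tools.ainvoke(state["messages"])
        return {"messages": [response]}

    def route_after_llm(state: AgentState):
        # Conditional edge: go to the tool node if the LLM requested a tool call,
        # otherwise end the graph
        last = state["messages"][-1]
        return "tools" if getattr(last, "tool_calls", None) else END

    builder = StateGraph(AgentState)
    builder.add_node("llm", call_llm)
    builder.add_node("tools", ToolNode(tools))
    builder.add_edge(START, "llm")
    builder.add_conditional_edges("llm", route_after_llm, ["tools", END])
    builder.add_edge("tools", "llm")

    # In-memory checkpointer so the graph can later pause and resume
    return builder.compile(checkpointer=MemorySaver())
```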
In the above code, we have defined an LLM node and its conditional edge,
which routes to the tool node if there is a tool call in the state's
messages, or otherwise ends the graph.
We have also defined a function to set up and compile the
graph with an in-memory checkpointer.
Next, add code to stream responses from the
graph, plus an interactive chat loop that runs until it's explicitly
exited: it streams output, handles chat input, and loops for user interaction.
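A minimal version of that streaming helper and chat loop might look like this; the exit command and thread id are arbitrary choices:

```python
from langchain_core.messages import HumanMessage


async def stream_responses(graph, query, config):
    # Stream the graph's state after each step and print the latest message
    async for event in graph.astream(
        {"messages": [HumanMessage(content=query)]},
        config,
        stream_mode="values",
    ):
        event["messages"][-1].pretty_print()


async def chat_loop(graph):
    # Interactive loop: keep reading queries until the user types 'exit'
    config = {"configurable": {"thread_id": "1"}}  # thread id used by the checkpointer
    while True:
        query = input("Query (type 'exit' to quit): ").strip()
        if query.lower() == "exit":
            break
        await stream_responses(graph, query, config)
```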
Final assembly: add the main entry point, where we convert the Permit MCP server tools
to LangGraph-compatible tools, bind our LLM to the resulting tools, set up the graph,
draw it to a file, and fire up the chat loop.
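Under the same assumptions, the entry point could look roughly like this, using the langchain-mcp-adapters load_mcp_tools helper and the MCP stdio client; workflow_graph.png matches the output file mentioned below:

```python
from langchain_mcp_adapters.tools import load_mcp_tools
from mcp import ClientSession
from mcp.client.stdio import stdio_client


async def main():
    # Launch the Permit MCP server over stdio and open a client session to it
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Convert the Permit MCP tools into LangGraph-compatible tools
            tools = await load_mcp_tools(session)

            # Bind the tools to the LLM and compile the workflow graph
            graph = setup_graph(tools)

            # Draw the compiled graph to a file, then start the chat loop
            with open("workflow_graph.png", "wb") as f:
                f.write(graph.get_graph().draw_mermaid_png())

            await chat_loop(graph)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```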
Lastly, run the client.
Once you've saved everything, start the client.
After running, a new image file called workflow_graph.png will be created, which shows the compiled graph.
With everything set up, we can now send queries to the agent.
Your agent is now able to call MCP tools dynamically.
Adding human-in-the-loop with interrupt. With your LangGraph-powered
MCP client up and running, Permit tools can now be invoked automatically. But what happens
when the action is sensitive, like granting access to a restricted resource or approving
a high-risk operation? That's where LangGraph's interrupt becomes useful. We'll now add a human approval
node to intercept and pause the workflow whenever the agent tries to invoke critical tools.
A human will be asked to manually approve or deny the tool call before the agent proceeds.
Define the human review node: at the top of your file, add a function that
checks whether the tool being called is considered high-risk.
If it is, the graph is interrupted with a prompt asking for human confirmation. Update the graph routing: modify the routing function so that tool calls are routed to the human review node instead of running immediately.
Wire in the HITL node: update the graph setup function to add the human review node to the graph.
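Putting those three changes together, a sketch might look like the following; the names in HIGH_RISK_TOOLS are placeholders for whichever Permit MCP tools you treat as sensitive, and the approve/deny convention is an assumption:

```python
from typing import Literal

from langchain_core.messages import ToolMessage
from langgraph.types import Command, interrupt

# Placeholder names -- replace with the Permit MCP tools you consider sensitive
HIGH_RISK_TOOLS = {"request_access", "approve_operation_approval"}


def human_review_node(state: AgentState) -> Command[Literal["tools", "llm"]]:
    """Pause the graph and ask a human before any high-risk tool call runs."""
    last = state["messages"][-1]
    tool_calls = getattr(last, "tool_calls", None) or []

    if any(call["name"] in HIGH_RISK_TOOLS for call in tool_calls):
        # interrupt() suspends the run and surfaces this payload to the caller;
        # the value passed to Command(resume=...) later becomes the return value here
        decision = interrupt({
            "question": "Approve this sensitive tool call? (yes/no)",
            "tool_calls": tool_calls,
        })
        if str(decision).strip().lower() not in ("yes", "y", "approve"):
            # Denied: answer each pending tool call so the LLM can explain and recover
            denials = [
                ToolMessage(content="Denied by human reviewer.", tool_call_id=call["id"])
                for call in tool_calls
            ]
            return Command(goto="llm", update={"messages": denials})

    # Approved, or not high-risk: continue to the tool node as usual
    return Command(goto="tools")
```

Inside setup_graph, the conditional edge would then return "human_review" instead of "tools" whenever a tool call is present, builder.add_node("human_review", human_review_node) registers the new node, and the conditional edge map becomes ["human_review", END].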
Handle human input during runtime: finally,
let's enhance the streaming function to detect when the graph is interrupted,
prompt for a decision, and resume execution with the human's input.
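A revised version of the earlier streaming helper, sketched under the same assumptions, could detect the pause by checking the graph state and resume with Command(resume=...):

```python
from langchain_core.messages import HumanMessage
from langgraph.types import Command


async def stream_responses(graph, query, config):
    # Stream output as before; if the graph pauses in human_review_node,
    # collect a decision from the terminal and resume from the saved checkpoint
    inputs = {"messages": [HumanMessage(content=query)]}
    while True:
        async for event in graph.astream(inputs, config, stream_mode="values"):
            if event.get("messages"):
                event["messages"][-1].pretty_print()

        # A non-empty `next` means the run is paused and waiting for human input
        state = await graph.aget_state(config)
        if not state.next:
            break

        decision = input("A sensitive action needs approval (yes/no): ").strip()
        inputs = Command(resume=decision)  # resumes execution inside interrupt()
```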
After running the client,
your graph diagram will now include a human review node between the LLM and tool execution stages.
This ensures that you remain in control whenever the agent tries to make a decision that could
alter permissions or bypass restrictions. With this, you've successfully added human oversight to your AI agent, without rewriting
your tools or back-end logic.
Conclusion
In this tutorial, we built a secure, human-aware AI agent using Permit.io's Access Request
MCP, LangGraph, and LangChain MCP Adapters.
Instead of letting the agent operate unchecked, we gave it the power to request access
and defer critical decisions to human users,
just like a responsible team member would.
We covered how to model permissions and approval flows
using Permit Elements and ReBAC,
how to expose those flows via the Permit MCP server,
how to build a LangGraph-powered client
that invokes these tools naturally, and how to insert real-time human-in-the-loop (HITL) checks using LangGraph's interrupt.
Want to see the full demo in action? Check out the GitHub repo.
Further reading: Secure AI Collaboration Through a Permissions Gateway,
the Permit MCP GitHub repo, the LangChain MCP Adapters docs,
Permit ReBAC policies, and the LangGraph reference.
Thank you for listening to this Hacker Noon story,
read by Artificial Intelligence.
Visit hackernoon.com to read, write, learn and publish.