The Good Tech Companies - Unlocking Autonomous AI: iExec's Matthieu Jung on Building Trust in a Decentralized Future
Episode Date: June 13, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/unlocking-autonomous-ai-iexecs-matthieu-jung-on-building-trust-in-a-decentralized-future. Explore the future of AI with iExec's Matthieu Jung as he discusses 'Trusted AI Agents' – a decentralized approach to AI automation prioritizing privacy, verifiability, and composability through confidential computing and Intel TDX enclaves. Check more stories related to tech-stories at: https://hackernoon.com/c/tech-stories. You can also check exclusive content about #iexec, #eliza-os, #blockchain, #llm, #cryptocurrency, #ai, #good-company, #ai-agents, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Unlocking Autonomous AI
iExec's Matthieu Jung on Building Trust in a Decentralized Future
by Ishan Pandey
As the line between AI automation and autonomous agency blurs,
a new technical frontier is emerging: trusted, verifiable agents that can act independently across ecosystems.
We sat down with Matthieu Jung, Product Marketing Manager at iExec,
to explore what this shift means for the future of intelligent infrastructure,
what problems it solves, and how developers and companies should be thinking differently about agent-based computation.
Ishan Pandey: Thanks for joining us.
To start, can you explain what 'trusted AI agents' actually are in your framework, and
how they differ from the typical AI agents we see in automation or LLM-based products?
Matthieu Jung:
We all know AI agents are the new interface.
They change how we create, build, and collaborate.
But to scale, agents need one thing.
Trust.
Privacy is non-negotiable.
Agents work with sensitive data, private prompts, crypto assets, wallets.
Without privacy there's no trust, and without trust agents don't scale.
That's why trusted AI agents run entirely inside secure enclaves like Intel TDX.
Our confidential computing stack keeps their logic, data, and actions private, even from the host, while generating verifiable
proofs of execution.
Unlike typical LLM-based agents that rely on central APIs and expose user data, iExec
provides the trust layer so agents can act autonomously with full confidentiality and
proof.
Ishan Pandey: The collaboration between iExec and Eliza OS
seems to merge privacy infrastructure with autonomous computation.
How are you technically achieving both verifiability and confidentiality? Aren't they often in conflict?
Matthieu Jung: That's a great question, and it's central to what makes trusted AI agents different.
The tension between confidentiality and verifiability is real in most architectures, but not in ours.
With the Eliza OS x iExec stack, we're leveraging trusted execution environments (TEEs), specifically
Intel TDX, to isolate not just the agent logic built with Eliza OS, but also the AI model
itself, the one used to train or fine-tune the agent.
This is crucial, it means the entire runtime environment,
including both the logic and the sensitive model parameters, is protected against external access,
even from cloud hosts or system administrators. At the same time, these enclaves generate
cryptographic proofs that certify the integrity of the execution. This is how we achieve verifiability.
Anyone can verify what was executed without seeing how it works.
There's no need to choose between exposing the model for auditability or hiding it at the cost of trust.
We provide both thanks to the architecture of confidential computing.
So to sum up: TDX enclaves bring confidentiality by securing the model and logic, and verifiability by producing proofs of correct execution.
That's the foundation of trusted AI agents.
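To make that proof idea concrete, here is a minimal, purely illustrative sketch of how a proof of execution can bind together the code, inputs, and outputs of an agent run. All names here (ENCLAVE_KEY, make_proof, verify_proof) are hypothetical, and HMAC-SHA256 stands in for the asymmetric attestation signature a real TDX enclave would produce; with a real attestation key, anyone holding only the public key could verify, which is not true of the HMAC shorthand below.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a signing key that never leaves the enclave.
ENCLAVE_KEY = b"secret-key-held-only-inside-the-enclave"

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_proof(code: bytes, inputs: bytes, outputs: bytes) -> dict:
    """Produced inside the enclave: commits to code, inputs, and outputs
    via hashes, so the proof reveals nothing about their contents."""
    body = {
        "code_hash": sha256(code),
        "input_hash": sha256(inputs),
        "output_hash": sha256(outputs),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_proof(proof: dict) -> bool:
    """Run by a verifier: recomputes the signature over the commitments,
    without ever seeing the private code, inputs, or outputs themselves."""
    body = {k: v for k, v in proof.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["signature"])

proof = make_proof(b"agent-logic-v1", b"private prompt", b"trade decision")
assert verify_proof(proof)        # untampered proof checks out
proof["output_hash"] = sha256(b"forged output")
assert not verify_proof(proof)    # any tampering is detected
```

The key property this sketch captures is the one Jung describes: the verifier checks integrity against hash commitments, so verification never requires exposing the model, the prompts, or the outputs.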
Ishan Pandey
In a world where many AI startups rely heavily on centralized APIs and closed models,
what does it take to build autonomous agents that are truly decentralized by design?
Matthieu Jung
It takes a new stack that adds private, verifiable execution to APIs.
With confidential computing, iExec makes it easy to prove every action on-chain.
iExec just launched the iApp Generator, a dev tool that lets developers build
and deploy confidential apps in minutes. Our team is also launching MCP (Model Context
Protocol) servers optimized for trusted agents
so they can scale while remaining verifiable.
Ishan Pandey
How does the iExec infrastructure help agents prove their work, state, or logic to external
parties?
Matthieu Jung
iExec runs agents inside Intel TDX enclaves that generate signed proofs of the code, inputs,
and outputs. Those proofs can be
shared on-chain so anyone can verify the enclave's identity and confirm the agent did exactly what
it promised, without ever exposing the private data or code.
Ishan Pandey
Let's talk about risk. What's the potential downside of allowing autonomous AI agents to operate across chains or applications?
Are we ready for truly composable intelligence?
Matthieu Jung:
Allowing agents to roam across chains or apps raises new attack surfaces and unexpected
interactions.
A bug in one protocol could cascade into another.
Plus, without clear standards, proving liability or reversing bad actions gets tricky.
That's why trust and governance are still so important when it comes to decentralized agents and automating actions on the blockchain.
Ishan Pandey: Can you walk us through a hypothetical example
or use case where a trusted AI agent is deployed in the wild: what it does, how it proves its
execution, and why it's better than traditional models?
Matthieu Jung: Imagine an AI agent that trades for you, managing DeFi
strategies, reading signals, placing trades, and adapting portfolios. It needs to interact
with sensitive data and assets like your private key or wallet. With iExec, the agent's logic runs
confidentially in a TEE. Traders can protect their datasets, share access securely with a model,
and even receive confidential alerts.
Every trade and every decision is executed securely and verifiably.
They can even monetize their private trading data.
You don't have to trust the dev or the infra.
Ishan Pandey
Finally, for builders and investors watching the trusted AI agent narrative unfold,
what are the signals they should be paying attention to over the next 12 months?
Matthieu Jung:
We already know agents are the new interface.
And unfortunately, that means we will start seeing leaks of agent prompts and data
exposures, and that will underscore why private execution matters.
Another innovation that will take agents to the next level is MCP, or the Model Context Protocol.
It lets agents securely share encrypted snapshots of their working state as they move and iterate
across multiple apps.
iExec has already deployed an MCP server.
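The idea of agents carrying sealed snapshots of their working state between apps can be sketched in a few lines. This is a hypothetical illustration, not MCP's actual wire format: the names (seal_snapshot, open_snapshot, SNAPSHOT_KEY) are invented, and a real deployment would use authenticated encryption (e.g., AES-GCM) to keep the state confidential as well, whereas the HMAC tag below only guards integrity.

```python
import hashlib
import hmac
import json

# Illustrative key shared between the apps the agent trusts.
SNAPSHOT_KEY = b"shared-between-the-agents-trusted-apps"

def seal_snapshot(state: dict) -> dict:
    """The agent serialises its working state and attaches an integrity
    tag, so the next app can detect any tampering in transit."""
    payload = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(SNAPSHOT_KEY, payload, hashlib.sha256).hexdigest()
    return {"state": state, "tag": tag}

def open_snapshot(sealed: dict) -> dict:
    """The receiving app recomputes the tag before trusting the state."""
    payload = json.dumps(sealed["state"], sort_keys=True).encode()
    expected = hmac.new(SNAPSHOT_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sealed["tag"]):
        raise ValueError("snapshot was modified in transit")
    return sealed["state"]

snap = seal_snapshot({"step": 3, "context": "portfolio review"})
restored = open_snapshot(snap)
assert restored["step"] == 3
```

The design point is portability: because the snapshot is self-verifying, an agent can move and iterate across multiple apps without each app having to trust the channel it arrived through.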
These trends will reveal who's truly building decentralized, privacy-first agents versus
those still tied to closed APIs.
Don't forget to like and share the story.
Thank you for listening to this Hacker Noon story, read by Artificial Intelligence.
