In The Arena by TechArena - Inside Confidential Computing with Hushmesh & Invary
Episode Date: October 8, 2025. Allyson Klein hosts Manu Fontaine (Hushmesh) and Jason Rogers (Invary) to unpack TEEs, attestation, and how confidential computing is moving from pilots to real deployments across data center and edge...
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein.
Now, let's step into the arena.
Welcome in the arena. My name is Allyson Klein, and I am really excited today because we're going to be talking about confidential computing, a topic that is growing in importance from data center to edge environments.
We've got two guests from the Confidential Computing Consortium with us: Manu Fontaine, CEO of Hushmesh, and Jason Rogers, CEO of Invary.
Manu, why don't you go ahead and start us off by introducing yourself and your company?
Thank you, Allyson. It's really great to be here. First, I'd like to say I'm actually a general member representative on the governing board of the CCC. So that's one of the reasons why I'm here, to represent the Confidential Computing Consortium. And in my day job, I am the founder and CEO of Hushmesh. We are a dual-use, public benefit cybersecurity startup in the D.C. area.
Then we have a grand vision to revisit, rethink the internet and the web from the chip up,
from a confidential computing point of view, to automate decentralized cryptography everywhere
and secure the entire internet. Just a small goal.
It's a tiny, tiny little goal. But, you know, one light at a time and maybe we'll get there.
Nice. And Jason?
Yeah, thanks, Allyson. Great to be here. I'm Jason Rogers, the CEO of Invary. We're a member of the Confidential Computing Consortium, graduated from their startup tier. So real quick, it's a great organization to join; they make it really economical and provide a lot of value back. So quick plug there.
At Invary, we ensure the security and confidentiality of systems based on technology that we license from the NSA. So in a sense, we're doing a lot of remote attestation of a system, making sure that it is what it says it is and is only doing what it says it's supposed to be doing, which, as we're going to find out, is a key component of confidential computing.
That's fantastic. Jason, do you want to just give, I know our audience has probably heard
a lot about confidential computing, but why don't you just give us a brief definition of
confidential computing and its core capabilities just as a starting point for this conversation?
Yeah, there are really three core concepts. The first starts with data. So for a long time,
we've been really good about encrypting data in transit and data at rest. Well, we kind of forgot about data in use. And at its heart, confidential computing is saying, hey, while we're using data, it should be encrypted and it should only be accessible to those people and those things that absolutely need it.
That's backed by running in a hardware-protected trusted execution environment that isolates the sensitive data from other users, other processes, other hosts on a server, those types of things.
And then the third key tenet is that remote cryptographic attestation that verifies the environment. So again, that your trusted execution environment and your software are all doing what they're supposed to be doing.
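To make that third tenet concrete, here is a minimal Python sketch of the relying-party side of attestation: check that a workload's reported measurement matches a known-good value before trusting it. The report format and the MAC-based check are illustrative stand-ins; real TEE quotes are signed with hardware-rooted asymmetric keys and verified against vendor certificate chains.

```python
# Minimal illustration of the attestation check described above (hypothetical
# report format; real TEE quotes use vendor-rooted asymmetric signatures).
import hashlib
import hmac
import json

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-workload-image").hexdigest()

def verify_report(report_json: str, shared_key: bytes) -> bool:
    """Check that the reported measurement matches the value we expect and
    that the report's MAC is valid (a stand-in for a real quote signature)."""
    report = json.loads(report_json)
    mac = hmac.new(shared_key, report["measurement"].encode(), hashlib.sha256).hexdigest()
    measurement_ok = hmac.compare_digest(report["measurement"], EXPECTED_MEASUREMENT)
    signature_ok = hmac.compare_digest(mac, report["mac"])
    return measurement_ok and signature_ok

if __name__ == "__main__":
    key = b"demo-only-key"
    report = json.dumps({
        "measurement": EXPECTED_MEASUREMENT,
        "mac": hmac.new(key, EXPECTED_MEASUREMENT.encode(), hashlib.sha256).hexdigest(),
    })
    print("attestation verified:", verify_report(report, key))
```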
Yeah, and I would add another dimension of confidential computing that we don't talk about very often, but that is quite foundational too: the ability to generate full-entropy random numbers within the TEE.
And what's really, really interesting about this is that usually cryptography was done outside of the execution environment, away from the computing environment.
And so whatever process was relying on cryptographic material
had to essentially trust an external source of randomness.
By having full entropy random number generations
within the general purpose computing environments,
we can do things that we could never do before.
We can have truly universally unique numbers
and keys that can be generated there.
So uniqueness is one of them.
And with that, you can actually rethink and reinvent identity
from the chip up, and that is a foundational capability that is really interesting.
You know, you pull the thread and you discover that you can actually rethink a lot of things,
like authentication, authorizations, all kinds of things,
all secured within a TEE, protected in hardware, completely isolated from the rest of the system, the host, and the owners of the infrastructure itself.
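As a minimal sketch of that idea, assuming the platform's entropy calls are backed by the TEE's hardware random number generator (a simplification of real sealing and identity mechanisms), the Python below derives a universally unique public identity from a secret seed that never leaves the trusted boundary. The names are illustrative.

```python
# Sketch: deriving a unique identity from in-enclave entropy. Assumes
# secrets.token_bytes draws from the TEE's hardware RNG (a simplification).
import hashlib
import secrets

def new_enclave_identity() -> dict:
    # 256 bits of full-entropy randomness generated inside the trusted boundary.
    private_seed = secrets.token_bytes(32)
    # A public, universally unique identifier derived from the secret seed;
    # the seed itself would never leave the TEE.
    identity = hashlib.sha256(b"identity:" + private_seed).hexdigest()
    return {"identity": identity, "seed": private_seed}

if __name__ == "__main__":
    ident = new_enclave_identity()
    print("public identity:", ident["identity"])
```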
Manu, can you add a little flavor of how confidential computing has evolved over the years, from where we started when the first specs came out to where we are today?
Yeah, so confidential computing has been around for a while, and Allyson, you were telling me that you were there at the beginning of the launch of these technologies about 10 years ago.
Originally, the technology was quite complicated to use because indeed it was very low level.
It was all the way down at the CPU level.
And then there were many layers of software and capabilities that needed to be built on top of this.
That has been taken care of pretty much all the way up.
Now it's quite easy to actually deploy workloads in a variety of trusted execution environments.
You have very low-level application-isolation-type environments.
You have virtual machine-type environments that can be isolated.
And now, over the past couple of years, you can even have GPUs.
So it's not just a CPU anymore.
You can have GPUs that are themselves confidential computers, trusted execution environments that can be attested and verified, and that provide all these cryptographic capabilities.
So in this era of intelligence now and of AI, when you need to be able to protect not just the code, but also, you know, the weights of the model, you need to be able to protect the training data.
You need to be able to protect the whole pipeline.
You need to be able to protect files for RAG, retrieval-augmented generation.
There's all kinds of assets that need to be protected.
And confidential computing is getting to this place now where you can actually stitch together a complete system that is fully protected end-to-end, all the way to the end user, in a way that shields it from virtually any other software process and from the operators of the infrastructure itself.
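One common way to stitch such a pipeline together is a key broker that releases the key protecting model weights or RAG documents only to workloads whose attestation matches policy. The Python below is a minimal sketch under that assumption; KeyBroker and verify_attestation are hypothetical names, and a real broker would validate a hardware-signed quote against policy rather than compare a string.

```python
# Sketch of the "release keys only to verified workloads" pattern.
import secrets

def verify_attestation(measurement: str, expected: str) -> bool:
    # Stand-in for verifying a hardware-signed attestation quote.
    return secrets.compare_digest(measurement, expected)

class KeyBroker:
    """Holds the key that protects model weights / RAG documents and hands it
    out only to workloads whose attested measurement matches policy."""
    def __init__(self, expected_measurement: str):
        self.expected = expected_measurement
        self.data_key = secrets.token_bytes(32)

    def release_key(self, attested_measurement: str) -> bytes:
        if not verify_attestation(attested_measurement, self.expected):
            raise PermissionError("attestation failed: key withheld")
        return self.data_key

if __name__ == "__main__":
    broker = KeyBroker(expected_measurement="abc123")
    print(len(broker.release_key("abc123")), "byte key released to verified TEE")
```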
That's fantastic. When we look a little bit deeper and look at attested trusted
execution environments, Jason, can you give a sense of what these environments offer that previous security measures couldn't, and why that's specifically vital for AI applications?
Absolutely. First and foremost, we mentioned it before, but it is ensuring the safety of the data in use. And if you think about data from an AI perspective, you have your raw data, your training data, the weight data, the inference data. This is the most valuable, the most private, the most confidential information an organization can have. So it's critically important that we protect it, not just when it's on disk or when it's moving around, but while it's being used, all the time.
And that is one of the main tenets that confidential computing is bringing to AI.
I think about it through a zero-trust lens, right? So rather than just assuming I might be okay in all these ways, being able to confirm that we're running inside a TEE, in a confidential environment, and then attest to that, gives a lot of confidence back to the organization that valuable data stays protected.
You know, the applications where AI is really shining are the applications where you actually disclose a lot of very sensitive data. It could be personal AI. It could be health care, finance. That's where the magic's going to happen. I think there's also the challenge of securing data across domains, when you have a supply chain or multiple companies that want to collaborate on something. They want to protect their own intellectual property, their own data. If you put it
in confidential computing environments, in TEEs, then all participants in a joint calculation,
a multi-party computation, as it's called,
can have the assurance, the verified assurance
through attestation that none of that data,
none of that intelligence is going to leak.
Going back to defense-type applications
or security applications,
in the era of AI,
intelligence leakage is going to be a competitive penalty.
You don't want to leak any data to any competitor
or to the operators of the system.
So that highly regulated, highly sensitive information will be best processed within confidential computing environments.
And in that environment, too, you have all these collaborators.
The risk of an intruder or a bad actor internally is minimized when you're operating
in this environment because only those things that have access can read that data.
Very key.
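As a toy sketch of that multi-party pattern, the Python below uses a hypothetical enclave object standing in for code running inside an attested TEE: collaborators each verify the enclave's measurement before contributing their private values, and only the joint result ever leaves. The class names and flow are illustrative, not a defined protocol.

```python
# Toy sketch of TEE-based multi-party computation: individual inputs stay
# inside the (simulated) enclave; only the aggregate result is returned.

class JointComputationEnclave:
    """Stands in for code running inside an attested TEE."""
    def __init__(self):
        self._inputs = []
        self.measurement = "enclave-measurement-v1"  # what attestation would report

    def submit(self, party: str, value: float) -> None:
        self._inputs.append((party, value))

    def joint_average(self) -> float:
        return sum(v for _, v in self._inputs) / len(self._inputs)

def party_submits(enclave, name, value, expected_measurement):
    # Each collaborator verifies the enclave before revealing anything.
    assert enclave.measurement == expected_measurement, "attestation mismatch"
    enclave.submit(name, value)

if __name__ == "__main__":
    enclave = JointComputationEnclave()
    for name, value in [("bank_a", 4.2), ("bank_b", 3.8), ("bank_c", 5.0)]:
        party_submits(enclave, name, value, "enclave-measurement-v1")
    print("joint result only:", enclave.joint_average())
```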
That's awesome.
You know, I've been doing a bunch of AI practitioner interviews with folks in financial services and health care of late, and they keep mentioning the importance of confidential computing, which is music to my ears. Manu, as the team has worked together, one of the things these practitioners talk about
is performance at scale as well. So how does the team work together to provide this protected
execution while maintaining performance capabilities that customers need? Yeah, I think there's a little
bit of a misconception about performance, because people think that, oh, if you do stuff at the chip level and you do encryption in real time, when the data is in use, you'll have a penalty. That may have been true in the beginning of confidential computing. It's also true that people confuse it with different PETs, privacy-enhancing technologies, like homomorphic encryption; they may confuse confidential computing with these other technologies. Homomorphic encryption really does put a strong performance penalty on many, many types of workloads. Confidential computing has matured now to a state where the penalty is actually minimal. We're talking about a few percent at most of a penalty. So the challenge really with confidential computing technology today is not so much about performance, because essentially the cost, the few percent of performance that you lose, is in fact a huge gain on privacy, confidentiality, assurance, all that good stuff that you don't have to worry about after the fact. But it's really a problem of awareness. And so that's why I think
Jason and I are so excited to be here and talk about this because there is not enough awareness
out there. People need to know that this is now possible. And I'm sure for Jason and myself and
in fact, I think the whole CCC, we believe that it's simply the future of computing. So the sooner
you get on board, the sooner you dispel and leave behind those misconceptions that performance is going to be impacted, and the sooner you embrace the fact that you're going to have this assurance
of security and privacy and confidentiality, the better everybody is going to be because now we can all
collaborate together, make the technology better, and then improve the stack, and then migrate
to a secure future much, much faster.
I'd add, too, I think there's a misnomer that it's hard or not easily available, and I think those are not true. There's been a ton of innovation over the last five years. The availability of confidential computing on all the cloud service providers has been increasing, and the software stack that enables managing these workloads and performing the attestation, from all the member groups of the Confidential Computing Consortium, has made this a lot easier. So not only is there a misconception that it's not performant, but I think there's a misconception that it's hard.
Now, Jason, can you walk us through how your system uses TEEs or similar
isolation mechanisms to keep model code and data secure in production?
Yeah, from an Invary perspective, like I mentioned, you know, we've focused a lot in our product on that remote attestation component, making sure that systems are who they say they are and doing what they say they do. We, of course, operate that internally, from our workstations all the way to our cloud software. And then, more appropriately, we run our operations within a confidential computing environment to keep our data safe and our customers' data safe as well.
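As a loose illustration of the runtime-integrity idea behind that remote attestation work, the Python sketch below compares current measurements of watched artifacts against a known-good baseline. The file list and flow are hypothetical; this is not Invary's product or the NSA-licensed technology itself.

```python
# Illustrative runtime-integrity check: re-measure artifacts and compare
# against a baseline captured at a trusted point in time.
import hashlib
from pathlib import Path

def measure(paths):
    """Hash each artifact; a real system would cover kernel text, modules,
    and critical binaries rather than a single script."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def integrity_ok(current: dict, baseline: dict) -> bool:
    # The system "is what it says it is" only if every measured artifact
    # still matches its known-good digest.
    return current == baseline

if __name__ == "__main__":
    targets = [__file__]             # hypothetical artifact list to watch
    baseline = measure(targets)      # captured once, at a trusted point in time
    print("runtime integrity:", integrity_ok(measure(targets), baseline))
```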
What concrete improvements in throughput, compliance posture, or risk reduction have customers realized by adopting these solutions for AI workloads? I bet both of you will have some perspective on that one. Manu, do you want to start?
I think privacy, of course, is an important one.
The fact that you can now secure information from the end user all the way to the model
and then all the way to the output and the relying party on the output of the model
is, of course, very transformational.
That's what we at Hushmesh are stitching together. We are trying to revisit the entire infrastructure of the internet today and prepare for the agentic era.
So in the agentic era, what we envision is that every human on the planet, every organization, and every thing will have a personal, dedicated software agent that's going to act on their behalf. So the ultimate state of the internet is when we will be able to delegate a whole lot of actions to those software agents, which will then be able to act on our behalf. So I think the idea of securing privacy when you have, indeed, a piece of software that acts for you is, of course, paramount.
And then the ability to protect confidentiality whenever organizations collaborate with one another is another one.
Over the next few years,
virtually every nation around the planet
is planning to come out with more and more regulations,
protecting their citizens,
protecting their own businesses and organizations,
the concept of sovereign AI.
I'm sure that the French will want to have a French AI
that has a French accent.
Those things are basically coming up
and they basically cannot be resolved
without the use of confidential computing.
So you need that low-level security,
that uniqueness, that revisiting of the identity, this automated decentralized cryptography
to be able to address those upcoming regulations, which, by the way, every nation doing their
own patchwork of regulations becomes untenable for global businesses. Some large financial organizations I know are operating in over 100 countries. So they're looking for a solution
that can solve the problem once and for all. And confidential computing is that baseline.
And I think you're going to see that in healthcare, you're going to see that in supply chain,
you're going to see that in defense, in government services.
All of those high-value, high-stake-type use cases will have to be harmonized and secured in the best way possible.
And that's going to be at the chip level.
I would just add specifically the European Union passed their AI Act.
It specifies that systems must be resilient against unauthorized access, things that confidential computing already takes care of.
I think that's a good indication of what's to come from the others.
Hopefully, you know, we can come to a common set of requirements.
But you can then walk that back to more familiar requirements, HIPAA, GDPR. They all say the same thing. And confidential computing is bringing the ability to be compliant with all these things in one go
through this isolation and encryption that we're talking about.
So I guess one question that I have as we look ahead is,
does confidential computing provide a foundation for all computing in an AI era? Are there limitations there? And what emerging innovation do you believe will be augmentative to that for AI-driven applications? I loved what you said, Manu, about the agentic computing vision. Are we looking at additional needs on the horizon that will need to be added to the evolution of confidential computing, or other technologies, to fully address the needs for privacy and security?
I think the name of the game is to close every single security hole that is still left.
So now we protect CPUs, we protect GPUs.
Virtually every chip manufacturer today has confidential computing in their roadmaps.
So ARM for embedded chips in smartphones and devices, whatever, even the RISC-V architecture, Qualcomm, everybody else.
everybody else.
All these companies have their own roadmap.
So you're going to have these chips everywhere.
I think the one that is particularly interesting in the coming horizon is the interconnection of those chips within a single system.
It's called TEE I/O. It secures the I/O, the communication between different chips: between the CPU and the GPU, but also the CPU and the network chip.
All of those gaps are getting closed all the way down at the hardware level.
And then for us, what's really interesting for the agentic era in particular is the ability to enable those agents to talk to one another across domain boundaries.
Today, if you think about the whole web and the whole IT paradigm,
there's always insiders.
There's always some sort of privileged access that is being managed within domains.
Confidential computing enables an architecture that eliminates that privileged access
to the information that's managed across agents that could operate and collaborate
autonomously across domains.
And that, to me, is the next step: all the way down at the system level, the interconnections between chips, and all the way up at the internet layer, the interconnection between workloads, between those agentic AI, agentic intelligences, that will be able to act on behalf of virtually any entity around the world.
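As a rough sketch of that cross-domain idea, the Python below gates an agent-to-agent exchange on both sides' attested measurements matching a trust policy before any session key is shared. The Agent class and handshake are illustrative, not a Hushmesh or CCC-defined protocol.

```python
# Sketch: cross-domain agent exchange gated on mutual attestation.
import secrets

class Agent:
    def __init__(self, name: str, measurement: str):
        self.name = name
        self.measurement = measurement   # what this agent's TEE attests to
        self.session_key = None

    def handshake(self, peer: "Agent", trusted: set) -> bool:
        # Each side checks the other's attested measurement against policy
        # before any data or credentials cross the domain boundary.
        if peer.measurement not in trusted or self.measurement not in trusted:
            return False
        self.session_key = peer.session_key = secrets.token_bytes(32)
        return True

if __name__ == "__main__":
    trusted = {"agent-build-2025.10"}
    alice = Agent("alice@domain-a", "agent-build-2025.10")
    bob = Agent("bob@domain-b", "agent-build-2025.10")
    print("channel established:", alice.handshake(bob, trusted))
```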
If you combine those innovations, I think, with the growing availability of confidential computing and compute from cloud service providers, it's a huge deal.
So every day, every month, we're seeing more and more systems move to this, more availability of the hardware, the chips, et cetera.
So it's a really exciting time.
I love this interview.
This was so interesting.
I think this is such an interesting space.
Can't believe the progress that has been made in this space over just a few years.
I think that our listeners value privacy and security and will want to learn more,
not just about confidential computing and what the consortium is doing, but also about your
companies. Where can they go to learn more about the topics we discussed, get involved in the
consortium, and then connect with you and your teams? The confidential computing site is
confidentialcomputing.io. It's a Linux Foundation project as well, or organization. So you can find
it through that. You can find Invary at Invary.com. And Manu? Yeah, yeah, definitely. By the way,
you should join, indeed, like Jason was saying, join the Confidential Computing Consortium.
It's a bunch of very smart people, very friendly people.
It is hosted by the Linux Foundation.
So you should definitely join and see what's going on.
You can participate in all our meetings.
It's open doors.
Everything is recorded.
You can find us on YouTube.
You just need to search YouTube for Confidential Computing Consortium. So every workgroup and every meeting is recorded and available to the public.
And as far as we are concerned, you can find us at hushmesh.com, or you can literally go and start playing with our messaging application, which is built on top of the Mesh infrastructure. You can go to m.sh, like mesh, and send me a message whenever you get there.
I will be checking out you guys and be following you moving forward.
This is a fantastic view into what is the cutting edge of trusted computing. So thank you so much for coming on the show, highlighting what the consortium is doing, and sharing your perspectives.
It was a real treat to interview you both.
Thank you very much, Allyson.
Thank you.
Thanks for joining Tech Arena.
Subscribe and engage at our website, techarena.ai.
All content is copyright TechArena.
Thank you.