The Good Tech Companies - The Missing Layer in AI Security: Why "Data-in-Use" Is the Next Battleground
Episode Date: March 17, 2026. This story was originally published on HackerNoon at: https://hackernoon.com/the-missing-layer-in-ai-security-why-data-in-use-is-the-next-battleground. Confidential AI protects data while it’s being used, closing a major security gap as AI spreads into real workflows and critical infrastructure. Check more stories related to cybersecurity at: https://hackernoon.com/c/cybersecurity. You can also check exclusive content about #ai-security, #confidential-computing, #private-ai, #data-in-use-security, #ai-infrastructure-security, #tee-attestation, #good-company, #hackernoon-top-story, and more. This story was written by: @ollm. Learn more about this writer by checking @ollm's about page, and for more stories, please visit hackernoon.com. The average global breach cost rose to $4.88M in 2024, up 10% from the previous year. In H1 2025 alone, 1,732 breaches exposed over 165 million records. Qatar's Bet: Confidential AI Gets Physical.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
The missing layer in AI security. Why data in use is the next battleground.
By OLLM. If you've ever pasted proprietary code into an AI assistant at 2 a.m., you've already
lived the tension confidential AI is designed to resolve.
AI spread through organizations the way spreadsheets did, quietly, everywhere, and faster than
governance can keep up.
Teams route contracts, customer tickets, and code through model endpoints because it works,
not because anyone has verified where that data actually goes. This convenience widens the blast radius
of a breach. The average global breach cost rose to $4.88M in 2024, up 10% from the previous year.
In H1 2025 alone, 1,732 breaches exposed over 165 million records. The thing is, traditional security
covers two states well, data at rest and data in transit, but not data in use. At that point, the data
typically becomes visible to the system, the hypervisor, and the people who operate it. Confidential
computing goes after this third state. It keeps information encrypted in memory and processes it only
inside a trusted execution environment, or TEE, a hardware-isolated enclave that even the operators
can't peek into. The verification piece is the hinge. If you can remotely prove where computation
happened and confirm no one tampered with it, trust becomes an inspectable property of the stack.
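To make that inspectable property concrete, here is a minimal Python sketch of the checks a relying party performs on attestation evidence. The AttestationEvidence shape and the signature helper are illustrative assumptions, not any vendor's actual API; real verifiers for Intel TDX or AMD SEV-SNP perform the same three checks against vendor-defined formats.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical evidence shape for illustration. Real TEE reports
# (Intel TDX, AMD SEV-SNP, Nvidia GPU attestation) carry the same
# three ingredients in vendor-specific formats.
@dataclass
class AttestationEvidence:
    measurement: bytes   # hash of the code/config loaded into the enclave
    report_data: bytes   # caller-supplied nonce bound into the signed report
    signature: bytes     # signed by a key rooted in the hardware vendor

def signature_chains_to_vendor(evidence: AttestationEvidence,
                               vendor_root_cert: bytes) -> bool:
    """Placeholder for the vendor-specific certificate-chain check."""
    raise NotImplementedError

def verify(evidence: AttestationEvidence,
           vendor_root_cert: bytes,
           expected_measurement: bytes,
           my_nonce: bytes) -> bool:
    # 1. Did genuine TEE hardware sign this report?
    if not signature_chains_to_vendor(evidence, vendor_root_cert):
        return False
    # 2. Is the report fresh, i.e. bound to my nonce (no replay)?
    if evidence.report_data != hashlib.sha256(my_nonce).digest():
        return False
    # 3. Is the enclave running exactly the code I expect?
    return evidence.measurement == expected_measurement
```

If all three checks pass, "where did my computation run, and was it tampered with" stops being a contractual promise and becomes something the caller can test.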
The gap that policies can't close. Developers are shipping faster than ever, and AI is the accelerant.
But rapid, AI-assisted output can be hard to audit and easy to deploy with invisible risk,
especially when provenance and accountability are fuzzy. That rough draft your AI assistant
generated? It doesn't fully internalize your security constraints. It can't. Keysight's recent
research on PII disclosure flags how easily sensitive information, the kind companies least
want appearing in logs or third-party systems, can leak through prompts and model outputs.
Security teams are now treating what leaks in prompts as something measurable, attackable,
and increasingly regulated.
Meanwhile, credential theft remains stubbornly effective.
In basic web application attacks, about 88% of breaches involve stolen credentials.
If the modern breach often starts with someone logged in, then pushing more sensitive work
into more tools doesn't just add risk, it multiplies it.
In standard deployments, prompts and context must exist in plaintext somewhere to be processed.
You can redact, you can filter, but somewhere in the stack, the data has to be readable,
which means it's vulnerable.
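As a quick illustration of that limit, here is a toy Python sketch of prompt redaction before an upstream call. It is not a recommendation: the regexes only catch the patterns you predicted, and the sensitive prompt still exists in plaintext in this process's memory and wherever the request is served.

```python
import re

# Toy redaction: strip obvious secrets before sending a prompt upstream.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

prompt = "Debug this: client key sk-abc123deadbeef4567890XYZ, owner jane@corp.com"
print(redact(prompt))
# Catches the key and the email, but not the proprietary logic, names,
# and context around them. And the original string is still readable in
# memory here, and again wherever the model actually processes it.
```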
Confidential computing shifts that baseline by running workloads inside isolated hardware
environments that keep data and code protected during execution.
This reduces who or what can observe the process, including privileged infrastructure layers.
It doesn't eliminate every application-layer risk, but it narrows an entire class of exposure.
Qatar's bet: confidential AI gets physical.
The fastest way to tell whether a category is real is to watch where money and megawatts go.
Confidential AI is now entering the language of facilities and power grids.
Qatar is building a confidential AI data center that represents one of the first physical
manifestations of this shift. The facility will use confidential computing chips, including
Nvidia hardware, to keep data encrypted through every stage of processing. AILOAI, MBK Holding, and
OLLM are partnering on the project, with an initial $183 million investment and ambitions to scale over time.
OLLM's role particularly matters here because there's a difference between claiming support for
confidential AI and actually building the infrastructure required to support it.
Their pitch is less about a better model and more about a safer way to access and run a growing
catalog of models.
Think of it as an AI gateway:
One API that aggregates access to multiple providers and AI models while offering verifiable
privacy guarantees. Confidential AI is an easy phrase to slap on a landing page.
So the credibility test is simple. Can you prove it?
OLLM's partnership trail suggests they can.
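For a sense of what that gateway pattern looks like from the developer's side, here is a short Python sketch: one endpoint fronting many providers and models, with attestation surfaced alongside the response. The base URL, model name, and attestation_id field are assumptions for illustration, not OLLM's documented API.

```python
import requests

# Hypothetical gateway endpoint and fields, for illustration only.
GATEWAY = "https://gateway.example.com/v1"

resp = requests.post(
    f"{GATEWAY}/chat/completions",
    headers={"Authorization": "Bearer <YOUR_KEY>"},
    json={
        "model": "llama-3.1-70b",  # one API, many providers and models
        "messages": [{"role": "user", "content": "Summarize this contract..."}],
    },
    timeout=60,
)
body = resp.json()
print(body["choices"][0]["message"]["content"])

# The confidential-AI difference: the response can reference attestation
# evidence you can verify, rather than a policy you must take on faith.
attestation_id = body.get("attestation_id")  # illustrative field name
```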
One differentiator OLLM emphasizes is a confidential-compute-only posture paired with zero data
retention, or ZDR: models are deployed on confidential computing chips, and the platform is designed
not to retain prompts or outputs. The only data OLLM keeps is operationally necessary and deliberately
non-content, such as token consumption plus TEE attestation logs that aren't linked to users or data,
so attestations can be shown publicly for transparency in the OLLM dashboard. That's the philosophical
shift: treat model access like critical infrastructure, where minimizing retained data and making
execution verifiable are first-class product requirements, not policy footnotes.
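To illustrate what deliberately non-content retention can mean, here is a hypothetical example of the per-request record such a platform might keep. The field names are assumptions for illustration, not OLLM's actual schema.

```python
# What a zero-data-retention operator might keep per request: enough to
# bill and to prove where execution happened, nothing that reconstructs
# the conversation. Field names are illustrative, not OLLM's schema.
log_entry = {
    "timestamp": "2026-03-17T02:14:09Z",
    "model": "llama-3.1-70b",
    "prompt_tokens": 412,       # billing needs counts, not content
    "completion_tokens": 128,
    "tee_attestation_ref": "sha256:9f2c...",  # publicly showable proof
    # absent by design: user id, prompt text, output text
}
```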
OLLM's partnership with Phala is the clearest prove-it story today. It expands its footprint through
Phala's confidential AI infrastructure, where workloads run inside TEEs, including Intel TDX, AMD SEV,
and Nvidia H100/H200-class GPU confidential computing. That matters because each inference can generate
a cryptographic attestation showing it ran on genuine TEE hardware, not on a standard VM with a "trust me" policy.
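Here is a sketch of what per-inference attestation could look like from client code, assuming a hypothetical gateway route that returns the evidence for a specific response; the verification step mirrors the earlier attestation sketch.

```python
import requests

GATEWAY = "https://gateway.example.com/v1"  # hypothetical endpoint, as before

def attested_completion(payload: dict, expected_measurement: str) -> str:
    # Run the inference through the gateway.
    resp = requests.post(f"{GATEWAY}/chat/completions",
                         json=payload, timeout=60).json()
    # Fetch the attestation evidence tied to this specific inference.
    # The attestation_id field and /attestations route are assumptions.
    evidence = requests.get(f"{GATEWAY}/attestations/{resp['attestation_id']}",
                            timeout=30).json()
    # Check the enclave measurement against the image you expect; the
    # signature-chain check from the earlier sketch belongs here too.
    if evidence["measurement"] != expected_measurement:
        raise RuntimeError("response did not come from the expected TEE")
    return resp["choices"][0]["message"]["content"]
```

Refusing to trust output that arrives without valid evidence is the practical difference between a confidential deployment and a standard VM with a policy document.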
Near AI is also building private inference infrastructure powered by TEEs.
This means developers can think in terms of confidential inference as a composable primitive:
Phala is one route, Near AI's private inference is another. Bottom line: why developers should
care. For developers, confidential AI can unlock workflows that were previously awkward, risky,
or stuck in security review limbo. It directly expands where AI can be used in practice.
Proprietary data, like internal playbooks, product designs, and competitive intelligence, can pass
through AI systems without being captured in third-party logs. For regulated industries,
that shift is even more consequential. Instead of being asked to "trust" how data is handled,
banks and healthcare providers can point to cryptographic attestation as evidence,
changing compliance discussions from risk avoidance to controlled deployment.
The same logic applies outside heavily regulated environments.
Teams under pressure to ship quickly do not suddenly become less exposed when a prototype
turns into a product. Confidential execution makes it possible to keep iteration speed
while narrowing what inference can reveal.
As AI agents begin to manage credentials, trigger API calls, and interact with sensitive
systems, the ability to run them without exposing their instructions or data, even to the operators
running the infrastructure, becomes a baseline requirement rather than an advanced feature. As AI becomes
embedded in real workflows, the weakest point in the stack is no longer storage or transport,
but execution. Data in use is where sensitive information is most exposed and least controlled.
Confidential computing doesn't solve every security problem, but it closes a gap that policy,
contracts, and access controls can't. For AI to move from experimentation to infrastructure,
that gap has to be addressed. Thank you for listening to this Hackernoon story, read by
artificial intelligence. Visit hackernoon.com to read, write, learn and publish.
