The Good Tech Companies - Kevan Dodhia’s Builder Journey to Creating the New Policy Layer for AI Agents
Episode Date: October 21, 2025This story was originally published on HackerNoon at: https://hackernoon.com/kevan-dodhias-builder-journey-to-creating-the-new-policy-layer-for-ai-agents. Kevan Dodhia, ...ex-Compute.ai co-founder, launches Alter (YC S25) to pioneer agent authorization—bringing zero-trust security to AI agents in production. Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #alter-yc-s25, #agent-authorization, #kevan-dodhia, #ai-security-compliance, #zero-trust-for-ai-agents, #compute.ai-acquisition, #distributed-systems-security, #good-company, and more. This story was written by: @jonstojanjournalist. Learn more about this writer by checking @jonstojanjournalist's about page, and for more stories, please visit hackernoon.com. Kevan Dodhia, former Compute.ai co-founder, is redefining AI security with Alter, an agent authorization platform that enforces real-time, fine-grained access control for AI agents. Built on his distributed systems expertise, Alter applies zero-trust principles, ephemeral credentials, and auditable policies—making autonomous agents safe and compliant for enterprise deployment.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
Kevan Dodhia's builder journey to creating the new policy layer for AI agents, by Jon Stojan,
journalist.
Kevan Dodhia's career has traced the arc of modern enterprise computing, from building
high-performance distributed SQL engines to pioneering a new category of AI security.
As technical co-founder of Compute.ai, he and his team built a compute engine 5x faster than EMR
Spark to serve data analytics in highly regulated environments. After Compute.ai's 2025 acquisition by
Tiriza, Dodhia turned his attention to a fresh problem: how to make autonomous AI agents safe
and compliant in production. The result is Alter (YC S25), an identity and access control platform
for AI agents that embodies Dodhia's experience with distributed systems, SQL compute engines,
and regulated deployment. In his previous role, Dodhia learned firsthand how critical
auditability and compliance are for financial and government customers. Compute.ai's clients included
the London Stock Exchange Group (LSEG), where every data query and job had to meet stringent governance rules.
"We built a compute engine 5x faster than EMR Spark and sold into highly regulated enterprises
like LSEG," Dodhia said. That experience, scaling low-latency queries across clusters
while satisfying auditors, set the stage for Alter's architecture. The Alter platform
breaks down policy intent into machine enforceable controls, using distributed enforcement points
in the data plane. Just as Dodea architected compute, I's SQL engine for speed and reliability,
alters control layer must be highly performant. Every agent API call is intercepted and checked before
proceeding, so latency is a key trade-off. Dodia said that the system verifies identity and
checks every parameter against policy in real time, a process that inevitably adds a small delay but
is essential for zero trust for AI agents. Alter effectively creates a new security category,
agent authorization. The platform lives between an AI agent and any external tool or database,
authenticating each request with strong identity and enforcing fine-grained policies.
The term agent authorization captures this idea. Just as user sessions undergo identity checks
and permissions, AI agents get an equivalent check. The YC launch materials describe Alter's
approach succinctly: it wraps every
tool call in strong authentication, fine-grained authorization, and real-time guardrails.
In practice, that means an LLM agent cannot issue a data query or execute a transaction without
Alter's approval.
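As a rough sketch of this interception pattern (purely illustrative; none of the class or function names below are Alter's actual API), a deny-by-default gateway that sits between an agent and its tools might look like:

```python
# Hypothetical sketch of an agent-authorization gateway: every tool call
# is checked against policy before it is allowed to execute.
from dataclasses import dataclass


@dataclass
class Request:
    agent_id: str
    tool: str
    action: str
    params: dict


class PolicyEngine:
    """Toy deny-by-default policy engine (illustrative only)."""

    def __init__(self, rules):
        # rules: {(agent_id, tool): set of allowed actions}
        self.rules = rules

    def allows(self, req: Request) -> bool:
        return req.action in self.rules.get((req.agent_id, req.tool), set())


class Gateway:
    def __init__(self, engine: PolicyEngine, tools: dict):
        self.engine = engine
        self.tools = tools  # tool name -> callable

    def call(self, req: Request):
        # Zero trust: every single call is checked; unknown requests are denied.
        if not self.engine.allows(req):
            raise PermissionError(f"{req.agent_id} may not {req.action} on {req.tool}")
        return self.tools[req.tool](req.action, **req.params)


# Usage: this agent may read from the database, but any delete is refused.
engine = PolicyEngine({("agent-1", "db"): {"read"}})
gw = Gateway(engine, {"db": lambda action, **kw: f"{action} ok"})
print(gw.call(Request("agent-1", "db", "read", {})))  # prints "read ok"
```

The key design point, consistent with the article's description, is that the agent never talks to the tool directly: the gateway is the only path, and an absent rule means denial.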
Alter issues ephemeral credentials for each agent action, scopes that expire in seconds,
so there are no long-lived secrets floating around.
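A minimal sketch of that ephemeral-credential idea (again hypothetical; Alter's real token format and TTLs are not public in this piece) could combine a short expiry with single-use consumption:

```python
# Sketch of ephemeral, narrowly scoped, single-use credentials.
import secrets
import time


class TokenIssuer:
    def __init__(self, ttl_seconds: float = 5.0):
        self.ttl = ttl_seconds
        self._live = {}  # token -> (scope, expiry time)

    def issue(self, scope: str) -> str:
        token = secrets.token_urlsafe(16)
        self._live[token] = (scope, time.monotonic() + self.ttl)
        return token

    def check(self, token: str, scope: str) -> bool:
        # Single-use: the token is consumed the moment it is checked.
        entry = self._live.pop(token, None)
        if entry is None:
            return False
        granted, expiry = entry
        return granted == scope and time.monotonic() < expiry


issuer = TokenIssuer(ttl_seconds=5)
token = issuer.issue("db:read")
assert issuer.check(token, "db:read")       # valid exactly once
assert not issuer.check(token, "db:read")   # already consumed
```

Because every credential is scoped to one action and expires in seconds, a leaked token is worthless almost immediately, which is the property the article attributes to Alter.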
Under the hood, Alter converts policy intent into these low-level controls.
For example, a business user might write a policy assigning agents high-level
roles and functions. Alter's compiler translates this into row- and column-level filters on database
queries and even into checks on prompt parameters. This approach delivers fine-grained policy
for LLMs. The system doesn't just say yes or no, but enforces what data and actions are allowed
at the level of individual rows or fields. In Dodhia's words, the system is designed to prevent
risky operations, such as database deletions or unintended transactions, by applying policy-based
safeguards. A core promise of Alter is blocking dangerous agent commands in real time. Since AI agents
can fabricate or misinterpret outputs, as Dodhia noted, enterprises worry about rogue behavior.
Alter addresses this by checking every request against the policy engine. For instance,
Alter's policy rules automatically restrict access attempts that exceed defined permissions or
transaction limits, helping prevent unintended operations. Alter's security model is built to minimize the risk
of harmful commands reaching production, using policies that limit exposure to potential data
or financial errors. Each API call uses an ephemeral token with narrowly scoped privileges;
once used, that credential expires. The effect is a system with no long-lived secrets,
no blind spots, and no surprises in audit. This design reflects Dodhia's distributed-systems
pedigree. Like a fast query engine splitting work across shards, Alter's enforcement is
distributed. Every tool (databases, APIs, cloud services) connects through the control layer.
The platform even supports multi-cloud tools, MCP, and native integrations, with agent-to-agent
(A2A) support coming soon. The choice to be vendor-neutral is intentional.
Dodhia and his co-founder stress the need for neutral, vendor-agnostic infrastructure:
security that works whether a company uses AWS, GCP, or on-prem systems. This stems from serving
compliance buyers who dread being locked into one cloud's mechanisms. By keeping the control plane
generic, Alter lets customers adopt AI agents without rewriting all their access policies. Of course,
intercepting every agent action comes with tradeoffs. There is added latency from authentication
and policy evaluation. The team found that keeping policies only as code was too developer-centric.
Instead, they invested in a policy UI geared for security teams and business users. It presents policies
in a simple manner so non-technical stakeholders can define them too.
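The kind of translation described earlier, compiling a high-level role policy down to row- and column-level filters, could be sketched like this (a toy illustration; the role name, table, and compile step are hypothetical, not Alter's actual implementation):

```python
# Illustrative "policy compiler": a high-level role policy is lowered
# into a column- and row-filtered SQL query.
POLICIES = {
    "support_agent": {
        "table": "customers",
        "columns": ["id", "name", "plan"],   # column-level filter
        "row_filter": "region = 'EU'",       # row-level filter
    }
}


def compile_query(role: str) -> str:
    """Translate a role's policy into an enforceable, narrowed query."""
    p = POLICIES[role]
    cols = ", ".join(p["columns"])
    return f"SELECT {cols} FROM {p['table']} WHERE {p['row_filter']}"


print(compile_query("support_agent"))
# SELECT id, name, plan FROM customers WHERE region = 'EU'
```

The point of the sketch is the shape of the guarantee: the agent never chooses which rows or columns it sees; the compiled policy does.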
One lesson Dodhia highlights:
the users of these policies are often auditors or compliance officers, not engineers.
That requires the UX to be unambiguous and verifiable.
Real-time enforcement also necessitates efficiency.
A policy check must finish in milliseconds.
Alter mitigates the impact by using low-level languages
and incremental evaluation where possible.
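One common way to keep repeated checks cheap, offered here only as a generic illustration of the idea and not as Alter's actual mechanism, is to memoize decisions for identical (agent, tool, action) triples:

```python
# Sketch: cache policy decisions for repeated identical requests so the
# hot path avoids re-running full evaluation. Real systems must also
# invalidate this cache whenever policies change.
from functools import lru_cache

ALLOWED = {("agent-1", "db", "read")}


@lru_cache(maxsize=4096)
def decide(agent: str, tool: str, action: str) -> bool:
    # Stand-in for an expensive policy evaluation.
    return (agent, tool, action) in ALLOWED


decide("agent-1", "db", "read")  # computed once
decide("agent-1", "db", "read")  # served from cache on repeat
```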
Nevertheless, the system's strict checks mean
some sacrifice in raw throughput, a compromise Dodhia acknowledges as necessary for AI access control
in high-stakes settings. Integration complexity is another challenge. Alter must sit alongside potentially
dozens of tools and agent frameworks. Dodhia's past experience helped here: at Compute.ai, he built
connectors to common data stores under tight service-level requirements. Similarly, Alter provides
connectors and SDKs such that existing agent platforms (OpenAI, Anthropic, etc.)
can call into Alter's Gateway. The hope is to make Alter mostly transparent once configured,
ideally a frictionless layer. A recurring theme in Dodhia's story is compliance. Serve the financial
sector and you learn that audit trails and built-in security controls are non-negotiable. At
Compute.ai, he had to prove every job's provenance. At Alter, he baked auditability in from day one.
The platform logs every agent request and decision, surfacing it in a CISO-ready dashboard.
For example, Alter can report: "Agent X asked a database for customer records with parameter
Y at 3:14 p.m. and was denied under policy rule Z." This transparency is a major selling point.
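A structured, append-only record makes that kind of report mechanical to produce. As a minimal sketch in the spirit of the example above (all field names and values here are hypothetical, not Alter's schema):

```python
# Sketch of a structured audit record emitted for every decision, so an
# auditor can reconstruct who asked for what, when, and why it was denied.
import datetime
import json


def audit_record(agent: str, tool: str, params: dict, decision: str, rule: str) -> str:
    """Serialize one allow/deny decision as a JSON log line."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "params": params,
        "decision": decision,   # "allow" or "deny"
        "rule": rule,           # which policy rule fired
    })


log_line = audit_record(
    "agent-x", "database", {"query": "customer_records", "parameter": "Y"},
    "deny", "policy-z",
)
```

Because every decision is serialized deterministically, "weeks of manual evidence gathering" collapses into a query over the log.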
Dodhia noted that compliance buyers expect evidence of least privilege and that policies run
automatically in the background, essentially proof that no rule was violated. With built-in audit trails,
teams can pass SOC2, HIPAA, or GDPR reviews without weeks of manual
evidence gathering. Several lessons emerge from Dodhia's emphasis on compliance. First, policies must
be auditable by design; that influenced Alter to avoid "magic" AI solutions. Every access decision is
deterministic and recordable. Second, security for AI agents can't be an afterthought. When you run
in regulated environments, Dodhia said, you can't bolt on security at the end. It has to be built into
the architecture. That's why Alter was built from scratch as a zero-trust platform for AI.
Its very name and design are about removing implicit trust.
And third, flexibility matters.
Enterprises often have heterogeneous tech stacks, so Alter's vendor-neutral approach
(e.g., supporting any cloud or on-prem tools) ensures customers aren't forced to replace infrastructure
just to add agent controls.
Kevan Dodhia's move from distributed compute engines to agent security platforms illustrates
how deep engineering experience can address emerging AI risks.
Alter is both a technical and conceptual leap. It compiles intuitive policies into low-level controls,
applying the rigor of database access control to AI agents. By rejecting long-lived credentials
and enforcing zero trust for AI agents, it prevents a single mishap from escalating. The result
aligns with Dodhia's goal: making AI agents safe for production by constraining them to the minimum
access and duration they require. His journey underscores that in security, and especially in
compliance-driven environments, architecture and human needs must be in sync. As Dodhia puts it,
Alter is about enabling teams to move fast on AI agent initiatives while staying fully compliant.
In practice, this means building security that codifies policy intent, respects non-technical
users, and gives auditors exactly what they need, evidence that no agent ever misbehaved.
Thank you for listening to this Hackernoon story, read by artificial intelligence.
Visit hackernoon.com to read, write, learn and publish.
