The Good Tech Companies - How CoinFello's MinChi Park Built the Trust Layer 500 Million Crypto Users Have Been Waiting For
Episode Date: April 1, 2026This story was originally published on HackerNoon at: https://hackernoon.com/how-coinfellos-minchi-park-built-the-trust-layer-500-million-crypto-users-have-been-waiting-for. ... CoinFello COO MinChi Park on ERC-7710 delegation, the DeFi complexity problem, ETHDenver alpha lessons, and building the execution layer for the agentic economy Check more stories related to tech-companies at: https://hackernoon.com/c/tech-companies. You can also check exclusive content about #good-company, #web3, #coinfello, #defi, #ai, #agents, #technology, #cryptocurrency, and more. This story was written by: @ishanpandey. Learn more about this writer by checking @ishanpandey's about page, and for more stories, please visit hackernoon.com. CoinFello launched publicly at EthCC 2026 with an AI agent that executes DeFi transactions through natural language while keeping private keys on the user's device. The security model uses ERC-7710 scoped delegations — users grant the agent a limited spending permission rather than wallet access, and can revoke it with one action. ETHDenver alpha surfaced two surprises: multilingual demand the team had not anticipated, and developer demand to use CoinFello as an execution layer for third-party agents. The B2B infrastructure angle, enabling Claude Code, Windsurf, and OpenClaw agents to call CoinFello for onchain execution, is now a primary growth thesis alongside the consumer product.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
How CoinFello's MinChi Park built the trust layer 500 million crypto users have been waiting for, by Ishan Pandey.
More than 500 million people hold crypto. Almost none of them use DeFi. MinChi Park, co-founder and COO of CoinFello, thinks she knows exactly why, and she built the fix from the security layer up.
What does it mean to give an AI agent permission to spend your money without giving it your wallet? That question is not theoretical in 2026. As autonomous AI agents move
from productivity tools to financial actors, the question of how authority is delegated, scoped,
and revoked sits at the center of whether the agent economy is safe for mainstream participation
or a new attack surface dressed in a friendly chat window. CoinFello launched publicly at EthCC in Cannes on March 30th, emerging from private alpha with a product that inverts the typical
AI crypto pitch. Rather than leading with the interface and treating custody as a back-end detail,
CoinFello made the custody architecture the headline and the conversational interface the delivery
mechanism on top of it. The platform uses a delegation model that allows users to assign limited
spending permissions with configurable timeframes and token limits, rather than granting AI
agents direct access to wallets. In this exclusive interview for HackerNoon's Behind the Startup series, MinChi Park explains the architecture, the alpha lessons, the B2B pivot that ETHDenver accelerated, and why the real ambition is not a consumer DeFi app but the permissions layer for a future where agents pay other agents on-chain.
Ishan Pandey: Hi, MinChi. Welcome to our Behind the Startup series.
You are shipping a product that sits at the intersection of AI agents, self-custody, and
DeFi automation.
That is a technically loaded combination.
What is the thread connecting those three, and what ultimately led you to co-found CoinFello?

MinChi Park: The thread is intent without friction.
500 million people have opinions about what they want to do with their money.
They want exposure to ETH. They want yield on their stables. They want to stop getting liquidated
while they are asleep. What they don't want to do is navigate a protocol interface or copy-paste
a contract address and hope they got it right. DeFi built rails that could theoretically serve
all of those people, but the interface layer never closed the gap between having an intention and
executing it.
That is not a marketing problem.
It is honestly a distribution problem, and it has existed since 2017.
AI agents are the thing that can finally close it, but only if you solve the custody problem
at the same time.
The naive version of an AI crypto product is, give the agent a private key, let it transact.
That works until the agent gets prompt-injected by a malicious web page or someone jailbreaks it into sending funds to the wrong address.
You have just automated the attack surface.
Self-custody and AI autonomy look like they are in tension on the surface, but they are not.
The key component that solves it is delegation.
You keep your keys and the agent operates within a scoped permission you granted that you can
revoke at any time.
The blast radius of any failure is limited to what you explicitly authorized.
Those three things have to be designed together, or you end up with something that is either unsafe or too constrained to be valuable.
Ishan Pandey: Most AI crypto products treat self-custody as a back-end detail and lead with the interface. CoinFello inverts that, making custody the headline and the AI layer secondary. What was the strategic logic behind that prioritization?
MinChi Park:
Most teams make the interface the product because that is what is easiest to demo.
You can show a swap happening through a chat window in three minutes.
You cannot easily demo a security architecture in a conference presentation, but users don't
lose money because the interface was confusing.
They lose money because the wallet was compromised, or the agent had too much authority, or the key was stored somewhere it should not have been.
The interface failure is annoying. The custody failure is catastrophic and irreversible.
The strategic logic is simple. Trust is the actual product. Everything else is features on top of it.
If you don't solve custody correctly, you are one high profile incident away from the whole thing collapsing.
We have seen that pattern play out with centralized exchanges and with early DeFi bridges.
ERC-7710 fine-grained delegation is what lets us make custody the headline without sacrificing usability.
The user does not see the underlying mechanism.
They see a confirmation that says, "CoinFello is requesting permission to manage 0.1 ETH on your behalf."
They approve it.
Their keys never move.
The agent acts within that permission.
Revoking it is one action.
That is the experience.
The custody architecture is doing the work invisibly.
What is ERC-7710? Standard Ethereum wallets grant permissions in an all-or-nothing way. ERC-7710 is a newer standard that lets a user create a scoped permission, specifying exactly which tokens, which actions, which chains, and for how long an agent is authorized to act. Think of it as issuing a temporary, purpose-limited power of attorney rather than handing over your house keys.
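To make the sidebar concrete, here is a minimal Python sketch of the behavior a scoped delegation enforces. The field names and checks are illustrative only; the real standard encodes this on-chain through delegation structs and caveat enforcers, not a Python object.

```python
from dataclasses import dataclass
from time import time

@dataclass
class ScopedDelegation:
    """Illustrative model of an ERC-7710-style scoped permission.
    All names here are hypothetical, not the on-chain encoding."""
    token: str          # e.g. "ETH"
    chain: str          # e.g. "base"
    max_amount: float   # spending ceiling granted by the user
    actions: set        # permitted actions, e.g. {"stake"}
    expires_at: float   # unix timestamp
    spent: float = 0.0
    revoked: bool = False

    def authorize(self, action: str, token: str, chain: str, amount: float) -> bool:
        """The agent may act only inside the scope the user granted."""
        if self.revoked or time() > self.expires_at:
            return False
        if action not in self.actions or token != self.token or chain != self.chain:
            return False
        if self.spent + amount > self.max_amount:
            return False
        self.spent += amount
        return True

    def revoke(self) -> None:
        """One action: the user withdraws the permission entirely."""
        self.revoked = True

# A 0.1 ETH staking delegation on Base, valid for a week:
d = ScopedDelegation("ETH", "base", 0.1, {"stake"}, time() + 7 * 86400)
assert d.authorize("stake", "ETH", "base", 0.05)      # inside scope
assert not d.authorize("swap", "ETH", "base", 0.01)   # wrong action
assert not d.authorize("stake", "ETH", "base", 0.06)  # would exceed ceiling
d.revoke()
assert not d.authorize("stake", "ETH", "base", 0.01)  # revoked
```

The point of the sketch is the blast radius: every failure path degrades to "the request is refused" or "at most the pre-approved allowance was spent."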
Ishan Pandey: CoinFello went from private alpha at ETHDenver to public launch at EthCC. Walk us through what the alpha period actually revealed.
MinChi Park:
ETH Denver was the sharpest product feedback we have had.
Buffy Bot, our conference navigation agent,
put natural language on-chain interactions in front of thousands of real users over three days,
in noisy rooms, on bad Wi-Fi, and half distracted.
That environment is brutal and honest in a way that no structured user test can replicate.
The first thing that surprised us: comprehension was not the barrier. People got it immediately. The natural language interface clicked. What was genuinely
unexpected was how multilingual the use case was. Attendees were talking to Buffy Bot in Portuguese, in Spanish, in Korean. Not because we built for that
specifically, but because when you remove the form field interface, people think in their native language.
That is a real signal. The interface is not just simpler. It is more human. The second thing. Developer demand was as strong as consumer
demand. The builders at ETH Denver were not just curious about the user-facing product.
They were asking how to give their products' agents access to CoinFello's execution layer.
That is a different market than we had been primarily building for, and it accelerated how
seriously we take the B2B and infrastructure angle. What caused the vision to expand was realizing that the interesting unit is not one user with one agent. It is an ecosystem of agents,
all capable of discovering and calling each other on-chain.
ERC-8004 agent registry, A2A protocol support, sub-delegation to other agents.
These were not on the original roadmap in the detail they are on now.
ETH Denver surfaced the demand for them.
The wedge was always the Mulkbot user, someone running an AI agent locally,
comfortable in a terminal, who wants their agent to be able to do things on chain
without holding a private key.
That user exists, that community is real, and we've now found them. EthCC is where we go broader.

Ishan Pandey: The 500 million sidelined holders figure is your core market claim. Why has every previous attempt at making DeFi accessible failed to convert that population at scale?

MinChi Park: Both explanations exist. But the framing that people aren't interested is mostly a rationalization by people who built interfaces that required too much fluency to reach that population. Here is the evidence that it is a complexity problem: centralized exchanges work. Coinbase has tens of millions of users. People are clearly willing to hold crypto.
The drop-off happens at the DeFi layer, specifically at the point where using a protocol requires
understanding what a gas fee is, what slippage tolerance means, and whether the contract you are
approving is legitimate. That is not disinterest. That is a competency barrier that was not there
with Robinhood. Every previous attempt at making DeFi accessible failed at the same place.
They simplified the interface without simplifying the cognitive load.
You still had to decide which protocol to use, which pool to deposit into, whether the APY was
sustainable or a liquidity incentive that would collapse in three weeks.
Simplified interfaces on top of complex decisions are not actually simpler.
They are just less honest about how much the user does not know.
Natural language changes the equation because it shifts the competency burden.
You do not need to know what Aave is.
You say, I want to earn yield on my USDC, and the agent handles protocol selection,
route construction, and transaction execution.
The user's job is to specify the intention, approve the delegation scope, and verify the result.
Building a bridge between intention and execution has been the missing piece.
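As a rough illustration of that division of labor, here is a hypothetical sketch of how a parsed yield intent might expand into an ordered execution plan. The protocol table, step names, and field names are invented for illustration and are not CoinFello's actual routing logic.

```python
# Hypothetical sketch: parsed intent -> execution plan.
def plan_yield_intent(intent: dict) -> list:
    """Turn a parsed user intention ("I want to earn yield on my USDC")
    into ordered execution steps. Venue selection here is a placeholder
    lookup table, not a real protocol-selection engine."""
    assert intent["goal"] == "earn_yield"
    token = intent["token"]
    chain = intent.get("chain", "base")
    # Illustrative venue table standing in for protocol selection.
    venue = {"USDC": "aave-v3", "ETH": "lido"}.get(token, "aave-v3")
    return [
        {"step": "approve", "token": token, "spender": venue, "chain": chain},
        {"step": "deposit", "token": token, "venue": venue, "chain": chain},
        {"step": "verify", "expect": "receipt_token_balance_increased"},
    ]

plan = plan_yield_intent({"goal": "earn_yield", "token": "USDC"})
# The user's job stays the same: state the intent, approve the scope,
# verify the result. The plan above is the part the agent fills in.
```

The interesting property is that the plan, not the user, carries the protocol knowledge; the user only ever confirms the delegation scope and the outcome.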
Ishan Pandey: The delegation model is doing significant architectural work in your security design. How does that work in practice, and what is the recovery path when an automation executes something a user did not consciously intend?

MinChi Park: Practically, you connect your existing MetaMask wallet, or you initialize a new smart account from a prompt. You grant CoinFello a scoped permission: use 0.1 ETH from my wallet for staking on Base. That permission is an ERC-7710 delegation.
It specifies the token, the amount ceiling, the chain, and the permitted actions.
CoinFello can act within that scope without asking you each time. It cannot act
outside it. The recovery path for an unintended execution: first, the blast radius is already limited by design. If you grant a 0.1 ETH delegation and the agent does something you did not consciously intend with that 0.1 ETH, you have not lost your whole wallet. You have lost the allowance you pre-approved. That is the same model as a credit card spending limit. Second, you can revoke the
delegation at any time. One action, done. Third, for high-stakes automations, human-in-the-loop is part of the flow. An automation monitoring your Aave health factor will ask you before it moves,
unless you have explicitly told it to act autonomously below a certain threshold. The user controls
that dial. The harder question is: what happens when an agent gets prompt-injected into
requesting a delegation the user did not intend? That is a real attack surface. The answer is two-layered.
The delegation prompt is human-readable and explicit, so a socially engineered delegation
request should surface a red flag the user can catch. And the scope is always tied to explicit parameters, not open-ended authority. We are not claiming it is impossible to fool a user. We are claiming
the architecture contains the damage when it happens.

What is prompt injection? It is an attack where malicious text on a web page or in a document tricks an AI agent into following instructions from the attacker rather than the user. For a crypto agent, a successful prompt injection could mean the agent requests a broader delegation than the user intended, or routes funds to an attacker's address. The scoped delegation model limits the damage: even a fully successful prompt injection can only access what the user pre-approved.

Ishan Pandey: There is a meaningful gap between a clean demo and a live multi-chain
environment with gas spikes, failed transactions, slippage, and bridge delays. What does it actually
take to make natural language execution reliable enough that a user trusts it with real money?

MinChi Park: The demo gap is real, and most teams building in this space underestimate it in the same
two ways. The first underestimation: natural language parsing is much harder than it looks in a controlled environment. "Send some USDC to my wallet" requires the agent to know which USDC (there are multiple token contracts), on which chain, to which address, and with what gas budget.
That is four ambiguities in a sentence that sounds completely clear to a human.
In production, with real users sending real prompts with real world imprecision,
the parsing layer breaks in ways that do not surface in structured testing.
We have built a delegation evaluation suite to catch these cases systematically.
It is one of the things we are most invested in improving.
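A toy version of the idea behind such an evaluation suite might look like the following; the required fields and the parser output shown are invented for illustration, not CoinFello's actual schema.

```python
# Minimal sketch of a parsing-eval check in the spirit of the
# "delegation evaluation suite" described above. Field names are
# hypothetical.
REQUIRED_FIELDS = ["token_contract", "chain", "recipient", "gas_budget"]

def find_ambiguities(parsed: dict) -> list:
    """Return which fields the parser failed to pin down.
    A non-empty result means the agent must ask, not guess."""
    return [f for f in REQUIRED_FIELDS if not parsed.get(f)]

# "Send some USDC to my wallet" sounds clear to a human but parses to:
parsed = {"action": "transfer", "token_symbol": "USDC", "amount": "some"}
gaps = find_ambiguities(parsed)
# All four fields are unresolved, matching the four ambiguities above.
assert gaps == ["token_contract", "chain", "recipient", "gas_budget"]
```

An eval suite then becomes a corpus of real-world prompts paired with the ambiguities each one must surface, run against every parser change.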
The second underestimation: execution reliability across multiple chains requires redundancy and fallback logic that takes significant engineering time. Gas spikes on mainnet, bridge delays, RPC failures, slippage on low-liquidity pairs. In a demo environment, the happy path almost always works. In production at scale, you are building for the 10% of cases that do not, because those are the cases that erode user trust.

Ishan Pandey: The agent skills integration, enabling Claude Code,
Windsurf and other third-party AI agents to execute blockchain transactions through CoinFello
positions you as infrastructure rather than a consumer product. What future do you see
CoinFello enabling in this new agentic economy?

MinChi Park: The agent skills integration, so it works with Claude, Windsurf, OpenClaw, and any agent runtime that supports the standard, was a deliberate architectural decision. We do not want CoinFello to win by being the only
AI agent with a crypto skill.
We want to be the execution layer that any agent can call when it needs to do something on-chain.
The future it enables, an agent economy where financial actions are a primitive, not a specialty.
Right now, most AI agents treat blockchain interactions as a hard problem that requires specific integration work.
With CoinFello as a standard execution layer, any agent can swap, bridge, stake, and manage delegations through a natural language interface.
The integration cost drops to a ClawHub install command.
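As a loose sketch of what a skill any runtime can register could look like, assuming a generic tool-schema convention: none of these names, fields, or endpoints are CoinFello's real API.

```python
# Illustrative sketch of exposing an execution layer as an agent "skill".
# The tool name and parameter schema are hypothetical.
EXECUTE_TOOL = {
    "name": "onchain_execute",
    "description": "Swap, bridge, or stake within the user's scoped delegation.",
    "parameters": {
        "action": {"type": "string", "enum": ["swap", "bridge", "stake"]},
        "token": {"type": "string"},
        "amount": {"type": "number"},
        "chain": {"type": "string"},
    },
}

def register_skill(agent_tools: list, tool: dict) -> list:
    """Add the execution tool to an agent's tool list, idempotently.
    Any runtime with tool registration can integrate the same way."""
    if all(t["name"] != tool["name"] for t in agent_tools):
        agent_tools.append(tool)
    return agent_tools

tools = register_skill([], EXECUTE_TOOL)
```

The design choice this illustrates: the agent never receives keys or open-ended authority, only a tool whose every call is bounded by the user's existing delegation scope.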
At the B2B level, products that want to embed crypto execution for their users do not need to build
wallet infrastructure, protocol integrations, and a parsing layer from scratch.
They plug into CoinFello. A portfolio tracker whose agent needs to rebalance, a lending product whose agent needs to close positions: that is B2B agent integration, and it is a market we are actively building for now.
What I think the agentic economy looks like at maturity: agents paying other agents for services, model inference, and data access using on-chain microtransactions, the ERC-8004 agent registry for verifiable discovery, x402 for agent-to-agent payment rails. CoinFello's delegation model is the permissions layer that makes all of that composable and safe. The interesting unit is not one agent doing one thing. It is an ecosystem of agents with scoped authority, discoverable on-chain,
transacting with each other.

Ishan Pandey: You are pitching CoinFello as the central execution layer for the autonomous agent economy. What do the next 12 months look like in concrete terms, and what has to be true about both the market and CoinFello for that vision to materialize?

MinChi Park: Concretely, the full public launch at EthCC is the first milestone.
The delegation flow is live for all EVM chains, the DCA and automation features are publicly
available, and the developer documentation is complete enough that third-party teams can integrate
without hand-holding. B2B developer relations is the other major thread. The ETHDenver
signal was strong enough that we are moving from reactive to proactive on developer outreach.
The products that embed CoinFello for their users represent a larger market than individual
Mulkbot operators, and the integration is straightforward enough to make that tractable.
What has to be true about CoinFello? We have to maintain the trust model. One high-profile security incident would be damaging in a way that feature gaps are not. That means the delegation architecture stays rigorous, the parsing evals catch edge cases before they hit users, and we are honest about what is in production versus what is in progress. The teams that win in this space will be the ones that earn trust through their security model, not through marketing it.
Don't forget to like and share the story. Thank you for listening to this Hackernoon story, read by artificial intelligence.
Visit Hackernoon.com to read, write, learn and publish.
