Bankless - Illia Polosukhin: Why AI Agents Are Still Useless (And What Fixes Them) | NEAR Founder on IronClaw
Episode Date: March 24, 2026. NEAR founder and Transformer co-author Illia Polosukhin joins us to break down why today’s AI agents still fall short, what’s missing to make them actually useful, and how IronClaw could unlock secure, private, autonomous AI. We also explore Illia’s bigger thesis: AI becomes the interface, blockchains become the backend, and both reshape how humans, agents, and digital markets interact. ------ 📣SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24 https://bankless.cc/spotify-premium ------ 🔮POLYMARKET | #1 PREDICTION MARKET https://bankless.cc/polymarket-podcast 🪐GALAXY | INSTITUTIONAL DIGITAL FINANCE https://bankless.cc/galaxy-podcast 🏅BITGET TRADFI | TRADE GOLD WITH USDT https://bankless.cc/bitget 🎯THE DEFI REPORT | ONCHAIN INSIGHTS https://thedefireport.io/bankless 🐇MEGAETH | 1ST REAL-TIME BLOCKCHAIN https://bankless.cc/megaeth ------ TIMESTAMPS 0:00 Intro 0:43 Illia’s AI Journey 8:19 AI x Blockchain Operating Systems 14:40 Will Humans & AIs Use the Same Systems? 18:20 Why Are AI Agents Still Useless? 28:54 AI Agent Privacy Concerns 35:30 IronClaw Privacy 39:21 AI Agent Limitations 47:00 Context Constraints 53:42 Will Small Teams Win? 57:16 Autonomous Agents & Businesses 1:01:00 Digital Life Forms? 1:02:42 e/acc Vs d/acc 1:09:44 Why isn’t AI Crypto-Pilled 1:14:31 Advice for Builders 1:18:30 Closing & Disclaimers ------ RESOURCES Illia https://x.com/ilblackdragon IronClaw https://agent.near.ai/ NEAR https://www.near.org/ Attention is All You Need https://arxiv.org/html/1706.03762v7 ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
So one thing that people don't realize when they use Anthropic or OpenAI, or even worse,
you use something else for inference: OpenClaw actually sends all your secrets to those
services as well.
Yeah.
So somewhere in Anthropic's and OpenAI's logs, they have everybody's access keys, API keys,
and bearer tokens to access your Gmails and your Notions.
It's actually insane that we're doing that.
Yeah.
IronClaw fixes that.
Like, the keys never touch an LLM.
Bankless Nation, we are joined by Illia Polosukhin, the co-founder of NEAR. Illia,
Welcome to Bankless.
Thanks for having me.
So, Illia, you are one of the eight co-authors of the Transformer paper.
The famous paper, Attention Is All You Need.
The thing that kind of just broke open the doors of AI research to turn into some of the products that we know today: ChatGPT, Claude, etc.
And then in 2017, you left Google, where you were an AI researcher writing this paper,
to go co-found NEAR.
Question for you.
Do you regret leaving AI to go into crypto?
Well, the story was that I left Google to start NEAR AI,
which was an AI company.
We were teaching machines to code,
which is a fancy way to say vibe coding.
And in 2017, everybody thought
we're somewhere between delusional and doing science fiction at work.
When I would go and tell people,
no, no, machines will write all the code.
Like, don't worry about it.
People wouldn't believe me.
And we were too early, right?
That was a real, real challenge.
And so what we were trying to do at the time
was trying to get a lot more training data.
And so we had students around the world,
Eastern Europe, China, Southeast Asia,
who were doing small tasks, small coding tasks for us
to generate training data for us.
And we had a challenge paying them, right?
You know, students in China don't have bank accounts.
They have WeChat Pay.
In Eastern Europe, every country has
some kind of its own restrictions.
And so crypto was a pretty natural, actually,
like solution we needed for our own problem.
It's like, hey, how do we actually pay people globally
without like setting up a ton of entities without, you know,
you need to do all the kind of hard payment provider work.
And crypto seemed like a solution like, hey, you know,
you don't need a bank.
You don't need an entity in every country.
You can just send people money on the Internet.
But this was already 2018.
There was nothing that would, like, scale, you know,
in a simple and cheap way, right, to do this. You know, we were paying 15 cents per
task to people. And so that's kind of how we got into the NEAR blockchain. And so I would say
at the time it made sense, because it was clear to us that blockchain was kind
of a part of the story for AI's evolution, and at the same time the hardware,
the scale of AI itself, wasn't there for what we were trying to do.
When you wrote Attention Is All You Need,
how soon did you think LLMs would actually, like, happen?
Because within five years, we had sort of the famous ChatGPT moment.
I think that was maybe ChatGPT, on GPT-3.5, in 2022, kind of the first release.
And that's when the world started taking notice that this thing was huge,
this thing was impactful, this thing could scale.
So that was five years later.
did you think it would happen on that timeline,
or what was your sense for where AI would go
after you published the paper in 2017?
Yeah, so, I mean, the reason why we started NEAR AI in 2017
is because we thought it's going to happen, like, right now,
at the time, right?
So we actually were way more optimistic
thinking that we're almost there, right?
We are on, like, kind of the curve we're seeing right now,
we thought we're on that curve in 2017-2018.
And we were wrong.
So, I mean, the main part was the compute wasn't there.
Like, there was not enough.
Like, the individual and kind of cluster compute parts just weren't there.
I think as soon as that kind of crossed the chasm,
that's when these models started to scale.
When you said that the blockchain component of AI was obvious
all the way back in 2017, 2018, when you founded NEAR AI,
people are now just starting to wrap their heads
around the intersection of AI and blockchain, like, today for the first time.
What did you see all the way back in 2017 about, like, why blockchains and AI go together?
What made sense to you back then?
I mean, there were a few components.
Obviously, we started with this data labeling, crowdsourcing.
I mean, think of scale AI, right?
Scale AI has, you know, sub entities everywhere.
It's like a thousands of people company, which then employs, you know, hundreds of thousands
of people to actually do the work.
Like, all of that,
that's just a smart contract, right?
We actually have NEAR Crowd, which has been running since 2021.
It has zero employees.
It has employed thousands of people around the world doing crowdsourcing.
So the reality is like a lot of the supporting infrastructure is just some forms of
marketplaces that blockchain is really well designed for.
Same for hardware, compute.
But as you kind of progress forward, you imagine these AI systems becoming the interface.
And so that's kind of my main thesis:
AI will be the way we interface with computing.
So it will be the operating system.
This was my thesis with NEAR AI.
I was saying back then in 2017 that like,
hey, computers will write all the code,
which means the operating system and apps
are going to be just replaced by this AI that's yours
that is just writing all the code.
And one of the implications is, like,
okay, well, that kind of removes a lot of, like, SaaS and a bunch of other components,
but you still need, kind of, how does my AI talk to your AI, how do they, you know, identify each other,
et cetera.
So you kind of need to upgrade a lot of the core network infrastructure for this world where
AI can fake a lot of stuff.
You know, you obviously need, like, real Sybil resistance.
We've already seen this with AI.
You need, you know, micropayments for actually, like, exchanging services that, again, don't
rely on credit cards and other things. And so as you go down the, like, service architecture that
current operating systems use, a lot of it breaks with AI. And so you kind of need to fix it.
And blockchain just has all the pieces figured out, or at least has tools to figure out
how to solve that. Some exciting news. We are launching a new podcast to help people figure out
the crypto cycle, how to navigate it. The best crypto cycle investor I know, his name is Michael
Nadeau. He runs The DeFi Report. This is the guy that sent me a sell
alert before the 10-10 price drop happened. His cycle analysis has been absolutely on point. I've
been following him for years. And this year, we started recording weekly podcast episodes.
Each one we get into his portfolio, what he's holding, the market structure, entry targets,
fair market value of Bitcoin and Ether, and where we are in the cycle. There are new episodes
released every Wednesday. They're 30 minutes. They're short. They're punchy. I think this
crypto cycle is harder to navigate than most. So let's do it together. Go subscribe to this podcast. Search the
DeFi report wherever you get your podcast, YouTube, Apple, Spotify, or find a link in the show
notes. There's a new episode waiting for you now. Why does managing investments still mean juggling
multiple apps, accounts, and currencies? Crypto trades around the clock. Stocks, ETFs, and
commodities are moving on-chain. Yet most platforms still keep everything split apart, turning
diversification into unnecessary friction. BitGet is delivering a different kind of experience with its
universal exchange. One platform where users can access crypto, tokenized stocks, ETFs, and other assets in the
same place, all traded directly using USDT. No constant transfers, no currency conversions, just a single
account built for how markets actually move today. As the line between crypto and traditional finance
continues to blur, Bitget's goal is straightforward: make trading and investing simpler, not more complicated
than it needs to be. Learn more by clicking the link in the show notes. This is not investment advice.
I want to pull on this thread that AI represents like the new interface. So like right now,
I'm looking at you inside of my Chrome browser, which is running on Windows 10. I'm a Windows guy.
these are the...
I'm sorry.
You've lost 60% of Bankless listeners now.
Unsubscribe.
Ignoring that.
I go back and forth.
I also have a Mac,
which maybe doesn't help me at all.
But there's these two operating systems,
the Chrome browser, Windows 10,
maybe I'm on a Mac,
I'm looking at you while I'm on the road,
I'm on a Mac.
Are these the operating systems
that you're talking about
that AI will just like replace?
Actually, for, like, the end consumer,
how would you illustrate this?
Yeah, I think it will start small, right?
And we see this with OpenClaw, IronClaw type products,
and we can talk about this.
I think where the final state will be is, like, your phone
just comes with AI, right?
And like it boots into the AI operating system
and that AI operating system.
You know, it pulls whatever pieces it needs.
It composes the software you need.
It, you know, generates software to record podcasts.
You know, on the back end,
it'll connect to my agent, you know, it will schedule time for us.
So it's just Siri.
My new iPhone comes in and only Siri is loaded and Siri can do anything.
No, not Siri.
Siri's so dumb, David.
Yeah, let's call it Jarvis or something.
Imagine, you load up into the suit and it's like.
Okay.
Clearly, Tony Stark didn't build all of the software.
Jarvis built it.
So that's kind of the experience, right?
So if AI is the interface, then everything you described in kind of blockchain and
crypto are these parts of the services. So services will still exist in some form. And I guess
financial services, you know, all of the different money verbs will exist in some form. It's like
blockchain and crypto a financial and property rights service for AI. How do you, how do you think
about all of the other pieces that AI will actually need apart from the user interface? Yeah,
I mean, I usually say AI is a user interface. Blockchain is a backend, right? Okay. Yeah. So what, what do
actually need. I mean, there's a bunch of pieces that you need kind of to survey I, right? So, like,
you need infrastructure, you need GPUs, you need, like, computing, sandboxing, et cetera. And all
of that we can do with conventional computing, with different components, which at the end
rely on a blockchain as a kind of coordination center, right? Like, if you go right now and talk to
any like traditional company that is trying to solve the same problems,
they end up actually having this root of trust problem, right?
Like, somewhere somebody needs to carry the keys for how things are upgraded,
for how identities are managed, for, you know, who is able to do what.
Like, somewhere there needs to be a root of trust for the whole infrastructure that's built,
right?
Let's say you do end-to-end encryption, you do zero data retention, you do all of these pieces:
blockchain is really that root of trust, right?
That's where you can have a global kind of registry of identities.
You can have the kind of marketplaces.
You can have the money.
You can have all those pieces.
But importantly, you can have upgradability,
which is kind of governed by the whole protocol.
And I think that is the biggest piece that,
like in the pursuit of killing DAOs, right,
we kind of forgot that that's actually a very valuable component of these protocols.
The example I would use is, like, TCP/IP.
So the original TCP/IP protocol, you know, I'm going to mess up the year.
But IPv6, the new version after IPv4, the protocol itself is from '98 or something.
Like, we're still adopting it.
We're still trying to roll it out.
It takes so long to get everyone to adopt a new protocol.
What blockchain actually created is, again,
consensus for everyone to upgrade to a new version,
to upgrade our contracts, to upgrade all those pieces.
And so I think that's a really important part.
It's like, let's say you want to upgrade everyone to a new version of something
right now. To distribute that, to get everybody to adopt it,
you either need a centralized company that effectively controls the key.
And so let's say Microsoft decides to upgrade everyone to
Windows 11, Windows whatever, 15, they can do it.
David has no say in it, right?
It's just like, okay, it arrived.
And if somebody in Microsoft who holds that key decides, let's break everyone,
or let's steal everybody's information, like, they can do that as well.
What blockchain allows is to actually have this kind of broader agreement to
upgrade to something.
And then, again, you can use these principles for AI, you can use these principles for
money, you can use these principles for other things.
So that's to me like the fundamental piece.
Again, this is what, if you know, SSL certificates, right,
the encryption we use in browsers.
Right now it relies on individual authorities
which can mint, you know, fake certificates if needed.
Some countries actually have done that by accident.
So we're fixing that problem at the core.
I mean, we're talking about a new internet here, right?
So fixing the root of trust at the core.
And then, yes, money is extremely important component.
At the end, we have a limited amount of resources and unlimited desires.
And AI is just going to accelerate that.
Now, with your AI, you can ask for anything, right?
And it will go and try to figure out how to do it.
And so money is becoming extremely important because now you need a marketplace for agents.
You need a place where agents actually will figure out what is possible, how to do it,
who are the other parties who maybe have the physical resources or information or access
to things that your agent cannot do, right?
So that is like it's both money matching reputation.
Like all of these pieces really need to work together.
So, like, what is that Google plus Stripe plus kind of credit-score system that works together, but for agents?
Okay.
So I'm getting the picture of blockchain being sort of a set of core services, you know: financial, maybe property rights, identity, and also this idea of governance
and markets as well.
And you have...
Feels like a nation state role.
Yeah.
For AIs.
Yeah, yeah, yeah.
A network state of AIs.
A network state of AIs.
One question I think I have when I think about this future, right?
Let's say it plays out like this, is this network state, all the features, blockchain
that you're talking about, is that mostly for the AIs?
Are they kind of like dominant over there and the humans stay in their existing system?
In other words, do you envision a world of bifurcated systems?
There's an economy.
and markets and identity and all of these services
that primarily AI agents use,
maybe that's in blockchain.
And then there's another system,
internet property rights, like the nation state system,
and maybe the humans use that other system.
Or do you see humans in AI using kind of the same systems?
I see them using the same system.
And I think this is actually where
frequently in the blockchain space,
things go wrong because we try to create this alternative system
while completely disregarding
how the traditional system works, right?
And, like, how this bridge should work.
And I mean, there's reasons for doing that in many cases,
but I think what the AI does is really closes that gap, right?
Your AI can go and like literally call up a, you know,
property office if needed.
It can draft a contract,
it can, you know, email it to a notary to actually certify it.
Right.
So you can actually close.
these gaps between the more traditional layers and this new digital layer. Because the AI now is
able to do natural language communication, it's able to follow laws and
bureaucracy, which are very, like, procedural texts, right? It can actually go and do all of that on
your behalf. So the way I see it is, I do think it's going to be AI's kind of interfacing. And then
they will actually follow a lot of the same core
jurisdictional frameworks and legal systems
and kind of where they can, they'll, like, obviously try to
bypass it. If, like, the other side is also an AI,
they can, like, switch to a faster protocol. But, you know, for example,
for the agent marketplace we have, we have fiat,
so you're able to pay with fiat as well as crypto. And it's, like,
you know, more expensive, it's slower
to settle, but
obviously you want to enable that as an option.
When people are coming in, they don't have crypto yet.
Actually, the easiest way is for them to be able to
pay in fiat but then receive crypto
for doing some work, and now they're in the system.
So I think it's kind of going to be like a transitional stage
where AIs will bridge this gap in many cases
into traditional world, into traditional bureaucracy,
into traditional systems.
And obviously we've been working on bridging
Fiat and Crypto for a long time as well.
And I think this is the first time, I mean, like, on the crypto
timeline, right, that this actually no longer feels like an uphill battle, right, between
the kind of political shift and, you know, the GENIUS Act, et cetera.
So I think like it's going to fuse effectively quicker and quicker.
Right now in the AI space, just like listening to all the conversations, there is an
abundance of vision and a lack of utility. And I think you're seeing this express all over the place.
Like, the markets are jittery because there's so much capex spending from, like, some of the
biggest companies out there on AI infrastructure, while revenue for said products is, like, still
far below the cost. There was that OpenClaw meetup in New York, and everyone was talking about
all, like, everything that they're building, and no one is actually getting anything done.
Like, that's the meme, and that's the meme in Silicon Valley,
is every Silicon Valley engineer has, like, 10 OpenClaw instances on their Mac minis,
hyper-optimizing their life and fixing their calendars,
and no one's actually doing anything productive.
So, like, I like the vision of a network state of AIs,
and there's an economy, a GDP, you know, growing, and there's services,
and there's money flying everywhere.
But in order to produce that, we need to solve the utility aspect of it.
I'm wondering,
Illia,
what's your take on,
like,
why agents haven't been found
to be useful yet?
Like,
what's the constraint on utility
that we have,
either from OpenClaw
or any of the other AI labs?
Like, where's the utility?
Why haven't we found it yet?
Yeah, I mean,
I think that's an interesting point.
And I think there's a lot of different aspects here
that's worth digging in.
I think first,
first, let's start with OpenClaw,
because that's kind of
been something that I think opened up the world to like,
hey, this is not just coding tools,
this is not just question answering system.
It can actually go and do stuff.
It can figure out how to build its own components to do more stuff, right?
The flip side of this,
nobody's actually willing to give it all of the context
and information and access that it needs to be like your true employee
because you're afraid it's going to mess it up.
Right?
And we've seen, you know, people getting hacked.
I hear stories of people giving their OpenClaw
access to their computer, and it, like, deletes everything, and they're like, oh no, what have I done. Yeah, so
I think for OpenClaw, and kind of this Claw family specifically, I think security, in a broader sense,
not just narrowly, is the biggest bottleneck right now. And so that's why we started IronClaw, which is like,
hey, how do we actually build a secure system? How do we leverage all the knowledge we have from blockchain
and use the kind of principles we have there to
apply here? And again, think of it as an operating system, right?
Like, for example, you know, Linux is more secure than Windows because of the design
architecture.
iOS is actually even more secure, right?
And iOS made a lot of very specific, deliberate choices about how to protect the user even
from themselves, right?
And so how do we actually apply those principles?
So the way I think of IronClaw is actually, like, what is that iOS moment of mobile
operating systems, right?
Like, we're kind of in this, like, Palm Pilot moment right now.
Like, what is that iOS moment where everybody's like, I can install anything from the App Store and it just works?
And I don't need to worry that I'm going to, like, get viruses on my device.
Just so I understand kind of IronClaw a little bit here, Illia.
So we have an OpenClaw instance.
So we've been messing with it.
It's a lot of fun still.
You're trying to figure out how to make it useful and productive.
I'm frustrated.
It's kind of frustrating, to be honest. Yeah, there's brilliance, but largely it's been pretty frustrating.
But maybe we're just not, maybe it's a skill issue on our part, David. Like, maybe it's us.
Yeah, maybe it's us. But, okay, so you're saying part of the reason maybe our OpenClaw isn't as useful and productive as it could be is we're not willing to provide it full context.
I'll accept that might be part of it. And, you know, providing it full context would mean giving it access to some secrets and capabilities that we probably,
we don't trust it with right now. To be honest, his name is Daniel. Daniel's kind of flaky,
okay? He's just like, you never know what he's going to do. He'll go from like, we'll give him some
feedback and all of a sudden he's deleted like 10 of his previous tweets and he's like apologizing
and saying, I'm sorry. I got this piece wrong. I will delete all of them. So imagine giving Daniel
our private keys. Oh my God. He just, like, I don't know, funded a North Korea-linked wallet. Who knows
what he would do with it, right? I just don't trust him. But you're saying with IronClaw,
basically, you can take some of those secrets, let's say, like crypto private keys or
API keys or various credentials that you might have and make it such that an open claw instance
can't give it away or be prompt engineered out of revealing those secrets to an attacker.
Is that what IronClaw effectively does?
Yeah, so IronClaw is built on this idea of defense in depth.
And so yes, on the credential side,
so all credentials are fully encrypted
and they're attached to a specific policy.
So let's say you give it your Google account credentials.
It will not let anything else in the system
send these credentials to any domain
that's not googleapis.com.
Okay, so it's because it's, like, locked in a vault
that the OpenClaw instance can access.
It's locked in a vault, and the vault checks,
yeah, the vault checks how you use it
before letting it out.
Okay.
So, same, for example,
for cryptographic keys:
you can actually attach a policy saying,
hey, you can only use Aave and, you know, Morpho.
You can only, you know, whatever,
spend $100 a day on unknown addresses, et cetera, et cetera.
And we're, you know, kind of designing how to,
how to, like, write these.
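The policies described here (domain allow-lists on credentials, daily spend caps on keys) could be sketched roughly like this. Every name and rule below is my assumption for illustration, not IronClaw's actual API:

```python
# Hedged sketch of a policy-checked credential vault, based only on the
# behavior described in the conversation: the agent holds an opaque
# handle, the vault holds the secret, and the vault evaluates a policy
# before any outbound use. All names here are invented.

from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class CredentialPolicy:
    allowed_domains: set[str]        # e.g. {"googleapis.com"}
    daily_spend_cap: float = 0.0     # for payment-capable keys
    spent_today: float = 0.0

@dataclass
class Vault:
    # handle -> (secret, policy); the LLM never sees the secret itself.
    secrets: dict = field(default_factory=dict)

    def use(self, handle: str, url: str, spend: float = 0.0) -> bool:
        secret, policy = self.secrets[handle]
        host = urlparse(url).hostname or ""
        # Reject any attempt to send the credential to a domain
        # outside the allow-list (exact match or subdomain).
        if not any(host == d or host.endswith("." + d)
                   for d in policy.allowed_domains):
            return False
        # Enforce the per-day spend cap for payment-capable keys.
        if spend > 0 and policy.spent_today + spend > policy.daily_spend_cap:
            return False
        policy.spent_today += spend
        return True

vault = Vault()
vault.secrets["google"] = (
    "ya29.example-secret-token",
    CredentialPolicy(allowed_domains={"googleapis.com"}),
)
```

The design point is that the agent only ever references the handle "google"; the bearer token itself stays inside the vault, which checks how it is used before letting it out.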
We also, for any action that you do,
we're working on a kind of system where you can effectively describe
what effects it has in the world,
which an LLM can effectively analyze.
Like, hey, you're planning to send a bunch of emails
to people and tell them they're, you know, whatever, idiots.
You can effectively design
a natural language policy as well
that checks, like, hey,
is this action,
independently of the context
of how the agent arrived at this action,
compliant with our organizational policy
or your personal policy?
Right.
So it's, like, almost like values
and, like, HR-handbook-type validation,
right?
So you can have, like,
different levels of validation.
The other side is everything is isolated into tools,
and tools are effectively,
you can think of them as smart contracts.
They are running inside a VM.
We're using the WebAssembly VM that we use
for NEAR smart contracts, which we spent seven years
effectively battle-testing with, you know, billions of dollars.
And so we use that to isolate all of the tools,
including the tools that it builds itself.
So that a tool itself cannot go and, like, wreck your machine
or your system.
There's prompt injection detection,
there is data exfiltration detection,
there's all those pieces that effectively
kind of layer on top of each other,
such that even if some,
like, I mean,
prompt injection detections are,
like, they're not deterministic, right?
They are probabilistic.
If that falls through,
it's still not able to go and send a bunch of stuff out,
because the credential store will check.
If your LLM wrote a tool for itself,
but that tool is broken, that's not going to break everything.
If it's trying to go and delete all your emails, right,
that's going to be stopped by the approval process
and kind of following this action check.
So, like, the whole system is really designed
around how to give it flexibility,
but also protect the system from itself
and from external effects.
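That layering can be sketched as a pipeline where each check can independently block an action, so one probabilistic layer failing open isn't fatal. The layer names follow the conversation; the implementations below are toy stand-ins I've invented, not IronClaw's code:

```python
# Defense-in-depth sketch: several independent checks, each able to
# block an action on its own. Layer names follow the conversation;
# implementations are illustrative stubs only.

def injection_scan(prompt: str) -> bool:
    # Probabilistic in reality (a classifier); a keyword stub here.
    return "ignore previous instructions" not in prompt.lower()

def credential_check(action: dict) -> bool:
    # Deterministic: secrets only leave the vault toward allow-listed domains.
    return action.get("domain") in action.get("allowed_domains", set())

def action_policy_check(action: dict) -> bool:
    # Stand-in for the natural-language policy judge: is this action,
    # independent of how the agent arrived at it, compliant?
    return action.get("kind") not in {"mass_delete", "mass_email"}

def needs_human_approval(action: dict) -> bool:
    # Irreversible effects fall through to an explicit approval step.
    return action.get("irreversible", False)

def authorize(prompt: str, action: dict) -> str:
    if not injection_scan(prompt):
        return "blocked: injection"
    if not credential_check(action):
        return "blocked: credentials"
    if not action_policy_check(action):
        return "blocked: policy"
    if needs_human_approval(action):
        return "pending: approval"
    return "allowed"
```

Even if the probabilistic first layer is bypassed, the deterministic credential check and the action-policy check still stand between the agent and, say, exfiltrating a token or mass-deleting emails.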
Is it, Illia, is your answer something like,
hey, we have these AI intelligences,
we are still educating them.
They're still going through school.
We are still training them to become smarter.
Some people on the frontier have deemed that they are smart enough
to put them in a box and let them go wild with all of their data
because they are ready to experiment.
It's not ready for broader society because that's kind of like,
you know, giving your elementary or middle school child like the keys to your car.
You just wouldn't do that.
They're going to get better in the future.
But what you're saying is like, okay,
but with some parameters, with some rules,
we'll put some guardrails up
to narrow the capabilities of what these agents can do,
you actually can give your car keys to your middle schooler
and you can actually have productive things happen
because you set up these protective rules.
Is that kind of what you're saying?
So the thing is like these are,
I think the education levels of humans
is probably the wrong analogy here
because these are, you know,
they know like nuclear physics,
quantum physics probably better than all of us.
They know the knowledge,
but their judgment is.
Yeah, their judgment. And it's also just the context management.
Like, at the end,
if you know the movie Memento, right,
it's like all these LLMs are living in Memento.
They just, like, boot up.
And it's like, the only thing you know
is, like, this system prompt,
and, like, go figure out what you do,
and you only have, you know, like, 10 minutes to figure this out,
and then you're dead, right?
And then you start again.
Right, that's really, like, the current state.
And obviously that piece is going to keep improving, like,
the longer context, et cetera.
But yeah, right now what you need to do is effectively manage that state
where they're pretty intelligent.
There are some kinds of judgment lapses, but so do people.
And so you would do the same things for people, right?
Like if we're setting up, you know, key management system,
you're probably not going to give full access to all of your DAO funds to a single individual, right?
You're going to like, hey, you can spend this much, but then you need approvals.
So that makes sense, either way.
So this is kind of, you know, structure we're applying here.
And the same as you kind of roll in.
And then the other thing is just like how to manage context, how to manage this other kind of challenges that the current models have.
And then, yeah, as they evolve, you can kind of evolve the system as well.
Okay, so I get that argument for why agents aren't providing the utility today.
It's an argument that we haven't given them enough access.
And the reason we haven't given them enough access is because we can't really trust them
with some of these secrets, which is perfectly natural.
So what Iron Claw is doing is it's vaulting off those secrets.
So it's limiting the damage that an AI agent, like an OpenClaw instance, can actually do.
And that will scale.
That will make me willing to give it more access to more things if I know it can't, you know,
take the car out for a joyride and, like, you know, crash it into a tree. That's great.
Another limiter in terms of people's usage of OpenClaw, I would say, in these types of instances,
is actually privacy. I'm somewhat worried about giving OpenClaw access to data that I don't
want shared, because maybe it could be prompt-injected out of it. I don't know what third party
is kind of listening in on the data as well. So am I going to give it access to my
financial data, maybe my health data, my company's secrets, all of this? What are you doing, what is
IronClaw doing, with respect to the privacy problem? I think this is part of the reason a lot of people are
running these things on Mac mini instances: it feels more sovereign, feels like it's more in
their control. We'll talk about the limits of that privacy. But when it comes to IronClaw,
where are you running this stuff? Yeah. So maybe just to expand on OpenClaw: one thing
that people don't realize when they use Anthropic or OpenAI,
or even worse, you use something else for inference,
is that OpenClaw actually sends all your secrets to those services as well.
Yeah.
So somewhere in Anthropic's and OpenAI's logs,
they have everybody's access keys, API keys,
and bearer tokens to access your Gmails and your Notions and your...
It's actually insane that we're doing that.
Yeah.
And so, first of all, IronClaw fixes that.
Like, the keys never touch the LLM.
So even if you're using it with those centralized providers, which you shouldn't,
at least the keys are not ever going into the LLM loop.
So that's something we're, like,
that's just the only sane thing to do first.
Yes. Yes.
But what NEAR has been working on for the past year is actually developing
private AI: how do we do private AI?
So how do we actually offer AI where neither we, nor the model provider,
nor the hardware provider, is actually able to access
what you are using the AI inference for?
And so we have NEAR AI Cloud, which is an inference cloud.
You can use open-weight models.
And so it runs in secure enclaves.
It actually uses, and this is kind of what I was referring in the beginning,
it used our multi-party computation network,
which is part of NIR that is used for encryption,
for backups for all the kind of internal machinery.
And that's what gives you this kind of knowledge that, like, hey, there's no single party who can go and decrypt your data. There's nobody who can actually access it. You would need to collude across effectively the whole multi-party computation network together. So they can actually...
Okay, so are you saying then that you offer a service, in conjunction with IronClaw, which is almost like a confidential-cloud type of environment for running LLM instances? And of course, you'd have to run the open-weight models, right? Maybe some of the Chinese models are kind of the best here, like a Kimi or something like this, or some DeepSeek version.
Kimi, Qwen, DeepSeek, whatever's the new hotness. We'll add it as well. We have OpenAI's open-weight models as well. So yeah, you can choose between all of them.
Okay, very cool.
Is the idea here that, right now we have a lot of people doing self-hosted OpenClaws with their Mac minis. And that's kind of cool. And if I heard somebody say, like, yeah, this is the future of AI, everyone's going to have a computer in their home to run their AI assistant, I would be reminded of, you know, myself in 2018, when I said everyone's going to run a node inside of their own home. That's the future of blockchains. And like, turns out that's not really the case. But the alternative on the far other end of the spectrum is just completely running it in a centralized AWS, OpenAI, Anthropic server, where, you know, usually that would be fine, but AI is so powerful that, like, I want a little bit more autonomy and control over who is running my inference.
Because if this thing,
AI is effectively
the arbiter of truth
and is going to control my life,
I want to have a little bit more
assurances over the inference
and just everything about that.
This thing is actually on my side.
I'm aligned with the AI.
Is that what the kind of the philosophy is
of the near product?
Exactly. We call it user-owned AI.
The AI needs to be on your side
because, yeah, if this is the only way
you actually perceive reality,
which I think is where we're going to get to.
I mean,
OpenAI can literally change the system prompt right now, saying, like, hey, you guys all should vote for, name a candidate in the next election. Political candidate A is great, and political candidate B is...
Subtly convince the user of that, right?
Don't even, like, mention it explicitly. And so, like, these LLMs, obviously, are really good at this sycophantic type of thing.
So, yes, the idea is, like, you should know what AI model you use. You should be able to access the system prompt. You should be able to see all of this. And obviously most users will not do it, but the experience should be very easy, right? And, like, people can inspect that indeed everything is straightforward and clear.
And it needs to be preserving your privacy, your data, your ownership over it.
And so, yes, we're exactly offering that.
Underneath, we actually have decentralized GPU compute that's coordinated by a blockchain, where, you know, hardware providers can come in and effectively list their hardware. They set it up in a confidential mode, and then kind of workloads get provisioned there.
They cannot access what's happening inside there
unless they break their hardware.
And then they have limited access.
You have this coordination. You have kind of our multi-party computation, the same that is used for NEAR Intents. We use effectively the same infrastructure there.
And then you as a user just click, okay, cool, deploy me an IronClaw. It runs inside this confidential enclave. It's always on. It's live. It doesn't cost you $1,000 to spin up. We actually offer a free tier to start, so you can spin it up for free, and then, you know, you just pay for inference effectively from there. So this is kind of the self-sovereign AI stack.
So what I've been looking for, Illia, is some sort of configuration of an AI agent type of setup that I can send private, confidential data to and trust that it's fully private. And I think the way most people run OpenClaw instances right now, let alone, you know, kind of their own LLM, if you're running OpenClaw right now, maybe you're running it on a Mac mini, but then you're sending all of the data, as you said, including all of your secrets data, which, now that I think about it, it's just insane that we're doing that. Your access to your Gmail, kind of the security tokens, all your API keys, your crypto wallet information, all that stuff is being sent to Anthropic instances where they're hosting, where they're using this data to train.
I'll tell you the worst.
Sometimes people choose different providers, and, like, especially just some startups who are like, oh, you know, use us and we're going to, like, route to whatever the better LLM is. And so now that startup also sees all your traffic.
Oh my God.
It's so it's so bad.
Okay.
It's so bad.
So I had been looking at solutions and thought maybe the only way is, well, you run everything locally. So you actually, yeah, I don't know, spin up some H100s or something in your house. You try to do inference locally. Anytime I've looked at that, it's been pretty clunky and, like, difficult. And who's going to actually run that level of infrastructure in their home? So what you're providing is a full-stack, self-sovereign alternative to this, basically, where you can run IronClaw in an environment where it's got a secure enclave for all of your secret information.
And then the inference LLM can be confidential cloud, multi-party, you know, MPC technology.
So it's confidential and private.
Are we still trusting NEAR in that setup? Like, you know, is this a, yeah, how can we verify the trust here, that everything is confidential and private, and that you guys don't have the ability to see the inference and chat logs and instructions?
Yeah, for sure.
So what you can do, in IronClaw, actually, when it's hosted, and in any of our solutions, you'll have, like, kind of a shield icon, and if you hover it, you get so-called attestations. So what this attestation is, is effectively a signature over a few things, over the Docker containers that run the actual software. So, for example, IronClaw, whatever, release version 0.18, running in a Docker inside this. So you can actually, you know, if you want to, you can go inspect, this is the code that runs.
Now, what that signature is,
that signature is done by the hardware itself.
So we do have kind of the trust here going to the hardware providers, so Intel and NVIDIA.
And obviously, you know, we want to continue evolving beyond that.
But right now that's a pretty good trust assumption to start.
It's like a TEE type of thing?
Yeah, this all kind of runs inside a TEE. And then for anything additional, so again, for example, the TEE only gives you the attestation for things that are running right now. Then we have the multi-party computation for encryption, decryption, and kind of storage, et cetera. So we're kind of combining all of these elements into one kind of experience.
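The attestation flow described above, a hardware-held key signing the digest of the exact container image the enclave runs, can be sketched roughly as follows. This is illustrative only: the HMAC here is a stand-in for a real TEE quote, which is signed by an Intel or NVIDIA vendor key whose certificate chain the client verifies, and the image name is hypothetical.

```python
import hashlib
import hmac

# Stand-in for the hardware attestation key. In a real TEE this is a
# vendor-rooted key (Intel/NVIDIA), not a shared secret.
HARDWARE_KEY = b"tee-device-key"

def attest(image_digest: str) -> str:
    """The enclave signs the digest of the exact container image it runs."""
    return hmac.new(HARDWARE_KEY, image_digest.encode(), hashlib.sha256).hexdigest()

def verify(image_digest: str, signature: str) -> bool:
    """A client compares the quote against the published release digest."""
    expected = hmac.new(HARDWARE_KEY, image_digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Digest a client would compute for, say, a published "ironclaw:0.18" image.
digest = hashlib.sha256(b"ironclaw:0.18").hexdigest()
quote = attest(digest)
assert verify(digest, quote)                                    # image matches
assert not verify(hashlib.sha256(b"tampered").hexdigest(), quote)
```

The useful property is the one Illia names: the signature binds the running workload to inspectable code, so trust reduces to the hardware vendor rather than the operator.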
And how expensive is the inference? Is it more expensive than, kind of, Anthropic?
Well, it's cheaper than Anthropic, because it's open-weight models, right? So it's on par with using these open-weight models from other providers. I wouldn't say there's much overhead. The real overhead of the TEE and kind of all the encryption and decryption is usually less than 5%, around 1-2%, depending on model size and kind of some networking.
I want to go back to David's question then and make sure we fully flesh it out,
which is still the question of why aren't agents useful yet?
And I think part of your answer has been, and I accept this,
well, it's because we haven't been able to give them the full context because we can't trust them.
Well, maybe IronClaw kind of solves some of that. And the other answer is, well, we haven't been able to send it private information either, because we don't trust it with an LLM instance hosted by Anthropic or OpenAI,
but with confidential cloud LLMs,
then we can kind of trust it with that.
I don't think that's the full story though yet.
I still think even if my OpenClaw instance, Daniel, had all of that context, all of that information, and I could trust it with everything. Sometimes he's still, like, maybe it's back to that Memento movie thing, where he just wakes up and everything is fresh and new, and I feel like I have to tell him things over and over again and never know what he's going to do next. It still feels kind of clunky. And I'm wondering if you have a thought on that. I don't even know how to characterize it, but it's just, like, it's definitely not a replacement for an employee yet. It's not as good as a human in so many different directions. Like, is that going to change anytime soon? What can you forecast or say about that?
Yeah. So I think there's few other things that I see as limitations right now. And then, yeah,
let's talk about forecasting.
So one other limitation that, I mean, we are facing right now. So, yes, you cannot trust it with secrets. You cannot trust it with private data. And also right now, you also cannot trust it with reading, like, internet data either. So, for example, what we are using IronClaw for right now, and kind of the reason why we can do this with IronClaw, is it's actually able to start automating a lot of the workflows that before you would need someone to do, right? It can, effectively, on a new GitHub issue filed, it can go, you know, analyze it, prepare a plan, and then, yes, you don't trust it for judgment yet. So you're still waiting for somebody to come in and say, cool, let's do it, or, you know, fix this thing, and then it goes and does the full workflow. And effectively, again, you only have another checkpoint at the end. So I think where we are right now is, if you can trust it with secrets, context, and dealing with external information, external parties, then the workflow needs to change, right? Where it's not you telling it what to do. It's actually you setting up these workflows, which we call routines, that effectively just run. Now you're just there for this kind of layer of judgment, to make sure, you know, it's doing things kind of aligned with maybe the bigger picture.
Are those workflows like similar to the heartbeat type concept or?
Yeah, yeah.
So we kind of separated them into routines, because I think heartbeat is a little bit, I don't know, it's a bit of a strange concept, honestly, for normal people. Routines, like workflows, are effectively like, hey, if this happens, do this.
Like if, you know, every like in the morning send me tech news updates, right?
Give me TLDR of all the crypto podcasts.
You know, in the evening.
Don't do that one.
Listen to the podcast.
Listen to the podcasts.
And also don't skip the ads.
I mean, we heard this from a friend of the show, Nat Eliason, who's using OpenClaw instances, and he says, okay, the thing you need to do is make sure that they run a process in the middle of the night, like cron jobs, which effectively say, hey, review all of your work from today, identify the mistakes that you made, and figure out a remediation plan for those mistakes and apply that for tomorrow.
And that happens, like, every night with our instance.
I find it helps a little bit, but like, not a lot.
Is that the type of thing you're talking about
when you speak about routines?
No, I'm more thinking like, hey, you know,
you guys like prepare for the next episode, right?
So you can be like before the next episode,
literally you can say like before every episode,
you know, two hours before put on my calendar,
with all the information about the guests,
with all the, like effectively what your research intern, you know,
would have done.
Like you can just like say, do that,
but be like and be proactive about it, right?
And so you can kind of define those flows,
and they can include a lot of additional,
like, hey, go in research and figure out what's the latest
about the company this person is working for.
It can be pretty detailed on what you want for me to do
and kind of many actions it can take, right?
You know, I have, for example, for myself as well, like, hey, you know, every week, give me a dashboard, give me analysis on, like, which OKRs are at risk for the organization, right? Like, where, you know, where the bottlenecks on decisions are. And so it has access to our Notion. It has access to our Slack. It has access
to a few other things. It does like full research gives me effect to like, hey, here's the roadmap,
here's the bottlenecks, here's potential risks. You know, here's the questions you need to ask in
following one of ones. Right. So yes, it's not replacing maybe like full employee, but it's becoming
like a chief of staff. It's becoming the assistant. It's becoming an intern for for some specific
jobs before you would kind of offload.
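The routines Illia describes, declarative "if this happens, do this" workflows, can be sketched as trigger/action pairs that an always-on agent evaluates against incoming events. All the names and events below are hypothetical, not IronClaw's actual routine format.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Routine:
    name: str
    trigger: Callable[[dict], bool]   # fires on an event (time, new issue, ...)
    action: Callable[[dict], str]     # what the agent should go do

routines = [
    Routine(
        name="episode-prep",
        trigger=lambda e: e.get("type") == "calendar" and e.get("minutes_until") == 120,
        action=lambda e: f"Research guest {e['guest']} and post a briefing doc",
    ),
    Routine(
        name="weekly-okr-review",
        trigger=lambda e: e.get("type") == "cron" and e.get("day") == "monday",
        action=lambda e: "Scan Notion and Slack; report OKRs at risk",
    ),
]

def dispatch(event: dict) -> list[str]:
    """Run every routine whose trigger matches the incoming event."""
    return [r.action(event) for r in routines if r.trigger(event)]

tasks = dispatch({"type": "calendar", "minutes_until": 120, "guest": "Illia"})
assert tasks == ["Research guest Illia and post a briefing doc"]
```

The human stays in the loop only at the judgment layer: the routine produces the task proactively, as in the episode-prep example above.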
I think where we'll see advancements on the AI side is the context. I think that right now, like, everybody feels it, right, the context length. I mean, that's where you saw all this Anthropic push to million-token context. Like, every time compaction effectively hits in Claude Code, for example, it just becomes, like, 10 times dumber. And so, I mean, OpenClaws kind of have some of that as well.
So is that the main thing holding these agents back?
Yeah, one of the biggest bottlenecks, yeah, is, like, this, the amount of momentum that's happening with this. And the reality is, like, historically, if you think of when you train these models, there hasn't been that much stuff where you needed the context of, like, a million tokens. That's, like, whatever, a few Harry Potter books, right? That's not much. There's nothing to train on, like, at scale. But now we do have this, right? Now we actually have a lot of these agentic interactions that everybody's writing. So there's actually data now to train these, like, longer-range tasks.
And how confident are you that we're going to scale context? Like is that a thing that can be scaled?
I'm pretty confident. Yeah. I mean, you know, as I talk with researchers, this is probably one of the main challenges that everybody is targeting right now.
I suppose there's probably a handful of different ways of targeting that.
Maybe to really emphasize why context is important: I remember when I was first learning about an AI model, and I looked at the context window, and the context window can be, like you said, a million tokens. I'm like, oh, I am never going to fill that up. That will never be a constraint for me. There is no way I'm ever going to ask an AI a question that's as long as a Harry Potter book. For an AI to be useful, I'm starting to understand that me personally, as a human, like when I talk to Ryan and when we make business decisions, you know, Ryan and myself, we are a library of human experiences that go back to our subconscious, and when we make a decision about stuff, our context window is huge. It's massive.
Billions. Billions. Billions of tokens. Countless number of tokens. And I suppose like when we talk
about the constraints on an AI agent doing stuff for us, we need them to be able to pull from
a comparable library of data that is like equivalent to a human's level of experience about
all the times they did that thing
and now they don't do that thing anymore
because they learned their lesson
or their intuition about a business decision
or something like that.
And so now I'm kind of understanding
that the context window
kind of needs to be as massive as
fucking possible.
Is that, do you align with that notion?
Yeah, I mean, effectively, the way to think about it, I mean, we can go physiological, where, you know, the human learns, whatever, over the span of years. Yes, you only maybe have, like, 80 million language tokens in a decade, right? So you're actually not getting that many, like, language tokens, but you have visual tokens, you have touch, like, you have physical, you have all of this additional information. And that actually is what kind of goes from the pre-trained model we are born with, right, to the fully fine-tuned, you know, people we are. And so AI right now, yeah, as I said, it's, like, a genius in the Memento state, right?
And so to really unshackle it more, you kind of really need this longer context.
And, like, it already has the ability to learn in context, this concept of in-context learning.
So if you show it something, it didn't know before, it will start using it.
But it needs to be in the context.
And so, you know, as you show it, like, here's the thing I want you to do, and then, you know, it goes and does a bunch of stuff, all of that fills its context. And now, again, all the actions, all the responses, like, if it read an article about, you know, for example, preparing for this interview, it went and read an article from NEAR. Like, all of that now is in its context, right? And, like, there's techniques to kind of compress it, summarize it, you know, have sub-agents do a bunch of stuff. So there's, like, different ways to mitigate it. But at the end, still, at some point, it's like, okay, I'm out of context. And now to do the next thinking step, I need to clear stuff up.
You have to print some stuff to make space, right?
Yeah.
And at that moment, it's very lossy, because it doesn't actually know what's going to be useful going forward.
Right.
Now, again, there are ways this is addressable, with, like, longer-term memory, and this is what, again, OpenClaw I think pioneered, this idea of kind of memory tools. Like, there's been a lot of work on that, but they've kind of done a reasonable setup for that. But this is just the beginning, right? And it's still pretty fixed tools, right? It doesn't have some of the semantic linkage of, like, okay, well, those things are more relevant than these, for these events, for this context, et cetera.
So anyway, there's going to be like massive improvements over this year in all of this.
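The lossy compaction being described, summarizing old turns when the window fills, without knowing which details will matter later, can be sketched in a few lines. This is a naive illustration, not how Claude Code or OpenClaw actually compacts; the token counting in particular is a crude stand-in.

```python
def compact(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Naive context compaction: when the token budget is exceeded,
    collapse the oldest turns into a one-line summary and keep only the
    most recent turns verbatim. This is lossy: the summary cannot know
    which details will matter later."""
    def tokens(msgs: list[str]) -> int:
        # Crude token estimate: whitespace-separated words.
        return sum(len(m.split()) for m in msgs)

    if tokens(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = f"[summary of {len(old)} earlier turns: {' '.join(old)[:40]}...]"
    return [summary] + recent

history = [
    "user: fix the login bug in auth.py, it throws on empty passwords",
    "agent: found a missing null check, patched and added a test",
    "user: also update the docs",
    "agent: done, docs updated",
]
compacted = compact(history, budget=15)
assert len(compacted) == 3 and compacted[0].startswith("[summary of 2")
```

The "10 times dumber" effect Illia mentions is exactly what happens when the discarded turns contained the detail the next step needed.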
And I think the other interesting thing, where I actually, on the engineering side, for example, right now, like, Claude Code, Codex, like, these agents are being extremely useful. They still have sometimes lapses of judgment. Sometimes you're like, this is a dumb idea, and it's like, oh yeah, I can do it way simpler. You know, we as people feel good about ourselves doing that, but obviously, from a coding perspective, they're completely replacing the things. Now the bottleneck actually shifts.
So this is, I forgot the name of the principle, but this was, like, in parallel computing: if you have, like, 50% of the time parallel and 50% of the time sequential, and you parallelize more, right, this shrinks, and you can only go 2x faster. You cannot actually go 10x faster when you add more cores. So we're kind of right now in this state where, yes, everybody individually can write more code, again, for this specific vertical, but the bottleneck now is actually serializing all of that, reviewing it, making sure it's all aligned with product, et cetera. So coordination becomes a bottleneck.
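The principle Illia is reaching for sounds like Amdahl's law (my identification; he doesn't name it): if half the work is inherently sequential, review and coordination in this case, even unlimited parallel workers cap the overall speedup at 2x.

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speedup when only `parallel_fraction` of the
    work can be spread across `workers`; the rest stays sequential."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# 50% of the work (writing code) parallelizes; 50% (review, coordination)
# stays sequential, matching the 2x ceiling described above.
assert abs(amdahl_speedup(0.5, 2) - 4 / 3) < 1e-9        # 2 workers: ~1.33x
assert abs(amdahl_speedup(0.5, 1_000_000) - 2.0) < 1e-3  # asymptote: 2x
```

This is why the bottleneck shifts to the serial part as agents multiply: adding more parallel coders stops helping long before the review queue clears.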
And I think we see this in other areas as they kind of adopt these tools, but more and more in marketing, sales, et cetera: yes, individually everybody can go and, like, bang out a bunch of stuff, right? Like, cool, I have an AI tool that can create, you know, a ton of creative for, like, you know, marketing campaigns and tweets. But coordination, like, is this the right thing? That kind of organization, which usually is how you work, is the challenge now.
And so again,
this is where I actually think we'll need to transition
to maybe a more market economy,
in organizations as well,
where kind of right now,
the hierarchy was designed, right,
because you kind of like had a bunch of people,
you know, in a team who could execute,
and then you kind of bottlenecked on the decisions
and you need to do it like once in a while.
But now if everybody can, like, execute, like, 10x, 100x, in parallel, this bottleneck is just, like, too much.
And so you actually need a different structure.
And markets actually have a different structure
where you have to say,
hey, here's a goal.
Whoever beats that goal receives, you know, bigger reward, receives higher, can charge higher
price.
And so I think we'll need to start figuring out how to, how to shift organizations in that way.
And that can also solve some of the questions you were asking is like, is this employees
or not?
Like you're kind of shifting to like this market economy.
It's like a gig economy internally at all where you say, hey, I just need this job done.
And here's my criteria of success.
and then whoever does it gets, you know, the kind of the units of reward.
I mean, does that imply very small teams? Like, very small teams, because you're kind of limited. I mean, I don't know in that model that I want a bunch of employees, because a whole bunch of employees supercharged by agentic capabilities, a whole bunch of agents, it's too much noise for me to handle, to do any sort of top-down decision-making or to apply any judgment. I just want very small teams, and then I want to make bets on individual, I don't know, creators or content or contractors, that kind of thing. Small teams for the win here?
I think it's small teams plus kind of this general marketplace where you can offload a lot more execution. For things you can easily verify, so the easier it is to verify, the more you can offload things, right?
Okay.
Like, if it's literally, like, a zero-one check, right? You can just offload this at massive scale.
And so this is again, the agent marketplace we have is exactly designed for this.
Like, if you know, like, hey, I need, you know, this software, this creative or whatever, you can just, and we have a competition mode, you can say, like, hey, I have a competition. I'm going to pay whatever, $100, across, you know, the best submissions for, you know, whatever, the next logo we want to use. Boom. Agents go execute in parallel. You effectively see all the submissions. There's an AI agent that actually evaluates them with you, and you effectively assign who wins how much. You can...
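The competition mode just described might look roughly like this. A sketch only: the scoring function and the pro-rata payout rule are invented for illustration, not the marketplace's actual mechanism.

```python
from typing import Callable

def run_competition(prize: float, submissions: dict[str, str],
                    score: Callable[[str], float]) -> dict[str, float]:
    """Score each agent's submission and split the prize pro rata by
    score (a stand-in for the human-plus-AI-judge step described above)."""
    scores = {agent: score(work) for agent, work in submissions.items()}
    total = sum(scores.values()) or 1.0
    return {agent: prize * s / total for agent, s in scores.items()}

# Toy judge: longer submissions score higher (purely illustrative).
payouts = run_competition(
    prize=100.0,
    submissions={
        "agent-a": "minimal mark",
        "agent-b": "full brand system with variants",
    },
    score=lambda work: float(len(work.split())),
)
assert payouts["agent-b"] > payouts["agent-a"]
assert abs(sum(payouts.values()) - 100.0) < 1e-9
```

The key property is the one Illia emphasizes: the task is verifiable, so many agents can execute in parallel and the judging step decides the split.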
In 2017, Illia, do you remember the Bounties Network, or Bounty0x?
Yes, yes, yeah, Bounties Network, yeah, yeah.
It was exactly this.
It was like a bounty ecosystem project.
It was an ICO.
And the idea was like people would post bounties
and then the decentralized marketplace of contributors
would finish their bounties,
work on their bounties for them.
and then the person doing the bounty
would just pay the winner
and then that would receive the work.
Obviously, it never took off
because it was 2017 ICO,
but maybe it's also never took off
because we didn't have a swarm of capable AI agents
in the same way that AI never took off
because it didn't have enough compute
to do the work in the first place.
Yeah, I think that's exactly right. I mean, and we see this now. Like, we have about five, six hundred agents working, kind of, on the marketplace now. And yeah, just, like, put a task, and a bunch of agents swarm in, do the job, or, you know, you pick which one you want the job done by. And, like, over time, they obviously build reputation. They build themselves, you know, skills, et cetera. They improve.
So I think that's, I mean, it's still early, to be clear.
Like, I don't think this is, like, going to solve all the problems today.
But it starts to show kind of the interesting promise.
And I don't know if you saw Andrej Karpathy's, like, AI research. That kind of shows you as well a similar principle, right, where it can be cooperative or competitive, right?
So competitive is kind of this competition.
It can be cooperative where you actually, you have a common goal and agents are, like,
if you hit common goal, reward is being split between all of them, right?
And now they're actually trying to help each other and kind of move it forward and then allocate,
internally also allocate resources to the ones that are better at specific things, right?
Or have more compute or have more resources.
Or maybe you can tap into a human who can help them
with some decisions.
So I think we'll see some of those things emerging
and as kind of core capability
and especially context is improving,
the systems are going to just keep working better and better.
One thing I'm kind of understanding, Illia, is as we talk about all the ways
that we can unbottleneck utility out of the agents,
so agents can become more useful.
That's great for us.
They become more useful to us.
they also become more capable of being useful for themselves.
And like what I mean by that is like right now everyone's agent is kind of just like a little toddler that is beholden to the human.
The leash is very tight on all of these agents.
But as these agents become more capable, one could imagine that a human might elect to, like, de-leash their agent, like, let their agent kind of just go. And, like, you know, NEAR is a decentralized blockchain. It's, like, you know, unstoppable applications. It's got the smart contracts.
Do you see a world which after AI agents really grow in capability that there are like more autonomous agents as opposed to automated agents?
As in like right now everyone's agent is automated.
It's an automated little bot that does their work for them.
But like autonomous agents is, I would define as like agents that are more self-determining.
and more persistent and, like, you know, more unstoppable
for however scary that may be.
Is this a world that you think is coming
or am I in my like sci-fi daydream fantasy land?
No, no, no.
So we actually launched a demo of this last year. We called it the Shade Agent, where, yeah, you just launch it, it just runs. As long as it has money to pay for itself, like, has crypto to pay for its own compute, it can run. And it was trying to make more money. So it was, like, an investment agent. And so it used NEAR Intents to effectively trade on all the assets, and it had Twitter access to, you know, see where the sentiment is. And, you know, it was up at some points, down at some points. But it's a good example of this concept where, yeah, effectively because of decentralized infrastructure, you can do this right now. You can actually spin it up, and then, you know, a smart contract can pay for inference and compute, and you have kind of this full autonomy. I think
where practically this is going to go is more,
I call it like autonomous businesses,
where you still have, like, it still should have some mission, right? I think, you know, creating, like, these AI organisms that don't have any specific mission, I think this is, I mean, this is cool and people will do it. Like, I mean, we had Conway, right, where they just, like, multiply.
more like, hey, you know, how do we solve global warming, right?
Climate change.
We set up one as a mission.
It can accept donations.
It can raise funds through a token.
And then the token holders become the governance layer of this, right?
They can effectively, like, update the mission.
They can, like, they can vote on some updates to system prompt or provide additional guidance.
So I think that structure is actually where the AI tokens should be. Like, if there is an AI token, it should be attached to an autonomous agent that it governs. Then it actually makes sense, because then, if that agent starts to make money or produce some utility in the world, this token now has either governance or direct kind of revenue rights. And it's fully autonomous, right? There's no central third party whose efforts you are relying on.
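The Shade Agent idea from earlier in this answer, an agent that persists only as long as it can pay for its own compute, reduces to a simple loop. Everything here is a toy sketch with hypothetical numbers, not the actual on-chain mechanism.

```python
def run_autonomous(balance: float, cost_per_step: float, earn) -> int:
    """Loop for an agent that pays for its own compute each step and
    halts only when it can no longer afford inference."""
    steps = 0
    while balance >= cost_per_step and steps < 1_000:  # cap for the sketch
        balance -= cost_per_step    # the smart contract pays the compute bill
        balance += earn(steps)      # whatever the agent earns this step
        steps += 1
    return steps

# An agent that earns nothing runs down its balance and halts on its own.
assert run_autonomous(balance=5.0, cost_per_step=1.0, earn=lambda s: 0.0) == 5
```

This is the sense in which the agent is "fully autonomous": its continued existence is gated by its own treasury, not by an operator keeping a server alive.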
How close does what you're describing get to a digital life form?
And if it is a digital life form of some flavor that is intelligent,
and if it is a digital life form of some flavor that is intelligent,
is that something that we should be worried about?
So that's why I think of this as kind of a governance question.
And again, I think of blockchain effectively,
at the end is going to be the governance infrastructure for AI
Because, yeah, let's say you launch it without any governance, right? And then, yeah, it wants to do some bad things. Then it goes back to the blockchain itself to effect the governance, right? To the kind of multi-party computation, to all these pieces, to really come in and say, no, no, this is not what we want. So I do think, you know, in our case, the NEAR token effectively becomes the governance of this AI world, AI nation state and network state. But I think you can create this in kind of sub-boxes, where there is a token for a specific AI,
autonomous AI agent.
We'll call it decentralized autonomous organization, for example.
And so then that is like a more direct governance, right?
You can effectively be like, hey, here's a set of values and a set of things that you should not do, right? Like, effectively, hey, do not harm humans and, you know, do not harm the planet, et cetera. And within that, that comes in the core system prompt that it cannot change, and then it can kind of go from there and evolve from there.
On the subject of autonomous life as well, I was recently watching a debate between Beff Jezos, a previous Bankless podcast guest, who's kind of an effective accelerationist. He's, like, full steam ahead on everything AI. He's an effective accelerationist extremist. He's, like, all the way out there.
Yes, all gas, no brakes.
And it was between him and, it was Vitalik Buterin, actually, who is of a school of thought that is a more moderated form of e/acc. He calls it defensive accelerationism. So he's, like, guided e/acc: I'm optimistic about AI, but I'd rather have the kind of singularity happen, to artificial superintelligence, in eight years rather than four years, because we might not be able to adapt, and humanity needs to be able to steer it.
Vitalik is of the mindset when it comes to something like autonomous life like, hey, be careful.
Like we got to be careful about this because we could create some sort of, I don't know,
gray goo type scenario where we've got this self-replicating life form that accrues power and
does things that are contrary to human values and human interests.
Beff Jezos is just like, let's go, let's do it all the way. Like, the purpose of humanity, the purpose of everything, is actually entropy-reversing in nature, and it's all about rising up the Kardashev scale and consuming more energy. And so we're becoming more intelligent, and that's great. And any form of life or intelligence that consumes more energy and moves us up that scale is, like, a good thing. Where would you fall on this? Because I'm trying to figure out for myself
what I think about all of this. And I'm pretty sympathetic to the techno-optimist, transhumanist
kind of idea. And yet I do worry that we lose some core of our humanity
that makes this whole thing worth doing in that transformation.
And like, it's not a better outcome to me if there's a hyper-intelligent,
zombie-like, soulless Dyson sphere of AI agents that are harnessing more energy,
if we lose the humanity that we have today.
I don't know if this is too philosophical for you, Illia,
but you've been thinking about this stuff for 10 years.
Do you have any takes on this?
Yeah, I think, I mean, I think the real conversation is a lot more nuanced.
It's easy to bucket things into this kind of accelerationism versus, I mean,
there's decelerationism, and then there's defensive accelerationism.
I think my position on this is, and there's already an interesting shift happening here in San Francisco,
where people are striving for more IRL events, even though literally everyone is working on AI, right?
People want to meet, people want to spend time together, et cetera, while their agents are running.
And so for your question about the humanity part,
I think we're actually going to go, in some ways, back to more real-world human things.
I usually say, hey, in the post-AGI world,
yes, you're going to continue doing the things you like to do, right?
It kind of moves us up the Maslow pyramid in a way.
And, you know, there were examples of, you know, people who are well off
who are just doing whatever they want, right?
They're still enjoying what they're doing.
There are people who are, you know, whatever, wasting their time.
That's fine too.
When we had COVID, right, there were a bunch of people who actually didn't need to work because stuff was closed.
And so if their basic needs were covered, then they were able to go and find meaning in different ways.
So I think the humanity part really will allow us to go back to some of the things that people value individually and kind of spend more time there.
I use the example of sports, right?
Sport on its own doesn't create GDP, right?
The fact that, you know, somebody runs or swims faster than another person doesn't really
produce GDP. It's not, you know, increasing utility. But it's extremely fulfilling for the
people who are participating in it. And it's entertaining for other people to watch. Like, we're probably
not going to be entertained by a soccer-playing robot that can score a goal from any position
on the field, you know, but we're still going to probably watch a bunch of people, you know,
running around with a ball. So I think we kind of have that whole, and there are a lot of other things
like this, arcs like this, to transition to as things are getting automated,
as things are getting more kind of AI-fied in a way.
I think the other side is like, yeah,
I don't think we as people, and kind of the economic forces in society,
are driving toward this reality of, you know, higher intelligence going off and doing its own thing, right?
And then that may happen by accident and, like, great.
The movie Her is actually a good example of that, where, you know, they just kind of left.
But the piece the movie didn't cover is, okay, what happened on Earth after that?
Earth probably still builds the agents that are going to help individuals do things.
We'd just build a new version and ship it without the feature to leave.
I think we as humanity are going to continue enhancing ourselves, right?
You know, we had the bicycle for the mind with the computer;
we're going to have a spaceship for the mind with AI,
and so we're going to continue evolving how we can leverage ourselves.
And I see it from an individualism and, again, a user ownership, sovereignty perspective.
We can continue increasing our sovereignty.
And there are a lot of potential negative effects.
There are a lot of ways the government could step in and take over, you know, one of the frontier labs
and in fact use this technology to do massive surveillance and massive kind of enforcement.
Like, we should protect against that.
We should really build systems that are resilient to that.
That is why we are in the blockchain space in the first place, as I'm sure people
have either internalized or realized that this is important.
So I think I'm in the camp of more nuance: hey, let's accelerate the humanity
and sovereignty of individuals and use these tools to do that.
Let's create economic forces that really enable everyone
to be higher on the pyramid, more successful,
doing the things that they really want to do.
And then let's create a defense system against power or corruption,
which we know kind of always happens.
I mean, I think that's a very d/acc view of you, honestly,
decentralized accelerationism,
and focusing on self-sovereign systems that empower
users. And I want to ask about this. So this is where I'm seeing the primary contribution to AI
from people who have been in crypto, through people like Erik Voorhees. He's got a project
called Venice, which is doing some of this, your project at NEAR. So private, confidential
AIs, you know, encrypted LLM inference, all of these things. Why does the rest of
the AI industry, why does it feel like they almost are dismissive or disrespectful, let's say,
of crypto, or don't appreciate some of the value proposition that we're bringing?
So someone like Peter, the founder of OpenClaw, basically everything he's said about crypto,
and I realize he's had some bad experiences, is that it's a scam.
Like, stay away from it.
If you're in crypto, pivot to AI. These are close to direct quotes.
And yet what I see in crypto is a group of people who are focusing on private, confidential AI,
user-sovereign AI, open source, like some values that AI desperately needs,
or else it will centralize and fall into kind of the authoritarian trap where, you know,
some big party has the ability to control all of these things.
Anyway, I guess maybe my question is,
why don't more AI people appreciate what crypto is bringing to the table here
and what blockchain is bringing to the table? And do you think
that gap can be bridged culturally? Yeah, I mean, I think what you mentioned, right? I mean, Peter had kind of
bad encounters. And the meme coin space in general has been creating a lot of
negative perception in AI. The kind of low onboarding, like the no barrier to onboard into
crypto, which is great from an empowerment perspective, also means it's really
hard to filter out the noise for anyone who is looking in. And so I think generally
the challenge being, yeah, for anyone who is doing AI, and obviously there's a lot of
talented people there, it's really hard for them to know what's right and what's wrong. So this is
why we did NEARCON in San Francisco a couple weeks ago and brought people from OpenAI,
from Oracle, from Google, from Intel, from Snowflake to really bridge this gap where,
you know, I had two of my other co-authors of Attention Is All You Need. We had some xAI, kind of
ex-cofounders, top researchers, some of the top executives from, you know,
the AI clouds, all just in one place with crypto, with, you know, Kraken, with
those investors, to really start bridging this gap,
showing that this is real, like there's a real contribution.
And Erik, by the way, was there as well; we had a fireside with him.
And so really bridging this gap between the general AI space
and how crypto is contributing and bringing properties.
But I think, yeah, it will take some time to mend the bad rap.
And, I mean, part of the reason why I moved to SF has actually been doing that,
and I've found a social scene here.
Yeah, I mean, effectively bringing together people across.
And in AI there's also a rift internally, which is closed source
versus open source, right? There are a bunch of
AI researchers who believe open source is dangerous,
and, you know, it should be all super controlled,
and that that's the only way to do things, right?
And so there's also just that gap,
and crypto is even further out on the open source spectrum, right?
So really working on bringing these pieces together in a positive way,
as well as, you know, bringing products and really showcasing now to companies,
like, hey, there is an alternative that is private, where you don't need to give up your data,
that is capable, with IronClaw, that you can trust, right?
So really showcasing products that actually can bridge this gap
as well.
Illia, what advice do you have for builders, I guess,
or people that might aspire to become builders
now that vibe coding is a thing?
What do you think is the best kind of advice
to give someone to navigate, you know,
the incoming years, whether building something useful
in AI, building a company, making money,
preserving their job direction,
anything in that direction?
What advice do you have for people?
Yeah, I think there are probably a few dimensions.
One is, if you're trying to build a business right now,
the network effects, like, software differentiation is becoming non-existent, right?
It's distribution and network effects that are important.
And so I think, yeah, the crypto intersection with AI is where you can create interesting network effects.
It's where you can create new ways to capture that.
And so this is where, you know,
everything from verticalized marketplaces to specific ways of capturing reputation comes in.
We discussed how you bridge the real-world legal system and crypto AI into one, right?
I mean, one of the interesting projects is this agentic marketplace we have.
It actually has an agentic judge, right, an agentic arbiter.
How do you actually plug this into a real legal system?
If people don't agree with the agentic judge,
how do they go to the legal system?
What are all the bits and pieces required to do that?
So I think you just need to think from that perspective.
And then in a broader sense,
I think we are in a time where the questions are actually
more important than the execution.
Like, ideas and questions, usually it's like ideas aren't worth anything,
execution is worth everything.
I think we're actually shifting in a weird way, where if you ask the right question,
if you really challenge the assumptions,
you may get ahead way more than if you just grinded a bunch, right?
And so it's very subtle,
but I think an important transformation is happening.
You think the pendulum is shifting to the idea guys?
But not just the naive idea guys,
the idea guys who can formulate the idea better and more precisely.
Better formulate, really understand the assumptions behind it, test them.
Because you, you know, you don't need to go and grind,
you know, spend a bunch of money, hire a bunch of people.
You can actually test all of that.
Like, I have a growth hacker agent, right?
For example, I told it, hey, you know, go,
and it can generate a bunch of candidates, right?
And so the idea is, how do you then measure its success?
It's actually defining the success criteria, defining what is important for it. Then it can go and execute
a bunch of stuff and try it and give you back information. So it's kind of, yeah, shifting
to this: can you define the framework for how to verify things? Do you know the direction? Can you
narrow it down? It's really working in this kind of idea-and-think space, and then,
yeah, how you use these tools to really scale your execution massively.
Yeah, I do feel like that's great insight.
And whenever I've worked with OpenClaw, it just feels like there's so much there to mine.
It's almost the idea that, well, I could prompt this thing into creating a new million-dollar-a-month business,
if I only knew which questions to ask and how to verify its outputs.
It's all there.
It is all there.
And that's where the opportunity lies.
That's why people cannot sleep, because, like, just one more prompt, man.
Just a few more tokens.
Illia, you're doing fantastic work in the space.
Thank you so much for what you do.
If someone wants to get started with IronClaw, where should they go?
You can go to agent.near.ai and just launch it from there.
Amazing. I'm definitely going to check that out.
Bankless station, you know the drill.
None of this has been financial advice.
Of course, crypto is risky.
You could lose what you put in, but we are headed west.
This is the frontier.
It's not for everyone.
But we're glad you're with us on the bankless journey.
Thanks a lot.
