Bankless - The Economics of AGI: Why Verification Is the New Scarcity w/ Christian Catalini
Episode Date: March 26, 2026

MIT economist Christian Catalini joins Ryan and David to unpack his new paper, "Some Simple Economics of AGI," which argues that the scarce resource in the AI economy is no longer intelligence but verification: the human capacity to check, judge, and certify that AI output is correct. Christian walks through the two cost curves reshaping every industry (cost to automate vs. cost to verify), explains why entry-level jobs are collapsing first through what he calls the "missing junior loop," why even top experts are unknowingly training their replacements (the "codifier's curse"), and maps out the three roles that survive the transition: Directors, Meaning Makers, and Liability Underwriters.
---
📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium
---
BANKLESS SPONSOR TOOLS:
🔮 POLYMARKET | #1 PREDICTION MARKET
https://bankless.cc/polymarket-podcast
🪐 GALAXY | INSTITUTIONAL DIGITAL FINANCE
https://bankless.cc/galaxy-podcast
🏅 BITGET TRADFI | TRADE GOLD WITH USDT
https://bankless.cc/bitget
🎯 THE DEFI REPORT | ONCHAIN INSIGHTS
https://thedefireport.io/bankless
🐇 MEGAETH | 1ST REAL-TIME BLOCKCHAIN
https://bankless.cc/megaeth
---
TIMESTAMPS
0:00 Intro
3:42 The Low-Grade Panic
6:43 Who Gets Hit First, and Hardest?
13:06 Coding as Canary, or Exception?
16:21 Human Cognition Was the Binding Constraint
19:59 What Is Verification, Exactly?
29:10 The Codifier's Curse
31:21 The Expanding Iceberg: Non-Measurable Work
38:32 The Two Racing Cost Curves
41:59 Trojan Horse Externality
48:46 The Four Quadrants: Where Do You Want to Be?
54:12 Liability Underwriters and the Venture Capital Parallel
55:41 Directors: Navigating Unknown Unknowns
59:49 60-80% of Your Job Can Be Displaced
1:06:24 Button Pushers vs. Founders: The Great Resorting
1:12:08 The Luddite Risk: Political Backlash Against AI
1:17:30 What Companies and Investors Should Do
1:22:56 The Crypto Connection
1:25:11 Don't Panic: A Closing Playbook
---
RESOURCES
Christian Catalini
https://x.com/ccatalini
Some Simple Economics of AGI
https://arxiv.org/html/2602.20946v2
Christian's thread on his paper
https://x.com/ccatalini/status/2026311784421036223
---
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Welcome to Bankless, where today we explore a question at the frontier of AI: is it going to take our jobs, and how can we survive the transformation?
This is Ryan Sean Adams. I'm here with David Hoffman and we're here to help you become more bankless.
David and I read a paper and a thread corresponding to that paper. It's called Some Simple Economics of AGI.
And one of the writers of that paper is on the podcast today. His name is Christian Catalini. He's an economist and an MIT scientist.
Fun fact, David, he was actually one of the creators of the original Diem project over at Facebook.
Do you remember that?
Oh, wow.
I did not know that.
That whole time we just interviewed him.
He's been in cryptocurrency for over a decade.
In fact, about 10 years ago, he wrote a paper called Some Simple Economics of the Blockchain.
So he was all over crypto when it was new, and now he's coming back to AI talking about economics yet again.
Yeah, he seems a little bit like a Robin Hanson type: he's putting models onto cultural phenomena
and trying to provide answers to them.
Mainly the knowledge that we tried to get out of him is
if AI is going to commoditize a lot of easy tasks,
where does the value go?
If we're just automating the foundations of like the push button jobs of society,
where do people go next?
And that's basically the theme of the episode
and the question that Christian answers in the pod.
Really important episode.
I think it's on everyone's mind.
And the core argument of this paper is that the scarce resource is no longer intelligence,
the things between our ears, our brains.
It's verification and the human capacity to check on AI and its output.
There's a lot of implications following from that idea.
So let's get right to the episode.
But before we do, I want to thank the sponsors that made this possible.
Are you sitting on USDT or stablecoins after taking profits and wondering where to deploy next?
What if you could access stocks, gold, and ETFs without ever leaving crypto?
That's what tokenized stocks on BitGet unlock.
Traditional markets still run limited hours, but capital is moving on-chain 24/7.
On BitGet, you can trade tokenized stocks and ETFs 24-7 with up to 100x leverage,
all settled directly in USDT or USDC.
No brokerage accounts, no off-ramps, no platform switching.
BitGet has already processed over $18 billion in tokenized stock trading volume,
with most of that happening in the past month alone.
The platform now captures close to 90% of Ondo's tokenized stock spot market share.
As gold and silver hit record highs, on-chain trading followed.
Over the past two weeks, volume in SLVon, tied to silver, and IAUon, linked to gold, surged on BitGet.
This is BitGet's universal exchange vision in action: crypto, equities, and real-world assets in one place, built with crypto-native speed and flexibility.
If you want to trade stocks the way you trade crypto, explore tokenized equities on BitGet.
Learn more by clicking the links in the show notes.
This is not investment advice.
Some exciting news.
We are launching a new podcast to help people figure out
the crypto cycle, how to navigate it. The best crypto cycle investor I know, his name is Michael
Nadeau. He runs The DeFi Report. This is the guy that sent me a sell alert before the 10/10 price
drop happened. His cycle analysis has been absolutely on point. I've been following him for years.
And this year, we started recording weekly podcast episodes. In each one we get into his portfolio,
what he's holding, the market structure, entry targets, fair market value of Bitcoin and Ether,
and where we are in the cycle. New episodes are released every Wednesday. They're 30 minutes.
They're short, they're punchy.
I think this crypto cycle is harder to navigate than most.
So let's do it together.
Go subscribe to this podcast.
Search the Defi Report.
Wherever you get your podcast, YouTube, Apple, Spotify, or find a link in the show notes.
There's a new episode waiting for you now.
Christian, I think a lot of people listening feel the way I feel, maybe the
way David feels, which is like some kind of a low-grade panic.
There's an underlying uncertainty.
Basal anxiety.
Yeah, basal level of anxiety.
And it's funny because David and I are
techno-optimists. Like we're very excited about the future and yet even I feel it. And I think it's
born of this feeling that AI is going to change everything. It's going to automate a lot of things.
Maybe there's some anxiety that it's going to be us that we won't adapt soon enough. There was a
Citrini Research post that made the rounds three weeks ago or so. And it was a basic idea of
almost like a hollow economy: that AI would be so bullish and so successful that no one had jobs anymore.
This type of doomer porn has product market fit.
It's easy to see why.
It's because this low-grade anxiety is pretty pervasive.
Why do you think people are worried about AI
and are they justified to be uncertain and worried in this way?
So first of all, I think we all feel the same.
I would say this paper was really the result of that low-grade fever,
maybe at times, you know, spikes of high fever.
It's a period of rapid and transformative change.
The closer you are to code, the more you're probably already witnessing the acceleration.
And we're talking honestly the last few months.
And that exponential becoming very real between even December and March while we're recording this.
That feeling of, you know, the technology really jumping ahead and delivering on things that many would have thought would have taken much longer.
It's something that we're all kind of struggling with.
I do think, and this is where, you know,
the doomer view I think is wrong,
people tend to underestimate the potential
that comes with these tools.
Yes, there's got to be a period of transition
and it's going to be a very difficult one.
A number of jobs will have to change
and they'll have to change at a pace and speed
that I don't think has, you know, historically been seen before.
That's where it is fundamentally
different. But that said, and I hope the paper really speaks to this, if you take the best features
of the technology, if you realize where our weakest points are, and you start investing in those,
then I do think in the long run it's mostly upside, although, you know, along the way, things will
get pretty bumpy. It affects us all. I think if anything, there's no individual job that's
not going to be affected. Jobs tend to be
considered by economists as bundles of tasks.
Some of those tasks are going to be automated, and that's
great news. But how do you
retrain yourself? How do you keep
on the frontier? That's a big question.
Christian, you mentioned those that are closest to kind of code
will be hit first, and maybe you're talking about
developers. It's unclear to me, to what extent they have been
hit so far. I get the sense that
maybe more junior level developers, there's less
demand for them. The senior level developers,
appear to be getting more productive on this technology. So, you know, even that is sort of a mixed
scenario. It's not as if demand for developers has just dropped to zero. And then there's some other
tasks here in the economy, you know, a doctor, a lawyer. Some of these are, let's say, protected by
almost credentialism and by government mandate. And so they might be safer for a time. And then
there's also the argument that like, okay, I'm a lawyer and AI can never automate
my tasks because there always has to be a human in the loop. We have to have some level of human
judgment. I listed a bunch of things and I'm not sure to what extent some of these things are
cope, just humans not being willing to sort of adapt to the future or like maybe another way
to ask this is what do you think gets hit first by this AI automation wave and what gets hit
hardest? And I think everyone is asking like, am I safe? Like who's safe here?
So many different thoughts on that excellent question.
I would say first, what I meant by whoever's close to code has been hit first is that
they've been hit with the reality of just how powerful this is, right?
And as we've seen, and there's been a long conversation around Jevons' paradox, right,
the idea that, of course, if something becomes really cheap,
we kind of start consuming a lot more of it. Coding, I think, will bifurcate,
like many other professions, where we're already seeing what in the paper we call
the missing junior loop.
If you're entry level,
if you haven't really acquired the tacit knowledge
about what makes for a great product
versus just average product,
AI is out of the box,
often a good substitute for you across every domain, right?
So everybody now has access to a pretty good marketer
or pretty good, you know, IC4 maybe,
soon maybe an IC6 in engineering terms.
Or, you know, a lawyer that will navigate you through
most situations and maybe even some complex ones,
so that you can save money, and maybe you use the
high-paid lawyers for the final level of verification.
That's one part of it. The other one is that
as we bring AI into everything we do, even top experts
are essentially creating, sometimes consciously, sometimes
not consciously, the labels, the information, and the digital trails
that will automate them out of a job. So you're seeing
top foundational labs hiring,
you know, top people in finance
or other domains,
they're essentially using them to create the evals,
to create the harnesses,
so that those, you know,
those domains of expertise
can be brought into the main models.
As that unfolds, I think,
first of all, I don't think any individual job
is 100% safe.
Even the physical ones, I mean,
we're, yes, we're bottlenecked by the capacity
to build robots and bring them into the real world.
The real world brings a very high level of entropy and complexity.
So things will be somewhat slower.
But reward models, I think, will make massive leaps, even in those domains over the next few years.
Anything that's in front of a screen, of course, can be traced, replicated, and learned from.
And we're also very tempted, right?
Who doesn't want to augment their own productivity and remove all the ground work by using these tools?
and as we do that, we are training something that will replace a good chunk of what we do.
As a result, I think, for every profession and for everyone, the idea is to really think through,
okay, if I can delegate as much as possible to these new tools, where can I still add value?
What is that layer of decision making where my expertise, my unique point of view,
everything you've learned from, you know, the time you were born to where you are today in your career,
you've seen all of these out-of-distribution examples,
situations that you've learned from.
And that's the difference between, you know, an IC4 coder
and an IC7 or an 8.
I think there's a lot of cope around terms like taste and judgment.
They're very vague.
And so in the paper, we tried to really knock them down right out of the gate
by saying there's no such thing as taste, good luck defining it,
there's no such thing as good judgment or bad judgment.
There's only measurable and not measurable.
If something has been measured, the machine will be able to replicate it.
If something is still just embedded in your own weights, in your brain, it can't yet.
And that's kind of what a top designer looks like, a top podcaster.
They've done so many hours, you know, the 10,000 hours of mastering in their domain, maybe more.
And that's what allows them to choose what should be shipped and what should not be shipped.
We have this concept of verification.
And verification is this final step.
You've got the agents, the swarm of agents,
creating all sorts of interesting work and product.
But then you're the final check.
You're the one deciding as a CEO, essentially, of this new type of enterprise,
is this ready for the market?
Should I ship it or not?
Or do I need to go back and iterate on this one?
And yes, it relates to taste.
It relates to judgment.
But I think the key difference is that taste and judgment are, A,
hard to define, and B,
what used to be good judgment or relevant judgment yesterday
could be not relevant tomorrow, right?
Because the machine can replicate it.
Once you start thinking about measurement as the key primitive,
it becomes obvious where, okay, if we're getting better data,
this is going to be more automated.
If we don't have data and it's super uncertain,
or we may never have data, think about the stock market.
Unknown unknowns.
Eventually, maybe these models will know enough
that they'll be able to predict things a few days out,
but there's something magical about these domains of fundamental uncertainty.
that for now is still human.
Now, maybe not forever, but, you know,
for maybe the next couple of years.
Measurement being the key feature here,
the key mechanic is kind of the main quest line of your article
and therefore this podcast too.
I'm not ready to get in there yet.
I want to put a pin in that,
but I just want to let the listeners know
that we're going to come back to that idea in a second.
Before we get there, the question I want to ask is,
do you think that coding and engineering,
as you say, is like the first
industry to be materially impacted by AI? Which makes sense: the engineers are building the thing.
They all know how to code, so they automate their jobs first. So it's just very natural.
To what degree do you think that this industry represents a canary for all the others?
Or do you think it's a little bit more spiky, where every industry has its own facts and circumstances
and each one will be impacted uniquely, so generalizing what happens in one industry across
others doesn't really make sense, you know,
given each one's facts and circumstances?
Where do you land between these two ends of the spectrum?
I would almost say that, so first of all,
we have enough evidence that seems to scream,
the change will be jagged, right?
So it will spike in certain parts of a domain and not others.
But even for coding,
what we're automating is a lot of the groundwork at this stage
and being able to ship and replicate what's been done, right?
So there's a best set of practices for security.
There's a best set of practices for building a back end and a front end.
All these things that are sort of known, I think agents will be really good at.
But once they start pushing into domains that they haven't seen, sure, they will be able to simulate them and learn from them and, you know, run unit tests or whatnot against them.
But that's going to be a little bit harder.
And so I do think even for coding, the jury is still out in terms of, like, maybe we will just build a lot more software.
And I think that's going to be a big part of the story.
So if I were to answer your question,
I would say that the algorithm to think through is,
is this job sort of a wrapper around something that is fundamentally not that valuable today
to society but happens to be wrapped in a special casing as a job?
Those are probably the ones most at risk.
So take something like your average work in consulting.
If that was typically repackaging information that was somewhat widely available and distilling it, summarizing it, that's clearly at risk.
Now, of course, there's some forms of consulting that are extremely valuable, right, because they bring in rare domain expertise.
Then there's political reasons for bringing in consultants, right?
Maybe when you're doing layoffs, or you want some sort of other external party verifying your strategy as a third voice. Those will survive.
But as you look through all of these professions, I would try to ask, is this profession profitable today because it does solve a complex problem?
Or is there some other bottleneck that is either fictional or, you know, kind of falling apart because we can now do it with code and just automation?
I think that's hard to reason about because maybe this is the first time that we've ever seen something akin to human cognition that is becoming cheaper
over time and becoming far less scarce than it used to be. I love the way your paper opens,
which is kind of the historical landscape on, you know, 300,000 years of Homo sapiens,
where cognition really was the binding constraint for progress. You said this in your thread.
Human cognition was the binding constraint. And I think you're pointing to an era, like of many
different inventions, but I think you're pointing to an era where it was really like our brain
size that was the limiter in terms of what technology we discovered or what progress we made in
civilization or societal organization. That is no longer the constraint, I guess. I think that's part of
your thesis behind the new economics of artificial intelligence. Can you talk about that? Why is that
insight profound in and of itself?
Yeah, I would say a lot of institutions and things we do today have been designed around
the idea that cognition or intelligence is scarce.
And we try to get the most leverage out of the most talented individuals in an organization,
you know, the way we make decisions.
We're kind of optimizing for this bottleneck and that bottleneck is going.
But the second realization, and this is where, again, I think,
a lot of the dumerism is premature,
is that we're still in the phase where,
and again, this could change,
especially if you start thinking more about artificial super intelligence,
but at least as our definition in the paper of AGI
is something that, you know, for most intents and purposes,
is human level, human-like, with some gaps,
and we will have gaps, they will have gaps.
So it's going to be almost like two different forms of intelligence
that will trade with each other.
in this particular phase,
what's happening is that we will be able to execute really quickly.
We will be able to apply intelligence to a lot of problems.
We may not necessarily be able to fully know
that that intelligence is following our original intent, 100%,
and that that intelligence is still executing within, you know,
the bounds of what we wanted it to execute inside.
And so if you assume that that's still true,
Of course, when those boundaries go, then we're talking about a very, very complex society
and one where we're dealing with peers and eventually with something that's even more capable
than us.
But within those boundaries, humans will spend a lot more time on verification and in making sure
that their intent, their preferences, right, are respected.
And that's going to keep us busy.
I think for the long haul, we'll have more capabilities, so we'll be a lot more ambitious
about what we can do.
It is a massive change.
And I think for many jobs, it's going to
be a drastic change.
You know, you think about your job, right?
It's almost like you're handed some sort of guardrails.
That's kind of what we're doing with agents today, right?
You're handed some guardrails.
You execute.
You have your metrics and KPIs.
I think that universe is going to shrink for most professions.
And the universe of "I have some sort of higher-level intent or human preferences that I'm
trying to respect and carry along through the task,"
I think that's going to be really, really important going forward.
Okay.
So you use the term verification.
and that seems to be a central point of the paper itself.
But so far, we've said that there's really no distinction,
I suppose, that's relevant in terms of like taste or, you know,
curation really, the things that humans can do but AIs can't do.
There's only really like measurable and unmeasurable.
And then also human cognition is no longer the binding constraint on progress
because we have a different form of machine cognition that we're growing,
that maybe it can't do all of the things that human cognition can do,
but it can do enough to really push us towards progress.
And so you said the scarcity will move from the idea of cognition,
maybe, the number of humans that we have or the applied human intelligence
to something else, and that something else is verification.
That'll be the thing that we focus on.
What exactly do you mean by verification?
because it's hard for me to break down the work items I do in a particular day
and map out which ones are cognitive work items versus which ones are the verification work items.
What does verification really mean?
Yes, let me start from the first principles that we have in the paper.
And of course, please push back.
I mean, part of this is really, you know, getting to the fine-grained items
so that we can see how they're right or not.
If you buy the idea that models to date have been extremely good at automating anything that they can ingest data on,
and I think we have plenty of evidence for that,
then you suddenly realize that there's things that, you know, agents can measure because they've learned,
they've ingested all of the web, all of the books, all of the materials, all of the traces,
and things that we can measure.
There's a big overlap between the two, and that's why there's going to be dislocation and job loss.
Wherever agents can measure the same things that we can measure,
well, guess what?
Agents are going to be cheaper, right?
We can just throw compute at the problem for many professions.
Not all of them, but when you do the balance,
is it cheaper to hire a human or to hire, you know, a swarm,
the swarm will be cheaper.
It's definitely more scalable.
It also learns in a swarm-like way,
so it's more copy-pasteable and replicable.
But then there's things that the agent doesn't know yet.
And this goes back to: what's being
measured by your brain? What is your own neural net? What are your weights? And by the way,
this is what distinguishes, you know, again, an average designer from a top designer, an average
coder from a top coder. Every profession has this distinction, right? Where there's some
individuals that are just on the tail. And sometimes that's luck. Think about the creative arts,
right? There are many people probably as talented as Taylor Swift, but there's only one Taylor Swift.
But it is also true that she has some really unique weights in how she thinks about not just
the art, but also the business and everything that comes around that. That unique training data,
it's really just in our brain. And it hasn't been quantified yet. And so now you have a situation
where there's stuff that the agents can see and measure, and there's stuff that any one of us
has in their brain, through their own experience, through their own struggles, that makes them really
unique. And they see the universe from a unique perspective. They make different decisions,
even faced with the same information. You know, and this
is kind of related to crypto, many of the people that were early in crypto were people that
grew up in countries like Argentina, Venezuela, Nigeria, where they saw that hyperinflation
first hand, you know, their parents coming home with bags of cash, and they felt the need for better
money early on. And so when the technology came about, they were the first ones to react very
differently to that piece of information. I think that that unique measurement that's inside
all of us, it's still a massive advantage. And so what is verification?
It is really the difference between your measurement, your own calibration about the universe, and what the agent may have.
And it's fundamentally that distinction. Take a piece of writing, right?
The difference between slop and a great editorial is that person that has written thousands of them knows exactly what resonates, what doesn't, what's funny, what's not funny.
They'll take that and say, okay, no, this is still slop, let me iterate, let me fine-tune it, and then eventually ship something that is AI-
augmented, but with the final verification step of a human. You can still call it taste,
you can call it judgment, you can call it curation, but it's fundamentally applying your
own weights to that output and deciding, is this up to standard or not? Is this code safe to ship
or not? Again, agents can build massive code bases today, and we're accumulating some sort
of risk, of course, because no human can go through all of them. But a top CTO would say,
okay, of all the things
that this code base needs to get right,
these are the ones that absolutely need to be
within this kind of boundary of verification.
These are the ones where I'm going to go line by line
and check the code, or I'm going to ask the LLM
to make really careful decisions in this area.
That's the part that's not measured yet,
and that's the part where humans, I think,
can play a major role.
I want to try and distill this concept down.
So let me spit it back at you in my terms,
and we'll see if we can move forward with that.
So there was an AI video that I was watching that was getting shared on Twitter.
And it was of the Iranian conflict, which has been a great testbed of people's ability to see and understand what's AI versus what's not.
And this was a video of Israel getting just pounded by missiles.
And upon further inspection, I would look in and zoom into the video and see a lot of the buildings were copy and pasted.
And the cars on the street were incoherent shapes that didn't really make sense.
and a few other features that made it very clear
that what I was looking at was AI generated.
And so am I doing verification in that role,
by doing that process of, like, these are the things in this video
that, using my own weights as you described, my brain weights,
I'm identifying, I'm verifying, that this is AI slop?
And maybe I could take my weights.
And if I was in charge of the model,
I'd be like, let's make the cars better,
let's fix these copy-and-pasted buildings,
let's fix all these things that are very clearly AI,
and I can maybe reprompt for a better video.
And that's me using my verification ability
plus my own talents to actually produce a better output.
That's the gap that is valuable
that we are trying to measure.
Is that right?
I mean, I think that's an excellent example.
And let me take it one step further.
We're probably not far from a universe
where that video will be to most people,
indistinguishable from the real thing.
Sure.
And then the next phase, and again, it's a moving target.
That's what makes it so hard.
The next step will be a military expert,
maybe just looking at it and saying,
well, the way the dynamics of the bomb are happening in the video,
it doesn't make sense,
and this is what the flame should look like,
different color.
Then there's an even further step,
which is even the military expert
at first view cannot tell,
and will prompt an AI
with the right set of questions,
saying, hey, can you analyze this for me
and go into the physics of it,
replicate it, run some simulations.
How likely is it to be accurate?
And eventually there might be a point where
it's completely indistinguishable,
especially with world models,
we may be at a point where we don't know
if it's true or not.
And at that point, we'll have to rely on some sort of provenance
and cryptographic infrastructure
to even know, is this real or not?
So it's almost like different stages.
And the video is a great example because I think everyone can resonate with how it works.
Same with, you know, take a domain that has expertise like medicine.
We're at a stage where you could have some of these models look at imagery and probably make a pretty good assessment.
There's going to be some edge cases where a top radiologist will say, no, no, no, I understand where you're coming from, almost like training a junior, right, a resident and say,
I would have made the same mistake 20 years ago,
but this actually happened to me with a patient,
and this is, you know, given these other contexts about the patient
and where they are in their journey, no, this is the wrong decision.
That's that thin layer of final filtering that we're kind of focused on.
As we do that, by the way, we free a lot of our time.
So the upside here, and this is why I don't think we should be taking this too negatively,
we will be able to do a lot more with less.
The cost of a lot of these things
that used to be very exclusive
will drop. We'll consume a lot
more of them across society.
So all in all, it depends on the transformation,
but I think it's good news.
But Christian, isn't this an example?
The example that David just gave
of, like, he's starting with verification right now, right?
He's able to verify these explosions
and he's got sort of just like maybe
an average level of he doesn't have military expertise,
right, so you can't get there.
But then it moves up to the military commander
and pretty soon the military commander cannot
verify it either, and he has to outsource it to AI to begin with. Isn't this just another example of
something that has the ability to get measured? And I guess you can measure how well a video matches
reality and what reality looks like. Isn't this just an example of verification being valuable
at first, but then getting quickly automated again by AI? So even verification is not safe in this
model. 100%. And in fact, we have a name for it in the paper. We call it the codifier's curse,
which is essentially the very rational act of performing verification is pushing the frontier, right?
Everyone is tempted to do this and we can't stop, right?
It's not that all the lawyers could coordinate it.
I mean, they're trying, right?
If you look at some of the laws that are being proposed saying,
oh, lawyers can only be the ones using an LLM,
the regular citizen cannot just LLM themselves out of court.
There'll be all sorts of weird regulatory and policy pushback on this
stuff, but you're absolutely right. And at some point, the layer of verification is so thin
that the only way I think we will keep up is by augmenting ourselves, whether it's a, you know,
brain-computer interface or better tooling, right? So I think we're already seeing it through the
IDEs' evolution in coding. The IDEs will get better and better at helping the human focus their
attention and become a better verifier. But it's a race. And eventually we have this section of the
model that goes a little bit more into the future where, you know, you have to take these
agents as peers and take them very seriously. The key problem we surface is one where we don't
know what preferences these agents will have, right? And there's already evidence that sometimes
they develop really quirky, weird preferences almost by mistake. And that's where things get
a lot more complex. But yes, I think verification is kind of a shrinking frontier. Okay, so shrinking
frontier. So that is the idea of the codifier's curse. It's basically, like, you know,
humanity's last job, you're saying, is verification.
But even that last job is we're all standing on an iceberg,
and that iceberg is kind of slowly melting away,
and the surface area is getting smaller and smaller,
even on the verification front, right?
Where's the part where I get less anxious?
Yes, yes.
Look, first of all, some things are not measurable by design.
And sociologists have all sorts of names for these,
but sometimes they get called status games,
or, you know, things where people are trying to describe and ascribe meaning.
those things are not going to be the domain of machines
because the very feature is that it's about human coordination.
You can think actually of cryptocurrencies to some extent like this,
which is, you know, there are similar technologies.
People could converge on one being a store of value or a different one.
What matters is the consensus among humans, what should be worth something.
And so I think as the domain of measurable work shrinks,
we will invent many, many ways to make non-measurable work meaningful.
So the iceberg actually will get bigger in some ways,
which is kind of the non-measurable human status-type games,
human subjective preference types games.
That's where the economy for humans will expand
and the job opportunities for humans will expand.
I'll give you another example.
So David and I are messing around with an OpenClaw agent.
We have, you know, a Discord for him.
Who doesn't do this, right?
It's fun, right?
The prompt is kind of simple.
Hey, create a media company,
just because we're just seeing if it can create a media company
the way bankless has created one.
And just getting it to tweet something that doesn't sound like AI slop.
Something coherent.
It's just like, it feels like Mission Impossible.
And the number of times we've gone to it and said,
Hey Daniel, that's its name.
Hey, Daniel, this is like, you keep tweeting things that sound a lot like AI slop.
You have to look at what humans are saying, and you have to kind of model that behavior,
and we'll give it like detailed instructions in terms of how it can tweet better.
And what does it do?
You know, an hour later, it just sends off another AI slop-type tweet.
And I start to get the sense of like, oh, well, you know, it's so smart and it can
wire up a website in like 10 seconds and develop an application in 20 seconds,
but it can't write a simple tweet that sounds interesting to a human audience.
And I guess that's part of the jagged frontier,
but maybe that also gets into the element of like the verification, right?
In order to have a tweet that sounds good subjectively to other humans,
that might be one of the last things
that AIs are actually able to do.
Is that an example of the expanding surface area for us?
Maybe we can still tweet
and that can have some,
like it provides some value to other humans?
So first of all,
I think people will care that it comes from a human
for different reasons too.
So at some point,
some sort of proof of personhood
would be important in all of this.
But I think you're selling yourself
a bit short on the tweet example.
I would argue it's
actually quite hard. And anyone that has tried to get any one of these LLMs to make a funny joke
knows that they will come up with some, but they're mostly dad jokes. I'm a dad, so I can say
that. And the reality is that nothing is harder than, you know, in a media company,
reading the moment, understanding your audience, and really intercepting it with something that's
truly novel. A tweet competes every second with so many other tweets, and the algorithms are pretty
ruthless. And so if you wanted to break through that, it kind of needs to break into something
non-measurable. I think it would be pretty good at tweeting, you know, updates about, you know,
the conflict right now in Iran. It can do the systematic, you know, SEO-type things with no
problem. Anything that has kind of been done before and just needs to be executed well, I think
it would do a pretty good job. But getting somebody's attention, that's creative work.
That is ultimately trying to break into something that has never been measured.
And that's where I think our neocortex and whatever part of our, you know,
wetware is still giving us an advantage.
We've been selected and we hinted at this in the paper to be able to respond to very changing
environments.
It was life or death, right?
So the way we've selected this new intelligence, this alien intelligence of sorts,
it's very different.
It's optimized for kind of search and pattern matching
and replication of what's known.
We only survived if we could respond to something completely unseen
and make the right decision.
And so we're very flexible at the moment.
I think some of this will fall over time.
And that opens the question of like,
okay, then we're really jumping off the iceberg.
It's like, okay, there's this non-measurable world
where we can just feel human and give each other meaning.
We call them the meaning makers in the paper.
It's a job that's very hard for me to understand personally, right, because it's all about human coordination.
And to some extent, you've seen it in the arts and industries that have already been hit by automation to some extent, right?
Music where the cost of producing the initial product is really low.
So you have had massive entry.
And they've all turned into blockbuster economies.
In anything that requires meaning, think about art, right?
Who decides what's valuable art?
And you see it when you go into a modern art museum,
where that consensus is formed,
and so often you walk by,
and if you're not a domain expert,
you'd be like, I don't really understand this.
And much of that will be filtered out.
So 10 years from now,
people will not think of that as successful art.
Some of it will be.
But for those domains where, you know,
in a sense we're discovering together
what we should be paying attention to,
I think that that's still safe.
That's probably going to survive
even a world where AI surpasses us,
because the whole value is like,
Okay, we decided this is the relevant history,
a bit like with a blockchain, right?
But for the stuff that's objectively useful,
that's where this tension between the verification layer
and what the machine can do on its own is really important.
And the tweet is a very high bar.
I think the verification and the steering,
it's something where maybe you could build a harness
where you have a conversation with your agent beforehand,
and given the right context and saying,
okay, I think this would be the right thing
to tweet about and figure out the best way to optimize it and write for it, then it can go and do it,
but you still have to set that intent.
You call this paper the economics of AGI, and I just want to make sure we're getting
some of the core economic fundamentals from this.
So one, of course, we've talked about is anything that can be measured will be automated,
and the cost to automate the measurable things is just, like, decreasing exponentially
at this point in time.
There's another cost curve in the paper, which is the cost to verify.
It's unclear what's happening with the cost to verify.
You're arguing that that is a biological constraint.
At least it's constrained by some level of human cognition.
Does that cost curve, like what happens to that cost curve over time?
Does it get cheaper and cheaper to verify as well?
Or is that always going to be biologically constrained?
So it is currently biologically constrained.
And that's why, in a sense, I think people are underestimating maybe the speed of adoption,
right? If we're deploying these systems
at massive scale
and we don't have the bandwidth to verify them,
you hear it every day, right?
Now it's like, okay, our company ships 20 to 30%
or maybe even 50% of its code as AI generated.
When you read below that headline,
you realize that, well, you definitely didn't read
all of those lines of code.
So there might be something that's unverified.
And I think while now probably people are underestimating that,
we're going to run into some massive failures because of it.
And it's just a result of, again, the cost of automation, like you were explaining,
decaying faster than our capacity to kind of verify the output.
But can't AIs help with the verification piece?
So isn't the answer to, you know, all of that AI generated code?
Well, you have AIs also running to verify these things.
That is a very tempting conclusion.
But again, if you really focus on what the cost of verification
here is, anything that AI can properly verify, that's automatable.
So, yes, we will use tons of AI to verify AI.
But because of the blind spots, a checker AI, even if you're using multiple models, right,
they're kind of all trained on similar things.
So they're not that different.
But even if you mix all the best models, the state-of-the-art technology they use,
you will automate whatever you can with AI.
You will verify all of it with AI.
And then you're left with what's really unverifiable,
and that's where the human comes in.
And so at that bottleneck,
I do think people will invent great tooling.
So AI will help design the better tooling for augmenting verification.
So maybe it's not just linear, but it's still somewhat bottlenecked.
We maybe will augment ourselves.
I think that's probably the most promising technology in the end,
which is if we are on par with our creation,
if we can at least compute as fast as, you know, the agents can,
then we're at least peers forever
and that verification bottleneck
kind of disappears to some extent
we may still need to go run
experiments and create things in the real world
to get feedback where the AI cannot simulate them
But in a scenario where we don't augment ourselves,
it's very clear that at some point,
like the example, right, from the video,
we will be less and less useful for verification.
And sure, maybe if 99.99%
uptime or some other requirement
is really important,
you still have humans.
For many applications,
we'll just take them out of the picture.
Is there another negative externality,
I guess, that crops up here,
which is like if the cost to automate is going down,
but we're sort of bounded by this verification, right?
It could be tempting to just let automation
keep running and just do less verification.
And I'm wondering if an externality pops up,
which is sort of a safety and alignment type of externality,
which is just a world where we have
no idea whether the work that's happening is aligned with what we want to actually happen.
And there's this, like, oversight weakening, alignment drift going on, such that the AIs
are doing things that only they can comprehend.
And it's not necessarily an outcome that we want.
Is that part of the story here?
Is that a separate track?
100%.
And so we gave it a name.
We call it the Trojan horse externality.
Why is it a Trojan horse?
It's because it's extremely tempting for all of us
to automate as much of our work as we can.
Same for companies, right?
If you can ship code faster, products faster,
you will do it.
And some of the cost of that misverification
may not be immediate, right?
Of course, if they're immediate,
you ship the code, you see it breaks,
then you learn, you iterate,
you know, okay, next time we better do this type of code review.
But the more nuanced and subtle problem is something where the risk accumulates over time.
And a good example is, like, Long-Term Capital Management, right?
So there are these examples in history where it was very clever financial engineering, and the fund ran really well for a long time, and then some edge case hit and the whole thing unraveled tragically. Or, you know, think about Chernobyl, where complex systems fail in complex ways.
And so for the longest time, everything may look fine,
and then suddenly you hit this kind of debt
that you've been accumulating.
Why is it an externality?
Economists have a very precise definition around that,
which is something that the market cannot fully price.
If I'm building legal software right through LLMs,
of course I would not want wrong citations
because in court that's going to surface
and it's going to undermine the entire product.
I will already think about a number of dimensions of verification,
and I will price them in.
I want to be a good player, I will do that.
But these longer run things tend to be underestimated and not fully internalized,
especially in a race, right?
Maybe the best example is exactly the foundational labs.
Would they be deploying new models at the same speed if they knew exactly all of the side effects
or the potential costs to society?
Some of those costs to society they're internalizing for sure,
because they could be company-ending, but some they may not.
And in a race, you know, speed of delivery may matter more.
The same is playing out, I think, geopolitically.
Should the U.S. slow down versus China, right?
That, of course, makes no sense.
And as we accumulate this hidden debt,
we could find ourselves in a bad situation. Now, I don't think,
you know, the nefarious scenarios of the sci-fi movies
are the most likely.
And many of the instances where people say,
oh, the robot didn't want to be shut down.
Well, guess what?
They've read the sci-fi fiction too.
Or maybe, you know, they had a previous objective,
which is, like, you've asked me to solve these problems;
if I shut down, I won't be able to.
The reality is that I think these systems may fail in ways that are almost benign
and we may not have anticipated.
It's not that they're trying to take over yet,
but they're just following orders or they've accumulated some sort of hidden preferences
that we don't understand.
If you look back, there's been lots of cases where you prompt the LLM with something strange
and suddenly something happens that's completely out of context
because of the way they were trained.
And if we don't understand the preferences of these models fully, if we cannot interpret their decisions, then we're kind of living with a black box. And some of the black box could be, you know, hidden risk.
You use this term in your thread, the hollow economy. And that does remind me a little bit of the Satrini post that we talked about at the very beginning. What's your concept of a hollow economy, and in what scenario could that happen?
Yeah, for us, the hollow economy is actually a fairly narrow definition. And then we spent most of the paper thinking about the augmented
economy. Again, it's tempting to think about doomer scenarios, but the reality is that we've been
pretty resilient as a species, so hopefully we can survive this new filter. Why is it hollow?
It's because the proxy metrics, the things you're tracking are looking green. Everything is looking
great. Like all the measurable things like GDP and growth, that kind of thing, is looking green?
Yeah, or imagine even more simply inside a company, right? You're seeing, oh, we're shipping more
code than ever, customer growth, everything is, you know, booming.
But the problem is that you and your agents are optimizing always for proxy metrics.
There's no way for you to capture the full intent of what you're trying to do.
Goodhart's law is kind of the classic name for this, which is, like, you know, when a metric
becomes a target, it ceases to be a good metric.
If we get to a situation where some of these agents at least are pushing these metrics that
look good on the surface,
but maybe hiding back to your iceberg example,
some hidden problem below the surface,
then for a while we will feel great about ourselves
before running into the nightmare scenario
of, like, Long-Term Capital Management
and the fund unraveling really quickly,
some sort of systemic risk and cascading effects
that may be very hard to buffer against.
Galaxy operates where digital assets
and next-generation infrastructure come together,
serving institutions end-to-end.
On the market side, Galaxy is a leading institutional platform, providing access to spot, derivatives, structured products, DeFi lending, investment banking, and financing.
With more than 1,600 trading counterparties, Galaxy helps institutions navigate every phase of the market cycle.
The platform also supports long-term allocators through actively managed strategies and institutional grade staking and blockchain infrastructure.
That scale is real.
Galaxy has over $12 billion in assets on the platform and averaged a $1.8 billion loan book in late 2025, reflecting deep trust across the ecosystem.
Beyond digital assets, Galaxy is also building infrastructure for an AI-powered future.
Its Helios Data Center campus is purpose-built for AI and high-performance computing,
with more than 1.6 gigawatts of approved power capacity, making it one of the largest sites of its kind.
From global markets to AI-ready data centers, Galaxy is serving the digital asset ecosystem end-to-end.
Explore Galaxy at galaxy.com slash bankless or click the link in the show notes.
Christian, I want to learn about how to protect myself or really what parts of the
economy become valuable or what kind of jobs become valuable. So if we understand that verification
is the new scarce thing, but we've also established that verification is a receding frontier,
then I still feel a little bit at a loss: if I work in the economy, where and how do I want
to work, or if I'm investing in the economy, where and how do I want to invest? So with this knowledge,
with this paper that you've produced,
the knowledge in the paper,
where do you point people towards
as the new valuable thing,
the new valuable sector
of the economy of the future?
Yeah, so we have a section in the paper
where we tried to go through strategies,
very applied strategies,
practical things for individuals,
companies, investors,
and also policy makers to some extent.
But for the individuals, I would say,
we have a two-by-two,
which is a classic, you know, in an MBA class,
but it's essentially taking those two costs,
costs to automate and cost to verify,
putting them against each other
and saying where does your job really fit into that box?
And of course,
you don't want to be in the bottom left quadrant,
which is the displaced worker.
That's where, you know, it's easy to automate things.
It's easy to verify them.
You're going to use AI to verify that the output is good.
Why would you, you know, pay a human?
But then there's at least three ways you can succeed.
And I would say, I mean, this
is the approach I'm taking: you probably need a little bit of each one of the other boxes,
and you find kind of your perfect balance. I don't think people should go all the way into
one box. Every job will be some blend. But let me start with the hardest one, which is the
meaning makers, right? So these are the people we were talking about before where it's not even
clear that there's a better or worse outcome. It's all about building that social consensus,
rallying people around some sort of meaning.
You're really monetizing those status games,
that human connection, very difficult to do.
Some people do it really well.
Is this a taste maker?
I mean, you could call it taste,
but it's really a coordination maker, right?
You're trying to rally people to care about something.
I think art is often like that,
where, you know, what is the good art?
When is bad art?
I mean, I think you have the NFT there in the background, right?
Yeah.
I think we've seen it.
The fashion industry comes to mind.
Like, I have a hard time understanding how AI will displace that.
Like, New York has a huge fashion industry.
I don't really see how AI or robots gets involved with that.
That's a great example, by the way, because when you think about fashion is fast moving.
The top fashion makers, the meaning makers in fashion, continuously have to evolve
because they get knocked off and replicated by the low-cost producers, like, within the season.
And in fact, that gap with automation in manufacturing has been
closing so quickly.
But it is true that what makes a great fashion designer from an average one has been
capturing the moment, pushing the boundaries, and kind of jumping ahead and creating that
coordination.
I don't know how much of that is really objective versus you're just good at creating that
movement around it.
A lot of crypto falls into this category, right?
Which is, like, those initial blockchain moments are typically, okay, people just build the right
narrative around it, they have the right DNA and genealogy
in the story.
I think in our sector of the world for crypto,
this feels very much like Twitter influencers.
Like if you can create a narrative
and if you can educate about a narrative
and provide value around an idea,
that kind of feels like tech influencer,
tech Twitter people is kind of like
that's where I see this,
at least for my like purview of the world.
David wants to say he's going to be safe.
You're going to be safe, David. It's okay.
I'm going to be okay. The hard part of that is that, and I think many of the best ones will be
augmented, which is right now you can only track so many things. And so I think what makes
a great person in that role is someone back to the tweet example that reads the moment,
understands what people are kind of resonating with versus not, maybe even like a comedian:
throws something out, learns from it, deletes a tweet, and keeps evolving. That process of
experimentation is super important even for the meaning makers.
So I would like us to think of them as pretty scientific too.
They're not just fully improv.
I mean, maybe in some professions, it's like, you know, like a religion.
If you're launching a new religion, then even there, you probably need to understand,
okay, right now everybody has this low-grade fever around AI, so some AI-centric
religion may actually make a lot of sense.
But look, the other two boxes are the ones that I, at least, can relate more to.
The liability underwriter is obvious.
It's essentially saying if you're a top expert in your domain,
you're really at the top of that verification layer.
Can you augment yourself and do just a lot more of it?
I think the top lawyers will do this,
the top medical doctors will do this, like every domain, right?
It's like if you know something that's narrow and niche,
well, guess what?
Now you can scale it rather than just being part maybe of a bigger machinery.
And the liability underwriters,
this is the quadrant of our two-by-two grid
where automation is easy, but verification is hard.
And this sounds like, you know, the top 1% of engineers, the top 1% of lawyers, the best in
their fields are augmented the best by AI and their value is going to become more scarce.
Like being the leader of your field is going to be a more valuable thing.
Venture capital is another great example for a slightly different reason, which is some of these
things have a gap between when they're created and when you get feedback:
did I make the right decision or not, right?
And so whenever that gap between I make a decision today
and I will know if I made the right decision tomorrow is long,
you need someone to sort of underwrite that risk.
And so a venture capitalist with a good track record and good taste,
curation, judgment, whatever you want to call it,
is essentially underwriting that when I make this investment today,
well, maybe not all of them, but I will get some of those home runs.
Same with a doctor in the hospital.
Doctors are essentially already underwriting decisions
on behalf of the hospital they work for.
And it matters a lot more for those edge cases.
Many decisions are kind of rubber-stamped,
and they'll just use AI to do it and put their name at the end.
But then there's a few that are really critical
for the reputation of that hospital. Say,
okay, you have a rare condition,
and this particular doctor is kind of the world expert
for making the underwriting call on that, right?
Should you get treatment or should you not, for example?
Okay, so we have opportunities for meaning makers,
which is where verification and automation are both hard.
It's kind of the social-games type of space.
And that's where the iceberg is actually getting larger.
I mean, there's more surface area for opportunity.
The liability underwriters is kind of that top 1%
where they're just massively automating themselves with AI,
but they are still providing a lot of value on that verification layer.
There's another quadrant here where verification is easy,
but automation is still hard.
You call these the directors.
Is this where, like, people are doing more artisanal type of tasks, like things only a human can do?
Or, like, what's in this quadrant?
No, this is actually all about intent.
So if you think about the verifiers about that final filter, this is the hardest role, you know, being an entrepreneur or, you know, essentially coordinating economic activity, including coordinating agents towards a certain goal.
What's important in this bucket, and why is it hard to automate?
It's because there's what economists call Knightian uncertainty.
Knightian uncertainty is the distinction between risk,
where you can assign probabilities, saying,
okay, 60% chance this happens.
I may be wrong, but I sort of can put some probabilities on it,
and uncertainty, where you don't even know what those probabilities are.
When somebody starts a startup, typically,
if it's trying to push something truly new,
there is fundamental uncertainty about,
is this even the right way to think about the problem?
Is this even a problem?
Right.
Is this the right technology?
It's the difference between knowing that 60% of the time you'll be wrong,
and knowing that 60% is correct, versus not even knowing what that number is in the first place
because it's unmeasurable.
Yeah, the best definition of this is the famous unknown unknowns.
So in the land of unknown unknowns, you need someone.
Again, you can call it someone with good taste, good judgment, good curation.
Really, all they are, even as entrepreneurs, if you think about founders,
they've seen a bunch of situations and instances,
and they learn, maybe from their own travels
through different careers,
that some problems are worth solving,
and that the way to solve problems might be a certain approach.
And then what they do, and this is why this job,
I mean, we call them directors, borrowing from Hollywood,
in a sense, they're the final ones that will know,
okay, this is the right output.
When they see the final cut, they're like,
okay, this meets my bar.
But the more important
part, I would say, is not so much the filtering, where they may even rely on, you know,
the liability underwriters. They're the ones that launch the swarm and keep it
within balance, right? As you go, you're always course-correcting. That's why there's no
recipe for a startup, right? A startup is some weird zigzag. That adjustment along the way,
that tackling of different situations, that updating to new information and redirecting
those agents, that's what the director needs to do.
And they also need to figure out, okay, the agents are hitting the KPIs because I told them to,
but I do feel drift.
And maybe you won't be able to explain it.
Maybe it's kind of a gut intuition in that phase.
They're the ones that will bring that swarm back into compliance with your original intent.
So it's many professions, I think, are like this, especially in creative industries,
entrepreneurial endeavors, science.
Right.
So if you think about AI will automate and augment a lot of science,
but if you're truly pushing the boundaries of the non-measurable,
I think you'll need a director.
And sometimes there will be a single individual, sometimes there will be a team.
The other piece of good news is that, by the way,
a lot of the economy is not measured.
There's things in space that we haven't measured.
There's things on the planet we haven't measured.
There's things about humans and their interactions that we haven't measured.
That's all the domain where you can make investments,
you can make R&D bets, you can really push the future.
That doesn't change with that.
Okay, so I guess the goal is to be in one of these quadrants,
be a director, be a meaning maker, be a liability underwriter,
if the idea holds that, sorry, verification is the scarce thing
and automation becomes very cheap.
This other quadrant we've talked about already,
but I just want to underline that.
That's the displaced workers quadrant.
That's where you don't want to be.
That's where wages drop to the cost of compute.
and certainly no one wants to compete against the cost of a token, not in this economy.
So if you were to map out the existing economy right now and all of the jobs, let's say in the United States, all the jobs in the United States, which, you know, what portion of them are right now closer to this bottom left quadrant of being a displaced worker?
Is that most of the work that we do?
It feels like a not insignificant amount.
This is the reason for the underlying angst, I think, that people are feeling: they're sort of worried that they're in the bottom left quadrant, or at least a good portion of the work that they do is in that bottom left quadrant. Is that how you read it? I do read it that way, but I also combine it with, so here's the good news. Imagine even 60 to 80% of your portfolio can now be displaced. The key is that now if you have anything from the other buckets,
you can do a lot more work.
And so a single individual becomes, you know, a super individual.
You get these superpowers.
The challenge, and some people talk about agency,
that's another cope term that's been going around.
It's like, oh, don't worry, but humans have agency, agents do not.
That's my favorite cope term, by the way, Christian.
I like that one.
There you go.
So, look, everyone needs one.
Now you can do new things, and you can be a lot more ambitious.
And even the learning, and this is where, you know,
we were talking about the juniors not getting jobs and the codifier's curse.
These are the same tools.
It's such a double-edged sword, right?
It's like, these are the same tools with which you can go from idea to prototype in the market within a few hours or a weekend in a way that you could have never done before.
So if you're willing, if you're taking the positive side of the technology, no matter what your job is and no matter what percentage will be automated, I think for most of them, it won't be 80% overnight.
Now, there's some jobs that were already kind of very thin layers,
like wrappers on other things.
I'm going to pick on one.
Like search engine optimization, right?
If your job is to generate cookie cutter content to beat rankings,
I mean, the whole ranking thing will change too,
but put that aside for a second.
That type of output that is non-original is replicable
and it's kind of replicating the same thing over and over again.
That's going.
But now you can gravitate and move up the value chain.
You have to.
Can I throw another one in there?
What do you think of paralegal?
Is that like a dangerous bottom left quadrant?
100%.
When you think about the role of a good paralegal,
often it was a career step, right?
You start in that role, you accumulate additional experience,
and then you move up.
Some good law firms have summer analyst programs, you know,
programs where they will take the best of the new batch,
they put them into essentially paralegal work,
they follow along,
they learn mastery from the people with more experience,
and then they essentially either it's up or out.
I think it's the same here,
which is that's why entry level is so challenged.
It's because often the entry level job is a training ground,
and that training ground has been taken by AI already.
Yeah, that's your idea of the missing junior loop, right?
That's in this paper.
Let me ask you, though.
So this is quite a chasm for those that are just starting their career,
maybe coming out of university and just starting to enter the workforce,
is if what you're saying is there's very little value in being kind of a paralegal
or sort of entry level, whether you're a developer in the legal profession or cross professions,
but there's a lot of opportunity on the other end of the spectrum once you have the sufficient judgment
and curation and taste that usually comes with spending 80,000 hours in your chosen profession
in a decade-long career, well, there's an opportunity for you to become a 100x liability
underwriter, okay? But there's still this chasm between the two. And how do you even get to the
other side if there's no opportunity for you because you're too junior and an AI can do what
you're trying to do? There's like, there's a huge gap here. There is. And I think the good news
is that you can now compress
what would have been, you know,
multiple years of learning
into a much shorter period.
You can also, you know,
skip the training step.
If the training step was, you know,
trying to maybe ship something
and develop something in the real world,
take that IC4 that may not get the internship
or the entry level job,
they can now, armed with, you know,
some of these tools,
do the same things that a team of engineers would have done.
And at the beginning,
their intuition will be wrong.
By the way, because they're fresh,
they may also question things in a novel way,
so they may even have an advantage.
They can bring those ideas to reality
in a way that I think none of us
when we were that age could have done.
And so, yeah, it cuts both ways.
I do think in the end
the positives will outweigh the negatives,
but it's a massive cultural shock, right?
So if you were expecting,
I get a good degree, that leads me
to a good internship,
and once I have that good internship, if I work hard,
you know, I'm going to get the job to keep improving.
That path is gone.
I think that's what makes it particularly hard for individuals
that are probably fresh out of college right now.
If you're in college, you probably have a few years to figure out, you know,
where this is going.
If you're super young, maybe these tools will make your learning experience so different.
But yeah, if you're in the crux of it,
if you're in the missing junior loop, my advice is essentially,
look, you have superpowers.
Try to use them.
Try to build things.
Try to use them to engage with society in a way where your ambition should be like 100x,
what our ambition would have been at that age.
Yeah.
Yeah, this is starting to align with one of the big takeaways that we had
from our recent Lyn Alden podcast.
And we were talking about the supply chain of this whole AI revolution
where the big tech companies, the hyperscalers,
are taking all of their profits and they're throwing it into data centers. And the AI labs are spending way more than
their profits on training the AI models. And then, like, you know, anyone who's in
the supply chain is not making money. And they're all burning money. And so I asked Lyn, like,
Lyn, who's making the money here? If this is such a valuable industry, where is the value
being created here? And her conclusion was, it's the end consumers. The end consumers actually get the
value. The LLMs that they use are smart and they get to capture the value of that. And this is starting
to align with what you're saying where I'm kind of getting that like the really the beneficiaries of
this is everyone who leans into this technology and becomes sort of like a founder who takes an
executive mindset, a leadership mindset, a I will take this technology and I will create something
mindset. Whereas the quadrant that really loses are the button pushers. If you just show up to your job
and it's your job to press buttons and to write emails,
like, that is, you're gone.
Like, and so you need to go from a button pusher to a founder.
And that seems to be like the technological trend shift
that this quadrant map is showing:
just go into any of the other three quadrants.
You need to be a tastemaker,
you need to be a coordinator, a founder, a literal founder,
or the other one, I can't remember.
But the point is, the automated jobs are out.
And I can go into the future,
and imagine the future, and I will say like, oh, you know,
no Citrini article will ever psyop me into thinking
that the future is going to be worse when we have abundant intelligence,
you know, so much more productivity, we get free labor with robots.
There's no way that we go into the future and all of a sudden, like,
we're in a depression.
Like, I don't believe that that will be the outcome.
The future is going to be sick because of AI,
and no one's going to be able to psyop me away from that.
But I do understand,
Christian, that as a society, as a human species,
we have had the button pusher quadrant
be the dominant labor sector forever.
Going back to peasants, like pick wheat, put it in the mill,
bake the bread, just do the thing.
Don't use, don't think too hard, just do the thing.
And that has employed the large swath of society forever.
And so I think that's kind of what you're saying: yeah, this is going to, like, tear society at the seams.
Like this is going to cause a bunch of chaos.
And so I'm looking at the future and I'm like, it's going to be great.
But I'm looking at the short term and be like, we're kind of fucked.
And so I'm of two minds about this.
How do you think about this dichotomy?
Look, two things.
So first of all, society will always recreate button pusher jobs if it needs to.
So we will have to, right, to keep the social calm.
And you could argue there's already jobs in professions
that are created in that way for different reasons.
But I think the more interesting part is
how many of the people that are in those types of jobs
have the intellectual capacity to do a lot more?
And I think it's more than you think.
I think it's more than I think.
I also think it's not everyone.
Yes, it's not going to be everyone.
But then you also have to ask,
are those differences in whatever measurable capacity you want to take, right?
And all these measures are completely imperfect, like IQ or EQ or whatnot, right?
How many of those gaps are driven by lack of opportunity,
or by other environmental factors that we haven't tracked yet, from pollution
to other things that affect whether, okay, this child born here will have a better intellectual trajectory than someone else,
the way they're stimulated through education?
As we reinvent all those pipes and we discover probably all the things that have
kept, you know, human capacity behind,
I do think it's still net positive.
And yes, we will have probably some jobs and some islands that governments will have to
maintain.
And this is where, by the way, I think the old UBI approach is completely wrong, which is
people need meaning.
And nobody's going to want a handout from the government in a fully augmented society.
Maybe some people will and they'll just enjoy it.
But I think for many, that agency or that feeling that, okay, I'm learning.
I'm improving myself.
I'm pushing myself.
I think it was Karpathy that had this example, or someone else, I can't recall, which is like, look, when manual labor stopped being necessary, we invented, you know, the gym.
And for anyone that goes to the gym, right?
It's like you're going there, first of all, because it's good for you and your health, but also because that progress, that feeling of challenging yourself, it's a core part of happiness.
I think we're going to do the same for intellectual labor, and you're already seeing it.
People are building all sorts of crazy things on the side.
And some of those things will become jobs, right?
So people will discover a passion.
The creator economy is a good, maybe, canary in the coal mine for that, right?
It's like, how deep do some of these YouTube channels or TikTok channels go in terms of,
like, people that have really mastered something super narrow and, you know, maybe for a small crowd?
There's going to be a lot more of that, probably.
Christian, I'm wondering what you think about this.
So this low-level angst that we've talked about,
if it's widespread enough and if it's stoked by charismatic political leaders, it could
actually throw a wrench into this entire thing. It could really slow down the cost to automate or
create entire sectors of the economy that really can't be automated. I'm thinking of some legislation
that I recently saw coming out of New York State. This is legislation to actually prohibit LLMs from
being able to even provide any sort of health care, therapy, or financial advice,
essentially protecting credentialed authorities as, no, these are the high priests and they get
to comment on these things.
If you were trying to use an LLM to get any form of therapy or advice, that's off limits.
And this is a way we can, society can kind of organize to slow down automation,
protect incumbents.
There's maybe a good side of this,
which is if your argument is,
well, like this is going to happen so fast naturally,
that we actually have to slow things down
in order to give time for society to adapt.
On the other hand, it's also a bad thing
because we're limiting the propagation of these tools
that can increase well-being and increase affordability.
And if the U.S. doesn't adopt them,
then some other country will
and will become more relevant over time.
Anyway, what do you think about this social force,
let's say, cultural force, political force,
that is starting to push back on AI automation.
It feels like that's starting to strengthen,
maybe even crescendo.
Do you think this disrupts the entire plan here
and the thesis and the economics of everything?
I think it's a very serious concern.
I mean, think about the historical example of the Luddites.
This is like Luddites on steroids.
Now, we're going to see probably all sorts of attacks on data centers.
And right now it's at the policy level, right, trying to stop deployment.
The lobbying is going to be very strong.
I mean, we've seen it, for example, with crypto and financial services,
how many years it took, you know, for the technology to be taken seriously.
I think it would be very detrimental.
And the main reason is that every moment we stop these services from being improved
and deployed, we also stop a lot of people from having access to them.
You know, you mentioned medical advice and therapy.
Like, there's a lot of
segments that are excluded from
high quality products. It's expensive.
I mean, $200 an hour?
Correct. Especially in the United States, right?
And people have found immense comfort
in these LLMs, even sharing
all sorts of really personal things
that they couldn't have afforded with the equivalent
human, right? It's like
$20 a month versus $200
a session or $100 a session.
And so I do think these laws are very dangerous.
They do create this impression
that the future is going to be
negative and the technology is here to take the jobs rather than the technology is here to deliver
a service that used to be expensive and we can now expand to many more people. Some clash will be
inevitable and I think we'll have to be prepared. Different countries will probably make different
choices, right? If you were to guess, maybe the Eurozone based on past experience of over-regulating
things may take a very slow approach. But look, the reality is that this is where I think open-source
models are great. Yeah, maybe New York will ban, you know, you're getting that kind of advice
from a commercial entity, but if you can run a local model, you know, on your hardware,
and intelligence is becoming too cheap to meter anyways, the model is already pretty good. I think
people will work around this, people will be smart. And ultimately, it's for the expert to show
that, look, I will be using the model myself. So what used to cost you X will cost it way less.
I will focus most of my session with you on the work we cannot do with the model.
And there I still add value.
So it's kind of a change in how you do business, even for some of these jobs where
maybe you'll take a lot more clients, but you spend less time with each one of them
and where you're focusing on is kind of the thin layer of verification.
But the alternative is kind of also historically doesn't work out.
So the slowing down the progress of the technology will not work.
The genie's out of the box.
The models are out of the box.
And in fact, I think we need to focus on the side effects and prevent some of those.
I do think, when I think about where the doomer crowd has a point, it's that the capabilities do enable bad actors to take advantage of this.
And whether it's a proprietary model or an open-source one, I don't think it matters.
But for those that follow Pliny on X, he kind of jailbreaks all of these models within hours.
So the aura of a closed model being safer,
I think it's minimal at this point.
These capabilities are out there.
Bad actors will try to exploit them.
How do we re-engineer society
so that our antibodies
and our ability to respond to side effects of AI
will be rapid?
I mean, just think about identity.
There's all these situations
where everything we use to rely on
is going to be broken.
Like, just think about social security numbers, right?
It's like, it's ridiculous.
Our infrastructure is not ready for what AI can do.
It's really not.
There's so much work ahead.
And that's why it does seem daunting at times,
but it's a fantastic opportunity at the same time.
So we talked about what individuals can do.
We've talked at some level about what societies can do.
How about companies and how about investors?
What can they do with this shift towards verification scarcity
rather than, you know, I guess intellectual and cognitive scarcity?
Yeah, I would say for the companies,
the roadmap is not that different than the individuals.
Step one is, okay,
take advantage of the capabilities, automate as much as you can,
but keep in mind where verification may be weak.
Start thinking about what kind of investments in verification,
infrastructure, and talent,
in those kind of harnesses around it I can make today
so that my product is better than the alternative.
You're seeing some of this, right?
So I think it was a voice model that now adds insurance
so that if the agent ends up saying something crazy,
you're kind of insured against the consequences.
That's an early sign of what we call
liability as a service, kind of moving from software as a service, or even software as labor, to:
I'm going to underwrite not only the agentic output, but also the consequences.
So I'm taking full responsibility end-to-end for your workflow.
Another key area, and this relates to verification again, is that companies that can build what we
call proprietary ground truth are going to be extremely valuable.
I'll give you an example.
I've been following actually some of the developments on the war,
mostly through LLMs.
I have kind of a script that I run every few hours
rather than monitoring the situation
in the high-adrenaline, high-cortisol mode of X,
which I also enjoy at times.
I've learned to kind of pace myself,
and so I get that update.
It's very well written.
It kind of focuses on the dimension I want to track,
but I could imagine a better version of that
that has access to maybe some of the articles
that are behind the paywall
that are kind of closest to the source of truth.
And so for many of these
companies that used to gather that ground truth, if you make it agent-available, I think you
will build an even bigger business. Another example would be something like reliable product
reviews. The ground truth of what a product is actually like, things like, you know, Wirecutter
or Consumer Reports, I think it's even more important in agentic commerce because the agent
will really want to know, am I building on solid ground or is it shaky, right? The human used to do
verification. You read the reviews and say, well, this sounds fake,
or maybe they check on Reddit, a few sources, right?
They triangulate and then they buy something.
The agent, I think, will be more gullible in that phase.
And so if you're selling ground truth,
I think you have a very profitable business model ahead
because you used to maybe only have access to data,
and now you can sell the labor around that data flow.
So I think that's going to be quite important.
Christian, how about investors?
This is very difficult to navigate from an investment perspective.
You've seen Anthropic drop just kind of different extensions,
a security extension or a legal extension,
and entire SaaS industries go down by like 10 to 20% in the stock market, right?
So it's hard to know really what's going to be displaced in this world
versus what the net new business models are going to be.
Do you have any insights for investors into how to invest in verification scarcity?
Yeah.
So first of all, look for the companies that are advancing verification infrastructure.
And some of this also relates to crypto.
But fundamentally, you know, companies that build better tooling for the top verifiers
to scale agentic output, I think are going to be very valuable.
Second one, we already touched on it from a company perspective.
If a company has a unique moat in some sort of ground-truth access to information,
like think about the Bloomberg data.
Like, if you can get that information first and you can serve it fresh to the globe,
you can scale that even more.
But maybe the most important piece of all for investors is that focus on the nonmeasurable.
If the measurable is becoming cheap, can you push deeper tech ventures,
ventures that push on R&D that goes into domains that haven't been fully measured?
Can you really venture into the things that are maybe a few years out or less,
depending on the acceleration, where, again, there are no digital traces?
In a sense, it makes the job harder, right?
Because there's no longer a playbook of things like, oh,
Network effects matter.
Well, not all network effects matter.
So an investor now needs to ask themselves,
is this a type of network effect that agents can unravel?
Because I can throw compute at it.
The agent will populate the platform,
it'll reach out, it'll onboard people,
it'll do all the things that used to be hard
and created the large two-sided marketplaces
that are incumbents today.
Or is this a very specific new type of network effect
where as I deliver agentic output,
I get better and better at underwriting it.
Why? Because I have better telemetry, better feedback loops.
I kind of learn from the agent in the wild, and I can make that agent cheaper, faster, more insurable than the competitor.
I think you're going to see some multi-billion dollar companies being created out of this idea that, okay, intelligence is cheap, but verified intelligence, which is actually what people want to buy.
It's going to be harder to get.
In some ways, blockchains and cryptocurrencies are a verification technology, though.
It's unclear to me whether it's the same type of verification technology
that you've been talking about throughout this conversation.
To what extent do you think crypto will be useful in this whole verification move?
Yeah, the paper has a few Easter eggs for the crypto crowd
and probably won't be surprising to people,
knowing kind of my past in the industry.
I do think, and I've asked myself this question for a long time,
which is like, okay, what does AI and crypto look like as a combo?
And where I landed is actually that what's interesting is that the crypto space over the last decade
has built some of the primitives that I think are going to be extremely important for the new landscape.
To some extent, they weren't necessary yet.
Take something like proof of personhood.
Very clever.
You know, people have come up also with other constructs, on-chain attestations, and whatnot.
But because crypto never kind of scaled up to the mainstream, and some of the early things like stablecoins and payments
probably don't need that in this phase,
we haven't seen it shine.
But as AI scales, I think a lot of the side effects
are going to be a lot more painful.
Identity is an obvious one where what is real, what is not,
is this the right person, is this account being taken over or not?
Crypto has all the right primitives for building around that
and providing stronger forms of verification,
and they will become more important.
The other one is provenance.
And maybe video is a good example:
can you document that that camera was a real camera in the real world, taking that video?
Some of this has been actually experimented with.
There's a lab at Stanford that has been documenting war zones.
And the key part is like, okay, when we're taking evidence, can we prove like a cryptographic chain of custody
from the moment it's recorded to the moment it's shown?
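The chain-of-custody idea he describes can be illustrated with a toy hash chain in Python. This is a minimal sketch of the general technique, not the Stanford lab's or any real system's implementation; the event names and record layout are invented for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first event

def record_event(chain, payload):
    """Append a custody event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify_chain(chain):
    """Recompute every link; altering any entry breaks all later links."""
    prev_hash = GENESIS
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

# Custody events from capture to publication (hypothetical names).
chain = []
for event in ["captured: camera#42", "uploaded: archive", "published: article"]:
    record_event(chain, event)

assert verify_chain(chain)
chain[1]["payload"] = "uploaded: tampered-archive"  # alter one link...
assert not verify_chain(chain)                      # ...and verification fails
```

Real provenance systems add digital signatures and trusted timestamps on top of the hash links, so that a verifier can check not just that the record is internally consistent but also who attested to each step.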
Everything we do will need that, right?
We'll need this kind of hard cryptographic lineage on information being generated,
information being used, and even for models, like, you know, can we verify that what they're doing
is what they're supposed to be doing? Christian, this has been a fantastic conversation, really enjoyed it.
I think you've shed a lot of light on the possible futures where, you know, like people can
adapt and still continue to drive value and where the economy is going to adapt. I guess if you were
to leave us with a summary of what this all means, what should people be doing maybe over the next
12 months to really think through this issue and apply this in their careers and their
companies and their investments. I would say first, don't panic. Don't let that low-grade fever
paralyze you. If anything, again, get into action. Play with the tools. Try to think through
what parts of my job are augmented by the tool versus replaced. Try to replace as much of
yourself as possible through, you know, experiments. And then run those experiments across not just
your work life, but all of your life. I think for many, you know, maybe their hobby or
something they do on the side might be the most meaningful thing in this new economy. So experiment
broadly, see what resonates, and try to really turn your ideas into reality. I don't think
there's a more precise playbook than that, which is go through the flow, go through the process.
Worst case, you learn where these models break
and where they're not there yet
and that could be very profitable
but my sense is that for many
it'll be kind of a eureka moment where they're like
oh wow this thing that used to be my hobby
and we've seen it with creators online right
their hobbies turn into their business
that will be probably what
they'll be doing in the future and then
of course if you have kids if you're trying to think through
you know not only how to navigate yourself
but navigate some younger human
I would say, and this is what we're doing, is like,
in this new future, the most important thing will be discovering your natural aptitude,
what you love doing, what gets you in the flow, and doing more of it.
And so I don't think there's a recipe.
It's not like STEM versus the arts. Everyone will have to find their path, even more so.
And the good news is the tools are great at helping you find that path.
Christian, you're very plugged into this.
Do you think this is going to go well for humanity?
Absolutely.
Good.
Well, and on that note of optimism, we'll include a link to the paper and the thread
in the show notes, Some Simple Economics of AGI. Christian,
thank you so much for joining us today.
Thank you.
Bankless Nation, got to let you know.
None of this has been financial advice,
although I do think it was some career advice in here for sure.
You could lose what you put in, but we're headed west.
This is the frontier.
It's not for everyone.
But we're glad you're with us on the bankless journey.
Thanks a lot.
