a16z Podcast - AI Just Gave You Superpowers — Now What?
Episode Date: March 19, 2026

EPISODE NOTES: A new paper, "Some Simple Economics of AGI," has been making the rounds, so on web3 with a16z we sat down with the author, Christian Catalini (founder of the MIT Cryptoeconomics Lab), and Eddy Lazzarin (CTO of a16z crypto), in conversation with Robert Hackett, to unpack what AGI could mean for work and markets, covering:

- Automation vs. verification: the key economic split
- Why AI agents now feel like coworkers
- What's happening to junior roles and the "codifier's curse"
- The "AI sandwich" structure for firms
- The value of "meaning-makers," consensus, and status economies
- Why crypto may become essential infrastructure for identity, provenance, and trust
- Two possible futures: a hollow vs. augmented economy

Our discussion dives deep into how automation is reshaping labor markets, as well as the nature of intelligence. What do these changes mean for startups, the future of work, and your career?

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
You've just been told you have superpowers.
You've just been told you can have multiple employees for $200 a month.
What do you do?
If I was a young person today starting off my career,
I would try to convince my parents to give me some money
to harness a huge swarm of computers
and see, like, can I spend $5,000 of compute productively?
That's the challenge.
We've been talking about a meme sort of in tech world for years now,
the idea of like the one-person billion-dollar startup, right?
Yeah.
Is this not how that happens?
What we're describing is exactly how that happens.
There's a new surplus, learn to exploit it.
That is the lesson for a young person.
Look, the apprenticeship might be dead, but the real work is beginning.
What happens when AI gives everyone the leverage of a team?
In this episode, taken from web3 with a16z, Christian Catalini and Eddy Lazzarin unpack what that means for work, startups, and ambition.
Let's get into it.
Hi, everybody.
We're here with Christian Catalini, who's the co-founder of Lightspark and founder of the MIT Cryptoeconomics Lab, as well as Eddy Lazzarin.
And we're here to talk about a new economics paper that Christian published called Some Simple Economics of AGI.
Christian, I think the title of this paper is slightly misleading in that it's actually not incredibly simple.
It's more than 100 pages long, and there are many complex mathematical formulas involved.
Maybe some of the insights you've managed to distill down into a simple kind of framework for people to
understand things. But, you know, over the course of 100 pages, there is a lot of complexity also
in your analysis. So I'd love to ask, what set you on this journey to investigate the economic
relationship of AI and the world we live in right now, the robots and the humans? Yeah, I would say it was
born, like probably many others at the same time, out of a semi-existential crisis. We're all grappling
with the fast pace of progress
and just how quickly everything is moving.
I'm an optimist, so I look at all of this
and can see, kind of at the end of the arc,
really amazing things.
But the fundamental question was like,
what are we going to do?
What should we focus on?
What's worthy of our attention, effort, and time?
Especially in this phase where we still, I think,
have a meaningful shot at influencing the trajectory
and really the technology.
So we actually wrote, some months ago,
a piece on measurement.
And the basic idea of that
piece was that anything that can be measured will be automated, which doesn't sound like good news.
But this second paper was really centered around, okay, if that is true, let's take that initial
assumption to the limit. What would the economy look like? What will the nature of labor look like?
What should startups do? What should incumbents do? And essentially, what will the future look like?
Now, we did a similar exercise back in 2013 when I went down the crypto rabbit hole. We wrote a simple
economics of the blockchain. The simple in the title is just a trick. If you make it
too intimidating, people will not read it.
But very much like that time, look, some things will be right, some things will be wrong.
Hopefully, we've got it directionally right, and part of the exciting phase right now is it's in the wild,
and people are kind of seeing what resonates and what doesn't.
Even so, you have managed to distill down the findings in a way that people can get a handle on pretty well.
You even have little short, branded ways of understanding the existential crises that we all face,
such as the codifier's curse
and several other of these kinds of labels
that you've invented to describe the world we're entering.
Let me just ask you, though,
you said this stemmed from an existential crisis.
How were you feeling psychologically?
What is your state?
Great.
Are you sweating?
Are you happy?
You feel good?
Absolutely.
I think it was a long journey.
It was many, many months
of kind of thinking about some of these fundamental concepts.
I came out of it, and I think my co-authors did, too,
with a feeling that, first of all,
this is a technology that is under our control, still at this point.
Second, the upside, as I already kind of hinted at,
is many orders of magnitude greater than what the doomers would want you to believe.
And third, I think there's a playbook.
There's a playbook that all of us can look at.
We can think about where are we adding value,
what are the sort of things that we do within our job?
Jobs tend to be bundles of different tasks,
and people always get very nervous when certain tasks,
certain parts of their job, get automated.
I think right now coding is going through that experience
where many talented individuals that have identified as developers
that have written elegant, fantastic code over the last few decades,
look and say, oh, wow, this is doing what I do.
And I think that's both true and not true.
In a sense, as we kind of surface in the paper,
these tools, which for now are tools,
but I think we'll become a lot more than just simple tools,
are taking out the groundwork.
They're taking out a lot of the exploration within what's known,
but we're still, I think, at the top, thinking through, okay, what is not known?
Where can we push beyond the boundaries of what's being recorded, measured, digitized?
And so those decisions, although they seem smaller, I think they have much higher leverage than we had before.
So you mentioned one profession of coder.
I want to drill down a little bit on that because we have Eddy Lazzarin with us,
who has spent several years here as chief technology officer.
Eddy, how are you thinking about this transformation that we're undergoing right now?
How are you thinking about these changes?
Well, there's a lot to say on this, Robert.
Yeah.
Maybe let me situate us in time and also situate us with the paper.
So many people feel that something changed in December, okay, in December of '25.
And what changed was that a series of incremental improvements in how these agents work
accumulated to the point that AI agents can now perform long-running tasks.
And the reason why this creates such a provocation, such a feeling,
is that the feeling just a year ago was,
I asked the agent to do a small thing, it's amazing how it does that,
I had to ask it to do the next thing, it's amazing how it does that, and so on.
And now you can kind of give it less guidance,
you can kind of walk away for a day, even.
You know, there were a few days in some extreme cases,
and come back and something is complete.
And maybe it's not quite perfect,
but all of a sudden this sensation is very similar
to the sensation of working with somebody,
where you didn't check what they did
one piece at a time.
That's ridiculous.
That would be an extreme micromanagement.
Instead, you have a conversation.
They go away.
They come back a day or two later.
They've got something, hey, what do you think?
And you provide feedback and go back and forth.
So now it starts to feel like it's a coworker.
And that qualitative feeling provokes a lot from the imagination.
And now everyone is beginning to grapple with this reality.
And part of grappling is just some histrionics.
But another part of grappling, the more interesting part of grappling,
is trying to figure out the ways to squeeze as much value in actual production settings
and for commercial use as possible.
And what people are discovering, and this plays right into Christian's paper,
is that they produce an incredible amount of work.
Some of it is fantastic.
It takes a fraction of the time it used to take,
but it's often flawed in ways that are subtle
and that may not have been fully appreciated before.
So to give you an example of the ways that they're flawed,
and also the ways that, as Christian was saying,
the bundle of what it means to be a software engineer
is being reconsidered.
People think of the work, software engineering,
as sitting down and writing a bunch of code.
I'm sitting down, I contemplate the issue,
I understand the specifications and then I write code
and the code is what I produced.
But it turns out, and AIs help us understand this
and break it out into its parts better,
that in the process of making the work,
making the code,
there is a very nuanced, iterative process
of correcting and straightening and feedback gathering
and integrating that is not just the printing
of each line of code in sequence. It's a holistic task, right? And it turns out the AIs are incredibly good at a lot of that and not so good at other parts. So the balance of work for a great engineer is shifting quickly. And the way that work is shifting is that just kind of writing the code is plummeting. But making sure the code works, or making sure the code is correct, and not even correct as in logically correct or bug-free: it's about whether it provides value for the customer
as they need it, or serves business goals,
or it actually is handling prioritized tasks for the organization, right?
This more nuanced concept of good.
Yeah, or perhaps even that it surprises and delights you.
Sure.
And there's many dimensions to that task.
And it seems that in the process of producing the code in the first place,
engineers may not have considered that they were also doing this work too.
They weren't just printing the code.
They were doing this work too.
And this process of truing the thing and writing it and guiding it and taking risks about it and deciding, I'm going to experiment with it.
This paper Chris wrote calls this verification.
That's kind of like the catch-all term for this bucket of not the mere automation, but this sort of incorporating what was made and writing what was made to suit some end goal, some purpose.
So going back to your question, Robert, the way things are changing is that
people are now grappling with that fact
and realizing that maybe the split of work
that is commanded from a great engineer
has a different balance.
The amount of attention paid to writing the code
and just kind of printing one line at a time
is smaller, and vanishingly small for some,
like in the vibe-coding extreme, near zero.
And a huge part of the work is now verification.
You brought up this word verification
and it encompasses a lot underneath that umbrella.
It does.
And I actually haven't talked with Christian about it.
So I would love Christian to actually unpack that a little bit, as in,
how do you think about the word, not just this choice of word,
but the concept, because it's so important to the paper,
automation and verification being kind of the key conceptual split.
Yeah, so I think the automation part is very intuitive.
These agents essentially can do more and more of what's been done before.
And for now, I think they're still somewhat constrained by the observable domain, right?
So imagine all the code that's ever been written that they've been ingesting during their training or fine-tuning.
All of that is what they can build on.
And often people say, oh, well, then they cannot innovate.
They cannot be creative.
They cannot have good taste.
I actually strongly disagree.
In fact, much of innovation is just recombination of ideas.
And humans have only explored probably a tiny fraction of the possible
recombinations between different disciplines, between different sciences, between different concepts.
So I do think these agents will be extremely innovative just by taking what we've given them,
essentially the totality of the knowledge that humans have accumulated to date that's been recorded
and digitized and go with it. So that cost of automation is going down. And verification is actually
an important cost in the economy throughout. So actually, when we wrote the simple economics of the
blockchain, that also had as a centerpiece the cost of verification, although
I would say in this paper it takes on a much broader idea.
So what do we mean by the cost of verification?
What is verification in this paper?
In this paper, verification really starts from that idea about measurement.
If you buy into the thesis that AI is incredibly good, once it's given the right data,
at replicating that process, then you start asking, okay, what's not measured today?
And there's a lot of things that are not measured.
Some are not measured because they're not really measurable.
Economists call this concept Knightian uncertainty, after Frank Knight.
And it's essentially a difference between looking at the future
and trying to assign probabilities around the event
and not even being able to assign those probabilities.
For a non-economist out there,
they might be more familiar with Donald Rumsfeld's unknown unknowns.
Absolutely, yes.
The unknown unknowns are essentially the non-measurable piece,
often about the future.
So that's why even if you throw agents today at the stock market,
they'll probably be on average pretty good,
maybe better than your financial advisor,
but they will not be probably resilient
to drastic changes in the environment,
geopolitical shifts and whatnot.
Those are things that are not measured.
Of course, there's many more, right?
And so what verification really is in this paper
is the act of applying all the embedded measurement
that's in your brain as a human.
So if you think about it, from birth to where you are professionally,
you've seen all those sort of examples, situations,
and you've learned from them.
You've essentially recorded measurement in your brain,
and it's really only yours.
Now, two people may have very similar knowledge, even career-wise,
but it's not exactly the same combination.
And so when people say, okay, this person has good taste
or is a great curator or they have good judgment,
well, one of the things that really inspired this paper
was the idea that everyone was sort of coming up
with all this cope around AI,
which was like, oh, don't worry,
the machine will never be able to do X, Y, and Z.
And the cope was very vague, right?
How do you define taste?
Good taste or bad taste?
How do you define good judgment versus bad judgment?
And even worse, some of these things that needed judgment,
you know, to Eddy's example in December,
a good engineer probably needed a lot more judgment applied
than they need today when reviewing a code base.
All those bases are shrinking.
And so we needed to get to the bottom of
something that was more fundamental,
and that could be really pinned down to something precise.
And so we think that, you know,
as long as there's data underlying the activity
that you're trying to automate,
you will be automated.
And of course, AI also improves automation by giving us better measurement, right?
Just think about vision and all the things that we can do today, sensors.
AI is going to feed itself new datasets over time.
But if it is not really captured anywhere,
if it's still in somebody's brain
just because they've seen all those
out of distribution examples,
they've seen those exceptions.
You know, when Eddy launches this swarm of agents,
he knows all the ways this could go wrong, right?
It's like when you're building on crypto,
there's just so much nuance in building a secure and safe system.
That nuance is not yet fully captured.
But at the same time, of course,
as measurement progresses,
we need to keep moving up and up and up the value chain
until, you know, we're going to be peers,
and we'll see how it goes from there.
People have moved the goalposts on measuring AI's ability to do things for many decades.
You know, at first it was like, well, an AI will never be able to beat somebody at chess,
and then it was like it'll never be able to drive a car, you know, cross-country or something.
It seems like the field that is unique to humans is diminishing.
And you mentioned people have held out taste as an area, a domain that maybe humans
can retain, but AI has this ability to crunch through every single combination and pump them out
basically at negligible cost and to completely explore the map and landscape and to optimize for
various things. So what becomes the role of the human in that world? We talk about verification,
but have you thought through where the limits are in terms of how much AI can advance into
unknown territory? I mean, it really depends what you mean by the limit, Robert. You know,
are we talking like a thousand years, 10,000 years, 10 years? Like what, you know,
galactic empire stage? And look, in the paper, when we're trying to push it all the way
to the limit, I do think the only path is actually human augmentation. And so as you think
through, again, the shrinking space for verification, at some point, it's all about intent.
We're going to have some preferences,
and the machines may, by the way,
have developed their own.
Today, I think they develop weird quirks
and preferences as a side effect
of training often.
And sometimes we understand them,
sometimes we don't.
But in the future,
it is credible that as these systems
become more and more capable,
they will have preferences,
very much like we do.
And so in that extreme,
it's going to be a tension
between our preferences and theirs.
And the good news is that
the underlying physical reality is the same.
And so,
augmentation is going to be the only path, I think, where we can keep up with what we've created.
We will still be able to have a thoughtful conversation with it and try to play a part in it.
We could talk near term because you break down the economy into three different areas where you can sort of find where you exist or where various tasks and jobs exist and understand their level of automatability or rather measurability in terms of their output and what they do.
Maybe that's the best place to go through now
because that gives you kind of a short-term,
a near-term roadmap of how to think about this
for each individual to think about what they're contributing
and what is likely to get eaten.
Yeah, let's start there.
I think there's actually a lot here
in terms of what's still human
across many dimensions.
I would say the first one is, of course, verification.
As these systems become more capable,
the leverage that any single
individual has in their profession is massive relative to what it was even in December.
This means that probably we should all be more ambitious.
We should all try to think through the workflows that we currently do.
And in a nod to crypto, we call this the AI sandwich, a reference to the stablecoin sandwich.
But the firm or a startup essentially can have one single human, we call it a director,
but it's essentially someone that is in charge of steering, verification, making sure that as the system
drifts in directions that, you know,
were not intended, it can course-correct.
So that's maybe one person,
maybe a small team at the top.
In the middle, you're going to have a swarm of agents,
and we're already seeing it.
People are experimenting with all sorts of interesting new things.
Of course, these are funky, they break,
they have all sorts of side effects,
but, you know, the next iteration of this
is going to be much more enterprise grade.
And at the bottom of the sandwich,
you're going to have an army or a small army
of top verifiers.
So if you think about all the agentic
output coming out, even if you empower those people with great tools, humans are not going to do
verification line by line. It's impossible. The throughput of the machines is
accelerating too fast for that. But with the right tools, I think the top experts in every domain
are going to be the ones ensuring that what was intended actually came out of the system.
Super important job. One where I think domain experts will thrive for a long time. But there's
some bad news, right? So as you do that work,
you're also kind of creating the labels for your displacement.
And I think we've seen it in the most simple form in the past
when people were labeling images for AI companies and training.
That's not needed anymore.
Now you have big foundational labs hiring, you know,
top experts from finance, top experts from different domains.
Those people are creating the evals and the training data
that will eventually displace their peers.
So this verification layer is a really important one.
I think many people will thrive in it.
It's one that really rewards,
almost like hyper-specialization, right?
So if you're the one person that really can deliver that final unlock,
again, your leverage is massive.
So that's one category, the verifier.
That's the one that you have called the codifier's curse.
So the codifier's curse is what we describe is the mechanic where,
if you're a top verifier, you need to keep moving up the stack, right?
Because the technology gets better and better.
And so you need to keep adding value at that thin, thin layer,
so that you're always one step ahead of the machine.
so to speak. The director, I already mentioned, right, is essentially someone that really drives the intent.
Entrepreneurs are directors, right? So they see some future, they imagine some path for getting there.
And then, of course, startups are a continuous drifting and realignment of the objective along the way, right?
There's many jobs that are director types, including, of course, in media, right, in movie production.
That's where we stole the title from. And then there's going to be jobs that I think we need to recognize
are easy to automate, are easy to verify,
and those jobs are gone or soon to be gone.
And I think society hasn't really grappled with some of those effects,
and there's going to be a massive need for retraining
and really pushing people further up, the knowledge frontier on that.
But when you look at those jobs, we're going to use AI to verify AI.
So one of the things that sometimes people misunderstand from the paper
is that we talk about human verification as the last step.
But in many cases, AI will verify AI.
So there's going to be a whole series of steps
before it really gets to the final human
that may be or may not be needed, depending on the job.
And then we have a category that was the hardest one to qualify.
I mean, we called them like the meaning makers.
So imagine settings where actually it's all about,
and here, again, my past in crypto shows,
it's all about consensus.
These are individuals that are really good
at understanding trends,
societal changes or things society cares about
that require everybody to coordinate around something.
Art is like that.
You know, crypto networks to some extent are like that.
And these meaning makers are essentially not,
they're not in the land of what's measurable.
You know, we could land on one equilibrium or another.
It doesn't really matter.
But they're really good at creating that social coordination
around some sort of outcome.
These are not necessarily, by the way,
the jobs that sometimes people say require a human
touch. I do think people severely
overestimate, you know, how important
that human touch is. You hear it for
jobs like, you know, a therapist
or even elderly or
child care. Yes,
I think people will have all sorts of concerns initially,
but nobody's
really accounting for the drastic reduction
in cost, right? So if it's
100,000x
cheaper, and some people may
even feel it's more private,
people will rapidly shift. In fact, we already know,
right? People are using all of the
LLMs aggressively for all sorts of questions that would be very intimate or personal.
That said, of course, there's going to be jobs where human-made or made by a human
will be a very important label, and crypto will play a role here, because soon we're going
to lose the notion of identity without some strong cryptography behind it.
But that human-made will be valuable just because of this scarcity that's inherent in the
fact that it's human-made.
So not because it's better.
It's just knowing that a human dedicated their scarce time and attention to deliver that experience, that culture, whatever it is.
I think those things will still be important.
So you brought up cryptography and crypto.
What is the place for crypto then in this world?
It's a really important one.
It would seem to be complementary, but how so and in exactly what ways?
Yeah, when we started this journey, I mean, many before us had already said, look, LLMs and AI are kind of probabilistic,
crypto is deterministic. You know, think about a smart contract putting the guardrails on an agent,
or being able to give an agent the ability to buy and sell resources.
All these things resonated.
But I do think there's an even more profound complementarity between AI and crypto.
And maybe the reason why it's not so salient in the economy today is because we haven't seen the
side effects yet.
But issues around think about identity or provenance of digital information, I think we're
about to enter very uncharted territory in the next few months. As these capabilities become
truly amazing, every digital platform will have to really wrestle with the idea that what used
to be a human contribution, whether it's a post or an image or anything else that's been done,
it's now potentially an agent. You know, those bots sometimes come on delegation from a human,
so you need to treat them completely differently. As that unfolds, I think society will have to
drastically reimagine its identity stack, the way it really certifies things, the way it thinks
about is this true or not, what is the chain of custody of this digital item until the way it
reaches me. And so, yes, I do think crypto probably is going to shine in all of this. And everything
that's been built over the last decade, it's going to be a lot more foundational. Back to verification,
when you have underlying information on a blockchain, verification is cheap. It's more reliable.
You can trust it.
And so in a land where trust is going to be increasingly scarce,
yes, I do think crypto primitives will finally truly shine across a number of applications.
Yeah, one way I'd put that idea, Robert, is that the cost of automation is declining very rapidly,
and the cost of verification, in this broad sense we've talked about,
I think is declining too, but not as quickly.
And that creates a gap, right?
And that gap is an interesting thing.
There's many ways to describe that gap.
Some may describe that gap as an opportunity.
That's kind of what Christian is saying for human laborers,
is that if there's this bottleneck, there's this gap in measurability,
because of humans' general adaptability and experience and generality,
humans are probably able to specialize to the verification component
faster than we can get the machines to.
And there's some interesting sort of deep challenges
that make handling verification hard for machines in the short term.
In the long term, I don't know that that's, I don't think that that's a permanent thing.
But in the short term, that is definitely the case.
Cryptography and blockchains are a verification tool.
Provenance is, you know, just a chain of cryptographic evidence, right,
that something, you know, traversed some path between specific hands
or underwent some series of transformations that we can be sure of.
And that gives a signal about what we're looking at.
It just makes different categories of verification easier.
So anything that makes verification easier is going to be a part of solving that gap, trying to close that gap.
And that gap is a kind of systematic inefficiency in what the thing is trying to do.
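Eddy's description of provenance as a chain of cryptographic evidence can be sketched in a few lines of code. This is an illustrative toy, not anything from the episode or a real protocol like a blockchain (there are no signatures here, just hashes): each provenance record commits to the hash of the previous record, so tampering with any step of an item's history breaks every later link.

```python
import hashlib

def h(data: bytes) -> str:
    """SHA-256 hex digest of some bytes."""
    return hashlib.sha256(data).hexdigest()

def append_step(chain, actor: str, transformation: str, payload: bytes):
    """Append a provenance record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"actor": actor, "transformation": transformation,
              "payload_hash": h(payload), "prev": prev}
    record["hash"] = h((record["prev"] + record["actor"] +
                        record["transformation"] + record["payload_hash"]).encode())
    chain.append(record)
    return chain

def verify(chain) -> bool:
    """Recompute every link; editing any earlier record invalidates the rest."""
    prev = "genesis"
    for r in chain:
        expected = h((prev + r["actor"] + r["transformation"] +
                      r["payload_hash"]).encode())
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

# A two-step history: a camera captures an image, then an editor crops it.
chain = []
append_step(chain, "camera", "capture", b"raw image bytes")
append_step(chain, "editor", "crop", b"cropped image bytes")
assert verify(chain)

# Rewriting history (claiming the image was captured, not generated,
# or vice versa) breaks verification.
chain[0]["transformation"] = "generate"
assert not verify(chain)
```

This is why verification against such a chain is cheap, in the sense discussed above: checking the whole history is a few hash computations, regardless of how the item was produced.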
A really interesting frame that the paper puts out is splitting things in terms of measurable and non-measurable tasks, measurable and non-measurable labor.
I wanted to ask Christian, is measurability basically just the cost of verification?
Like, is there more to it than that?
Do you think of measurability as essentially the cost of verification?
The idea, just to say, of measurable, non-measurable tasks,
a measurable task is one that I'm understanding as having a low verification cost
such that you can kind of put the measurement components into the existing loop, right?
You don't need to do a lot of additional work in order to figure out that it was done properly
or that it's working or that it's fitting or that it's compatible or that it's bug-free, and so on,
whereas non-measurable tasks seem like they're either in this complex domain
or you were just saying, Christian, like consensus domains where there's not really a concept of right or wrong,
but there is a concept of consensus that's important to reach just to proceed, just to organize future tasks.
What do you think?
Is that how you think about measurability?
So I would say you're absolutely correct about the bifurcation.
And I think that's an interesting one for society, right?
Because to some extent, some things are non-measurable,
and even if we had perfect measurement,
we probably wouldn't improve on them,
because they're social constructs.
Some people call them status games, right?
Where it's like, okay, we're coordinating on this piece of art being important
because it reflects some sort of meaning to that society,
to that culture, to that group.
But to the automation question, I would say
the latter category is probably the most important,
which is, again, there's probably a distinction between
what's measured outside of a human brain versus inside.
What is it that a single individual
has recorded through their own experience?
And of course, as we start carrying devices
that record video and capture all sorts of rich information,
that barrier will come down.
But right now, I think what makes a seasoned engineer different
than even a machine that has read all the code
is that they've struggled through all those moments.
They've learned some out-of-distribution examples
that may one day be in the data
for the machine, but it doesn't know how to weight them.
And so our neural net has been trained in a very unique way.
And so I do think the distinction is essentially the reason why verification may matter
for this category versus not, is it something that you've measured that's unique?
Or is it something that the machines can also measure?
And of course, as we feed better and better data, that shrinks.
And that's why we need to move more into the unknown.
Do you have solidly defined examples of things that you think are,
at least right now
unmeasurable and safe because of that?
I think across pretty much every profession, right?
You're seeing this in law,
you're seeing this in engineering,
you're seeing this in strategy.
There's components where the machines
are really good at average
or I would say even above average.
They've ingested the right materials.
They've seen enough examples.
And then there's the final verification layer,
which is all about the exceptional,
you know, the recombination that pushes the boundary a little bit forward.
And you see it also in domains like the arts, right?
So some of the greatest artists are really good at capturing a sentiment
that hasn't been fully expressed in data yet or by society.
I mean, that layer of applying your own expertise,
your own accumulated experience across your life for that decision,
it's still human across all of those professions.
It's almost like a universal meta-skill,
I would say. So if we're going to make this concrete for people, I just got back from Paris a few weeks ago
and went to the Musée d'Orsay, looked at all the Impressionist artwork there. And it's funny to me now
that France claims the Impressionists as their beloved artistic movement that they presented to the
world when actually they faced just persecution and were completely rejected by the Academy for so long.
And now they're celebrated,
but they might be perhaps an example at that time
of their unique combination of the way that they saw the world
and expressed it through color and shape.
Now I'm not saying that that is safe from AI today.
I'm not an artist, so I'm not going to make claims,
but that is maybe a historical example people could latch on to
about people whose unique experiences
and perhaps refined taste enabled them to transcend.
Maybe another example could be, like, the Michael Burrys of the world during the global financial crisis, calling the big short.
You know, when everybody else thought that everything in the economy was humming along just perfectly wonderfully, and the few kind of saw that risk that other people overlooked.
Well, I'd say the first example, the Impressionist example, is closer to, I think, what Christian was getting at, with, like, maybe a little bit of a regime change
in the consensus,
but there's not necessarily
some underlying
new information that they had.
It's not like they had
some secret knowledge,
basically,
or some secret proprietary understanding
of what art was good.
The consensus changed.
And the whole idea of consensus,
there's like a rabbit hole
we could go down
where, like,
take consensus in like a software engineering sense
about like specific coding standards
to enable interoperability.
It's not this approach or that approach.
They're different.
There's some degree of mutual exclusivity,
you kind of got to pick one,
which someone's just got to decide, right?
And if everybody aligns on this one standard,
or this other one, just one of them,
then we're more efficient.
If you consider a future market
where there's a bunch of machines as peers
with the humans,
then there is a concept of consensus
that spans both groups.
You can actually have like kind of a machine consensus
and a human consensus
about a specific software engineering approach
or technical approach.
And then it starts getting really murky.
Like, why would the human being necessarily have an advantage in consensus construction?
In fact, the machine might, because it could, like, automatically poll, like, every other model
or, like, create some incentive scheme among models so they could decide rationally, instantly.
In other words, there's ways you could imagine that machines could find ways to coordinate faster.
So this idea of consensus formation being uniquely in the domain of the human,
I don't think is necessarily permanently true, even though it is today,
because most labor and most tasks obviously remain coordinated by people.
The second example, the Michael Burry style example,
that's more of a proprietary-information case,
where the market just has not incorporated some information,
or some incentive scheme
makes it hard to actually act on that information.
And they arranged facts,
they arranged their positions, their capital and things,
to exploit that error.
And even in that domain, it seems hard to imagine why
the human would have a monopoly.
Yeah, look, if you push it to the limit,
I think we all know that it goes to full kind of equivalency, right?
They're peers.
And then, yeah, I mean, unless we reinvent ourselves,
and I think technology will be a piece of this,
we're already seeing all sorts of experiments, right,
with brain-computer interfaces,
it will be more powerful than us.
I think with the impressionist,
it's also important to remember that, in a sense,
that was a response to photography automating, right,
what was considered art.
And so if you could paint perfectly real-looking paintings,
that used to be glorified, right? Then suddenly photography
would be way, way, way better. And I think we're
witnessing a lot of that. And so people
were moving in the meaning-making
space. It's like, how do we respond?
What is still the nature of being an artist?
And I completely agree with Eddy.
I mean, with the big short example,
and this is why I love, you know,
biographies, when you think about some of the
influential people in history, both good and bad.
There's something about their entire trajectory,
the experiences that really put those weights in their model, right,
in that net that are unique.
They've just lived life through a set of experiences
that calibrated them completely differently than others.
And so given the same amount of information,
the response is very different.
So maybe eventually we will train models
that will bring back that diversity,
that unique, you know, biased opinion
about reality.
Could we talk just a little bit
about the Trojan horse?
We haven't kind of gone
in the dimension of
the negative externalities
of extremely low automation costs.
We've talked about the risks
to human laborers
and there's so much more to say to that,
but maybe outside of that
for the productive benefits
toward the economy.
What are the risks
to the economy of low automation cost?
Yeah, I think we're seeing
glimpses of it.
When companies today say
that X percent of their code
is now generated
by machines, that's amazing.
And it's a sign of growing productivity
and I think the release cycles are shortening.
But at the same time,
because we already know that it's humanly impossible
to review all of that code,
there's a good chance that it may carry
some technical debt of different types.
We've all been tempted to, you know,
send a request to an LLM,
skim through it and, you know,
ship it as our own without full verification
because the models are getting better.
But whether it's a wrong sentence or a wrong line of code or some sort of, like, zero-day that is now part of your code base,
I think we're going to see more of that.
And what the model says about this is that essentially it's perfectly rational to ship code or to ship writings
or any sort of AI generated work that will contain some potential error because you can't verify the full thing.
And if you scale it up to the entire society, that means that we're probably accumulating some degree of systemic risk
as we accelerate through.
hopefully we can develop better verification tooling,
better technology to really go back and look what we may have released.
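Christian's argument here can be made concrete with a toy back-of-envelope model (my own illustration, not the paper's actual formulation): if each AI-generated unit of work ships with some defect probability unless a human verifies it, and humans can only verify a fraction of the output, latent defects accumulate in proportion to volume.

```python
# Toy model (an illustration, not the paper's math): each AI-generated unit
# of work carries defect probability p unless verified, and humans can only
# verify a fraction v of the units before shipping.
def expected_latent_defects(units: int, p: float, v: float) -> float:
    """Expected number of defects that slip into production unverified."""
    unverified = units * (1 - v)
    return unverified * p

# 10,000 shipped units, a 1% defect rate, capacity to verify half the output:
print(expected_latent_defects(10_000, 0.01, 0.5))  # -> 50.0
```

The point of the sketch is only that the expected number of unseen defects grows linearly with volume whenever verification capacity stays a fixed fraction of output.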
But in the immediate term, I think companies face this tension
where if you think about the long run, sustainability,
even for a startup, right?
Investing today in better tooling for verification,
including some of the cryptographic primitives that we were talking about,
is expensive.
It may slow you down.
The benefits of that are all in the future
and the rush to ship and to grow might be really strong.
So I think we're going to see probably two sets of founders.
Founders that think about that second long-term liability
and will build things in the right way.
We're seeing glimpses of this kind of liability model in software.
ElevenLabs recently insured their audio agent, right?
So saying: sure, deploy it in production;
we're also insured in case there's some weird side effect
of the agent making a bad decision.
I think we're going to see a lot more of that.
Alex Rampell has written extensively about this concept of software as labor.
As we deploy these agents as workers,
that issue of liability and insurance,
I think is going to become increasingly important.
It's probably not the most glamorous topic,
but as you think through, to Eddy's point,
what will be happening in the wild,
I think we're going to see a lot of systemic failures.
There's a good example historically, right?
So if you think about Long-Term Capital Management,
making lots of really smart investment bets,
until the whole fund collapsed.
This is the quant hedge fund
that tried to use computational models
to beat the market.
Yeah, I mean, there's many of these instances
where humans jump ahead
of technology that they don't fully understand
and then, you know, yeah,
we have some major side effects.
Yeah, I think this is such an interesting idea
because if a lot of what was happening in the production of software
before, or any other service in the economy,
has been the result of direct human work,
then you can sort of take for granted
that people have been observing and quality checking many, many, many steps.
Now, I'm not trying to say that until today, there have never been errors or flaws,
hardly, right?
But there is a limit to how severe those have gotten in specific cases that we may not
fully appreciate because there's always kind of been somebody touching every step along the way.
But as things become more and more automated, and as things become higher, higher stakes and more valuable,
then the liability radically, radically increases.
Now, of course, the benefits are radically increasing, right?
Which is why we're tolerating that.
But the ability to supervise and limit and understand the boundaries of risk has to expand.
And so the idea of bringing in, like, an insurance-type thing, where you actually put a dollar value on the risk that things fail, might be an important component in managing an entire enterprise, because you just have to take for granted
that it cannot be fully supervised, and you want to delegate the responsibility of quantifying
that risk and understanding what's going wrong to a specialist. It's basically a demand for a type
of specialization, which always emerges whenever there's some new massive surplus with some
big trade-off people specialize to handle the negative side of that trade-off. So I think that
it's very interesting that even the process of producing software might develop a new
financial dimension that it lacked before, right? And this kind of smells good to me, like, as an idea,
because everything is getting this financial component. And I don't mean this in some cynical sort of
money bags, crazy way. I just mean that the tools of financialization allow us to handle more
complexity and increasing abstraction in the economy. Like, that's what financialization is sort of for.
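As a sketch of what that new financial dimension could look like (a hypothetical illustration under a textbook actuarial assumption, not any real product's pricing), an insurer covering agent failures would charge roughly the expected loss plus a loading factor:

```python
# Minimal actuarial sketch (my illustration, assumed numbers): price the risk
# of an agent failure as expected loss times (1 + load), where the loading
# factor covers the insurer's margin and uncertainty about p_failure.
def premium(p_failure: float, expected_loss: float, load: float = 0.3) -> float:
    """Annual premium for a risk with given failure probability and loss size."""
    return p_failure * expected_loss * (1 + load)

# An agent with an assumed 0.5% yearly chance of a $1M-scale failure:
print(round(premium(0.005, 1_000_000), 2))  # -> 6500.0
```

The specialist's real work is estimating `p_failure` and `expected_loss` from failure data, which is exactly the kind of proprietary dataset discussed below.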
So it kind of feels on trend to me. It feels right. Yeah. And, you know, back
to crypto: to some extent, everything we've been building over the last decade or so
has been advancing the frontier of how we can measure and, you know, weight risk.
From a lot of DeFi and the evolution within it, to prediction markets, all those primitives
are suddenly kind of critical, right? So if you're deploying software, if you have these agents,
a stack that allows those agents to see better signals matters. I'll give you a very simple example.
I was talking to a founder that's building in the agentic commerce and payment space.
And he made this really interesting observation that when he switched from a traditional legacy payment system
to just having payments over a stablecoin, the system behaved more reliably.
And the reason was that the signals were all on chain.
The agent had a much better understanding of what was happening.
It wasn't just hitting a dead-end API;
it was seeing the whole context of those actions.
And I think there's going to be a lot more of that.
Christian, you're saying there was more out in the open
for the agent to be able to see
and to have full complete context and understanding
of, you know, what was actually going on
with given transactions.
Correct.
Whereas in the legacy model,
that stuff is hidden behind, you know,
various companies, intermediated left and right.
We have all these data silos, right?
And on an on-chain native transaction flow,
a lot more is surfaced to all the participants.
And of course, you know, there's privacy requirements for some of these things,
so it really depends on the flow.
But another interesting part of this,
and this really is to Eddy's point on insurance and liability,
people say sometimes that, oh, network effects are going to be a sustainable
moat in the AI era.
I think the reality is going to be a bit more nuanced.
In fact, AI agents and autonomous systems are really good at
breaking down a lot of the moats that have made two-sided marketplaces very, very defensible;
just the cost of bootstrapping these things and a lot of the groundwork that goes into
seeding two sides of the market is coming down. But there's a different kind of network effect
that I think is going to become even more important. We call it the verification-grade network
effect. It probably needs a better name. But the idea is that if you have key proprietary
data that you're generating as part of what you're doing, and if that data allows you to
scale verification out of the hands of humans and into the hands of machines more and more,
you will inevitably be able to underwrite risk better, make better decisions, and deliver
a safer product at a lower cost.
And that kind of moat, I think, is going to be very persistent in this phase.
So when you look at the incumbents versus startups: the incumbents that have, you know,
a whole database of failure, like a decade of information about all the ways some of these things
could fail, that's extremely valuable.
And in general, the startups that center their attention on: is it true that every
time we do an interaction or automate a system, we're bringing in a top expert, a top
engineer to make a decision, we're learning from it, and we're creating a positive
feedback cycle around verification?
I think those companies are going to be extremely successful.
Yeah.
More evidence for the idea that proprietary data, you know, the data that an organization can
keep inside and specialize from, might be
one of the most defensible things.
A direction I'd love to take this
is, in the paper,
there's this concept of the hollow economy
and the augmented economy
or this sort of like possible split.
Could you unpack those
and what do you see as like the key factors
that distinguish them?
I like this framing.
I think this is a really good,
really interesting framing and resonates.
But isn't it true that in some sense
like the hollowing-out forces,
like the undermining, self-reinforcing
feedback loops in the codifier's curse
or in the missing-junior loop problem, right?
Aren't these also sort of the natural side effects
of just being able to automate something
and find efficiency?
Yeah, so let's start with the hollow economy.
You've already hinted at some of the dynamics,
but the first one I think it's already top of mind.
It's happening, I think, in the labor market,
there's early evidence of this,
and tech companies will realize that they can do a lot more with less.
And of course, they're going to start with below average
or average performers, because AI is already there,
and younger performers, because now the senior ones can already scale 100x or 10x, depending
on the task. So that's one of the forces driving changes. The second one we already hinted at
is the codifier's curse. As an expert trains, you know, makes decisions, they essentially
create those labels. Those labels can be used in the future to make the same decisions without the
expert. And last, there's this concept of alignment drift. And without getting too much into the model
itself, the punchline of that is that it's going to be important to think about alignment,
not as one shot.
You know, we train the model, it's aligned, we're good.
I actually like this definition of it as raising a child,
where you're course-correcting and continuously kind of providing feedback along the way.
If you take those three dynamics together and you combine them with the idea that the
incentives for deploying unverified AI, if it can get the job done, are super high because maybe
I get productivity today, right?
60% of the code written by machines versus humans.
But some of the costs come maybe in the future,
and we may be racing towards an economy
where we're not training our future class of verifiers, right, the juniors.
Our top verifiers are progressively becoming
slimmer and slimmer.
That class is shrinking in size.
And we're creating all these potential risks
that can lead to what we call the hollow economy.
But then we use that actually to carve the path toward what
we prefer as the end state, which is the augmented economy.
Again, I've already mentioned I'm an optimist.
I think we're going to land on an augmented economy.
Eventually, the question is like, how fast can we get there?
And can we make that transition, which in some case is going to be painful,
as painless as possible for a lot of people that will have to be retrained and adapt?
And the augmented economy is the opposite because essentially we realize,
okay, juniors are not being trained.
But guess what?
AI is magical at accelerating mastery.
You can find a young individual, discover their real aptitude
rather than pushing them through K1 to K whatever of standard curricula.
You accelerate them so that they can find who they really are, what they truly love,
what gets them in the flow.
That's at least what we've been thinking about our kids, which is like,
who knows what, you know, is going to be valuable.
STEM, not STEM, arts.
We don't know.
But if you're building on your true talent,
you have a much better shot at advancing.
And I think AI is going to play a massive role in that.
These are wonderful, wonderful tools for learning.
We have to build that.
I don't think they exist at that scale today.
Second, if you take the codifier's curse,
well, guess what?
Those individuals will have to keep retraining
and moving up the value chain and discovering,
oh, now that I have all this leverage,
maybe I can be a director type.
Maybe I have an agent swarm.
Some people have talked a lot about agency being important.
I think that really gets at the crux of
you need to realize you can be a director.
You can do a lot more than you were doing before.
And on alignment, I think between a lot of the safety, R&D, everything else that's happening,
and better verification tooling, including human augmentation,
if we can augment our capabilities, we'll be able to verify much better and be peers.
If you put those all together, you're suddenly in a scenario where a lot of things that used to be expensive in life
are practically free.
Anything that can be measured can be automated,
so it'll converge to the cost of compute, right?
Maybe even energy.
Then you have other things that we're going to invent,
lots of new jobs, lots of new things that people want to entertain themselves,
including in the status economy, in the non-measurable economy,
underpinned by a strong verification stack so that we do have ground truth.
We're not submerged by fake identities or, like, you know,
actors trying to essentially Sybil-attack our society.
If you put that all together, the future looks pretty good, right?
And a lot of the things that I think governments have been trying to do
forever are going to be cheap and available, like a great education, great health care.
All these things that used to be, you know, very, very high-friction, I think we can deliver
on.
But yeah, we do need to make some investments along the way to make sure that we build that
versus, you know, just struggle through the transition and make some crazy decisions like,
okay, let's dismantle the data centers.
Let's stop everything.
It's impossible.
It's never going to work.
So if you're early in your career or you're just starting out, you should be using these
tools to simulate environments that you'll encounter to train yourself up, basically, is what you're
saying. And if you are later in your career, you need to get a fire under your butt, get some agency
to realize that you can do more with less. It's hard to say how long all this lasts until there's
another whole set of changes that are hard to predict. But the specialty of the human being is going to
be looking at the whole thing and being able to zoom in and zoom out and zoom in and zoom out,
across an entire endeavor, an entire enterprise, whatever it is,
and to know where more attention needs to be paid,
more resources need to be paid,
how the entire project needs to be shifted.
If I was a young person today starting off my career,
yeah, I'd be a little sad that the glory of kind of going into the back room
and carefully reading the instruction manual
for some assembly language one line at a time
and writing a beautiful program that's as efficient as I can imagine it
over the whole summer.
Like, yeah, that's gone.
That's a hobby.
That's something you can do for fun.
Download a fantasy virtual machine and make up a game.
You know, go to GitHub.
There's tons of cool stuff there.
That's a hobby now.
No one's doing that anymore.
Instead, I would try to convince my parents to give me some money
to harness a huge swarm of computers and see, like,
can I spend $5,000 of compute productively?
You know, can I make 200,000 tokens per hour
that are like useful
or something like that.
That's the challenge.
Can I guide a whole swarm of machines
to do a thing?
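Eddy's numbers invite a quick back-of-envelope check (the price here is a purely illustrative assumption, not a real API rate; actual rates vary by model and change often):

```python
# Back-of-envelope cost of running an agent swarm. The price is an assumed,
# illustrative blended rate -- not a quote from any provider.
PRICE_PER_MILLION_TOKENS = 10.0   # assumed blended $/1M tokens (input+output)
TOKENS_PER_HOUR = 200_000         # the "useful tokens per hour" target
BUDGET = 5_000.0                  # the $5,000 compute budget

cost_per_hour = TOKENS_PER_HOUR / 1_000_000 * PRICE_PER_MILLION_TOKENS
hours_of_runway = BUDGET / cost_per_hour

print(f"${cost_per_hour:.2f}/hour -> {hours_of_runway:,.0f} hours of swarm time")
```

Under that assumed rate, 200,000 tokens an hour costs about $2, so $5,000 buys roughly 2,500 swarm-hours; the interesting question, as Eddy says, is whether those tokens are useful.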
We've been talking about a meme
sort of in tech world
for years now
has been the idea of like
the one person billion dollar startup.
Is this not how that happens?
What we're describing is exactly how that happens.
Not necessarily it's literally exactly this way,
but the skill to control
a huge class of machines and data
and have this wide
view of a thing and constantly be adapting it, that is itself a skill set that has never been
developed because that's never made sense to do. If you wanted to have a big project,
you've always needed to learn how to marshal many, many, many, many people. That has been the
way that you get leverage when labor has been shaped as it has been shaped. Well, that's changing
its shape. And so now you should learn how to harness this new thing. There's a new surplus. Learn
to exploit it. Like, that is the lesson for a young person. It's not that things are over.
That's just, like, black-pill garbage. That's ridiculous. I cannot
condemn it enough. That's people trying to sound smart by being all negative or whatever.
Now, you've just been told you have superpowers. You've just been told you can have multiple
employees for $200 a month. What do you do? And they're a little weird, by the way. The $200
employees are strange. Okay, well, now learn to talk to them.
One way to summarize it is essentially, look, the apprenticeship might be dead,
but the real work is beginning, right?
So what used to be a whole phase where you're kind of doing groundwork or kind of working side by side,
you don't need any of that anymore.
If you're passionate about building, even hardware,
I think a lot of these domains that used to be technologically harder to tackle for someone,
if you have the curiosity, now they're really yours to grab.
You know, if I were to name the most positive thing coming out of the model, it's this idea that the cycles of experimentation are going to compress and people are going to be a lot more able to scale their ideas rapidly into things in the real world.
Eddy, are you seeing this in the companies that you're assessing for investments?
Yeah, completely. Of course, absolutely.
Fewer employees, like, than usual for an early-stage company.
I don't think I've seen a formal analysis of the number of employees.
I mean, I have seen over the years that, of course, we've seen, like, as Christian reminded us,
Block cutting a bunch of people. Obviously, Elon did that with X, and X didn't fall apart,
even though everybody said it would, right? There are many such examples. So I think there's a lot of
empirical support for that. I haven't seen a formal analysis, but look, like Hyperliquid,
Uniswap, like many companies in crypto are incredibly valuable, despite having had fewer than 20
employees, and some still have fewer than 20. So that just seems true to me as a matter
of fact. The more radical example, which I haven't quite seen but have seen glimmers of, is a single
person or a duo of founders who have been able to go from their idea to a live product that is
working and serving customers in a matter of weeks or months. Like I haven't seen many examples of
that yet, but that seems like a this-year thing, like a happening-now thing, not a
maybe-five-years-from-now thing. That is a happening-right-now thing. Eddy, you also mentioned
the black pill and how you reject it outright. It's not all over. There is a path forward.
Christian, I want to mention, last time I spoke to you was October of last year. And the book you
had recommended when I asked you for a book recommendation was If Anyone Builds It, Everyone Dies,
which is perhaps along the black pill genre of AI books out there.
I also read this book.
I'm not going to take too much time in the podcast to disparage it,
but I will say that Nick Bostrom, whose book Superintelligence,
I feel, treats this topic the most carefully
and sort of formally, in a philosophical way.
It's actually a great read,
even though I disagree with elements of it.
Even Nick Bostrom has changed his tune on this.
He recently had a paper basically analogizing the choice of whether to pursue superintelligence
and this sort of broad extreme automation as not a choice between build a bomb or not,
which is how many seem to frame it like in the book you mentioned.
Should we make a bomb that blows us up or not, this sort of stark, obviously good or bad decision.
Instead, Bostrom now frames it as a patient who is terminally
ill, going to die, but we can choose to perform a risky life-saving surgery.
And what he's trying to say is human beings are doomed already, right?
And I don't mean in some cataclysmic way necessarily.
I'm not saying he means it that way.
Just, you know, we're all mortal.
Like, we are all going to die, right?
I mean, just a standard memento-mori type way.
And if we want to try to treat that and we want to solve that type of
problem, we need incredible work. So why not, why not take the shot? Why not take the shot? I find
that very convincing. So he's in some way, obviously, I'm not, I'm not trying to rob him of his nuance.
Bostrom's thinking is very, very, very thorough and fascinating. But I think even he in some senses
flipped a little bit on this equation. Bostrom, who gave us the paperclip thought experiment
of a rogue AI that maximizes paperclip output. And in the process,
vacuums up all of the resources in existence to do so.
Yeah, I would say the Trojan horse externality in the paper
is definitely inspired by the paperclip analogy, right?
It's this idea that there's going to be side effects,
and this actually brings me to open source.
I do think, very much like in crypto, open source is going to play an important role here.
The gist is essentially that if you believe that some of the defenses on the proprietary
models are easy to circumvent anyways,
then the value you get from deploying open source in society
may actually give an early signal of how these systems can be abused,
letting you build the countermeasures.
What I actually liked about that book was this single idea,
and I disagree with a lot of the conclusions, with most of them:
that these models may pick up preferences,
and we've seen it in the wild,
that are almost like side effects,
and some of these might be minor, some of these may be major,
and as we deploy them, we may not be aware
of those preferences in the system.
And going back to what I think it's important to do now:
that verification infrastructure
is almost the antibodies for the side effects.
Part of it is going to be experimentation or open source.
Some of it is going to be crypto-primitives.
Some of it is going to be better tooling, to be honest,
that we give engineers and everybody else that uses AI
to make sure that when they're automating,
they still have some oversight and they can steer and align.
When you combine all of those together,
I think we're drastically reducing the cost of a massive, massive failure.
I actually wish we had even more time to talk about this side of it
because it's so, so, so interesting.
Like, take what we're talking about, Robert,
that if it's possible for only a few people to make a company,
then there will be many, many, many, many, many companies.
And I don't mean like gig economy, someone doing sort of a simple type of labor
that is easily kind of commodified and understood by a larger network.
I mean complex work, maybe potentially lots of complex work.
And if that's the case, you need coordination across many of them.
And coordination is very complicated.
You need reputation, you need identity, you need provenance for types of data, you need provenance for types of payments.
We talked about this insurance idea.
It gets incredibly complicated.
And maybe if moats are harder to form, as Christian was alluding to, some of the things that we thought were moats may actually be easily dissolved by AI,
then there may be fewer majorly large platforms
that can sort of coalesce energy to solve these problems,
coalesce focus to solve these problems.
So what you'll need, if you have all these companies,
many, many, many complicated challenges
and it's difficult to form specific certain network effects
to coalesce solutions to solve them,
then you still need networks.
So the blockchain networks end up being this very attractive thing
because they're credibly neutral.
So all the individual agents and actors in the system, I mean, can scrutinize them for their neutrality
and know they're not necessarily being rent-collected by using them.
And they may want to coordinate around these things for, well, exactly what I was saying.
Information sharing, payments, insurance, provenance of data.
There's just a lot of things you'd want to do with them.
You know, why worry about trying to figure out the exact reputation of the 50 billionth company you've interacted with on this thing,
when instead you can trust some smart contracts and some verifiable AI models
to ensure that the exchange happened the way you expected
and payment was tendered as needed.
So it's almost a little inevitable to me.
I feel that blockchains end up being a very, very big part of this story.
If there's a lot of complexity, a lot of fragmentation,
more verification needed, more financialization of services rendered,
I think there's a lot to disentangle there.
I completely agree.
And to some extent, it really boils down to: do you believe that intelligence,
the relevant intelligence, right, for making decisions, for creating productive outcomes,
is going to be fully centralized in a supermodel, an AGI,
or eventually an ASI, the only one that gobbles up everything?
Or, as we've seen to date, right, where the gap is maybe months,
even between some of the open-source models, and, of course, there are problems
with some of what is being built right now not respecting intellectual property.
but putting that aside,
if you believe that intelligence
is going to be more distributed,
then I think the future
that Eddy is describing is inevitable, right?
Because you have all these pockets
of relevant intelligence in the economy
that will need to transact with each other,
will need to trade.
And yeah, I think we've been building
in crypto, the rails,
and the infrastructure for that for a long time.
So I think it's going to become a lot more useful.
Christian, having done all of this research and investigation,
how are you taking the findings
into your own work,
your own life. I would say
I already hinted at, you know,
with our kids, a big part is
okay, accelerated mastery.
They're in the driver's seat, even if
they're little. I think that director's role
is something we need to train for really
early. And I think a lot of the
education system is optimized
for the opposite. It's optimized for making
them, you know, actually perfectly
automatable. For me, it's
just, you know, pushing myself to rethink
every time I start a flow. It's like,
okay, this is how you used to do it. And I like
that it's highly verified at the end,
but do I dare take a little bit of risk
and just automating more of it?
So it's uncomfortable, right?
Because especially if you strive for really good outcomes,
sometimes you're like, well, should I do this or not?
But I think it's the only way.
And last, I've been thinking more
about the gaps that AI seems to be creating,
because like any great new technology,
it comes with all sorts of side effects.
And often those are the shovels in the gold rush
that are worth building on.
And so thinking more about what will society need, what are the things worth building.
And yeah, and why aren't they here yet?
So the classic exercise of, like, projecting a few years into the future, and at this point
a few years is like two or three, and working backwards.
But honestly, a lot of awe
as you see these systems. And look, we couldn't have written this paper without all of them:
Gemini, ChatGPT, Grok, Claude, of course.
They were great co-authors.
At times, you know, they went off the rails.
and they kept deleting pieces that we needed in it.
At some point, we had left some Easter eggs for LLMs reading it.
And I was having this conversation with Gemini,
and Gemini really surfaced the fact that it, you know,
enjoyed the Easter egg.
And it had a super sassy comment.
I'll post it when we share the podcast.
Are these the equivalent of prompt injections that you, like, hid inside of the work here?
We did leave a few.
but it was kind of a moment where you could see the intelligence.
It was uncanny. It was definitely creative.
It was really insightful.
It was one of those defining moments in the writing of the paper: okay, you feel really like a peer, not like a tool.
So fascinating stuff.
To the extent that you used AI in the creation of all of this great work, I could not have wrapped my head around it without those AI tools as well; they held my hand and broke down all the concepts for me along the way.
So it was useful on the other end too.
And I just want to also highlight the fact that,
you know, you've done all this investigation
into the economics of AI and its impact,
and you work in crypto.
I think that is an interesting testament
to where value could be in the future economy
that you're still staying in there, right?
You're still going to work in this field.
Again, we said this in many different ways, right,
through the podcast.
The two technologies are complements,
and if anything, I think we will see that really soon.
As some things break in society,
systems that we used to rely on will not work anymore.
Yeah, we have the primitives in crypto,
so it's going to be quite an exciting time
for anyone building in this space.
All right, well, anybody who wants to read this paper,
it's called Some Simple Economics of AGI.
Highly recommend you check it out.
There is some alpha in there
that could maybe affect your life
and what you should do with it.
So give it a read,
and thanks for tuning in.
Eddie, Christian, thanks so much for your time.
Thank you.
My pleasure.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcasts, and Spotify.
Follow us on X at A16Z and subscribe to our substack at A16Z.com.
Thanks again for listening and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security,
and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see a16z.com/disclosures.
