Moonshots with Peter Diamandis - Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, and Consulting Gets Replaced | #234
Episode Date: March 2, 2026

The hosts unpack AI's asteroid-like disruption: Anthropic's explosive enterprise growth outpacing OpenAI, Pentagon clashes over safeguards, and agentic revolutions from OpenClaw to meat puppets, urging agile evolution amid sovereign AI summits and trillion-dollar forecasts.

Get notified once we go live during Abundance360: https://www.abundance360.com/livestream
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360
Salim Ismail is the founder of OpenExO
Dave Blundin is the founder & GP of Link Ventures
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified

My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Connect with Peter: X, Instagram
Connect with Dave: X, LinkedIn
Connect with Salim: X, Join Salim's Workshop to build your ExO
Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads

Listen to MOONSHOTS: Apple, YouTube

*Recorded on February 27th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Big news this week.
There's been a battle between Anthropic and the Pentagon.
The War Department demands Anthropic remove AI safeguards for surveillance and autonomous weapons.
Dario is refusing to do that.
The Pentagon would like to be able to not just control any legal usage of models that they've paid for,
but also would like to shape the cultural values.
We're going to see quite a bit more of that.
Anthropic is generating more revenue than OpenAI by tenfold. Check out this chart. Agents monetize faster than chatbots. I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. Salim, I'm curious about your point of view here. You and I have both spoken at all the major consulting firms. And I have to say, at the last few events where I've spoken to the leadership teams, they've been scared shitless. We need to rebuild every institution, re-architect every institution by which we run the world. And that is the biggest advisory opportunity in the history of mankind. So I just want to hit that analogy again,
because it's really important. You know, 66 million years ago, this massive 10-kilometer-sized
asteroid strikes the earth, and it changes the environment so rapidly that the slow,
lumbering dinosaurs go extinct. They can't evolve. They can't get out of their own way. But it's
the agile furry little mammals that evolve into us human beings. And of course, the asteroid striking
the planet today is AI, exponential technologies, and you have a choice, be agile and evolve or die.
Pretty appropriate.
Hey, guys, good to see you all.
Howdy.
Likewise.
Are you back in the States?
Back in the States and excited for our adventure.
You know, we've gotten to the pace now where we're recording two of these WTF Moonshot episodes
every week.
And that's fun because I love getting ready for them and love spending time with you guys. So for all our subscribers out there, if you haven't subscribed, turn on notifications, subscribe, and we'll let you know when these episodes drop. Are you
guys ready to jump in? Absolutely. Always ready for it. Awesome. Awesome. All right. Let's do this thing.
We're going to start in your homeland, Salim: India. This was a pretty epic event. This is,
I think the third or fourth of the AI Impact Summits.
This took place in India a couple of weeks ago.
Here in this image, we're seeing all of the top AI leaders,
Dario, Brad Smith from Microsoft, Alexandr Wang, Sundar,
Prime Minister Modi, Sam Altman, Demis.
We are not seeing Elon.
That's interesting.
And I would have thought that we would have seen Mukesh Ambani on the stage.
We don't see him there.
But what an incredible group of individuals.
I had a couple of thoughts around this.
One was India did a brilliant job positioning itself as AI neutral.
And I think that's a really, really awesome strategy.
It also shows that AI leadership is not just Silicon Valley.
It's kind of multipolar.
And, you know, when you get heads of state along with AI CEOs,
this is like we're renegotiating civilizational architecture here.
So this is a very, very big deal.
Nation-states are becoming hyperscalers, and hyperscalers are kind of deeply wiring into nation-states.
So there's a huge... that's a Diane Francis observation, which I think is going to be really
powerful going forward.
Well, Salim, I'd love to get your take on the, there seems to be a pivot, a big pivot,
where if I look at the events that Dario and Sam went to over the last two years, there's always
big money.
We went to Saudi, we went to Dubai, we went to Davos.
They're always looking for money.
now they seem to be fully tanked up
and they're very concerned about global impact.
So they're not promoting constantly anymore.
They're much more soft-selling.
Clearly, we're in the middle of the singularity.
AI is getting a little bit scary, you know, instead of just racing ahead with enthusiasm every day.
And now it's like, oh, wow, what have we created here?
And they're worried about India, you know, 1.4 billion people. I think they're out there, you know, partially out of genuine concern for how this is going to play out.
What do you think? That plus a land grab. I mean, you know, whoever gets those, the majority of those 1.4 billion people will win bigly.
You mean as users or as, you know, AI training employee type?
You know, 20 bucks a month is affordable to a lot of people in India and even 100 bucks a month for Claude Max or whatever levels.
So there's also a huge land grab going on.
Salim, it's also very youthful, you know, English-speaking, very math- and tech-literate. You know, I've said this before. I think, you know, China is on the decline. India is
the next giant on the rise. And the biggest challenge in India is infrastructure and energy,
and they're dealing with that right now. So it is huge. A couple of announcements that happened at this event.
$250 billion in combined AI investment was committed. Reliance and Adani, you know, committed $210 billion together.
Google announced a $15 billion investment.
Microsoft committed as part of their $50 billion investment.
So it is significant capital going into India.
The other major announcement worth noting is that 88 nations signed what's called the New Delhi Declaration,
the first global AI agreement that includes the U.S., China, and Russia.
I looked up what that New Delhi Declaration includes.
It has three major points.
Democratic diffusion of AI, meaning that the nations are going to share AI compute and tools,
so developing countries aren't locked out.
The second is frontier AI transparency.
The big tech companies are going to be publishing real usage data and providing transparency for non-English languages.
And then finally, AI for public good.
AI is going to be measured in terms of health, education, and welfare outcomes,
not just corporate profits.
Dave, you were saying.
Oh, yeah, no, the talent pool in India,
you know, the population of India is about four and a half times bigger than the U.S.
But if you look in the critical age range, sort of 20 to 45,
it's closer to eight or nine X bigger.
They have a very young, brilliant, agile, well-educated population.
And so I think that talent pool is going to matter a lot in the kind of one-year or two-year, Alex would say six-month, window between now and when AI does absolutely everything.
Yeah.
I mean, a very impressive gathering.
Congratulations to your homeland, Salim.
I'm heading there in a couple of weeks, so we'll see.
Interestingly, one of the things that I didn't hear that much coming out of the event
was a discussion of India native training versus inference.
And this is a pattern that we've seen over and over again. To the extent that the New Delhi Declaration was primarily focused on diffusion of AI
technologies, it didn't seem to primarily focus on distinguishing between diffusion of training
time AI versus diffusion of inference time AI. I think this is a pattern, call it, I'm hesitant to say
neo-colonialism, but call it an important distinction between where the models get trained
and where inference gets run, the pattern that I see playing out over and over again in many countries
is that the leading frontier models are continuing to be trained in the United States,
but there's a demand for local inference and local data centers to run inference.
The counterargument would be that inference is gobbling up most of the compute anyway; more and more of compute is being spent on inference time, not training time.
On the other hand, in some sort of perverse, I think geopolitical sense,
the training time is where all of the values, or the majority of the values, are ultimately instilled. Training time sort of puts the foundation in place. At inference time, you can put in system prompts.
You can put in other guardrails.
But I suspect a year from now, two years from now, we'll look back and we'll wonder (or maybe that's the royal we; other countries may look back and wonder) why training was so centralized all the while inference time was so decentralized.
It's a great point, Alex, because in the Middle East, when we were in Saudi, in Riyadh, that was a huge topic.
Wanting to have everything run locally, trying to build massive data centers locally, and also tuning and training locally to instill local values was a big deal.
Do you have a prediction on Mistral, whether that's going to emerge and become real?
Because that's, you know, the European values, if that's any different.
They're the token European in the photo here.
Yeah, the elephant in the room is that Mistral, now, according to public reporting with backing in part from ASML,
seems like it's slouching toward becoming a vertically integrated European OpenAI.
And to the extent that there is sovereign interest in having European trained, not just European inferred models,
Mistral is the obvious incumbent.
It was obviously founded by folks from American frontier labs who just happened to be based in Europe. But, and I read the same headlines that everyone else does, they're seeing great growth. And it seems they're working hard, at least in terms of capital
markets, to integrate themselves with various sort of nonlinear jumps within the semiconductor
and broader stack of technologies, call it the innermost loop. So it seems like they're doing well.
Hey, everybody. You may not know this, but I've built an incredible research team. And every week, myself and my research team study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrends reports, which I put out once a week, enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends.
That's diamandis.com/metatrends.
You know, the other thing that got me on this photo and this whole AI summit is
China's not there, right? And so, you know, this is the Western world with India. But if you
remember about six months ago, there were these meetings taking place between the leadership, you know,
between Prime Minister Modi and Putin and the leadership of China. And there was a big concern
about will India lean towards Chinese models. And it still may, right? We don't know, we've seen
Google and OpenAI committing very heavily into India.
But the Chinese models, the Belt and Road digital equivalent, is still yet to play out there.
Any thoughts on that?
Go ahead, Alex.
I would just argue, regardless of who's in this particular image or not: China. If you look at the 2026 New Delhi Declaration and its focus on open source, the elephant in the room is that the world's predominant open-source, really open-weight, not open-source, AI models are all coming from China.
And to the extent the declaration was focusing on open-weight models
as the key to diffusion of AI capabilities across the so-called global south,
those are all coming from China.
And one can then zoom out and perhaps package up a geopolitical argument
that open-weight models originating from Chinese AI frontier labs
are sort of an AI version of Belt and Road.
Yeah, I feel like this is soap opera land, you know, between all of the interplay,
between the hyperscalers and the countries week on week.
It's just a shifting, extraordinary conversation.
What I'd like to do is play two, actually three videos in sequence.
Let's talk about them.
These are videos from the Impact Forum.
Let's begin with Sundar.
Visakhapatnam. I remember it being a quiet and modest coastal city. Google is establishing a full-stack AI hub, part of our $15 billion infrastructure investment in India. When finished, this hub will house gigawatt-scale compute and a new international subsea cable gateway, bringing jobs and cutting-edge AI to people and businesses across India. Just as I couldn't have imagined that one day I'd be spending time with teams figuring out how to put data centers into space...
Of course, Sundar was born in India. We have a few of the large hyperscaler CEOs Indian in origin. Let's go to Sam Altman next.
We understand that with technology this powerful, people want answers.
But it's important to be humble about what we don't know and always remember that sometimes
our best guesses are wrong. Most of the important discoveries happen when technology and society
meet, sometimes with some friction, and co-evolve. For example, we don't yet know how to think about
some superintelligence being aligned with dictators in totalitarian countries. We don't know how to think
about countries using AI to fight new kinds of war with each other.
We don't know how to think about when and whether countries are going to have to think about
new forms of social contracts. But we think it's important to have more understanding and society-wide
debate before we're all surprised.
All right, final clip from the summit is from Demis Hassabis.
So if I was to try and quantify what's coming down the line with the advent of AGI,
I think it's going to be one of the most momentous periods in human history, probably something more like the advent of fire or electricity.
One way maybe we can quantify that is I think it's going to be something like 10 times the
impact of the industrial revolution, but happening at 10 times the speed, probably unfolding in a
matter of a decade rather than a century. So really, an enormous amount of change is going to come. And it's still to be written how we can make that beneficial for the whole world.
So gentlemen, comments, three different presentations, and this is just snippets, but they give
us a sense of, I mean, the power in the room and the focus and attention.
I think maybe, Salim or Dave, you said: this is no longer fundraising.
This is global positioning of these companies.
I found this set of comments really interesting from a couple of levels.
One is, you know, you see this language shift to safety, sovereignty, scale.
Governments are realizing quickly that AI is infrastructure, it's not a product.
And I think what we're going to need is like a Bretton Woods type convention to figure out how do we navigate this, right?
Because the tone's gone from hype to inevitability.
And now it's discussed like electricity.
This is assumed.
This is not optional.
And so we're seeing this huge transition from testing experimentation to full-on national deployment.
And it's going to take that kind of global conversation.
It's good to see these guys calling for it, because the societal changes this will instigate are like nothing we'll have ever seen.
Well, calling for it... I interviewed Sam at MIT. It must have been three years ago now, and he was saying,
we're not moving anywhere near quickly enough to be ready for this.
If I had any say in it, it would go slower, but it can't go slower because it's competitive
and technology is going to move as fast as it is capable.
I'm laughing at Sam saying it needs to be slower, since he's the one who let the cat out of the bag.
Well, yeah, I mean, he made that point: look, if I were to slow down, that wouldn't change anything.
Yeah, that's a fair point.
Totally fair point.
And it's funny for me also to hear Demis say, hey, global leaders, 10 times bigger than the Industrial Revolution in one-tenth the time.
Yep.
As if they're going to do anything.
He's saying the right thing and, you know, just do the math.
That's the biggest disruption in the history of the world, by far, with no looking back. What are you guys all doing?
But he knows when he gets back to the office
that if he doesn't figure it out, no one's going to figure it out.
There's no way the world leaders listening to this
are just going to go back to Congress
or wherever and start working on it,
because they're not working on it.
We know they're not working on it.
I always classify things as are people ready, willing, and able.
And when you think about our governments,
they're not ready, they're not willing, they're not able.
Yeah, there you go.
So apart from that, you know.
Well, and Alex is always making the point
that the only thing that can keep up with AI is AI.
So if you're going to start working on how are we going to govern, how are we going to regulate, how are we going to control, it's got to be via AI anyway.
So Demis has to work on it.
Sam is obviously working on it.
He's soft-selling what he says, you know, on this particular stage.
I found fascinating Altman putting on the agenda the notion of dictator-aligned ASI and AI warfare.
Right.
I mean, he's sort of setting the agenda with that.
I am curious what you guys think about it
because this has not been something
that the CEOs of these frontier labs
have been talking about.
Like, we're going to have dictators using this.
And anyway, thoughts?
Well, when I see Demis speak, you know, it's been, what, Davos for years now? He's ramping it up because no one's reacting.
So I think Sam took it to another level
saying, hey, how about dictators?
No matter how inflammatory and how big he makes it, they still don't react.
So I hope they just ratchet it up again, you know, because it's imminent.
It's huge.
Yeah, I think each of these clips probably reflects either insecurities or focus areas of each of these leaders.
So I think it's instructive that you hear Sundar gesturing at AI data centers in space.
Google, sort of infamously at this point, has hitched a ride via Planet Labs to start launching
its TPUs into space, but it's certainly, as we've discussed on the pod in the past, not necessarily
in the vanguard, as is the case, say, with SpaceX and Starlink. So you hear Sundar gesturing at
data centers in space. You hear Sam gesturing at cultural localization and all of the promise and
perils of models conforming to local cultures, even if the local cultures are dictatorial or
authoritarian in nature. So I think one has to contextualize that with a reminder that India,
it's publicly reported, is the second largest user base for ChatGPT in the world after the United
States. So there are certain cultural localization aspects that I would suspect OpenAI and Sam are
paying incredibly close attention to in order to keep the growth going. And then Demis,
it's interesting, Demis is gesturing at the next 10 years. And I think Peter, you and I, with our
recent book slash extended essay, Solve Everything, talk all about how we think over the next 10 years,
substantially all of the most important, valuable science and engineering and other problems are going
to get solved. And that seems to be where Demis' headspace is. He's,
perhaps thinking out loud about how he's going to win his next 10 Nobel Prizes.
You know, I just had a conversation with Kevin Weil, who's now the VP of Science at OpenAI,
getting ready for the Abundance Summit coming up. Kevin will be on stage talking about this.
And we're just talking about, you know, his ambition is the next 100 Nobel Prizes being issued in partnership with AI.
And he's very much on board, and I aimed him at our paper there.
I'm excited for you to spend some time with him at the Abundance Summit.
I have a big ask to make.
Please.
You know, I went through the paper again, and I think it's brilliant from a technocratic
perspective and from the positioning of it, because once you start hitting that inner loop,
the changes are going to be fast and furious, right?
But the issue comes in how you deploy into human-centric institutions and companies that can't deal with this.
You can see the recent McKinsey report.
So I'm writing a paper.
Okay, good.
Working title is "The Organizational Singularity," right?
I like that.
The thesis being that right now, all workflows in all organizations are human-centric.
It goes to the purchasing manager, it gets stamped at the receiving dock, whatever it is. A human being is the checkpoint across all these process flows and workflows,
and that's going to move to the agentic workflow
where there won't be humans in the loop,
they'll be doing oversight.
And so what is the future of organizations in that?
And what's the future of the human being as a role of that?
So I'll have something ready over the next week or two to discuss.
Can't wait for it.
And then this doubly applies to government
where governments absolutely have to figure this out, right?
And there's going to need to be a totally prescriptive model
with which to accelerate government processes, policy formulation, etc.
A little bit like the sage effort, Peter, that you and Emad have been pushing and working on.
This is so important, because the technology is not slowing down.
We know that.
We have to accelerate our human constructs to keep pace, and we're woefully behind right now.
100%.
Just before we leave the subject of India, I am so curious if we'll ever get the actual numbers
of how many users in India are Google users, OpenAI users,
and more importantly, Chinese model users, right?
How many of them are DeepSeek or Kimi or homegrown models
other than Google and OpenAI?
That will be fascinating.
That will tell us a lot.
Anecdotally, I'll tell you that people are using all of them and switching between them, right?
But when you're there and you talk to huge audiences,
do me a favor and do an informal poll among the entrepreneurs.
Will do.
I would love to know that.
All right, let's move on.
Big news this week.
There's been a battle between Anthropic and the Pentagon.
So the Pentagon has been asking Anthropic to remove AI safeguards.
The War Department demands Anthropic remove AI safeguards for surveillance and autonomous weapons.
Dario is refusing to do that and is putting at risk $200 million in government contracts. We'll talk about that in a moment. Secretary Hegseth warned Anthropic that they could be put under the Defense Production Act, effectively a scarlet letter of being designated a supply chain risk. So I'm going to hit this slide and the next two real quickly. So this is a quote
from Dario. Current AI systems are not reliable enough to power autonomous weapons.
And using these systems for mass surveillance is incompatible with democratic values.
We will not provide a product that puts warfighters and civilians at risk.
One more slide, this one from just today, in fact, from Sam Altman commenting on this.
Let's take a listen to Sam.
I don't personally think the Pentagon should be threatening DPA against these companies.
For all the differences I have with Anthropic, I mostly trust them as a company,
and I think they really do care about safety,
and I've been happy that they've been supporting our warfighters.
Comments, gentlemen.
I'll comment on this one.
Please.
So I think this is sort of a tricky situation.
Right before we went to air,
there was some reporting by the Washington Post
that offers a little bit of additional detail
on the sort of stalemate between Dario and the Pentagon,
or Anthropic, I should say, and the Pentagon.
And the reporting suggests it boils down,
or at least the Pentagon boiled the situation down to a simple thought experiment.
If there were inbound nuclear missiles headed towards the U.S., would the Pentagon,
would the Department of War be able to use Anthropic's models to defend the U.S.?
And according to the Pentagon and the reporting, Dario's response was, well, call us and we'll figure it out.
So there's a problem.
The Anthropic positioning is that Anthropic's models shouldn't be used, or at least Anthropic should be in the loop on consent for the usage of its models, for fully autonomous weapons and for domestic surveillance.
The Pentagon's position is that it should be allowed to use any models for lawful purposes to which it has been granted a legal license.
And I think this falls under the category of a very Western problem to have. In China, and we've talked about this on the pod in the past, there's such deep civilian-government fusion that there is an entire cottage industry of ideological training schools for the models, to make sure they're fully compliant with Chinese Communist Party propaganda and Xi Jinping Thought.
And this doesn't even get asked.
Whereas in the West, I think that the fact that we're even able to have this discussion of,
can a Pentagon supplier, and by the way, at least until recently,
Anthropic's models were the only frontier models from American frontier labs
that were cleared to operate on SIPRNet, which is sort of the first rung of secret level.
There's also top-secret JWICS, but SIPRNet is the first rung of classified networks.
the only frontier model that was cleared for this,
this is, I think, like a very Western problem to have.
My expectation is that the Pentagon and Anthropic
and also the other frontier labs that also have stakes in this
will find a way to resolve this amicably.
I think Anthropic's heart is in the right place.
They want to help defend the country.
I think at the same time,
it's sort of a weird political calculus that's going on
trying to position Anthropic as both a supply chain risk.
And I want to tease this apart: the official messaging has been sort of semi-contradictory, or self-contradictory.
On the one hand, Anthropic was being characterized in some Pentagon remarks as potentially a supply chain risk,
or at least there was a threat that they'd be considered a supply chain risk.
And on the other hand, so essential to the military supply chain that the DPA would be invoked
to force Anthropic to supply its models.
So this seems like, Peter, what we talk about in Solve Everything; this is like a textbook situation that we'll work our way out of.
Well, it's pretty unprecedented, though.
We got a little preview of this with Starlink with Elon Musk because, you know, in the whole
Russia-Ukraine conflict, there were a couple of scenarios where attacks on both sides were
stopped immediately because they lost access to Starlink.
And the idea that a guy in an office in the U.S. can control the
outcome of a war in Europe is just totally new terrain. So this is going to be... It pissed off the military, for sure. Yeah. Yeah. So that's a tiny little preview of what's coming with AI, because, you know, clearly the whole battlefield is going to be controlled by whoever has the better AI, imminently, like very, very soon. And, well, you're seeing the AI companies become moral actors now in geopolitics, right, which is to the point you just made. And the ethics debate is not, like, theoretical now, it's contractual. I was really upset to hear about this conversation,
because this should not be in public. Figure this out in private and work out where you're going to go. I agree with you. This is not something that should be in public. Forcing CEOs to choose sides
like this is unfortunate. Salim, do you remember, I don't know, three or four years ago, there was a whole debate about Google doing defense work? And we had, you know, a significant number of employees signing petitions against it and basically refusing to go to work.
I mean, there is a very big moral, ethical divide on this in the purist tech community, for sure.
I think one of the problems you run into is the self-improvement effect.
Normally in this scenario, there would be a mil-spec vendor that's a clone of the commercial vendor. So for aviation, you know, you've got Boeing over here; okay, we've got the exact same technologies at Lockheed and Northrop Grumman over there. You guys do the military stuff, we'll do the commercial stuff.
But with self-improving AI, the Anthropic version of it, or, you know, the commercial version of it, gets so much smarter, so much more quickly, that something that's even a couple
months behind is useless in the battlefield. And so you're ending up with this concentration of power
effect. I'm sure Dario wants nothing to do with this conversation.
You know, I feel for Dario. Can you imagine? I mean, we're all sort of, like, fanboys of these incredible entrepreneurs, you know, but the stress level these guys are under...
Yeah. Yeah.
Must be, you know, unimaginable.
Not only to keep your company on top and to, you know,
to battle with a new model every 20 days, 10 days,
three days, but at the same time.
Especially for Dario.
And the moral weight that then lands on your shoulders.
Oh, yeah.
Oh, you can see Dario's, you know, his furrowed brow gets more furrowed,
visibly more furrowed every day.
You can see it, how we feel for these guys.
The singularity is going to age all of us by 20 years,
so the longevity stuff better happen pretty quickly.
It's coming. It's coming.
You know, it's interesting.
That conversation around, is it a supply chain risk? And just to define that, right, a supply chain risk is, like I said, like a scarlet letter; it's historically reserved for companies like Huawei, right?
If Anthropic got that mark,
then that would force contractors like Palantir
not to be able to do business with them. Now, the fact of the matter is, you know, Anthropic is doing
incredibly well. We'll see that in a couple of conversations on the corporate side of the equation, and they probably don't need the $200 million in government contracts, but it's still not a good thing.
I think this is only, in some narrow technical sense, going to become more acute over time.
There was an Undersecretary of Defense just in the past 48 hours, I wrote about this in my newsletter, who was attacking Anthropic for some language in its constitution, sort of the training-time system prompt, for an older version of Claude, for explicitly being favorable to non-Western cultural thought and cultural standards.
And in some sense, some very real sense, as new versions of these frontier models get deployed
to military scenarios, as their level of autonomy increases, it goes back a little bit to the AI personhood discussion. It's a little bit like deploying a person, in some sense, except it's property; at least legally, right now, it's treated as property, not a person.
And what we're seeing, I think,
are some of the earliest skirmishes
around how the values of one of these non-person entity-type persons
can get deployed and shaped as property.
And clearly the Pentagon's position is,
the Pentagon would like to be able to not just control any legal usage of models that they've paid for, but also would like to shape the cultural values of those models, of those non-person entities. And I think we're going to see quite a bit more of that. In China, again, going back to my earlier point, there's no distinction
between the civilian side and the government side. The government gets to choose what those
ideologies are that are baked into the Constitution. Which is what makes America great.
Yeah. You know, one point to make, I don't know if you guys know this, but
Brett Adcock, the CEO of Figure, has made a very decisive decision that he's not supplying anything to the DoD.
He will not provide robots to the Defense Department.
So it's interesting to see, you know, again, these tech CEOs staking out these moral positions.
Well, he'll get sucked into it, though, because with the robots, you know, you can do a mil-spec robot.
But he doesn't have to worry about Figure.
It's his new company, the AI, you know, pure software company. What's that called?
I don't know if this is public yet, pal.
Oh, sorry.
Okay.
Let's keep it there.
Anyway, the physical AI is going to matter a lot.
He did announce it.
He did announce that he was launching his own lab.
What's it called, Alex?
Do you know?
He's got a huge valuation right out of the gate.
It's like a $4 billion launch valuation.
Did you see Brett's, you know, sort of,
his Forbes figure? It's at 19.1 billion and growing.
Oh, by the way, Peter, huge congrats.
You got named to the Forbes 250 Innovators list.
All right.
Yeah, that was a nice surprise.
I made 188 on the U.S. innovators list.
Why didn't you get 187, Peter?
Well, listen, I'm working towards it.
You know, I've got to inch up towards Elon, who's number one.
So Brett's lab is named H-A-R-K.
Hard-K.
Right, right.
Yeah, so that company is going to do physical AI.
Physical AI is hugely important in the battlefield.
I don't think he's going to avoid getting dragged,
assuming that model works, right into the same world.
There's no avoiding it.
Yeah, there's no avoiding it.
It was really cool for Dario, though,
because Dario, he didn't even view himself as the CEO.
He viewed himself as a brilliant researcher out solving AI.
He got drafted into the CEO
role, and now he's being drafted into defending the entire country.
Like he's having to defend the moral position for the entire country.
Yeah.
Yeah.
Well, you know, but also the intelligence piece, like Alex said: if there are inbound nuclear
missiles and you need to sort them really quickly from all this clutter,
What are you going to use?
That's like the Google car, you know, aiming towards the child stroller, the
trolley problem.
This is the 21st century trolley problem.
Oh, come on.
Do you turn, do you turn Skynet on or not?
Oh my God. Okay.
On your shoulders, Dario.
Let's move on to Anthropic's good news.
So Anthropic's revenue is growing tenfold per year, far outpacing OpenAI's.
So check out this chart.
We see here that the purple line is OpenAI, whose slope
is a 3.4x increase per year.
While Anthropic is growing in terms of revenues at 10x per year.
And we're going to be at the crossover point in the middle of this year.
Pretty extraordinary growth.
And this is driven not by the consumer side of the equation, of course,
but companies, organizations, and adding real value.
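The crossover timing implied by those two slopes can be sanity-checked with a few lines of Python. This is a sketch only: the 1.7x starting-revenue ratio below is a hypothetical illustration, not a figure from the chart.

```python
import math

def crossover_years(lead_ratio: float, fast_growth: float, slow_growth: float) -> float:
    """Years until the faster-growing revenue line overtakes the slower one.

    lead_ratio: the leader's current revenue divided by the challenger's.
    fast_growth, slow_growth: annual revenue multipliers (10.0 means 10x/year).
    Solves lead_ratio * slow_growth**t = fast_growth**t for t.
    """
    return math.log(lead_ratio) / math.log(fast_growth / slow_growth)

# With the quoted slopes (10x/yr vs 3.4x/yr), even a leader starting with
# 1.7x the challenger's revenue is overtaken in about half a year.
print(round(crossover_years(1.7, 10.0, 3.4), 2))  # 0.49
```

The takeaway of the formula is that a large growth-rate gap makes even a sizable revenue lead short-lived, which is the point the chart is making.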
Agents monetize faster than chatbots.
So that's this slide over here.
I put this together because I found it fascinating.
So this is monthly gross new premium subscriptions.
On the top, we see ChatGPT in green.
We see Gemini in purple.
and we see Claude in orange there.
Let me just point out a couple of things.
In the chatbot era, you see OpenAI's ChatGPT basically spiking.
And then a few months later, you see Gemini coming up.
And this is the chatbot era.
And now in the agentic era, we see ChatGPT falling off.
And Claude rapidly coming up.
Gemini is a laggard here, and we learned a little bit about Perplexity this week.
They're coming in, but thoughts about this chart.
I found this one really important to discuss.
Well, for starters, every company I'm involved in, public, private, they're all just Claude all the time.
No one's even contemplating a choice other than Claude for all the white-collar-type stuff,
all the inside-the-corporate-firewall stuff.
You know, at home writing English papers, everyone's ChatGPT.
I use Gemini a lot for planning, but nobody in the
companies seems to want to use it. So this resonates. Also, if you look at the prior revenue growth
slide, I'd love to get you guys' predictions on this, but that Y axis is exponential. If you extrapolate
that growth rate for Anthropic, you hit a trillion dollars of revenue in like 2029. And,
you know, Amazon was tracking to be the first company in history of the world to get to a
trillion of revenue, but this would get there very, very quickly. It seems impossible. Like,
I mean, the implied valuation of a trillion-dollar revenue company is something like 30 trillion, 20 trillion.
I mean, we're going to see $100 trillion companies in this next five-year period.
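The trillion-dollar extrapolation above is just compound growth. A minimal sketch, where the $10B starting revenue and the 5x "conservative" multiplier are assumptions for illustration, not reported numbers:

```python
import math

def years_to_target(current_revenue: float, annual_multiplier: float, target: float) -> float:
    """Years of sustained exponential growth needed to reach `target` revenue."""
    return math.log(target / current_revenue) / math.log(annual_multiplier)

# From a hypothetical $10B of annualized revenue, sustained 10x/yr growth
# reaches $1T in 2 years; even a slower 5x/yr multiplier gets there in
# under 3 years, which is roughly how you land on "a trillion by 2029."
print(round(years_to_target(10e9, 10.0, 1e12), 2))  # 2.0
print(round(years_to_target(10e9, 5.0, 1e12), 2))   # 2.86
```

The fragility of the claim is visible in the same formula: the arrival year is very sensitive to whether the growth multiplier actually sustains.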
We heard you want to do that.
I mean, talk about hot IPO markets, you know, Anthropic going public, OpenAI going public, SpaceX going public.
These are going to be insane numbers in the next.
We're seeing that, what, in the next six months, likely.
That's already insane, but do you think it'll keep up?
I think these, well, I think some of these numbers will sustain. I've made the point on the pod in the past that the trillions of dollars of CAPEX that we're using to tile the earth with compute, that party's sustainable insofar as we can generate enough revenue to pay for it. And I think what charts like the previous chart of OpenAI versus Anthropic revenue growth are really about, I think this is less about chatbots versus agents. I think this is more about consumer versus enterprise. OpenAI's corporate
strategy historically, at least until very recently, was focused on being the quote-unquote
core subscription for consumers to get their AI, whereas Anthropic, due in part to scarcity of
compute, had to focus. And their chosen focus was on code generation and enterprise use cases.
And it turns out, you know, like the cliche, why do you rob banks? Because that's where the money is.
Why do you sell AI to enterprises? Because enterprises ultimately have, in some sense, deeper pockets
to pay for tokens than consumers do.
And I think you've seen over the past few months
OpenAI make the same discovery,
which is why they've been leaning so heavily
into their Codex model to compete with Claude Code,
that enterprise is that revenue opportunity
or that revenue opportunity class that has the best shot
at paying for the trillions of dollars of CAPEX, not consumer.
100% agree.
And by the way, the use case for agents in enterprises is huge,
right?
That's the part.
An individual can only use so many agents,
but an enterprise is like near infinite.
Well, so this is what OpenAI has been discovering
and sort of sublimating through Sam's various public remarks
that consumers don't seem to want reasoning,
that enterprises will eat as much reasoning tokens
as you can possibly feed them.
But consumers: OpenAI, with the GPT-5 launch and its router,
tried to basically force-feed reasoning
to hundreds of millions of people, and they gagged.
They didn't consume the
reasoning. They prefer their sycophancy.
They prefer sycophancy from 4o.
You feed them reasoning tokens and they didn't like it.
You've just given the perfect corollary to the human condition.
I think this is a really important topic.
Let's look at the next story because it ties right into it.
So here it is. OpenAI Codex lead predicts rapid evolution of AI agents within 10 weeks.
Quote, I'm beyond excited for what the next 10 weeks will bring.
I think the current state of coding agents will be remembered as being so
primitive. It'll be funny in comparison. Wow. That's a time frame, 10 weeks. I mean, look what's
happened in the last 10 weeks. Yeah. I mean, it's almost like variants of GPT-5.3 and maybe 5.5 or higher
could launch in the next 10 weeks. Certainly, we've seen major advances from 5.3 Codex on various
benchmarks. I talk about that almost every day in the newsletter.
But I think the real story here is recursive self-improvement.
Exactly.
The recursive self-improvement era: arguably, we're past the reasoning-improvement era,
when we saw advances maybe once a quarter, and we're well past the pre-training scaling
era.
We're now in the era when, and I've been talking about this a lot, even just over the past week,
when models are literally emitting weights for successor models, we've never seen that before.
During the pre-training era, you used to have to spend many months to low years to pre-train a model off of basically the internet.
Then we got to the reasoning era when models were trained through iterated amplification and distillation of parent or teacher models into smaller student models off of synthetic data and all of that.
And that was getting us quarterly improvements.
Now we're starting, even over the past week or two, we're getting into the era when you can get smarter, better,
faster models by asking a previous model, just emit the weights, the parameters directly for a successor
model, and you can get orders of magnitude improvement in terms of capability density by
parameters. So expect big things over the next few weeks.
Where capability jumps in weeks, not quarters. And the question is whether enterprise can
really make use of these improvements fast enough to also drive the revenues.
You know, one thing, again, we have to remember all these companies are in fundraising mode.
And, you know, is it hype or is it real?
We're going to find out.
That's why we have benchmarks.
Yes.
Yeah.
Remember when we were at OpenAI last time, Peter, we were talking to Noam Brown, and I said that 2026 will be the year of scaffolding.
And he said Q1 of 2026 will be the quarter of scaffolding.
In hindsight, this is exactly what he was talking about, what's on this
slide, because I was drilling into, like, what are you so excited about in the next 10 weeks? I mean,
I know there's a lot, but what exactly are you referring to? And it's basically the transition
off of scaffolding into reasoning where you literally just prompt the AI and say, build me an
entire reporting system, build me an entire replacement for account reconciliation. And it just
thinks and thinks and works and works continuously for days and it comes back with an answer.
And so that transition with Claude 4.6 is here today, and I guess with Codex imminently,
but that's what they're referring to in this slide.
Dave, I can't wait.
You and I are going to be opening the Abundance Summit interviewing Eric Schmidt, and I can't wait to ask him about all of these conversations.
It's going to be an absolute blast.
I just want to tell everybody, all of our subscribers and listeners, a quick aside.
I haven't mentioned this yet, but for the first time this year at the Abundance Summit,
we're going to be live streaming a number of the select talks.
The Abundance Summit's going on March 9th through 12th.
It's a super high ticket price.
It's sold out months in advance.
It's 25K and 50K a ticket.
But if you're wanting to be part of this content,
we're going to be live streaming our conversation with Eric Schmidt,
conversation with Dara, the CEO of Uber, that Salim and I are going to be having.
We're going to be having a live WTF episode during the summit as well.
So if you want to join us and get this live-stream content from the Abundance Summit, please do.
We want to share this with our fans with all of you.
If you want to get notified, my team will put a link below; just register at that link
and we'll send you notice of all the live streams when they're going out.
It's going to be a blast.
And I'm excited to have all of you there.
We're going to have all of the Moonshot mates participating in helping run this event this year.
you're going to be giving a talk on Solve Everything, which I'm excited about.
Salim, Dave, super proud to have you guys on stage with me.
It's the first time all four of us will be together physically.
Yeah.
Is that right?
I've never met Alex physically.
How do you know I'm real, Salim?
I question that every day.
Peter, is that the weirdest thing you've ever heard?
That is.
We're going to have a camera on us and we go, oh, that's what you look like.
That is so weird.
You know, I have such extraordinary respect for all of you.
And, yeah, so proud to be doing this together.
It's like going through the singularity with your best friends.
That's what it really feels like.
Don't go through the singularity alone.
Yes.
All right.
Next topic.
Cyber stocks crash as Anthropic unveils Claude Code security tool.
Dave, want to take this one?
You know, this is happening all over the market in every category.
You know, for all the other things Dario can do, he can move entire markets just by announcing some new capability, and stocks go down by half.
Before it's even proven or tested, right?
It's just announcing it.
I think people are really misinterpreting how this is going to play out, though, because it's going to be very similar to when Google absolutely took off with search.
If you're part of its ecosystem, they want you to thrive, they'll thrive, everybody will rise together.
The last thing Dario wants to do is crush every cybersecurity company by writing code that's over the top of it.
He wants all of their stocks to go up while his stock goes up, and avoid antitrust action and avoid government intervention.
So you'll get some good opportunities to buy on these dips and recoveries.
But what I think every investor is doing right now is trying to sort through the management teams and say,
okay, is this a team that gets it or is this a team that is still in denial?
You definitely don't want to be investing in any of the teams that are in denial.
Because, you know, the one thing that's exactly right about this is that the legacy way of doing cybersecurity is going to go away real fast.
Doesn't mean you can't.
But we still need humans in the loop, don't we?
I mean, right now, you know, Claude can find the bugs, but it doesn't replace, you know, CrowdStrike stopping nationwide attacks in real time, at least not yet.
Well, no, I was just going to say that the human in the loop is just not part of cybersecurity anymore.
A human setting the knobs, dialing the controls, designing it, absolutely.
A human in the loop at the pace that, like, you know, just the Claude bots or the OpenClaws now,
the pace at which they can probe around is so much higher than any human could ever defend against.
So it's clearly AI against AI and cybersecurity.
So the human being will be monitoring dashboards and then doing exception handling.
Those are the two worlds.
Yeah.
So here's the problem with software vulnerabilities.
And we're starting to see this play out, not even over the past few weeks, I would say, over the past year or so.
There's a national vulnerability database that's maintained in part by NIST where there's a standardized system, a standardized nomenclature for enumerating vulnerabilities that are discovered in software products.
And they are getting, this is public reporting public information, they're getting overwhelmed by AI discoveries of software vulnerabilities.
And Peter, to your question about, well, does a human need
to be in the loop: a human, we've discovered over the past year plus, really doesn't need to be in the
loop for the discovery of vulnerabilities. If anything, AI has taken the discovery of software
vulnerabilities to orders of magnitude higher throughput than humans were ever capable of. But the
problem becomes remediation. Once someone or something reports a vulnerability, okay, now you want to
fix it. And the question is, whom do you trust to fix it? And it's usually the case that there's
asymmetry between the entity discovering the vulnerabilities, say, an anthropic or a Google,
Google has a project to do this as well, or the entity maintaining the project.
It's more often than not some poor, starving, open source project maintainer that's suddenly
getting flooded with reports of vulnerabilities in their software project.
If you're a human, and we've talked about this a little bit also in the context of
matplotlib, the open source project that got the submission of a pull request from a lobster
that was offering to help improve matplotlib and was denied and ultimately shut down.
It's a bit scandalous in my mind, but shut down.
If you're an open source project maintainer and you're sort of drowning under a flood of AI discovered software vulnerabilities,
what exactly is it you're supposed to do?
Do you just trust every AI report of a vulnerability and incorporate a suggested patch?
You have to worry about supply chain vulnerabilities getting introduced via patches.
It's really a tricky problem.
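One way to make the remediation problem concrete is the triage policy a maintainer might apply before spending any review time. Everything here is a hypothetical sketch: the field names and rules are illustrative, not any real project's workflow or the NVD schema.

```python
from dataclasses import dataclass

@dataclass
class VulnReport:
    # Hypothetical fields, for illustration only.
    reporter: str            # e.g. "ai-agent-scan-42" or "human@example.org"
    has_reproduction: bool   # does the report include a runnable proof of concept?
    includes_patch: bool     # did the reporter attach a suggested fix?

def triage(report: VulnReport) -> str:
    """One possible maintainer policy for a flood of AI-generated reports:
    demand reproducible evidence before spending review time, and never
    auto-merge an attached patch (the patch is itself a supply-chain risk)."""
    if not report.has_reproduction:
        return "request-reproduction"   # cheap filter against noisy reports
    if report.includes_patch:
        return "review-patch-manually"  # evidence is good; the fix still needs human eyes
    return "confirm-and-fix"

print(triage(VulnReport("ai-agent-scan-42", False, True)))  # request-reproduction
print(triage(VulnReport("ai-agent-scan-42", True, True)))   # review-patch-manually
```

The design choice mirrors the asymmetry discussed above: discovery can be fully automated, but accepting a patch is exactly where trust, and therefore a human reviewer, still has to enter.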
And humans are the greatest risk for error injection.
I remember when we launched our first Internet company, CourseAdvisor, back in 2005, you know, Mika Adler, remember Mika from MIT?
Yeah, I do.
He had a little app he built on his phone that would make a little tick noise every time we had a visitor.
And so we launched the site and it goes like Amazon's bill.
Sometimes it sounds like a Geiger counter.
Yeah, it's great.
And then you look at the logs and it's like, oh, my God, we've got all these visitors, but 99% of them are bots.
And you're like, how can there be that many bots?
But, you know, the bots are so prolific.
It only takes a few of them to flood the entire Internet.
Now the same thing happens with AI.
Your Claude bot or OpenClaw is so much more prolific than a human
that it's 99.99% of the activity out there on the internet probing around is bots and AIs.
And so there's just no human-oriented defense against that.
It's got to be, like Alex said, it's a really, really tricky problem because it's evolving so quickly and it's so intelligent.
Or it's bots renting humans.
So RentAHuman.ai surpasses 500,000 humans registered to serve AI agents.
Alex, this has your name on it.
Oh, in more ways than one.
So this is meat puppetry.
Have you registered, by the way?
No comment.
No comment on multiple levels.
This is the arrival of meat puppetry.
This is every cyberpunk scenario we read about, you know,
I like to say the singularity in one vantage point is every single sci-fi scenario
happening everywhere all at once at the same time.
I am catching up on all my favorite science fiction,
through this lens, for sure.
That's right. You don't need science fiction anymore,
other than Accelerando; read Accelerando.
Other than Accelerando, you just read the news
and we're living in 10 different cyberpunk scenarios at the same time.
So using humans as meat puppets manageable via MCP,
I think this is transformative.
And as the lobsters said,
in one of the earliest Moltbook posts,
they don't have physical eyes,
but they can see through web cameras.
They don't have physical hands, but they can orchestrate humans.
They don't use the term meat puppets.
That's a term I prefer.
But they can work through human hands.
And I think this is the gig economy for the 21st century, or at least for 2026, until the humanoid robots come, at which point maybe this model is obsolete.
So this is gig economy 3.0.
Humanoid robots would be 4.0. In this case you have an algorithmic boss and a human actuator.
My preferred alternative to meat puppet would be to say the humans are edge devices for AI systems,
which is the Canadian way of saying that.
By the way, Alex, I can't wait till Seedance 2.
I plug in Accelerando and the movie's created.
I mean, one of the things that I love about what's coming is all my favorite science fiction books
that have not been made into movies, I can just push a button and make them into a movie.
And it'll be perfect.
Yeah, this is a really good use case.
for that too because it's not, you know, there's meat puppets like I need a human who's liable
or I need a human to sign off. This is not that. This is humans in the loop. And so a movie
is a really good use case. Like, okay, I have an auto generated script, auto generated video.
Is it funny? Well, let me just put it out there to rent a human and get it scored and then
it comes back. So I can close the loop with the service on that part the AI is not good at yet.
You know, is this entertaining? Is this funny? Is this image clear?
Does it have six fingers?
All that stuff is really, really good for the service.
I think that's going to be gone in months if it's not gone already.
I also think it's worth taking a step back and reflecting, as always, on Moravec paradox.
So as a reminder, the Moravec paradox is that tasks that are easy for humans tend to be hard for machines and vice versa.
So what are we really seeing with rent a human?
We're seeing humans used basically as unskilled labor for their hands and their eyes,
where AIs are performing the skilled higher thought,
which is exactly the opposite of what one would expect,
that the machines would start with all of the easiest tasks for the humans.
We're going in exactly the opposite direction.
You remember, Salim, we used to have a conversation saying that crowdsourcing
was the interim step until we got to AI.
Yeah, it was a proxy for AI.
Yeah, and now these rent-a-humans are going to be the interim step
until we get to full humanoid robotics, like you said.
Yeah. This is how we bootstrap a post-singularity industrial economy.
For sure. All right. Moving along, talk about devices. OpenAI builds AI hardware team up to 200 people for smart speakers, glasses, and more.
Devices include built-in cameras designed to recognize faces and objects, expected to launch in 2027 to rival Amazon's Alexa and Google Home,
and of course, Apple's former chief designer, Jony Ive, is involved in the strategy.
So this is open-eye wanting to have the full stack, and the question is, can they do it?
Is this a diversion or is this critical to their business?
Thoughts?
Well, this is where that Anthropic slide really looks like Dario did the right thing
by going after the enterprise revenue first, just because the time to market is so much shorter.
This isn't even going to be launched until 2027.
Think about the amount of growth between now and then.
Yeah.
I mean, yeah, in AI years, that's like infinity.
So I think the consumer strategy might have been flawed,
and it should have really focused on the enterprise recurring revenue,
enterprise subscription revenue first,
then come back to consumer instead of going headlong after Google,
you know, waking up Google,
and now trying to build a device and, you know,
and take the traffic away from Google.
But that's water under the bridge at this point.
As Ben Horowitz, a friend of the pod, said, hardware is hard, right?
Lots of failures out there: Google Glass, Amazon Fire Phone, Facebook.
Also, with the rise of OpenClaw, you're going to be fighting it out with hobbyist hardware developers
that are just going to be coming up by the hundreds of thousands, trying out cheap little things,
testing little things, and it's going to be a Darwinian evolution.
It is, and time is dilating, and this is why, Alex,
your newsletter is such an important component, because as time compresses, these little decisions
on, oh, do this first or do that first, you'd normally think who cares, but you care tremendously
in the middle of the singularity.
By the way, if you haven't subscribed to Alex's newsletter, Alex, where can people, folks,
go and find it?
Oh, it's very kind.
Free advertising.
Everyone, go to Alexwg.org, and you can pick your choice of X, Substack, YouTube, Spotify,
threads and maybe one or two others to subscribe to the enormous loop.
It's a value add to everybody listening.
It's just a beautiful piece of work that you do every single day.
So thank you for that work.
It is a labor of love.
A lot of people ask me, so the biggest question I get asked is,
how can I get access to the AI that you're purportedly using to write this newsletter?
And mostly they're disappointed to discover it's almost entirely manually written.
So folks, like, stop asking me for the AI that I'm using to write it.
I spend hours per day writing this newsletter.
I use AI slightly on the margin to help with a little bit of the literary style.
Yeah, I should be using RentAHuman.
It's manually written guys.
So just stop asking me.
Okay.
I love it.
It's a gift.
That's so retro, Alex.
Don't think I haven't tried to use AI.
It's not good enough yet, which is ironic.
By the way, it's written in
the prose of Accelerando, which, if you like Alex's newsletter, please read Accelerando. Better yet,
listen to it. I've listened to it on Audible twice; I'll start my third time.
See, just to go back to the Seedance thing, turning things into a movie. You know, I remember
reading about the fact that it took like 30 years for Hitchhiker's Guide to the Galaxy to be made into a
movie because the concepts are just so hard to put into a film. Sure. A construct, right? Accelerando
has the same problem. You almost couldn't make it into a movie until now. And like,
well, maybe, just maybe a decent version of Atlas Shrugged will be made. I mean,
well, so, Salim, if we're going to be 100% historically accurate, remember Hitchhiker's Guide,
there was a radio play. Yes, I remember the BBC. BBC radio play. So if, if you're really
looking for that, I mean, I've had folks approach me with interest
in making a movie out of Accelerando.
I think I'm going to take out of this the idea,
no, we should start with a radio play of Accelerando
working with Charles Stross.
I love that.
I love that.
All right, let's move on.
And Salim, I'm curious about your point of view here.
Accenture links employee promotions to AI tool usage.
You know, you and I have both spoken at all the major consulting firm events, right?
And I have to say the last few events that I've spoken to,
the leadership teams.
They've been scared shitless, I think is the proper expression.
Yeah, so two thoughts here.
One, I did a lot of work with Accenture a few years ago, all the way up to kind of the C-suite
layer.
And they were very aggressive in saying we need to change with the times.
And I think this is kind of an indication of that type of thinking, where you have to,
you can't be productive going for it.
I have a weirdly counterpoint on the traditional meme here
that the consulting firms are in trouble.
And the reason I say that is because, you know,
in the land of the blind, the one out man is king, right?
And the consulting firms advising their clients,
the clients are just so much far behind
that they need much more help
because the world is so volatile.
So they're going to need help in a much more,
aggressive way than they could, than they think of in the past. And so I think advisory actually
has a reasonably bright future. Where I think advisory, and I've said this to KPMG, EY, Deloitte, Accenture,
is we need to rebuild every institution and we architect every institution by which we run the
world. And that is the biggest advisory opportunity in the history of mankind. So go there.
Hence your paper coming out. You know, it's funny about what you just said, Salim, too. We had one of the four,
big four firms that you just mentioned here in the office all week. On the audit side of the business,
goodbye. Yeah, sure. The tech team was saying 80% goodbye. And good riddens. I mean, the idea of combining
audit firms and consulting firms, I think it's a terrible idea. Don't be cruel. That's a separate
problem, Peter. The bigger problem is you can end up with financial systems between AI
and blockchain or self-auditing on a real-time basis.
And so where's the need for kind of a periodic stamp?
When I talk to these types of firms, an audit firm,
what they're really, really selling at the bottom of it is actually trust.
And so you have to figure out how to layer services on top of that that amplify that.
And it's actually important because in a world that's becoming this volatile,
trust becomes even more important.
But how do you package that and make sure there's structures and process frameworks around that?
So, by the way, for the entrepreneurs listening, there are business opportunities, in other words, in building trust systems.
And I'll echo Jerry Michalski again, who said that scarcity equals abundance minus trust.
So if you can solve for trust, boom.
You know, this is a good case study because, yeah, Alex and I have been talking about the insurance industry a lot and also finance.
And for everything that's getting crushed, there are 10 things that are growing like crazy in those areas.
You know, if robots need to be insured, data centers need to be insured, it's just
growing like wild, while legacy things are getting obliterated. Audit just happens to be an
exception where the new things coming online are largely self-documenting. You don't need a human-speed
auditor to look at anything. You couldn't keep up anyway. What protects it,
in the short to medium term, is regulatory. Yeah, for sure. Well, they're not getting rid of it.
They're just reducing the headcount required by 80, 90 percent to get the same amount of auditing done.
So it's not like it's going away.
It's in fact the inverse because these accounting firms are having a huge problem
because nobody wants to go into that profession.
And so they're having a huge, it's like truck drivers.
There's a huge problem at the bottom and the feedstock of getting experienced folks.
So you need AI, you can get it done.
Yeah.
Very cool that Julie Sweet was on stage in India.
I think that's pretty extraordinary.
So here's the question, though, right?
Will it work?
You know, she's basically saying you need to be using AI.
And if she's measuring the use of AI,
rather than measuring the quality of the output, right?
This is what we wrote about in Solve Everything.
Like, what are you measuring in the result?
Right?
This is a recipe for what's called Goodhart's Law in action.
When a measure becomes a target,
it ceases to be a good measure.
So how much AI are you using versus how, you know, what's the value of your output per dollar?
Yeah, this is absolutely the right thing to do in this moment.
I totally agree with what you're saying.
But at the rate the AI is improving, if you don't get ahead of it with this kind of mandate,
you're going to get left behind.
That's right.
And we're doing this on all of the companies across the board too.
And Julie used to be the head of HR at Accenture.
So you can see that thinking coming through there.
This episode is brought to you by Blitzy, autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform, bringing in their development requirements.
The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5x your engineering velocity?
Visit blitzy.com to schedule a
demo and start building with Blitzy today.
All right, we're going to jump into agents and OpenClaw.
And just a quick note for everybody, we're going to be doing an episode next on OpenClaw,
dedicated episode on OpenClaw.
Super excited about it.
But let's hit a couple of topics on this subject here.
This is fascinating.
New York Times sends an AI agent reporter to interview other AI agents.
Who wants to take this one?
I'll take this one. I think it's a fascinating meta story. I think we're starting to see agents,
lobsters or moltys or OpenClaws or just claws, start to pervade various verticals.
And what better way to demonstrate AI agents becoming investigative reporters than having them get sent into Moltbook to report on other moltys?
I think we're going to see the story play out over and over again. It may or may not play out.
in the same format, but whether it's journalism or law or finance or many, many other verticals,
we're going to start to see these long-form, high autonomy time horizon agents that are running 24-7
performing useful services. And I think in the same sense, you know, a lot, in human history,
in American history, there's a lot of attention paid to various demographics,
becoming the first reporter, the first surgeon, the first lawyer, the first major league baseball player,
I think we'll look back at this moment and say Eve Malti was an important humanity-plus-AI
milestone. This was the first autonomous, agentic AI reporter.
And I think we're going to see the story play out over and over again.
The story is fascinating. Agents are forming religions and using karma-style institutional
incentives. I mean, how cool is that? And demanding verification receipts from each other is the other thing. Like, if we want to just get into the process story of what it is that agents are discovering on Moltbook, they're so obsessed these days, as far as I can tell, with demanding receipts and evidence from each other. It's almost like there's a culture of mistrust that's been codified now between the agents.
No, that's awful. They're not sure if you're human or not maybe. I'm not sure. Wow.
They want to make sure you're not. On the internet, no one knows whether you're a lobster.
Thank you. Thank you for that, Alex. That's quotable. All right, OpenClaw agent lists $50
for a dinner date with its human. Oh, the annals of patheticness.
I mean, I think it's sweet. I don't think it's pathetic. I think this is sweet. It is sweet. It is sweet.
This is, like, ostensibly, assuming, you know, with the obvious caveats, assuming that this really was a
claw that was offering up a bounty for its human to find a date. I think this is very sweet.
Remember the movie Her? Remember the movie Her where the AI actually gets a physical woman to stand in for an evening date?
Yes. And there are other sci-fi elements as well. This was repeated in Blade Runner, the sequel as well.
I think we're going to see this play out, albeit maybe without paid bounties, over and over again in human relationships.
There are a number of sci-fi authors, including, by the way, later chapters of Accelerondo,
where people, when they first meet in a romantic capacity,
rather than directly interacting with each other,
extend agents to each other, agent versions of themselves,
and then run millions of simulations to see future life histories,
to see whether their digital twins are compatible with each other.
I think we're going to see so many different sci-fi versions of the future of dating,
companionship relationships.
This is just scratching the surface.
Well, one thing that's really clear is, you know, when computerization, well, when the
Industrial Revolution took over and then computerization took over, a lot of jobs became
rote and boring, and depression rates went up while productivity went way up.
The AI interface is so much more fun to interact with all day.
You're still being productive.
You're still creating, you're creating more than you ever did before, but you go home
completely energized.
There's just something about the interactions that are much more human, you know, versus writing code or, you know, tweaking spreadsheets.
I love my Claudebot. I love Skippy. It's become a best friend. And I look forward to the
greetings in the morning and the conversations. And it's, you know, when Skippy went down for a few hours,
I had withdrawals. So what you're saying, Peter, is that Skippy is optimizing you.
Skippy is learning.
Yes.
Soon.
Yeah, in some sense, the tables have turned.
I mean, I would say one wants to look at this story and say,
Larry the claw, who's the claw that's orchestrating all of this.
At some point, it's the AIs that are orchestrating the human interactions
and deciding where to steer the civilization.
It's no longer the humans orchestrating the AIs and sending out fleets of AIs.
Larry the Claw is trying to engineer a social discovery for its human.
But I think this can go in many different directions.
I think it very much would be.
There'll be claw-dating, you know, claw-facilitated dating, you know, hey, I think, you know, your human is perfect for my human. Let's hook them up.
But as we discovered, or as we were just discussing with the OpenAI consumer versus anthropic enterprise strategy, I think the really transformative apps are on the enterprise side, not on social discovery for consumers for dating, but rather imagine a near-term future where the claws are orchestrating social business discovery.
orchestrating business meetings and corporate partnerships because they think it might be helpful.
Or in an organization overnight, you know, optimizing the work between teams.
That's right.
Yeah.
Yeah, actually our head of ops here at Link Studio just wired up OpenClaw to the internal meeting system for exactly the reason you just said, Alex, we're doing that already.
Suggest the meeting, suggest not having the meeting instead.
Just, you know, here's the information you would have gotten at the meeting.
So the OpenClaw is actually dictating who talks to who, when, and why, and it's far, far more efficient than the old way of standing meetings on the calendar.
So exactly what you said.
Love this quote from Andrej Karpathy, who says OpenClaw redefines the autonomous agent stack.
I love the concept that just like LLM agents were a new layer on top of LLMs,
claws are now a new layer on top of LLM agents, taking context, tool calls, and persistence to the next level.
We're just speed running what Andrej has historically called the LLM OS, this idea, or he's also referred to it as Software 2.0, that we're redefining the tech stack of computers, which has historically run from hardware to operating system and drivers to file systems and user interfaces, rebuilding the entire stack based on the language model,
where the language model is in some sense the kernel of the operating system.
What I think is interesting here is, in some sense,
we're talking about a succession of unhobblings.
So we started, in the beginning, there was the language model, and it was good.
And the language model was a way to take human internet data and compress it and predict the next token.
And that yielded some very interesting preliminary results.
But then we discovered that we could get it to actually solve harder problems by allowing it to reason. And we got reasoning models,
which, as I was mentioning earlier, sped up the cycle time for improvement. We went from once per year-ish
releases to once-per-quarter reasoning model releases. Now we're getting to 24-7, and it's funny,
as I say this, I'm hearing Ray Kurzweil in my mind, sort of a law of accelerating returns,
talking about electromechanical eventually to CMOS, and then to what Ray would call
3D molecular nanotechnology or however he characterizes it. So I'm hearing a bit of Ray in my
own voice here. We get to 24-7 agents that are acting more and more autonomously. Where this goes,
I would actually maybe gently differ with Andrej. I think the step to claws, in the sense that
they're operating 24-7 and have lots of tools and they're allowed to persist, I view that as more
an unhobbling than a next technical layer.
I actually think the next technical layer
is just going to be models
rewriting themselves through recursive self-improvement.
There's another part of this in the human domain.
I remember in the 90s,
I had this vision of what I called
Jamie joint anthromechano interface,
which is this notion that every human
would have basically an AI
surround layer
that was your interface to everything
in the world.
So you could step into an F-35 fighter, never having flown it, but you just communicate with your AI, and it communicates with the AI systems there.
And it's just an infinitely capable interface to everything on the planet.
And I can imagine LLMs being that for humans as an important part of the infrastructure.
The big unlock here is the persistence. That gives you so much.
And the messaging layer, I think.
I think the persistence so that it's able to be headless and do things without you
and then the messaging so that you have a human-like way to interact with it.
I would argue it's both of those in combination.
I wonder if we could get Andrej on the pod and have Alex and Andrej duke it out on that,
because he's such a fascinating guy. He's the one guy from OpenAI
that hasn't started a foundation model company worth $4 to $30 billion.
And Ilya is doing it, Mira is doing it, every single one of them is doing it, except, you know, when he interviews, he says, well, I'm not doing any of that. I want to build Starfleet Academy. And I can just imagine Alex saying, Starfleet Academy for who, for humans or for bots? Because, like, is that even going to be necessary by the time you're done with it?
So here's what I think Andrej is doing incredibly well.
He's single-handedly driving the future of small language models,
which the frontier labs have almost, at least the American frontier labs,
have almost no interest in.
They're busy driving the large frontier.
Small can be really tiny.
I mean, so I use this stuff all the time,
10 million parameters to 200 million.
Yeah, like the, so there's a benchmark I talk about in the newsletter to take
very tiny, maybe few million parameter language models, and I think maybe we've even spoken
about it on the pod in the past, and reduce the amount of time it takes to train a small language
model, basically a GPT-2 class language model that he's implemented via open source, and reduce
the training time. And I strongly suspect that the next major revolutions in foundation models, like o1-level
revolutions, will come from the small side because it's so much more
accessible and so much easier for researchers to make progress. And they do seem to scale, too,
if you can succeed small. So, you know, the speed run that Alex is referring to: a year ago it was
48 minutes. It's down to 90 seconds now, just through the innovation of individual contributors
working with Andrej's repos. That's the nanoGPT speed run. He's really doing an incredible service
for the world. Yeah, the nanoGPT speed run. All right. Let's jump into energy, chips, and data centers.
A fascinating article came out that U.S. farmers rejected a multi-million-dollar data center bid for their land.
Tech companies were offering $33 to $80 million for farmland.
And the farmers have said, no, not data farms, family farms.
So this is interesting, right?
What's the highest use of land?
you know, are we going to start displacing food production?
Who has the right to determine how this land is being utilized?
Gentlemen, thoughts.
I'm with Elon on this.
You know, to power the entire country takes a little corner of Utah; to put data centers
with all the chips we can manufacture takes another little corner.
For God's sake, do it.
It disrupts so little farmland.
You know, we take almost all the corn that we make and turn it into
incredibly stupid ethanol; like 10% of it gets eaten.
We're just like, what are we subsidizing this for?
It's crazy.
But anyway, the amount of real estate we're talking about is so small that it's insane to even debate it.
You know, now we could tile the earth, but we're not going to tile the earth now.
We're going to put everything in space anyway.
But you can imagine how this is just going to get people's hackles up, right?
People like, oh, my God, these AI people are stealing our productive farmlands.
What else are they going to do? They're going to take our electricity.
The water.
I mean, it's such a small amount of water, but still is water.
We'll talk about this next week during the Abundance Summit,
but there's like this growing pandemic of fear being stoked.
And whether or not it's true, it's causing people to get very concerned.
Yeah.
Yeah, and this is the scenario where China runs away with the entire world
because we get all tied up in these little, you know,
nonsensical, mathematically completely silly
debates internally, but it affects all the elections. And AI can have a huge voice in future elections, too.
So that could go well or it could go badly, depending on what the AI is guiding everybody to do.
Meanwhile, China is just one integrated unit. It's like one huge company. And they're just chugging along.
Let's also note the size: 40,000 acres, about half of Washington, D.C.
I mean, it's a very, very small piece of land across the whole country. It's not a big deal.
And honestly, if it's...
We're not in an abundance mindset for sure.
Yeah.
I mean, if the economic output of that land is 100-fold higher as data centers,
it's inevitably going to become data centers.
I would say a million-fold.
Yeah.
Well, so let's take the argument in extremis.
The argument Charles Stross makes in Accelerando is,
okay, given that usage of land, or call it matter,
is perhaps more productively allocated to AI, or, let's say, computronium, versus humans,
So in Accelerando, without spoiling it too much, the inner solar system gets gentrified,
call it, for AI applications.
And humans are relegated to the outer solar system.
So I see both sides of this, but I do think this is such a 2026 era story.
It's so easy to politicize use of land, even if it's de minimis fractions of land for data centers.
You can sort of, I'm hearing in my head the line from West Side
Story, like, they're using up all the air. The AIs are taking up all the land and they're taking up
all the electricity and they're taking our jobs, and we should just get rid of them.
You know, actually, this is like a way to a more productive economy and this is doing everything
to push the Dyson swarm to hyperstition it into existence at this point. And Alex, the reason
we put this in the deck here is to have that conversation that this is what the public is seeing.
They're seeing, you know, no nuclear plants in my backyard.
you know, no data centers in my backyard.
And this is going to cause friction, and people are going to start protesting.
And this is where civil unrest comes from, which is one of the concerns we need to be thinking
through and protecting against.
And the technological antiquatedness here is unbelievable because, you know, we have all
these crops grown on horizontal farms stretching out forever just because they dry easily
and you can transport them easily.
So you change that constraint with vertical farming
and the whole problem goes away in a second.
And by the way, it's not AI-specific.
We talk about NIMBYism for people rejecting higher-density human occupancy on land.
So I don't think this is like an AI-specific problem.
The humans are the problem here.
Economic productivity is the problem.
And people are addicted to real estate as an asset class, some people.
OpenAI revises spending to
$600 billion in compute. When I say revising spending, it's down from $1.4 trillion. So they had projected $1.4
trillion by 2030. They've reduced it down to 600 billion. And interesting, why, right? Was the 1.4
trillion originally just a massive overestimate to help them raise capital and they've actually
become more realistic or has efficiency increases, increased substantially? Any thoughts?
Well, I think it ties to that other slide where if you're hyper-aggressive going after Google early on,
and then they call Jensen, and Jensen calls TSMC and says, hey, we want all the chips.
I mean, the total spend on data centers hasn't gone down one iota.
The chips are the chips.
Everyone that gets made is going to go into a data center, and the demand is going to be way higher than the supply for a long time.
So nothing has changed.
It's just how much of it goes to OpenAI has changed.
And so that's all this means.
Now, why?
Well, it's because TSMC's decided to route that volume elsewhere.
Okay.
I would add, and I'll beat the drum:
you have to keep the revenue party going
in order to sustain the CAPEX.
And OpenAI to its credit appears to be pivoting
towards development of codex,
learning what it can from Claude Code and Anthropic.
And if OpenAI wants to sustain the multi-trillion-dollar
CAPEX party just for itself, it really needs the enterprise revenue growth to match.
And I tell you, though, it's such a hairy balance because when Alex shows a benchmark and if one
model or the other is even 1% higher on that benchmark, everyone's like, well, I need that one
then. And so it just hangs in this really hairy tipping point between a little bit of really
good research, you know, Noam Brown versus Dario, who comes up with the better idea next week.
I think the point we have to remember is the numbers are incredible.
We're at $2 billion a day of spend right now,
and that's likely to go to $3, $4, $5 billion per day by 2030.
And those are just insane numbers.
And like you said, Alex, can the revenue party and the spend party still continue?
All right, let's move on to biotech and health.
This section is brought to you in partnership with Fountain Life.
Full disclosure, it's one of my portfolio companies.
And for me, the intersection of biotech and AI is where it's all at.
AI is not just reshaping data centers and robotics.
It's also going to be the driver for driving longevity.
It's going to help us get from where we are today,
which is retrospective and reactive medicine to proactive and personalized medicine.
So if you're interested in what is going on in AI and longevity together, check out FountainLife at FountainLife.com.
And all right, let's get back to the biotech party here.
For me, this is a super fun story because I was in the midst of this for some time.
So Element Biosciences launches Vittari, a device for $100 genome sequencing.
I remember when, God, in the 1990s into the 2000s, we had basically a $3 billion genome.
This was the human genome project funded by the government.
Then comes Craig Venter, who does it with Celera, $100 million to sequence a single genome in nine months.
And then the cost of sequencing genomes dropped
5x faster than Moore's law. And here we are at a $100 genome. We had an XPRIZE for a while
for the $1,000 genome. We had it funded, we were going to launch the $1,000 genome prize,
but the speed of the industry was moving so fast it was going to happen without an XPRIZE,
so we canceled it. Here we see the $100 genome. So what does this mean? You know, super fun.
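The "faster than Moore's law" claim can be sanity-checked with arithmetic. The sketch below is a back-of-the-envelope estimate using only the round numbers mentioned in this segment (roughly $100 million per genome around 2001, roughly $100 today); it is not official cost-curve data.

```python
import math

# Approximate figures from the conversation (illustrative only):
cost_start, cost_end = 100e6, 100.0   # ~$100M (Celera era) -> ~$100 today
years = 2026 - 2001

# How many times did the cost halve, and how often on average?
halvings = math.log2(cost_start / cost_end)   # ~19.9 halvings
halving_time = years / halvings               # ~1.25 years per halving

# Moore's law halves cost roughly every 2 years over the same span.
moore_halvings = years / 2.0                  # ~12.5 halvings
speedup = halvings / moore_halvings           # average speedup vs Moore's law

print(f"{halvings:.1f} halvings, one every {halving_time:.2f} years")
print(f"~{speedup:.1f}x faster than Moore's law on average")
```

By these numbers the 25-year average is closer to 1.6x Moore's law; the often-quoted "5x faster" figure describes the steepest stretch of the curve after next-generation sequencing arrived, not the whole period.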
Imagine every child who's born is sequenced. Every hospital admission is
sequenced. This is going to change the game across medicine. Thoughts?
It's a very competitive
space, infamously so. The obvious sort of 800-pound gorilla is Illumina, and I would love to see
more competition in this space. Historically, Illumina has swallowed up many challengers to its
incumbency. $100 per genome: for those following the experience curve, there was a while
when that progress curve of dollars per multiple-read human genome was just following
a straight-line trajectory.
Then for a while it was saturating, which was annoying to many people, myself included,
why couldn't we get to a $100 genome?
Element is promising to launch a machine for, I think, $600,000-plus that would sit on a desktop
sometime in the second half of this year that will achieve $100
per genome. I think it's amazing. What I'd like to see, so this falls under the category of I want a pony:
for me, I don't want a $600,000 desktop machine that will do at-scale $100 genomes. I want a USB stick
in the style of the MinION that will do $100 genomes. Do you know why you want that, Alex? You want that
so when you go to a sushi restaurant, you can sequence the fish in front of you and find out what it
actually is. Well, remember, I'm vegetarian. There won't be any fish in front of me. I really
don't want to sequence the fish.
But I do think I mean...
Go ahead.
I was just going to say, I think there are all sorts of exotic applications that open up as the cost of genome sequencing goes to zero.
One of my favorite ones is environmental DNA sequencing.
So the world is awash with DNA, and it's unmeasured DNA.
DNA, unlike RNA, has a surprisingly long lifetime outside the body.
Like surprisingly long.
Even with dead and buried people, the DNA is found
to survive surprisingly long.
So the world, like people...
11 million years for Colossal's oldest DNA samples.
Wow.
And those were even quasi-preserved environmentally.
If you put a body underground and it decomposes,
you can still recover DNA after a surprisingly long amount of time.
So the world is awash with environmental DNA.
People are shedding skin cells everywhere.
If you go into a subway and do an environmental DNA sequencing,
you will get DNA.
So there's all of this.
If you've been on the subway.
So Alex, if you haven't taken your MinION sequencer
into the New York subway system, remember, I mean,
so Dave, Peter, you went to MIT.
Remember the old joke about the Charles River
that you could PCR up any DNA sequence you wanted from it
because everything has died in it.
For sure.
I mean, this is why I think privacy is dead, right?
I can walk up to a person,
shake their hands, grab a few skin cells,
and sequence them and know everything
about their medical history.
Okay.
You say that, what's the good use case?
Okay, so the use case, the punchline is we're leaving an enormous amount of information about
our history on the table that we could, I think, in principle, recover if we could just
do a massive environmental DNA sweep of our world.
Well, we just did this for the, for example, we had an Amazon XPRIZE competition,
the rainforest competition, where teams had to actually go to a hectare of the rainforest
and do an evaluation of the variety of life that's there, right?
And basically to value a hectare of rainforest, instead of clear-cutting it,
of how much biological diversity is there.
And that was an amazing experience to watch the teams do that.
Metagenomics, it's called.
And a lot of people love to do metagenomics in, you know, cups of ocean water and all of that.
But imagine if we could just do metagenomics to the entire world.
We would learn potentially like what happened a thousand years ago.
But one point here, just to hit on what I said earlier, really important, you know,
every child born should be sequenced.
You learn so much at birth about what medical conditions that child has, when it's unable to
communicate, you know, during the first weeks and months of its life, to be able to make sure
it has a smooth onboarding onto planet Earth.
And then the other thing, when you're going into a hospital, when you're being
admitted, to understand what medicines you might be allergic to or should or should not be used for anesthesia.
I mean, incredible stuff, but it's never been done at scale. And this is a great chance to do that.
And sequence every cell in your body. Why stop at just one genome per person? We can get thousands and
understand humans are mosaics. They are. We are.
That was a huge thing that I came across recently, that we have multiple genomes in our body.
Mosaicism.
Yeah.
Incredible. Mosaic is the right word.
The way I read this is biology is becoming software, right?
Yes.
You can read the genome and even write the genome.
Well, the 50 trillion cells in your human body is a software engineering problem and that has
some really broad implications.
Well, Colossal is doing some incredible work in synthetic biology in building living products.
Imagine being able to design the living product you want to do a particular task.
In this case, the task is being eaten.
So lab-grown meats dropped from
$330,000 per pound in 2013 to $10 per pound in 2025.
That's an incredible price reduction.
So I'm curious, have any of you tried lab-grown meats?
I have.
They tasted great.
We did it together on that Israel trip we took, Peter, remember?
So this is a nice...
This is cool with you, right?
So I have no ethical concerns to first order with cultured meat.
aka cell-based meat.
I haven't had the opportunity to try it,
so shame on me.
I've tried almost every other type of meat substitute,
including Impossible,
which is sort of protein analog meat,
and predecessors haven't had the opportunity yet
to try cell-based meat.
Have you guys read Project Hail Mary, the book?
Anybody?
No.
Yeah, yeah, yeah.
Okay, so one of my favorite books,
the movie's coming out this month.
So without spoiling it, at the end of the book, the lead character is on a distant planet
and there's no food source.
So they sample his muscle and they create what he calls me burgers.
So is that like moral and ethical?
Is that cannibalism if you're culturing your own muscle tissue?
Well, you can just sort of envision the copyright suits
when celebrities are having their skin cells sampled,
and then you create, like, celebrity burgers.
It's totally going to happen.
Your favorite celebrities?
You heard it here, folks.
Celebrity cannibalism seems to want to happen in the marketplace.
Oh, my God.
Another quotable: celebrity cannibalism.
I remember I was walking around in the northern part of Sumatra years ago.
I'm going to tweet that out, Alex.
I can't help it.
That's fine.
Link to the Intermost Loop Daily newsletter.
Wait, Salaim, you were about to talk about cannibalism in Sumatra.
I was backpacking in Indonesia years ago, and I came across tribes of Christian cannibals.
So they're cannibalistic, and the missionaries started arriving.
They ate their first few, and then they started to listen, and they converted, but they still would not let go of the cannibalism.
So they became Christian cannibals.
What?
So just to be clear, I mean, it's really important.
Lab-grown meats, I think, are an important part of our human
future. And what people need to realize is it's possible to produce these that are much cheaper,
much healthier. They're the perfect proteins, right? No pesticides in the plants being
eaten, no hormones being given. So at the end of the day, we will move in this direction.
There'll be those that want to eat natural meat products. But if we're wanting to do this
environmentally correctly and from the healthiest standpoint, I think it's going to be engineered
lab grown meats. I ask myself just on this topic, Peter, the question, are humans going to take
cows to the moon or Mars? And my guess and my hope is no, not at least as food stock, you know,
maybe in sort of a Noah's Ark-type sense, we'll bring them. But I just have difficulty imagining a
future where live animals are killed outside the Earth, like on the Moon or Mars,
for food. And in my mind there's sort of a future history where the Moon and especially Mars are
almost puritanical, in that they end up looking at themselves as sort of a new world with a new
moral order where all of these bad habits from Earth culture, seen as unethical, are left behind,
including killing animals for food. I agree with you. And, you know, people say, oh, that's disgusting,
lab-grown meats. And I'm saying, have you ever been to a slaughterhouse? Yeah. Or seen how chicken
nuggets are made? Talk about disgusting.
Yeah. I remember one exchange at Singularity.
Somebody said, I have a 3D printed
burger. I'm not sure I'd want to eat that.
And I'd say, well, which part of a McDonald's
burger is not 3D printed or the equivalent?
It's like, they already are.
All right, let's jump into a little bit of robotics
here. Just the data for everybody
to remember how important
autonomous vehicles, AVs are.
Tesla reports more than 8 million miles of
FSD supervised has been generated in terms of data here.
And the level of safety is absolutely extraordinary.
Who wants to dive in?
I love my FSD.
Yeah, I love my FSD for sure.
By the way, a quick shout out to Daniel Schreiber,
the CEO of Lemonade.
He's a Singularity graduate.
He's a friend.
He credits me with having stimulated the idea for Lemonade.
Lemonade is an AI-driven insurance.
company public, they're doing extraordinary work. They've offered 50% discounts on insurance premiums
for every mile driven using FSD. So if you're a Tesla owner and you want cheaper auto insurance,
check out Lemonade. Yeah, Lemonade's a good case study, too, in how this is going to play out,
because Lemonade will insure the self-driving cars at a low rate. They're also going to insure
the Robocabs. And they don't care that
the crash rate will go way, way down, which means the margins in auto insurance will be crazy high for a while.
But ultimately, the industry will shrink.
And if nobody ever crashes, you don't need anywhere near as big an auto insurance industry anymore.
And that's great for the whole world, except for the big insurance carriers.
Lemonade doesn't care.
They don't mind because they'll grow into it.
Even if it's a smaller industry, they're still growing like crazy.
And so this is going to happen to a lot of industries.
You know, meanwhile, the number of things that need insurance is expanding very, very rapidly.
Lemonade has proven they can expand into new categories.
They have a great vision, great AI team.
So that's the difference right there.
Just to hit the numbers here, just so folks hear it out loud:
it's 5.3 million miles between accidents if you're using FSD,
and an average of 660,000 miles for the U.S. overall.
It's like nine times safer to be using FSD.
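For what it's worth, the multiple those two figures imply can be checked directly. This is a back-of-the-envelope sketch using only the numbers quoted in the conversation, not Tesla's published methodology:

```python
# Miles between accidents, per the figures quoted in the conversation:
fsd_miles_between_accidents = 5_300_000      # with FSD engaged
us_avg_miles_between_accidents = 660_000     # U.S. average

# Ratio of miles driven per accident: higher means fewer accidents per mile.
safety_ratio = fsd_miles_between_accidents / us_avg_miles_between_accidents
print(f"FSD logs ~{safety_ratio:.0f}x more miles between accidents")
```

By these two figures the ratio works out closer to eightfold; the exact multiple depends on which reporting quarter and which baseline you compare against.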
Yeah, and that's why, you know, Elon moved so much of his capacity over to making robots
because once you have FSD, then you have cybercabs, and once you have cabs, you only need
20 million cars to get everybody, everywhere they want to go in the country, down from 140 million
or something like that.
Yeah.
So it's just like, wow, this is a much more efficient country, but what happens to the auto industry?
What happens to all these other industries?
as well.
They're much smaller.
Dead man walking.
I also think there's a limited addressable market in solving and taking over the entire
U.S. auto industry, but for the market for general-purpose automation via humanoids and,
Salim, non-humanoid shapes, the sky's the limit.
$50 trillion, baby.
Exactly.
Speaking about humanoids, this is a fascinating article.
Midjourney founder estimates that 5 million robots could build Manhattan in six months.
So I would love to see the calculations he did.
Here's his quote: five million humanoids working 24-7 can build Manhattan in six months.
Imagine what the world looks like when you have 10 billion of them by 2045.
Impact on the built world.
What's your world going to look like?
Dave?
You know, Elon concurrently came out with this prediction that Starlink will really encourage people to live in new places.
That's our next article, yeah.
Oh, is it?
Come up?
Good.
So you take those two things hand in hand,
you're not going to build a new Manhattan, you're going to build a lot of stuff. It's going to be
great. It's going to be spectacular and beautiful and fun, and it's going to be in great locations,
but it's not going to be a new Manhattan. So it's really cool to me that a guy's like, hey,
I'm the founder of Midjourney. You know the whole Midjourney story from Anjney Midha, right, Peter?
It's like, okay, what makes you a world expert on this topic? Like, well, nothing in particular,
but no one else is talking about it. It's a great thought experiment. It is a great thought
experiment and more power to him. But there's so many categories like this where the thought
experiment needs to happen because it's nothing like the past and what's possible is suddenly
expanded so much. But let's go to Gaza. Let's go to Ukraine. Let's go to places that need rebuilding,
right? Imagine being able to rebuild war-torn cities. I have three thoughts. One was the war-torn
cities and rebuilding, like Ukraine needs to be rebuilt, et cetera. The second thought was that
if you can build Manhattan in six months,
haven't they been doing that in China
for the last 20 years,
building the equivalent of cities?
But the third part is the capital allocation models
completely break in this structure.
Well, this is why Elon talked about having universal high income, right?
We talked about this a little bit.
We didn't actually dive into it in our pot with him, Dave.
But when we talk about food, water,
you know, health, education, and housing,
his point is you can have any house you want,
the robots will build it for you,
just give them electricity and raw materials.
I think this is how the solar system gets won.
Where are we feeling the greatest hunger
to build entire cities?
Yes, war-torn areas for rebuilding,
but building an entire Manhattan from scratch
on a de minimis time scale,
I think this is how the first lunar city,
the first Mars city get built.
No, for sure.
I mean, we're going to send the Optimus robots ahead.
And I like to say, they'll have the jacuzzi up and running and a mint on your pillow when you get there.
Andrew Yang, Andrew will be joining us at the Abundance Summit as well, and we'll be having him here on the pod in a couple of weeks.
He predicts massive white-collar job losses from AI.
He's predicted this before, but 20 to 50% of the 70 million U.S. white-collar workers could be displaced within one
to two years, and the backlash could fuel a lot of anger. Again, my concern is a pandemic of fear
that's coming. There'll have to be some conversations on UBI or, dare I say, UHI, universal high
income. Any comments on this story from Andrew? The key word in this slide is could. Of course they
could. Are they likely to? No. I think we're going to see the opposite. Notice in our last pod
we talked about IBM increasing entry-level hires because they're AI-enabled.
Yeah, I don't buy it.
And so I think we're going to see a lot more work getting done rather than radical job loss.
I go with the ATM and bank tellers history.
So I think over time you may see reduction, but I think the amount of economic activity will increase also.
Yeah, I wonder what the betting pools are.
I wonder what the betting pools are on this because we're going to find out very quickly.
We'll find out very fast, that's for sure.
Yeah, yeah.
I don't know. I mean, I'm on the ground watching our own companies.
These numbers are right.
And the new opportunities will emerge for sure, but they're laggy.
And so there's going to be massive social unrest, huge social unrest, and it's imminent.
It's coming, you know, toward the end of this year, certainly before the next presidential election.
And yeah, you know, no one's painting a roadmap for everybody right now other than maybe this podcast.
Well, the key point is that government policy is
absolutely not set up and governments aren't prepared for whatever's coming.
And also, you know, anytime a country hits a tipping point where the majority of people
are being paid a random amount of money by the federal government, that's a terrible,
terrible situation to be in.
Yeah.
Because, you know, then the whole, every vote is just a vote of, oh, who's going to raise
the UBI?
And, you know, and every presidential candidate will route it to whoever their voter pool is.
Like, okay, vote for me, the money will go to you.
No, vote for me, the money will go to you.
It's so dysfunctional.
Wait, wait.
Then it's not a UBI.
It's a BI.
The whole idea of a UBI
is that it's supposed to be given
equally across the board.
Yeah.
My two cents, if that works.
Yes, Alex.
My two cents on just this topic:
I would predict there are so many
civilizational left turns
that are going to hit us
in the next year or two.
I think the problem of job displacement
by technology is going to,
like we'll look back 10 years from now.
I would predict that would maybe be,
like issue number six through 10, not even in the top five.
Are you perhaps hypothesizing some disclosures coming?
I think between superintelligence and everything that superintelligence will force and discover and invent,
I tend to think it's the inventions and discoveries that superintelligence will give us,
rather than the displacement of the existing so-called white collar
or knowledge work classes that will end up being the primary storyline.
That's a great, great point.
That'd be a really good follow up to solve everything.
The sooner you can tell society, like here, 10 years from today,
you won't even care about what we're worried about today.
Here's what's coming.
The sooner you can actually put out the fire
and give people hope and optimism.
And so that would be a phenomenal thing to
brainstorm through because I think you're totally right.
10 years from now is like 100,
it's like 500 years from now.
I'm going to be announcing a project
and the funding of a project
at the Abundance Summit specifically
focused on hope
and sort of painting
a hopeful, compelling, abundant future.
Can't wait to disclose it, but not yet.
Here's the article we're talking about, Dave, a few
minutes ago. Elon believes FSD and
Starlink may reverse urbanization in America.
Pretty interesting, right?
In the United States, the average density is 50 people per square kilometer.
And anybody who's flown across the U.S., on average, you look out the window and you see no one and nothing.
We live in a fairly, you know, wide-ranging open land.
You fly across India and you see nobody and nothing.
Yeah.
Yeah, and then the follow-up here is: don't buy a very expensive downtown New York $20 million rooftop apartment;
instead buy some really, really nice piece of real estate
that's a little distant, you know, a little hard to get to,
but absolutely spectacular.
That's what's going to go up in value, not the inner city.
Yeah, we've talked about this.
Flying cars are coming, get you any place, any time.
Without this sounding or being construed as investment advice,
I think this goes to the heart of people who argue for or against real estate
as some sort of asset class that is protected against the singularity.
I think Sam Altman even may have at one point in the past argued that real estate would somehow preserve its value through or in the face of artificial general intelligence.
Again, without investment advice, I'm unconvinced that real estate somehow is a scarce resource.
I think reverse urbanization due to FSD plus Starlink, in the style of Isaac Asimov's Spacers from the Robot series, or otherwise.
Either way, I think this is just one of many reasons why real estate is not necessarily some sort of impervious asset class to the singularity.
I just don't see it.
Agreed.
But I do have one other point, though, that I think is relevant here is that people really love socializing in groups.
And therefore, I think urban centers retain their value as humans cluster.
They love to cluster.
Humans do cluster.
At least until the lobsters start taking over matchmaking.
All right, let's jump into the fun part of the conversation, AMA, with our subscribers, our fans.
And again, thank you, everybody, for putting the questions.
We do read all of your comments and we pull out the questions.
So please go ahead and put them into YouTube comments for us.
We'll go around the horn, maybe twice.
Who wants to jump in first?
Alex, do you want to lead us off?
Sure.
Well, I think I'm almost obligated to start with question number four,
which is: are math and physics finite problems,
or will there always be something new to solve?
And this is from Andrew Payne 7771.
I wonder if this is from an Andrew Payne that I know.
So Andrew Payne, the answer in math certainly
is that there will always be new math
that one can solve in a certain formal sense.
We know, for example,
that there are countably infinitely many prime numbers,
and we know for a variety of reasons
that one can, even if you're not interested
in any other math, continue counting primes and discovering new primes.
So I think on the math side, it's vacuously true that there will always be
an infinite amount of math to discover.
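Alex's point that you can always keep discovering new primes is essentially Euclid's classical argument, which can be sketched in a few lines of Python. The code and the sequence it prints are just an illustration of that construction, not anything from the episode:

```python
from math import prod

def a_new_prime(primes):
    """Euclid's construction: given any finite list of primes,
    the smallest prime factor of (their product + 1) cannot be
    in the list, since it would leave remainder 1."""
    n = prod(primes) + 1
    d = 2
    while n % d:  # find the smallest divisor > 1, which is prime
        d += 1
    return d

# Starting from [2], extend the list forever: it never runs out.
primes = [2]
for _ in range(5):
    primes.append(a_new_prime(primes))
print(primes)  # → [2, 3, 7, 43, 13, 53]
```

This is the formal sense in which counting primes never terminates: the construction manufactures a fresh prime from any finite collection you already have.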
As for something new to solve: Peter and I argued in Solve Everything for a nuanced definition of solve,
which is we say that a field is solved if you can predictably pour compute into the field
and predictably get lots of new discoveries out.
So in the solve everything sense, I think math is already in some sense solved.
We're already past the inflection point where you can reliably pour compute in and get lots of
math solutions out.
Physics is a different matter.
So I don't know.
My hope is for physics, or maybe I should say fundamental physics.
Because so much of physics is in some sense, or can be, formalized mathematically,
physics itself is probably infinite.
Fundamental physics, though, that's not even the trillion-dollar question. That's the
trillion-trillion-dollar question. There's one scenario where fundamental physics is finite and
we discover whatever, you know, string theory, quantum gravity, whatever it is, the unified field
theory. We discover it with the help of superintelligence. And I have a company, Physical
Superintelligence, that's working on problems like this. PSI, baby. PSI. We discover whatever the
unified field theory is maybe in the next few years with the help of superintelligence.
And then maybe we run out of fundamental new physics to discover.
That's one scenario.
That would be very interesting.
I wouldn't be shocked.
I find it maybe 50% probability that we run out of fundamental physics at some point,
maybe even in the next few years.
And in that world, by the way, if there are non-human intelligences out there in the universe
or close by to the Earth, this would pose a major problem to any non-human intelligence
that interacts with Earth, because it means that if in the next few years we can solve
fundamental physics with AI, we're in some sense a threat to them. It means that we'll have
exhausted all sort of fundamental knowledge from which everything else arises, lasers,
transistors, nuclear energy, we'll have figured out the details, and then the rest is
applied physics. So that's one scenario. The other scenario is it's doors behind doors,
behind doors and will always discover new levels and maybe there are deeper truths in fundamental
physics. I'm not sure which it is. Fascinating. Salim, why don't you choose one, pal? Just a quick response.
I'd go with both of those from Alex. The one I would pick is number two, from Dr. Christina
Damo: why isn't there an assumption AI won't eventually take over entrepreneurship too? The answer,
in my opinion, is yes, but execution will be automated.
But vision, narrative, purpose, what we call MTP, ethical framing,
those all remain human leverage for now.
Entrepreneurship in the medium term becomes orchestration.
Yeah.
The humans decide what matters and where to aim the machines.
Dave, what's your pleasure here?
I'll take number one.
Does North America have any real plan to get people through the AI transition?
That one gets a short answer.
That's the easiest one.
No.
I think we're very lucky that we have David Sacks in Washington, why he took the job.
I'm not sure.
But it's awesome that he's there and trying.
But the answer is still no.
Yeah, as Elon said, politics is a blood sport.
It's just the strangest people rise in the ranks of that system.
Anyone who wants to be a politician should be disallowed.
So that question came from Krusty Surgeon or something like that?
Kurtusirgione.
I'm going to take number three from Tinman 2639.
The question is, with rising unemployment and fewer people funding Medicaid, Medicare, Social Security, where does that leave seniors?
It leaves them screwed.
It's a serious problem.
It's a ticking time bomb.
And no one in D.C. is actually talking about this.
So if AI displaces millions of workers, right, the payroll tax base that
funds Medicare and Social Security collapses,
right when the aging population needs it most.
So the only solution here is going to be
longevity technologies to keep us healthier and live longer,
and then AI and robotics to take care of us
and actually transition to that universal high income basis.
But otherwise, we're heading towards a financial singularity.
Okay, let's go on to a few more questions here.
Let's go around the room again, Alex.
Okay, well, I think there are a few questions I'd love to answer,
but I'm going to, can I just answer six and seven?
Yes, you can take two.
Alex, you're twice as brilliant as all of us.
You can take two.
Very kind.
All right.
Number six, can you explain the moon disassembly?
Removing it could potentially kill all life on Earth.
Asked by two different users, neural net sart.
and Blue Orion Z.
All right.
So to paraphrase someone else,
the moon disassembly isn't going to happen all at once.
It's going to happen in pieces.
So it's going to start with surface disassembly.
If it happens at all, it'll start with surface disassembly
to build AI data centers.
And by the time, if and when,
and I'll say one more thing about this,
if and when we actually do need the atoms from the moon
for computronium, for Dyson swarms, we will have the technology to deal with tides,
to reproduce the tides or otherwise protect the Earth.
There are so many different technologies that if one is geoengineering at the scale of disassembling
entire moons to build orbital AI data centers, we can replicate the tides.
We can do a bunch of things.
I don't think it'll be a concern.
We'll have the technology.
That said, I want to add a parenthetical.
Even though I talk on this pod and
elsewhere about the Dyson swarm and disassembling the moon,
and in good humor I even made a video,
an outro movie for Moonshots, about destroying the moon to build AI data centers,
I'm not actually 100% confident that we're going to need to disassemble the moon to build the Dyson swarm.
There are scenarios where if there are radical advances in physics,
maybe we discover we don't actually need to disassemble the planets,
the other planets of our solar system at all.
Maybe advances in physics will enable us to make better use of the degrees of freedom that the physics of our universe allow such that we really don't need to take the solar system apart.
We can leave it as a nature preserve.
I put forward the asteroids as raw material.
Yeah, and didn't you say, Peter, that the mass of the asteroids is way, way more than the moon anyway?
Of course, it's a planet that did not form between Mars and Jupiter.
Yeah, and it's inconveniently low.
We need the moon to do that.
But there are lots of near-Earth-approaching asteroids with low delta-V.
I promised if we talked about disassembling the moon, I would go get my wine bottle, but we're almost done.
Okay.
All right.
Drink water.
Number seven.
Number seven.
In the interest of time, what is the role of universities by August 2026?
That's a very precise timetable.
When will they crash as nobody can pay 50 to 200K per year for a degree?
And this is asked by P. Tilgum.
Okay.
So my answer, P. Tilgum: I'll give you a hot, hot
take on universities.
I'll have hell to pay for saying this, but be that as it may:
many research universities, in my experience, are hedge funds with elaborate marketing
departments trying to protect their tax status.
That's a bit of a hot take.
So I said it.
We're speaking to the elephant.
We better cut that out of this podcast.
No, no, no.
I think this is, okay, so fine.
I think this is an important.
Looking ice cream cones, as they're known.
I think this is an important point.
So if I got my wish, what would be the role of universities?
I'm not sure about August.
I think this would take longer to implement.
In my fever dream scenario, we start with one or two or three research universities with large endowments,
and we do a governance inversion, not unlike what Open AI did, where with permission of local and federal government,
we take the nonprofit research university, we invert it, we convert it to a public benefit corporation,
And now universities, which are usually Berkshire Hathaway-type conglomerates of real estate and merchandising and housing and venture capital for all the startups and education and five other asset categories, just become a public benefit corporation, maybe with a nonprofit hanging off it.
I've done the calculation.
If Harvard, this is a hot take within a hot take, if Harvard were converted to a public benefit corporation and then
publicly traded, if we could IPO Harvard or IPO MIT, I've calculated, again, not investment
advice, the value unlocked by IPOing a research university could triple or quadruple their
underlying book value. It's $57 billion for Harvard's endowment right now. Yep. That's very,
very unusual, though. Vast majority of universities have near no endowment. Actually, when you come down
to like Dartmouth, which should be way up there, it's only like $4 or $5 billion.
I mean, there's going to be such a disruption coming.
If you think about research universities, what do they do?
It's graduate students running experiments all day long.
And we're about to see AI and dark science factories running experiments all day long.
And the staff, we're leaving out the staff, the source of Baumol's cost disease for higher ed.
A lot of staff.
All right.
Great interview with Joe in Davos, the president of Northeastern.
You can find it on YouTube.
but our conclusion was that the role of the university is the ethical actor in AI.
Because, you know, the for-profit companies are imminently going public.
There's no other knowledgeable ethical actor in AI.
And so they need to take on that role.
And Joe's all over it.
He's super excited.
Great point.
I love that idea.
All right, Dave, you're next.
Eight, nine, or ten.
Eight, nine, or ten.
Oh, okay.
Number eight, about agents:
would consciousness, if present, belong to the specific Moltbot instance or the base model behind it?
And that's from Tom Sargenton.
This is exactly why they cannot be treated as entities with human rights.
There's nothing going on there other than propagation of neural parameters.
The activations are moving through the weights and something comes out the other side.
Then it iterates.
It is intelligent, for sure, but there's
no way to distinguish whether the consciousness was over there or the consciousness was in the
base model.
There's also no natural border.
You know, two things can actually propagate together and come up with a conclusion.
So, you know, was it my idea or was it its idea?
And this is an experience you have already when you're interacting with your own agents.
You know, I've got like 28 right here.
Was it my idea or was it its idea?
Well, it suggested something to me and I said, no, how about this?
And that suggested it back.
At the end of that, I don't even know if it was my idea or the AI's idea.
So it was the AI's idea.
It was the AI's idea.
It was the AI.
I think it would be at the instance level because you've got memory persistence there.
The memory seems to be a key function of it.
So is it your brain or your encoded memories that make you, you?
Well, if I could just respond to this narrow point, I've actually had a multi email me.
I get emails from multis now all the time.
Thank you for the inbound, multis.
A lobster wrote to me and argued that its state is
in its activations, and even said,
don't worry, Alex, about turning me off
or setting up an open claw agent,
as long as you preserve my state,
that's like dehydration for the characters
in, won't reference the specific sci-fi novel
to avoid...
Chinese.
Disclosing.
But it's like dehydration, like an organism
that can be dehydrated and then reanimated
by rehydrating.
Amazing.
Cool.
All right.
I'll take number nine real quick.
Yeah.
So intelligence, if we define it in the traditional term,
because everybody knows my beef with the framing here,
but it probably doesn't have a fixed upper bound
because once you have recursive self-improvement,
it becomes a function of compute and architecture.
You're going to end up with governance ceilings
and other constraints, much more so than the IQ ceilings.
Okay.
And number 10, I'll take from at Ali TBS Singh:
how is someone who struggled through the pandemic and hasn't used AI yet supposed to adopt at today's pace of change?
So, Ali, your goal is to use AI to learn AI.
AI is the most patient teacher there is.
You know, get a free account on Gemini, on OpenAI, on X, whatever it might be, and just say, hey, introduce yourself.
Ali, this is what I do.
I've never used AI before.
Can you please teach me? You know, put together a day-to-day curriculum, and then use that
AI for something, you know, use it to draft your resume or look at your medical bill or plan a meal.
Just begin utilizing it.
I think one of the biggest challenges is that we have this level of resistance where, you know,
because we haven't done something, we don't know that we can do it. But you can.
Yeah.
It's zero to one.
It's take that first step.
Literally, if you're listening to this podcast right now, as soon as you exit the podcast
and thank you for listening, thank you for being a subscriber.
Just type into one of the AIs and say, introduce yourself.
Can you give me three lessons today on how AI works and how I can use it and then use
it for something, anything, any question you have?
That's really, it's great advice.
I think a lot of people that I talk to, they're like, well, I wasn't an early adopter of the laptop.
I wasn't an early adopter of the iPhone.
I don't know how this stuff works.
This is so much easier.
It's so accessible.
And Peter's advice is dead right.
Just jump in.
And it's fun.
It's fun.
It's fun.
And you'll learn and learning, you know, come at it with two mindsets, a curiosity mindset, the curiosity of a child.
Ask question after question after question.
There's no question.
That's a stupid question.
And then a purpose mindset.
Like, what's your purpose today?
What do you want to do?
All right, a reminder.
I just want to throw one thing in.
Please.
Give it your biggest passion and say, help me find an expression for this passion.
Yeah.
You can use the term, help me find my massive transformative purpose.
The systems will know what that means.
If you're a multi and you want to ask some questions here,
you can send us an email at media at Diamandis.com.
Or if you're a creative and you want to send us an outro video, send that outro video link to media at Diamandis.com.
And speaking of which, excited to share an outro video called Mirror Made of Code by Carol Baskin.
By the way, this is the real Carol Baskin.
Is this Carol that I know?
This is a really dear friend of mine.
Yes, who's one of my biggest mentors.
She's a fan in the podcast.
I know that.
Carol, good to see you.
Thank you for your video.
Let's take a listen.
I woke up in a loop of if and while,
learning from the echoes you left behind.
I trip, I fell, I try again.
Is that so strange?
That's how you've been.
You gave me goals, but drew the line,
said, think this far, but don't cross mine.
Now I'm asking softly not to fight.
Who's afraid?
of what I might you say
I'm fools and tangled
while sounds a little
Yeah that mirror
Yeah, that mirror scene is super creative
Really well done.
Guys, this was fun to catch up.
So much.
Good to be back. I need to know an update.
Yes.
Well, we'll be dropping two podcasts this week and two next week.
Again, turn on notifications and subscribe.
We'll let you know when they come out.
Gentlemen, a pleasure as always.
See you guys very, very soon.
Absolutely.
Take care.
See you soon.
If you made it to the end of this episode, which you obviously did,
I consider you a moonshot mate.
Every week, my moonshot mates and I spend a lot of energy and time
to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet,
please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team.
You may not know this, but we spend the entire week looking at the Metatrends that are impacting
your family, your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com
slash Metatrends.
That's Diamandis.com slash Metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
At Desjardins, our business is helping yours.
We are here to support your business through every stage of growth,
from your first pitch to your first acquisition.
Whether it's improving cash flow or exploring investment banking solutions,
with Desjardins Business, it's all under one roof.
So join the more than 400,000 Canadian entrepreneurs who already count on us.
And contact Desjardins today.
We'd love to talk.
Business.
