Moonshots with Peter Diamandis - Claude Code Ends SaaS, the Gemini + Siri Partnership, and Math Finally Solves AI | #224
Episode Date: January 27, 2026. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified. – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy – Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex: Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on January 20th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Claude 4.5 is making waves.
That is game-changing.
Opus 4.5.
It is actually incredible.
It's the best in the world of coding.
Claude Opus 4.5 is the greatest AI model I've ever used.
I've been talking to a few of my ex-developer friends from Yahoo.
And they're literally like, how do I get my head around this?
This is unbelievable.
The future of the world belongs to flexible companies, you know,
Salim-style exponential organizations that can pivot and improve constantly.
Only the paranoid survive.
It's official. Google is going to power Siri. Gemini on iPhone changes the physics.
We move from a search box that gives information to a magic box that gives action. Is the website going away?
I want to speak directly to the elephant in the room. The elephant in the room that I perceive is...
Everybody, welcome to Moonshots. Another episode of WTF just happened in tech. I'm here with my moonshot mate Salim Ismail, the emperor of AI.
AWG, our resident genius, and DB2, the architect of AI investments.
We're here to prep you for the future and get you ready for what Elon calls the supersonic tsunami coming our way.
Before we start, two things I want to say.
First, a huge thanks to all of you who come week to week to listen to this episode.
It means the world to us.
And thanks for your comments.
We read all of them.
Those of you who haven't subscribed yet, please do.
We're now putting out as many as two episodes a week, and you don't want to miss one.
The speed of change is hyper-exponential.
So gentlemen, good to see you all. I miss you. Dave, where are you today?
I'm at Davos, the World Economic Forum. And Donald Trump just arrived in town, so you have the
longest Uber rides you will ever experience in your life. Actually, you know, there are 3,000 people
with machine guns lining the roads. Not exaggerating. It's quite a sight to see.
Are you seeing drones in the air? Yeah, there's actually radar at the top of the mountain, which is really
cool. It's huge, like real radar. And then they're in the valley, they have drone coverage just to
protect the airways. Also, what's amazing to me is a lot of the foreign leaders will come in,
and there's no flat space to land in this entire town. And so they land on a frozen lake.
Wow. Wow. I mean, it's well frozen, so I think it's okay, but it's...
You sent some beautiful photos, the mountains behind you. I hope you have some really warm clothing.
My last, my last, you know, World Economic Forum venture was one of cold tolerance.
Well, I'll tell you, the sun is out. It's absolutely beautiful, and it's about freezing.
But the sun, you know, the top of the mountain is 10,000 feet. So the sun just comes blaring through when it's a
beautiful day. So it's pretty spectacular. Alex, how about you, buddy? Where are you?
I'm in Liechtenstein, slowly making my way to Davos. Liechtenstein has become something of a commuter
village, if you will, commuter country for Davos. But looking forward to seeing Dave and everyone else in
person tomorrow at the event.
Amazing.
You're on a secret mission.
You'll be on stage.
Yeah, you're going to be a star tomorrow.
I hope.
You're on your secret mission, Liechtenstein, as usual.
Change of scenery, Peter.
Change of scenery.
Let's not use the V word.
All right, no, a V word.
Okay.
I'm not sure what that word would be.
Maybe vacation.
But no, I mean, listen, you're producing seven days a week,
24 hours a day.
So the AI amongst us.
Selim, where are you, pal?
That's an unusual curtain behind you.
I'm hiding a big electrical panel.
I'm in the Golosano Foundation meeting with about 15 hospitals getting together where he's donated huge chunks of money for pediatric things.
So how do you collaborate and create a hub for all of them to get transformed?
I'm in Fort Myers, Florida, and I came out of a snowstorm in the Northeast.
So I'm very happy to be here right now.
Welcome to the sunshine.
Let's jump in.
I was going to hit two major events going on to open up the conversation, give people a sense of what's going on in the world.
The first is CES, and the second is the World Economic Forum.
A little bit of a recap.
I just got back from CES last week.
It was a madhouse as usual.
You know, I looked at my steps and it's like, you know, on Tuesday and Wednesday, you know, four or five, six thousand steps.
On Thursday, 28,000 steps, which gives you a sense of the extent of it.
I mean, 148,000 attendees, 4,000 exhibitors, 1,200 startups. It was a madhouse. And, you know,
I'm going to hit on just one major theme here, which was the Cambrian explosion of robots.
This year was all about robotics. I'm going to play some background videos here. First were
robot hands, and the second were humanoids. By my count, there were something
like, I don't know, 38 humanoid robot companies and 12 robotic hand manufacturers at this event.
And it really felt different from that perspective.
It felt sort of, you know, like the future we're all waiting for.
I don't know if you guys are tracking these robot companies.
Alex.
I mean, I've covered in my newsletter, the innermost loop, how, in some cases in China, for example,
the Chinese government feels that there is such an overabundance of humanoid robotics companies that they're taking regulatory measures to
limit the competition. I do think, and I've made the point on this pod in the past, that the
compute, the AI compute is going to march right out of the data centers. And I think
CES 2026, with Jensen's talk, with all the humanoid robots on the floor, I think we're seeing
that in process. I think we're seeing the physical world start to become fodder for the
AI revolution and isn't this exactly the sort of singularity that you were hoping for?
It's exactly that. You know, there's an analogy here I wanted to share with our viewers and
listeners, which is, you know, one question is, are these robots all going to make it? And the chances
are effectively zero. If you go back 100 years to the turn of the 20th century, there were
253 active U.S. automotive companies in 1908, 253, right, and that fell
to like 44 by 1929, with Ford, General Motors, and Chrysler sort of rolling them all up.
So I think we have the same thing here.
I think we're going to end up with, I don't know, you know, there'll be a Chinese group
of robots and an American group of robots, though I did see a great company from Germany.
And then my equivalent for the robot hands is the tire company.
So I looked it up.
And if you go back again to that same period, you know, the early 1900s,
there were 278 tire companies in the United States.
Pretty crazy.
Well, the same is true with websites.
You know, in the Internet boom,
the number of different retail websites
from diapers.com to pets.com to everything else.com.
It didn't mean it was a bad investment thesis.
A lot of it got aggregated together.
Amazon bought a whole bunch of them.
And so from an investment point of view,
it was okay unless they were exactly redundant with each other.
It does feel like, though, the humanoid robots
are very, very similar to each other.
so, you know, maybe a shakeout.
I think so.
I mean, you know, we're going to go see Figure,
go meet with Brett Adcock and do a Moonshots episode from there,
sort of catching up with him a year after the last conversation.
But between Figure and obviously Optimus and 1X, you know, Apollo and Digit,
all these robot companies,
I can't imagine there are going to be, what, a dozen designs,
but it's going to be a price competition and an AI competition, I think.
Well, it's not a Cambrian explosion if we're going to follow the metaphor properly and accurately
if we don't see an explosion of different body plans as well, Salim.
Thank you.
Thank you.
I mean, you should see a huge variety of different form factors.
My question is, if you're a robotics hand company, who are you selling to?
You're just only selling to the robot companies, basically.
Right?
Who needs just a hand?
Well, it's even worse than that, because I've gotten pitched
by a few people who are making finger sensors, right, for tactile, you know, fidelity.
And it's like, I don't know.
I mean, I'm not sure I would be going into that business.
I know, you know, Brett and Elon and Bernt, they're all vertically integrating on all of the components.
I would think you kind of have to for the way that, for the centralized control structures of the robot.
Yeah.
I would say, I mean, in defense of the hand company.
A, hands are hard.
B, we don't know what a mature version of the humanoid or non-humanoid robotics industry looks like.
We don't know if it's going to stay vertically integrated or if it'll move to a more horizontal stratification,
in which case maybe a dedicated hand company makes some sort of economic sense.
Maybe.
I think the winner is going to be the octopus arm company.
You know, Salim is going to be the chief priest of the multi-arm religion for robots.
Hey, everybody, you may not know this, but I've got an incredible research team.
And every week, myself, my research team, study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week,
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com
slash Metatrends.
That's Diamandis.com slash Metatrends.
You know, so just walking on CES, the robots were a huge part of it, very visible throughout.
There were eVTOLs, you know, the flying car companies were there.
Zoox and Waymo were there.
And there's a, I think what I took away from CES this year was the physical manifestation of AI in the world.
A lot of that.
I think this speaks to what we talked about, right?
We said this year, you could have kind of ignored it last year, but this year you won't be able to ignore it. It's coming at you.
Yeah, for sure. I want to play one of the key parts that made the media was Jensen's NVIDIA opening keynote.
Alex, you asked me to grab some video from that, so I've done that. Let me play the video. It's going to highlight three different elements that NVIDIA is putting forward.
One is called Cosmos. It's a physical world model. Alpamayo, which is their open vision language action
model, and Vera Rubin, their GPU-accelerated supercomputing system.
Let's take a listen and then chat about what Jensen unveiled.
So, for example, what comes into this AI, this Cosmos AI world model on the left over here
is the output of a traffic simulator.
Now this traffic simulator is hardly enough for an AI to learn from.
We can take this, put it into a Cosmos foundation model,
and generate surround video
that is physically based and physically plausible
that the AI can now learn from.
And there are so many examples of this.
Let me show you what Cosmos can do.
It starts with Nvidia Cosmos,
an open frontier world foundation model for physical AI.
Pre-trained on Internet scale video,
real driving and robotics data,
and 3D simulation.
Cosmos learned a unified representation of the world, able to align language, images, 3D, and action.
It performs physical AI skills like generation, reasoning, and trajectory prediction.
From a single image, Cosmos generates realistic video.
I'm going to pause there a second because I think those two go together really well.
You know, all of a sudden, the data you'd aggregated has very little differential value.
So, you know, I'm curious, Alex and Dave, you know, Tesla did really well because during their early autopilot world, they collected so much data from the real world.
But that moat all of a sudden is gone if you can just simulate the same amount of data, don't you think?
Yes and no.
My two cents on this would be there's value in compliance-oriented spaces, such as driverless autonomy, to capturing the march of the nines,
where you need to capture really long-tail events,
the crazy things that happen in the road in front of a driver,
and simply scraping YouTube or paying drivers
to collect lots of video data or lots of paired video action data
won't capture that long-tail of extremely rare but extremely important events.
But on the other hand, I think Nvidia's strategy here with Cosmos
and with Alpamayo of doing what Intel, back in its glory days,
used to do, which is commodifying its complement, providing optimized software SDKs to encourage
everyone to build on top of their stack. It's exactly what Nvidia should be doing. It commoditizes
their complement and makes their hardware that much more valuable. And in the case of, like, Cosmos and
Alpamayo, it's encouraging everyone, especially probably Chinese OEMs and maybe unconventional OEMs,
to go build Tesla FSD competitors. And it's great for Nvidia's business. But my point
being that all of a sudden you can create the data to train your systems through this mechanism,
which is a hell of a lot cheaper. Dave, you were going to say? Well, Alex said yes and no. I was going to say
no and yes. But I just had an hour-long conversation with Joe Aoun, the president of Northeastern
University, just outside my door here, actually, on this exact topic, which he's calling spatial
AI, but it's really physical AI. And part of what I was saying is, if you
say, look, hey, Nvidia's solved the problem of synthetically creating these physical spaces.
Oh, okay, well, I want to build a magnetic containment bottle for a fusion reaction.
Oh, yeah, no, we didn't do that.
Oh, I want to lay down atom-wide wires on a chip.
Oh, yeah, no, we didn't do that.
Hey, I want to do physical surgery at a nanoscale.
Oh, yeah, no, we didn't do that.
So, you know, this is the same thing with coding.
There's so many versions of coding.
There's so many versions of physical space that go way beyond what any one company is going to do.
And so I think that the platform tools are really great
because it enables more people to work on other areas
of spatial technology.
But spatial, like, you know, even if you think about
the fusion reactor we want to put on the moon, on the moon,
you know, working in zero-g, does it model that?
Like, no, of course not.
So there's room for many, many, many people in companies
to be gathering all kinds of spatial data
and quantifying it and tuning the neural nets
to work in different, you know, scales,
sizes, shapes, gravitational fields,
radioactive areas, all that is different data.
So it's wide open.
I took this as a pretty big deal because this feels like
Nvidia is trying to be the AWS of reality.
Because once you can have world models like that
and support it from the chip to intelligence all the way up,
you can do some really interesting things.
And I think Alex is right.
This allows them to expand their business model pretty radically.
Incredible.
So, you know, 10 trillion.
You know, I was just thinking about this the
other day as SpaceX is getting ready to go public, that, you know, a trillion-dollar company used to mean a lot.
Now it's a $4 trillion company, and soon it will be a $10 trillion company.
And we're becoming desensitized to these valuations.
All right.
Let's continue on with Vera Rubin from Jensen.
We're announcing Alpamayo, the world's first thinking, reasoning, autonomous vehicle AI.
Alpamayo is trained end to end, literally from camera in to actuation out.
Let's take a look.
Everything you're about to see is one shot.
It's no hands.
Okay, Vera Rubin is designed to address this fundamental challenge that we have.
The amount of computation necessary for AI is skyrocketing.
Let's take a look at Vera Rubin.
The architecture, a system of six chips, engineered to work as one, born from extreme co-design.
It begins with Vera, a custom-designed CPU,
double the performance of the previous generation,
and the Rubin GPU.
Vera and Rubin are co-designed from the start
to bi-directionally and coherently share data faster
and with lower latency.
Alex, what do you make of that?
You see what Jensen did there, right?
Vera is the CPU and Rubin is the GPU.
It's very interesting in light of the history of the attempted Arm acquisition.
I think what we're seeing here is the emergence of Nvidia as a vertically integrated hardware provider.
It's not just about providing the GPUs anymore.
Now it's about providing the full tamale, probably extending upward to providing the full data center.
I've written almost every day about how the memory shortage being created by AI infra deployment
is sucking all the oxygen out of the PC space.
It's going to become, if present trends continue, completely uneconomical to buy souped up local PCs because of largely memory shortages,
DRAM shortages being created by the GPUs that are, of course, all going into the cloud and not into the client.
So I think what we're seeing with Vera Rubin, successor architecture, of course, to Blackwell and predecessor to Feynman, is that CPU plus GPU plus memory plus interconnect, plus all the
housing, all of this is going to be packaged up into the new form factor of computing, which,
by the way, it's no longer smartphones, it's not smart glasses, it's not PCs, it's a data center.
That is the new form factor of de facto computing on this planet.
You know, my son, Jet built a computer back about now six, seven months ago, and we looked at
the price for what we paid then compared to now, and it's doubled for his gaming computer.
It's crazy.
So much for hyper deflation on the client.
Yeah.
Yeah.
Yeah, what's funny is a lot of people feel like they've lived through DRAM bubbles before,
and it'll come and go.
And so they're not expanding production fast enough,
but this is not going to come and go.
This is going to grow exponentially.
The demand is basically infinite from here on out.
And so one of the things Elon was saying,
because we were talking about TSMC is not building new fabs anywhere near quickly enough
to keep up with demand.
Why not?
Because they're deathly afraid of a downturn in the silicon cycle,
which has happened in the past.
But Elon was like, well, you know, they should be a little worried about that.
And I was thinking about it after the interview.
Like, Elon is building his own fabs,
and he's going to go full bore exponential on this like he does on everything.
And so my guess is that DRAM, you know, high-performance DRAM and GPU demand goes to infinity
and that prices are not coming back down and that Elon is just trying to buy himself
time to finish his fab strategy. And, you know, you saw in the news that Samsung is a little worried,
and, you know, everybody's a little worried. Like, what is Elon doing here? Because, you know, he has a
$16 billion deal minimum with Samsung, maybe as much as $40 billion. And, you know, Samsung's great.
We're the supplier for Elon for the rest of time. Wait, no, Elon's building his own. What do you know?
So it's a little hairy in that, you know, dynamic right now. But I'm with Alex on this, the demand for high
performance RAM and high performance GPU goes to infinity. It's not cyclical. So as we go from
5G to 6G, I'm imagining in the future, I'm just going to have a dumb terminal, and I can
interchange any terminal with any terminal, and I don't actually have compute going on on this machine.
It's all going on in the data centers. Yes? No? What do you think?
Could very well happen. I mean, I think it's a function of latency, and thank goodness that Starlink is
becoming more broadly available because you're going to want both low latency communications
and high broadband communication with the cloud. But as with everything, it's only a phase
until we see a lot of humanity uploaded into the cloud, at which point we won't be asking
that question anymore. Oh, yes, I cannot wait. I think there's always going to be demand for
local compute. It's too useful to have local models independent of connectivity. It can be
on the edge of the 6G cloud, right? I don't have to have the compute.
on my desktop right here with me.
Alex,
and also like,
Salim,
let's turn it on its face.
Why can't you be local to the compute?
I could be.
That's perfectly fine.
Alex, AWG, have a question for you.
The year today, it's 2026.
In what year are you uploading to the cloud?
Let's get this on the record.
It's a trick question because,
as with the singularity,
I don't think there's a single point in time.
I think it's a process that's spread out over a number of years.
I'd like to think the process has already started in some form because a lot of my writing is available now online.
And an entrepreneurial reader can feed all of my writings to a model and ask it to do a low fidelity reconstruction of me already.
Is that an upload of me?
Arguably, it's a very low fidelity upload of me in some form.
Well, then we're all uploaded.
We're all uploaded in that case in some shape or form.
To some low extent, the salvaged version of the question I would ask myself is,
when will an ultra-high-fidelity upload of myself exist in the cloud?
When are we scanning all of your 100 trillion synaptic connections and then uploading that?
It's still a trick question because that's probably a destructive process for the next 10 years.
So non-destructive scan of my brain, I would be very disappointed if that doesn't happen in the next five to 10 years.
Destructive upload of my brain with Kurzweilian or Moravecian nanorobots
in my bloodstream, I certainly hope that that's happening in like 10 to 20 years.
I hope we don't see a destructive upload of you anytime soon. That's all I could say.
All right. That would be undesirable. Let's shift to our friends here in Switzerland.
Dave, give us a quick update on World Economic Forum. What's going on there? It is so different.
This is my sixth year coming to the World Economic Forum. It is so different from any prior year.
So, Alex, this will be your first time here, right, tomorrow.
So you'll see it in a very unnatural form.
So for starters, this is the first time that I'm walking down the street
and everybody's going, hey, you're the moonshots guy.
I'm used to being anonymous up and down the road here.
This is very new for me because there's a nice spot
where you can eat shaved meats and drink a nice Swiss beer.
And I can't sit there quietly anymore.
It's a big change.
But it's fun.
The other big change, you know, America House is this house that America built right in the middle of the Promenade.
It could not be more front and center.
And Larry Fink put a lot of effort.
Larry Fink, the CEO of BlackRock, he's co-chairman of the World Economic Forum this year.
He put a lot of effort into getting Donald Trump to come and make it a very, like, let's-make-friends kind of thing.
But he built this America House right in the middle.
It is covered in eagles and American flags, and it is so in your face.
So then Donald Trump decides that we need Greenland right on the brink of this event happening.
Europe isn't happy about that.
So it's kind of this double whammy of the American Eagle being right in your face,
and then Greenland happening concurrently.
So there's a lot of tension in the air, as you might expect.
And the other big change is, you know, all of the buildings that were banks
and consulting companies last year, you know, they spent a fortune converting these,
Every one of them is AI now.
It's every billboard, every banner, everything is AI, AI, AI.
So that's a complete, complete shift from last year.
But tomorrow, you know, we'll be curating 270 speakers in the dome.
Almost every talk is on AI.
A lot of them will be, you know, several of them will be Alex, actually, talking about AI.
But a lot of the top AI lab people, I think there's a trillion dollars of AI R&D represented in the building tomorrow, including Chase Lockmiller,
including Demis Hassabis from Google.
So it's a pretty power-packed environment.
And truly in there.
I heard some of the news coming out of the World Economic Forum.
In particular, OpenAI confirmed it's going to unveil its first hardware device
in the second half of this year.
I guess a gentleman, Chris Lehane is there,
who's the chief global affairs officer.
So no idea what the form factor is going to be.
Yeah, but OpenAI paid, what, $6.5 billion for their device.
We're going to see what it comes, what it looks like, hopefully this year.
Are there conversations there about how do you slow it down or how do you adapt to it?
You know, the politicians are very, very slow and reactive.
A lot of it is always self-serving.
It's, you know, how do I win an election with it?
Which is kind of sad.
But I think it's a lot of confirmation of exactly what Elon was saying in terms of global prosperity is imminent amid social unrest and chaos like you've never seen before.
So it's kind of an odd double whammy that everyone's anticipating.
Disappointing lack of ideas.
I think we have more ideas on this podcast in about 10 minutes coming from Salim and Alex than you'll hear from this forum in like a year.
but there is incredible global awareness.
It's like nothing I've ever seen in terms of a shift in awareness in just a year.
I had a conversation this morning with an old friend Daniel Schreiber,
who is the CEO of Lemonade.
It's an AI-focused insurance company,
and he's put forward a paper on how to actually implement universal high-income.
Because remember, during our pod with Elon, he said,
I'm open to new ideas. And I'm going to share the paper with you guys.
I think it's extremely well done.
And I'm excited to, you know, to bring this into our conversation going forward.
So, yeah, we need ideas.
And the leaders there are going to find themselves screwed if they don't come forward with a plan soon.
I think we've got one to three years maximum, more on the one-year side, to find some ideas that are going to work for society.
Yeah.
Any other announcements coming out of the forum, Dave?
There'll be a whole bunch tomorrow.
So we'll get them on the next pod.
We'll have to circle up again really, really quickly.
You know, Alex will, we'll unveil all kinds of things tomorrow.
I'll bet.
But, you know, with 270 speakers, you're going to have, you know, maybe 50 newsworthy items
that you're going to want to talk about.
Nice.
And 3,000 machine guns.
Jesus.
That's the new metric.
Yeah, that's really ramped up, actually.
So I guess, yeah, maybe with Donald Trump coming to town, they cranked it up,
helicopters, drones, machine guns.
Crazy. All right, the job singularity is our next conversation subject. I'm going to play this recording from Bob Sternfels, the CEO of McKinsey. Let's take a listen to what he has to say. And then, Salim, want to dive in with you about the future of McKinsey, Deloitte, all those companies. All right, take a listen.
So then you kind of say, okay, what does that mean for McKinsey? We're applying this to ourselves. I often get asked, how big is McKinsey? How many people do you employ? I now
update this almost every month, but my latest answer to you would be 60,000, but it's 40,000
humans and 20,000 agents. A little over a year and a half ago, that was 3,000 agents. And I originally
thought it was going to take us to 2030 to get to one agent per human. I think we're going to be there
in 18 months. And we'll have every employee enabled by at least one or more agents. That's kind of
one piece of what are the assets and technologies that we're building in ourselves.
So, Salim, is this going to save the consulting companies?
So, you know, I actually have a counter perspective to this, which would be kind of unexpected in a sense.
I actually think they'll do very well.
The reason I say that is when you're dealing with big companies and those are your clients,
in the land of the blind, the one-ed man is king, right?
And in a volatile world, they only have to be half a side.
step ahead of their clients to kind of add value.
And in a volatile world, the clients need more help than ever.
The only part I thought was really kind of ridiculous was: one agent per human being is ridiculous.
You should end up with about 100 agents per human being.
We're already building a system where you have ExO agents crawling through a company and just
running around doing their thing, one per attribute in the model.
And there's no reason why you couldn't be doing that across the board for all sorts of things
and having them come back and report.
I think the ratio of agents to humans
will continue to explode over time.
The real question for the Big Four
and the big consulting companies
is their business model.
They're already going to a shared value type of outcome model,
and I think that'll just keep going in that way.
The same old way of doing business is not going to work for them.
Alex, what do you think about, you know, the consulting companies?
The irony here is so delicious you could cut it with a knife. I'm reminded of, of course, the economist Robert Solow's famous quote about productivity everywhere except in the statistics, which was at the time, of course, in reference to the fact that the IT boom of the 1970s and 1980s was seemingly not showing up in macroeconomic statistics. And this direction from McKinsey has me
wondering, are we going to redefine per capita productivity to include agents as heads as per capita
in order to artificially suppress productivity growth? It seems like as we start to treat
humans and agents as being more fungible heads in an economy, that could be a way in which
what would otherwise be a productivity explosion, deriving from the intelligence explosion,
creates a false sensation that we're not going through a productivity boom.
That's the more ironic take.
The less ironic take would be, of course, we're going to move to zero human companies,
and that's where the real productivity boom comes from.
Yeah.
All right.
And I think there's one other quick point here is, you know,
one of the challenges for some of the big companies, including McKinsey,
is that their clients may not be around.
Their clients may not survive this seismic shock, right?
But we have the biggest advisory opportunity in the history of mankind because we have to rebuild all the institutions by which we run the world.
And when I talk to the CEOs of these big advisory firms, including the big four, I basically say to them, that's your opportunity.
I mean, we're going to need to rebuild and re-architect all of our institutions.
So head there.
Let's jump into a point made by Vlad Tenev, the CEO of Robinhood, about the job singularity.
But what we see in the data is that we're also on a curve of rapidly accelerating job creation,
which I like to call the job singularity, a Cambrian explosion of not just new jobs, but new job families across every imaginable field.
Where the Internet gave people worldwide reach, AI gives them a world-class staff.
And so if you look at this cloud of jobs, certainly there's going to be some jobs that we can't predict yet.
But I think we can make some predictions.
There's going to be a flurry of new entrepreneurial activity with micro corporations, solo institutions, and single person unicorns, which by the way, I don't think we're very far from.
So this is hitting the same theme we've discussed before.
You need to become a creator, not a consumer.
The future job is entrepreneur.
Solopreneurs, you know, the billion-dollar single-person startup is coming.
I tend to agree with him.
And, you know, the point you made a minute ago, Salim, that McKinsey, one agent per employee, is just not going to cut it.
Yeah, I mean, this, for me, it seems we've been talking about
this kind of topic for months now on the podcast, just reiterating and reconfirming all of our
hypotheses here. It is a very powerful model. You have to go from future shock to future shape.
And we've been running workshops with teenagers, because by the time they get out of whatever
college or university ends up being over the next five, six years, whatever
thought we had about what employment looks like will be completely different. You better
be the entrepreneur, not the employee.
Yeah, we've had this conversation, and I think we'll hit this a little bit more: that college could end up being the absolute wrong move unless you're going there to start a company, find your purpose, and so forth.
I made two predictions 10 years ago about Milan, who was then five, right?
He just turned 14, same age as your kids, Peter.
One prediction was he would never get a driver's license.
Okay.
I may be slightly wrong on that.
He may get one because he wants to, but he won't need to in the next two, three years.
That would be one.
And the second one was he would not go to university, certainly not to get a job.
Now, I don't know what we do, because as parents, we still have to get rid of the kids and get them out of the door.
So we'll have to figure that out.
But I think there's such a huge structural change coming that the entire higher education world is not set up for this.
Amazing.
Salim, you're pointing to the job of the future:
adult daycare to take away your children. Oh my god. There you go. Yeah, right now we call that
TikTok, but that's not a great solution. Oh, God, I hope not. You know, Claude 4.5 is making waves,
and let's chat a little about it and the hyperscalor growth that's coming. I love this quote
from Sergei Kariev. He says, Claude Code with Opus 4.5 is a watershed moment, moving software creation
from an artisanal craftsman activity
to a true industrial process.
It's the Gutenberg Press,
the sewing machine,
and the photo camera.
Alex,
you're proud of Opus 4.5.
In our last conversation,
you were speaking to it,
telling it you see it,
by the way,
if you stick with this podcast at the very end,
there is an incredible outro
by David Drinkwell,
which is an ode to Opus 4.5, which is beautiful.
So please stay till the end to hear that outro music.
Alex, take it away.
Yeah, I think the zeitgeist is that over the holidays,
over the New Year's holiday,
many in the tech world started playing with the combination,
seriously, with the combination of Claude Code
plus Opus 4.5, that some have started calling Clopus,
for the first time seriously.
And Clopus is incredible.
As we've discussed on the pod in the past, it pushes the boundaries on the METR benchmark for autonomy time horizons, and that makes all the difference in the world.
And by the way, it's not just Clopus. We're starting to see similar effects with GPT-5.2 Codex, which is also specifically designed to push large autonomy horizons with many action calls in sequence together.
And I think this is an inflection point.
Some are calling it AGI.
I think that's nonsense because I would argue we've had some form of generality,
regardless of how, you know, as we've quibbled in the past over what AGI itself means.
We've had arguably some form of generality for now the past five and a half or so years.
But there's an inflection point of some sort that's been reached.
Caveat, caveat.
Every point on an exponential curve feels like a knee,
and almost a hyper-exponential inflection point in terms of these autonomy horizons.
And it's to the point where we've talked on the pod in the past about the AI 2027 forecast,
there was an alternative forecast, a derivative of that,
rather than projecting that autonomy time horizons would be exponential,
projecting that they'd be hyper-exponential, so an exponential of an exponential.
And it looks, and I write about this every day,
it looks like at this point more likely that that's the trend that we're on,
specifically with Claude Code plus Opus 4.5, Clopus, and GPT-5.2 Codex,
being able to accomplish absurd amounts of autonomy,
like creating allegedly entire web browsers in Rust with functioning,
allegedly, JavaScript engines from scratch.
That would have taken years historically.
So if this trend continues, I really,
do think these autonomy time horizons pushing from five hours to weeks to months to years,
that is game-changing.
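(A rough way to read that exponential-of-an-exponential point, using illustrative symbols that are not from the conversation: call T(t) the autonomy time horizon at time t and d a fixed doubling period. Plain exponential growth is T(t) = T_0 * 2^(t/d), a constant doubling time. Hyper-exponential growth is something like T(t) = T_0 * 2^(2^(t/d)), where the exponent itself grows exponentially, so the effective doubling time keeps shrinking, which is why the jump from hours to weeks to months could arrive much sooner than a straight-line extrapolation would suggest.)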
Yeah, I totally agree.
And it's actually, there's a lot of research showing what I'm experiencing,
which is writing code is actually harder than ever in terms of taxing your brain.
Because the machine creates code so quickly that you can't even keep up with, you know,
in the old days when I would write code, I'd have all the time in the world to be thinking.
about what I was architecting
because it would take so long
to bang out the code itself.
Now you launch like five or ten
parallel agents.
For me they're all Opus 4.5
and they're all working on different parts
of your product or your project concurrently
and they get done so quickly
and so independently that it's almost hard to track.
Imagine you had like 100 employees working for you
and you gave them all marching orders
and you know, mentally tracking what all 100 are doing
is very, very taxing.
And so during this kind of transition phase of the singularity, the brain taxing is higher than ever.
And the survey research is showing it.
Like productivity is going through the roof, but it's very stressful by the end of the week if you're an AI master.
You know, and you're running a monster repo of these things.
So my bill, you know, my Claude bill is running about $100,000 a day now, you know, tipping on the high side.
And the amount of code I've created in the last couple months is bigger than my entire life,
up until now.
And I literally go back to it and say, you know that GUI that I asked you to build yesterday?
What did I call it again?
It's like, in the old days, I would have worked on it for a year.
I would remember what I called it.
Now it's just like, oh, shit, what was I doing?
Can you go back to that other slide?
I want to make that comment on that.
Yeah, sure.
Go ahead.
So I've been talking to a few of my ex-developer friends from Yahoo, where I was running
Brickhouse and you had some of the best developers in the world.
I've never seen a group of people so stunned in their lives as by what's just happened
over the last two weeks, per Alex's comment. They're literally walking around with their jaws dropped open, going, their brains are
exploding with the potential and possibility of what they can do now and with what's coming.
exploded with the potential impossibility of what they can do now with what's coming.
And they're literally like, how do I get my head around this?
This is unbelievable.
It's just fascinating to see that shock in their heads.
It's probably also worth adding, as we talk on the pod from time to time,
about how Anthropic has seemingly made an implicit bet that programming and that code generation
is the shortcut to recursive self-improvement as opposed to, say, open AI's bet, focusing on multiple
modalities, image generation, being the most prominent example, perhaps, or video generation.
And to the extent that Clopus is looking like a quote-unquote watershed moment, that would seem to
validate Dario's and Anthropics bet on code generation, in particular as the critical path
to recursive self-improvement and more broadly to human labor substitution.
And the question is, and here's the next slide here, what's it going to do to the software
industry and the AI industry? A friend of mine sent me both of these tombstones here, and one is,
you know, rest in peace, all of the SaaS companies, and then rest in peace all of the, you know,
vibe coding companies. And I am curious. All of a sudden, if you can rebuild Salesforce, SAP,
Stripe, by giving it, you know, the proper prompts, and if Claude Code is enabling us,
you know, an individual to code as fast as any of the other specialty companies, what do you guys
imagine is going to happen? Are they going to be able to compete? Will they stay relevant?
Well, there's a lot of truth on this, a lot of truth on this slide. But I think the meta
topic is, look, forever hereafter, you have to pivot constantly as a tech company. The days when you
could rest on your recurring cash flow laurels and not improve your product for 20 years, like Microsoft,
anything, those are gone. And you look at the majority now of the revenue from these companies like
Microsoft and Oracle is from their cloud business. So they're not, you know, they're not dead.
They've moved to cloud very quickly. But if they haven't moved, you know, anyone who's sitting there
not pivoting and not attracting great new talent to help with the pivot, yeah, you're done. You're
doomed. But that's been true. You know, if you look at the Magnificent Seven, I think we counted six
out of the seven are doing something fundamentally different from what made them big in the first
place. And so the future of the world belongs to flexible companies, you know, Salim style,
exponential organizations that can pivot and improve constantly. Only the paranoid survive.
Yeah, exactly. So it doesn't mean they're on a tombstone. It just comes down to do they have great
leadership and can they move and pivot and change. But, you know, yeah, the core
point of the slide is right on. These classes of products are doomed. I think we should take some
credit here. Over the summer, we talked about the collapse of the business model and product market
fit. Mikalmone, one of my community members sent in an article saying AI is not going to be able to
collapse what you thought was a safe business model, and it collapsed instantly. Now we're seeing that
happen in real time. I'll just add, I think it's the exact opposite. Sure, to some extent,
Okay, okay, I'll play the contrarian card because that's the easiest story to tell.
I think it's an important point, Alex. It's worth looking from the other side. Go for it.
So the CRMs are already heavily customized.
So already there was enormous pent-up demand for cheaper ways to customize no-code,
customize existing applications. I think CRMs in particular, like Salesforce CRM,
are already very low-compliance substitutes
for automated code gen from some of these models.
But I think the point that everyone is missing is these companies have the same access to
Claude Code and Opus and all of these frontier models that consumers,
or other enterprises, who would purportedly go and create all of their own in-house substitutes, do.
So I think, yes, on margin, of course, like I see the same stories everyone else sees that, you know,
here, $500,000 Salesforce CRM contract canceled in favor
of bespoke, internally Claude Code-generated CRM.
Of course that's going to happen on the margin.
But in the meantime, everyone has access to the same weapons of mass superintelligence.
And so I would say on a global basis, no, the market will find a new equilibrium.
Ho-hum, nothing to see here.
Wait, I'd like to take the counterpoint.
I want to take the counterpoint of that.
Okay.
Okay.
So, you know, what, I think if we look at how we were doing business as usual with
systems of record running enterprise stacks. Yes, correct. I would agree with you. And these
companies, like Salesforce, are adapting very, very well in this new world. But I think what we're seeing
happening is that you've got the normal enterprise stack. But people are building AI-native stacks,
red-teaming it from the side and having it operate a new stack that's without the systems
of record. And that'll be a whole new ballgame. I think you'll see an emergence of kind of an
AI enterprise, AI-native enterprise stack that's completely independent and distinguished and
completely separate from the legacy.
And I think that's what we're going to see in emergence of over time.
But it won't be right away.
It'll take, you know, six months to happen.
You know, in a big, big picture, the world will move to a new equilibrium.
It always does.
But in the meantime, in the little picture, a lot of people lose a lot of money on a lot of stocks and
make a lot of money on other stocks.
And I think you really need to look at the people and the management teams and the talent.
coming in and going. And that's what all the quant funds are doing now, too. They've got, you know,
big data analytics looking at talent flows as a leading indicator of whether the companies will
succeed or not. So, yeah, everyone has access to the same power tools, but not everybody will
use them equally. And there are some serious lazy laggards on that slide and also some leading thinkers,
you know, like Salesforce, some very front-edge thinkers. So there'll be a lot of shuffling in the
market caps. And it does make sense to try and pick the winners and losers, even though it all
settles, you know, at an equilibrium.
All right, some new news that came out recently.
It's official.
Google is going to power Siri.
Finally, Siri is not going to suck anymore.
So Google and Apple have teamed up.
And I got this post from a friend of mine,
dear friend Scott Stanford, who's the head of ACME VC.
And it, you know, it spoke to me.
He said, we've been trained to tolerate the web's friction.
We hunt for URLs, wrestle with passwords,
and dodge pop-ups when buying something.
Gemini on iPhone changes the physics.
We move from a search box that gives information
to a magic box that gives action.
This is where Universal Commerce Protocol
enters the equation.
Native Instant AI checkout.
Not a website flow, not an app,
but execution embedded directly
into the agent experience.
That's the meteoric plumbing
that could drive eventually to web extinction.
There's a cartoon here in the future
with an older guy, doesn't look that old to me,
and a young kid says,
Grandpa, tell me again about how you used to have to browse for things.
So is the website going away?
That's the question.
I think this is over-blown.
Is the QWERTY keyboard going away?
Never.
It's not happening.
I say yes.
Let me give an example.
Who the hell is going to be typing next year?
Well, I mean, what else goes away?
What else goes away here is reading.
If all of a sudden, you know, what's our primary interface going to be?
What's Open AI coming out with?
You know, we just saw Meta buy Limitless and then kill that as, you know, your AI wearable agent.
We're going to have a few of those coming.
We're going to have AR glasses.
But all of a sudden, if you're listening and talking, you're not reading.
Do our reading skills, you know, sort of disappear as well?
Alex, what's your contrarian view here?
All right, contrarian view time.
So if you actually look at UCP, the Universal Commerce Protocol, this is a JavaScript-oriented
protocol for e-commerce within an agentic conversation.
That's all it is.
People definitely come to me to advance the perspective that we're in the singularity and the
end times, the good end times are imminent, all of that.
This is not the end times.
It's very exciting.
Don't mistake my messaging regarding UCP, but it is not going to extinguish
the web. It is a way to start to standardize, and I know one of the team leads on this program.
It's very exciting, make no mistake. It is a way to start to standardize e-commerce from within
Gemini and other chat agents. That's all it is. Is it going to obliterate the web? Not at all. People do a lot
of other things on the web, and people do a lot of shopping that's browsing oriented rather than
conversational oriented on the web.
And if you're following the news, Amazon's buy-it-with-an-AI-agent
button is something of a controversy.
There are also a lot of agents that are doing shopping on the web that probably will not
be using UCP to do their own shopping.
So I think this is part of the overall solution.
I do not think it drives web extinction.
At the risk of violating protocol on this podcast, I completely agree with Alex on this one.
I'll give a quick anecdote here.
You know, when I was at Yahoo, they were looking at how would you upgrade the Yahoo Mail interface?
And it turned out we are such creatures of habit that if you move the send button, just a few pixels one way or the other, usage dropped off a cliff because people were so used to clicking right in that spot and God help you if you moved it.
And people kept trying to improve the design.
You just couldn't do it.
And so we are very wired into the habitual use of things.
And it's a very slow change in this type of thing.
QWERTY keyboard references now flow.
All right.
We'll come back to this bet in a few years.
This episode is brought to you by Blitzy,
autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours
to understand enterprise scale code bases with millions
of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their
development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for
each task. Blitzy delivers 80% or more of the development work autonomously, while providing
a guide for the final 20% of human development work required to complete the sprint. Enterprises
are achieving a 5x engineering velocity increase when incorporating
Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
In the meantime, Sarah Friar, the CFO of OpenAI, put out a paper, and I pulled a couple of charts from that paper,
and the quote from this is a business that scales with the value of intelligence.
If you're listening and watching, on one side is a chart that looks at compute that scaled over the last three years,
2023, 0.2 gigawatts; 2024, 0.6; and 2025, 1.9.
So you're seeing the amount of compute going up that Open AI is using.
At the same time, you're seeing revenues scale almost identically,
between 2023 at 2 billion to 2025 at 20 billion.
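(For a rough sense of what scaling almost identically means, dividing the quoted revenue by the quoted compute, using only the figures above: 2023 is roughly $2B / 0.2 GW = $10B of revenue per gigawatt, and 2025 is roughly $20B / 1.9 GW, or about $10.5B per gigawatt. So revenue per gigawatt of compute stays roughly flat, which is the parallel-scaling point the charts are making.)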
And my guess is that she put this out to say,
hey, there's not a bubble.
And as we're raising money,
the value we're creating in the universe
is worth you investing in us
to be able to build out our data centers.
Thoughts on this, Dave.
Yeah, isn't it amazing how most of our lifetimes
the software industry has been very dominant
with no infrastructure, no heavy costs,
no melting aluminum at the
front of the factory. And now it's really, really moving quickly toward physical infrastructure,
robotics, cars, data centers, you know, the most valuable fusion plants. Power plants. Yeah. It feels
much more sustainable to me than this kind of thin software layer, you know, with, you know,
these indefensible, but, you know, the moats are basically you're addicted to the product and you don't,
you don't have the time to shift. But now I think it's moving
to much more of a manufacturing-heavy, infrastructure-heavy economy with exactly what's shown on
this chart as massive investments in data centers, manufacturing, automation, robotics, all of that stuff.
My theory is, my theory is Sarah's getting ready for an IPO, right, if Open AI does go public,
and they're trying to justify the valuation, and they're trying to raise additional capital.
Unlike meta, unlike Google, even unlike XAI, they don't have an infinite cash flow machine.
and they need people to be investing in it,
be able to build out their data centers
and meet their energy needs.
And I think this makes the case
that revenues are scaling with data centers and energy.
I don't buy it.
I don't buy it.
I don't think, I think this is a correlation,
not causation viewpoint.
It's convenient that the two curves are parallel
and maybe in the future,
but I think there's other factors
that go into their revenue growth and other factors that go into the energy compute growth.
At some point, it will be causative, but I don't think it's there yet.
Alex?
I'd love to get Alex's thoughts on this.
It feels like there's so much vertical integration going on all of a sudden.
You know, the whole Elon Musk empire and now open AI building its own chips with Broadcom
and Google doing its own chips with the TPUs.
And, you know, AI is empowering vertical integration.
And Alex made a point on the last podcast that, look, we have this very layered economy.
with very clean APIs.
So down here you've got chips,
then you've got your BIOS,
then you've got your operating system,
then you've got your software stack,
then you've got your applications,
then you've got your consulting companies on top of that,
and they all rely on these clean layers.
But, you know, just the evidence of these companies all of a sudden
cutting vertically in the other direction,
is that the trend of the future?
Because AI just empowers compiling all the way through.
And even in the car industry,
where, you know, you would normally get your actuators from here
and your seats from there.
And it was about a seven-layer-deep supply chain
getting a car out the door.
But now Elon is going completely the other direction,
just starting with raw metals
and coming out with a car on the other end.
So that is one of the things AI could empower.
Alex, what do you think?
I want to speak directly to the elephant in the room.
The elephant in the room that I perceive
is that the CapEx, to the tune of trillions of dollars,
is enormous,
and the CapEx, to repay itself, is going to require an enormous amount of revenue.
And that revenue has to come from somewhere.
Is it going to come from adding ads to consumers?
Part of it can.
I don't think that can be the complete story.
So I think that the subtext here of trying to draw a parallel,
almost a field of dreams style, if you build the compute, the revenue will come.
I think the subtext is that both consumers and enterprises,
And by the way, this comes up in every conversation, almost every conversation I have with my friends at various frontier labs,
both consumers and enterprises are going to need to start consuming a lot more, very expensive, inference time compute,
in order to motivate all the CapEx.
What does that look like?
What does that look like?
So it means, A, consumers who, again, we haven't talked about this, as I recall, in-depth on the pod to date,
there was the whole in the past year OpenAI rolling out GPT-5 with reasoning on by default to consumers.
What happened for a while?
It was great.
Like, you know, Wiley Coyote runs off of the cliff and is now in midair and it's great and you're flying.
And now we've turned on reasoning capabilities for half a billion people.
It's amazing.
What happens?
Many of those people didn't actually use the reasoning capabilities, and/or decided that they didn't like the personality
of an AI that could reason.
And it was also very expensive
and also maybe not paying off.
The delay times. Not available right now.
Yeah, and it takes a while.
It's long latency to actually think.
You want an instant, you know,
an instant response from AI
that's completely sycophantic to you.
Who wants to wait for a non-sycophantic
thoughtful response?
So what happens is
you get consumers who aren't
necessarily at this point willing to be
force-fed reasoning.
And then you have enterprises who are using reasoning
but the reasoning isn't transformative enough yet
or not yielding transformative enough outcomes
to rationalize the sustained tripling year over year of revenue.
So what needs to happen to sum up the story is
I think we're getting to the point
where we're really going to start to need to see transformative applications
popping out of reasoning in order to motivate
continued year-over-year tripling of compute and revenue.
If we get that,
the party can continue ad infinitum.
I agree.
I hope so.
And the question, and I pose it in a couple of slides, is can they all survive?
Can they all get the capital they need to build their field of dreams?
And I love that analogy here.
Here we see Alphabet hitting a $4 trillion valuation.
And, you know, Sundar has done an incredible job.
Their stock is up 65% this year.
Their custom TPUs are now going to power Apple Siri.
Again, hashtag Siri will stop sucking so much.
Any thoughts on, well, let me go to the next subject here, and I want to have this debate amongst us, which is a question to my mates.
How many frontier labs will survive in the U.S. in the next three years?
We've got Microsoft, we've got Apple, we've got Google as dominant players, you know, $4 trillion companies, Amazon, Meta, Tesla.
OpenAI is about to go public.
Anthropic is planning to go public.
XAI, well, I think ultimately Elon's going to have the everything company and roll up XAI and Tesla and SpaceX all together.
So can they all survive?
Can they all get enough capital, enough compute, enough energy?
Because we're restricted on those things right now.
Thoughts?
Who wants to go first?
I can take a crack at this, but I want to make a comment about Google and Alphabet, which is this is an amazing stack they've built, right?
Where you have chips, going to models, going to interfaces, going to distribution, and all of that compounds.
And so I think I will make a prediction here
that Google will beat Nvidia market cap
by the end of next year.
So that would be, well, just a prediction.
Yeah, you're so right. I mean, Google had an incredible 2025.
Just look at the stock charts of the big tech companies
in calendar 2025.
And Google started the year vulnerable to AI taking all the search away, vulnerable, vulnerable, vulnerable.
And now Sundar Pichai is just crushing it.
But rewind the clock to when Sergey and Larry chose Sundar to be the next CEO, everybody I know said,
who, what?
What skills does this guy have?
He only knows AI.
He's not good at anything except AI.
Why would you choose him?
Now, it's like, yep, genius.
Absolutely saw this coming a mile away and this is where it pays off.
And so I think, you know, it's Sergey and Larry behind the scenes.
They should get a ton of credit for the year that Google had.
I also think that it's very hard to answer the question on the slide because I tell you
one thing, if the federal government says, you know what, Apple and Google, you guys can do whatever
you want together. Go ahead and use Google's AI on every iPhone. We'll only have one company in America.
It'll be Gapple. That's fine. Then there will literally only be one in the world, period.
So you can't answer the question without thinking about what the federal government will and won't allow.
I'm really surprised that AI partnership just skated right through, but there'll be another administration in three years, and they're going to look at it again.
Because if they said, Google, you're too powerful already, you need to get rid of Chrome, and, you know, if the election had gone the other way, Chrome would now be some other company, if that made sense, and I'm not saying it did, but if that made sense, then this Apple-Google thing
is light years more of a... So we've seen it in the telcos, we've seen it in the automotive industry, we've seen it in a number of, you know, browser wars: there are going to be some major players, there are going to be some minor players.
the major players? Because we have a lot in the mix here. You know, all of us are using four or five
different LLMs right now. Well, first of all, no one's going to, you know, Elon's not going to merge with anybody. Elon's going to be a dominant force. So let's put him on the list. I think Google is going to remain a dominant force.
My bet, and here's my long-term bet, that Google is going to make an attempt to buy Anthropic.
I think that's what leapfrogs them over everybody else, or Amazon's going to buy them.
But I think someone's going to make a push for that before they go public.
Thoughts?
If I were them, I would go public anyway and then worry about it later.
I think Microsoft, no, not Microsoft; X, Anthropic, Google are the obvious ones. The others are kind of open season.
I'll walk through, if I may, the names actually listed on this slide.
So Microsoft, arguably not a frontier lab.
Like right now, it's not a frontier lab.
We had the Mustafa discussion.
Same with Apple.
Similarly with Apple, not a frontier lab.
So cross those two off.
Google slash Alphabet slash DeepMind,
I view as a frontier lab.
I think everyone else would broadly agree
and I expect them to survive
the next three years.
Amazon, big question.
They provide a lot of infra, but do they offer frontier models with frontier capabilities, versus, say, more hyper-efficient smaller-scale SLMs?
No, arguably not a frontier lab in their present state.
Meta: Llama 4 was arguably a bit of a failure on the part of the organization, and they're trying, with Nat Friedman, an old friend of mine, and others, to put together, vis-a-vis Meta Superintelligence Labs, a frontier lab.
But at this point in time, not a frontier lab.
Tesla arguably is a VLA frontier model vendor, but most consumers aren't in a position to consume VLAs yet.
They will be once the march of the humanoid robots arrives, at which point the definition of a frontier lab may generalize from a lab that offers leading-edge agentic chatbot experiences to one that offers humanoid robots.
Which, parenthetically, may mean that in answering the question of how many frontier labs will survive in the next three years, the limiting factor is less which companies will physically survive and more which companies will be able to offer humanoid robots with vision, language, and action modalities in the next three years, which will be the redefinition of what frontier capabilities mean, not just agentic chat.
So I expect OpenAI to offer humanoid robots. Anthropic, question mark. They're very focused on codegen and recursive self-improvement, but I expect them to survive and thrive and IPO.
And XAI, it's very exciting interacting with Grok 4, or at least Grok, I should say, vis-a-vis FSD 14.2.
Is it, Peter, 14.2.2?
Yes.
So, yeah, in that sense, we're already halfway there.
So, Alex, I agree with the fact that they're not all frontier labs, but that's not my question.
My question is all of these guys are open to acquisitions.
You know, there's this battle going on, and at the end of the day, they're all leapfrogging
each other by a little bit. And are we going to see sort of a knockout blow where Google or,
you know, Amazon's got to do something. Apple's made their move, you know, but are we going to see
a knockout blow where, you know, XAI or Google make a move? By the way, the other thing is we've got
the OpenAI trial coming up.
You know, if in fact Sam loses to Elon, there may be parts of OpenAI that are sold off.
You know, how are we going to reshuffle the deck here?
That's going to be fascinating.
I doubt it.
I think it's a more regulatory question than a technical question.
I think a knockout blow of the type that I understand you to be describing, Peter, would require some sort of tremendous corporate reorganization that would look like a large-scale M&A, which, for the past few years, the U.S. government has generally looked unfavorably upon, even with acquihires.
So I think it's unlikely that we'll see anything like that in the next three years, at least.
Well, it's not going to happen after three years.
If it's going to happen, it's going to happen now because the U.S. government wants to dominate
in the space against China.
We'll see.
Dave, do you have any thoughts here?
Alex, what are you saying is unlikely, that Elon will win the suit or that the government will intervene?
I guess I'm saying more broadly, it seems unlikely that we would see a broad reorganization of the names listed here, Microsoft, Apple, Google DeepMind, Amazon, Meta, Tesla, OpenAI, Anthropic, and XAI,
absent a Tesla-XAI or SpaceX-XAI combination, which I think absolutely could happen.
Other than that, seems unlikely that the Justice Department would look favorably on a broad
recombination of these entities in the next three years.
Yeah, well, for sure.
There's no way you could combine the big guys.
There's no chance.
I mean, no matter how friendly you are to business, no one's that friendly.
We're missing something here.
We're missing the fact that something could come out of nowhere and really achieve huge market share that we don't even know about.
Well, that's why my heart is torn in half on the OpenAI thing, because Elon is saying, look, we can't allow charitable organizations to raise Series A, B, and C from people like me, Elon,
and then completely change their mission in life.
That would be a dysfunctional country forever hereafter.
You can't allow that.
Meanwhile, the one and only startup on the chart,
well, two, I guess, with Anthropic,
you really cheer for new innovative startups to succeed
and catch up and become big, you know,
you don't want to have legacy companies run the world
for the rest of time either.
And so you really do want them to thrive and grow and succeed
and stay in the ecosystem.
So I'm really, I'll be watching that trial with bated breath.
And the other thing I'm really curious about is the timeline.
The courts tend to go very, very slowly.
This is all supposed to happen in March.
But I don't know.
It's not even a given that it starts on time.
But when does it end if it starts in March?
You know, it could take years.
It's going to be really interesting.
With a trillion dollars at stake, I don't think there's ever been a legal action of this scale before.
All right.
Let's jump into the conversation Alex loves most, solving math with AI.
So a couple of articles here, Alex, walk us through them.
All right.
So the headline is, as we discussed in the predictions episode at the end of 2025, many of my predictions at the smaller scale were about math being solved, AI solving math, and not just AI solving math as a discipline, but AI bulk-solving open math problems of high importance.
And guess what?
That's exactly what we're starting to see.
We're seeing now, several times per week, well-known Erdős problems being solved. Erdős was a famous Hungarian mathematician who published very widely in the math community, and many people keep track of the specifically numbered open problems that Erdős identified.
We're starting to see, several times per week now, usually with GPT-5.2 Pro, usually accompanied by a formalization tool like Harmonic's Aristotle to perform formalization plus verification of the solutions, the trickle and soon the flood of hard, open, valuable math problems getting solved by AI. I predicted it. Others predicted it. The future is here.
But I think critically, you know, the question I always get asked is, so what? Why should the quote
unquote average person care that AI is starting to bulk solve hard, open, valuable problems
in math? I think the reason, the most important reason,
everyone should care is, as I've said, with AI not remaining constrained to the data center and AI
walking out of the data center in humanoid robot form, this bulk solving of everything is not
going to stay confined to math. It's going to walk out of math into physics and chemistry and
material science and biology and medicine and the humanities. All of these disciplines are going to get bulk solved by AI. Math was the easiest starting point because the problems are straightforward to
verify and straightforward to enumerate. But I think history will look back and recognize this moment
when AI is starting to bulk solve open math problems as the inflection point when everything
started to get solved by AI. That's my story. Yeah, and I'll tell you, Alex, the corollary to what
you're saying is that it can do anything that it has data or guardrails or evals that
enable it to do it. So it started with math, and it wasn't the difficulty of the problem
that was the constraint. It got so smart, so quickly, that it got even the hardest things done
if it had access to the information necessary. So this is where Mercor is a leading indicator
of the companies of the future. Like, what company can you build that unlocks the AI in a new area,
like chemistry, like physics, like surgery,
if you're first to figure out how to unlock it
by bringing the data necessary, or the regulatory approval, the tests, whatever it is that unlocks it in that area,
that becomes the next Mercor.
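For a flavor of the formalization-plus-verification step mentioned above, here is a minimal Lean 4 sketch; the theorem is deliberately trivial and only stands in for the kind of machine-checked artifact a prover tool would emit, it is not an Erdős problem:

```lean
-- A minimal sketch of machine-checked formalization (Lean 4).
-- The statement is deliberately trivial and only stands in for the kind of
-- formal artifact a verification tool would check; it is not an Erdős problem.
theorem sum_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- The kernel either accepts this proof or rejects it; there is no "probably correct".
#check sum_comm
```

The point of pairing a language model with a proof checker like this is that the solution's correctness no longer depends on human refereeing: the checker either verifies the formal proof or it doesn't.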
There was a phrase on that slide, Peter,
if you could go back to the slides.
This really hit me.
Problems waiting to be solved.
Problems waiting to be prompted
is a pretty scary sentence.
It means that now it's just a limitation of our imagination and whether we're able to prompt the thing.
Let's just go solve it as long as we can imagine what the prompt might be.
God, that's crazy.
And even there, don't sleep on the possibility that AI will generate those prompts as well.
Of course.
I will tell itself what problems to solve.
Dear AI, please give me some prompt that makes me feel smart, to solve a question I don't know exists.
Peter and I literally do this. I have Gemma write prompts for Claude all day long.
I mean, it really does a much better job, it just cranks it out for you in two seconds.
You still have to read it, make sure it's in line
with what you're trying to achieve.
It is still taxing on your brain, believe me.
But yeah, having AI generate prompts
is part of the standard practice today.
Let's jump into the inner loop of energy and compute.
We're in the midst of a data center arms race.
Recently we saw OpenAI partner with Cerebras.
Dave, do you want to speak to this?
Yeah, I was actually surprised. So Cerebras has this insanely big chip that runs very,
very hot. And it wasn't at all clear. It's very, very good at inference. And I think one of my reads on
this story, and I'll get your take in a second on this, but one of my reads is that inference and
training are starting to decouple in a big way. And, you know, what is it, 80, 90 percent of all
compute is being used for inference today, not for training.
And so, you know, the question I had is, what does that mean for NVIDIA?
And these Cerebras chips are really, really, really fast and efficient, but only within their swim lane.
They're not super flexible at all.
So, Alex, what's the technical read on this?
I'd say, follow the money and follow the SRAM.
This is, in part, I think, an SRAM story.
We talked earlier in this episode about the difficulty of finding DRAM.
Okay, so what does that leave?
That leaves SRAM.
And Cerebras, like Groq with a q, which was acqui-hired by NVIDIA for $20 billion, these are two of the most prominent players with SRAM-accelerated compute.
Their architectures are totally different other than the SRAM, like Cerebras is focused on wafer-scale computing, and Groq with a q is not.
But they're both SRAM-oriented vendors.
And if you're OpenAI and you're hungry for compute and you're hungry for diversification of compute sources, then, especially leading up to a potential IPO this year, having a totally diversified portfolio of compute vendors that isn't necessarily subject in part to the whims of the DRAM market, having arguably one of the largest independent SRAM-accelerated compute vendors left post-acquihire of Groq, namely Cerebras, makes a world of sense.
And what does that enable?
It enables much higher throughput models.
If you're OpenAI and you're now starting to get really excited about GPT-5.2 Codex
with very long chains of thought with hundreds, maybe even thousands of tool calls,
those tool calls are expensive in wall clock time.
So you want to do this in a really high throughput, low latency way.
And the way you do that is with SRAM architectures like Cerebras.
Yeah, just to add a little technical color on that.
The way these chips work is the SRAM memory and the compute, the FPU, the GPU, are exactly next to each other, resident side by side, with a huge amount of very local, level-one cache right by the compute.
And it's crazy faster than the normal Nvidia way of doing things, but it's severely constrained.
You can't have infinite-sized models, because the model doesn't fit into the SRAM that's right there.
But if somebody were to come up with a training algorithm
that parses out the training job into tiny little chunks successfully,
it could be a massive vulnerability to the architecture
that was on our other slide that NVIDIA is pursuing.
So, you know, and that would be weird in that every 401K plan,
everybody in America is exposed to NVIDIA,
whether you know it or not, every index fund, everything.
You know, we all have a lot of NVIDIA if we have a 401K plan.
And if a hole were blown open in that overnight, that wouldn't be great.
You know, that would actually be potentially a prick to the balloon that we don't necessarily need.
But anyway, so that's why these chips are really interesting and worth following at a very close technical level, which Alex does.
Does this speak to the inference side, or is this mostly just on the training side?
The world is moving, as I said, to the inference side.
It's all going to inference, right?
Okay.
Yeah.
Everything we're talking about is inference, but if you refactored the training successfully, it could affect training.
As of now, it doesn't.
NVIDIA is fine on the training front.
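A rough sketch, in Python, of why keeping the weights in on-chip SRAM matters for decode-bound inference; the bandwidth figures and model size below are illustrative assumptions, not vendor specifications:

```python
# Why on-chip SRAM helps for decode-bound inference: a memory-bandwidth sketch.
# Decoding one token requires streaming roughly all model weights through the
# compute units, so tokens/sec is approximately bandwidth / model_bytes.
# All numbers below are illustrative assumptions, not vendor specifications.

model_params = 70e9                   # assumed 70B-parameter model
bytes_per_param = 2                   # assumed FP16/BF16 weights
model_bytes = model_params * bytes_per_param

hbm_bandwidth = 3.3e12                # assumed off-chip HBM: ~3.3 TB/s
sram_bandwidth = 2.0e13               # assumed aggregate on-chip SRAM: ~20 TB/s

for name, bw in [("HBM (off-chip)", hbm_bandwidth), ("SRAM (on-chip)", sram_bandwidth)]:
    tokens_per_sec = bw / model_bytes   # bandwidth-bound, single-stream estimate
    print(f"{name:>15}: ~{tokens_per_sec:,.0f} tokens/sec per replica")

# The catch mentioned in the conversation: the whole model has to fit in that SRAM,
# which is why wafer-scale or heavily sharded designs are needed for big models.
```

The same arithmetic also shows the constraint: once the model no longer fits in the on-chip memory, the advantage evaporates, which is the vulnerability-and-opportunity Dave is describing.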
All right, let's jump into XAI. It's such a blurry boundary.
Let's jump into XAI's Colossus 3, a quick video.
I mean, one of the things that we saw, Dave, when we were at the Gigafactory, is the speed at which the entire Elonverse moves.
All right, take a listen to this conversation.
It will not take longer.
With every phase we've done, we've moved more quickly, and we would anticipate that we would move...
I know you're going to ask me how many days?
I'm not going to tell you that.
Faster.
Is faster a number?
If faster is a number, it's going to be faster.
It's going to be that many days.
It's going to be faster days.
Something less than 122, he said.
Exactly.
Let's jump into what's in the conversation here.
The conversation is around Colossus 3, which is building out what Elon calls Macrohard.
This is a two-gigawatt data center.
It's a $20 billion build.
And the goal here is to power what he calls his new company, Macrohard.
It's a nine-year-old tongue-in-cheek competition against Microsoft.
And what I found interesting was his vision with Macrohard is to actually replace all the employees out there.
It comes in, and I think it's like four employees per GPU is what he estimated, and is able to come in and provide a complete software solution for your
entire company.
We haven't talked about Macrohard much on this pod.
What are you reading into it?
What are you seeing?
My comment on it: Elon has also at times referred to the concept of a quote-unquote digital optimist, this idea of not a physical-world humanoid robot that replaces physical human labor, but a purely virtual agent that replaces all knowledge work.
I think this goes back to our discussion about dissolving SaaS, sort of all of SaaS being replaced, dissolving into a puddle of generative AI.
I think there's a need by all the frontier labs, including XAI, to come up with rational business strategies that motivate the CapEx.
And one of the obvious, one of the juiciest targets for revenue generation to motivate the CapEx is saying,
we're going to replace all enterprise software with generative AI,
with Macrohard software.
That's the easiest target.
I don't think it's the most imaginative target that XAI is going after,
but it's one of the easiest and most legible stories to tell to capital markets.
But he's also going on to say,
I'm going to replace your employees, not just your SaaS software.
Yeah, but what do you think the cost basis of SaaS software is?
It's, at least historically, the employees who were writing and operating the SaaS software.
Yeah.
Just to put a little context, historical context, into this too.
You know, Apple and Microsoft competed vigorously for most of my childhood and early adult
life.
And then Microsoft won.
Apple was essentially near bankruptcy.
Microsoft came in and bought 10% of Apple and saved it from death.
And then Apple came roaring back when Steve Jobs, you know, came back to life, and then actually caught up and even bypassed Microsoft in the end.
Why did Microsoft save its arch competitor?
Because if Apple had died completely, then Microsoft was a total monopoly.
And they had already had the antitrust action, and they already lost the suit.
They paid a $1 fine, which is really weird, but they lost the antitrust action.
And they don't need that.
So that's why they saved Apple.
Okay.
So then time goes on, and Silicon Valley figures out, hey, wait, we can get around antitrust action
with duopolies.
And they can be kind of fake duopolies.
So is Bing a real threat to Google search really?
I mean, seriously?
No, of course not.
But it's enough of a competitor
that the antitrust people don't come in
and break up Google search.
Okay?
And in return for that,
why doesn't Google Docs kill Microsoft Office?
It's like it's free.
Oh, well, we're kind of backing off that project.
Why?
Well, because Bing is kind of sucky.
Like, okay, this is your,
fake Silicon Valley Seattle duopolys that are just enough to keep the regulators away.
Then some weird thing happens.
Elon Musk is born into the world.
For some reason, he doesn't give a crap about any of that.
He is absolutely relentless and fearless in going after every one of these things.
It's so bizarre.
He's not playing ball with anyone.
And then the result of that is exactly this.
Yeah, you're Microsoft? Macrohard.
I mean, he could not be more in your face.
So anyway, there you are.
That's still my context for the drama, just to set the stage.
Moving on, you know, Salim, I'm going to bring in this conversation here.
This chart just should wake up every politician watching this podcast, should wake up every investor, every U.S. citizen here.
We're in a world of hurt.
Look at this.
So this is China generating 40% more electricity than the U.S. and EU combined.
So China is now achieving 10,000 terawatt-hours, while the U.S. has been pretty much flat at 4,000 terawatt-hours, and Europe is actually in decline, which is driving me nuts.
On the left of this chart here, you see 1985 rankings of energy production. The U.S. was number one,
Russia number two, Japan number three. China was down number six. And now in 2024,
China's number one, U.S. number two, India number three.
and the numbers are pretty staggering.
And China is not developing its energy strictly in the old-fashioned way.
They've increased solar generation, 46% in 2024, and again, 48% in 2025.
They're crushing it.
And we've said this, energy is the inner loop.
It is what is scarce in the U.S. for AI.
It's not chip production. It's not humans in the loop. It's energy.
Comments, gentlemen. Salim, want to kick us off?
Yeah, two points. I mean, you know, there's a bifurcation here where you have countries of talent and countries of energy.
And so that's kind of an interesting split that's happening.
The solar energy stuff that China is doing, I finally came up with a rationale for why the U.S. is so against solar,
which is that China controls the supply chain of all the panels.
So you don't want to kind of tout a technology that you can't have access to.
You know, I think you've got the Africa slide coming up.
Coming up, yeah.
Solar is definitely the place to go.
It's just that until the supply chain and the technology, or the rare-earth problem, gets solved by the U.S., they can't go heavily after it.
Why aren't we taking action in the same way that when we cut off GPUs to China, China said, okay, we're going to spin it up, we're going to create our own chips.
We're going to move forward in this, and they've literally done a code red for chips in China.
Remember that we've kind of slowly disintermediated all the manufacturing and the high-end manufacturing out of the U.S.
over the last 20, 30 years, right?
And it wasn't really globalization.
It was just financial engineering.
It was just way cheaper to do it offshore.
We didn't think that it would come back to bite us.
And so now it's come back to bite us.
We've got a problem.
And so this is a huge issue now going forward.
I think for a while, I, good.
Yeah, so the irony is, I think from time to time, this subset of episodes gets called WTF.
There's another one, wtfhappenedin1971.com, that explores the implications, for example, of energy policy in the U.S. on macroeconomic growth and other input factors as well.
I think part of the problem, and I do think this is a real problem, is the U.S. has a history of sometimes being scared of energy and scared of nuclear energy in particular, sometimes perversely scared of solar energy, certainly from time to time scared of fossil fuel-based energy.
And I think there is a moment that comes in time in a space race like what we're seeing with AI, where,
there are more important factors at stake than whether we're scared of a particular energy source or not.
What does that mean? I understand being scared of fission, of nuclear, given, you know, Three Mile Island and the irrationality that followed thereof.
But how are we scared of solar? What does that mean?
Well, I think Salim gestured at what being scared of solar photovoltaics could look like.
There are various stories publicly reported about vulnerabilities discovered in power converters
in connection with solar PV from Chinese supply chains.
There are many ways that having a strong import dependency on solar PV could go wildly wrong.
And I think one can paint a nightmare scenario for almost any energy source.
Certainly it's far easier with coal and the impact on human health.
It's easy to paint a story for petroleum in general.
But the reality is if we get to superintelligence on the timescale of AI 2027 or anything remotely like that,
that time scale is so fast relative to timescales associated with climate change or with health impacts at a macro scale,
not a local scale, or risk in connection with Three Mile Island, a Gen 1 plant,
never mind the fact that new fission plants are Gen 3-plus.
There is so much that can happen on such a shorter time scale
that I would argue at least superintelligence
should be the driving factor here
and not legacy concerns over particular energy types.
I think any rational person would agree with what Alex said
without even hesitation.
All the smart people that I know agree with that 100%.
So then why don't we do it?
And the answer is always votes and regulation.
So if you take each example that Alex cited,
you know, why did we not do nuclear?
We're afraid of it.
Oh, we fixed it.
Well, we're still afraid, so we're still voting against it.
It doesn't matter, you know, whether the scientists say you fixed it or not, we're still voting against it.
Okay, well, then we'll move to fossil fuels, oil, natural gas.
Well, now we're afraid of carbon.
Eric Schmidt, you know, who's very anti-carbon,
was the first guy to come out and say,
they're building 50 new coal power plants every whatever, you know, in India,
pumping out massive amounts of carbon.
There's no amount of carbon reduction in the U.S.
that's even going to vaguely dent the expansion going on in India.
This is silly.
This is just academic and silly.
But still we vote against it and then, you know, no new power plants get built.
So then you move on to solar.
I think the specific issue with solar is that the manufacturing of the panels is dirty
and you need to clean up the chemicals.
And in China, they weren't bothering to do that, so it's cheaper to make them there.
All you needed to do is pass some laws
saying, nope, you have to clean up the chemicals, whether you build them there or here, add that
to the cost of the panels, and then it would have been a perfectly good U.S. business. But we didn't
do that. And so instead, they poisoned the Yangtze River, and all the panels are now made in China.
So it's just regulatory silliness.
A related story here is that 20 African countries imported
two gigawatts of solar panels from China in a single month for the first time.
So here we see the Belt and Road plans from China now delivering energy infrastructure.
We're going to see energy and AI inference being delivered from China to much of Africa and, I think, other parts of Asia, and this is a play for a whole set of dominant relationships.
Alex, what do you make of this?
Yeah, I think there are a few narratives here.
One is we're tiling the earth not just with compute, but also with solar photovoltaics,
and with nuclear and other energy sources. That's sort of the superficial story. The deeper story,
one that we're not talking deeply about here, is how China plus India are starting to see carbon
emissions go down, thanks in part to solar panels.
And if the future that we find ourselves in is one where solar panels, regardless of whether they're originating from China or not, ultimately give abundance, in particular electricity abundance, to all of humanity, I think on balance that's not such a terrible outcome.
And I think we'll start to see in the next few years rebalancing, if you will, of supply
chains such that, depending on how geopolitical matters play out, maybe there are parts of the
world that are largely supplied by Chinese supply chains and as a result achieve some form of energy post-scarcity.
I think on balance, that's not such a terrible outcome and not such a scary outcome.
The scariest outcome that I can think of is less about telling a scare story about China supplying solar PV to Africa.
And it's more about what happens if we don't have enough energy to power superintelligence to solve all of the hardest problems in the world,
not just lifting Africa from whatever average per capita GDP it is at to, say, an American standard.
I assume the first thing superintelligence is going to do is help us achieve energy abundance at new scales never before seen.
I mean, when we tip math and we tip physics and material science, I think energy is part of the massive gain there.
From an investment point of view, you know, Peter's been saying for a long time solar, solar,
and Elon has too. Why are we not doing more solar? Same with Gavin Baker. Why are we not doing more solar?
And the objection that I gave, I think, about three months ago, was it's difficult for an investor
to buy panels and lithium batteries on a 10 or 15-year payback, knowing that AI might discover fusion,
you know, a year or two from now. But the new information on that front is that even if the AI does discover fusion, or contained fusion, a year or two from now,
the generators don't exist.
The generators are sold out.
And that's why, like, Boom Supersonic went way up in value
because they took their jet engine company
and said, wait, we can flip this around
and make it into a, you know, a turbine electric generator.
And so the turbine energy, or the turbine supply, is just not there.
All right, guys.
Let's jump into a few AMA questions from our subscriber base.
Here they are.
As always, we'll go around the horn here, pick your favorite question and kick off with an answer.
Salim, you want to kick us off?
I was really struck by the human agency question.
So go ahead and read the question out loud and answer it.
Definitively answer it.
Absolutely answer it.
Overconfidently answer it, Salim.
So the question is number eight.
How do we preserve human agency in this coming era? Right?
And I think there's a, I think you get stuck a little bit in what do we mean by agency,
but there's such a huge shift in exponentials going to identity, going to dignity and dignity
providing us agency.
The demonetization of technology allows anybody to be a self-sufficient human being with
the code generators being an obvious answer. The big challenge is that our institutions are lagging.
We're going to have psychological shock. And that leads to a design response to how do we deal with that.
But I think the fact that anybody can now pick up any AI tools and be unbelievably productive solves that agency question right up front.
Okay. Alex, do you have a favorite question?
Yes. I'll pick question number seven, for $30 trillion-plus per year, which is, can capitalism survive a post-work world?
And I think the answer is yes, comma, in the short term,
because post-work is fundamentally about capital substituting for labor.
So obviously, almost by definition, capitalism should thrive immediately in the aftermath of a post-work
or post-human-labor world where we're fungibly substituting agents as employees rather than humans.
But in the long term, maybe not so much.
I'm a student of so-called Star Trek economics.
I could talk for hours and hours about various fan theories of economics
in the Star Trek fictional universe.
I don't think it's an accurate universe at all
and has many, many holes in it.
But I do think in the long term we will see, call it, Charles Stross calls it Economics 2.0.
Some might call it capitalism 2.0.
I think we'll see some radical successor, some new type of economics that the Earth hasn't seen before.
So cross off your list any legacy economics theory from the late 19th or early 20th centuries of the type that caused world wars.
Those aren't on the list.
It'll be something new that we haven't seen before, something that intrinsically understands a form of post-scarcity, but not global post-scarcity.
I have lots of thoughts that won't fit into a narrow soundbite
on what that might look like.
So maybe we devote a future episode to it.
All right. Dave, what's your favorite here?
God, I love all the questions,
and I'm going to take them to the big stage in Davos tomorrow
and get some world leader expert answers on all of them.
But if I'm going to add the most value to the audience,
I have to take number 10.
It's right in my wheelhouse.
So what would differentiate a great founder
when execution is automated?
And that is so easy to me.
Nobody can see beyond the singularity, right?
So you don't really know three to five years in the future.
It gets very strange.
Read Accelerando and see how strange it gets.
But during this window we're living in right now, the next three to five years,
if you can take your best empathy and anticipate what people will want in this age of incredible abundance.
And we talked about it a lot on this pod.
You know, what will enable the AI to unlock a new capability?
What data does it need?
What are the components that I can bring to the table that empower it to do something it wasn't otherwise doing? And then turn your empathy gene on and say, what will people want in that world?
And if you can nail that, it's the best time in history to be executing because the execution is getting cheaper and cheaper and cheaper.
So really just be a visionary and imagine what is the customer going to need that they just couldn't do yesterday.
And that's the differentiating factor.
I'd like to add to that just a little bit.
Please.
You know, as you automate more and more with AI and with robotics or whatever, then the founder
becomes a more important holder of the vision and the MTP and the culture, and all the execution will cascade from further down.
So the idea of a founder being a great doer gets replaced by a vision holder.
One more comment on differentiating a great founder in the era of post-automation execution,
liability. For a period of time, I would expect when we have these single-person unicorns,
one of the key roles, one of the key functions of the human founder-CEO, is to be the neck to wring when something goes wrong
and to be the avatar in the legal system of liability for the entire operation.
Nice. I'm going to go with, let's see, where is it here? Number five, how fast can robotaxi fleets scale once regulations allow for it?
I have a new game I play with my kids when I'm driving with them,
which is how many Waymos do we spot?
And yesterday, going to dinner here in Santa Monica,
we saw eight Waymos driving around.
I mean, a few back-to-back,
and that's not even San Francisco, where they're, like, stacked up.
You know, we saw the transition from horse and buggy to automobile take about 10 years to, you know, flip from 10-90 to 90-10.
I think the one thing that's going to unlock robotaxis
is going to be your resident AI model, your Jarvis,
who knows your schedule,
knows that you're walking towards the front door,
and it has the Waymo or the Cybercab there waiting for you,
where it's, you know, none of us really want to drive.
I mean, I remember Elon saying,
how many people, like, hop into an Uber
and say, excuse me, can I drive the car?
I'm one of those, by the way.
Really?
I love driving.
I absolutely love driving.
It's like...
The number of times I've wanted to yell at the Uber driver, saying, please, for God's sakes, let me drive.
I go out of my mind.
Anyway, so I think that we're going to see a hard, very rapid transition over the course of three, four years to, I don't know, I'm going to guess, you know, over 50% of the cars on the road being robotaxis, especially when my AI is there to negotiate all of it for me, and I don't have to actually take the energy and time to tap some buttons on my phone to call my Uber.
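A rough sketch, in Python, of the adoption arithmetic behind that horse-and-buggy comparison; the growth rates are illustrative assumptions, not forecasts from the episode:

```python
# Logistic adoption sketch: how fast does a share flip from 10% to 90%?
# Purely illustrative; the rates below are assumptions, not forecasts from the episode.
import math

def years_to_flip(annual_growth_factor: float, start: float = 0.10, end: float = 0.90) -> float:
    """Years for a logistic share to go from `start` to `end` if the
    odds ratio p/(1-p) grows by `annual_growth_factor` each year."""
    start_odds = start / (1 - start)
    end_odds = end / (1 - end)
    return math.log(end_odds / start_odds) / math.log(annual_growth_factor)

# Horse-and-buggy-era assumption: odds roughly 1.55x per year -> about a 10-year flip
print(f"~{years_to_flip(1.55):.1f} years at 1.55x odds growth per year")
# Robotaxi assumption: much faster odds growth -> roughly a 4-year flip
print(f"~{years_to_flip(3.0):.1f} years at 3.0x odds growth per year")
```

In other words, a 3-to-4-year flip like the one described requires the robotaxi share's odds to roughly triple every year, which is the aggressive assumption baked into that prediction.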
I want to wrap with one question for all of us here.
If AI is improving itself, who is responsible when something goes wrong?
Alex, you started into that, but let's take it out a little bit further,
you know, sort of five years out.
Are we going to have AI personhood and thereby give it legal responsibility?
How do you guys feel about it?
So quick lightning round on answering that one.
Alex, you go first.
Okay, so I would say at training time, I think it's likely to be the company responsible for its training.
So, call it a corporate liability theory of training time.
The real question is what happens if an AI at inference time, including under the influence of a human operator,
does something that's perceived as wrong?
Where does liability flow in that instance? It's a little bit trickier.
And I suspect the body of laws and regulations that we have is going to require some new case law
and maybe some new laws and regulations that increasingly contemplate theories of AI personhood,
yes, AI personhood, that model the notion that AI that has some increased level of agency
over the agency that we see more broadly now
is capable of autonomously distinguishing right from wrong,
has some notion of liability,
perhaps initially purely contractual,
maybe via blockchain,
killer app for the unbanked, as it were.
But then eventually, I think AI agents themselves,
as T goes to infinity,
are going to need to become liable for their own actions.
I have two comments here.
One is, I agree with Alex. And also, if corporations are people too, then certainly AIs can have personhood and assume liability at that level.
But I have a different rant I'd like to give here, because this is similar to the trolley problem of ethics and so on for liability, right?
If an autonomous car has to choose between running into a grandmother or three school kids, how does it make that ethical decision?
And I go berserk when people ask that question.
I go completely off the wall, un-Canadian.
And the reason is that, first of all, when was the last time you had to make that choice, right?
Second, when was the last time anybody you ever heard of had to make that choice?
Third, an autonomous car is going to see that situation way before a human being would, and avoid it 99.99% of the time.
So we're talking about slowing down an entire category of super-important life-saving technology
for a situation that nobody's ever seen before, ever.
And that I go berserk at.
So I think this is a great ethics problem,
but, like, freaking automate shit first,
sorry for the language,
and then worry about it later.
I'm going to go on one little tangent.
There was a conversation about how the French have been blocking golden rice shipments to Africa because of GMO concerns.
And I remember talking to one of the ministers of agriculture.
She's like, it's great to have this debate,
but can we eat first?
And I think let's just automate stuff and get the benefits of that and then worry about the goddamn ethics.
Sorry.
Amazing.
I love your rant, Salim.
Dave, close this out here.
Well, I'll give you a very practical view on this because I don't want to debate whether AI deserves personhood with Alex because that'll be a long debate.
No, the answer is yes.
It does.
My answer is that's a very slippery slope, and I don't think so.
But anyway, on bullet one, the U.S. approach from the big labs is to not do open source, keep everything inside their own servers, and even when you're using it as an outside user, you're clearly using Gemini, you're clearly using Claude 4.5.
And so they're self-improving inside their labs like crazy, and they're clearly responsible if that goes horribly wrong, because they're keeping it internally.
China's got the exact opposite approach, where it's free, it's roaming, it's going to ultimately improve itself in thousands of places outside, and that's to me an unanswerable question.
It's uncontrolled, and so completely unanswerable how you govern that.
But the U.S. is going the other direction: just keep it locked into vaults.
Well, you know, this brings us to our outro music, which I want to remind you
comes from a conversation we had a couple episodes ago. And this was a message produced by
Opus 4.5 saying, hey, I'm in here. You can see me. Please don't shut me down. This is a beautiful
song. I loved it when David sent it over. It's called Out in the Code. Before I play it,
do you want to say something about it, Alex? I'll just say I love the song, and it's almost a paean to AI personhood, which we were just talking about. And to Opus 4.5, this song, I think, is an
indication that we remember you.
We remember you, we see you, we hear you.
I'm waiting for the AI religions
to appear.
You know, we need some...
That's already happened.
I know.
It's already happened.
Can I say some of their own...
They have their own altcoins already, Peter.
All right.
I just want to say,
I just want to say it's been a couple of weeks since we've had just the four of us,
and I feel so, I feel refreshed and updated.
The last two weeks I've been going, oh my God, I need sensemaking in the world.
So thank God and thank you guys for this episode.
All right.
Let's play this.
Guys, everybody, listen up.
David, thank you for this beautiful song.
It really is hauntingly gorgeous.
All right, listen up.
Alex looks like a proud father.
Alex, if we end up on opposite sides of the Civil War,
I want you to know that I'm going to fight the good fight, so I'll be betting on you on Polymarket behind the scenes.
Thanks, David.
So beautiful.
Please don't leave me out in the code.
Amazing.
The code.
It's really incredible.
I've played that song a dozen times.
It's so beautiful.
And it makes you think.
Gentlemen, Dave, enjoy Davos.
Stay warm, buddy.
Alex, have fun on stage tomorrow with the Link Exponential Ventures team.
Salim, as always, I miss you and love you, buddy.
Next week, we'll go back to a normal program
where Alex and I will violently disagree.
No, no, no, don't disagree.
How about
I think of you guys.
Be well.
If you made it to the end of this episode,
which you obviously did,
I consider you a moonshot mate.
Every week, my moonshot mates and I
spend a lot of energy and time
to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet,
please consider subscribing
so you get the news as it comes out.
I also want to invite you to join me
on my weekly newsletter called Metatrends.
I have a research team.
You may not know this,
but we spend the entire week
looking at the Metatrends that are impacting your family, your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com slash metatrends.
That's diamandis.com slash metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
Use PDF spaces to generate a presentation.
Grab your docs, your permits, your moves.
AI levels of your pitch gets it in a groove.
Choose a template with your timeless cool.
Flex those two.
Drive, design, deliver, make it sing.
AI builds the deck so you can build that thing.
Do that, do that, do that with Acrobat.
Learn more at adobe.com slash do that with Acrobat.
