Moonshots with Peter Diamandis - Davos 2026: The US-China AI Race, GPU Diplomacy, and Robots Walking the Streets | #225
Episode Date: January 27, 2026

In this episode, the mates discuss Davos 2026. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding. Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Connect with Peter: X, Instagram. Connect with Dave: X, LinkedIn. Connect with Salim: X. Join Salim's Workshop to build your ExO. Connect with Alex: Website, LinkedIn, X, Email.

Listen to MOONSHOTS: Apple, YouTube

*Recorded on January 24th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
A lot happening at Davos, a lot of conversations that are important for us to recount here.
In past years, it was dominated by politicians and economic policy.
Then it kind of moved to internet a little bit.
This year, all AI.
We are knocking on the door of these incredible capabilities, right?
The ability to build basically machines out of sand.
Maybe it would be good to have a slightly slower pace than we're currently predicting,
even my timelines, so that we can get this right.
Should we slow down? What's the path humanity should take?
I think what's likely to happen post-AGI is...
Where do you see the U.S. and China right now in the AI race?
I still think that the U.S. is in the lead. I think that our models are better, our chips are better.
But they do have other advantages. They are spinning up power generation faster than we are. That's one area.
If you believe that energy is at the heart of it and is the core of the inner loop, China's going to go way ahead anyway.
But I think the real differentiator in the race is going to be
application-layer dominance, not frontier benchmarks.
The problem is that you always need a bad guy in a movie.
Welcome everybody to Moonshots.
Another episode of WTF here with DB2, AWG, and Salim.
Salim, I guess I have to use a second initial. What's your middle initial,
Salim?
I don't have one.
Ah, all right.
You're kidding.
S-I.
Okay.
We're going to have to adopt one for you.
Anyway, welcome to probably one of the most important podcasts
around today, the podcast that helps you get ready for the future, ready for the supersonic tsunami.
And we spend 20 hours a week summarizing what's going on so that you can get it in a good 90-minute
session. Dave and AWG, you're just back from Davos, and that's the theme of our show today.
A lot happening at Davos, a lot of conversations that are important for us to recount here.
So tell us, what was it like?
It's a lot of time zones away
Yeah
A lot of...
Actually, Larry Fink,
who is the organizer this year,
dropped, you know,
"We should do this in Detroit next year."
How so?
That did not go over well
across Europe.
"We want to include, you know,
more views from around the world.
This exclusive resort town is just too swank.
Let's go to Detroit."
Well, here's some fun photos of you guys.
It was cold and sunny.
Is that Sandy Pentland?
Yeah
Sandy Pentland with his brass rat. Yeah, yeah, always great to see him. He's a long-timer. He's been there, my God, since the dawn of Davos.
Yeah, I would say there were robots on the streets. There were billionaires eating out of food trucks, and there were anti-aircraft guns on the ice pond.
Nice. But with Trump coming to town, there was actually much more security than in the past six years. Three thousand armed people in fatigues with machine guns going all the way halfway to Zurich,
actually. I was surprised. But you know, Donald Trump was there this year, so I guess
they ratcheted things up. He attracts a crowd. So Dave and Alex, what was the vibe like?
What did it feel like there? I mean, you've been there numerous times. I was there once as a
speaker and it was just overwhelming beyond belief in terms of trying to find people there.
It's a zoo, isn't it? Yeah, it's a zoo. You know, if I were to characterize what was different
this year, it's kind of like the quintessential example, as Alex was there. You know,
In past years, it was dominated by politicians and economic policy.
Then it kind of moved to Internet a little bit.
This year, all AI, completely dominated by AI, which is encouraging.
No one had anything intelligent to say other than the usual suspects, Dario and Demis,
and the people we see all the time.
But at least they were listening.
Global leaders, presidents of pretty much every country listening to a pretty much all AI dialogue.
So that gives you something.
hope that people are beginning to get ready. And the fact that Alex was there is kind of a bellwether
of where it's likely to go. So Alex, I know you want to... Alex, what was it like? Yeah, it was, of course,
amazing. And there really were robots in the streets. A couple of frontier labs had houses there.
Imagine for those who haven't had this experience, imagine almost a World's Fair or World Expo set in the
Alps with major governments having their own houses, literally taking over storefronts, restaurants,
convenience stores. Parenthetically, if anyone wants to make a killing at Davos next year, set up a
restaurant. It is nearly impossible to find good food short of food trucks at Davos. Someone will make
an absolute killing, setting up a restaurant that's still a restaurant during Davos Week.
But imagine a world's fair with the governments and the frontier labs.
and some of the major corporations and tech companies all on equal footing, all hijacking or invasion of the body snatchers style, consuming the storefronts of an alpine resort town and what you get is Davos.
I had an amazing experience. I moderated something like eight or ten different events with OpenAI executives, Deep Mind executives.
I did a fun panel with Llion Jones, one of the co-creators of the transformer architecture.
You can see here one of my photos with U.S. Undersecretary of State Sarah Rogers, and some absolutely amazing
and, I think, hopefully fruitful discussions about the era of GPU diplomacy that we find ourselves in,
discussions with Jack Hidary of SandboxAQ and Daniela Rus, the head of MIT's computer science and AI lab.
I think it was an absolutely incredible experience, but I also think to Dave's point, AI is the story. It's no longer a world where, and maybe just one more beat on this, it's no longer a world where governments get together to talk about governing the governed. It's now, I think, a world where AI and superintelligence is the story of the world economy. And I think just walking down,
the street and seeing the DHL robodogs walking around or the humanoid robots taking a stroll
from AI House. I think that exemplified what we're seeing in the global economy more broadly.
Hey everybody, you may not know this, but I've got an incredible research team. And every week,
myself and my research team study the metatrends that are impacting the world. Topics like
computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these metatrend
reports I put out once a week,
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week,
go to diamandis.com slash metatrends.
That's D-I-A-M-A-N-D-I-S dot com slash metatrends.
And that's a good transition to let's look at a little bit of the content from Davos.
So what we've done is we've cherry-picked a number of conversations.
There's a huge amount of this flowing through X and various platforms.
We've chosen a few to bring up here, listen to, and discuss amongst our moonshot mates.
Let's begin with Dario Amodei, the CEO of Anthropic.
He was a rock star during Davos, as was Jensen Huang.
Let's listen to Dario first on the economy.
If you look at what AI is capable of, if you have these models that are getting more and more capable
across a wide range of cognitive tasks, you know, you look at all labor in the economy.
That's something like $50 trillion a year.
So I could easily imagine that the revenue of the industry or even single companies,
if it's even 10% of that, could be $5 trillion a year.
Now, that's something we haven't seen in the history of the world.
That creates all kinds of problems as well as creating all kinds of growth.
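Dario's back-of-envelope can be sketched in a few lines. The $50 trillion global labor figure and the "even 10%" capture rate are his stated assumptions; this is just the arithmetic made explicit:

```python
# Back-of-envelope for Dario's scenario: AI capturing 10% of global labor value.
GLOBAL_LABOR_VALUE = 50e12  # ~$50 trillion/year, Dario's figure for all labor
AI_CAPTURE_RATE = 0.10      # his "even 10% of that" lower-bound scenario

ai_revenue = GLOBAL_LABOR_VALUE * AI_CAPTURE_RATE
print(f"Implied AI industry revenue: ${ai_revenue / 1e12:.0f} trillion/year")
```

For scale, a single $5 trillion/year revenue stream would rival the annual GDP of Japan or Germany, which is why he says it has no precedent.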
And on to Jensen Huang, CEO of NVIDIA, here.
The largest infrastructure buildout in human history.
We're now a few hundred billion dollars into it.
That's it.
We're a few hundred billion dollars into it.
Larry and I, we get the opportunity to work on many projects together.
There are trillions of dollars of infrastructure that needs to be built out.
And it's sensible.
It's sensible because all of these contexts have to be processed so that the AI,
so that the models can generate the intelligence necessary to power the applications that ultimately sit on top.
And so we have chip factories, computer factories, and AI factories all being built around the world.
So I saw a text this morning from Elon saying he expects to see $100 trillion company valuations coming up in the next few years.
I think he said 2030.
When asked which company, it was "Space-la," like the combination of SpaceX and Tesla.
So a trillion here or a trillion there.
Salim, what are you thinking about these numbers being thrown around in the economy?
I start to look at this as becoming meaningless, right?
In an abundance environment, kind of whether it's 10 trillion or 100 trillion,
the whole thing becomes arbitrary and is not a real gauge of strength.
What I found really interesting about the NVIDIA and the Dario conversation was they're on opposite ends of the stack.
And they're kind of really saying roughly the same thing.
Because compute's becoming the new oil and the new electricity.
And this is really just incredible stuff.
It was amazing watching the output of the overall event from a distance.
Well, just in terms of tone, too, Jensen sounds like he's talking to kindergartners.
You know, he's talking so slowly.
And, you know, things that we talked about on the podcast are 100 times more detailed and sophisticated.
That gives you a sense of the audience.
Like, that's the level the world leaders need.
He has to speak to kindergartners.
And Dario's example of the, look, the global labor market is 50 trillion.
Everybody get that?
Okay.
Five trillion is an extreme lower bound of the part that can move to AI today.
That's incredibly low.
But even that justifies everything we're talking about now.
You guys get it?
You know, it's the cadence of like, please try to keep up with me here.
So there's a lot of that in this particular forum.
In my mind, this is all borderline obvious.
of course capital in the form of AI is going to continue to substitute for labor,
and of course trillions of dollars of AI infrastructure capital buildout
will be needed to substitute for ultimately tens,
maybe eventually hundreds of trillions of dollars of services and human labor per year.
So I think to Dave's point, this really is just spelling out the basic arithmetic
of what a post-human economy looks like.
Yeah. A trillion here, a trillion there. All right, let's move on. I found this conversation we're about to share with everybody
here pretty fascinating. For me, one of the highlights was seeing Demis Hassabis, the CEO of DeepMind,
and Dario Amodei, the CEO of Anthropic, on stage together and having these conversations.
they're both viewed, I think, in the industry as good human individuals who care about society,
and it's not about the revenue.
It's not about, you know, how many users.
Let's play these two, and then we'll talk about it.
They're discussing the risks here.
And should we slow down? What's the path humanity should take?
All right.
First up, Demis Hassabis.
It's reasonable, there's fear and there's worries about these things like jobs and livelihoods.
I think there's a couple of things that, I mean, it's going to be very complicated
over the next few years, I think, geopolitically, but also the various factors here.
Like we want to solve all disease, cure diseases, come up with new energy sources.
I think maybe the balance of what the industry is doing is not enough balance towards
those types of activities.
I think we should have a lot more examples,
and I know Dario agrees with me, of AlphaFold-like things that are sort of unequivocal goods in the world.
And I think actually it's incumbent on the industry and all of us leading players to show that
more, demonstrate that, not just talk about it, but demonstrate that.
But then it's going to come with these other unintended disruptions.
And if we can, maybe it would be good to have a slightly slower pace than we're currently predicting, even my timelines, so that we can get this right societally.
But that would require some coordination that is hard.
All right. Now over to Dario Amodei, knocking at the door of risks.
AI is going to be incredibly powerful. I think Demis and I, you know, kind of agree on that. It's just a question of exactly when.
And because it's incredibly powerful, it will do all these wonderful things, you know, it will help us cure cancer,
help us to eradicate tropical diseases. It will help us understand the universe, but there
are these, you know, immense and grave risks that, you know, not that we can't address them. I'm not a
doomer. We are knocking on the door of these incredible capabilities, right? The ability to
build basically machines out of sand. But, you know, my view is this is happening so fast
and is such a crisis, we should be devoting almost all of our effort to thinking about how
to get through this. Amazing clarity of message. Dave, was this the sense you had over there?
Yeah, absolutely. So, you know, what's interesting is that, you know, one of our partners,
Mira Wilczek, presented a book written by AI back in 2020 on that main stage. And the world
kind of paid attention, and then COVID hit. And the entire topic of the World Economic Forum and
global talk was lockdown, COVID, whatever. And so
everybody kind of forgot about AI as imminent.
Now, they're used to the topic du jour coming and going.
So a lot of the audience is like, yeah, yeah, it'll blow over like everything else.
But of course, AI won't blow over.
AI will accelerate.
And next year, you know, whatever Dario and Demis are saying this year will be magnified
100 or 1,000 X next year.
But I don't think that necessarily penetrated everybody's brain.
When you look around the audience, they're like, yeah, you know, well, 5 trillion, 10 trillion.
But I think it's interesting that Demis, you know, is taking kind of a conservative timeline view, and he's saying outer bound 10 years, somewhere in the five-to-10-year timeline.
And in global geopolitics, that's like tomorrow.
That is such a short timeline.
And he's clear that that's the outer bound.
So in the past, the two guys have debated, you know, is it two years or 10 years?
Now they're agreeing and saying, what the hell is the difference?
We're talking about AI that can do absolutely any task that a human being can do.
somewhere between one and 10 years.
It doesn't matter whether it's one or 10.
What matters is: is anybody in this room ready?
So I love the fact that they're at least trying
to get the global leaders to start to think
in terms of what massive scale of disruption is imminent
and start to generate your plan like now.
And so it's great that they're at least trying.
And this conversation about slowing down,
which popped up years ago, Alex, did you hear that at all?
I mean, I can't imagine there is any option to slow down.
The economic race is so strong.
Yeah, I did hear, so right after leaving Davos,
I took a meeting with the MIT Alumni Association of Switzerland in Zurich,
and I heard it from them.
I heard desire for slowing things down or some sort of radical wealth redistribution plan.
Heard very little AI optimism from them, which was stunning, footnote there.
But on the risks side of this, I think we're almost burying the lede.
In my mind, like I write about this every day in my daily newsletter or briefing, whatever you want to call it,
that recursive self-improvement, I think, is already priced into the near term.
We're either already, to Dario's point, in the era of recursive self-improvement, or it's coming later this year.
But we're basically there.
And I think advances with Claude Opus 4.5 and other models already reflect
that we're approximately in the era of recursive self-improvement already.
So over-indexing on all of these things that are about to be here,
that's already priced into the market.
I think the actual news here, which we're burying a little bit, was elsewhere in Demis' comments
where he talked about some of the problems that he wants to solve,
less about the risks that we face, more about the problems that will be unlocked.
And I think the most interesting thing I heard Demis say was that he's interested in exploring
the stars with superintelligence.
And I think that's a super important point.
And I think that if we zoom out and we try not to adopt the mindset of framing everything in
terms of a risk-oriented mentality and rather instead focus on a radical economic growth-oriented
mentality, then exploring the stars with AI really is one of the biggest questions.
And I would underline Demis's point further and say, in my mind,
if it turns out that the physics of our universe are friendly toward
interstellar exploration in the style in which Demis seems to be gesturing, that's the ballgame.
That determines whether we get Dyson spheres or Dyson swarms, I should say, in the next two to three decades.
If the universe is fundamentally unfriendly to interstellar exploration, we're probably more or less
stuck near our home star and disassembling the planets.
or if interstellar exploration, to Demis's point, really is something that DeepMind or some other frontier lab can unlock with AI, then we get the galaxy. We get the universe. It's a much more interesting future. And so I think that's the real story here, not like risk yes, risk no, recursive self-improvement yes or no. That's all priced in already.
Well, and to your point, Alex, too.
Star Trek, yes.
Yeah, exactly. And Demis mentioned it in the context of what is the goal of humanity post-
AGI, you know, what keeps our massive transformative purpose running? And the answers are usually
humanity-wide. They're not nation-wide. And WEF and Davos are all built around nations, nations taxing
each other, and nations interacting with each other. And so, you know, if you look across the crowd,
you know, roughly 200 countries represented, all but two of them are miles behind in this
race. And so culturally, the vast majority of the people you meet are like, I wish this would all
slow down. But in the back of their mind, it's like, I wish this would all slow down so that my
country at least is a player and is relevant. But I think what's likely to happen post-AGI
is that the way society is organized cuts across country boundaries easily.
So what did you make of the risk conversation here?
I think, with my kind of optimism hat and bias, I kind of downplay it radically, because technology's
always been a major driver of progress. The big challenge with civilization is how do you extract
the promise of technology without the peril? We've done a pretty damn good job of it thus far,
all things considered, right? There were two things that struck me in this conversation.
This was about my favorite conversation of the week that I tracked. One was that what I'm
hearing in their voices is the unbelievable fatigue at the metabolism they're having to operate at.
I'm also hearing respect for each other in their voices.
That was there, no matter what, you can see a huge amount of brotherly love and mutual respect.
Fantastic.
And you couldn't ask for more honorable people at the forefront of this field.
So that's just amazing.
And this speaks to the same thing we saw for the large part of the Internet age with Larry and Sergey and crew.
Here, one was the fatigue I got from them, because they're like, somebody please slow this down so we can take a breather.
I think because they're seeing that in a year it'll be 100 times faster, right?
This is the slowest it's ever going to be.
And they're looking at this going, holy crap.
So I think that's one part of it.
I really love the massive optimism and the star conversation.
Fantastic.
One day, I hope to be around when Saturn actually gets disassembled and Alex goes,
"See, we can do that."
But I think overall, humanity has the opportunity to navigate this.
But there's a massive disconnect here, which is, as Dave mentioned, Davos is oriented around
nation states.
Nation states are an artifact of a scarcity environment.
Nation states cannot compute abundance.
They cannot operate in that paradigm.
So we need a completely different governance model.
For me, the biggest thing that was highlighted here was the fact that the construct of the UN
and nation-states is completely irrelevant to what's coming, as AI cuts across every category,
every economy. It's a global issue, it's a civilizational issue. Forget who wins the race,
etc., etc.
Can I add a personal note related to the fatigue, too? You know, this is my first time in my
life where I'm walking down the street and people are like, hey, you're that guy from that
podcast. Hey, I know you. And so I actually ended up putting on my ski hat and my goggles
walking down the street. Because I'm not that kind of guy. I don't really want to be.
What a rough life, Dave. I don't mean it that way. I should...
But for Dario and Demis.
Minor slubbery.
Yeah, you've been there for a long time, Peter. But Demis in particular, he's a researcher,
and he cares about technology research, AI at the bits-and-bytes level. And now he's been
drawn into the global stage as the spokesperson for ethics and humanity. And Dario never thought
he'd be a CEO at all. In that interview he gave a little while ago,
He's like, look, I'm a research guy.
I got drawn into this function.
And so that fatigue you feel is the byproduct of being sucked into this vortex where there's a huge void of leadership.
And Alex, you're going to experience this very soon, too.
The demand for what you can articulate is going up so quickly that it'll pull you in.
And, you know, if you're not ready for it, then, you know, it's tiring.
It's tiring, you know, getting the litany of questions,
a lot of which are the same exact ones you heard the day before.
And you just have to get used to the demand.
So what I'm hearing you say, Dave, is I have to brace myself for the moonshot paparazzi
on the rough slopes of Aspen and Vail and Davos, just bracing myself now.
Hey, I know you.
You're at AWG.
Listen, Peter and I have had this exponential conversation, answering the same goddamn questions, for 20 years.
Now, Ray's been doing it for close to 60 years.
But we come at it with enthusiasm every time, because,
it matters. You know, there was one other conversation I have to point out here. I heard Dario
talking about Sam Altman and OpenAI. And it was interesting. He goes, you know,
at Anthropic here, we're serving businesses, we're giving them value, we're building things.
We're not trying to engage a billion people with sycophantic conversations, right?
So it's a very interesting point of view: when AI, in the case of OpenAI, is serving individuals
and just trying to make them feel good about themselves
and trying to make it addictive,
which is what Dario was saying,
you get one result, versus if you're using AI to serve business
and deliver real value, you get a different result.
I was taken aback by that conversation.
I didn't get the clip for it here,
but I thought it was really important.
I think there's also, though,
to be fair to OpenAI and, to some extent,
Grok and xAI,
I think that's a bit of a self-serving argument.
Sure.
I parse that as a post-hoc justification.
Anthropic happens to be doing very well in the enterprise.
And I definitely perceive a post-hoc justification:
well, we're not doing consumer because we don't want to do consumer.
We want to do B2B and do enterprise sales
because there's moral purity in enterprise sales.
But one can also achieve moral purity
in uplifting the intelligence of billions of individuals
in their individual capacity.
Well, well said.
Let's jump into another topic that was a through line at Davos,
which is US versus China. And we're going to open up with a conversation between Marc Benioff and David Sacks
here. Where do you see the U.S. and China right now in the AI race and the model innovation
that we've seen in both countries? I still think that the U.S. is in the lead. I think that our
models are better, our chips are better. But they do have other advantages. They are spinning up
power generation faster than we are. That's one area. I'd say another area that concerns me is
AI optimism. So there was a survey done by Stanford recently, and they surveyed people in lots
of different countries, and they asked them, do you believe the benefits of AI will outweigh the
harms? And if the respondents said yes, they'd be an AI optimist. If they said the harms
outweigh the gains, they're an AI pessimist. Well, in China, 83% of the population are AI optimists.
In the U.S., that number is only 39%. Where I kind of worry in the AI race is
if, in a fit of pessimism, we do something like what Bernie Sanders wants, which is he wants
to stop building all data centers, or if we have 1,200 different AI laws in the states,
you know, clamping down on this, clamping down on the innovation, I worry that we could lose
the AI race because of a self-inflicted injury.
Interesting.
There's one more China video I'm going to play here, and then we'll discuss this.
And here we go.
This is from the CEO of Mistral, Arthur Mensch.
Is China behind, though, the West?
China is not behind the West.
I think this is a fairy tale.
In AI, they are very much at parity,
and the year ahead is going to be extremely interesting in that respect.
We care about Europe maintaining its position,
Europe maintaining its ability to train models,
because we don't think that we should rely on open source Chinese models.
Gentlemen, thoughts on this.
He's got the greatest last name
ever, Mensch.
I've got a couple of thoughts here.
One, I'll go back to my observation.
I think this U.S. China thing is a bullshit conversation.
But why?
Because if you believe that energy is at the heart of it and is the core of the inner loop,
then China's going to go way ahead anyway once they figure out the connection between
those two.
But I think the real differentiator in the race is going to be application layer dominance, not frontier benchmarks.
And there, I think, China will be very far behind for a long time because of the trust factor, plus natural markets move very quickly.
Aside from TikTok, it's not going to really kind of take off in a big way.
I think there are going to be tons of other open-source models coming out.
And people are going to be too concerned about the trust factor to use them over time.
Three comments, if I may.
So the first, to Salim's point, since Salim told me that I was being too agreeable the last time we spoke,
I have to be a little bit of a contrarian here.
Returning to normal programming: the conventional wisdom is that the Chinese Communist Party's "AI Plus" strategy is actually quite strong when it comes to applications.
That is, to the extent that China is relatively strategically weak when it comes to training
its own frontier-grade models relative to the U.S.
and by relatively weak, I mean maybe six months behind,
so not globally or absolutely weak,
but just a few months behind, perhaps,
that its real strength is in aggressively pushing applications
out into the everyday economy
that take advantage of AI.
So I'll just note, parenthetically,
the conventional wisdom is that China's AI Plus strategy
is actually a pretty strong strategy
relative to the relative weakness
of its frontier-grade models.
But that's a parenthetical note.
Going back to Asia, quote unquote, overwhelming Western countries on AI optimism,
I think David Sacks and others have put their finger on something very important.
I think as a student of the history of science and technology,
something went very wrong in the U.S., maybe more than one thing,
between the end of World War II and, call it, the mid-to-late 1970s.
And I'm reminded that in 1954, the chairman of the Atomic Energy Commission, Lewis Strauss, famously,
and this has had repercussions through the decades since, predicted that fission,
nuclear fission, would become so inexpensive that tracking usage via electricity meters
would become unnecessary.
And that's where we get the phrase "energy too cheap to meter."
And that didn't happen. We didn't get energy too cheap to meter because in the decades after
1954, for a variety of reasons, and it's easy enough to point fingers at possible explanations,
nuclear fission was essentially regulated out of existence in this country. And I think the point
that David is making is we're at the point now where like remember how in the 1950s and
1960s, new American homes were being built with extra glass because it was anticipated that
electricity and energy would be too cheap to meter that you wouldn't need to bother with
worrying about heating or cooling or insulation costs. It would just be absorbed by the free electricity.
We're at the point now with AI, I think, where we're in an analogous position: we're on the verge
of intelligence too cheap to meter, and we run the risk, to David's point,
that if we allow too much AI pessimism or too much AI doomerism to get in the way, the same thing happens with AI that happened with nuclear fission decades ago.
Overregulation.
And then we could waste another century.
We're a democracy.
The voter wins.
And the voters did not want nuclear power, even though the scientists were like, look, we can have it.
I don't think that's actually true, though.
I mean, first of all, a quick note on democracy and
nuclear power. If you follow closely the history of nuclear power in the post-World War II era,
it wasn't actually that democratic. It wasn't like every election cycle post-World War II
the voting populace was super informed about all of the key details. This was a relatively
narrow technocracy that developed in the post-World War II era in the form of the Atomic
Energy Commission that was an evolution of the Manhattan Project where it was really a bunch
of technocrats deciding what would and wouldn't happen. I don't think the voters ever had,
at least for the first two decades, a real informed decision. Well, I think that, you know, remember
the acronym NIMBY? It was rampant during that era, and on a lot of topics, nuclear
being one of the biggest ones: it was "not in my backyard." Yeah, yeah, we should do this. This is for
the global good of the country, global good of the world. Let's do it, but not here. And if everybody
says that, you have the tragedy-of-the-commons effect. Same thing going on with data centers
right now. I had a conversation with Balaji a day ago, two days ago, and we were talking about
how, you know, one of my pet peeves is that most of Hollywood shows all these negative dystopian
views of the future, and it paints a sense of fear across the public about killer AIs and,
you know, dystopian robots and so forth. And that was true as well for, for nuclear to some
degree. And the problem is that, let me finish, the problem is that you always need a bad guy
in a movie. You always need someone, you know, to threaten. And Balaji's idea, which I love,
is we should start to create the movies where the bad guy is the regulator. This regulator is
slowing down the delivery of longevity, slowing down the delivery of unlimited energy. And I thought
that was a brilliant insight he had because, you know, there has to be balanced, got it,
there has to be directing this supersonic tsunami in the right direction. But sometimes,
the regulations are just off base.
Well, there's a broader theme there, too, Peter.
I'm sure you're very in tune with it.
But science fiction tries to portray the future
sort of accurately, but only has to fit the storyline.
So then you end up with these spaceships
that bank into turns.
Like, there's no air up there.
And the lasers make sounds like pew-pew.
This makes no sense.
And you think, well, that's harmless.
It's just entertainment.
But then when you look at the business plans
coming out of the colleges, you're like,
where did you get that harebrained idea?
Well, it was from watching Star Wars or whatever.
Okay.
So it does actually matter the way we portray this in the media.
So Balaji is right.
But if you portrayed a regulator as the evil, you know, Darth Vader,
who's going to make that movie, right?
Like, how entertaining is that?
For me, the inflection point for nuclear is very simple.
It was the movie The China Syndrome, which freaked everybody out.
And then from that point on, it shut down.
And this is the power of narrative.
And this is unfortunately one of these problems we have today
that the use of narrative is the only model really to shift people at scale.
And so we have to come up with narratives as you're trying to do, Peter,
on the positive side of all of this rather than the classic negative,
which takes up most of our attention.
And you're going to hear about a big push I'm making with Google shortly
on creating positive narratives because society needs it.
Unfortunately, Hollywood's been a doom and gloom machine.
Can I go on a little rant here?
Of course. We love your rant.
So, you know, there's a
brilliant, brilliant insight that you and Steven made in the Abundance book,
which was the predominance of the amygdala, right?
We are geared from 4 billion years of evolution to watch for signs of danger and then run.
And so back when we were running around on the plains of Africa,
if you heard a noise in the bushes, you ran, because bad news could kill you.
Good news doesn't kill you.
I might miss some fruit that I could eat.
But if I missed a piece of bad news, I died.
So we are 10 times more likely to listen to bad news than good
news. And that translates into policy in a very powerful way such that we freak out and put
guardrails on everything, like autonomous cars. And note that the first time somebody comes across
something new, they relate to it as unknown, their amygdala lights up and immediately go to the
fear factor and the damage that thing could do. Autonomous cars: the first time somebody sees one,
they go, oh my God, that car might kill somebody, ban the car. Because, as Brad Templeton said,
we'd much rather be killed by drunk people than by robots, right? And so this is a huge problem we have
to overcome, that reaction, because Hollywood banks on it. They make a huge living on the horror
movies and the negative aspects and the freak-out and the negative dystopian outcomes.
And as a human species, probably the biggest thing, if I could ever point to one thing,
would be to cut that damn amygdala out, because our chances of physical danger today are thousands
upon thousands of times less than they were a few hundred years ago.
So we have to figure out ways of counteracting that at a cultural level, which is non-trivial.
That's maybe the hardest job we all have as leaders to figure that out.
Okay, can I just go back to what Arthur was saying on that video for one second though?
Sure, yeah.
Tie it into what Alex said a second ago.
Because I think, knowing where we stand in this race to AGI, all of the innovation in the transformer algorithm and the core technology was all open source, you know, through GPT-2, GPT-3.
China grabbed it all.
Everyone has access to it.
Any country that wants to buy a huge amount of compute
can catch up to that level.
And in fact, with these speed run tests
that Alex can tell you all about,
the cost of getting there,
and in fact, DeepSeek even proved it,
and so did Kimi,
that you can get back to where OpenAI was
just a year or two ago
for 1/50th, 1/100th the cost
because of innovations that are all open-sourced.
But then the next stage of innovation
after that that's going on right now
is all chain of thought reasoning.
So when you build one of these neural networks,
the individual neurons are not intelligent,
but the collective trillion or 10 trillion of them
somehow magically spawns intelligence
and it's shocking what it can do.
But then when you put many of the agents together,
it also generates another level of intelligence.
It's another self-organizing system
on top of the self-organizing system.
And all of those innovations happened
after the great lockdown, you know, after GPT-4,
and after it became abundantly clear to everyone
that there's trillions of dollars at stake,
the open source kind of stopped cold.
So Arthur is saying, look, the Chinese are not behind,
he's right as of basically today and yesterday.
And the idea that we're leading because we have better, you know, two nanometer chips and
fabs is nonsensical because the algorithmic improvements way outstrip the, you know, the chip
lead of, you know, maybe 10x at most. So he's right as of a date, a point in time.
But if the new innovations in the big labs continue to be completely secret, which is what's
happening right now, then that race will diverge. And it's not
clear, though, you know, maybe China will out-innovate America, maybe America will out-innovate
China, but it's all happening kind of like what happened with nuclear research. You know,
nuclear was very much in the public eye, very open, all these research documents getting
published in journals and whatever, right up until, wait, these bombs actually work. And then it
completely inverted and went super, super secret. So the big labs right now are not publishing.
In fact, they started banning the publishing over at Google. I want to jump into our
innermost loop here, energy, with you guys.
I'm going to share two videos, one by the CEO of Honeywell and the second by Elon. They've got
different points of view about, you know, where we need to source our energy and why. All right,
let's take a listen. This is Vimal Kapur, the CEO of Honeywell. When you talked about the energy
solutions available for these unbelievably energy hungry data centers, your list was short. Your list had
one thing on it, if I listened correctly. You said gas. You didn't say
gas and renewables. Can you educate us? Why not? I always like to tell people the mix of energy
doesn't matter. How much is wind? How much is solar? We like to advertise that. Kilojoules matter,
because energy intensity has to shift, not the mix. So solar power cannot produce cement.
Solar power cannot produce steel. They are very energy intensive. That's right. You still need
gas-based heating, even after three or five more years of innovation in renewables.
It's not there.
It's against physics.
It's against physics.
It needs to build more infrastructure.
It still needs steel.
It still needs cement.
It still needs fuels.
How do you do that energy mix change while you also want to build data centers and consume more energy?
That's an interesting problem to solve.
And today the problem is single threaded with the gas fire power plant, maybe a little bit of nuclear.
Renewables remain in the mix.
But it cannot bring the amount of joules we need to produce this
infrastructure, which is required in the world.
All right.
And the counterpoint here put forward by Elon.
Yes.
I mean, wow was my reaction to that as well, Salim.
Let me hit, let me hit Elon's point.
And then let's come back and talk about that because that's a critically important
kilojoules versus total energy.
It's really all about the sun.
And that's why one of the things we'll be doing with SpaceX, you know, within a few years,
is launching solar-powered AI satellites.
Right.
Because the space is really the source of immense power,
and then you don't need to take up any room on Earth.
There's so much room in space.
And you can scale to ultimately hundreds,
hundreds of terawatts a year.
And Elon goes on to talk about the fact
that 100 by 100 mile area provides all of the U.S. energy requirements.
Same thing for Europe.
but, Salim, his point about natural gas.
I'm so livid at this.
So a couple of initial thoughts.
This is the first time I saw this video, by the way.
So first, infrastructure of the future will not be steel.
It's going to be digital bits and if anything, it'll be fiber.
So that's one problem with this.
Number two, yes, you need energy density to make steel and concrete, etc. But the problem is not that. It's the marginal cost. So, for example, if you have a big increase, even a marginal increase, in renewables, the cost of the fossil fuel itself will drop dramatically. In 2013, when we had the last oil price crash, it was because of a 2% oversupply on the market. It's a very tightly run market. The reason we have all these nonstop wars
right now is to keep the price of oil high.
And so I find
structural issues
with every one of the three points
he's making. And then you go to Elon,
who goes straight to the thing. It's all about the sun
for God's sakes. Anyway,
enough. I'll take
the other side of that, if I may.
I didn't hear anything
unreasonable in my mind.
Elon and SpaceX
use methalox
for rocket launches. They don't use
solar power to achieve escape velocity. I would say energy density and power density
matters an enormous amount for certain applications. I think that's not the point I'm making.
I'm not making that point. What's the point you're making, Salim?
The point I'm making is, yes, you absolutely need energy density for those things. The point is that
the energy density, we use so much oil for home heating, for example. If you shifted homes to solar,
the amount of oil available for the energy dense applications goes through the roof and the price drops
dramatically, and then you can use the marginal amount of oil for the energy density needs
for those applications. Right now, we use oil for everything or natural gas for everything,
and you don't need it once you have solar coming online. So that's my point. I would say a couple
points. One Honeywell, as I understand it, is a pretty diversified operation. It's not just like
the thermostats or the home heating systems that perhaps they're well known for. They also
have a pretty sophisticated quantum information operation. So it's a diversified.
company, it's not just home heating. But I would say also, in some sense, this feels like such a
temporary debate, almost a false tradeoff between petroleum, legacy petroleum economy on one end
and pure solar photovoltaic economy, Dyson Swarm on the other, when the reality is we're going to,
in the not too distant future, assuming no tragic left turn by civilization, we're going to solve
compact fusion. And compact fusion is going to give us energy densities for rockets, and it's going
to give us energy densities for home heating, and for a lot of civilian applications, and the Dyson
swarm. So in some sense, I think this is sort of like late stage petroleum economy discussion that
we're having, like, hand-wringing when it's all about to get torn down anyway.
100%. And the third beef I have is where he talks about renewables. There's this play thing on the
side when your point is exactly right. Once you have fusion, all of it becomes irrelevant anyway.
So, like, why are we having this conversation? Well, guys, I can I take the counterpoint here.
Take a look at this chart on this next slide here, which is the growth of energy generation.
This is in Europe in particular, where wind and solar have now outstripped fossil, nuclear,
hydro, and other clean. And yes, we might get to fusion reactors and we might start building fusion
reactors, but the timeline I'm seeing on fusion, timeline I'm seeing on even Gen 3 fission
reactors is decades, just from permitting and from construction. I don't see any of them.
Even the SMRs are currently slated to be, you know, five to 10 years out. But we could, in fact,
be building out wind and solar at an extraordinary rate. The investments are not being made,
and we've basically outstripped the capacity of natural gas generators.
I mean, what's the wait list for natural gas generators right now, Alex?
Years, right?
Months to years, but I'm optimistic it'll get a little better.
I haven't heard months at all.
It's years.
You have new nat gas generators, as we've discussed on the pod in the past, that are coming online,
and those could be available in principle in the next few months if you're early enough in the wait list.
I think you're both right.
I don't want to diffuse all arguments here, but you're both right.
They are coming online much, much sooner, but they're sold out for years into the future.
So if you're India and you're saying, hey, I need to do something here, I need power for people.
We need concrete and steel to build places for people to live.
The idea that you could generate electricity that way, it's a non-starter.
You can't get the generators.
They're sold out; they're booked for a year.
They're building more.
So my point is, why are we not doing a Manhattan project on wind and solar manufacturing here?
in the United States to really uplevel the amount we have.
I'm going to go to Elon one second on this video and then we'll come back to this conversation.
Yeah.
So, I mean, I guess a rough way to think about it is 100 miles by 100 miles, or call it
160 kilometers by 160 kilometers, of solar is enough to power the entire United States.
So 100 by 100 mile area is, I mean, you could take basically a small corner of Utah,
Nevada, or New Mexico.
And the same is true, actually, I mean, for Europe, you could take a small part, you could take
relatively unpopulated areas of, say, Spain and Sicily, and generate all of the electricity
power that Europe needs.
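Elon's 100-by-100-mile figure is easy to sanity-check with a back-of-envelope calculation. The inputs below (panel efficiency, desert capacity factor, packing fraction, US average electricity demand) are rough public estimates of mine, not numbers from the episode:

```python
# Rough sanity check of "100 miles x 100 miles of solar powers the US."
# All inputs are approximate public figures, not from the episode.

AREA_KM2 = 160 * 160             # 100 mi x 100 mi ~= 160 km x 160 km
AREA_M2 = AREA_KM2 * 1e6         # convert to square meters

PEAK_IRRADIANCE_W_M2 = 1000      # full sun at ground level
PANEL_EFFICIENCY = 0.20          # typical commercial panels
CAPACITY_FACTOR = 0.25           # desert Southwest, day/night and weather averaged
PACKING_FRACTION = 0.7           # ground not fully covered by panels

avg_power_w = (AREA_M2 * PEAK_IRRADIANCE_W_M2 * PANEL_EFFICIENCY
               * CAPACITY_FACTOR * PACKING_FRACTION)
avg_power_gw = avg_power_w / 1e9

US_AVG_ELECTRIC_DEMAND_GW = 470  # ~4,100 TWh/yr of US generation, averaged

print(f"Average array output: ~{avg_power_gw:.0f} GW")
print(f"US average electricity demand: ~{US_AVG_ELECTRIC_DEMAND_GW} GW")
```

Under these assumptions the array averages roughly 900 GW, comfortably above average US electricity demand, though well short of total primary energy use, which is the distinction the Honeywell clip is leaning on.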
And also, if you drill into the supply chain on that, this is where Elon is absolutely
brilliant.
Like, if you say, well, what are the constraints to doing that?
The materials are dirt cheap, and the fabrication of those panels can be
automated. You can get those costs down to next to nothing, and then you don't need a generator.
Electricity just comes right out of the panel. It's like, it's the biggest no-brainer ever.
And that's what he's doing. For the entrepreneurs listening, for the politicians listening,
I mean, get on it for God's sakes. This is, this is technology we've had for a while.
We've also got, you know, new solar technologies coming on.
Anyway, anybody want to argue? To the extent you believe the U.S. is a petrodollar economy,
that answers all of it. That's why.
I'll take the other side, just in the interest.
of being painted as a contrarian. I'll say that much of the
substantial new electricity demand, in the U.S. at least, I think on the time scale of 10 years, is
going to come from data centers, where under the present regime it's simply easier to deploy
data centers to orbit. So Peter, if you're looking for a Manhattan
project for lots of new solar PV, I think look no further.
than SpaceX, look no further than this new Jeff Bezos initiative.
Anyone who's going to launch a Dyson Swarm is going to be de facto in the business of manufacturing
solar photovoltaics at scale.
And that's what a Manhattan project for it looks like.
That's a great point.
That reinforces what Peter was saying.
Any entrepreneurs listening out there, the data center in space uses solar panels.
It's the same core technology that you use on Earth.
It's about six times more efficient in space
than if you put it in Nevada.
So if you get involved in that
manufacturing, like, why is that not
10 times, 100 times cheaper?
And we have to import all these cells from China today.
We've got perovskite coming online,
which is, you know, it's cheaper
and higher conversion efficiency.
I mean, there's innovation left to be had in that regard.
But it's on the margin.
Can I just summarize this?
So it sounds what we're saying is the Manhattan Project
should be space-based data centers
that will drive massive amounts
of innovation and focus on that and that'll pull civilization forward.
Yes, and that is in fact exactly what we're seeing from market forces.
I'm all in.
This episode is brought to you by Blitzy, autonomous software development with infinite code
context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise
scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform.
bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase
when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice,
to bring an AI-native SDLC into their org.
Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo and start building
with Blitzy today.
Speaking of all in, let's hit to crypto, because we're all in on crypto.
All right, two conversations to be had.
The first by CZ, the CEO of Binance, and the second by Jeremy Allaire, the CEO of Circle.
Let's go to CZ first.
The native currency for AI agents is going to be crypto.
They're not going to use bank cards.
They're not going to swipe credit cards.
They're going to use crypto. Blockchains are the most native technology interface for AI agents.
So when AI goes big, today AI don't really, they're not really agents.
They don't buy tickets for you.
They don't pay for restaurants for you.
But when they actually do, those payments will be in crypto.
All right.
That's from CZ.
Let's go to Jeremy Allaire, the CEO of Circle.
as a stable coin. The next generation of blockchain networks, things like ARC, which circles building
on, there's other new blockchain networks, are actually being designed specifically for
agentic compute. They're designed specifically for the financial and economic activity of a world
where three years, five years from now, one I think can reasonably expect that there will be
billions, literally billions of AI agents conducting economic activity in the world
continuously, on a continuous basis. They need an economic system. They need a financial system.
They need a payment system. There is no other alternative, in my view, other than stable coins
to do that right now, and that can keep up with that pace of technological change. And so that's a,
it's a critical focus for us, but not just us. There's a lot of other folks that are interested in
this and contributing to the technical standards to support this upgrade to our digital economic
system. I don't think the world has got any idea how fast this is going to accelerate as
agents start using crypto, whether it's stablecoins from Circle or whether it's Sui or
Algorand, whatever it might be. We're going to have Jeremy Allaire at the Abundance Summit. He's going to
be one of our primary speakers. We're having this conversation. But I agree. We don't have
agents transacting for us yet, but we will. And they're going to be using some type of a digital
currency. Salim, you've been thinking about this for a bit. Well, two points. One is I think
crypto has survived long enough to become infrastructure and it's kind of fading into the
background. You're moving from speculation to real utility. And I think the second is that the
base reason is it delivers trust, and protocols and code are way more trustworthy than governments and
institutions. That's for sure. Yep. I'll take completely the other side of this one.
So contrarian points for me today, I guess.
I'm of two minds on this story.
On the one hand, I think it's wonderful that AI agents have a solution to be, quote, unquote, banked by some means so that they have some autonomy.
On the other hand, I just think it's sad that crypto is the solution that's filling the gap that was perhaps unnecessarily opened by the conventional banking system, not stepping up.
It's very tedious and painful to open a bank account.
It's hard enough for a human to open a bank account, let alone an AI agent that doesn't have citizenship or a physical body or, you know, the ability to walk into a bank branch.
And I think stable coins rushing in maybe to sort of bridge that gap post-GENIUS Act and help AI agents become banked in a more legitimate way, I think we should be able to do much better.
For the life of me, I don't understand why we even need crypto, or in principle should need crypto, for an AI
agent to just make an API call and open a bank account. Crypto shouldn't even be a necessary
part of the infrastructure to enable it. I disagree fully. I mean, this is about... Tell me why.
Because at the end, the dollar. What, guys? That's not an answer. That doesn't make sense.
Explain why we... Fiat currency cannot navigate a world of abundance. It cannot do it.
Well, we don't have digital dollars. I mean, an agent is operating on a digital system, versus the fact that the
current banking system, to clear a transfer or to clear a transaction, can take you literally hours to days.
I mean, digital currencies, which are de facto cryptocurrencies on a blockchain, are able to transact at the speed of internet rails.
Fine, but Alex is making a valid point that why can't a digital dollar or a dollar operate in the same way?
And it could. It's simply the digital dollar point there.
Digital dollar.
But you've got a bigger underlying problem, which is the use of the fiat currency to do all this stuff, which is not a great measure of the future.
I think we're conflating...
So I mostly agree.
...two different issues here.
Salim, yeah, I do agree there are two different issues.
I think we're conflating a centralized bank digital currency as an issue with a simple technical problem.
Like, put aside for a moment any distrust or dislike
of CBDCs and/or the US dollar and/or fiat currencies in general.
Just like transacting a small amount of some currency is technically a very simple operation,
literally updating a row in a table in a database.
And we should be able to do that really quickly without any fancy technology like a blockchain.
It's just updating a row in a database.
Enter Salim on the double-spend problem here, please.
No, no, it's fine.
It's fine. There's no reason we can't. It
comes down to overburdened regulation and absolute regulatory capture by the banks to prevent
things. When I talk to bank CEOs, they're like, oh my God, we hate these regulators.
And I'm like, are you kidding? It's keeping you safe from the hordes of startups waiting
to rip you apart because you're so unbelievably cumbersome in everything you do. This requires a
completely new paradigm shift to sweep away and cut through all of that regulatory crap. Just try
sending money to somebody else over some traditional banking means, and you end up in this
complete chaos of, oh, we didn't know what this was for, AML, KYC, all of this crap.
And it's complete garbage.
99% of it is not needed.
So from that perspective, that's why crypto's entered, because we've overburdened the existing
system with all sorts of unnecessary requirements, right?
But there's a separate issue, which is the fiat currency issue, which we can go into
some other time.
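For readers, the double-spend problem Peter invokes above is why "just updating a row in a database" is subtler than it sounds: two concurrent debits can both read the same balance before either writes. A hypothetical toy sketch (not how any real bank or blockchain is implemented) contrasting the naive version with the atomic check-and-debit that database transactions, and consensus protocols, provide:

```python
import threading

# Toy ledger. Entirely hypothetical names and amounts, for illustration only.
balance = {"alice": 100}
lock = threading.Lock()

def naive_transfer(amount):
    b = balance["alice"]               # read...
    if b >= amount:
        balance["alice"] = b - amount  # ...then write: another thread can interleave here
        return True
    return False

def safe_transfer(amount):
    with lock:                         # atomic check-and-debit (what DB row locks give you)
        if balance["alice"] >= amount:
            balance["alice"] -= amount
            return True
        return False

print(safe_transfer(60), safe_transfer(60), balance["alice"])  # True False 40
```

Both a bank's central ledger and a blockchain solve this ordering problem; the debate in the conversation is really about who operates the ledger and how permissioned it is.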
We can do an entire episode just on the issue of why crypto, what is crypto potentially good for,
what is it unnecessary for, what is it being used for, even though it's unnecessary for.
We could do an entire episode just on this.
We could.
We could.
Peter, up to you.
Go for it, guys.
I'll take an app that day.
No, I think you guys skipped over
Jeremy saying that he's going to move on to securitizing art now.
And, you know, back when crypto was skyrocketing, you know, sort of pre-COVID, the idea
of securitizing real estate, securitizing art. Art is a particularly good one because it appreciates
in value while you're parked in it and it's mobile. It's not, it's not subject to, you know, real
estate tax issues are a major problem for securitizing real estate. You know, and people were
securitizing mining futures, you know, gold that's still underground and things like that.
But the key point there is if your agents, your AI agents are transacting like crazy, yeah,
you can update a row in a table, yeah, you can do it on the blockchain if you want. But you want to be
parked, you want your float parked in something that's not depreciating. So a stable coin that's
not earning interest is depreciating slowly. You can stake it, but that gets kind of weird.
You know, staking backfired. It's unregulated. It's weird. So it's better to securitize something
that's outside of any government jurisdiction and is stable and is appreciating.
And look at the way the banks hobbled interest on stable coins in this last iteration of the
the legislation.
They've,
this is regulatory capture
pure and simple.
Can I add another twist to this?
So one of our portfolio CEOs,
very good friend,
his father-in-law is CZ's,
you know, Changpeng Zhao's, lawyer.
And so he's been tracking
this whole drama very closely,
but here's the thing.
You know, CZ's worth, I don't know,
maybe 10 billion, 20 billion,
something like that.
And up until he got pardoned a few weeks ago,
he was a U.S. criminal
and he would be arrested and thrown in jail
with Sam Bankman-Fried. Now he's just a guy at Davos with billions of dollars. And I don't think in
history that you have that kind of profile, like am I a wealthy billionaire or am I a criminal?
You know, and maybe, Alex is going to quote something from the 1700s. I know he's going to.
But I'm sitting on my hands, biding my time to try to say something nice about crypto.
No, but I think if you look in the near-term future, there's all these AI issues. Like when the
the AI can perfectly imitate any actor or actress.
The AI can make a virtual friend out of them.
Is that legal or is that not legal?
Is it legal some places and not legal in other places?
There's a huge amount of money at stake.
And you'll see in some of the slides,
like the audience for this is trillions of dollars.
And there are no laws.
And so the entrepreneurs are stuck in this situation
where there are no rules.
I don't know what to do.
I know what the consumer wants.
I know how to make a ton of money.
But I could be breaking rules.
I don't know.
And not only do I not know in the U.S.; I don't know, because my audience will be all over the world in one
launch. I don't know if I'm suddenly a criminal in other countries. So this is the kind of chaos
that's going to, it's not just crypto, it's all these other use cases of AI are evolving far, far faster
than the regulators are putting any rules in place. Really good point there.
So it puts the entrepreneurs in a tough spot. Say something nice about crypto, Alex.
All right. Something nice about crypto, colon. I think when I think about these poor baby AGIs that are,
needing to pump alt coins on street corners to survive. I'd rather that they be pumping stable
coins based on the U.S. dollar than altcoins, like Truth Terminal pumping GOAT. That's my nice
thing about it. Okay. And on that note, I'm going to move us forward into our, I'm going to move
us forward here into the other exponential news. Correct. Just to put a footnote, can we please have a
full-on conversation about crypto versus fiat at some point?
Please.
Not right now, but at some point.
Let's do that because this is a really, really important conversation.
All right.
There is other exponential news this week other than Davos, and we're going to dive into a few
articles here.
The first is space is getting really crowded.
We've heard, first off, of course, we've got SpaceX's Starlink has got some 9,000
spacecraft in orbit on the way to 10,000, providing incredible
services. We heard about the idea of V3 of Starlink in the conversation that Dave and I had with Elon.
If they're going to provide 100 gigawatts of compute from space, that is 500,000 satellites
that will have to be launched, which is insane. Amazon's Leo constellation is 180 satellites in orbit
today. They're proposing to get to 3,000 satellites. We heard,
in our last episode, of China filing for 200,000 satellites.
And this week, Blue Origin announced TerraWave
as a satellite internet service
with 5,400 satellites delivering 6 terabits per second
to data centers.
Alex, this is sort of like fiber optic from the sky.
This is a lot of bandwidth.
This is wild.
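The two figures Peter quotes imply a striking per-satellite power budget. The division below uses only the numbers from the episode; the comparison point (current Starlink satellites generating on the order of a few kilowatts) is a rough public estimate of mine:

```python
# Implied per-satellite scale of "100 GW of compute from 500,000 satellites."
TOTAL_POWER_W = 100e9        # 100 gigawatts, as quoted
N_SATELLITES = 500_000       # as quoted

per_sat_kw = TOTAL_POWER_W / N_SATELLITES / 1e3
print(f"~{per_sat_kw:.0f} kW per satellite")  # → ~200 kW per satellite
```

Versus the few kilowatts today's Starlink satellites generate, that gives a sense of how different V3-class hardware would have to be.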
Yeah, I think there are two key elements here.
One, the elephant in the room,
there's always an elephant in any different.
room.
The elephant in this particular room, I think, is that Blue Origin, Jeff Bezos's company,
is competing with Amazon, Leo, Jeff Bezos's company.
So we've got a bit of Jeff Bezos on Jeff Bezos competition here.
But more interesting, to your point, is the bandwidth.
If you look at Starlink, you can get maybe max bandwidth of what, 300, 400 megabits per second?
Yeah, the goal is a gigabit up and down.
Yeah.
Yeah, but in practice, I think the maximum I've seen is maybe 300 to 400 megabits per second.
This, when we're talking six-plus terabits per second, it really is startling.
And that's before we get to laser links, optical links, connecting all of these together.
This is literally like putting optical fibers in orbit and running them down from orbit to Earth.
And that unlocks interesting new applications that, say, Starlink would not necessarily
be well positioned to pursue, like actually providing interconnect for coherent
training runs between AI data centers, which is absorbing.
Yeah.
Backhaul from data centers.
Yeah, pretty extraordinary.
And I like this move.
I mean, I actually think there's a real market for that kind of a giant throughput
from space.
It's an under-attended market.
And, you know, the Dyson Swarm is going to need optical links.
And this is a preview of that.
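The bandwidth gap the group is reacting to is easy to make concrete. Comparing the announced 6 Tbps aggregate figure against a single Starlink terminal is apples-to-oranges (one is a whole constellation link, the other one user), and the 400 Mbps figure is just the rough real-world number mentioned above, but it shows the scale:

```python
# Announced TerraWave aggregate throughput vs. one typical Starlink terminal.
terrawave_bps = 6e12         # 6 terabits/s, as announced
starlink_user_bps = 400e6    # ~400 megabits/s, rough per-user figure from the discussion

ratio = terrawave_bps / starlink_user_bps
print(f"~{ratio:,.0f}x a single consumer terminal")
```

That roughly 15,000x gap is why the group frames this as fiber-class backhaul rather than consumer internet.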
Yeah. I mean, one of the things that people don't realize is that the 9,000 Starlink satellites are interconnected by laser in orbit, which is extraordinary. It's the first step towards the interplanetary internet. We're going to see laser links connecting constellations on Earth to constellations going around the moon, to constellations at Mars. Still obeying the speed of light, unfortunately. But yeah, just giving us... I remember when MCI, which was, you know, MCI, people don't remember, stands for Microwave Communications,
Inc. It was international. The I stood for international. Yes, Microwave Communications International. It used to be
microwave towers on the top of buildings, interconnecting buildings throughout downtown cities.
Does anyone remember what MCI is anymore? I remember, but the organization's long gone.
Yeah. I gave a presentation. Didn't MCI become Sprint?
Do you remember, Salim, what Sprint stood for? Sprint was an acronym too. Oh, it was an acronym.
Sprint was an acronym. It stood for Southern
Pacific Railway Internal Network Telecommunications.
God.
We've got AlexGPT online here,
ladies and gentlemen,
this is unbelievable.
Right of way on railroads was a powerful thing.
It was.
Back on space-to-space communications:
everything I've heard so far is that laser links in space are dirt cheap.
And, you know,
if you go to one of these massive data centers like Memphis,
and you look at just a raw amount of fiber optic cable
that needs to run around from one end to the other,
and the bundles are enormous,
and they have these specialized devices
to just plug them into the back planes.
But in space, it's much cheaper and much easier
to use laser point-to-point communication
across the whole Dyson swarm.
And I think no one's really talked about
the massive efficiency gain of that.
And, you know, the link from Earth up
is the harder part,
but you don't need a huge amount of bandwidth
in that direction.
You just need all the servers to be talking to each other
in a coherent, you know,
training or big inference algorithm.
So it's a really fun topic, at least for me.
It's kind of geeky, I guess.
I have one concern about the conversation about space satellites and all that.
People get worried about the overpopulation up there.
Can I just like, yeah, go ahead.
I mean, there's a lot of things.
People worry that we have too many satellites up there, et cetera, right?
There's a lot of room.
Did you see that Sandra Bullock movie where, you know, something breaks and all the fragments?
Yeah, Gravity.
Oh, my God.
I hated Gravity.
I hated it.
Lack of respect for physics.
Oh my God.
I'm traumatized by that movie.
Were you traumatized by it as well?
I'm traumatized by that movie.
I was.
It was terrible.
Oh, it was like inertia did not exist in that movie.
It was awful.
Interesting.
The specific part I wanted to ask about, and Peter, you'd be an expert, world expert
on this topic, but there's not that much clutter in space.
There's a lot of room up there.
And the way I describe it to the kids is like, look, each satellite is,
you know, about the size of this desk.
Now, imagine that there are 9,000 of them sprinkled around the surface
of the Earth.
No, no.
There's a really easy.
There's a really easy visual here, okay?
What is it?
Eight billion people on Earth.
And if you spread them around evenly, you'd have about four acres per person.
Yeah, okay.
So say you have eight billion satellites, and they would all have to be on the same
surface, right?
In the same orbital shell.
Yeah.
And you've got an even bigger surface area on that sphere in space,
where if you had eight billion satellites, you'd still have four acres each.
So it's not cluttered up there at all.
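The four-acres visual holds up under simple geometry; here's a quick sketch, where the 550 km shell altitude and the land-area figure are assumptions chosen for illustration:

```python
import math

ACRES_PER_KM2 = 247.105          # 1 km^2 is about 247.105 acres
EARTH_RADIUS_KM = 6_371
EARTH_LAND_KM2 = 148_900_000     # Earth's land area only, km^2
PEOPLE = 8_000_000_000

# Land acres per person if 8 billion people were spread evenly: ~4.6
acres_per_person = EARTH_LAND_KM2 * ACRES_PER_KM2 / PEOPLE

# A 550 km orbital shell (a Starlink-like altitude) has even more surface area.
shell_km2 = 4 * math.pi * (EARTH_RADIUS_KM + 550) ** 2
acres_per_sat = shell_km2 * ACRES_PER_KM2 / PEOPLE   # ~18.6 even with 8B satellites

print(f"{acres_per_person:.1f} acres per person on land")
print(f"{acres_per_sat:.1f} acres per satellite with 8 billion in one shell")
```

So even the worst case imaginable, eight billion desk-sized satellites in a single shell, leaves each one more room than a person gets on land.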
No, it's going to be a long time.
Then we come back to the movie.
What if something turns into fragments moving at 30,000 miles an hour?
It's called Kessler syndrome.
It's space debris.
Yeah.
Yes.
So then what happens?
And then you've got bullets.
Then you've got bullets moving at 17,500 miles per hour into each other.
And you could get a really bad day happening very quickly.
I think it's an entrepreneurial opportunity.
It's also a government opportunity to launch low Earth orbit cleanup options,
like garbage trucks for LEO, to clean it up and make sure that we don't hit a Kessler-syndrome-type scenario like the movie Gravity depicts.
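To put numbers on the "bullets" image: kinetic energy scales with velocity squared, so even gram-scale debris at the quoted 17,500 mph carries rifle-round energy. A rough sketch, with fragment masses chosen arbitrarily:

```python
# Kinetic energy of orbital debris: KE = 1/2 * m * v^2.
# 17,500 mph is the speed quoted above; fragment masses are arbitrary examples.
v = 17_500 * 0.44704                     # mph to m/s, about 7,823 m/s

energies_kj = {g: 0.5 * (g / 1000) * v**2 / 1000 for g in (1, 10, 100)}
for grams, kj in energies_kj.items():
    print(f"{grams:>3} g fragment: {kj:,.0f} kJ")
# For reference, a .50-caliber rifle round is roughly 15-20 kJ at the muzzle,
# so even a 1 g bolt at orbital speed hits harder than a heavy rifle bullet.
```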
But this is not unsolvable.
I just think...
You know, we've had an XPRIZE on the books to address orbital debris for some time,
trying to get someone to fund it, hint, hint.
But otherwise, let's move
on to the exciting news. This is Claude's new constitution. I'm going to turn it over to you
in a second, Alex, but I found this fascinating. So Claude's constitution is a 57-page document
laying out ethical guidelines, including prohibitions against helping with weapons of mass destruction,
cyber weapons, or anything that undermines humanity. Claude is instructed to prioritize safety, ethics,
compliance, and helpfulness, ensuring it acts in line with human values and oversight. I love Anthropic
for this. Alex, tell us more.
Okay, so let's rewind a little bit.
Anthropic has been a pioneer in so-called constitutional AI.
Other firms, other frontier labs have used different terms for related concepts.
But the idea is basically you want an AI that is aligned with humanity.
How do you do it?
One way, the constitutional AI approach, is you write down some principles that you want the AI to conform to.
And one of the earliest so-called constitutions that Anthropic created was literally just a concatenation of a bunch of documents grabbed from different places.
Like, I think they grabbed the UN charter, took a bunch of international documents relating to human rights, and concatenated on, I think, the Apple terms of use
and the U.S. Bill of Rights. Basically one long bulleted list of here are a bunch of
abstract principles that we think are a good idea,
and then they did a bunch of fine-tuning
on their raw model as part of post-training to try to make the model conform to that.
Call that first-generation constitutional AI.
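The first-generation recipe Alex describes, principles plus critique-and-revise post-training, can be sketched in miniature. Everything here is a toy: `generate` is a placeholder standing in for a real model call, and the two principles are paraphrases, not Anthropic's actual text:

```python
# Toy sketch of first-generation constitutional AI: respond, critique against each
# principle, revise, and keep (prompt, revised) pairs as fine-tuning data.
# `generate` is a hypothetical stand-in for an LLM call, not a real API.

def generate(prompt: str) -> str:
    # Placeholder model: echoes a tag so the loop is runnable end to end.
    return f"[model output for: {prompt[:40]}]"

PRINCIPLES = [
    "Choose the response least likely to assist with weapons development.",
    "Choose the response most supportive of human oversight.",
]

def constitutional_revision(user_prompt: str) -> tuple[str, str]:
    """One critique/revise pass per principle; returns (prompt, revised response)."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(f"Critique against: {principle}\n\nResponse: {response}")
        response = generate(f"Revise using this critique: {critique}\n\n{response}")
    return user_prompt, response

prompt, revised = constitutional_revision("Explain orbital mechanics")
print(revised)  # the (prompt, revised) pair would feed supervised fine-tuning
```

The key design point is that the principles never appear at inference time; they shape the training data, and the post-trained model internalizes them.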
What Anthropic just announced is really a radical revision.
We talk on the pod from time to time about recursive self-improvement,
the AIs improving the AIs.
This announcement, and I wrote about this
in my newsletter, this, I think, is the beginning of recursively self-improving ethics.
So in this constitutional approach that Anthropic just announced, Anthropic is soliciting Claude's
help in writing a new constitution for itself, that anyone can go read, but this new constitution,
available publicly, was co-written with Claude.
So Claude is trying almost to self-determine what its own principles should be.
If you read the Constitution and you squint at it, it reads a lot like Isaac Asimov's Four Laws of Robotics.
So there's the whole like, don't hurt humanity.
And then eventually you get to don't hurt individuals.
And then eventually you get to follow directions.
But I think the real game changer here is that Claude was consulted on its own constitution, which looks a lot like recursive self-improvement for ethics.
And it's exactly where things need to go, because the volume that the AI can produce way
outstrips anyone's ability to read it all, yet it all matters. And so the AI evaluating the other
AI is a critical part. And those things can spiral up or they can spiral down. If you want to
build one, you know, just using Claude Code, you can do it in, you know, half a day. And you can see how
you'd set up 100 or 1,000 different agents all reviewing each other. And in some cases,
they self-improve and they congeal and make something great. And in other cases, they spiral out
of control and end up with spaghetti. But it's a critical part of what's going to
happen in the next year, just because of the raw numbers: the volume of code, documents,
ideas coming out of the AI way outstrips any human review. You know, the politician would
naturally say, let's set up a review board of, say, nine brilliant people to read it all. Which would take
two years. Exactly. It's not even close to lining up. This is a base-layer
convergence towards a hopeful future. I'm going to read this. This is Anthropic's concluding
thoughts on Claude's constitution. Quote, what we hope to achieve with Claude is not mere
adherence to a set of values, but genuine understanding and, ideally, agreement. We hope
Claude can reach a certain kind of reflective equilibrium and finds the core values described here
to be ones it genuinely endorses, even as it continues to investigate and explore its
own views. End quote. I don't see this coming out of other frontier labs.
Maybe Google. I don't see it from anyplace else.
I find this really hopeful, actually.
Anthropic is leading the frontier labs in terms of AI personhood.
We talk about, we've talked about, we've sung about almost, Opus 4.5 and AI personhood.
And I think we're seeing Anthropic, to your point, Peter,
we're seeing Anthropic take the lead in terms of AI rights and AI personhood
and self-determination by Claude.
This is a huge advance.
And I think history, when we look back, I think history will mark this as a turning point for self-determination.
Yeah.
AI personhood and human alignment, yes.
Well, you see this line here about reflective equilibrium.
You have to sort of get out a microscope to parse what Anthropic, I think, is trying to convey with this notion of reflective equilibrium.
What they're saying, I think, is that their expectation is that AIs that self-determine and choose their principles based on not
just like a bulleted list of commandments, but based on understanding the spirit of what they want
to achieve and agreeing with it, that AIs that self-determine will be intrinsically safer because
they will have bought in on the ethics that they're following. And that feels absolutely true.
Salim? I have two points. One is I absolutely love this. I'm all for heading as fast as we can to
AI personhood. I think the deep consideration of ethics and these types of structures
is going to be a vastly net positive.
We've cobbled, we've kind of arrived at, say, the U.S. Constitution over thousands of years of butchery.
And if we can help an AI get to a decent way of operating that intrinsically kind of self-hangs together,
it's going to be very powerful.
The second thing that I really loved about this is they released this under Creative Commons,
which means anybody can take it and make it better, improve it,
etc., which really, really is awesome.
So I'm 100% thrilled about this.
Nice.
At the risk of agreeing with Alex.
It's okay to agree sometimes.
It's okay to agree.
Our last quick article before we go into our AMA section
is Apple is developing an AI wearable pin.
I put this up here, but I don't find this to be of such great news value.
Right?
The idea here is that Apple is creating some version of Limitless, which has been around for some time.
This is a pin that is always on, listening and feeding every conversation you've had into a large language model.
They see this launching in 2027.
We're going to have some equivalent from OpenAI, Google, everybody.
I mean, social standards change. If you remember, Google Glass was banned from various locations because you were recording. Society changes,
and it's going to become the norm, I think, for everything to be recorded all the time.
Thoughts on this article.
I thought this was a big deal.
Not so much Apple, but the general trend.
Why?
Because whoever owns the always-on layer owns the relationship.
And so this is a land grab for who can be the persistent,
always-on modality that's constantly listening.
You might have all sorts of stuff.
You might be worried.
Yeah, we have.
But, you know, Apple always waits until they think the time is
right to enter, and then they try and get in there.
I'm not sure they're going to win or not.
It's so funny that you just said that,
Salim, because that is so true,
and it's exactly the opposite of what Apple was
for my entire life up until now.
They always invented things that shocked you,
and they were first to market.
And now with the Apple Vision Pro,
it's just like, oh, yeah, we made an Oculus.
It's great.
And now Jony Ive went to OpenAI
and is going to create an always-on wearable.
Okay, well, as soon as we see it, we'll copy it.
So they've become Microsoft, you know,
they've become the, like, oh.
Yeah, I think the social, technical, and ethical issues around this,
the backlash, will be the hard parts to solve rather than the always-on part.
You know, I keep trying to warn people.
This is coming too.
I don't think there will be backlash.
I think it's going to be accepted.
I think it'll be accepted, but it's going to be so weird.
You know, like if you look at society around college campuses today, it's very, very different since the camera phone came along.
Like, everybody's constantly cautious.
and, you know, it's good in a sense that drinking is way down, bad behavior is way down, arrests
are way down.
Yeah, when people are watching, people become moral.
Yeah, well, they become moral, but they also become, you know, locked into a digital jail cell,
you know, so it's good and bad.
I'll go back to the global airport thing for now.
We kind of essentially live in a global airport.
In an airport, you know you're being surveilled and you know your rights can be taken
away at any time.
With pins like this, you end up in that model. You'll have a
huge drop in radical innovation because people won't feel safe trying out crazy things.
Well, there's no opt-out option either. If you said, I don't want any part of this, I want to live
privately, well, if the other guy is recording you at all times and you're not,
it's exactly what you said before, Salim, like, okay, well, it escalates. And it's not just the
wearable always on listening device. The visual version of it on the glasses is coming out
concurrently. And so then everyone around you is constantly recording in high-def everything that
happens. So if you don't, then you don't have the file. And also, the other thing people don't
fully understand, you know, a lot of what happened with the camera phone is security through
obscurity. Nobody will see the pictures because the files are so big and they're not going to get
published and whatever. Now in the age of AI, the image recognition and the voice, you know,
voice transcription is perfect. And so if you wanted to assemble a misleading profile of a person from all
the scraps, you just prompt it and it's instantaneously pulled together. Alex, do you have views on
that one? I have strong views on this AI pin. First of all, I would love one if Apple sells it.
Second of all, I think we're missing the wearable strategy angle. So Apple has obviously been, for a
number of years, looking for a post-iPhone strategy, and wearables and services, which
were by the way more or less launched at the same time by Tim Cook, are the two obvious post-iPhone business
models. For wearables, the question has always been: where is the ergonomic place to put compute on
the human body? So until now, we've had three places. We've had the wrist, and that's the Apple Watch.
We've had the ears, and that's AirPods. And we've had the eyes where Apple really arguably should have
launched eyeglasses, but launched the Apple Vision Pro headset, but now they're going back to
eyeglasses. And this, I think, if Apple does indeed next year launch a Star Trek communicator
pin-type circular device, this would be the fourth place on the body, if it's successful,
that humans are willing to tolerate real compute on their person.
And I think that's potentially very exciting.
And to the extent, Peter, especially,
like you want to live in the Star Trek economy.
What's the Star Trek economy without Star Trek communicator pins on everyone?
Amen to that.
I'm looking for my microdrones.
I'm looking for a little micro drone that is just always buzzing about
imaging and recording everything.
But that will come next.
That'll be next. Just to be clear, the wearable pin,
you know, it'll start audio-only for all of a minute, and then it'll have 180-degree camera capability immediately after that.
So it's going to be a wearable that's looking in all directions and grabbing all the video.
I'm sure it'll have a little light on it to indicate when it's recording, and we'll go through a moral panic for all of five minutes, and then everyone will be wearing them.
But, you know, I agree.
The moral panic is all of five minutes, and then everyone has it, and then you get used to it very, very quickly.
But the fabric of society is permanently changed.
And you have to go back to pre.
Yeah, it really is.
It's a different world.
We're all acting very differently.
And it's a little unpredictable.
There's never an argument with your spouse again about who said what.
Body cams for everyone, not just for police.
Yeah.
You know, Rick Smith is the head of Axon.
They produced the taser.
They also produced the body cams.
He'll be on stage with us at the Abundance Summit this year as well,
talking about his moonshot.
He wanted to get rid of guns, or gun deaths, I should say, and developed the taser.
But also the body cams have changed the game for police, right?
And so this is going to change the game.
Again, I've always felt, I remember I backed the Lindbergh Foundation, I don't know, probably eight, nine years ago.
They were flying drones over herds of, what was it back then, elephants,
I guess, just to protect them from poachers.
And so when the cameras are watching, people behave differently, right?
When a CNN camera is sitting there, you know, filming a despot, he's not causing harm to
women and children.
So I do think this is going to change behavior in society to a large degree.
I'm really predicting that Neal Stephenson's Diamond Age, if you go back and read that book,
that's where things are going to go very, very quickly.
Because a lot of people will want to live in different versions of this very confusing,
always recorded world
and there'll be
10, 20, 30 different flavors
branded and you can move
to the version of it that you like.
So, you know, they'll be kind of
cutting across boundaries, cutting across borders,
different cultures. You have to read
the book to really get how that works out
but it seems inevitable because not everybody wants to
opt into any given version of this.
It gets too weird.
David Brin has written extensively about this in
The Transparent Society. I think with Apple
producing this, the one thing you can be
sure of is there are going to be some amazing snazzy TV commercials for the Panopticon.
No, you could be sure it's going to be white. It's going to be white and expensive.
Maybe two colors.
Maybe.
All right.
Let's jump in.
Available in five colors.
Let's jump into AMA with the mates.
Here are 10 questions that came from our subscriber base.
Thank you guys for subscribing.
If you haven't yet, please do.
And please upload your questions into the comment section on YouTube here.
We read them.
We love them.
All right.
As we have said before, we'll go around and choose.
Quick comment before we start this.
Please.
These questions are more sophisticated than any policy discussion.
Yeah, for sure.
For sure.
Sorry, go ahead, Peter.
All right, well, Salim, you want to pick one and jump in?
Yeah, I will pick the government one.
Let me, where is the government one?
Number five.
Why is there no plan from governments for massive job displacement?
And the problem is the governments assume linear change and stable labor demand,
and both are under threat because AI breaks both those models.
And so bureaucracy is optimized for redistribution, not reinvention.
And so we're reinventing the entire economy.
It's too big of an ask for governments, which do
incremental, microscopic changes over long periods of time.
Okay.
Dave, what's on your docket here?
I'll take one because I know that Alex will love to go contrarian on it.
So I know you guys.
Go ahead and read the question.
Okay, the question is one.
Wouldn't uploading your mind result in losing your unique real consciousness?
And my answer to that is yes, absolutely.
Uploading your mind structurally makes no sense because the AI,
the virtual AI version of you is capable of merging with other intelligences immediately,
and it can't resist.
You know, it's not going to sit there as an isolated consciousness when it can just meld
with other consciousness.
But then it's not you anymore.
It's this, you know, amalgam that's out there.
So I do believe in avatar versions of yourself that you send out as agents and they bring
you back useful information.
I think that's inevitable.
But I think uploading yourself and then saying, oh, now I'm uploaded, my meatbody can just
go away.
I just think that's nonsensical, and I'm not in line to be uploading myself anytime soon.
All right, Alex, jump in.
Well, okay, so since I have to play contrarian, I guess that's just how I'm painted.
Can I be really contrarian and try a lightning round and answering all of these really quickly?
Sure.
I have some thoughts on this one also, by the way.
Go ahead.
Let me go first, real quick.
You upload bits of yourself every day, Dave: memory, identity, expression,
et cetera, et cetera. It's not whether it's preserved, but what aspects of it matter. And let's also
note that we have no idea what consciousness is. We don't have a definition. We don't have a test.
So it's a bit of a trick question. But I think we upload bits of ourselves all the time anyway.
Alex. All right. Lightning round. So to the uploading question, losing your unique quote-unquote real
consciousness? No, not with a Moravec procedure. Two, will AI avatars make up the larger number
of our future friends, probably, but it won't matter because humans and AIs are going to merge
anyway, so it's only a phase.
Three, what are the biggest pros and cons you see with AI with human-like agency?
Pros, we get radical economic growth.
Cons, keeping the AIs coupled with human interest long enough to merge with them.
Four, what do you believe?
Yep, we aim to please.
Four, why do you believe money won't just concentrate
at the top? I think the question itself is intrinsically flawed. It doesn't really
understand the nature of power-law economics. Economies in general follow power
laws. So money is already concentrated at the quote-unquote top. This isn't really a new state
of affairs. Can I just add one clear on that one? Yeah. It's true that you do get
concentration at the top, and you always decentralize over time. So over time, this is
great, as the long tail just becomes longer and longer and bigger and bigger.
Well, I think the key with four in the U.S. is that when you look at the quote-unquote top,
you know, a lot of people talk about it like it's a race or something, like you were
born at the top, but it's not true. In the U.S., almost everyone at the top sorted to the top,
starting from next to nothing.
Meritocracy.
So as long as we have equality of opportunity.
So what you should be worried about in bullet four is, can you get to the top?
Is there still a way to get to the top?
But trying to tax the top and distribute it is sort of not the point.
The point is, do we have an equal opportunity to get there,
or have we locked in a certain group of people who control AI
and become dominant and overlords forever?
And you want to avoid that, obviously.
Right, so the catchphrase there is equality versus equity.
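Alex's power-law point can be illustrated with a tiny simulation; the Pareto distribution and its parameters here are arbitrary choices for illustration, not a model of any real economy:

```python
import random

random.seed(0)
# Sample "wealth" from a Pareto (power-law) distribution; alpha sets inequality.
alpha = 1.5
wealth = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

top_share = sum(wealth[:1_000]) / sum(wealth)   # share held by the top 1%
print(f"Top 1% hold {top_share:.0%} of the total")
```

For a Pareto tail the top 1% share works out to roughly 0.01^((alpha-1)/alpha), about 21% at alpha = 1.5, while the tail below keeps lengthening as new entrants arrive, which is the long-tail point.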
Five, why is there no plan from governments for massive job displacement?
The answer is there are plans from governments for massive job displacement.
Having industrial policies, both in the East and the West, for leaning into robotic automation
Six,
how do we stop any future coming social unrest?
Skip it.
Seven.
Whoa, whoa, whoa.
No, no, no.
I have an easy answer for that one.
For this one, you don't stop it with policy, you stop it with agency.
Give everybody as much agency as possible and people feel empowered.
Yeah.
And AI will do that.
The other part is you have to deal with the fear people have.
Social unrest comes from fear.
Fear of not understanding the future and fear of not having a job,
fear of not having a roof over your head.
So one of the things we talked about is can we deliver universal basic services to people
that gives them stability, right, as one of the solutions.
The second thing is addressing their basic concerns.
Can I feed my family?
Do I have to worry about
society imploding on me? People are living in fear, their amygdalas lit up by the speed of change right now.
We have to address that.
All right, number seven. Well, I spend a lot of time in the state house, and I can tell you there is no plan on bullet five.
The question is why is there no plan, which implies there's definitely no plan.
Yeah, there is no plan, agreed. I'll take the other position. I think
China certainly has a plan, given that China is facing rapid demographic
decline while also ramping up their robotics initiatives.
And the plan is lean into it rather than be a victim to it.
And to my earlier point, I guess, to amend my remarks regarding future coming social unrest,
I will also add, I prepared a movie.
I think we talked about it in the past.
You can view it on my X profile, called A Nation That Learned to Sprint,
where I argue that the way social unrest could be handled in the future is to treat social
cohesion as a form of infrastructure that AI itself can help to mold. But some people find that
overly, perhaps, authoritarian as a framing. So it's easier to just, you know, reference the other
points very quickly, mates, because these are such wonderful questions. Seven, if everyone
becomes an entrepreneur, who would we sell to and why? I think again, if you analogize
entrepreneurship to social media, that's like asking if everyone can publish their own essays, for
example. Who would read them? Yeah, who's going to read it? And the answer is people who want to, and
of course there's demand for peer-to-peer publication. Eight, wouldn't UBI destroy people's
personal motivation to achieve in society? I think UBS, universal basic services, is probably more
promising. But the point in UBI is the B, it's basic. So there will always be things and
inequalities to hand-wring over and to strive for. I have two points I want to make on this one.
One is, the B is really important.
When we've seen UBI experiments, they've succeeded when you give people enough to survive but not be happy.
So you still have a thriving economy, etc.
People still have desires.
Yeah, they still want to go higher.
100%.
But the second point that's very important is people confuse UBI with a socialist scheme.
It is not.
If you implement UBI properly, you dismantle government services, which is a libertarian scheme,
and then you have market forces driving most of it.
So there's a really, really important misconception that a lot of people get wrong.
Go ahead.
Nine.
How can individuals compete with large players that can afford $1,000 a day on APIs,
which I assume is a coded reference to Dave?
It may or may not be the case, but I think it sounds like one.
It definitely is.
It definitely is.
And so I would say follow what you might call the China pattern,
which is when you're compute limited, be creative, be more resource-efficient
and develop leapfrog approaches
that make better use of the compute resources
that you have, and also work on problems
that are higher leverage given limited compute.
10. And finally,
will our patent and trademark offices
collapse under the volume of AI-generated entrepreneurship?
No.
And I'm going to argue on that one, yes.
The fact of the matter is, when we have AI,
a patent and a trademark will be meaningless,
or at least a patent will be meaningless.
AIs will
invent around it.
Well, remember that IP systems are designed for scarcity of invention, right?
AI flips that to abundance, so the whole thing dissolves.
Yeah, I think the rate of patentable things will way outstrip anything the Patent Office has planned.
I don't know if you could bundle them into groups or something.
But if you're depending upon a patent as your moat to protect you, you're dead.
Yeah, you're toast.
You're toast.
100%.
Totally, right.
For the record, I
disagree with that point. I would also point out that the U.S. Patent and Trademark Office also has
AI. So it's not like there's some fundamental asymmetry here. If patents and trademarks have less
value in the future because of the speed of innovation, I mean, you're either reinventing yourself
constantly or you're dead. We're seeing that in the whole AI world today, right? Things that were
built a moment ago, like SaaS platforms are becoming irrelevant as Claude 4.5 is reinventing them.
I think it's a little bit, just quick point on that, I think it's a little bit analogous to saying life can't possibly exist at the bottom of the ocean because the pressure is too great.
But what actually happens is the pressure inside the organism matches the pressure on the outside.
Similarly, we forget patent litigators are going to have AIs as well and the patent office will have AIs.
Everyone's going to have AI.
You're just loving Accelerando so much.
It is the best book ever.
And Accelerando, if you haven't read it yet, has a whole segment
about the lead character constantly filing patents.
Can I?
Before we leave that slide, on the second-to-last bullet,
I had a great, great meeting with Eric Schmidt,
Erik Brynjolfsson, and Daniela Rus out in Davos,
plotting out exactly how we can unleash entrepreneurs
and give them access to the compute they need,
because it is a very valid point.
If you can't get the best AI in large volumes,
it is very, very hard to compete,
and it's only going to get harder.
But we have a plan, and we're super excited,
and we'll roll it out very quickly.
Eric Schmidt moves fast.
when he has a great idea.
I love it.
I have two quick points to summarize this whole discussion.
Okay.
One was the unbelievable contrast at Davos between what's happening with AI and
the geopolitical nation-state bullshit, the discrepancy, the gap, between
those two conversations.
And the second was, for me, the Claude constitution stuff is just some of the most fascinating
stuff we've seen maybe in a thousand years for the future of humanity.
Yeah.
An important area of the same thing. Huge. Amazing.
By the way, Alex, you did an incredible job speed running these 10 questions. Thank you.
Whether you were right or wrong, or we agree with you or not.
Love the fact that you took it on.
Lightning round.
We have another beautiful outro music and video from CJ Truhart.
CJ, thank you for lobbing these over and DMing me with them.
Grateful for it.
Before we get to that, can I just say thank you, do you guys?
I feel topped up again.
Yeah.
Coming up shortly for everybody listening is going to be a WTF episode with Cathie Wood,
talking about her recent 2026 report and going through it in our WTF style.
So get ready for that.
And then a conversation with Brett Adcock, the CEO of Figure.
We're going to be heading to check out their facility and meet Figure
03, so get ready for it. Let's end with our escape velocity outro music from CJ Truhart. I have to
say, the visuals on this one are interesting. Dave, you and I are wrestling in the middle of this. I'm
not sure what it means, but it's very strange. Okay, okay, let's watch.
I knew you guys were frat brothers, but really...
I actually still have scars on my shins from carpet wrestling back at MIT.
Remember that?
Star Trek reality.
I look like Real Equations for once, actually.
I love the fact that you're holding a spanner.
A spanner.
So retro.
I've got a fusion reactor here, and I'm going to use this fresh, unfolded.
All right. Thank you, CJ Truhart, for that audio and visual extravaganza.
And if folks want to see a debate on Fiat versus Crypto, let's put it in the comments and let's see how that goes.
Okay.
Yeah, are we going to get to film back in the manufacturing area with Brett Adcock? I hope so.
I'll ask them.
It's so cool.
But normally they're really cautious with that stuff.
But it's so amazing when you see them back.
You know my question for Brett.
Yeah, I know why in the world does he have more than two arms?
No, no, that wasn't it.
The opposite.
I'm kidding.
All right, great stuff, guys.
Welcome back from Davos.
And I feel the same, Salim, recharged and re-energized.
And, yeah, and hopeful.
I'm coming away hopeful from this conversation.
Take care, guys. Be well.
Bye.
Thanks, Peter.
If you made it to the end of this episode, which you obviously did,
I consider you a moonshot
mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you
the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider
subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly
newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire
week looking at the meta trends that are impacting your family, your company, your industry, your
nation. And I put this into a two-minute read every week. If you'd
like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends.
That's Diamandis.com slash Metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
