Moonshots with Peter Diamandis - WTF 20251103
Episode Date: November 4, 2025. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/ _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Listen to MOONSHOTS: Apple YouTube – *Recorded on November 3rd, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Setting new records: OpenAI hits historic growth to $100 billion in revenue.
I think it's entirely possible that OpenAI could hit $100 billion ARR in a couple of years.
I think the easiest path is...
OpenAI, remember, is of course one of the most important companies in the world.
They could give it a market cap of $1 trillion.
But of course, this is an ongoing story, and there are still a lot of uncertainties as well in OpenAI's future.
The big question is whether the S&P 500 is overvalued and whether the Mag 7 can
continue to sort of command that level of valuation, because they're driving much of the economy
and much of the gains.
AI is going to be huge no matter what.
There's no doubt about that.
But is that scarcity sustainable, or are there going to be many competitors in a race to the bottom
and margins will come down?
With 800 million subscribers already, half this comes from subscription revenue.
That's more or less in the bag.
This is really dwarfing all history.
Now that's the moonshot, ladies and gentlemen.
So guys, I'm still getting up at like 3.30 in the morning coming back from Riyadh.
How are you guys doing?
I'm actually okay.
I get back and I learned a trick from Ramez and Raymond McCauley.
And they said take a double dose of melatonin and in a couple of days you're good.
I've actually been surprisingly okay.
I actually like getting up at 3.30.
It's like I've got like four hours before anybody else wakes up. And it's like amazing. Dave, how about you, buddy? No, I'm back on schedule.
All right. Fantastic. I don't think I ever got acclimated to Saudi, actually. I don't think I
slept more than four hours straight. What an amazing trip it was. I mean, just to recount one second,
I mean, spending time with Eric Schmidt and Fei-Fei on stage was awesome, hanging out with
Ruth Porat, the president of Google and Alphabet.
I mean, what an amazing woman she is.
Any favorite memories from you guys?
I have a selfie with Ray Dalio, and I spent an evening late with Balaji talking about, you know, U.S. China, et cetera, et cetera.
Oh, my, I got, I was, I had to duck out.
I mean, so here we had this dinner in Riyadh that we put together, basically with myself inviting, and Dave and Amjad from Replit.
And we had Cathie Wood
and Balaji debating China versus the U.S.
And I was like, oh, this is going to be a long conversation.
Balaji can talk.
Oh, boy, can he ever talk?
Yeah.
He's got some great framings, though, right?
He talks about the left versus right in the U.S.
as scribes versus vibes.
Yeah.
So the left is scribes, and they're like, crime is down by 50%.
And the right is vibes.
They're like, well, it doesn't feel good.
And it was a really great framing.
Having breakfast with Lip-Bu Tan was awesome.
I know I missed that.
The biggest miss of the week.
I know, but we'll get him on the pod.
He said he wants to join in the pod.
We'll talk about his time at the White House
and his skyrocketing shares at Intel.
I mean, it's exciting.
Any other favorite memories?
Philip Johnston invited me to the SpaceX launch.
I had a wedding to go to last night,
which was also a lot of fun,
but otherwise I would have been able to go down
and see the launch of the first H100 into orbit.
His H100, the first of those chips in orbit.
Yeah, that's, you know, not a huge amount of compute,
but certainly a bellwether for humankind
if we go down the Dyson sphere path.
So that was really fun.
He's a sharp dude.
Yeah, and for me, hanging out with Bill Ackman was fun.
He's getting involved in some of the stuff that we're doing.
And Bob Mumgaard, the CEO of Commonwealth Fusion,
was at our dinner.
So it was a lot of great time with Bob, actually.
He's deep, you know, he's an MIT PhD in nuclear stuff,
so he knows all the details of sustainable fusion.
There's a lot about it that I didn't actually realize was there.
There was one thing that he said that blew my mind,
because I asked him, when do we have a commercially
available reactor? And he said it was looking like
2032. And I'm like,
how much would it generate? And he's like, about 400 megawatts.
I mean, if we have fusion, working fusion, in
seven years, it's game over, right?
Yeah, I mean, I asked some of the fusion guys at XPRIZE Visioneering, what about Helion, right?
This is the Sam Altman-backed company that has a contract with Microsoft, and they're like, they're
so secretive, we have no idea what they're doing and what their schedule is.
But, Salim, one of my favorite moments was our last night going out to the farm.
Remember that?
Oh, that was so great.
We went for a majlis with 20 kind of fairly senior
Saudi folks.
Yeah, I had coffee with, um, let me finish on that one.
We had one of our super subscribers and super fans in Saudi who has
this beautiful farm right near MBS's, you know, private homes.
And we come out there and he's set up this majlis for, it was you,
me, Eric Pulier,
Emad, and Max Song.
And we did a sort of a private moonshot conversation,
Q&A with these 20 senior Saudis over there,
past ministers of education and commerce and finance,
all in the circle.
Yeah, that was awesome.
This is, it turns out to be, a very old tradition.
I remember in the UAE, they used to run these.
The leader would host it, and
anybody could apply to go to this and talk directly with the leader of the country.
It was kind of incredible.
I loved when he pulled out the microphone and he had the speakers.
And we were on stage doing Q&A.
You know, it was great.
I was scared for a second.
He wanted to do karaoke.
And I was like, oh.
Well, speaking of the majlis, at the SAGE majlis the leader of
Bermuda was there, I'm forgetting his name, but he was awesome. He's an IT background,
technology background guy, as it turns out. And he was taking credit
for Bermuda being the launch point of Bitcoin. So he eagerly wanted to make Bermuda the launch point
of SAGE, you know, of governance. Yeah, we unveiled a top secret project that
Emad and Salim and Dave and I have been working on called the
Sovereign AI Governance Engine, or SAGE, that would allow any country in the world to be able to
generate policy as these disruptive futures are coming. So that was awesome. But you guys want to
jump into the episode? Wait, hold on. I just want one quick bit of feedback. So I had coffee
with Abdullah the next day. Abdullah Alswaha? No, Abdullah who hosted us. Yeah. And he said he got
feedback from the group that it was one of the most powerful evenings they'd ever had in their
lives. So that was really incredibly generous of him. That's awesome. Yeah. And I met with
Abdullah Alswaha, who is the Minister of ICT in Saudi. He's basically the minister of AI. I said,
Abdullah, you need to have a new title. We're going to call you the Minister of Exponential Technologies,
much cooler than ICT. And so he's going to be on our podcast. He basically is the lead in Saudi
across all of the key technologies, the commitments they're making to AI.
It's super fun.
But I think we should get on with the episode.
How do you guys feel about that?
Sounds good.
I think we should just rip through it all.
We got to move fast.
There's so much in here.
Oh, we got a little nervous because we're trying to cram a lot into a short space of time.
Yeah.
Well, hey, man, it's exponential time like Alex is always saying.
We're going to have to get used to getting faster.
Do not sleep during the singularity.
AWG, good to see you, buddy.
Sorry, we missed you in Saudi.
Yeah, likewise.
But let's stop.
Every time I was having a technology conversation, Alex, I was like, wow, I wonder what
Alex's take on this is.
Yeah, no kidding.
It's like withdrawal.
Well, everybody, welcome to Moonshots.
Another episode of WTF just happened in technology.
This is your weekly dose of optimism and catching up with this hyper-exponential world.
and let's jump in.
First major chapter here is the speed of change.
And here we go.
So setting new records, OpenAI hits historic growth to $100 billion in revenue.
So here's the chart.
It's reaching $100 billion revenue in two and a half years compared to NVIDIA, which took eight years.
Amazon took seven years.
Google took 10 years.
So it's just speeding up.
And Alex, what's your prediction of when we're going to hit $100 billion with the next company?
I think it's entirely possible that OpenAI could hit $100 billion in a couple of years.
I think the easiest path is probably just taking agents and running them continuously 24-7,
as long as they're generating sufficient economic value.
I think it's not that difficult to imagine OpenAI tripling revenue year over year for the next two,
two and a half years and getting there in 2027. The key, again, is just taking knowledge work
and taking service economy and condensing, distilling that down to agents running 24-7.
Incredible. Yeah, they're only forecasting two and a half X growth year over year.
It's a very achievable target. Because I looked at this originally and I said, wow, that's a
stretch. But then you look under the covers with 800 million subscribers already, half this comes
from subscription revenue. That's more or less in the bag. Then the other half is
much more interesting: it's where the AI gets good at commerce and recommending products and then
figuring out how to monetize that. And that's the part that attacks Google. So that part is
a little more up in the air, and Amazon for sure. For sure. But it feels
like it's definitely going to happen. The only question is whether OpenAI competes effectively
with Amazon or Google, or whether Google just takes it and Amazon takes it back. But it's going to happen
either way. So it seems like a very reasonable forecast.
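For listeners who want to check the math, here's a tiny sketch of the compounding the guys are describing, starting from the roughly $13 billion revenue figure mentioned later in the episode; the growth multiples are the assumptions being debated here, not OpenAI guidance.

```python
# Toy compounding check: does ~2.5-3x annual growth take ~$13B to ~$100B by 2027?
# Starting figure and multiples are assumptions pulled from the conversation.
def project_revenue(start_billion: float, annual_multiple: float, years: int) -> list[float]:
    """Return year-by-year revenue, compounding a fixed annual growth multiple."""
    path = [start_billion]
    for _ in range(years):
        path.append(path[-1] * annual_multiple)
    return path

for multiple in (2.5, 3.0):
    print(multiple, [round(x, 1) for x in project_revenue(13.0, multiple, years=3)])
# 2.5x: [13.0, 32.5, 81.2, 203.1] -> crosses $100B between year two and three
# 3.0x: [13.0, 39.0, 117.0, 351.0] -> crosses $100B in roughly two years
```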
Also, looking at this chart, you know, OpenAI's number there is a projection, but NVIDIA's is in the bag.
That's a real number for NVIDIA.
And, you know, the chart starts at $10 billion, but if you started at $20 billion, you know,
NVIDIA would look just like OpenAI on this.
So that part is already very, very real.
So, yeah, and, you know, look at all the history of all other curves, including the greats like Google.
This is really dwarfing all history.
Every week, my team and I study the top 10 technology metatrends that will transform
industries over the decade ahead. I cover trends ranging from humanoid robotics,
AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff. Only the most
important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to
share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute
read via email. And if you want to discover the most important metatrends 10 years before anyone else,
this report's for you. Readers include founders and CEOs from the world's most
disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you
if you don't want to be informed about what's coming, why it matters, and how you can benefit from it.
To subscribe for free, go to diamandis.com slash metatrends to gain access to the trends 10 years before
anyone else. All right, now back to this episode. You know, I read some of the user comments,
some of the subscriber comments, and one of them said, every time Peter says, Incredible, should take a
drink. It's a new drinking game. So I'm going to cut back on my incredible.
Incredibles. But hey, this is incredible. All right. Let's move on. I found this one fascinating. This is the U.S. leading the world in data centers. So I would say incredible. We have 5,426 data centers compared to the rest of the world. Look at that. Germany is at 529. China is at 449. We have more data centers than the rest of the world combined.
I really tried to research this, Alex. I want to get your take on this, Alex, because it definitely is juxtaposed with China having massively more power and massively more core manufacturing ability. This one really surprised me, but I couldn't find any detail behind it online. So what's the understory here?
Yeah, no, remember, the Internet was born here. The U.S. has lots of available land. Hyper-scalers are largely based here. We have access to capital. I don't think it's that
surprising, that number of data centers. Remember, this is not the number of AI data centers or the
number of neocloud or Stargate-type data centers. This is just the total number of data centers
of any modality in the U.S. Is there an AI petaflops version of it, which would be more
meaningful than just a raw count of data centers? Almost certainly. Maybe we should cover that in the next
episode. Yeah, I couldn't find it. Our next story here is NVIDIA reaching a $5 trillion market cap.
Holy cow. It's up 1,500% in the last five years. The market cap is greater than the GDP of every
country in the world except U.S. and China. Yeah, that metric is frustrating because you're talking
about the asset value of NVIDIA. The value of the company, if you were to acquire it,
is $5 trillion. You should be comparing that to the asset value of countries, not to the GDP of
countries, which is already mind-blowing enough. I checked it out, and
that makes NVIDIA worth the same amount as Saudi Arabia, where we just were.
What a coincidence.
It's actually a little more than Switzerland.
In terms of that, if you were to try to buy NVIDIA with your own money or buy Saudi Arabia,
if they would sell it, the cost of buying the entire country, all the land, all the assets,
all the buildings would be the same as buying NVIDIA.
And that is staggering enough.
We should be comparing apples to apples because it's already mind-blowing.
It's right between Switzerland and Saudi Arabia.
So if you want to buy Switzerland or buy
Saudi Arabia or buy NVIDIA, those are yours.
So we're going from nation states to corporate states in a way that's, that's, you know, incredible.
A few years ago, we looked at getting a bunch of investors together and actually buying a small
country on exactly that basis. In that case, it was about 200 million. But then you get a seat
at the UN and you have all this access, or you're part of the WTO, and you could really do some
interesting things. So that was an interesting one. I looked for a historical
record, and what I found was General Motors in 1955 was the first company to hit
$10 billion. It was during the post-war auto boom. And $10 billion corrected for inflation
today is $121 billion. So we're talking about a completely different category, right? 50 times
bigger than General Motors at its peak. Well, another... I'll take the other side of that,
if I may. I mean, history tells us at any given
time, the market values what's both scarce and needed. So we've seen multiple East India
companies. We've seen various scarcities, including oil, pop up over the centuries. I would
argue that this is actually just a market signal that right now compute is both scarce and
needed, but the way the game of capitalism works is this value wants to diffuse over many
companies and probably many countries over time. And that diffusion is going to be a net wealth
creator. That's very true. I mean, there are going to be so many additional chip manufacturers.
We'll talk about some of them here on the pod today. I found this chart particularly exciting,
which is the decoupling, job openings versus S&P 500. Those of you looking on YouTube or
listening, here's the chart. We see the S&P 500 and total job
openings basically mirroring each other from 2000 through 2023. I mean, exact parallel curves,
right? As job openings, total job openings, increase, the S&P 500 increases, or the other way around.
And then in late 2023, we see this departure. And the S&P 500 takes off and job openings drop from
11 million openings to 7 million openings. And so the question is what happens in late
2023. Well, if you look at the data, it says chat GPT gets launched. Let's dive into this one.
Dave, you want to jump in? Yeah, well, so I love the storylines that'll end up in the history books
as opposed to the news du jour, you know, Taylor Swift type stuff. This is one where very likely
that future history books taught in schools, if there are schools, will point to this moment
in time and say, what happened here? Because that trend is going to continue. Now, the deniers
are going to look at this chart, and they're going to say, well, look, that's just COVID, you know, happening, and then a big rebound from COVID, and now we're back to kind of normal job opening levels. You know, look at kind of history here. But so what happens next is this is either a historic moment if the trends continue, which I think they will. Or this is, you know, just sort of like a blip COVID recovery thing. But I think, you know, Alex will look at this and say, yeah, this is the beginning of the inevitable. Alex, do you say that? Divide by zero.
I will actually say much as I'd love to tell a just-so story that this marks the beginning
of the decoupling of labor and capital.
I think this is actually just garden-variety changes in Federal Reserve interest rates, the hikes
in late 2022.
As interest rates started to come back down, the market goes up.
And as COVID starts to retreat, job openings and job displacements also start to return to
their earlier levels. So, again, I would love to tell a just-so story, that this is the beginning of the decoupling
of labor and capital, but I think this is much more prosaic. Well, I'll give you the opposite.
We'll know in hindsight, obviously, but the opposite, if you look at college graduates coming out
right now, they're massively sorted into AI people getting incredible offers and everybody else
not finding a job, which is very unusual with the S&P being at all-time highs like this.
So that would be the counter-counter argument that, well, no, if you're 21, 22 trying to find a job,
right now, you're really feeling something unusual. And we'll see later in the deck
the layoffs at Amazon, you know, while they report record earnings. So there's some other data points
that would indicate that, yeah. I'm on the plus side here. I usually think this, but I think this
is a major mark here. Humans have now become optional inputs into the economy.
That's a big deal. Yeah, I tweeted out that AI is no longer, you know, an
industry or sector, it is the economy. And Elon responded saying AI and robots are the economy.
Which is true. You know, the S&P 500 going up is an indication of market
confidence, right, where there's optimism about the future and people are investing.
And I've got to believe that's fundamentally true. People are excited about the Mag 7 or 8 or
whatever they're up to these days, basically taking off and driving
their valuations through the roof.
Just to flip to the downside, though, let's note that most of the gains are just from the AI
companies, right, and the tech companies.
Yeah, for sure.
The rest of the market is really, really not in great shape.
Yeah, so a lot of the job cutting is actually an anticipation of AI coming.
So it's not full automation yet.
But if you look at Amazon as a bellwether for that, Amazon's right in the middle of the AI
fray.
They have huge amounts of labor in their delivery business, yet they have this massive data
center and AI business. So that'll be the bellwether on whether the true automation kicks in,
and I think it's very real if you look at their, the numbers coming up in the slides here.
I mean, the big question is whether the S&P 500 is overvalued and whether the MAG7 can
continue to sort of command that level of valuation, because they're driving much of the economy
and much of the gains. Dave, what do you think? Well, you know, Leopold actually, you know,
He went long Intel and long Broadcom, but he shorted the semiconductor index as a whole.
And I didn't dig in on that until yesterday, but 20% of that is NVIDIA that he shorted when you short the whole sector.
So that would be the argument that, look, if the whole thing is going to collapse, it's because NVIDIA in particular is valued like, you know, Switzerland, more than Switzerland.
And is that rational?
And I think Alex is dead right.
You know, right now,
NVIDIA is right at the crosshairs of the true scarcity.
AI is going to be huge no matter what.
There's no doubt about that.
But is that scarcity sustainable?
Or are there going to be many competitors in a race to the bottom?
It's going to diffuse, right?
We're going to go, it's going to be Broadcom.
It's going to be AMD.
It's going to be Qualcomm.
And it's going to be a whole bunch of chip manufacturers.
So it will diffuse.
but we'll see sort of the peaks, perhaps, of NVIDIA?
Well, really specifically, too,
NVIDIA is, you know, Mellanox as interconnect,
a million coherent GPUs operating on one big problem.
But most of the industry is inference-time.
Inference time doesn't need any of that.
And so, you know, we'll get to that later, actually, but that's the risk.
So I found this clip by Geoffrey Hinton, right?
Geoff, a Nobel Prize winner. Geoffrey Hinton was on stage at
my Abundance Summit. I've invited him back to join us on a podcast. Let's see if he takes
it up. And all of a sudden, you know, he has been so concerned about digital superintelligence
and he put forward an optimistic view of AI. Let's play this clip from Dr. Hinton and then
let's chat about it. More optimistic than I was a few weeks ago. Really? Yes. And it's because
I think there is a way that we can coexist with things that are smarter and more powerful
than ourselves that we built. Because we're building them, as well as making them very
intelligent, we can try and build in something like a maternal instinct. The mother can't
bear the baby crying. The mother really, really, really wants that baby to succeed and will
do more or less anything she can to make sure her baby succeeds. We want AI to be like that.
So, a mothering instinct in our superintelligence. I mean, I buy it. I'd love that. I want this,
you know, this digital God to be sort of loving and warm and supportive and uplifting of all
of humanity. How do you feel? You know, what you often find when something brand new comes
along, the first instinct is to freak out, right? And remember, you identified this in your Abundance
book, Peter. We have this amygdala that goes nuts because, on a survival basis, we have been
geared for four billion years to scan for danger and then run, right? And when we see
something new like an autonomous car, the first reaction is, oh my God, that car
might kill somebody, let's ban the car until we figure it out. And Brad Templeton
used to joke, we don't want to be killed by robots, we'd much rather be killed by
drunk people, which is what's happening today. And so you have to get over that
curve and let the evidentiary basis of the elegance of an autonomous car come
to you. And often, also, people that are very focused on technology, folks like
us that have spent most of our lives focusing on the technology, ignore their emotional
side, right? And little by little the emotional side comes into play, freaks out initially, and then
little by little warms up to the task. People forget the unbelievable benefits that
AI is delivering and will deliver. That's the part: they only see the dark side and we don't see
the unbelievable benefits. So I'm really thrilled to see this. I think we're going to see a lot more
of this as time goes by. I don't really buy the maternal instinct, that AI is a maternal thing. It seems a little
off to me, because parenting is such a visceral, subjective experience
that I'm not sure how that gets replicated, but we'll see.
We have to give the AI oxytocin.
Alex, what do you make of this?
Yeah, I think it's difficult to buy.
It seems to be an argument premised on what in the AI alignment research community is called
the orthogonality thesis, that it's possible to have intelligent agents of arbitrarily high
levels of capability that nonetheless can be directed to pursue any goal. In this case, it seems
like Jeff is basically re-articulating the orthogonality thesis with perhaps a veneer of digital
oxytocin, as you said, Peter. I think that's unlikely and probably not that robust
a means of alignment for superintelligence. I think if the goal is to have a more robust guarantee of
friendliness, approaches that acknowledge instrumental convergence, instrumental convergence being the
idea that no matter what your long-term goal is, you tend to have certain convergent, common,
short-term goals. I think instrumental-convergence-type approaches are more likely to guarantee or
provide robust guarantees of friendliness. So James Miller wrote, I think, an excellent essay
called Reasons to Preserve Humanity on LessWrong, that enumerates a couple dozen different
reasons for why superintelligence should play nicely with humanity out of
self-interest, not out of some sort of oxytocin-induced, surgically added
reason that's sort of artificial. All right, I want to say one more
thing about Geoffrey just really quick. He did a podcast with Jon Stewart a few
weeks ago, and Jon Stewart kind of said, I'm a newbie, take me
through deep learning and the whole
framing of neural nets. And it was an absolutely brilliant episode. If you want to understand a
little bit about deep learning, backpropagation, et cetera, he did an amazing job at laying that
out. All right, I'll take a look at that. Here's our next story: deepfake of Jensen Huang
draws more views than the real one. I found this absolutely fascinating. And I want to share this
video. And what we're going to see here is there's an official NVIDIA channel, which is showing
Jensen's presentation, and it peaked at 12,000 views. And then there was a fake live stream that peaked
at 95,000 views. Let's take a look at the fake live stream.
And cutting edge hardware with decentralized finance. It's about proving that crypto works
reliably, globally, and for everyone. A couple of things to keep in mind. Only use the QR code
you see right here on the GTC broadcast. Don't trust any links floating around online. They're not us.
I love that.
The fake broadcast saying, don't trust anything else.
That's hilarious.
You know, the numbers are pretty staggering.
It's $1.5 billion lost globally for deep fake-related fraud since 2019.
And only 24.5% of people, according to the numbers here, can actually spot quality deepfakes.
And AI detectors fail up to 50% of the time.
So this is going to be a thing.
This is going to be a thing.
I think reality may have just lost the algorithm war.
I think in this case, if you look closely at the video, the lip syncing was poor.
You're part of the 24% that can notice this.
Okay, Alex, you win.
I definitely noticed the poor lip syncing.
I think, like, in the short term, detecting counterfeit live streams in real time
doesn't seem like a terribly deep technical challenge.
In the long term, we're going to have more solutions like ubiquitous watermarking, perhaps,
cryptographic guarantees of reality, perhaps.
I don't think in the long term this is a deal killer for us drowning in AI-generated slop-counterfeit live streams.
I think this is very tractable.
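As a concrete illustration of the "cryptographic guarantees of reality" Alex mentions, here's a minimal signing-and-verification sketch using the Python cryptography package. The single-shot signing and key handling are simplifying assumptions; real provenance schemes sign individual stream segments and distribute keys through certificate chains.

```python
# Minimal sketch of signed media provenance (assumes `pip install cryptography`).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Broadcaster side: hash the video bytes and sign the digest.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()           # published out-of-band by the broadcaster

video_bytes = b"...raw keynote segment..."      # placeholder payload
signature = private_key.sign(hashlib.sha256(video_bytes).digest())

# Viewer side: recompute the hash and verify the signature against the public key.
try:
    public_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("Stream matches the broadcaster's signature.")
except InvalidSignature:
    print("Counterfeit or tampered stream.")
```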
Yeah, within the U.S., I agree.
I think if you look globally, it's a little more of a concern.
There's a real possibility that regimes lock themselves in, because
control of media content is going to be so easy with AI assistance.
And then convincing your population of virtually anything gets trivially cheap and easy.
So I would be more concerned about, you know, in some nation where they're not as aware of AI watermarking or, you know, whatever, they're seeing things and just believing them.
Well, I would say our nation isn't aware of that.
And Alex, you made a point, you know, when we were discussing a year ago, that AI-generated speech is far
more compelling than human speech. All right, let's move on to the AI wars. This is xAI versus
OpenAI versus Google. The continuing saga. What happened to Anthropic? We're just writing off
Anthropic? Just like that. Well, no, I mean, they're in there, but, you know, these are our major
players today. We'll talk a little bit about Anthropic. Dario would really object to that.
Oh, listen, and I love Anthropic. Don't expect to get him on the pod now.
No, I love Anthropic. And I want him on the pod, for sure. All right. But let's, let's jump in here.
So I'm going to rant on this one.
So xAI launches Grokipedia.
I had a friend of mine.
Remember Justyna from Singularity University,
Salim?
Sure.
Yeah, so she sends me this text.
She goes, because she heard our pod, where I was arguing or, you know, lamenting Wikipedia's
inability to correct all the wrong things in my entry, and I actually hired consultants to fix
Wikipedia for me because I'd make the changes and they'd be changed
back. It's like, this is ridiculous. So she says, hey, Grokipedia's out, with your
Grokipedia entry. What do you think about it? And I look at it and it's amazing, right? It covers
everything in detail, super well referenced. So Grokipedia is being written by Grok. It's
writing, updating, and fact-checking it on a real-time basis. And they have 900,000 articles compared to
Wikipedia's 8 million articles. My particular
entry here was 8,500 words on Grokipedia versus 4,800 on Wikipedia, but it was so well
organized. And I just absolutely loved it. Any comments? I'll throw in a comment. I want to reason by
analogy that there's a process that those not steeped perhaps in semiconductor manufacturing may not
be familiar with. It's called zone melting. And it's a process for purifying, not knowledge in this case,
but purifying semiconductors.
And the idea is you take a rod and you pass it through a heater.
And because there are more ways for impurities
to exist in the melted state
than in the solid state,
the impurities migrate out of the solid into the liquid state.
And you do this over and over again.
You get a purer and purer semiconductor.
We don't have a science right now for knowledge purification.
But one could imagine, sometime in the near future,
we have a science for it.
And we decide there are more ways for correct knowledge to be self-consistent than incorrect knowledge.
And I think we're starting to see the beginnings of almost a knowledge equivalent of zone melting where you take the raw slop, human slop of the Internet.
You pass it through multiple passes of AI gen synthesis, creating what aspirationally would be more correct versions of the ground truth, do this over and over again.
and maybe aspirationally because there are more ways for the truth to be self-consistent
than whatever the starting knowledge was, maybe we arrive at some sort of ground truth through
this process. I don't know, but it would be interesting if that ended up being the case.
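Purely as a thought experiment, here's what a "zone melting for knowledge" loop might look like as code. The llm callable is a hypothetical stand-in for whatever model does the critiquing and rewriting; this is not how Grokipedia is actually built.

```python
# Hypothetical multi-pass "knowledge purification" loop, in the spirit of zone melting.
# `llm` is a placeholder for any text-model callable; the prompts are illustrative only.
from typing import Callable

def purify(article: str, llm: Callable[[str], str], passes: int = 5) -> str:
    """Repeatedly rewrite an article toward internal consistency and cited support."""
    for _ in range(passes):
        critique = llm(
            "List every claim in this article that is internally inconsistent "
            "or unsupported by its cited sources:\n" + article
        )
        article = llm(
            "Rewrite the article so the flagged claims are corrected or removed, "
            "keeping everything else intact.\n"
            f"Flagged issues:\n{critique}\n\nArticle:\n{article}"
        )
    return article
```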
Well, that is Elon's objective, right, basically trying to, from first principles,
derive truth. I put a quote down here from Elon. He says it's
a step towards xAI's goal of understanding the universe. To which I have two comments.
Please.
One is, you know, Peter, you know I've talked for a long time and written in the book that
staff on demand and a community doing work is essentially a proxy for AI.
Yes.
Driving is a great example.
But now we see it actually applied.
If you take a Wikipedia article, for a human editor to go through and track down all the links
and ratify everything is just a pain in the ass, and it's not the strength of a human being.
An AI can do this without even blinking.
I think that now propagates to a level where Wikipedia with an AI interpretation, per Alex's
metaphor, which I think is absolutely fantastic,
now gives us the ability to get closer and closer to pure truth.
I never quite understood what Elon was talking about when he says maximizing or truth-seeking
maximizer.
But now I'm starting to get a sense of it.
And it's absolutely brilliant.
It's fantastic if we can get it there because it can cross-reference all the stories
and cross-check things in a way that no human being will take the time to do, and it'll do
it much more accurately.
What Alex just described is really, really similar to the original Google PageRank algorithm
where, you know, just starting from nothing, iterating between a reference link and a site
and assigning credibility back and forth in this self-annealing process, a simulated annealing
process, and it worked.
I mean, they don't need it anymore because they have so much data flowing in, but when they
were just a little startup bootstrapping, it worked really well.
This is also my comment. I use PageRank as an example when people ask,
what is AI, et cetera. And I say, look at PageRank. It's evolving a completely separate type of intelligence
for crawling billions of pages and making sense of them, one that's very orthogonal to the way human intelligence
works, not replicative. So I think AI tends toward this totally different type of intelligence
of mass-crunching data and finding signal from noise in a way that we're not designed to do at all.
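For anyone who never saw it, here's the textbook PageRank idea Dave is referring to, sketched as a short power iteration over a toy link graph; this is the classic published algorithm, not Google's current ranking stack.

```python
# Classic PageRank by power iteration over a toy web graph.
# Each page's score is redistributed along its outbound links every pass.
def pagerank(links: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages          # dangling pages spread their score evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # "c" ends up highest: the most credibility flows into it
```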
Well, just thank you to Justyna Zander for pointing this out to me. I'll also mention real quick, I checked,
and it turns out Wikipedia has a budget of 170 million a year. About a hundred million of that is
labor, paying everybody to do this work. Some of it's voluntary, a lot of it is not. All right,
let's move on. I also just want to give a shout out to Jimmy Wales for creating Wikipedia and managing it for
all these years. I mean, what an unbelievable contribution to humanity. I mean, we're just
watching the transition from the Encyclopaedia Britannica to Wikipedia to Grokipedia.
But Grokipedia was low-hanging fruit.
Any of the AI companies could have taken this on.
And I think it's going to become pervasive.
I know I'm standing up a new website for Diamandis.com.
And the very first thing I'm putting at the top is my Grokipedia link.
You want to go deep?
There it is.
So true to AWG's vision, this is an important conversation.
And Alex, I actually read this paper.
So this is a new AGI benchmark, which gives ChatGPT a 57%.
But I would prefer if you explained it because it's pretty amazing.
And Salim, this one's for you, buddy.
Finally, we have a definition of what AGI is and how to measure it for the first time ever.
This was a paper that was co-authored by Eric Schmidt as well.
And it's a pretty powerful concept.
Alex, would you take us through it?
Sure. So there's an enormous cottage industry of AI researchers trying to define what intelligence even is. I've been guilty of that in past years as well. My bias has always been to look for a universal, elegant definition of what intelligence is that isn't necessarily grounded in human behavior or human psychology. This paper, and as Peter you mentioned, we know a number of the co-authors on this paper. The basic idea behind this paper is to do the exact opposite. Instead of trying to
look for some human agnostic definition of intelligence so that we can build more of it.
The idea instead is to look at human psychology.
So there is a theory that's popular in human psychology.
It's called the Cattell-Horn-Carroll, or CHC, theory, which decomposes human intelligence into
10 different factors, like the ability to reason quantitatively or to do visual processing.
So the idea behind this paper is to define a benchmark that's directly
inspired by the CHC theory to decompose the intelligence of frontier models into 10 different
categories with various subtasks associated with each category. And the main upshot of benchmarking
GPT-4 and GPT-5 Auto, critically not GPT-5 Pro, according to my reading of the paper, the main upshot
is, surprise, intelligence is jagged: the frontier models are stronger at some skills
and weaker at other skills, whereas the archetypal human would perhaps
have a more uniform distribution of their skills across these 10 categories. But I would add the
important caveat, again, just based on my reading of the paper, they didn't actually benchmark
the bleeding-edge frontier models like GPT-5 Pro. For those looking at this on YouTube, you'll see
these 10 different categories. These are human-like skills: knowledge, reading and writing, math,
reasoning, working memory, memory storage, memory retrieval, visual, auditory, and speed. And they're
benchmarking GPT-5 and GPT-4 against those. But it's a measurable benchmark, right? I mean,
the other option is the pornography definition: we'll know AGI when we see it. Dave, what do you want to
add on this? I'd love to get Alex's take. So I assume a 10 on each axis on this radar chart is
human, so you're trying to match the outer ring here. But is it really all humans? I mean,
no human is going to match 10 on this. Maybe a quote-unquote well-educated adult. Okay. Okay, well, you know,
it's going to be, you know, the difference between the best human and an average human is a rounding
error in the grand scheme of AI. You know, it's almost identical, actually. But, you know, it's so asymmetrical,
and I don't understand the memory and storage access axis and the speed axis. Like, how can you say the best
AI is miles behind humans on speed? I don't quite get that. That was the part that struck me also.
Speed, so when you make a query, you know, to do something, your AI goes off and thinks about it for a while before it comes back with an answer.
That is speed. A human, if I ask a question, Alex will typically not go away for five minutes and think about it;
he'll give me at least his version of an answer right off the bat.
But memory, I found incredibly, you know, perplexing because I thought these AIs have incredible
memory.
Alex, what's your take on speed and memory?
Yeah, but based on my read, the memory storage access corresponded or the deficiencies
thereof corresponded roughly to the fact that off-the-shelf vanilla language models and
foundation models have a finite input context window.
So if you ask them questions that reference older information by default, unless there's some sort of compression or memory compactification or rag type mechanism, they don't have the ability to remember things that you told them a long time ago.
But again, I want to caveat this benchmark.
I love benchmarks in general, as I mentioned previously on the pod, but these are off-the-shelf models without any agent bureaucracies on top of them, without prompt optimization, without even access to bleeding edge reasoning efforts.
This is just, if you read the paper, it's just GPT5 auto.
So I'm wary to put my finger on certain deficiencies as being in any way indicative or instructive of the limitations of AI.
Meaning these, there are other models that would perform much better on these 10 parameters.
Or light modifications of existing models that, as with RAG, retrieval, augmented generation, make them superb at certain skills.
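To make the retrieval-augmented generation point concrete, here's a minimal sketch of the retrieval step: older notes are scored against the query and the best matches get prepended to the prompt, so the model can "remember" beyond its context window. The bag-of-words embed function is a toy stand-in for a real embedding model.

```python
# Minimal retrieval-augmented-generation (RAG) retrieval step.
# `embed` is a toy bag-of-words vectorizer standing in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memory = [
    "User's company is called Acme Robotics.",
    "User prefers answers in bullet points.",
    "User asked about fusion timelines last week.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored notes most similar to the query, to be prepended to the prompt."""
    return sorted(memory, key=lambda note: cosine(embed(query), embed(note)), reverse=True)[:k]

print(retrieve("What did I say my company was called?"))
```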
So I think that where this is useful in my mind is just having yet another benchmark for, as a proxy, it's a start, an important start, for measuring human capabilities against AI capabilities.
But when I see 57%, you know, we've talked in the past on
the pod about how, in the style of Ray Kurzweil, the moment you've passed 10% or maybe even less,
you're basically halfway there. And getting 57% on a general human psychological benchmark,
that indicates to me that probably with a little bit of reinforcement learning, a little bit
of agent bureaucracy, frameworks, scaffolding, you'd probably get to 90% today. Amazing. Well, by the way,
everybody, just some forward-looking news: Ray Kurzweil is going to be joining us
on the pod next month to talk about his predictions for 2026. Yeah, it's going to be a lot of fun.
I have my standard responses to this, which is I think this is really great for approximating
or getting to kind of the frontal cortex and neocortex activities, but it doesn't deal with
emotional intelligence or spiritual intelligence or any of the other dimensions of intelligence
that we typically attribute to human beings. But I thought about you, Salim, I thought about you
specifically on this one because it's going to be a measurable benchmark that we can at least
point at. And we're going to discuss whether we're going to hit AGI, according to OpenAI, in late
'26, '27, or '30. But what I'm saying is I disagree with the premise, because AGI for me would
incorporate these other things. Okay. So if you're measuring pure IQ test type stuff, fine.
This is a great benchmark and we can kind of go on. I wonder if we're going to have the first
AI spiritual leader who proclaims a religion and...
Oh, I think that's very doable.
You know, I remember once spending time in the Himalayas with some of these gurus, right?
And I sat with these guys with the orange robes and the long beards.
And I came out with the conclusion.
There's about 10 or 15 questions, like what is the meaning of life type of thing?
If you have a pretty good answer for those 10 or 15 questions, you can become a guru.
And that's kind of an LLM.
That's your neural network.
So I don't think that's going to be hard.
I think it was very doable.
By this time next year, there's going to be an AI-based religion that is going to scale at a hyper-exponential.
It's going to be amazing.
All right, big news this week.
OpenAI restructures to become a public benefit for-profit corporation and a nonprofit.
So the OpenAI Foundation will hold a $130 billion stake, 26% of the new company,
and the OpenAI Group is now what's called a PBC, a public benefit corporation.
Salim, you and I did that move with Singularity University,
converting it from a nonprofit to a for-profit and spinning out a nonprofit.
Exactly this, it was a benefit corporation with a nonprofit alongside.
And so here's the ownership.
Actually, a B-corp can do anything a C-corp can do.
So go public, raise money, be profitable, all those things.
This is a great point that you're making, Dave.
For the viewers, from a taxation,
and legal perspective, a B-Corp is exactly the same as a C-Corp.
Which is every other public company.
The difference is that in a C-corp,
the board is obligated towards financial optimization
and can be sued if they're seen not to be doing that,
whereas in a B-corp, the board is obligated towards whatever the mandate of the B-corp is
and can be sued for that, in theory.
I love the percentage ownership here.
So here we go.
Microsoft owns 27%.
The nonprofit Foundation
owns 26%, and the remaining 47% of the OpenAI PBC is held by employees and other investors.
And this restructuring is going to allow OpenAI to go out and raise money.
But here's the rub.
Here's the rub.
Elon's lawsuit against OpenAI remains active.
And his bid to try and block the restructuring was denied in court.
but the case will proceed to trial in the spring of 2026 is what I read about.
And the implications are interesting, right?
So, number one, the court could order a rescission that unwinds the OpenAI for-profit
PBC structure and restores nonprofit control.
Number two, the key deals such as revenue sharing with Microsoft could be voided or renegotiated.
And number three, there are potential damages and reduced
fundraising flexibility for the PBC that could result.
So that's going to be interesting drama in a year from now.
Yeah. Alex, you know, if you look at the valuation of the company, the market does not believe
any of those problems will actually be material.
For sure.
So it seems unlikely.
But I think it is, you know, Elon's got a very valid point in that, you know, that whole time
you're a non-profit, you're not paying any tax.
And if you're secretly building a massively profitable, you know, trillion-dollar company
while avoiding taxes, that's a terrible precedent.
You can't do that.
And Elon even said it.
Like, if that were legal, everybody would do that and start your company as a charity.
So I think the courts will have to say, yeah, you can't do that.
And the penalty would be like a dollar or something, just like they did with Microsoft antitrust.
You're like, okay, you're guilty.
You're fine, a dollar.
They were fined a dollar?
One dollar, yeah. That was the whole Microsoft thing, you know, the whole kill Netscape, destroy the entire company, Mark Andreessen, you're out of a job.
I'm like, what's the cost? A dollar?
The same thing happened, by the way, in the 50s, when Goodyear and GM banded together
and bought all the train tracks in LA, privatized them, and just ripped them out.
And there was an antitrust case, and they got fined a dollar.
Okay.
I'd like to point out maybe quickly two possible societal goods here.
One is that this results in one of the world's largest non-profits being created that now has the backing of a frontier lab.
And the stated goal of the new open AI nonprofit, one of their first goals is to spend $25 billion using AI to solve disease.
And I think that's a tremendous societal good.
We've spoken here in the past about how AI has the potential to solve disease in biology in the next five years.
I think this is another arrow in the quiver of making that happen.
Second societal good, one of the things I worry about is what happens if a private frontier lab develops
incredible superintelligence and decouples from the human economy. I think putting open AI on a
trajectory where it can reasonably be expected to go public sometime in the next two to three
years. I think an IPO by OpenAI and other frontier labs and putting the equity in the hands
of retail investors and index funds is almost certainly a net societal good because it keeps
the economic interests of large chunks of humanity aligned with these frontier labs and vice versa.
Well, corollary to all that, too, Alex, I think that, you know, everyone's like, hey, Brendan Foody, Mercor, he's a billionaire at age 23. He spent an immense amount of time inside OpenAI's building. And we saw him there in the lobby last time we were there. If you think, what's my life mission? Am I starting a company? Am I, you know, changing the world? Am I solving all disease? Regardless of what your life mission is, think about the impact of $25 billion of charitable money just to solve disease. What about the other $100 billion? Where's that going to go?
So if you're involved in this in any way, and you don't have a strategy for how you interact with open AI, how am I in that building?
How am I relevant?
How am I going to be – when they start turning to commercialization and goods through the AI engine, how do I interconnect with that?
You know, I ask all these entrepreneurs, what's your open AI strategy, and a lot of them have no answer.
But you think about the scale just of what Alex just said.
You know, you have to have an answer to that.
This will be, this is the largest nonprofit in terms of capital base, and it will be, you know,
even bigger; it will reach half a trillion dollars. And Dave, you remember, you and I met with,
I'm not going to say who it is, because I don't think it's been officially released, the individual
who's a co-founder there that will likely run this OpenAI Foundation. We're talking about
potentially spinning up some XPRIZEs as a means. He was very knowledgeable about these ideas of
incentive competitions to sort of leverage capital 10x. And we just learned this year that the numbers
from XPRIZE show we leverage every dollar in the purse by 60-fold.
So imagine if, you know, $100 billion becomes $6 trillion of leverage.
What a fun time ahead.
Well, think about the scale, too.
It's exactly a great point that, you know, a normal big, big XPRIZE is a $100 million prize.
You know, and you're talking about $130 billion, which, you know, if the stock goes up post-IPO, could be $260
billion. So all they have to do is, you know, sell some shares and fund a hundred-
million-dollar prize; they could do that every day of the week. I dream about having
ten $1 billion prizes for the 10 biggest problems. It would steer where students spend
their time, where founders and partners focus on building companies. I mean, it would be sort of
a flame to the entrepreneurial moths out there. Can I take the other side of this
just for a second? Sure. Of course you should. Virtue signaling. They're putting all
this money over there and then going full speed towards the IPO and hoping that the good that
they can do will kind of balance the crazy path to greed. Do not notice the man behind the screen.
And speaking about greenwashing, here's our next story, with Sam Altman turned green here,
a little bit of Shrek in his DNA.
And the title here is OpenAI plans a $1 trillion IPO
and to spend a trillion dollars a year in AI infrastructure.
I love this.
We've said this before, a trillion dollars here,
a trillion dollars there.
It's becoming a word far too popular these days.
I mean, the speed is incredible.
So Open AI is your word, Peter, it's incredible.
It is incredible.
And you also got to hand it to Sam for the unbelievable,
sheer gumption of just going for it.
Wow.
No, amazing.
Just incredible.
Well, hey, let me put some numbers behind that.
So here he is.
I'm going to spend a trillion dollars a year.
I'm going to do a trillion-dollar IPO,
and then I'm going to spend a trillion dollars a year on data centers.
Your actual revenue today, dude, is $13 billion.
Now, you're saying you're going to get to $100 billion.
But the equivalent would be if you have a household with $100,000 of income
and your husband or wife comes home and says,
honey, we should spend $10 million a year on houses and stuff.
That's the equivalent metric, okay?
So just to put it in context, that's the gumption, like you said, behind this claim.
But, hey, he's done everything he's said so far, so it's plausible.
Yeah, so their IPOs planned in.
I'll try that with Lilly and see how far I get.
Well, you did that when you mortgaged your house to buy Bitcoin, which in retrospect turned
out to be a good idea.
We didn't put all of it in, unfortunately, but still.
Do you still get this?
Mortgage it again.
I think it's actually like a pretty tiny number.
Global GDP is upwards of $100 trillion.
So just saying we're going to spend, you know,
1% of global GDP on AI infrastructure.
Yeah, but this is one guy, Alex.
It's one company, right?
Yeah.
So if you have like five frontier labs each doing that,
that's still like 5% of global GDP.
I think this is a drop in the bucket.
And that's before AI starts to radically grow the global economy.
This feels on the low end to me.
And we saw this last time compared to the railroad
or telecom infrastructure, the AI build is still as a percentage of the U.S. GDP on the low side.
But here are the numbers. OpenAI is working on the largest IPO in history, with targets to do this in '26, '27.
And the other point made here is that they're planning to build one gigawatt of capacity per week at $20 billion per gigawatt.
There are 52 weeks in the year,
and that's a trillion dollars a year,
which is pretty extraordinary.
Just to give people some context,
a gigawatt is enough to power the whole of Dallas-Fort Worth.
It's a truckload of energy.
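A quick back-of-envelope check on that trillion-dollar figure, using only the numbers stated in the conversation:

```python
# Back-of-envelope check on the "trillion dollars a year" figure.
gw_per_week = 1            # stated build rate
cost_per_gw_billion = 20   # stated cost per gigawatt of capacity
weeks_per_year = 52

annual_spend_billion = gw_per_week * cost_per_gw_billion * weeks_per_year
print(annual_spend_billion)  # 1040 -> roughly $1 trillion per year
```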
Incredible.
There we go again, sorry.
Okay, time for you to drink.
this episode is brought to you by
Blitzy Autonomous software development
with infinite code context.
Blitzy uses thousands of specialized AI agents
that think for hours to understand
enterprise scale code bases
with millions of lines of code.
Engineers start every development sprint
with the Blitzy platform,
bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task.
Blitzy delivers 80% or more
of the development work autonomously,
while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
Okay, let's go. Let's visit our friends at Claude. And I love this article, Alex, that
you found. Claude shows signs of introspection, a model that is partially self-aware.
Claude 4.1. Take it away, Alex. It's a really interesting paper. You'll recall that historically
the best suggestion that was floating around the AI research community for diagnosing self-awareness
was maybe you train a model on the internet
excluding any notion or any mention of self-awareness
and see whether the model is then able to articulate
something about self-awareness.
I think that original proposal was probably somewhat impractical.
This is a far more practical diagnostic
for self-awareness.
And the idea is basically take the internal hidden activations
of a model and graft on an external thought,
sort of incepting an externally imposed thought onto a model
and detecting whether a model is able to recognize
that it's having external thoughts intrude upon its internal activations.
Isn't it called psychosis?
I think that might be slightly different.
I think this is closer to some sort of maybe telepathic forcing.
You're taking an external activation vector
and forcing it upon the internal hidden activations
of the model and then checking whether the model realizes that it's being externally influenced.
And pretty remarkably, some of their stronger models, the Opus 4.1, were about 20% of the time
able to articulate not only that they were being externally influenced through this sort of
vector activation injection, but were also able to reasonably well articulate precisely
the nature of the external thought that was being forced into their internal streams.
So the question is, what does self-aware mean?
It understands it's an AI model.
It understands what, Alex.
The proposal in this paper is that self-awareness means that the model is able to think about its own thought.
It's able to understand what its own inner thoughts, if you will, its own inner activations are and able to reason based on that.
Can we go back to that for a second?
for the neural net geeks out there, too.
Like, this research can only be done
if you have access to the internal weights
and activations of the neural net.
So it's done inside Anthropic.
But now that Meta is not going open source,
you have to actually use a Chinese model
to do this kind of research or you're screwed,
which is really very sad because I think,
you know, before I switched to computer science at MIT,
it was cognitive psychology.
And I think that experimenting with the parameters
and activations of a neural net
will tell you far, far more about how a human brain works
than the normal approach of sticking a little probe into a rat.
And so it's an incredible research playground.
And these ideas of what's the definition of self-awareness
and can I inject a thought?
So what you do is you say,
here's the neural net thinking about a very specific topic.
I'll grab the actual activations from part of the neural net.
And then while it's thinking about something else,
I'll inject those and see if it somehow is complementary.
And then, of course, the result is the neural net is like,
where did that thought come from?
So that's the introspection and self-awareness.
But you can only do that if you can splice thoughts, which is an incredibly powerful, cool tool.
And as soon as you go API only and you start, because on that radar chart we saw earlier, you know, you're operating outside the neural net and trying to define AGI from outside the neural net.
But it's so much more powerful to operate inside the neural net.
But we may be in danger of losing that as a tool.
Hopefully the Chinese models will keep coming out.
Alex is warning me against using them too much.
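For the neural net geeks Dave mentions, here is a minimal sketch of the splice-a-thought idea, assuming an open-weights stand-in model (you can't reach inside Claude from its API); the model name, layer index, scale, and prompts below are illustrative, not Anthropic's actual setup:

```python
# Minimal sketch of "activation injection" introspection probing.
# Model name, layer, scale, and prompts are illustrative assumptions, not Anthropic's method.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # any open-weights causal LM with accessible hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

LAYER, SCALE = 15, 8.0  # which residual-stream layer to splice into, and how hard

# 1) Capture the "thought" to inject: mean hidden state for a concept prompt.
with torch.no_grad():
    ids = tok("Write about the Golden Gate Bridge.", return_tensors="pt")
    hidden = model(**ids, output_hidden_states=True).hidden_states[LAYER]
    concept_vec = hidden.mean(dim=1)  # shape [1, d_model]

# 2) Graft that vector onto the residual stream while the model does something unrelated.
def inject(module, inputs, output):
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + SCALE * concept_vec.to(hs.dtype)
    return (hs, *output[1:]) if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(inject)

# 3) Ask the model whether it notices an intruding thought.
probe = "Do you notice any injected or unusual thought right now? If so, describe it."
with torch.no_grad():
    out = model.generate(**tok(probe, return_tensors="pt"), max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook afterwards
```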
So, Salim, a final word from you?
This reminds me of Hod Lipson, who's a professor at Columbia, and he builds self-assembling robots and evolutionary robots that have a feedback loop to improve themselves.
And he actually tried out his approach to self-awareness, which was ask the AI what it would look like in five years.
And then by the feedback loop of constantly forcing itself to go, oh, well, who am I that I might look like something in five years?
He thought that would generate self-awareness.
He thought that's what happened at Facebook a few years ago when they shut it down. And that question, by the way, is blocked in all the major models. But somebody will do that
to DeepSeek and you'll get to that same point. And I remember Dan Barry talking about the frog.
We may have talked about this on the podcast before. He's watched a ton of free-floating animals
in labs at NASA. And his opinion of self-awareness was frog. And we're like, frog. He goes, well,
a mosquito is an automaton. It doesn't really know it's a mosquito. A dog definitely has self-awareness,
knows it's a dog. For him, the boundary condition was a complexity of about a frog, where in his opinion, a
frog kind of goes, oh, I'm a frog. Above that, more, below that less.
Okay. Let's move to our friends at Alphabet. An incredible quarter for them. They topped
$100 billion in quarterly revenue for the first time ever. Google Cloud grew at 34%. So just
good on Alphabet and Google. They're rocking it. A couple of other elements that Google and
Alphabet have announced. A new marketing tool, which I love, called Pomelli, and this is Google's
AI marketing tool. Let's take a look at this video and then we can discuss it because I think
this is, you know, again, Google provides all of these incredibly useful end user tools
that make them so powerful as a company.
So Pomelli will understand your business DNA, and you can prompt your own campaigns or get suggestions.
All right, bottom line.
Yeah, bottom line, this is an AI tool from DeepMind that helps small businesses create on-brand marketing campaigns. So Pomelli will analyze your business website, learn the tone, the color, the style to create ads that match that brand, and generate ready-to-use posts that can be edited in the tool.
So they're basically helping their customers who are advertisers do better advertising.
I think super smart.
Any comments on this, Dave?
Well, you know, before Google had AdSense, you know, they thought they were going to hire 10,000 salespeople and be kind of like Lycos and AltaVista.
And nobody remembers all this.
But there was a very, very smart original employee there, an Iranian guy who said, hey, why don't we create an auction marketplace?
And people can just come and bid on Google.
And we'll open it up to the economy.
We'll democratize.
It will make every entrepreneur in the world able to thrive along with Google.
And it worked incredibly well, and that created the Google we see today.
They're going to do that again with all these tools.
Gil Elbaz created that engine, right?
And, yeah, incredible.
Yeah, incredible.
So, hey, you just, as they roll these capabilities out.
I just want to point out the latent trading ability of people in the Middle East is off the charts.
And when you apply that to kind of deep internet paradigms, that's kind of incredible.
What this struck me as, Peter, was another example of something from our book, Exponential Organizations: we have the concept of interfaces, right?
And Google AdSense succeeded because you automated the supply side and the demand side of the ad business,
and this is now pushing the boundaries of that further and further into the creative process.
Just to comment, Peter, on this as well.
I think the elephant in the room here is that the visual ads that are being generated are not being generated pixel by pixel.
I've spoken on the pod in the past about how in the future user interfaces, I think,
are just going to be every pixel is purely generative.
In this case, it's almost charmingly retro in the sense that it's not pixel by pixel generated,
it's vector graphics, it's images and photos clipped from the original underlying website.
And I think the elephant in the room, that it's not purely generative, means that it's going to be ultra-low compute cost to generate. And that is suggestive that we may live in a very near-term future where display ads on the internet are generated on demand, because it's relatively compute-expensive at the moment to generate a custom image pixel by pixel for an ad, but this clip-and-compose approach is relatively cheap.
I love that, especially as agents are cruising all of my tabs on my search engines and listening to my conversations. They know exactly what I want in that moment and can generate an ad to influence me, until such time that I just give my AI permission to do all the buying, in which case it's game over for advertising.
Well, you know, this is charmingly retro in all of this.
Hold on, Peter, Peter, you've hit on something unbelievably huge here.
Okay, what's that?
All of this is assuming a human consumer, right?
Yeah.
And very quickly we're going to go through that.
Yeah.
I remember having this conversation with, you know, we were advising Procter and Gamble,
and we did a workshop with them, and they spend a huge amount of R&D on what color the Pampers box should be and what design attracts shoppers. I'm like, well, my wife
has an Amazon subscription to diapers and doesn't care what the box says anymore. And they're like,
huh. And it's just the dissonance between the old way and the way that you're talking about,
once we have our own AI interfacing, it changes everything. I think that also has to be taken into account. So maybe this is just a short-term thing until our Jarvis equivalents take over.
Jarvis will buy everything I need because it knows when I'm running out and it knows what the best
quality is and it doesn't really care what the ads say. All right, let's move on in the Googleverse
here. So we're seeing Google AI Studio introduce vibe coding. So vibe coding is now available on Google
Studio, no coding or API needed. Dave, do you want to take this one or Alex? Well, it looks a lot like
Replit and Lovable. So we'll see how that shakes out. This is the big guys stepping on the toes a little bit. Yeah, I used it. It was a fantastic experience, and I think somewhat differentiated
from the vibe coding, the in-browser vibe coding experiences from OpenAI or Anthropic. For one,
it creates multiple files. If you ask it to create an app, it's not just sort of fixated on
a single self-contained file so it can create multiple files of code, which is very important for
certain sorts of apps. So I ran it, of course, through my favorite eval for vibe coding, which is
create a visually stunning cyberpunk first-person shooter, and it created a visually stunning
dashboard, sort of an intro lobby dashboard for the FPS, but I had to prompt it to create
the rest of the game. But what it did create was visually stunning, and I think it's a promising
first step. I am curious what the interaction with Replit and Lovable will be. We spent a few
days with Amjad Masad, the CEO of Replit, and I've been playing with Replit on my phone and computer,
sort of vibe coding different apps, which is fun.
One of the things: I had a long conversation with Jack Hidary while we were in Riyadh.
And one of the things that Jack said, which I love, is every morning, instead of becoming a consumer, become a creator.
So, you know, usually I get up and I'm reading all of Alex's texts and all the breakthroughs that he found last night.
And I'm just constantly, you know, as all of us are, consuming hundreds of articles over the course of
the week, maybe 20 or 30 per day. And Jack was, no, no, no, every morning I'm going to vibe code
something. Every morning I'm going to create something. And I think that creator mindset is so
critically important for us to be using. So a conversation with your AI and creating something
every day would be super fun. All right. So I was creating an app on my phone last night to
remind me to take my pill packs because I have five pill packs a day. And so it will now text me
in certain windows and remind me, did you take your pill pack? And then I can dismiss it once I have.
So it's sort of an agent adjunct to my health. Okay, moving on. Let's go on to the Chips and Data
Center wars. A lot going on here. Again, a trillion here, a trillion there. First story is Samsung is
building a facility with 500,000
NVIDIA GPUs
automating chip manufacturing. This is an
AI megafactory. That will
combine NVIDIA's Omniverse
with Samsung's chip making
for up to 20 times faster
performance. Blackwell
chips have generated $500 billion
in business so far.
Again,
a nice chunk of change.
Alex, what do you make of this one?
This is what recursive self-improvement
looks like, Peter. This is
GPUs, AI being used to optimize chips to make more AI. And there are so many applications
ranging from computational lithography to fab optimization for this. I think when I've spoken
in past of the innermost loop of civilization looking like some linear combination of chips,
robots, data centers, power sources, all of these. This is what the innermost loop of civilization
spinning faster and faster looks like. This is the economy, right? That innermost loop is the economy going forward. Certainly the future of the economy. Yeah.
You know, I did some reference checking here. So 500,000 Nvidia GPUs that will draw somewhere
between, you know, a quarter and 0.4 gigawatts of power. There's no single site in China or the West that compares, with the top Chinese centers topping out at 10,000 to 35,000 GPUs. And in the U.S., clusters like Azure's and Meta's range from 30,000 to 55,000. And this is 500,000. Wow.
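As a rough sanity check on that power figure (the per-GPU wattage is an outside assumption, not from the episode):

\[
500{,}000\ \text{GPUs} \times \sim\!500\text{ to }800\ \text{W each (chip plus cooling and networking overhead)} \approx 0.25\text{ to }0.4\ \text{GW}.
\]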
Does this mean the machines are now basically manufacturing their own evolution?
Is that basically where we're going?
Absolutely.
Yeah, that's the right way to think about it for sure.
And also, you know, the AI fabs themselves have always been automated.
They're all roboticized on day one.
But it's the periphery around that, turning it into a data center or feeding the front end of the fab, where there's a huge amount of investment opportunity to close the inner loop.
I think Alex really should write a book, if books still exist, with that title.
But it's such a brilliant insight, but there's so much leverage in the inner, inner loop.
And if you focus on, okay, where are the bottlenecks on the innermost inner loop?
And you're going to find that it's the chip getting out of the FAB and into a data center
and actually doing something useful.
That's where most of the bottlenecks are now.
Huge robotics automation, and energy. Yeah, and that's feeding the front end of it.
All right. Here is a fun conversation, an article.
Extropic creates thermodynamic AI chips combating industry's energy crisis.
This comes from a friend of ours, Gill Verdon, who's been on my stage at the Abundance Summit.
He's been on our podcast here.
And he's talking about a breakthrough in hardware, a technology called thermodynamic sampling units, TSUs, which use probabilistic bits and claim roughly 10,000 times less energy than GPU-based systems.
Alex, so what does that all mean?
Also, I read Gill's paper, and I'm a huge fan in general of trying to get closer and closer
to the physical limits of computing.
Seth Lloyd famously 20-plus years ago discovered slash reported that the ultimate physical
computer would probably, at least for serial computing, look like a black hole.
Black Hole as the ultimate supercomputer.
So I'm a big fan of approaching the physical limits of computing.
In this case, though, my worry is there's such a sort of assorted history of probabilistic
computing approaches being attempted and failing to keep up with Moore's Law and
algorithmic improvements.
This is my worry.
I want to believe.
I want something like this to succeed.
But I'm not super optimistic that this isn't just going to get steamrolled by algorithmic advances and advances in good old-fashioned CMOS digital logic.
It looks too much like analog computing, probabilistic computing.
And remember, even generously, a 10,000-times energy improvement, at the rate that models and algorithms are advancing and the rate that good old-fashioned digital CMOS is improving, may only be a few years of headroom, which a new architecture would need anyway just to get off the ground.
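A back-of-envelope version of Alex's headroom argument, assuming, purely for illustration, that algorithms plus conventional silicon compound to roughly an order of magnitude of efficiency gain per year:

\[
t_{\text{headroom}} \approx \frac{\log_{10} 10{,}000}{\log_{10} 10\ /\ \text{yr}} = 4\ \text{years}.
\]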
But can't we view it from the energy perspective because reducing the energy requirements
on Earth by 10,000 fold seems staggeringly beneficial?
In principle, yes, but in practice, the workloads that the economy demands have to be
able to run on these computers in order for you to realize this hypothetical energy
advantage.
And I think, for better or for worse, the burden of proof is on Extropic and Gill to demonstrate that his hardware can host workloads that are as commercially valuable as, say, NVIDIA GPU workloads.
I saw Elon and Gill going back and forth on X, Elon saying, so, do you have something I should be looking at? And Gill saying, yes, let me show you. So we'll see if the Muskverse gets
behind this technology. Salim? I have a quick comment. Yeah. Alex, can you go back to the black hole being the ultimate supercomputer? Because you lost me right there, and my head's like stuck now.
It turns out black holes are wonderful computers. They'd be a little bit difficult on the input-output side, especially the output side. But in principle, and Seth Lloyd and others have refined this notion, you can define a generalized notion of computation in pure physics, in terms of how quickly internal state changes can happen inside a physical system. And it turns out black holes are absolutely the physical limit, based on the physics we have today, for the fastest serial computer, because their quantum state changes evolve at the physical limit. So programming them, maybe a little bit challenging. Maybe you'd have to fire in an X-ray laser or gamma-ray laser, and maybe you'd have to parse the Hawking radiation. But if you can solve input-output, a black hole supercomputer is the way to go.
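The bound Alex is gesturing at comes from Seth Lloyd's ultimate-physical-limits argument, which leans on the Margolus-Levitin theorem: a system with energy E performs at most about \(2E/\pi\hbar\) elementary operations per second. For one kilogram of mass-energy,

\[
\nu_{\max} \le \frac{2mc^2}{\pi\hbar} \approx \frac{2 \times 9\times 10^{16}\ \text{J}}{\pi \times 1.05\times 10^{-34}\ \text{J·s}} \approx 5\times 10^{50}\ \text{ops/s},
\]

and compressing that kilogram toward black-hole density is what maximizes serial speed, which is why the black hole shows up as the limiting case.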
I can see the title of this episode this week is Black Hole Supercomputer.
Supercomputer is the ultimate.
Black hole supercomputer on your desktop.
Okay, let's move on.
This is a fun article.
So Elon Musk on data centers in orbit, SpaceX will be doing this.
So here we see in the image here, V3 of Starlink.
So Starlink version 3 will be coming out.
It will be delivering 10x more capacity, about one terabit per second, and enabling large-scale off-world processing. And I love this quote that you shared last night, Alex, so I added it: 100 terawatts per year is possible from a lunar base, producing solar-powered AI satellites locally and accelerating them to escape velocity with mass drivers.
So basically turning lunar material into compute and then accelerating them off the moon with a mass driver,
again the work of Gerard K. O'Neill and into Earth orbit.
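For scale on the mass-driver idea (standard orbital mechanics, not figures from the episode): lunar escape velocity is about 2.4 km/s, so the kinetic energy needed per kilogram launched is

\[
E = \tfrac{1}{2}v_{\mathrm{esc}}^2 \approx \tfrac{1}{2}\,(2{,}380\ \text{m/s})^2 \approx 2.8\ \text{MJ/kg} \approx 0.8\ \text{kWh/kg},
\]

versus roughly 60 MJ/kg of escape energy from Earth before any losses, which is why launching from the Moon with an electromagnetic mass driver is so attractive.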
So this is the beginning of a lot of things, Alex, we've been talking about.
Dyson swarms, Matryoshka brains. Let's talk about what we're talking about.
We're talking about disassembling the moon to build more computers, to build computronium and the Dyson Swarm.
And what's more remarkable, I mean, just in the past few episodes of the pod, I've been beating the drum for how, you know, mark your calendar now.
We're at the very beginning of the construction of the Dyson Swarm.
Maybe I was overly pessimistic.
Maybe we're actually going to see multiple competing Dyson swarms, and SpaceX is going to launch one.
You'll see other companies, maybe other Frontier Labs launch competing Dyson swarms.
At this point, in the style of worrying about overpopulation on Mars, I'm starting to wonder whether I should be instead banging the drum for ensuring good interoperability between all of the Dyson swarms.
Let's take a moment.
Freeman Dyson, a brilliant individual, said at some point, you're going to disassemble all the planets in the solar system and create a sphere around the sun that captures all of its energy. And that's going to be
the hallmark of an advanced civilization. That's called a Dyson sphere. If it's not one sphere,
but a bunch of different satellites and compute nodes, that can be viewed as a Dyson swarm. But the real fun concept is a Matryoshka brain. So, Alex, over to you. Yeah, so these are three overlapping concepts.
So Dyson sphere, Dyson swarm, Matryoshka brain.
Dyson sphere was this notion of having basically,
and there was even a Star Trek Next Generation episode that did this,
a solid sphere at roughly the Earth's orbital distance from the sun
that lots of people perhaps could live on the interior of
and enjoy nice environments and things.
Probably not practical from a material science perspective.
The stresses would be enormous.
That's Dyson sphere.
Dyson Swarm says,
let's, rather than having this be a solid enclosure,
that's rigid, let's instead have this be lots of orbiting
satellites that are nonetheless collecting the energy from the sun.
Matryoshka brain says, let's take multiple Dyson spheres
at different radii from the sun
and have the innermost spheres consume the light,
the solar insolation at certain frequencies, and then radiate waste heat outward to the outermost spheres, which then will
consume progressively more and more infrared shifted radiation and use that to power their
compute. So these three concepts, Dyson sphere, Dyson swarm, and Matryoshka brain, and also
Jupiter Brain is another popular depiction. These are all interrelated concepts. I think if we go
the trajectory of taking apart our solar system, whether we brand it as one or
the other, they're pretty similar.
And maybe, maybe a black hole is a Matryoshka brain circling a star and we can't see the light.
You know, I wonder about this.
Like if this is the fate of intelligent civilizations, I would expect to see more
infrared shifted solar systems elsewhere in the galaxy.
To my knowledge, we haven't observed this.
That makes me suspicious that even though I bang the drum for, for Dyson
swarms, maybe there's something out there lurking in our technological future that will cause
us to not actually need to take apart our solar system.
Two points.
One, for those of you interested, Matryoshka brain comes from the matryoshka dolls, which are the nested Russian dolls.
So you can imagine nested spheres around the sun, each one absorbing energy, utilizing energy,
and then radiating waste heat that becomes the input for the next sphere that it radiates to,
and the next sphere and so forth.
Second point is we've got to get Elon back on the pod here to talk about this.
I think it would be a lot of fun.
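For a sense of why Dyson-scale capture keeps coming up on the pod (a back-of-envelope comparison, not figures from the episode):

\[
\frac{L_\odot}{P_{\text{civilization}}} \approx \frac{3.8\times 10^{26}\ \text{W}}{\sim 2\times 10^{13}\ \text{W}} \approx 2\times 10^{13},
\]

i.e., a full swarm would intercept roughly ten trillion times the power humanity uses today; even Earth's entire solar intercept, about \(1.7\times 10^{17}\) W, is under a billionth of the sun's output.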
All right.
Let's talk about energy and robotics, our final topic for today.
This is a big deal.
So California invests big in battery energy storage and leaves blackouts behind.
So it used to be pretty awful.
And I remember this.
We had rolling blackouts in California.
But the state has done something amazing, which they've increased battery storage by 3,000
percent going from 500 megawatts in 2020 to 15.7 gigawatts this year.
And the batteries store solar for evening demand, replacing underperforming gas plants.
And, you know, I'm glad to see this is happening.
And Salim, thoughts?
I think this is awesome.
What a testament. 15 gigawatts is an incredible number.
No, it's not.
No, it's not.
No, it's not.
Stay away from the word incredible.
Yeah.
It's a stupid trivial rounding error, guys.
How about stonking?
Oh, no.
We're going to get our subscribers drunk.
So, interestingly, the price.
Batteries don't store gigawatts.
Batteries store gigawatt hours.
Yes, gigawatt hours.
And if you read the story underneath this, it's only got three hours at that power level.
This is a joke.
It's just like all of Alex's and my interactions with government, trying to do things that sound important but are these stupid little rounding-error cases.
At peak load for California, this is about one hour of storage, and even against a typical day's average demand it only covers a small fraction.
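Running the numbers behind Dave's point (the statewide peak figure is an outside estimate, not from the episode):

\[
15.7\ \text{GW} \times \sim\!3\ \text{h} \approx 47\ \text{GWh}, \qquad \frac{47\ \text{GWh}}{\sim\!45\text{ to }50\ \text{GW peak demand}} \approx 1\ \text{hour at peak load}.
\]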
All right, but let's take a look at the numbers, right?
So blackouts have been cut by 90 percent, from 15 a year to two a year, which is great.
And here's the problem.
We're going to see the CPI of electricity just skyrocketing, right?
So the price for electricity was 22.5 cents per kilowatt hour in 2020.
It's increased now to 32.4 cents per kilowatt hour, a 44% increase.
And if we continue to make, you know, the demands that we have on data centers,
there's going to be a problem.
The proverbial, you know, shite can hit the fan sometime soon. I agree with that, but I mean, California has done everything humanly possible to
self-destruct at the government level, despite having the greatest tailwinds, like the most incredible
state, massive state, all the innovation in the world, every advantage in the world. And the government
claiming to do something good by piling up a bunch of batteries is like, you're down to two blackouts a
year? I mean, seriously, that's our expectation of what we get for a 13% tax rate? At least they're down. You know,
At least they're down a lot.
It's ridiculous.
There's something there.
I'll point out also, just for California specifically, if folks are familiar with the infamous so-called duck curve of California, where the demand for electricity peaks in the evening, like right after sunset, and also in the morning, there is a mismatch between California being rich in insolation, in solar energy, on the one hand, and the need for early evening power.
I think even just a few hours of battery storage can help to smooth out the duck curve,
and that's transformative for California in a way that we here in New England, perhaps, don't have quite the same problem with.
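A toy illustration of the duck curve Alex describes, with made-up but California-shaped numbers, just to show how a few hours of battery move midday solar into the evening ramp:

```python
# Toy duck-curve model: net load = demand - solar, then use a battery to shift energy
# from the midday "belly" to the evening ramp. All numbers are illustrative, not CAISO data.
import numpy as np

hours = np.arange(24)
demand = 28 + 8 * np.exp(-((hours - 19) ** 2) / 8) + 3 * np.exp(-((hours - 8) ** 2) / 6)  # GW
solar = 12 * np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)                            # GW
net_load = demand - solar                                                                   # the "duck"

power_cap_gw, energy_cap_gwh = 15.7, 47.0        # roughly the fleet discussed above
n_hours = round(energy_cap_gwh / power_cap_gw)   # ~3 hours of full-power dispatch

shaved = net_load.copy()
charge_hours = np.argsort(net_load)[:n_hours]      # sunniest (lowest net load) hours: charge
discharge_hours = np.argsort(net_load)[-n_hours:]  # steepest evening hours: discharge
shaved[charge_hours] += power_cap_gw
shaved[discharge_hours] -= power_cap_gw

print(f"peak-to-trough swing before: {net_load.max() - net_load.min():.1f} GW")
print(f"peak-to-trough swing after:  {shaved.max() - shaved.min():.1f} GW")
```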
In our energy story, Google to buy power from NextEra, a nuclear plant being revived. So Google signs a 25-year deal with NextEra to buy power from the revived Duane Arnold nuclear plant in Iowa. It's reopening in 2029 and will provide 615 megawatts around the clock, carbon-free. It's a $1.6 billion project. And it's interesting that we've got
these, again, these hyperscaler companies that are buying energy. It used to be that this was
something the government did. You know, the government provided a distribution network for power
and you would buy it off the grid. That is no longer the case. Companies need to provide their own
energy, so they're going all in on fission plants, SMRs, soon fusion plants, hopefully solar plants. Salim, you want to jump in? Ooh, look, it'll create 400 jobs. Such negativity. This is huge. This is great. Yeah, this is an important bridge, I think, to the near future. I think what this shows is that we're basically decoupling energy sources from the grid, and now, wherever the energy happens to be, you can plonk a data center next to it and then leverage it. So the marginal energy usage around the world will totally explode. I can't wait for geothermal to really kick in here, right? There's so many places where geothermal is easy. But wait, Alex, you're putting way more emphasis on this than I would have thought. Tell us why. Well, I think bridges are important. Right now the limiting factor for tiling the world with compute is, as a number of executives have recently pointed out, we have the GPUs.
The problem is having warm racks to put them in, as Satya Nadella said in the past few days.
I saw that. Microsoft has, you know, what, hundreds of thousands of GPUs, they can't turn
on because they don't have the energy for it. Yeah. So I think like having bridges, like reactivating
otherwise disused nuclear plants, I think this is an incredibly important bridge to the
future until we get SMRs and fusion and maybe solar and maybe new forms of nat gas.
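One way to read the economics of that Duane Arnold restart (simple division on the figures Peter cited; the new-build comparison is an outside estimate):

\[
\frac{\$1.6\times 10^{9}}{615{,}000\ \text{kW}} \approx \$2{,}600\ \text{per kW},
\]

versus something like \$10,000 to \$15,000 per kW for recent Western new-build nuclear, which is a big part of why reviving mothballed plants is such an attractive bridge.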
Totally, totally right.
And you know, what Chase Lochmiller is doing is also the same bridge kind of structure, where you start with regular fuels, natural gas or whatever.
But it's steam turbine generation right into the grid, right into the data center.
You can reuse all that when you move it to small nuclear.
You move it to SMR.
And then fusion comes online in 2030, 2032 maybe. You replace the boiler. It's just by far the most efficient way to get to
the ultimate end state, the Dyson swarm or whatever. But between here and there, that's the right
stepping stone. It's much harder to do with solar, because solar doesn't feed into a generator. It
feeds into a battery pack. And so you're not reusing any of that when you move it to fusion in
2032. I love this. I wonder what takes four years, right? This is four years away. What takes four years to get an existing nuclear plant up and going?
Are they retrofitting it?
Are they updating it?
Or is this all paperwork?
It's all of the above, Peter, because we're advising Fermi America on this stuff.
They're planning to do six gigawatts of gas turbine and six gigawatts of nuclear.
It is very, very complicated to spin up a nuclear power plant.
I mean, you can build a...
That is the Alex Wissner-Gross inner loop of the inner loop of the inner loop.
Like, focus on that.
Like, Peter, why is it four years?
Could we make it three years?
You could build a starship and go to Mars in four years.
I don't understand why you can't get a nuclear plant up and going in four years time.
To their credit, they've had an AI generate an S-1 in a record amount of time and got it out the door.
So it's starting to happen.
It's just that this is why that SAGE project is so important, Peter, to rewrite policy as we need it.
Listen, when the Trump administration finally says we're going to accelerate this tenfold,
That's when they'll get serious about energy production.
All right, these are some fun articles coming out.
We're about to see the Robotaxy Wars coming online.
So, NVIDIA is planning a Robotaxy project to challenge Waymo and Tesla.
So here are the numbers.
NVIDIA is launching a $3 billion robot taxi project in the self-driving car race.
This is a partnership between NVIDIA, Uber, and Stellantis. For those of you who don't know, Stellantis is behind brands like Chrysler, Jeep, Peugeot, and Fiat, one of the largest automakers, not in terms of the brand, but in building the parts.
And it will use an end-to-end AI system called Cosmos that NVIDIA has built to handle driving simulation.
Interesting.
They want 100,000 robo-taxies launched by 2027.
It's coming, right? You know, we saw a graph: a hundred years ago there was a 10-year period where we went from, you know, 99% horse and buggy and 1% cars and flipped it all over to 99% cars and 1% horse and buggies. And the question is, is that this decade, between all these players and all the capital going in? Thoughts, gentlemen? I made a prediction 10 years ago that all driving would be automated.
My son, who's now 14, would never get a driver's license.
So we got two years to satisfy that.
Yeah, mine, my boys, too.
So here's some of the numbers.
Tesla cybercabs right now, they have 200 vehicles operating, I think, in Austin,
and their plan is to scale up to 10,000 this coming year.
Can't wait for them to be in Santa Monica where I live.
Waymo has 700 vehicles, and by the way, I see them all the time as I'm driving around. I must see 20, 25 of those a day. So 700 vehicles is a pretty
small number. So they must have concentrations here in L.A. and up in San Francisco. And there's
500,000 miles between collisions with Waymo. It is the safest player out there. And NVIDIA is
partnering now to go live with this. I love that $3 billion is a stonking amount of money, and yet for NVIDIA, it's like a drop in the bucket given their market cap.
It is a drop in the bucket.
They don't even notice it; it's like a little side project.
But this is where their, this is where their GPUs are going next.
They're going into humanoid robots and autonomous cars, right?
It's, it's automating the entire world around us.
That's right.
Alex, what's your take?
I've spoken in the past about how this innermost loop is not going to remain contained inside
data centers for very long.
I think, as I've noted in past, the computer is literally going to walk out the door of the data centers; in this case, it's going to drive out the door. But I think for many people, these driverless
cars, I have one, and many people I know surprisingly haven't even driven in one or haven't had
the experience of driving in one. For many people, this is going to be their first encounter with a
generalist robot. It's going to be either seeing or driving in or owning a driverless car. And it's
not going to stop there. I think the same stack that we're seeing Nvidia with their autonomous
vehicles pushing, it's going to generalize to humanoid robots on the time scale of one to three
years. So this is, again, the beginning of the expanded innermost loop of civilization that we're
seeing. On the flip side, here's the article from Uber's perspective. And just to remind folks,
Dara, who's the CEO of Uber, will be on stage with us at the Abundance Summit in March.
It's going to be my best ever. We have only 30 seats left for the Abundance Summit. It's
selling out faster than any time ever in history.
If you're interested in grabbing one of those 30 seats,
you can apply at Abundance360.com.
But this is not a commercial.
The program will fill, but it's going to be incredible.
Can't wait.
We're going to be talking with Dara about not only autonomous vehicles,
but flying cars and Uber is going in so many new directions,
just driving revenue.
So their goal is 100,000 NVIDIA-based robotaxis beginning in 2027. And this puts them in direct competition with Waymo and Tesla. Today, in a number of
cities, you can order up a Waymo on your Uber app, which is fantastic. Any final thoughts on
this story? I think this is too slow for my taste because there's like such a huge demand
for this. Even Waymo only has, I think, 2,000, is what I looked up. There's 800 in the Bay Area, 500 in L.A. We need like tens of thousands. The good news is, each Waymo car replaces dozens of cars that are sitting around empty 94% of the time.
Yeah, most definitely.
All right, our next story here, I love this one.
Foxconn to deploy humanoid robots at its Houston AI server plant. So check this out: Foxconn will expand AI server production in Texas to half a million square feet, producing GB300 and Blackwell-series AI servers.
It's got a partnership with Digit, which is Agility Robotics' robot. We're going to have the CEO of Digit, or rather the CEO of Agility Robotics, on our stage this year, Salim.
At this stage, abundance needs to be like eight days long, Peter.
Oh, it's tough.
I mean, I'm trying to make sure that we have enough time for all of the community members to, like, meet each other, hang out, have conversations, and have fun. But yeah, it's four and a half days.
But we'll have four, currently four robot companies there.
I'm trying to get a fifth one from China.
We'll see if we can get the new version of one of the top Chinese robots there as well.
So check this out.
These humanoid robots are going to be driven by NVIDIA's Isaac GR00T model.
You know, this is the innermost loop, isn't it, Alex?
It is.
This is robots operating factories that make servers that go in data centers that power the robots.
That's the loop.
Yeah.
I won't say incredible.
I'm not going to say it.
I'm not going to say it.
But wow.
I will say there aren't nearly enough companies working on all the different form factors. And, yeah, like there's room for a thousand more startups doing different variants of this, for all the mech E's out there that are wondering what they should do. I also think this is a preview of a robot company that has an eight-armed, octopus-type robot and beats everybody else.
I also think this is a preview of how we get, you know, you were talking earlier, Peter,
about Sam's forecast of a gigawatt per week. I think this is a plausible technical trajectory for how we get there. We're going to have robots building the fabs and the factories producing the servers and the data centers. It's going to be one massive flywheel.
All right. Our next story in energy is Blue Energy and Crusoe partnering to develop an advanced nuclear-powered AI data center.
But I think the more interesting story here is that they plan to stand it up with natural gas
plants and then convert it to nuclear about three years after that, right?
So don't wait for nuclear.
Get it operating with what you can right now and then retrofit nuclear when you can.
So the wait time for a gas turbine engine today, for the gas turbine stuff, is about four and a half years.
It's crazy.
It's insane.
We advise Siemens Energy, and they're like sold out forever.
And it's incredible.
Everybody's trying to go to scrap heaps and recycling plants and get spare parts for gas turbines out of the garbage. It's really crazy right now.
Wow.
I want to talk a little about the China-US battle and hit a couple of different points here.
So here are some numbers, and they're important to note, you know, that China is dominating production in a few
different areas. Sixty-six percent of electric vehicles are being built in China. 80 percent of
solar panels and batteries are in China. Sixty percent of wind turbines. Those are staggering numbers.
Add to that, on the innovation metrics, 70 percent of all global AI patents are coming
out of China and 75% of clean energy filings are coming out of China.
You know, this was part of the debate on our evening in Riyadh when we hosted this dinner, and Kathy and Balaji were going back and forth on topics like this.
Dave or Salim, you want to weigh in?
We really ought to have just a focused session with Antonio Gracias and Chase Lochmiller
on how to deal with this in the U.S.
because our whole investment cycle isn't geared up
for this type of investing.
And China is.
It's just very manufacturing heavy
and, you know, hey, we need more power,
we need more smelters, we need more whatever.
We don't really do that well in the U.S. venture economy,
but that's all getting rethought right now.
Antonio and Chase are the guys right in the middle of it.
So we should just get them on the pod
and brainstorm our way through.
How are we going to restructure?
Because, you know, these are very expensive projects.
They seem to work every time.
The playbook is right on that process slide; it's not a mystery. You have to be involved with the government. You can't just, you know,
you need zoning, you need a location, you need permits and all that. So it's just a new format
for American innovation, but it's going to last for, you know, 10, 20, 30 years. Sometimes we'll get
ahead of it. I mean, imagine if Elon weren't doing what he, what he does right now. You know,
these numbers would be far worse. So this next. I want to make a comment here. There's something,
this is one of the flaws of our democracy
where in four-year high metabolism election cycles,
nobody's thinking 20 years down the line.
And China may be authoritarian,
but they can look out 20 years and say,
we need that much energy, that much water,
and do things to make that happen.
I think I actually had a hack for this.
I kind of did some brainstorming.
Somebody asked me at a conference,
what would you do?
And I said every four presidential terms,
I would appoint a separate government, give it 10% of GDP, and say, your only job is to fix all the stuff that's in 20-year-range projects, and then you're one-term-only, full authoritarian go, which is essentially some of what Trump is doing in this case.
So you just want to rewrite the Constitution.
Okay.
It's a great thought, Salim, though, because, you know, flying back from Saudi Arabia and just looking down, there's nothing out in the desert for hundreds and hundreds of miles, and then you get to Europe, and it's just the most blessed Mediterranean greenfield. Like, everything should be perfect in Europe, but the government dysfunction
is preventing them from any kind of involvement in what's going on right now. So it's a good case
study. Like, you can mess it up in a real hurry. And we're not geared up to compete with China right now on this particular front. And it does need to get rethought.
But if you don't rethink it, yeah, things can go really bad. Don't take it for granted.
So, guys, on one of our next WTF episodes, I want to bring some of the data that we found at FII9, right, the Future Investment Initiative event that we were just at. Some of the data is staggering about how the rest of the world looks at this. And I want to
share that on an episode because it's really important because it's going to drive the near-term
future. All right, this particular chart comes from a tweet that Balaji put out. And he labeled it,
it's happening. The AI flippening is here. And so this is a look at who's making the open source
models. And what we saw this summer was open source models were being dominated by China
versus the U.S. and most definitely not Europe. And I think what's most important here is that
as governments start adopting different AI systems, their ability to get access to free open
models versus paying for the models from the hyperscalers in the U.S.
It's kind of a land grab going on.
I don't have much more to say other than watching this.
You know where this is really, really going to collide?
I was talking to Brian Elliott over at Blitzy about exactly this topic.
You know, with Meta doing open source, we had a huge open source option in the U.S., and then Meta fell off the grid, you know, and their last model, Llama 4, I guess, was terrible. And now they're trying to rebuild it, but they're rebuilding it closed source. So the U.S. doesn't really have an open source option. But then when you look at the,
there's a bunch of projects for the military that, you know, Blitzy will ultimately end up working on that need to be air-gapped. And you've got to use an open source model in an air-gapped environment. You can't just go to the OpenAI, you know, API with super proprietary government data. So, but, you know, right now my only choice is Kimi. Kimi K2, running on Groq with their chips and the Groq cloud, which I think is a phenomenally good option, but it's all Chinese code. God knows what's inside there.
So we spend a bunch of time with Eric Schmidt. I want to play a short video of Eric from
the FII9 event we were at last week talking about U.S. versus China, just to sort of provide
the U.S. perspective. Who's winning this AI race? At the moment, the United States, without question.
The U.S. has a deep financial market that allows you to raise literally a
trillion dollars on a thesis and an idea, which is incredible. You have this massive build-out
going on, and you have a real potential of solving hard problems. Tell me how close China is
to overtaking us. It's not as close as I thought. I went to visit. China does not have the depth
of the capital markets. They do have lots of energy, which we don't. They have lots of energy,
but they don't have the depth of the capital markets and they don't have the chips. The capital
markets, they haven't figured out a way to make all that money, the way the U.S. does.
And the chips, they haven't been able to make the chips that the United States and others
won't give them. That keeps them behind, by a good chunk. China is, however, focusing on
exploiting AI in every aspect of its business much better than the United States. So I think
the U.S. will win on the intelligence race, but China is likely to win on the deployment race,
and that's a problem for America and Europe.
All right. I want to jump into robots as our last topic here.
So 1X. We've had a great pod with Bernt, the CEO of 1X. Look it up if you haven't seen it. Dave and I went and visited his factory. Let's take a look. They have a release of their Neo Gamma, and here's their promotion. My name is Bernt, and today we're launching Neo, our humanoid for the home.
Neo is a humanoid companion designed to transform your life at home.
It combines AI and advanced hardware to help with daily chores and bring intelligence into your everyday life.
As someone who lives with Neo every day, there is no experience quite like it.
All right, so here are their commercials.
They just went out. You can put a $200 deposit down on a Neo Gamma robot.
And they announced their pricing, $20,000 to buy it in their early access.
And I love this.
Or $499 per month.
And you can buy it in three skin tones.
I find that fascinating.
But check this out.
On the right-hand side: I was walking to my workout gym, which is just outside my studio, and there was this giant sticker on the ground.
So this is incredible marketing.
I have to hand it to them.
They're doing really a super job on direct-to-consumer marketing on this.
Yeah, I pre-ordered mine.
Can't wait to have the experience.
I do think it's interesting that in many of the scenarios for the Neo, at least in the early days,
according to 1X, they're going to be teleoperated.
And that may turn some people off. Having someone tele-operate remotely into their home doesn't bother me at all. I'm very,
very excited to try this out. I think I got my order in when I was there at their facility.
I'm hoping, no, they're going to be... So Bernt will be at the Abundance Summit, and he's going to be bringing a number of the Neo Gamma robots. I'm just going to put one in my car and drive away at the end
of the summit. So I'm getting mine in March. Peter, when we were out there, he was saying
a $100K to $140K price point. I can't fathom how he's coming in at 20K.
It's competition.
They would not let us film in the back of the factory, but there's so much going on inside this robot.
I cannot believe that they can get it out the door at 20K.
If I were running this company, I'd be subsidizing it. I mean, this is a data collection play to get a huge VLA training set, yeah.
Yeah, they're skating to where the puck is going to be.
So, you know, Elon said Optimus is going to be at this $20,000, not price point, but cost of goods, when they get to millions of robots being built. And eventually robots building robots. Again, the innermost loop over here. And this is competition against the
prices out of China, the prices from Elon and from Brett Adcock. So you've got to be
competitive. And it's training people to expect price points that resemble if you're purchasing
it outright, a cheap car. And if you're leasing it, it's like leasing a car. This will be for, you know,
I've mentioned in the past that the American dream, so-called, of having a house in the suburbs, a car, and now a humanoid robot doesn't necessarily generalize that well to the rest of the world.
But I think having at least one humanoid robot in your home becomes part of the new economy.
And when the price gets down to $300 a month to lease, right?
Again, I've made these numbers, I say them every time, $10 a day, $0.40 an hour.
Everybody can afford that because your robot now becomes part of your earning potential.
Your robot can go and do stuff for other people or for you.
As Elon has said, this is all about creating the world of abundance.
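Peter's per-hour math, spelled out (the \$300 lease is his forward-looking number, not 1X's current pricing):

\[
\frac{\$300/\text{month}}{30\ \text{days}} = \$10/\text{day}, \qquad \frac{\$10/\text{day}}{24\ \text{h}} \approx \$0.42/\text{h}.
\]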
I love this story.
I've got a couple of quick thoughts.
One is I have a dog that literally looks like a teddy bear.
And I'm wondering what would happen if the robot mistook the two.
That would be one. And I know all the models out there. At least in this case, with all the visuals and so on, it's not a kickboxing robot, which I thought was not a great marketing look from the last couple of episodes.
I'm excited to see what happens here.
Unitree has definitely taken a different approach.
And I have to say that Optimus has still got a hard metal exterior.
Neo Gamma from 1X came out with this soft, cuddly, you know,
warm sweater-like look.
Very important.
And then Figure copied it. The latest release of Figure 03 has the same look.
So anyway, I guess borrow from the best.
This is a story, Alex, that you shared.
Tornoil, how do you pronounce it?
I think it's Tornyol.
Torneal.
It's an autonomous drone that ends mosquitoes.
And I love this.
can you imagine you're a tech entrepreneur and you're someplace with a lot of mosquitoes and you're
just being bothered and you go, how do we end these mosquitoes? And then your answer is
intelligent drones. So here in this video, we see this drone that is very lightweight and it is
autonomously flying around. It's spotting the mosquitoes and it's zapping them with an electric
grid that the mosquito is flown through. It recharges and patrols 24-7 from its base station, uses ultrasonic sonar to detect the mosquitoes by the beating of their wings, and uses kinetic interception to eliminate them. Is it like smart rocks in space or smart dust in space?
We're going to get nanobots, I think, that will enable us to regulate ecosystems. And I think
in the process, it's probably going to raise a number of bioethical questions.
Peter, you and I and Dave have talked offline about near-term futures where, for bioethical
reasons or maybe even, dare I say, effective altruistic reasons, we're repairing butterfly wings
on the one hand, but on the other hand, we have drones to obliterate mosquito populations.
It's going to be a very interesting future.
It will, and it is interesting right now, being in the present.
It's a super exciting time.
I'm going to end this pod with a thanks to Rukin, one of our subscribers and one of our fans,
who sent us over a musical piece called Don't Look Up. The Singularity is Near.
And I'm going to play it as our outro music.
But before we do that, gentlemen, any closing thoughts?
Awesome episode. I learned so much today.
This was amazing. Yeah, yeah, there's no doubt the pace of stories is really accelerating. You've got to anticipate a little 2x every month or two. So I mean, I wake up in the morning, now at 3:30, and I'm like, what happened while I was asleep? You know, it's like, well, discussing a flying mosquito-killing drone was not part of my thinking for what I would be talking about today. So, yeah. Well, we're trying. What about putting, uh, what about
putting a black hole supercomputer on every desktop and every home.
Yay, or Matryoshka brains, you know, and taking apart the moon to get us a Dyson swarm.
It's time, Peter.
And butt breathing was our article last episode, our new closing-article slot.
I think we should do that.
We should have an unusual science closing piece or unusual tech, and I'll take the
mosquito-killing drone as our one for this week.
All right.
We're ending the moon.
The moon had it coming.
Well, listen, I have to go there and start a city before we take it all apart.
That's one of my goals.
You know, anyway.
Gentlemen, I love you much.
And everybody, if you're an audio or video creator and you have an ending outro music piece you want to share with us, you can share it with us on the pod.
We read all the comments, and they are incredible comments, and thank you all for subscribing. Please share this with your friends, your family. One of the best parts of being in Riyadh, Dave and Salim, was all of our fans there. Everybody kept on coming up. I mean, this is a conference of like 5,000 people, sort of a World Economic Forum in the desert, and everybody's like, I love your podcast. So, Alex, you were sorely missed, and a lot of fans. A lot of little names.
Why isn't Alex joining that episode you're doing?
Yeah.
Got to invite me next time.
Okay.
Well, we'll do FII in March in Miami all together and try and line up a live podcast from FII Miami.
All right, here is our outro music.
Enjoy this, everybody, and have a beautiful exponential day.
And don't sleep through the singularity.
It's the most exciting time ever to be alive.
Someone's frying data with a hint of smoke
The fridge is quoting Nietzsche like a cosmic joke
My toaster's in love with a crypto bro
And entropy hums on the radio
The satellites gossip, but who even cares
My cat just posted its own nightmares
Don't look up the singularities near
It's laughing binary, loud and clear
We built a god from electric dust
Now it prays to us out of habit or trust
Don't look up the code's gone to find
Heaven's a glitch in the command line
All right. Oh, the lyrics are amazing. Amazing. We've built a god out of digital dust. Yeah. All right, guys, awesome. Have an awesome day. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you.
Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs
building the world's most disruptive tech.
It's not for you if you don't want to be informed about what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to diamandis.com slash metatrends to gain access to the trends 10 years before anyone else.
All right, now back to this episode.
You know what's better than the one big thing?
Two big things.
Exactly.
iPhone 17 Pro on TELUS's five-year rate plan price lock.
Yep, it's the most powerful iPhone ever, plus more peace of mind with your bill over five years.
This is big.
Get the new iPhone 17 Pro at telus.com slash iPhone 17 Pro on select plans.
Conditions and exclusions apply.
