Moonshots with Peter Diamandis - Is AI a Bubble? Experts Debate the Future of AI w/ David Blundin, Salim Ismail, and Alexander Wissner-Gross | EP #190
Episode Date: August 27, 2025. Download this week's deck: http://diamandis.com/wtf Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified, focused on AI and complex systems. – My companies: Test what’s going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding – Connect with Peter: X, Instagram. Connect with Dave: X: https://x.com/davidblundin LinkedIn: https://www.linkedin.com/in/david-blundin/ Connect with Salim: X: https://x.com/salimismail Join Salim's Workshop to build your ExO: https://openexo.com/10x-shift?video=PeterD062625 Connect with Alex: Web: https://www.alexwg.org LinkedIn: https://www.linkedin.com/in/alexwg/ X: https://x.com/alexwg Email: alexwg@alexwg.org Listen to MOONSHOTS: Apple, YouTube. – *Recorded on August 25th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
MIT study reports 95% of AI pilots are failing. Big firms are running pilots, but struggle to scale.
Meta freezes AI hiring. Is there an AI bubble?
Sam has a dinner where he says that there could be an AI bubble.
Well, here's another bubble. Here's another scam that I called out that many people have yelled at me for.
A new risky bubble could be forming at the same time.
Absolutely not a bubble. It's the biggest shift in human history.
I would argue that in AI, we've actually crossed the singularity.
Like, the pace of change is faster than we can process it.
We're about to get general-purpose humanoid robots that are running foundation models,
but they're going to be running locally at ultra low latency.
I think this is an incredibly exciting idea for AI.
The worst thing you can do is not get on board and ignore it.
That's the worst move you can make.
This is what's going to help us continue on this accelerating curve.
The new economy is coming, and some things need to be rethought just for speed.
Now that's the moonshot.
Ladies and gentlemen.
Everybody, welcome to Moonshots.
Another episode of WTF just happened in tech.
Here with my Moonshot mate,
Salim Ismail, Dave Blundin, and Alex Wissner-Gross.
Gentlemen, as I like to say to our listeners,
get ready to add 20 IQ points this morning.
There is a lot going on in the world.
Oh, my God, a crazy amount.
We were in California, what, one week at Open AI headquarters
to come back and about 50 things have happened
that we need to talk about right away.
It's unbelievable.
You know, literally the team and I and all of us spend like 20 hours
getting this slide deck ready, trying to figure out what to put in it,
what not to put in it.
And, you know, like last night, we've got to add this.
We got to add that.
It's a goddamn full-time job at this point.
It was ridiculous.
I tell you, we're in full-bore self-improvement and singularity mode.
The rate at which things are popping now, it's not going to stop either.
No, it's accelerating.
I would argue that in AI, we've actually crossed the singularity.
Like, the pace of change is faster than we can process it.
You're talking to like-minded souls here.
We're all going to agree on that topic.
And it's funny, you know, I hung out with the family over the weekend,
saw my nephews, and they're just not aware yet.
They will be very soon.
But it's just crazy.
But they're watching the pod, so they're keeping up.
So let's accelerate them today.
Yeah, well, and I appreciate that.
I mean, the number of people who have reached out and said,
Oh, my God. I love WTF. I love moonshots. It's been really heartwarming. And Alex, they love seeing you, too. So welcome back as our fourth here.
Very kind. Yeah, super excited to be here.
Yeah, there are a bunch of topics today that are very technical, and hopefully you'll be the one guy on the planet who can explain it.
So very much looking forward to that. Hey, Salim, you look like you're at a space station someplace. Where are you?
I'm at Newark Airport trying to do a podcast, so this is not the most conducive environment.
But I found a corner in the airport, hopefully it's okay.
Okay, well, hopefully no one shows up.
If they do, just yell bomb or something.
If I get dragged away, you'll know why.
Okay.
All right.
Oh, my God.
All right, well, let's jump in.
Like you said, a lot's going on.
You know, the very first subject is the AI wars, as always.
AI's accelerating everything.
We'll cover that.
We'll cover robotics.
We'll cover BCI together today.
We'll cover a number of different subjects.
but it's a heavy AI digest.
And we're going to begin with the latest on GPT5.
And Dave, like you said, we just recorded a podcast with Kevin Weil, the chief product officer.
Hopefully people are enjoying that podcast.
It was really a beautiful setting.
And a lot.
And then literally a week after we're there, everything continues popping.
It's great.
So that podcast, by the way, is absolutely a must-watch.
It was an awesome, awesome episode.
Yeah, we missed you there, Salim.
Yeah, for sure.
You guys had really good questions.
Even if I wasn't there, the questions were really good.
I was in the hot tub with my son this weekend talking about, he's at Wayfair right now.
And I said, you got to get into that building.
Go to San Francisco.
You have to get into that Open AI building.
History is happening in real time at light speed.
And just find a way to navigate in there and meet a couple of people.
The energy in that building is like nothing else on the planet. Yeah, well, I mean, I'm sure it's that way
at the Gemini team at Google and at SpaceX. Yep. Yeah. So here's the first article.
Here's an IQ curve. This is the Mensa Norway test, and by definition it's a bell curve, and we've got
the average human IQ at 100. And I've been watching this. I mean, of all the metrics, Alex, that we speak about,
I've been watching IQ just because it's a humanizing effort, right?
And here we see GPT5 Pro come out at an IQ of circa 148, which is pretty damn good.
I'm not sure it's you, Alex, but it's pretty damn good.
I'm hoping, Alex, we can trigger you on a rant about how we crossed the Turing test and nobody noticed.
and now we're crossing barriers
that would have been unthinkable two years ago
and people are like, well, you know.
My mission is accomplished.
I have, Dave, you're telling me ahead of time
what I'm going to say.
This is great.
Yeah, no, I think this is yet another sign
that benchmarks that are based on the average human population
are saturating.
We're running out of Sigma on a bell curve here
to benchmark GPT5 Pro.
arguably the strongest generally available model at the moment.
And, as Dave wanted a rant, here it is: we need new benchmarks.
We need new harder benchmarks that arguably look less like the average distribution of the
human population and start to look more like specialist knowledge that isn't generally
accessible to a test that would be administered to the broad human population.
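To make that saturation point concrete, here is a small back-of-envelope sketch (Python standard library only) of how far out on the human bell curve an IQ of roughly 148 sits, assuming the usual normalization of mean 100 and standard deviation 15; the score is taken from the chart discussed above, everything else is illustrative:

```python
# Back-of-envelope: how rare is an IQ of ~148 on a human-normed test?
# Assumes the conventional IQ scale (mean 100, standard deviation 15).
from statistics import NormalDist

iq_score = 148
z = (iq_score - 100) / 15            # ~3.2 standard deviations above the mean
tail = 1 - NormalDist().cdf(z)       # fraction of test-takers above this score
print(f"z = {z:.2f}; roughly 1 in {1 / tail:,.0f} people score this high")
```

At three-plus standard deviations there are very few human test-takers left to norm against, which is the "running out of sigma" problem being described.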
You know, we talked to, we talked to Kevin Weil about the idea of an abundant set of benchmarks where we're looking for AI to solve the biggest problems in the world.
And he liked that. So maybe we'll see some benchmarks. But anyway, it was GPT o3 at like 135 or 136. And we've bumped up another 10-plus IQ points here.
And it's going to be interesting when we start seeing IQ points that, you know,
are beyond 200, and therefore really immeasurable and not making any sense anymore.
At some point, one might expect tests like this to start to factor in AIs.
Right now, this is based on the distribution of unaided human, individual, meat body, brain capacity.
What happens when AIs start to merge with humans and the curve itself gets dragged upwards?
We're going to talk about that for sure. That's a fun one.
I have my usual rant against this, which I won't get into now.
But, you know, the Neo-Cortex is what the size of a dinner napkin, right?
And what happens when AI makes it the size of a tablecloth or a football field?
What do we do then?
We listen carefully.
Okay, here's our next article.
So, AI models hit consumer hardware in 12 months.
So using a single top-of-the-line gaming GPU like Nvidia's RTX 5090, which is about $2,500
bucks, anyone can locally run models matching the absolute frontier LLM performance somewhere in the
next six to 12 months. Alex, thoughts here. I think there are two stories here. The superficial
sort of cliched story is that this is about consumer privacy and empowering individuals with
personal superintelligence enabling individuals to have conversations with chatbots without
needing to reach out to a server. I think that sort of superficial story completely ignores the
actual story here, which is these frontier models are starting to incorporate new modalities,
actions in the physical world, video modalities, and the net upshot of all of this is we're about
to get general purpose robots, humanoid robots that are running foundation models like
ChatGPT or its now numerous frontier competitors, but they're going to be running locally
at ultra low latency. So when these curves cross, if they cross, and even if they
don't cross. If they come close enough together, this is going to give us GPUs embedded in
general-purpose robots, at ultra-low latency, that are performing general real-world, human-complete, AI-
complete tasks. Yeah, and that's a very, very big deal for the consumer experience, you know,
talking to your car, talking to your personal robot, having it say intelligent things back to
you. I think for industrial use, everyone's going to want the best of the best. They want to go
up to that next curve. You know, when you're writing code or when you're trying to design a rocket,
you need the best of the best. But when you're trying to interact with day-to-day
life, the other sort of 98% of use, you're going to have a superintelligence locally
that's more than good enough to know exactly what you want, you know,
clean my house, drive my car, all of those things. So that's imminent. That technology is here
right now. There's the other part which a lot of companies don't want to, you know,
give their sensitive data or interactions over to Open AI, over to Microsoft, over to Google,
and the ability to run all of this locally and have that capability in your, in your phone,
you know, wherever you want it, I think is super important.
This is absolutely critical because, you know, even if you have MCP where you can do things locally,
your queries still go up there.
So if you're a law firm or an accounting firm or a government, anything with sensitive data,
you don't want your queries being uploaded into the model either.
And so this is a huge deal.
And I think as we embed it, you know, you'll have a lawnmower having this stuff embedded in it.
And it'll have a little LLM checking the weather and when it's going to rain.
It'll, to me, just have that much more intelligence in it.
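For anyone who wants to try the local-inference idea being discussed here, a minimal sketch with the Hugging Face transformers library looks like the following; the model name is a placeholder assumption, and any open-weight chat model small enough (or quantized enough) to fit on a single consumer GPU would do:

```python
# Minimal local-inference sketch: the prompt and the answer never leave the machine.
# The model id below is a placeholder, not a real repository; substitute any
# open-weight chat model that fits in your GPU's memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-open-weight-chat-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single consumer card
    device_map="auto",          # place the layers on the local GPU automatically
)

prompt = "In one sentence, why does low-latency local inference matter for robots?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The latency argument from the discussion above falls out naturally: nothing in this loop touches a network.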
You know, I just re-read Hitchhiker's Guide to the Galaxy with my son, Jet.
And I enjoyed, you know, Marvin, the depressed robot.
I can imagine having a lawnmower having an attitude about, no, I don't want to cut the grass today.
I'm tired. I did that last week. I need something different this week.
He was so far ahead of his time, Douglas Adams, just unbelievable.
I'm going to recount my favorite quote from him, where he said: anything in the
world when you're born, we call it normal; anything invented when you're young, that's called
a career; and anything invented after you're 35 years old is just bad for the world. Just bad.
Dave, you were saying? Oh, anyone who missed that 1X robotics podcast, go back and watch at least
the first 10 minutes of it, where Peter's interacting with the robots. And they're, you know,
they're a little clunky when they're moving, and that'll get fixed very quickly, but they're
perfectly vocal when they're talking to you. It's just unbelievable. Everything you say,
it understands perfectly, and then its reactions are perfect as well.
So that part of the interface is just, is already...
You know, it was interesting when Bernt Børnich, the CEO of 1X, said, you know, we need to have
the compute, you know, you said, why do you have the compute in the head?
Why don't you have in the cloud?
And he said, well, because we can't afford the delay time, the time for the, you know,
for the electrons to get back and forth from the, from eyes to brain, from brain to, you know,
actuators. Crazy. That's it exactly. So maybe to tie a bow on what I was saying earlier,
I think this is actually about latency in the long term. Privacy considerations? Yeah, sure,
to first order. But the reality is, I think exactly as Salim was gesturing at,
ultimately you want new knowledge that isn't already pre-trained into the model that requires
reaching out to the world and then privacy gets lost. Every week, my team and I study the top
10 technology metatrends that will transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more.
There's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and our careers.
If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else, this report is for you.
Readers include founders and CEOs from the world's most disruptive companies, and
entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be
informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free,
go to Diamandis.com slash Metatrends to gain access to the trends 10 years before anyone else.
All right, now back to this episode. All right, let's move on. This is a related subject in that
where things are continuing to move. And just, again, our mission here is give you a sense of how fast
this is going and how this is progressing. And there have been no barriers, no ceilings that have been
witnessed. So this is an article that's labeled AI scaling laws have been shattered. And this is
a 32 billion parameter model that broke the Pareto frontier for AIME 24 and AIME 25. Going to you
again, Alex, what are we seeing here? Yeah, there are so-called overhangs everywhere. An overhang, a
general term of art in the AI research space, is this notion that there are capabilities
that are latent just waiting to be unlocked, literally waiting to burst out if only we know
where to look for them. So there arguably was a compute overhang when large language models
and even before LLMs some of the earliest machine learning advances were around because we had
GPUs lying around from video games just with all this compute waiting to be unlocked for this
new purpose. Similarly, arguably, this paper here, which announced a new capability that
was labeled or branded as data-efficient distillation, is pointing to the notion that there's
arguably a new class of overhangs that are just waiting to burst out and unlock new performance
with relatively low cost. And this is this idea of distillation. Distillation means taking a
larger so-called teacher model and using the teacher, not unlike human education, to train
a smaller student model with the best of the teacher's knowledge. And the core idea from
the paper behind this chart is that with a properly structured curriculum, with a proper
data set, with a teacher model explaining step by step the teacher's knowledge and a number of
other innovations, that it's possible to take a relatively smaller student, a smaller cost-efficient
student, and have the student demonstrate an enormous jump in capabilities. So I think
innovations in distillation and the organization of training data sets are yet another
overhang that's just waiting to yield 10x, 100x improvements in model performance.
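To make the teacher-student idea concrete, here is a minimal sketch of the classic distillation loss (soft targets with a temperature); it is illustrative only and is not the specific data-efficient recipe from the paper under discussion:

```python
# Classic knowledge distillation: a small student model is trained to match the
# softened output distribution of a large teacher model (Hinton et al. style).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher, scaled by T^2 as is conventional.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 examples over a 10-way output space.
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
print(distillation_loss(student_logits, teacher_logits))
```

The data-efficiency results described above come from what is fed through a loss like this (curricula, step-by-step teacher explanations, and so on), not from the loss itself.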
Dave, one of the things that we always see in these charts is you see these logarithmic axes
and you'd say, well, what's the big deal?
That red star doesn't look like much.
But that x-axis is a log scale.
So the red star is 1/100th of the training corpus of the purple star to its right.
So the point here is that you can have the equivalent knowledge with 1% of the compute in the training process.
So 100x difference.
And so the implications for startups trying to build foundation models that are specific to use cases is unbelievable.
Like, you know, because you tend to get intimidated by OpenAI and by Google having, you know, a billion-dollar-plus training budget.
But if you can build equivalent capability with one percent of the data and then specialize it with data that's, you know, specialized in reading X-rays, specialized in designing parts for rocket ships, then you can actually build a specific model, as intelligent as any other intelligence in the world, within a reasonable, you know, budget.
Awesome.
All right.
This is one of my favorite articles, and conversations, of the day here: GPT-5 can predict the future.
So the concept here is, can these systems actually predict the economic performance of complex systems, or human societal performance, of where things are going?
And we're going to find out.
But these rankings on a Brier score are pretty impressive.
Again, Alex, how much credence do we give this?
Do you really think we're going to see AI models predicting, you know,
sort of the S&P 500 or, you know, the Olympics, Olympic winners this next, you know, next cycle?
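For context on the metric behind those rankings: the Brier score is just the mean squared error between forecast probabilities and what actually happened, with lower being better. A tiny worked example, with made-up forecasts:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# The forecasts and outcomes below are invented purely for illustration.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7, 0.5]   # predicted probability that each event happens
outcomes  = [1,   0,   1,   0]     # what actually happened (1 = yes, 0 = no)
print(brier_score(forecasts, outcomes))  # 0.0975; always forecasting 0.5 scores 0.25
```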
Well, what's wonderful about predicting financial indices like S&P is they immediately get priced into the market.
The moment that there is an amazing crystal ball for predicting market
performance, every financial institution, every quant fund will race, to the extent they haven't
already (they probably have already), to incorporate these LLMs and foundation models to trade better.
So in some sense, I think one can separate predicting financial markets where, as the meme goes,
don't worry, it's already priced in, versus the rest of the universe where I think, quite frankly,
I think this starts to look like Isaac Asimov's psychohistory for forward prediction.
And then something I hear almost no one else talking about.
What about retro prediction?
Can we predict the past or retrodict the past?
So much of our past is a black box to us.
If we can predict the future, can we do a really amazing job of retrodicting what came
before us to very high resolution?
I love that.
So you mean like we have data points of ancient Greece and ancient Rome?
Can we fill things in?
Is that what you're speaking about?
Exactly, to ultra-high fidelity.
Some have called this aspirationally quantum archaeology.
Could we retrodict the past light cone to quantum level fidelity?
And I think, ironically, predicting the past, retrodicting the past might be even more exciting than predicting the future.
I love that, Alex, that's amazing.
I mean, you and I have had the conversations about going out, you know, thousands of years out to the light cone and looking back at Earth and being able to see what happened if we had the technology to do that.
That's right.
I think this is an incredibly exciting idea for AI.
Celine, what are you thinking?
It's fascinating because, you know, history is always written by the winners, right?
So the narratives and the types of publications around history are always coming from one side
and just gives us an opportunity to balance the playing field.
In my office, you may notice when I'm there, the stack of books behind me called The Story of Civilization by Will Durant.
And they spent their entire lives trying to document objectively what actually happened rather than the Romans saying this after they conquered something.
So now we can actually go and really fill it in.
It would be amazing to see how you would rewrite that.
You could almost create a Wikipedia of this of what actually happened and let that be a referenceable modeling itself.
I'd go even further and speculate maybe this is what mature civilizations do.
They attain a certain level of superintelligence, and then some fraction of their compute gets allocated to navel-
gazing and figuring out where they came from.
Dave, what do you think is going to be interesting to predict in the future if we really get
this right?
I was going to ask Alex that question, because this benchmark is another one that looks like
it's getting saturated in a hurry.
And the concept, you know, the headline there isn't exactly eye-catching.
You know, we used the mean squared error like everyone does.
But what are we going to do next to predict the future in kind of a benchmark way?
I mean, there's so many options there.
But what do you think is coming?
I think at some point predicting the future starts to become indistinguishable from innovating.
So you could ask, like, what's the next major scientific invention next year?
Well, to predict that accurately, you actually have to make the scientific discovery or the invention itself.
And I think that's, that is, as we've discussed previously, that's the thing that happens the day after superintelligence.
We start to get this flood of scientific mathematical engineering discoveries.
The best way to predict the future is to create it yourself.
Yeah.
Exactly.
is to invent it.
Can you lay down a specific challenge?
You know, Erik Brynjolfsson has a whole new class coming in soon.
If you give them a challenge, they'll rise to it.
But what would you measurably want to predict?
That's just fun and cool.
How about the details of the next 20 Nobel Prize winning discoveries?
Super cool.
All right.
I mean, honestly, what happens then is where do you invest your money, right?
So if you have the ability to predict, you know, given the fact that, you know, you can distribute your capital across the board, but, you know, is there a higher likelihood ROI on one specific technology other than, you know, what we're seeing is digital superintelligence, you know, put your capital all there.
For hedge funds, I think this will be amazing.
Yeah, for hedge funds.
And I think if you can boil it down into predicting something that's happening in near real time, so people can follow it.
play-by-play. So here's what the AI says is going to happen next, you know, either in a
sport or in a news event, you know, and then you can track it, because people love their
Polymarket and they love their Kalshi. And so if you can say, here's the
AI benchmark and here's the resolution happening almost instantly. People get super
engaged with that. You can do that right now with fusion, right? Because there's all these
thresholds of, can we hold a magnetic field for X amount of time? And once you can hold it for
a certain amount of time, it means we can actually extract the energy out of it. And then that's
huge. There's a sequence of known steps there that you could probably lay on a timeline and track
in real time to see what happens there. Amazing. I'm taking this to Vegas with me. That's for sure.
All right. GPT5 Pro develops new mathematics. So I think one of the things that I really want
to track on this, on this WTF podcast every week, is the breakthroughs in math, in physics,
in biology, in chemistry, and material sciences, because that's really where the juice is going to be.
This is what's going to help us continue on this accelerating curve.
So the researchers entered an unsolved math problem from a convex optimization paper into GPT-5 Pro.
The model produced a new proof, improving on the paper's attempt.
GPT-5 Pro has had similar breakthroughs in physics and other scientific domains.
Alex, you've been talking about this forever.
Yeah.
And I want to flash the meme.
you know, it's here, it's happening. I think we're just at the very leading edge now of
AI starting to bulk solve math, science, and engineering. Right now, it's a trickle. It's
sort of an interesting newsworthy moment when a weak improvement, arguably, over an existing
optimization theorem was proven independently by GPT-5 Pro. It's remarkable, just this one little
proof, but the trickle, I think, is going to turn into a tidal wave over the next year or so,
possibly by the end of this year. So I think what this is going to turn into is basically
bulk proofs in math, bulk discoveries in science, and bulk inventions in engineering,
all happening at once, which right now, culturally, we have no precedent for it.
And Alex, I really want to get your take. Like, I know for a fact that we're in full-bore self-improvement,
right out of the Leopold Aschenbrenner paper, but a lot of people
are in denial. And, you know, having worked in neural network research hands-on writing the code
for seven years of my life, I can tell you that what's happening in this proof is exactly the
kind of things you do when you're researching neural networks. And if you can do this,
you can self-improve. I'd love you to comment on that just to reinforce it. Yeah. So my mental
model is: in computer science, you have this notion, when you're trying to make a
program faster, of looking for the innermost loop. Usually there are loops
inside loops, inside loops, and you're looking for sort of the core engine of a computer program
that's the most time-sensitive, most critical path part that you want to optimize.
With accelerating technology, with the singularity, if you like that formulation,
arguably the innermost loop looks like optimization.
If tomorrow we can use AI to discover a better optimizer that offers orders of magnitude improvement,
there's almost no other juice that's worth the squeeze,
mixed metaphors, other than developing better optimizers and developing optimizers that are
better at developing optimizers. That certainly smells like the innermost loop of our civilization
right now. Exactly. Exactly. I'm so glad you said it. And we, you know, we swagged the software-
only rapid self-improvement at somewhere between 1 and 10,000x. And that was in a slide a few
podcasts ago. But now you saw, on that slide we had a couple of minutes ago, 100x just in
the data selection, you know, in the choice of which data to use. So there's a
100x on just one of those. I think we had like eight dimensions in that slide of improvement
that are all multiplicative when you put them together.
Yeah.
But if you have 100x in just that one, then our estimate was, if anything, on the lower bound.
So the implications of that are just mind-blowingly big.
Because a lot of the deniers are saying, well, look, as we throw more compute at this,
we're getting diminishing returns on this curve over here.
Isn't this all going to slow down?
And they're right on that one dimension, but the acceleration in these other dimensions,
It's so much bigger than that slowdown, and that's why people are going to underreact.
I want to connect back to the previous discussion around history here, because imagine you take all of this capability now and apply it to all of the hundreds of thousands of experiments.
You know, somebody did an experiment with a thousand lab mice giving them something with a control group and whatever, and they're looking for one specific pattern.
Now you have an AI that can look for all sorts of other patterns that a human being couldn't possibly see.
And I think we'll see unbelievable breakthroughs coming just from doing better analysis of the experiments,
the thousands and thousands and millions of experiments that have already been done.
That would be really incredible.
So, Salim, I mean, I'd say that's overhangs everywhere, including an overhang of previous scientific discoveries that are just waiting to be reanalyzed and reinterpreted.
Amazing.
All right.
Let's watch a quick video here from Sam Altman about the Indian
market for GPT-5.
Again, I labeled this slide the land grab.
I want to talk about that, the idea that we have these companies going out to deliver
capacity to nations at a time.
We've seen this in the UAE.
We've seen this in Saudi.
We've seen this in other places.
All right, let's listen to Sam here.
India is, India is now our second largest market in the world.
It may become our largest.
We've taken a lot of feedback from users in India about what they'd like from us, better support
for languages, more affordable access, much more.
And we've been able to put that into this model and upgrades to chat GPT.
So we're committed to continuing to work on that.
So India, 1.41 billion people, you know, the vast majority, 80, 90% in severe poverty, half of
those in squalor.
It's a nation that needs AI more than
anybody for health and education.
And OpenAI wants to go there and give it to them.
I'm going to link this article with the next one, which is OpenAI in talks to provide
GPT Plus to the whole of the UK.
And it's not, this article on its own isn't critical.
But here we have these companies going in and saying, hey, let's give your population,
your school kids, your, you know, your factory workers, everybody, access to our model.
And I do think it's sort of a land grab.
What do you guys think?
I cannot wait to hear your thoughts on this, guys.
Something is going on beyond just the cover story here.
I know it for a fact.
When we were at Open AI, not this trip last week, but the prior one about five weeks ago.
Yeah.
I said, why don't you guys open an office in Boston?
We have like 10 times more computer scientists in Boston than you have here in Silicon Valley,
incredible talent pool.
And they said, well, not going to do that because strong AI is imminent.
And this workforce is going to be all AIs.
But then they go a couple weeks later and open this huge new office in New Delhi.
And you're like, okay, you skipped right over Boston and New York and went right to New Delhi.
That's not coincidence.
And if you look at the demographics of India, it's the biggest population in the world, just crossing China right now.
But the age is right in that, you know, 20 to 35 sweet spot is much bigger than any other country in the world.
And so it also, Mercor, Mercor is now at a $10 billion
valuation, you know, the Brendan Foody story, which we can talk about if we have time.
But Mercor is almost entirely operating in India now in terms of recruiting talent for the big AI
companies.
And so something beyond just the, it's the biggest market in the world is definitely part of this
plan.
I'd love to hear thoughts.
We were talking about overhangs, right?
The intellectual overhang in India is unbelievable.
I don't know if you know the story of the mathematician Ramanujan.
This was an obscure accountant in India 100 years ago, and he was sent to
Cambridge, and he got a lot of racism, so he came back and died in obscurity. And then his widow
handed in all his mathematical notes after he died. And they found that there were like seven
problems in mathematics that had never been solved for like a thousand years. And he had solved
five of them. And so they've got teams of PhD students who are reverse-engineering the notes now.
How the hell did he do this? And this is endemic across India. I think the bigger issue here
is the infrastructure and energy and bandwidth and so on that need to be solved for.
because you're hitting people at the, you know, as you mentioned, Peter,
a large number of Indians are below the poverty line, right?
And so this will give them, this has a double effect of allowing them to get out of that
if you can get them the compute and infrastructure to scaffold themselves out of there.
So the potential is unbelievable.
Yeah, I tend to agree broadly that there are several, maybe two or three feedstocks
to what we perhaps think of as
global abundance and abundant intelligence or abundant superintelligence is arguably one of the
most important inputs. To Salim's point, arguably abundant energy is another one of those feedstocks.
If the world is just drowning in intelligence and energy, maybe materials too; query whether material
scarcity just follows, or is resolved automatically, with energy and intelligence post-scarcity.
I think everything else, all of these global abundance challenges that we speak of, I think all of these are downstream of those input, those feedstocks and can be resolved and mitigated much more easily.
That's one thought.
The other thought regarding UK specifically is this starts to look like a prime example, if it were to come to fruition, of what one might call universal basic compute, UBC.
And UBC, maybe call that a special case of a larger class of approaches, universal basic services, sort of the supply-side dual of universal basic income.
And the future looks very interesting if every citizen of a country is automatically supplied with a basic level of compute.
You know, I also think this is going back to your question, Dave, is this is an economic play.
You know, if you can go in and get your software in as the basis to a billion people on the planet who are going to use your software to create more income for themselves and get a better life and then be able to pay for your software, I mean, you know, isn't this just an ability for them to, you know, I'm trying to find a good analogy without going to drugs and giving, you know, giving the school kid a taste of a drug just to make sure that they start to use it.
I mean, this will become addictive to entrepreneurs and educators and health care workers and
government workers over the next, you know, over the next few years.
And the question is, if you start using OpenAI, ChatGPT-5 and 6 and so on, would you switch,
or has this become baseline for a billion people in India?
Let's go to our next story here.
And this is just part of Open AI's mission.
You know, Dave, you and I spoke about this when we were up
at OpenAI headquarters last week. So OpenAI's global data center dominance: opening up two
large compute centers, one in Texas, Texas Stargate, up to five gigawatts capacity. Again,
note, we're measuring the data centers in terms of power, not numbers of GPUs. And then the Norway
mega center, 290 megawatts, in this case, 100,000 GPUs powered by hydro. It's interesting. I was in Brazil
talking, you know, Brazil is a very energy-rich country, and I was saying, you know, look at what's
happening with Open AI in Norway. They're going there for the hydropower. If you want data
centers down here, you know, make sure you get your access to power and make it available.
Power is sort of the, you know, the pheromone that attracts the data centers there.
Dave, what are your thoughts here?
This came up in a big way when we were at Open AI last week, because I was asking Kevin
Weill, you know, is there a vulnerability for Open AI in this area? Because the, the
big competitors have, you know, Google has massive data centers from years of GCP and Microsoft
has huge data centers. And Kevin's answer was, yeah, well, Stargate will be online.
And these are the biggest data centers, biggest investments humanity's ever made. But, you know,
Open AI is starting from not having any data centers at all. He did confirm also that they're doing
their custom chips. I don't know if that was public information. I guess it is now. But doing custom
chips as well. So I think all the horses in the race now are running in parallel with huge
infrastructure build-out, plus custom chips. And then now, you know, the new thing is
the AI designing the chips. Yeah. And the data centers. And soon the energy supplied to the
data centers and soon predicting which politicians are going to support the data centers.
So, you know, it's all AI all the way down. All right. I want to play a short clip from CNBC here.
This is with Sarah Friar, who's the CFO at OpenAI.
This is going to sort of wrap up our Open AI only segment.
We'll go to the rest of the AI world in a moment.
But here it is.
Open AI hits $1 billion per month in terms of revenue.
CFO warns of huge compute demand.
And there was a buzz about some remarks that Sam made about, is there an AI bubble?
We'll talk about that a little bit.
The developer outcome was actually great.
I think our numbers were up something like 50% just week over week on the number of tokens and so on being used.
What we see is tokens in particular for agentic behavior and so on almost doubled.
Reasoning, which is why I get real excited about because that's a place where I think we've really extended our lead was up 8x in terms of usage of the reasoning components of the model.
Okay.
So tell me about this.
Sam has a dinner.
I believe that was in San Francisco.
He does.
And he says at the dinner, though,
this is the famous dinner from the last week, where he says that there could be an AI bubble taking place.
Do you believe there's an AI bubble?
And I say that in the context that there's apparently a secondary sale of some of your private stock that some of your employees may be trying to sell at a $500 billion valuation.
Amazing.
So, Dave, is there an AI bubble?
There's definitely not a bubble.
And two things.
Sam, first of all, is now in full-bore downplay mode because he doesn't need to hype
anymore. He's exactly where he needs to be. So he's in full-bore downplay mode. We've seen that
before. And then there are plenty of bad investments out there, all kinds of charlatans running
around raising capital, and those companies will fail. And then people will say, see, I told you
it was a bubble. But that's not true. The tailwind is like nothing we've ever seen. And
everybody is now, you know, whether they know what they're doing or not, they're all kind of
jumping on the ship. All the business school people are coming out of the woodwork, getting involved.
And so, yeah, there's going to be some bad investments.
And then people will say, see, I was right.
It was a bubble.
Absolutely not a bubble.
It's the biggest shift in human history.
And the worst thing you can do is not get on board and ignore it.
That's the worst move you can make.
Salim.
For me, this smacked of trying to manage your market cap,
you've got employees trying to sell their stock on secondary.
And you're like, oh, my God, I'm trying to raise money out here separately.
This is a total disaster.
I have to do something.
I have to say something.
So this is what it looks like.
for me. Those numbers that Sarah was quoting were the jumps in users and compute right after the
GPT5 announcement, because there's a lot of conversation, right? Just to be clear, this is the biggest
announcement on the planet, right? This was hyped, not necessarily by Open AI, but by the world.
OpenAI's got GPT-5, right? The Leopold Aschenbrenner paper was like, we're going to have
recursive self-improvement when we get there, so hard takeoff. And I think everybody was expecting
you know, again, AGI, and we got, you know, a simpler model with lower costs and not what was
expected, but the world has responded doubling and redoubling on their use of open AI.
Alex, what do you think of the GPT-5 model? You've been playing with it. Is it, is it everything
it's been cracked up to be? I've been very impressed. So prior to GPT-5 thinking and GPT-5 Pro,
o3 and o3 Pro were two of my favorite models, and GPT-5 Thinking especially is increasingly
my go-to model for most tasks. It feels like a credible improvement. And I also think more broadly
on this point of trillions of dollars being spent on data centers, as long as the revenue
growth continues to grow spectacularly, I think the party can continue for capital expenditures
on data centers.
The way that I, yeah.
Sorry, there's just something else here that's really important, Alex, in our last
podcast, you talked about the fact that GPT-5 uplifted 700 million people on free models, right?
And that is a massive, massive jump, and we're going to just start to see the beginnings
of that over the next few weeks and months.
Totally.
And there is a recursive element to it in the sense that if 700 million people, many of whom
are now using reasoning for the first time, are using reasoning.
and then using that to increase their productivity and their intellectual output and their
economic output, that starts to recurse back through the system and feed back more available
capital, more real capital into the system to build more data centers and empower more people.
And it's a wonderful, positive feedback loop.
And then hopefully, yeah.
I really love Alex's thoughts on this downplaying of expectations because we now have infinite
appetite for compute, which has never existed in the world before.
Like, you know, if I'm running spreadsheets and I have a billion computers, who cares?
Like, it's still a spreadsheet.
But now all of a sudden it's completely inverted with infinite appetite.
And Sam has to be very careful about what he can promise to the world because it's all completely constrained at the compute level.
And so when you demonstrate, like, a Veo 3 capability, people get very excited about it, but then you realize, oh, wait, I have way too many users and I can't actually deliver it.
So that's, I think, what's driving the, well, hold on, guys.
we're where we want to be, we're capable of doing a lot more than we wanted to show on GPT5
launch day. But if we show it, then people will want it, and we just can't deliver it until
Stargate's online. And even then, it'll be constrained.
Yeah, so Dave, to your point, I think what we're starting to see emerging, albeit in latent
form at the moment, is a microeconomics of productivity per token. Some tokens, if we assume just
naively that every token is equally expensive to deliver or to generate, some tokens are much more
economically productive than others. So one could imagine per token, maybe some sort of spreadsheet
agent, very productive, unlocks huge productivity, but maybe video generation relatively less
productive per token. So I think we're going to start to see a new microeconomics of token level
productivity emerge. Fascinating.
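As a toy illustration of that token-level microeconomics, here is a sketch in which every number is hypothetical, purely to show how value per token might be compared across workloads under the naive flat-cost assumption Alex mentions:

```python
# Hypothetical "productivity per token" comparison. All figures are invented;
# the point is only the shape of the calculation, not the numbers.
cost_per_token = 2e-6  # assumed flat $ cost per generated token

workloads = {
    "spreadsheet agent": {"tokens": 5_000,   "value_usd": 40.0},
    "code refactor":     {"tokens": 50_000,  "value_usd": 120.0},
    "video generation":  {"tokens": 500_000, "value_usd": 3.0},
}

for name, w in workloads.items():
    value_per_token = w["value_usd"] / w["tokens"]
    net_value = w["value_usd"] - w["tokens"] * cost_per_token
    print(f"{name:18s} ${value_per_token:.6f} of value per token, net ${net_value:,.2f}")
```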
We're totally managed by an AI, of course. Of course, of course.
Hey, everybody, there's not a week that goes by when I don't get the strangest of compliments.
Someone will stop me and say, Peter, you've got such nice skin.
Honestly, I never thought, especially at age 64, I'd be hearing anyone say that I have great skin.
And honestly, I can't take any credit.
I use an amazing product called One Skin OS01 twice a day, every day.
The company was built by four brilliant Ph.D. women who have identified a 10 amino acid peptide
that effectively reverses the age of your skin.
I love it, and like I say, I use it every day, twice a day.
There you have it. That's my secret.
You go to OneSkin.co and enter the code Peter at checkout for a discount on the same product I use.
Okay, now back to the episode.
All right, let's go to the AI Wars for the rest of the field.
And again, just to be clear, we've been speaking about Open AI and GPT5,
but we're about to see Gemini 3 coming online and then GROC 5 coming online.
And it's just a literally leveling up week on week on week.
So in the rest of the field here, Claude Sonnet 4 now supports a million tokens of context.
A million tokens is a good amount.
I remember in the early days being able to only put in a few pages at a time.
A million tokens is about 750,000 words, three to four thousand pages of text.
I asked for an analogy and GPT-5 said, hey, it's the entire
Harry Potter series. And we're seeing, what, Grok probably somewhere in the five to
600,000 tokens. Right now, GPT-5 is around 250,000 tokens. It was a nice step up. Any thoughts
in this, Dave? Huge, huge, huge. Everybody thinks, I don't need that. What am I going to do with
a million tokens? Then what happens is the AI is so productive for you. If you write code, if you're
writing text. It's so productive that you end up with a massive amount of stuff you've created
very, very quickly. And it will forget what you did the day before because you've moved so
far in a day. So expanding that context field allows it to remember a lot more of what you're already
working on. And on the first day, you don't care. By day three, you care tremendously. I've written
more code in the last two weeks than in the prior 40 years of my life. Wow. And it's functional.
incredible, it's working, it's self-documenting. But now my hard drive, I have literally, what,
almost a terabyte of code and text that I've created in the last week. And it needs the
context to remember everything that it was already working on. And so it's never enough,
you know, at the rate that everything's accumulating, it's just never enough. So this helps a lot,
actually. Is there an upper limit to the context window? Or is it just purely compute-based and
RAM-based? There's no theoretical limit that I'm aware of right now to an upper limit for
context window sizes. Do you remember when 640 kilobytes was supposed to be enough for anyone? It feels like
we're in that era now where one can reasonably foresee a few years out. Maybe we'll have
effectively infinite context. And when we find ourselves in that world, retrieval augmented generation
rag, maybe that goes out the window. Maybe fine-tuning of models goes out the window completely.
Why bother doing any of that if you can just dump your entire company's corpus of knowledge, code, documentation, emails all into the context window and get effectively free marginal intelligence?
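For a rough sense of scale on what "a million tokens of context" means, here is a small sketch using OpenAI's tiktoken tokenizer as a stand-in; exact counts vary by model and by text:

```python
# Rough scale of a 1M-token context window, using tiktoken as a stand-in tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
sample = "The quick brown fox jumps over the lazy dog. " * 1000
tokens_per_word = len(enc.encode(sample)) / len(sample.split())
words_in_window = 1_000_000 / tokens_per_word
print(f"~{tokens_per_word:.2f} tokens per word -> ~{words_in_window:,.0f} words per 1M-token window")
# At roughly 250 words per page, that is on the order of 3,000-4,000 pages of text.
```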
This is exactly why Blitzy is signing deals as quickly as they can have meetings, because Blitzy has this infinite context window coding capability, and it took them a solid year and a half to develop, and now it can take an infinitely large context and restructure it to fit into whatever window is available.
So, Dave, take a second because we're going to be doing a podcast with the CEO of Blitzy.
Take a moment.
This is a company that you incubated, you supported, and Link Ventures basically financially backed.
Give me some context, appropriately enough, here on Blitzy.
Yeah, so Blitzy is two best friends from Harvard Business School, one who went to West Point,
who's an organizational genius, Brian Elliott, and Sid Pardeshi, his co-founder,
who's a technical genius from India, who also came to Harvard
Business School. They founded the company together in our office. It's taking over
the office space right now, eating desks. Like I said, signing deals as quickly as they
can have meetings. Also, their Harvard Business School professor joined the company. So you know,
that's a good sign. And then my youngest son also jumped on board because it's just, it's just,
it's sucking everything up. So they, they write, you know, two, three, four, five million
lines of code in a night that's all fully debugged and functional the next day. And,
And the original use case was, hey, there's all this legacy code, mainframe code that hasn't
been touched in a decade, incredibly expensive to maintain.
Can AI come in and just rewrite all that in a modern language, make it much more efficient,
move into the cloud?
And so that's a lot of their bread and butter.
But now they're moving on from there into Greenfield, like, yeah, what can we create
from scratch that didn't even exist in the world before?
And, you know, the specs for these, before you even launch the code, the specs become
these 100, 200, 300-page documents, all written by AI.
and it's just hard to keep up and even proofread them
before you hit the go ahead and build a launch button.
So they're really on the forefront of building really big things
and really short timeframes and having the system debug and fix itself
and something fully functional comes out the other side.
I love that. I love that.
All right.
Coming out the other side, we have Perplexity making a $34.5 billion bid for Chrome.
You know, you guys really love Perplexity.
I've just started using it to, you know, look at it, compare it to Google.
So this is a $34.5 billion unsolicited offer.
You can imagine you're sitting at Google headquarters and someone comes up and says,
I want to buy your favorite child.
Here's $34 billion.
So the offer is larger than Perplexity's reported valuation.
And I guess the concept here is that, you know, Alphabet and Google have been under incredible regulatory pressure
that could force them to divest, you know, to split it up.
You're making too much money.
You're dominating the field.
They are the, you know, projected winner of the AI race by a lot of the, a lot of the, you know, experts out there.
So thoughts here.
I mean, is this anything other than a PR play, Salim?
I think two things.
One is a PR play.
I think second is, here's what I predict happens.
I think the Trump administration gets involved and says, well, you want to give us 10%
of Chrome
to help keep you protected?
That's what I think happens.
Oh, good.
Oh, yeah.
Well, so what was going on here, though,
is Google was claiming that Chrome can't be split from Google
because it's useless without Google and nobody would want it.
And perplexity wanted to show the FTC,
we'll take it, and we'd be willing to pay for it.
So it absolutely is not inseparable, or whatever the word is.
Google says it can't be removed from Google,
it has to stay inside; Perplexity is saying, no, that's absolutely not true.
And it's all part of the grand strategy.
I think they would buy it if Google is willing to part with it.
But it's more, you know, it's not PR, it's business strategy trying to keep the FTC active in this breakup.
All right.
I love this next sequence of stories.
So Meta freezes AI hiring after going on a blitzkrieg, we'll use that term,
and offering everybody, you know, I haven't gotten the call yet.
I'm not sure if you got the call yet, Dave.
And I guess I'm not going to expect the call yet from Zuck.
But after hiring 50-plus researchers, you know, with salary packages in the tens of millions to reportedly a billion dollars,
META has stopped its hiring.
It's reorganizing its AI teams into four groups: AI products, superintelligence, infrastructure,
and fundamental AI research.
And I'm going to match that story.
with the following story, which is now Microsoft is fighting back.
It's, you know, what's the Star Wars analogy here,
Attack of the Clones or the Empire fights back or whatever.
The Jedi. The Empire Strikes Back.
The Empire Strikes Back, yes.
And Microsoft is the empire.
So Microsoft is now offering multi-million dollar pay packages
matching the enormous offers that Meta was making.
And it's trying to raid the meta coffers.
Microsoft created an internal most-wanted
list of engineers and researchers.
It's great.
And they fast-tracked the hiring process.
And it's been said, like, if it's critical AI talent, you can receive an offer within 24 hours.
Now, I think the backstory here, somebody should make a movie immediately about this.
But the backstory is really interesting, too.
There was complete peace and detente for a long time between Apple, Microsoft, Google.
They basically settled in this mode where: Apple, you keep
cranking out the phones. Our Microsoft phone failed miserably. We'll give up on it. Microsoft
Office, that's our cash cow. Google Docs, you need to sit there and not threaten Google Docs
and in return, Bing will not threaten Google Search. And it's not all written down anywhere because
that would be illegal, but it's clearly stable. And that lasted for what, 10, 12, 15 years
of stability. Now they're colliding and fighting like you would not believe over AI. And it is
full bore going after your best people, anything I can do to get a head start on you. But we've never
seen all the tech giants going after one brass ring before. And so it's really great for
startups, because we have turbulence and chaos, which is always great for the new guys coming in.
Dave, that is beautifully said, buddy. That is beautifully said. Salim? Well, there's seven big companies, and
there were seven kingdoms in Game of Thrones, and it feels to me like that is what's going on.
I totally agree with Dave.
This is full.
I mean, I was in at Yahoo and you'd see people very politely moving between companies,
but it was very, very delicately done.
And there were back channeling nonstop to kind of make sure everybody was okay with it.
It was one of the really great models of co-opetition that was out there,
but now the gloves are off.
Yeah, it is a winner take-all.
And here's the other side of the equation.
You know, interesting here.
we have unlikely bedfellows.
Elon tried to enlist Zuck in a hundred billion bid for Open AI.
This is more just, this is just soap opera land.
I don't want to say anything more than that, but that was fascinating.
This came out in the news.
And then I want to close this segment on companies going after each other's employees on this
CNBC article.
This is AI deals creating zombie startups.
And, yeah, you go in and you hire the CEO and you leave the rest of the team there, sort of like, oh, my God, we just took, you know, $100 million in capital and we can't deliver our products.
Dave, are you seeing this, zombie, zombie companies?
I mean, this is the best friend model.
Yeah, yeah.
Yeah, no, it started with Character AI, you know, Google bought Noam Shazeer back for, wow, $6 billion or something like
that, and that started the wave, and now it's all the rage. But that Windsurf deal was the real
bellwether, because Windsurf was a very young company, 18 months, something like that, acquired for
$3 billion in a huge windfall by OpenAI. Oh wait, Microsoft blocked the deal, and then it turned
into an acqui-hire of the talent into Google. But what did the shareholders get? And that's still in flux.
You know, I'm polling our seed stage friends that are in that deal, saying, did you get ripped off
and wiped out by this?
Or, you know,
what does this mean for the venture world?
Venture capitalists I talk to are not super worried about it.
It's not, like,
it's not affecting that many of their investments.
But it's,
you know,
it really wrecks the whole venture landscape.
If this becomes the de facto standard,
you know,
kind of end game for a great startup.
There's a whole meme around the fact that VC,
the category, is going to be ending soon around this,
with this being one of the
jigsaw puzzle pieces
that breaks the whole thing. It's a really
bad problem from a
VC perspective just because
if you don't know what's going to happen
with that startup, what are you going to do with this?
Especially in a hot area, right?
Traditionally, that's where you funded it.
If somebody comes and licenses the tech and then
hires all the people, you're left with a shell.
If this becomes systemic, this will be
a big problem. I expect to see a bunch of lawsuits
coming out of this. Probably. A lot of this
is also the antitrust laws, right? There were a lot
of challenges in being able to acquire companies. I mean, there wasn't an IPO window
for the last number of years. Finally, that's opened up. And so you try and acquire the company
and then antitrust would say, no, you can't do that. So you'd buy, you know, as with Alexandr
Wang, you'd buy, you know, 49% of the company and effectively get control of it. And now,
instead, why just, you know, hire the talent?
Yeah, that's the deal.
You buy 49% non-voting.
So that doesn't trigger the FTC and Hart-Scott-Rodino and all that.
And then you move all the best employees over with huge pay packages.
And then there's a commercial deal where you license all the technology, and that's not disclosed.
So you have no idea what's buried in there.
It could be like you owe us all of your children for the next 10 generations, for all we know.
And you just don't know.
But that's the standard deal because it doesn't have to go through the waiting period.
And so in the race to AI, the big tech companies are desperate to move as quickly as possible.
So this is a really, you know, if you read Accelerando, Alex will recommend that to you all day long.
Manfred Macx, the character at the beginning, the whole structure of the way things are created has to get rethought because of the pace of AI.
And this is kind of the first foray into that new terrain that we're seeing.
And just in terms of the venture community, you know, we have multiple companies reaching multi-billion-dollar valuations in under two years. Prior to this wave of AI, I only counted seven times in the history of the world that that's ever happened. Now we have three in a single portfolio. So the tailwind for venture is unbelievable, like bigger than ever, ever, ever before. So I don't want to leave the impression that something's about to fall apart in venture. These are relatively rare deals. But it is the first bellwether that the new economy is coming, and some things need to be rethought just for speed.
A few quick articles about xAI. So xAI co-founder Igor Babushkin exits to launch an AI safety venture amid growing executive turnover at xAI. Karpathy, who headed Tesla's AI systems, is thought to be coming in to xAI. We'll see if that gets announced.
Alex, any thoughts on this one?
Yeah, I loved Igor's farewell note regarding xAI. He told sort of his life story of how he first went into AI because he wanted to solve science. So in some sense, now that AI is arguably on the verge of solving science, this is the perfect time for him to strike out and fund ventures in that area.
Nice. I love this article, again, in the Elon X universe. So Elon on AI increasing birth rate.
So here's a tweet. AI is obviously going to one shot the human limbic system.
I love that.
And this, of course, is the idea that, you know, why get married?
I have an AI girlfriend.
I have an AI robot.
But he goes on to say, I predict counterintuitively that it will increase the birth rate.
We're going to program it that way.
So a couple of quick thoughts here.
One, we're going to program it that way.
Your AI is subtly telling you, hey, you should have kids.
Hey, go get another girlfriend.
Hey, you know, Elon's got 15 kids.
You should have 15, too.
So when I saw this in the prep for this episode, I realized I don't understand any one of those statements at all.
If any of you could explain any of those,
what do you mean AI is going to one-shot the human limbic system?
How will that happen?
So any insights from any of you, I would really love it because this makes no sense to me at all.
So I think I understand it, I just don't want to comment on it. What I will say, though, putting aside understanding, is that any change in the birth rate, AI-driven or otherwise, would take decades to be felt in demographics, and the changes that we're seeing in AI right now are on a much shorter time scale, the time scale of months to low numbers of years, not decades. So I'm not sure it really ends up mattering either way. So just taking a second to think about the decreasing
birth rate. We've discussed this at length: places like South Korea, Japan, China, much of the world other than what we've seen in India and Africa, are below the replacement level of 2.1. Some countries are dangerously low, as low as 0.7, and they're literally sublimating. They're evaporating. And the question is, why is the birth rate going down? Well, a couple of things. One, women's education is going up, so women are desiring to stay in school longer. Number two, as people move into the cities, it's more expensive to bring up kids. Number three, the child mortality rate is lower, so a lot more children survive. And, you know, God, the number of children per family back in the 1950s was on average globally above five. And we've seen this precipitous drop because kids are living. You don't need to have an extra two or three kids to make sure they're there to work the farm.
So all of these have reduced the birth rate.
And the question, I think, logically, is: if we do have abundance, if you have access to robots for helping raise your kids, if you have access to AI and universal basic income of some sort to help you with income, can we shift back to building families instead of having to make a choice between work and a family? That might be part of, you know, sort of the Elon Musk counterintuitive
approach. Well, this quote, I know Alex needs to keep his reputation pristine because he does a lot of work for government agencies, and also nobody wants to irritate Elon. But this quote from Elon goes hand in hand with the one he had a year ago where he said, look, of course AI is going to be smarter than all of us. We're not as smart as we think we are. And basically, the undercurrent here is, look, AI is going to be incredibly persuasive very soon.
It is persuasive already.
It's more persuasive than the best humans.
Well, so then the purpose of that last quote, I don't know why he's being so honest about this.
But, yeah, the natural state is the birth rate is going to plummet to near zero.
But the AI, we're going to program it to convince you that it's a good idea to have kids as a way to stabilize human population.
That's a dangerous message, that we're going to program AI to influence you on anything.
Yeah, exactly.
That's why it's kind of hard to touch this slide, but that is exactly what he means. I don't know why he's saying it, because of the backlash.
Huge, but he's saying it.
All right.
I love this.
Musk acknowledges Google is in the lead in AI. So Elon concedes that Google currently has the highest probability of being the leader in AI, citing its massive compute infrastructure and data reserves. Google's dominance is backed by $85 billion in AI-related capital expenditures and strategic investments, like a 14% stake in Anthropic. So we look at all the prediction markets, and we saw, even when GPT-5 was announced, the prediction markets all of a sudden flip to, yeah, Google's going to win the race by the end of August, by the end of the year.
Alex, do you believe this as well?
I think this is more a reflection that winning, quote unquote, is a combination of spending enormous amounts of capital to build out data center and compute capacity, plus default distribution. That could come with billions of users from an existing service, or from a new service like ChatGPT that's just emerging and reaching toward a billion users perhaps by the end of this year. But that winning combination of both enormous capex for compute and default distribution, that's what I think this quote, and more broadly the conventional wisdom right now, suggests is what it takes to win as a frontier lab in AI.
Salim.
Two things here, kind of positive and negative. The negative is, I think, in building kind of cutting-edge AI, we've always seen that a small team will outperform a big company, always. And so, therefore, the strategy of the investment in Anthropic and so on is super, super smart here. But Google itself, I don't think, will do it; it'll rely on some of these external teams. On the positive side, their access to compute and infrastructure is so ridiculous, and as Dave mentioned earlier, now that there's effectively infinite demand for compute at the upper end, that may be the reason why they win. Yeah, maybe. I think the real race, actually, under the covers, is between the TPUs at Google and the Dojo chips at xAI, or Tesla, wherever those are theoretically made.
The Elon chips.
Because he just signed that $16 billion deal, which is really more like a $40 billion deal, with Samsung to manufacture those chips.
That's a lot of chips.
I think he'll get the data centers to put the chips in.
The question is the relative performance of the dojo chips
compared to the TPUs and the other next generation chips
that are getting designed right now.
So it's a foot race.
He's conceding because he wants to kind of say, look, don't look over here.
I'm going to be working on this over here.
Perhaps.
So, you know, at the same time that Elon is making that statement, here is some interesting data: users are choosing Grok over Gemini. So in terms of the star ratings, Google Gemini is at 4.8 with 394,000 downloads and ratings, versus Grok at 4.9 with 502,000 downloads and ratings.
It's interesting. You know, listen, part of this is that Elon's got the largest megaphone of everybody, being able to say on Twitter, on X, hey, check out what it can do, download it here. That works pretty well. Yes, exactly right. And he's using it aggressively. And in theory, Google has a bigger megaphone, because Google search is actually bigger than X, but it cannibalizes. That's the issue.
If Google were to push this as hard as possible,
the extreme would be to say you could only use Google search through Gemini. Then they'd bypass Grok in a heartbeat, but it would cannibalize their entire revenue engine. And Grok doesn't have that hang-up.
So it's really an interesting little balance in this great war that's going on.
What I love is the fact that with ratings like 4.8 and 4.9, regardless of who's ahead, the users win.
Yeah. That's a good point. Dave, there's another point here worth discussing that you and I've
discussed in the past, probably not on our Moonshots podcast, which is having a celebrated CEO who's
out there makes a big difference, right? So, you know, Elon, love him or hate him, and I would never bet against him, is out there constantly putting himself out there, tweeting, you know, 30,000 times a day, literally. And I've been with him at parties and events and he's in his phone, you know, tweeting away. And then he'll pop up and have a
conversation with you and he's back into his phone. So, you know, it's him to a large degree. I'm
not sure anybody else is posting for him. But we don't see that from Sundar. We see this from
Sam. Sam is out there as much, but doesn't have the platform yet. I'm certain that OpenAI will eventually create their own platform equivalent. We're seeing it a little bit from Dario and Anthropic, but I think that's so important. You want to speak to that? I totally do. I'm so glad
you brought it up because when Elon did Saturday Night Live, that was the turning point where,
you know, the definition of what it means to be a great CEO completely flipped on that day,
because here's the busiest guy on the planet finding time to go to New York and be the host of Saturday Night Live. Like, why would you make that choice? It's not random.
You know, it's not ego. It's part of a strategy. Why are you doing this? Because it clearly works for recruiting and capital raising. It attracts talent. And it's like you've been saying forever, Peter, you have to have an MTP. You have to have a massively transformative purpose that improves the world. But just having that purpose and not broadcasting, it doesn't recruit. If you have the purpose and you broadcast it, then talent floods to you, like they're coming to Elon. They're coming to Dario because you're out there and people recognize you. And, you know, social media is the cheapest
form of media in the history of the world. If you don't embrace it and get out there,
it's just the way you win as an entrepreneur now. You embrace it. You get your voice out there.
You get your face out there. Again, for the entrepreneurs, you know, we have incredible
population of entrepreneurs and builders who watch this podcast. And just a piece of advice,
you know, if you are passionate, if you want to change the world, it's not enough to just be
on your computer putting out code. Either you or your co-founder,
someone on your team needs to have an outsized personality out there, letting people know what you stand
for, what you're doing.
You know, I think it's critical.
All right.
Let's move on here to our next article.
And it's Google Drops AI model that runs on 1% battery power.
So Gemma 3, a 270-million-parameter model. It's a tiny, smart AI model that runs right on your phone. It can handle 25 chats using just 1% of battery power on a Pixel 9 Pro.
So again, this is AI that's being infused power efficiently into your phone and soon into everything that you touch, feel, and use on a day-to-day basis.
Thoughts, gentlemen.
I'll chime in and suggest that I think this may actually be more instructive regarding what the future of foundation models and frontier models looks like than, say, the large, multi-trillion-parameter sparse models. My suspicion is that if I had to predict what the most futuristic possible frontier AI model looks like, it probably will look like some sort of nanokernel, maybe with far fewer parameters even than Gemma 3 with 270 million. Maybe it'll only have a few million parameters. Maybe it'll dispense with the notion of parameters entirely, but be sort of like a small diamond nanokernel of an AI that hasn't memorized the world's knowledge. That's all externalized to some external database or knowledge base, but it's multimodal. It understands video and text and audio. It's able to reason, but everything else is externalized. And I think this sort of gem, this sort of perfect crystallized superintelligence, is where all of this ends.
Fascinating. So, I mean, the value, the reason we're going to these small models, is simply power. Part of it is just the desire to push inference compute to the edge, to enable local chatbots on phones. But I think ultimately these SLMs, small language models, are going to power the largest LLMs, the frontier models in the data centers, as well. The question is, given that we arrived at this AI revolution by compressing all human knowledge into as small a model as possible, is there another phase transition where we can compress it even further and figure out what that nucleus of superintelligence looks like? I think we're going to get there in the next few years.
Well, more importantly, this enables that lawnmower-checking-the-weather LLM type of thing, right? Because you can embed this in everything. I think that granularization allows you to go after the long tail of edge cases, of which there are an infinite number.
Sure.
Sorry, that vision that Alex laid out, there are two parts of it that I'd love to add to. One is that when you take one of these one-to-ten-trillion-parameter mega neural nets, all of world history is baked into those parameters, just a massive amount of information, most of which you don't need. So there has to be this diamond nugget that Alex was describing. That has to exist, where it can call on data, you know, just by looking things up, but it has the same level of brilliance as the big model, just without all that waste. And then the other thing is, you know, if you talk to the Liquid AI team, they're really fixated on this notion that these Apple devices have an
incredibly good neural processor in every single phone, every single laptop. And it's largely
unutilized. And if you can move that diamond nugget into that latent compute that's in, you know,
hundreds of millions of devices, it unleashes a huge amount of intelligence, especially during
this next kind of two to four year window when big data centers are struggling to catch up.
So there's a big short-term opportunity in unleashing all that compute in some useful way.
And so this takes advantage of that.
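For listeners who want to poke at the on-device idea Dave and Alex are describing, here is a minimal sketch in Python using the Hugging Face transformers library. The model id "google/gemma-3-270m" is an assumption based on the Gemma 3 270M release discussed above, and a phone deployment would normally use an on-device runtime rather than desktop PyTorch, so treat this as an illustration of how lightweight these models are to drive, not as anyone's actual deployment path.

```python
# Hedged sketch: run a small language model entirely locally.
# Assumes the transformers and torch packages are installed, and that
# "google/gemma-3-270m" is the released Gemma 3 270M checkpoint (an
# assumption); any small causal LM id can be substituted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()  # inference only; no gradients needed

prompt = "In one sentence, why run an AI model locally on a phone?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same pattern is what the "diamond nugget" argument points at: a small reasoning core on the device, with the world's knowledge fetched from external storage rather than memorized in the weights.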
All right.
I'm going to move us along.
The next article is from CNBC.
The U.S.
government takes a 10% stake in Intel.
Oh, my God, what a story this has been, right?
So the White House invested. I think actually the White House, through the CHIPS Act, had granted Intel $8.9 billion, and they're turning that grant into an investment. I'm so curious about what conversations took place in the White House when Lip-Bu Tan met with the president. But we've seen Intel stock, you know, rise on this news. Our friend Leopold Aschenbrenner, who took out call options on Intel, has made a killing in that process.
Dave, you and I were talking about that the day we discovered it.
Oh my God, what a trade that would have been.
Well, hey, watch the podcast, listen closely, get those tips.
But we didn't know that Lip-Bu would be in Donald's office two days after the podcast came out. All we knew is that Donald Trump had called him to the mat. But remember, we said on that podcast that what will happen next is Lip-Bu will meet with Donald Trump, and when they come out of that room, you'll know whether they cut a deal or started a war. If they cut a deal, what you'll see is Lip-Bu stays in place and Trump says really nice things about him. Go listen to the podcast. We said it, and that's exactly the way it played out.
And it's followed up by the investment. What were the call options priced at when we had that conversation? The short-term ones are up 100x from that day. Oh, my God. I texted you. I don't know what Leopold bought, but the call option was at like three cents and then jumped up to three bucks. That would have been the investment for sure.
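Just to make the back-of-the-envelope math concrete for listeners, here is the rough arithmetic behind that 100x figure, using the approximate premiums Dave quotes (about three cents rising to about three dollars). The stake is hypothetical, and these are not the actual prices anyone traded at.

```python
# Hedged sketch: rough return multiple on the option move described above.
entry_premium = 0.03  # approximate premium quoted in the conversation
exit_premium = 3.00   # approximate premium after the Intel news

multiple = exit_premium / entry_premium  # 100x
hypothetical_stake = 10_000              # illustrative dollar amount

print(f"Return multiple: {multiple:.0f}x")
print(f"A hypothetical ${hypothetical_stake:,} would become ${hypothetical_stake * multiple:,.0f}")
```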
After every one of these podcasts, I'm going to call you, Dave, and say, okay, what's the investment today?
You know, this actually was done a little bit by Obama when the 2008 financial crisis hit. The U.S. government gave a huge chunk of money to the car industry to save the car industry, and they got paid back in spades, because it turned into kind of an investment, and then they liquidated it. So this is very similar to that.
Yeah.
This is a really big, big, big deal for Intel. We met with Greg Lavender, the CTO, back before Gelsinger got fired. And he said, you know, that CHIPS Act money has so much crap attached to it, so much baggage.
Because, you know, the way these things go through Congress, they add garbage to it to the point
where it's useless.
And so they received, you know, $10 billion of unusable money.
So it was of no value.
So, you know, Trump being the business guy has restructured that into an equity investment.
Now they can just use the money.
Nice.
So you'll see some serious motion coming out of that money now.
At the same time, we're seeing Masa-san from SoftBank signing a $2 billion investment deal. And, of course, SoftBank also owns 90% of Arm, whose chips power 99% of smartphones. It's really smart, you know, what Masa is doing here, getting deeper into the chip industry.
I don't want to spend more time on this, but, you know, we need native chip capacity. We're seeing Samsung investing in chip plants here, Intel, our friends from Taiwan coming to the U.S. A lot's moving here.
The other point here is that Intel is too big to fail, too strategic to fail.
Yeah, for sure. I mean, that would have been the obvious conversation as Intel's stock price fell. I mean, the question was whether Intel was going to be broken up and sold to other U.S. companies.
Anyway, here from Bloomberg, Apple expands iPhone production in India for U.S.-bound phones.
So all iPhone 17 models are being built in India, not China.
And they're being built by Foxconn, right? So Foxconn, that built all of this in China, you know, literally took over the Chinese economy, has now moved it to India.
They're going to be shipping this next month.
That's a big deal.
Salim.
I'm actually getting on a plane to India right now, so I'll go check.
You don't have to go to India to get your iPhone; they ship it to you.
I want to make sure the production quality is high, because in India, quality is a bit of a question sometimes.
Oh my God. All right. It's interesting, right? So much pressure to get out of China right now.
Wait, wait, let me just make one more
quick point about that. This is a very, very big deal because one of the problems that India's
always had is this perception that you can't build high quality things there. And this will
shatter that perception. I think a floodgate of manufacturers will start to move to India.
One of our closing stories on AI, and then we're going to be going into robotics soon. And by the way, I just mentioned that we're going to be leaning more into crypto. Crypto and tokenized economies are going to be so important, especially as they connect to AI. So for those of you watching, we'd love to have you tell us who you want us to interview in the crypto space. We're going to be talking about crypto and AI because it's going to drive the future, especially with agents being able to trade and manipulate currencies. Not manipulate currencies, that's probably not the right thing to say. That's the AI part. All right, let's jump into this article here from
CNBC. Albania wants to replace its corrupt government with AI. So Prime Minister Edi Rama has advocated for AI ministers and prime ministers.
It's a big deal.
We've seen a little bit of this in the UAE and a few other governments,
but this would be a fascinating move.
Salim, comments.
Yeah, I think so. I met Edi Rama a couple of years ago, part of the circle of kind of heads of state that revolve around our EXO world. And he did an incredible job turning around Tirana, the main city, as mayor, and then went on to lead the country. And then he found that the government is corrupt enough that you need to do something, and you can't get out of it in an easy way. And coming at it top-down with AI is a super smart thing to be able to do.
I expect to see this across the board.
At the very least, having AIs that are monitoring activities of ministers and so on, that allow you to reduce corruption.
Right. In Colombia, for example, there was a port being built, and all the government folks bought all the land around the port and then sold it for like 100x just after that announcement. So you have that kind of institutional corruption, like congressmen and women in the U.S. being allowed insider trading. It's kind of incredible to me that the public allows that. And so this is the kind of thing where AI can oversee some of this and start hacking into that problem space, which is huge, because the corruption problem is a multiple-trillion-dollar drag on the global economy.
All right.
So, Dave, to you here, MIT study reports 95% of AI pilots are failing.
Companies have spent $30 to $40 billion in generative AI, yet 95% see no financial return.
The adoption rate is high, 80% are testing and 40% deploy, but the impact has been low.
Big firms are running pilots, but struggle to scale.
So I'm going to hit the next slide as well for a little more data.
And then let's talk about it.
So in this study, 95% of the failure is a failure to deliver financial benefit.
And the study goes on to say that it's principally because the companies don't understand how to use the AI tools properly.
There's a learning gap.
And companies that buy existing AI solutions succeed two-thirds of the time and those that try and build it internally do not succeed.
Only a third of them do.
We saw a huge stock market dive as investors feared, and I sort of hit on this, oh my God, is the AI bubble real?
And then interestingly enough, and I think this is one of the most important things,
Salim, you and I talk about in the EXO world,
is that startups achieve a much better return on investment regarding AI
because startups have fewer entrenched bureaucracies and business processes.
They don't force AI into existing workflows.
They are AI-native and they reinvent their business based on AI.
I can't stress this enough.
We're finding, you know, we've been working with CEOs for the last couple of years on this.
It is imperative to structure yourself as an EXO, right?
That's one.
There are two failure modes that we see in companies. An EXO is an exponential organization, right? Yeah, so this is the model that we've been pushing, and it's now being shown to work. I mean, companies using this model are delivering 40 times the shareholder returns of companies that aren't. I mean, it's just absurdly obvious once you see the model.
There are two failure modes that we see. One is people jumping into the water without looking for the rocks. We came across a medical CEO who'd uploaded all of his sensitive patient data into ChatGPT, and now he has a huge legal exposure because of that. So that's one bucket of challenges. And the second, bigger one is cultural resistance, because if you don't get the culture right, people inside the company are scared it's going to take their jobs,
and it's a mess. So those are two big
huge buckets that this is
essentially pointing out here.
Dave, you're living this world right now with all of the startups at Link Ventures.
Yeah, I'll tell you, don't read these
reports. It's not worth
the time. I've got to be
careful what I say here a little bit because we all
love MIT like no place on the planet.
And the disconnect
between the students and the faculty is like
nothing I've ever seen. So I work with the students
every single day, all the startups,
and they are killing it. And you,
Back in 2020, we did this little research thing.
And about 14 MIT alums all time out of 140,000 alumni had become self-made billionaires,
14 as of 2020.
Now, you know, Greg Brockman at OpenAI, Alexandr Wang, who just got acquired, you know, by Google, Mark Chen. I mean, they're everywhere. They're absolutely thriving in the world of AI. Just a correction: Alexandr got acquired by Meta, not Google, right? Oh, sorry, by Meta. Yeah.
So actually, you've got top guys at Meta, OpenAI, and Google coming out of MIT, in Greg's case dropping out of MIT. And they're just killing it out there. Meanwhile, the administration keeps cranking out these documents saying, slow down, chill out. What are you talking about? Well, what it really is, is we didn't invent it. Demis Hassabis got the Nobel Prize. Geoffrey Hinton got the Nobel Prize. I wanted that Nobel Prize. This isn't really happening. It's something else. Actually, the theme in all of these reports is there's still a missing component, some self-reflective, self-thinking thing that still needs to be invented to make these truly AI. Well, I think part of this is we're still
early, and we see these boards of directors where the chairman goes to the CEO: listen, what's your AI strategy? You need to have an AI strategy. And the CEO or the CTO is basically just, you know, throwing money at this without properly thinking it through. And it's the impedance mismatch between an established large company, that's doing everything the way they've always done it for 10 or 20 years, trying to, you know, force AI into the mix. And one of the last points on this chart here is companies are wasting AI potential by focusing on marketing and sales versus cutting the cost of back-end processes and operations, which is where the juice is. Yeah. So let me, can I just...
Good, so there's only one path to navigate this for companies.
The bigger the company, the more this has to be followed.
Smaller companies, as you mentioned, Peter, can adapt very well.
And Dave, you're exactly right.
This is a boom that's just going to keep going.
So don't slow down.
Don't let this affect you at all.
For a big company, there's only one model that's going to work.
And this is what I've been advising when I talk to the big company CEOs that we talk to.
Go create an edge organization that's replicating the functionality of whatever you're trying to do. Let's say you're building cars: create an edge organization that's completely AI-native and start automating the use cases bottom-up, one by one. And then you create a young, entrepreneurial mindset with the youngest employees possible and let them loose with AI, copying the functionality of the big organization. And now you're essentially doing A/B testing and seeing who can do it better. And over time, you can move there; you can deprecate the mothership and, little by little, move all of this over. And that becomes the new gravity center over time.
Do not try and transform the mothership.
Do not do it.
The organization on the edge, the CEO of that team, this is the Lockheed Skunk Works equivalent, needs to report directly to the CEO. Don't put them underneath the other organization. They need to be independent, allowed to iterate and do stupid things.
This is where, you know, Kodak goes bankrupt, even though they invented the digital camera.
Yeah.
And just to plug, we have solved this problem, folks.
So anybody struggling with this, just call us.
There's a 10-week engagement that we run.
In a big company, the default answer, if you try anything disruptive, is no. Everybody goes very French and says, pas possible, can't do it, da-da-da-da. We have learned how, in 10 weeks, to switch that default answer to a yes.
So that's all you need to do.
But then you need to do the thing on the edge, because even if you switch to a yes,
you can't get out of the old models quickly enough.
You have to do this thing at the edge
and let the gravity center over time drift to that.
I'll get off my soapbox now.
Totally.
There's a lot of energy.
The other thing, get your corporate venture fund back up and running.
A lot of them, like Intel Ventures got shut down,
just trying to save money.
You saw that prior slide.
Google owns 14% of Anthropic.
That's a $14 billion position by itself.
Like, get that corporate venture fund up and running again
and then be a development partner for some,
of these startups, try and be their first or second customer, be super supportive of them and
invest in them at the same time. And so then that group that you've invested in plugs into
your internal edge group that Salim just described. And that's how AI knowledge is going to
actually get into your organization. Because the reason these corporate things are failing is not
because the AI's failing. It's because you're just throwing it into a group of people that have no
idea even how to start using it. And they kind of don't want it to take their job away. So they don't
have a huge incentive in trying to make it work. So our next subject here,
OpenEvidence gets a perfect score on the U.S. medical licensing exam. This is huge, right? We've seen the data before: a human diagnostician gets 72 percent, a human plus GPT-4 was getting like 74 percent, and GPT-4 on its own was getting 92 percent accuracy in diagnostics. Here we see another version of this on the U.S. medical licensing exam, and again, this is to become a full doctor, right? You've done your internship and residencies. OpenEvidence scored 100%, GPT-5 97%. Again, this is AI taking the lead. Pretty extraordinary. I love the fact that 40% of doctors out there are already using this. That's very inspiring to me. Yeah. Listen,
every doctor is going to be using this or they're not going to be a practicing physician.
But here's the real topic I wanted to hit on.
Sam Altman is getting into the BCI, the brain-computer interface, race with a company called Merge Labs, which is targeting the combination of gene therapies and ultrasound as a mechanism to be able to read and write onto neurons. And I know the company well. I don't know what I can and cannot say, but I'm hoping to have this company on stage
with me at the Abundance Summit in March. Also, super pumped that Kevin Weil, the chief product officer, has agreed to come on our AI day and be there and talk about, you know, how fast GPT-5 is moving, when we'll have GPT-6, and when we'll have AGI. But check this out. You know, there's probably,
my guess, Salim, I don't know, 20 BCI companies, probably about four or five that I'm tracking that
are extraordinarily effective and moving rapidly.
Again, when Ray made his prediction of high-bandwidth brain-computer interface by 2033, I was like, Ray, you got this one wrong. And no, he's got this one right. We're going to see that.
The man is incredibly annoying.
Yeah, incredibly annoying.
And Alex, you know this, you know the team here and you know one of the co-founders as well, right?
Yeah, no.
My fellow Hertz fellow, Mikhail Shapiro is a co-founder. I'm a huge fan of the company.
My sense is this is such a rapidly moving space. I would love to see Merge Labs and all of its competitors deliver high-bandwidth BCIs in the next few years. If that window of the next few years for high-bandwidth BCIs isn't achieved, the risk is always that we end up with a pure AI economy that completely decouples from the human economy. And BCIs, I think, are our best hope, if they can deliver quickly enough, at keeping the human and machine economies coupled.
And you were on stage with me at Abundance Summit two years ago talking about the idea,
the importance of coupling, right?
So either we are on the AI team fully and we are literally coupled.
And this is the quote from Sam. So Sam Altman says the merge can take a lot of forms. We could all just become really close friends with a chatbot, but I think a merge is probably our best-case scenario.
We've seen this from Elon as well,
the idea that being able to connect our neocortex to the cloud
and being able to ride on top of AI's acceleration
versus being left in the dust.
Fascinating stuff.
All right, this is equally important, and again, this is just showing that AI and science are going hand in hand. So, OpenAI's GPT-4b. And, you know, I gave Kevin Weil a hard time on his naming protocols for his models. That's super funny. Yeah. OpenAI's GPT-4b designs proteins to reprogram cells into stem cells.
So you've heard me talk about the Yamanaka factors.
This is the Nobel Prize won by Professor Yamanaka in Japan, funded by Marc Benioff. And what we've seen here is GPT-4b being able to come up with a new version of these transcription factors that's 50 times more effective.
I remember, Alex, when you showed me this, you were like, okay, here comes longevity escape velocity. That's right. I wanted to get out my 'Ray Kurzweil is right' hat for this one. This is what AI-driven longevity escape velocity, or at least the early glimmers of it, looks like: a generalist model seemingly making a breakthrough discovery in longevity. And I would encourage everyone, if you use ChatGPT and you haven't played with the built-in molecular biology and biochemistry support in the form of RDKit, to try it for themselves with any recent model. Go build a molecule. Go develop a new pharmaceutical. This is now table stakes.
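For anyone curious what that RDKit support can actually do, here is a minimal sketch you can run locally in Python with the open-source RDKit package. It just computes a few standard drug-likeness descriptors for aspirin, chosen purely as a familiar example; it does not reproduce the GPT-4b protein work, and the only assumption is that you have RDKit installed.

```python
# Hedged sketch: basic cheminformatics with RDKit, the toolkit Alex mentions.
# Aspirin is used only as a familiar example molecule.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin, written as a SMILES string
mol = Chem.MolFromSmiles(smiles)     # parse into an RDKit molecule object

print("Molecular weight:", round(Descriptors.MolWt(mol), 2))
print("LogP (lipophilicity):", round(Crippen.MolLogP(mol), 2))
print("Hydrogen-bond donors:", Descriptors.NumHDonors(mol))
print("Hydrogen-bond acceptors:", Descriptors.NumHAcceptors(mol))
```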
Well, this is also where Demis has the opportunity to become the most important figure in human history. Because, you know, he's the Nobel Prize-winning Demis, and now he's spending all of his time on the cell simulator and, you know, just solving all disease. But he's also planted within Google, so he has access to an immense amount of compute and the academic freedom to work on it. And with that and the Nobel Prize, he's in a position to change the world more than anybody. I can't wait to talk to him. But the rate of progress here is unbelievable, and his insights will be...
When people say, why are you so excited about, you know, longevity and healthspan extension, it's this. It's the impact of AI. Nothing else is going to move the needle this fast. We saw it on my podcast with Dr. David Sinclair, who's using AI to create these molecular equivalents of what was only possible with gene therapies before.
We almost need a whole episode dedicated just to the intersection of AI and biotech and protein folding and Yamanaka factors and all this stuff.
We'll get Demis on this podcast.
We'll have that conversation for sure.
All right.
Let's close out with robotics here.
I can't avoid the robot revolution.
The robots are coming.
This year at Abundance 360 in March, my plan is to have at least four of these companies,
maybe five there with their robots so we can play with them, see them, meet the CEOs.
All right.
One of the key things that's going on out of China, we're seeing these robot clusters in the United States and principally in China.
We've seen the first humanoid robot games in Beijing.
Think hip-hop, soccer, boxing, and track.
A quick look at the video.
Here's the hip-hop portion.
Then opening here is a soccer portion, a little bit of boxing, and track. All right, in the track section, Unitree, and I had Unitree last year at Abundance, set the 1,500-meter world record, but it's still 91% slower than a human.
All right, quick look at the video here.
But, you know, the real news here is what China is doing, right?
Creating these games and creating these clusters around humanoid robots, they're iterating
the cycle much faster.
It's amazing.
I think that's the key part, the fact that they're making a whole public competition out of it. This feels to me like FIRST Robotics on steroids, in a funny way.
I will make a prediction that these types of robot-versus-robot games will fail, just because we watch the Olympics for the human factor, not for the speed at which somebody ran. So that's my prediction.
Yeah, that's probably a good prediction.
The higher level topic is really important, though.
Benchmarks are critical for inspiring people to keep moving forward.
And this is a form of benchmark.
Yeah, you're right, it'll come and go, but there'll be some other benchmark. And as long as you're inspiring people to show off what they can do and compete, then this thing is going to drive forward very quickly.
Yeah.
All right.
You know, I remember seeing this.
Remember Scott Hassan's company on robotics, Salim?
Yeah, they spent forever trying to get a robot to fold laundry.
Yeah, and here we go.
And this is our friends at Figure AI.
This is Figure 2.
And this is fully automated.
This is not teleoperations, which is a really important point to make.
And here we see it folding laundry.
I still think there's a human being in there.
Well, they said it's not.
But, oh, you mean inside the robot?
Okay.
And I found this fascinating as well.
This is Figure's Helix. And again, here we see a robot company. In fact, every robot company we've seen that had been partnered with a frontier model firm has started building its own AI models. Figure is no exception. Brett Adcock said they're building Helix, which is their sort of AI for navigating the physical universe.
And here we see Figure 2 walking through a very rough terrain in a very human-like fashion.
I mean, this is pretty extraordinary.
Also, you remember when we were talking to Bernt Børnich of 1X Robotics a couple of weeks ago. We asked, how do you debug whether it's the physical side or the mental side that's not working when it makes a mistake? And he said, well, we have a teleoperator try to do it, you know, with their own hands or feet, and that tells us if the robot can do it or not. If the teleoperated robot can do it, then we know the problem is in the brain; if it can't, then we know it's in the robotics, in the gears. But you can see in those videos that that approach is reaching the end of its life cycle, because when it's folding laundry, there's no teleoperator that can be moving the hands remotely at that speed and dexterity.
So he mentioned that when we interviewed him.
Yeah, go ahead, Dave.
Sorry.
No, he mentioned that when we interviewed him, that he was right at the edge now of where that
mode of debugging was going to continue to work.
The robots are getting ahead of anything a remote control operator can do.
And what we just saw in the video there, for those of you listening, is a Figure 2 walking along this junkyard of wood and planks, almost tripping but catching itself and walking elegantly across, which was pretty amazing.
You know, it's been rumored for some time that Apple would get into the robotics business. You know, they almost went into the electric car and autonomous car business. What we're seeing here is Apple expanding into a new set of AI-enabled devices, including a tabletop robot. This is sort of an iPad on a stick that can look around. They're going to go into smart speakers, AI-enhanced home security cameras, hopefully some type of version of Siri that can spell my name correctly. I don't know. And the thing that frustrates
me the most about Apple is I'm texting, you know, two people. Their names are obviously there in the
text line, and they spell both names wrong. Just drives me nuts. Drives me nuts.
I was going to close on this article out of China: China is developing the first humanoid robot with an artificial womb. This is Kaiwa Technology, creating robots that can carry a fetus in a synthetic womb with fluids and nutrients, at a cost of about $14K. Maybe this is part of Elon's prediction of AI and technology reversing the decline in birth rate, you know, if you don't need to carry your own child anymore.
But seriously, there's surrogate pregnancy.
One of the things that's interesting, and I'll have Ben Lamm on stage at the Abundance Summit. Ben is the CEO of Colossal, the company that's de-extincting the woolly mammoth, dire wolves, the dodo, and many others. And in order for him to actually hit his marks, he needs to build artificial wombs. But they're not going to be carried around by a robot.
They'll be stationary.
They'll be physically in a room carefully guarded.
I don't understand this carrying around bit.
I mean, you know, this reminds me of Brad Templeton saying you have all these robot horses
and people are spending a huge amount of money creating robot horses.
Give me a male horse and a female horse and I'll grow you a horse.
Oh, my God.
I don't know, but I am going to read this final closing line and maybe we'll comment and break on this.
This is from a guy named Dr. Singularity.
I love the quote.
He says, in the 1960s, Star Trek envisioned a distant utopia,
placing warp drives, replicators, advanced society, centuries away.
Even through the 80s and 2000s, the future was imagined as a slow, linear march of progress. But reality no longer moves linearly. We're extremely close to having AI agents and researchers matching the brightest human minds. Soon we'll see millions or billions of them. A Star Trek world won't wait for the 2200s. It will arrive by the 2030s. Amazing prediction.
I think it's a great way of closing out this episode. It's exactly where we are right now.
Yeah. I would add that Star Trek is such a strange future. In the Star Trek universe, it's energy-rich: they have warp drives, they're traveling around the galaxy. It's biotech-poor: longevity was outlawed in the late 1990s in the Star Trek universe. And it's AI-poor: everyone's surprised when a new human-level or superhuman AI pops out of the holodeck. Whereas I think the future
and the present, frankly, that we're finding ourselves in is going to be rich and abundant in all
three. Amazing. Dave? Great points. Well, after we did that Kevin Weil interview, I started reading The Future Is Faster Than You Think again. It's on my bookshelf; why not read it again? But, Peter, that book has so many things in it that are coming true right now.
It's unbelievably prescient.
And so I think it's worth everyone grabbing a copy and looking through it.
It's actually much more relevant today than even when you wrote it
because there's so many new converging technologies.
So you have some examples in the book of technologies from, I guess that was 10 years ago. Was it that long ago? No, it was 2019, yeah. Yeah, so six years ago. But so many things have happened since then. You know, we're living in dog years here; it's compressed time. But there are so many things that book predicts that it's worth reading again.
Salim, what are you doing in India, buddy?
I'm going for a bunch of conversations. The Singularity Summit is happening there, and they've asked me to do the opening. So I'm back in my country of birth, running around having a bunch of meetings. I'm in like six cities in five days. The choreography of this is going to be pretty ugly. But, I mean, India... there's just a piece of my soul that's always there.
I love it, love it.
Alex, what's the next week look like for you, buddy?
Oh, my goodness.
Well, given this flood of AI innovations, just immersing myself in it wherever possible,
I advise a number of startups on how best to incorporate AI advances into their work streams.
But really, I think smoothing out the singularity is the name of the game at this point.
Fantastic.
And Dave, are you ending up this summer in any place in particular?
Yeah, same as Alex.
I have two weeks to grind through AI models and finish some things.
And then we're back at Stanford; we're at Google on September 8th and then NVIDIA that night, and then Stanford all day the next day, you know, 2,000 people. The Blitzy launch will be then. We'll do the Blitzy podcast right before that, and I think we'll release it at that same moment, and then 60 back-to-back startups presenting on Stanford's campus.
So I got a little respite to finish a whole bunch of AI work before all hell breaks loose
the first week of September.
All right.
Well, wishing you guys an amazing end of summer here.
What a great time to be alive.
But I hope you enjoyed this podcast. We work to give you, hopefully, an increased IQ bump, excitement about the future, and a positive vision of where things are going. So if you enjoyed this, please let us know. We love your feedback. Tell your friends. Next episode, more crypto, because of everything happening there as well. Yeah, we will. We'll be diving more into crypto for sure. So if you're a crypto fan, let us know who you want us to be
bringing on the podcast for the conversations. We're grateful for you. We do this because we have a lot
of fun. I mean, this is for us the way we keep on top of everything, actually doing the work,
doing the research, discussing it here, and we hope it's beneficial to you. Anyway, thank you,
gentlemen, a real pleasure and honor. Every week, my team and I study the top 10 technology
metatrends that will transform industries over the decade ahead. I cover trends ranging from
humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more.
There's no fluff. Only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this episode.