Moonshots with Peter Diamandis - AI Insiders Breakdown the GPT-5 Update & What it Means for the AI Race w/ Emad Mostaque, Alex Wissner-Gross, Dave Blundin & Salim Ismail | EP #186
Episode Date: August 9, 2025. Download this week's deck: http://diamandis.com/wtf Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Alexander Wissner-Gross is a computer scientist and investor. Emad Mostaque is the founder of Intelligent Internet (https://www.ii.inc) – My companies: Test what's going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding – Connect with Peter: X Instagram Connect with Dave: X: https://x.com/davidblundin LinkedIn: https://www.linkedin.com/in/david-blundin/ Connect with Salim: X: https://x.com/salimismail Join Salim's Workshop to build your ExO https://openexo.com/10x-shift?video=PeterD062625 Connect w/ Emad: https://x.com/emostaque Connect with Alex: linkedin.com/in/alexwg Listen to MOONSHOTS: Apple YouTube – *Recorded on Aug 8th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
A huge amount of expectation on GPT-5.
The anticipation of this launch was up there with the top three product launches of all time.
This is when you see real big things happening, either a productivity boom or the inverse.
You can see you're just reaching that level now across just about everything.
They cut the cost of AI at least in half, if not more, and they caught up to everybody else in coding.
That's a big, big deal.
When we talk about abundance in all of its many facets,
It's taking 700 million people and suddenly giving them access to state-of-the-art AI, I think, becomes transformative.
Just go all in and start turning your business into an AI native business.
This leads to Peter's abundant state, I think.
I think what we're starting to see here is...
Now that's the moonshot, ladies and gentlemen.
Everybody, welcome to another episode of WTF, just happening in technology.
I'm here with my moonshot mates.
Salim and Dave and two special guests, geniuses.
You've met them before on WTF, if you're a listener.
And Dave, would you introduce AWG?
I think that would be important.
Yes, I'd love to introduce Alex.
So, yeah, genius is probably a good word.
Math, physics, and computer science degrees from MIT.
True polymath understands everything, and we're going to talk about a lot of it today.
Ph.D. from Harvard, in addition to that in physics.
and reads literally every document, every research document, every breakthrough in AI and many other fields.
So always incredibly informative to have him.
Welcome, Alex.
And Salim, would you do the honors with Emad?
Sure.
So Emad is one of those folks where every time he says something, you have to take twice the time to parse what he just said and make sense of it.
More intelligence per word than most people you've ever met.
He's the founder of Stability AI, the company behind Stable Diffusion,
a former hedge fund quant with a brain the size of several planets,
and he's building, I think, a systemic layer for the next version of the internet
with crypto built in, which I think is really powerful.
So welcome, Emad.
So first of all, I literally just landed from a week in Portugal,
so my head is still spinning after a 12-hour flight.
But, hey, what could possibly go wrong?
Today, we are speaking about two or three,
special events this past week. In particular, the announcement and launch of GPT-5 and the
continuation of the AI wars. But before we get there, Salim, I think you've recently gone
through surgery, is that right? Yeah, I had shoulder arthroscopy, where they drill
three holes in your shoulder and do kind of an oil, lube, and filter on it. I had a bone spur
impinging on the tendon, et cetera. What's incredible with the advances in technology today, I was
in and out in like two hours. It's like unbelievable.
that they go that deep into your body
and then you're just out again.
It's amazing.
Don't forget the exercises.
I was going to ask you to play tennis this weekend.
I guess we're not playing, huh?
No, not for a little bit.
And my right hand is, yeah, we'll leave that for another time.
And this is a special episode
because I'm filming in the new Moonshots podcast studio.
So check out the background.
Hope you like it.
It's a real background.
We'll be doing a lot of episodes from here in the future.
Emad, you're in London, and it's midnight or something like that.
It's just time for the brain to get going.
You're amazing, buddy.
At that time, maybe his brain slows down a little bit so we can understand everything.
That's my quote.
We'll find out.
And Alex, you're in Boston?
Where are you today?
That's right.
Cambridge, Massachusetts.
Yeah, center of the known universe, at least for us, MIT alums.
Certainly the center of Cambridge.
All right, let's dive into this episode.
I'm going to start with this note.
This is Sam Altman.
Two days ago, he made the announcement of GPT-5, and in particular, this is the quote that stuck out.
GPT-5 is a significant step on our way to AGI, which also means it isn't AGI yet.
So I have a question for you guys.
We also saw the day before this announcement.
Sam put up this tweet showing the Death Star.
And now, I have to ask, I don't get it.
It's like, when he put this up, it's like, what is he trying to do?
Get views or get people really worried?
You know, a lot of this launch was pretty uncoordinated, but Kevin Weil also posted something with Elmo with the fire behind him, saying, you know, it's coming.
So there was a lot of sort of pre-event tweeting and buzzing or exing and buzzing about something huge is coming.
And I don't know why a Death Star, you know, but a lot of people talked about it.
already.
It feels not a great look.
You know, I mean, you're trying to get people accepting and happy about the future and
you show that imagery.
It's kind of like, okay.
Well, one of the Google people posted the Millennium Falcon, and he said, no, we're
meant to be the rebels.
So he said this is meant to be the point of view of the rebels.
Oh, okay.
There you go.
That makes a lot more sense.
And everyone's like, nah, that's not the case.
That's way too subtle.
So here's my question.
We go around the horn.
you know, a huge amount of expectation on GPT-5.
And I would love to ask each of you, what do you think of it?
What do you think of the announcement?
It was a little over an hour.
Let's start with Emad.
Emad, what do you think?
Yeah, so it was kind of in line with what I expected,
because when you're doing an AI for like 700 million people,
it's very difficult to do like a mega-AI.
And so we'd been guided that it would be a multi-routing type of thing, from mini up to pro,
and that's kind of what we saw: it's basically o4 but with one front layer.
So I thought the announcement was okay. It's just the expectations are so high now,
particularly when you build it up, that you just have to keep on beating every time by more
than a little bit. I think we all thought it would beat, but the question was how much,
and it was like, yeah, okay, wasn't it? Alex, how
about you, buddy? I tend to think the real net impact of a launch like this tends to be more
about lifting hundreds of millions of users up from a model like GPT-4o to a frontier model.
And I think the changing economics of a radical cost reduction of frontier models, these are going
to be, I think, the long-term impacts. To the extent there were expectations that there would be
an ontologically shocking moment when there would be new qualitative capabilities that would come
online. I tend to think that ultimately lifting hundreds of millions of new users to frontier level
and getting them to interact at scale with a frontier model over the long term, that's going to be
just as impactful and just as economically relevant as introducing some jaw-dropping new
qualitative capability. Yeah, I hear you, and that is true. I mean, that's what Sam's
mission was: deliver a single user interface that enabled you to do quick answers
or do long, detailed research and coding.
You know, Salim, do you remember you and I were together up in the Bay Area with Dave
in Boston when Google I.O. came out.
And there were so many holy shit, holy shit, holy shit moments when Google I.O. was showing
their capabilities.
What did you think about this one?
I have the same reaction as Emad, which was: it's not 10x better than what was there before.
I think I'll concur with Alex, though, in terms of I think their real power will come in the cost drop,
which will make it much more accessible to a lot of people.
And I think downstream in a couple of months, as people start building applications and GPTs and special agents on top of this,
then we're going to see some really big surprises, which I'm looking forward to.
Let's close it out with you, Dave.
Dave, you've been thinking about this.
and watching all the telltale signs for a while.
Were you excited, impressed, depressed?
Well, I mean, you called it right, Peter,
compared to Google I.O., which had incredible showbiz value
and a ton of video, a ton of computer-generated video.
For whatever reason, OpenAI decided to go folksy,
make it look like a high school presentation, you know,
and feel startupy.
And I don't know if they'll stick with that.
You know, Steve Jobs did the best showbiz in
the history of the world. And the anticipation of this launch was up there with the top three
product launches of all time. It really was.
Yeah. Yeah, so you have an opportunity to really blow people's minds. Either they didn't
have time to really work on it, or they don't have that staff built up yet, or they just
don't care maybe. I don't think that's the case. But they really did not put a huge amount
of effort into this event, and it came through. And you'll see some data that makes it
pretty obvious that it came through. Let's look at that. This is the review on Polymarket,
and what we see here is answering the question, which company has the best AI model by the end of
August? And coming into this, you know, OpenAI was riding high, with Google, you know, coming in
second and Anthropic in third. And then we see there the timestamp
for when this release went live. Any commentary, Dave?
Yeah, well, I mean, it's great that Polymarket exists because of the feeling that I
think we got. We all watched it live here in the office. Actually,
Alex suggested it. It was phenomenally cool. But watching the ticker in real time,
you know, there's a dip when they did their first coding demo and then a huge plunge when they
did their second coding demo. And literally the betting markets went from an 80% chance they'll have
the best AI in the world, not just at the end of this month, but also at the end of the
year, to completely inverting and saying, no, Google's going to have the best AI at the end
of this month and at the end of this year. And I think they actually showed some incredible
capabilities and rolled them out at a ridiculously great price point. But the market reaction
to it is, wow, I think Google's going to eat your lunch. So, yeah, you can't deny it. It's right
there. People are putting money behind this prediction. Every week, my team and I
study the top 10 technology metatrends that will transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy,
longevity, and more. There's no fluff. Only the most important stuff that matters, that impacts
our lives, our companies, and our careers. If you want me to share these metatrends with you,
I write a newsletter twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else, this
report is for you. Readers include founders and CEOs from the world's most disruptive companies
and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want
to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for
free, go to diamandis.com slash metatrends to gain access to the trends 10 years before anyone else.
All right, now back to this episode. You know, one thing I just want to point out for folks listening,
And I think it's true that when you have this huge expectation for GPT-5's launch, or any of these new models, you know, when Grok 4 came out, at the end of the day, I sort of felt a sense of underwhelm.
And I think it's not because it's not impressive.
It's because we become so desensitized to extraordinary progress, right?
I think there's something else here that I'm really enjoying, which is
that, given the closeness of the different models, it's likely that we won't have one
runaway success. And that means you have a very competitive market, which is just good for consumers
overall for time being. And all the models will do incrementally better over time. So I'm excited
by the fact that there's not one breakout. Sure. But I do think it's important for folks,
let's talk about the desensitization for one second, because I think folks who are listening to
this have to realize that our expectations are getting so high, and every time there's a new
rollout that has additional capability, it's like, oh, eh, that's not so impressive. But, you know,
compared to what existed a year ago, two years ago, it's extraordinary. Emad, do you agree with that?
What are your thoughts? Yeah, it's hedonic adaptation, right? Like when you get into a Waymo for the
first time, it's great. Second time, yeah. And now it's just a whole experience around this.
I think that part of it was just the communication, though, because as you noted, like, 4-0 was a good model, but we see people getting wireheaded and hallucinations and all sorts.
Lifting that up to a better base level should have been the communication with practical examples, but they didn't really show that.
Again, I think the communication was a bit off in showing that lifting of the floor.
The other thing that I think is that for the first time, I think what we saw was that there's a big gap between what the consumer gets and what the lab has.
We actually saw a few OpenAI people say that. Like, before this came out, we had Horizon
and Zenith as the two models on LM Arena, where you compare secret models against
each other. They chose to release Horizon, but Zenith was better. And OpenAI have admitted they
have better models internally as well, even before the next cluster build-out. So they're pulling their
punches? Yeah, because it makes more sense as you head towards AGI to actually not release the best
model to everyone, particularly because it's more expensive to inference. GPT-4.5 was so expensive,
and that was their frontier model at the time, but it was too expensive for anyone to use.
For normal tasks, for the 700 million people tasks. For the genius tasks, you don't want to give
someone else that AI. You just use it for yourself to out-compete everyone else. So I think we'll
see that bifurcation of decent models for everyone, for everyday tasks for 700 million, and then you
make $700 million using the other model, because it's the logical thing to do.
Yeah, one of the reasons I was so disappointed by the lack of really compelling demos and
showmanship yesterday is because I'm constantly trying to make more people aware of how much change
is coming and how insanely important and imminent it is and how much they need to rethink what
they're doing tomorrow. And I was hoping to get some ammunition that I can actually just forward
and use. And they managed to make one of the biggest, you know, turning points in history,
the history of humanity, make it kind of boring. I mean, maybe it was deliberate because
they had the charts that were completely wrong as well. Like maybe it's just all deliberate
in that, look, you don't have to worry too much about this, right? No, that is a theory. That is a
viable theory, actually, because, you know, all the accelerationists, including me and Alex,
we know that a lot of this is being used internally for self-improvement, a lot of the compute, a lot of the capabilities, and it could be that it was intentional.
Don't scare the world.
Don't scare the world.
Well, I mean, like, yesterday when GPT-5 came out, so GPT-5 is a router model, so your query goes in and then it routes it to thinking, or mini, or nano, depending on something.
They said, well, it was actually broken for like 24 hours.
And you're like, really?
You released it, and then you just left it broken in that the routing was off.
But routing being broken is also a great way to actually gather data to do the model improvement.
And they discussed this flywheel of data improvement.
So again, I think we see this bifurcation now, where most of the announcements by OpenAI are likely to actually be very consumer-driven, very floor-raising.
And I think we'll see less and less of the big massive stuff, apart from the outputs, like we've had a breakthrough in something or other.
but not generalizing that.
I'm still waiting to see what an AGI or ASI demo would look like or feel like.
And I don't know, but we're going to find out.
Don't get me started.
All right.
All right.
Let's turn for a bit to benchmarks.
And when I was having the conversation before this podcast began about, you know, should we talk about the benchmarks?
We'll get old.
Alex, what was your comment about the benchmarks?
Riveting.
Some of these benchmarks, Peter, are absolutely riveting.
We are so spoiled.
We're lifting hundreds of millions of people to the frontier level of these models.
We're collapsing costs.
The economics are collapsing by an order of magnitude.
And here we are complaining, oh,
it didn't demonstrate any ontologically shocking new capabilities.
How spoiled we all are.
We have gotten spoiled.
And let's jump into the riveting benchmark.
So, Alex, since you've got the floor,
let's begin here, debuts number one in LM Arena.
So first off, what is LM Arena?
So LM Arena, and I think we discussed this
in the last episode a bit,
is a crowdsourced benchmark wherein the community,
the internet at large is able to interact
with competing frontier models in a variety of ways.
The ranking that we're seeing here is focused
on text-based interaction, so conversations.
There are other scores that deal with web development
and other modalities.
And what we're seeing here is GPT-5 leapfrogging
over the rest of the leaderboard to number one
in text-based interaction.
There's another parallel benchmark with web development
where you see an even larger margin, a larger difference
in Elo scores between GPT-5 and
the next strongest competitor.
And this is remarkable.
Again, we're so spoiled to see these leapfrogging capabilities every three months or so,
and it could get even faster.
But this is going to be transformative in terms of the everyday conversations that hundreds of
millions of people have, in software development, and in a number of other domains.
So can I ask you, Alex, how do you reconcile this chart with the polymarket chart?
Does that mean Google will again leapfrog
this before the end of the year?
I would say to the extent that that
polymarket is
indicating a prediction, a rational
prediction about the market, and I
think that was set for end of August.
I would interpret that market movement
as a prediction that Google will launch
a new frontier model by the end of this month.
Every expectation. And we're going to see
in a little bit how much Google has done.
They've been extraordinary under
Demis Hassabis's leadership.
Here's the next
one, and I'm going to turn to you, Emad: ARC-AGI 1, and we'll see ARC-AGI 2 in a moment.
The leaderboard here, do you want to give us a dissection of what we're seeing?
Yeah, this is kind of very, very hard tasks that are meant to indicate progress towards
AGI. So Grok kind of led the way there. As you can see, which one is that?
It's Grok 4 Thinking, right?
And so this is kind of a Pareto frontier of solving these very, very complicated
tasks versus the cost.
And so o3 was actually really, really good, but it's way out there in that it's far more
expensive.
GPT-5 has different levels: the high, the medium, and the low.
And it doesn't quite beat Grok, which is also the case for other benchmarks like Humanity's
Last Exam.
And so I think this was part of it, in that we see better performance from GPT-5 for everyday
stuff, and it just ekes out a lead on some of these or is up there.
I don't think they wanted to blow everyone's socks off.
because remember, they also have models that scored gold medals at the IMO, you know, and Gemini
recently had Deep Think, a tuned version of their model, that scored a gold medal.
So I think, apart from xAI, who are trying to do the best they can on all these benchmarks and
release the best they can, we're starting to see some punches being pulled at the top of these
benchmarks on the AGI side, on the super genius side. So I think we'll see a bit more clustering up
there. Alex, you're in agreement? I think there are two ways to look at this chart. So one, as
Emad said, is which point in the scatter plot, which is plotting cost versus score, is at the
top of the chart. That's one way to look at it. The other way is what is the cost frontier? What's
the Pareto-optimal frontier where you get the best score or the best performance at a given cost?
And there, if you look just a bit to the left, you see the GPT-5 Mini series,
and to the lower left of that, the GPT-5 Nano series, which have defined a new frontier for cost
performance.
So I think the buried headline here is the hyper deflation that we're seeing in the cost
of intelligence.
That ultimately, I think, ends up being even more transformative than just narrow capabilities
at ultra-high cost.
You could run the thought experiment:
what would happen if we could build a superintelligent computer so expensive that human
civilization can't afford it?
Compare that with what happens when intelligence is too cheap to meter, so that everyone can
afford it.
I think that's the central discussion.
I think you'll see Google and OpenAI compete on that left-hand curve effectively.
And by the way, we're going to make all these charts available to our subscribers.
You just go to diamandis.com slash WTF, and you can get all the charts downloaded for you.
We'll be doing this for all of our WTF episodes going
forward. Just to make sure you have this so you can share it with your friends and family.
All right, here we see the ARC-AGI 2 leaderboard. Emad, you want to lead us off on this one?
Yeah, this is just a more complicated version of ARC-AGI 1, because they were worried that o3 might
saturate it. So again, I think, as Alex said, you see the same thing with GPT-5 on the left-hand
side, kind of keeping that edge. It's just a more complicated version of the previous one.
All right. Moving along.
So here's one that, I think we discussed this in one of our previous episodes, I think it
actually was with Alex: you know, how we benchmark these frontier models is going to start
to saturate and understanding how these frontier models actually become economically useful,
how they're able to solve grand challenges. So here we go. This is a look at economically
important tasks. Emad, do you want to take a shot?
Yeah, I think that, you know, this is the year where you break through that line effectively, or you reach that level of performance. There's another chart, I don't think we have it, from METR, which shows the length of tasks these models can do, and GPT-5 is right at the top of that.
Like, they can do tasks in law, logistics, and sales really well, for a long time, without supervision and with lower hallucinations, which is the other big news that they had around this.
And so they actually become genuinely useful.
A little while ago, they released ChatGPT Agent, which you just set off
and it will look things up on the internet
and do all sorts of stuff.
It wasn't quite good enough,
but soon it will be.
And once that happens,
this is when you see real big things happening,
either a productivity boom or the inverse,
you know, people getting laid off
and we're not sure which of those two futures
is going to happen.
But again, you can see you're just reaching that level now
across just about everything.
Who wants to plug in on this one?
Salim, do you have a...
Well, this is where I think,
where I mentioned.
So there's two or three really big things here, right?
To Alex's point, the cost drops of running these models mean we can do a ton.
And to Emad's point, they're kind of taking out the hallucinations and cleaning it up.
Even though the top line is not amazing,
it is a lot more rock solid.
Therefore, the kind of agents and applications that build off these things will be very,
very solid and stable going forward.
And I think that's where we'll see some amazing use cases coming out,
where we apply them in industry.
And how should our listeners be thinking about this?
Should they think about it from a point of view of fear or opportunity?
Well, if you're running a business, this is a time to really build, dig in, right?
Before, you didn't know quite what you were going to get.
What you're going to get now going forward is pretty reliable, pretty solid. Go all in.
If you haven't, you should be doing that anyway.
Just go all in and start turning your business into an AI native business.
Yeah, the problem I run into all the time is as the AI is getting better and
better and better, the benchmarks get harder to interpret. And also in the early days, you know,
it was all just pre-training. Oh, this is 100 billion parameters. This is 500 billion. This is
a trillion. It's getting bigger. As it gets bigger, it gets smarter. And then the benchmarks are nice
and simple. Now, you know, the post-training became very important, but now the chain of thought
reasoning is dominating. It's just such a huge factor. It makes it much harder to track what's
working and what's not working. And the danger there is that people get paralyzed, when they should
be getting motivated, just like Salim just said. And that's a challenge, actually. And it's,
you know, a benchmark like this is vague. And it's a little bit difficult for people
to take this benchmark and then translate it into, should I start an AI law firm? Should I,
you know, should I use it to work on, you know, discovering fundamental physical
properties? Is it going to be good at material science? So it's getting harder to make those
predictions. And of course, the answer to all those is yes. You should.
I like to think that in jobs we teach people to be like machines, so obviously the machines are going to do it better.
Like, if you look at the HealthBench scores, for example, on hallucinations, and hallucinations in general,
I think something like six to 12 percent of all diagnoses are incorrect.
AI has just kind of dropped below that level now. I think it's close to 30 percent if you go to a
primary care doctor. Yeah, it kind of varies, but it's a lot. AI now makes fewer errors than humans,
I think just as of the last month.
And again, that's going to be the most errors it ever makes.
Yeah.
And we'll go into this a little bit later, but, you know,
there was an interesting study that said, you know,
physicians by themselves do like 80%.
Physicians with AI models together do like 90%.
But AI models by themselves were doing like, you know, 93%,
which means that the human pulls back and enters lots of bias into the answers.
You know, when I was chatting with doctors about who's going to do my surgery, I came across a guy and I said,
how many of these shoulder arthroscopies you've done? And he said about 10,000. I said, okay,
that's like you're more like a robot than anybody. We'll go with you. We'll go with you because
I want that consistency. By the way, that is the number one question you should ask a surgeon when you're
interviewing them is how many times have you done this surgery this morning, right? Because you're
basically training the neural net of the surgeon by seeing every possible case. And of course, we're going to
end up with robotic surgeons that can see in every part of the spectrum and have had not just
10,000, but millions of cases.
You just don't want to be the 50th one that morning.
That's all.
All right.
Here's our next benchmark.
GPT-5 sets a new record in Frontier Math.
Alex, I'm going to you on this one, buddy.
Yeah, I think this is perhaps the most exciting benchmark to come out of GPT-5 in the past 24 to 48 hours.
So what's exciting here, if you look at the performance of GPT-5 high, is in the lower right-hand corner: Frontier Math Tier 4.
So Frontier Math Tier 4 is a benchmark that measures the ability of AIs to solve problems that would take professional mathematicians, sometimes weeks, to solve, but nonetheless problems for which there are known answers.
We're starting to see increments on Frontier Math Tier 4 that, if you extrapolate them, suggest, and I've gone through this exercise, and it's a running discussion between me and the folks at Epoch AI, if you project this forward, you find, by the law of straight lines, that by the end of this year frontier AI is starting to reach 15 to 20 percent of hard math problems
being solvable by AI. Project that forward another year, so by the end of 2026, you get to 35 to 40%
of hard math being solved. Project it forward to the end of 2027, and you get to 70%. So what I think
we're staring at is a slow-motion solution to math. And that's one of the reasons why I think
it's just riveting: all math, or at least math as it's currently understood in the summer of 2025.
Isn't that amazing? And I completely agree. And it does play into Emad's theory that maybe they slow-played it intentionally. Because if you were to ask me, hey, what happened yesterday? They're crushing this benchmark relative to any other model. They cut the cost of AI at least in half, if not more. And they caught up to everybody else in coding. Like, if they had just said that in like two minutes, that would have been, you know, the epic Death Star moment. Yeah, just do that.
Wait, can I drill into that just for a second?
Alex, when you say it can solve math, right?
Can you give a specific example of what that looks like?
Because I struggle with that, even though, you know, I've done it.
Better than 800 on your SATs, I guess.
What's a specific problem or class of area that you could say that it's done something interesting?
Yeah, no.
So you can look at the Epoch AI website for Frontier Math Tier 4 to see lists of example
problems that have been published. These are hard problems in number theory, in
analysis, in algebraic geometry, that would require a professional mathematician weeks to solve
that are being solved over the course of a short benchmark run by GPT-5. You also asked the question,
what does this look like in practice? Say the dog catches the car and we actually get AI that
achieves superhuman performance in math. I think it's a profoundly
different world.
It is, and it's hard for, not everybody's a mathematician, not everybody's an engineer,
but the way a lot of things get designed and built and created in the world is you run into
problems and you immediately look up in these massive books and tables, has anyone ever
solved this before?
And so if the AI is continually solving and archiving all of these mathematical capabilities
and just making them available, then the engineering algorithms can just find it and use it,
plug it in and go. And it's the same in coding. You know, huge libraries of solved problems,
solved modules that can be assembled to create things very, very quickly. I want to close on
Emad here before we move on, just because we have a lot to cover still. Emad, closing thoughts on this
one? Yeah, I mean, it's kind of an improvement over o4-mini. Again, we had the IMO gold medal
from OpenAI, whereby they had a verifier on the other side of their model. And they said,
just by extending the RL of GPT-5, they got a gold medal.
So this model can get a gold medal, so it can go even higher if you push it.
From the last few days of doing some pretty advanced math,
I can say that GPT-5 high is probably the best math model out there.
But the really crazy thing is I think it's getting to the point now
whereby the solutions to math won't be complicated.
They'll be really elegant.
And that's how we typically see breakthroughs.
So people are thinking giant supercomputers, lots of work.
but most of the advances that we've had in science and math have actually been just very elegant.
And if you can do a million different things at once, then you can maybe find some of that
elegant theory under all of this, and that's what's going to be a big leap. And if more and more people
can do that now, because the medium and the high are actually at the same level, which is crazy,
then you might have a lot more mathematicians, and the humans and the AI can figure out what that
elegant theory is. And now it's time for probably the most important segment, the health tech segment
of Moonshots. It was about a decade ago when a dear friend of mine, who was in incredible health,
goes to the hospital with a pain inside, only to find out he's got stage four cancer. A few years
later, fraternity brother of mine dies in his sleep. He was young. He dies in his sleep from a heart
attack. And that's when I realized people truly have no idea what's going on inside their bodies
unless they look. We're all optimists about our health. But did you know that 70% of heart
attacks happen without anything preceding them, no shortness of breath, no pain? Most cancers are
detected way too late at stage three or stage four. And the sad fact is that we have all the
technology we need to detect and prevent these diseases at scale. And that's when I knew I had to
do something. I figured everyone should have access to this tech to find and prevent disease
before it's too late. So I partnered with a group of incredible entrepreneurs and friends,
Tony Robbins, Bob Hariri, Bill Kapp, to pull together
all the key tech and the best physicians and scientists to start something called Fountain Life.
Annually, I go to Fountain Life to get a digital upload, 200 gigabytes of data about my body,
head to toe, collected in four hours, to understand what's going on.
All that data is fed to our AIs and our medical team.
Every year, it's a non-negotiable for me.
I have nothing to ask of you other than, please, become the CEO of your own health,
understand how good your body is at hiding disease,
and have an understanding of what's going on.
You can go to FountainLife.com to talk to one of my team members there.
That's FountainLife.com.
All right, I'm going to dive into a bit of video here.
This is labeled Let the Vibe Coding Begin.
GPT5 is clearly our best coding model yet.
It will help everyone, even those who do not know how to write code,
to bring their ideas to life.
So I will try to show you that.
I will actually try to build something that I would find useful,
which is building a web app for my partner to learn how to speak French so that she can better communicate with my family.
So here I have a prompt. I will execute it. It asks exactly what I just said.
Please build a web app for my partner to learn French. So I can simply press run code. So I'll do that and cross my fingers.
Whoa.
Oh, nice.
So we have a nice, a nice website. The name is Midnight in Paris. Oh, I love it.
We also see a few tabs: flashcards, quizzes, and mouse and cheese, exactly like I asked for. I will play
that. So this says Lucia. All right, I'm going to pause it there. Commentary: Dave, what do you think
about this? So this is exactly when Polymarket plummeted. So I'm so glad you captured
that clip, because the audience is looking at this, and they're acting like, wow, didn't this blow your mind?
There's only two types of people in the world: people who don't give a crap about this, or who already do it,
but they've been doing exactly this, probably with Claude Opus or Claude Sonnet 4, on the Max plan.
They've been doing this for like four months.
And so it completely missed the mark, even though it was the best-presented part of the presentation, purely because it didn't show
off any new capability or new abilities.
But the ability to do this in ChatGPT, in other words, a single model that allows you to do everything.
Yeah, I mean, if I'm an investor in the upcoming round, this is really big news, because
Anthropic, you know, generally claims to be the leader in coding.
Most of the people who do heavy-duty coding lean on Anthropic, and they completely caught up
in this release.
And that's a very, very big deal, because not only are you good at everything else,
but you're actually as good as Anthropic in their wheelhouse.
Yeah, the question coming out was: is this an Anthropic killer, right?
Yeah.
Here's my question about this.
So you generate this web page, web app, right?
But let's say I'm a language startup, and I want to launch that actual product.
There's a huge amount of backend stuff I have to do to make it systems integrated,
integrated with Stripe, et cetera, et cetera.
And we're finding that that's where all of the work is going.
And therefore, this just threw up a front end that looks good, like a front-end prototype.
Is it actually doing that much behind the scenes, or is there still a lot of work left?
And that's the question I have for the folks on here.
Here's my question for you guys.
How should someone listening to this who hasn't played with chat GPT-5, let's call it that,
play with their own vibe coding on this?
What's their first step? What do they do? How do they play?
I would encourage everyone who has ChatGPT-5 Thinking access in particular to create a game.
I think this is one of the simplest exercises.
You've always wanted to create a long-tail application, a game, or an interactive app of some sort,
and you don't have coding experience.
Go and ask ChatGPT-5 Thinking to implement a new app for you, a new game,
something, and let it rip.
And do it right in the canvas.
There's a canvas button right down on the little nav bar search bar at the bottom.
So click the Canvas button.
Do it right there locally.
It's much more convenient.
They've added a lot of capability inside the canvas.
So you can just build an entire game for yourself right there inside the canvas. You know,
just go to chatgpt.com and do it right there.
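For listeners who want a concrete sense of what "just build a game" can produce, here is a minimal hand-written sketch (purely illustrative, not actual GPT-5 output) of the kind of single-file guessing game such a prompt might generate, with an auto-player added so it runs without typed input:

```python
# A minimal single-file game of the sort a "build me a game" prompt could yield.
# Illustrative only -- ChatGPT would generate its own, likely fancier, version.

def feedback(secret: int, guess: int) -> str:
    """Classic higher/lower feedback for a number-guessing game."""
    if guess < secret:
        return "higher"
    if guess > secret:
        return "lower"
    return "correct"

def binary_search_play(secret: int, lo: int = 1, hi: int = 100) -> int:
    """Auto-play the game with binary search; returns the number of guesses used."""
    guesses = 0
    while lo <= hi:
        guess = (lo + hi) // 2
        guesses += 1
        result = feedback(secret, guess)
        if result == "correct":
            return guesses
        if result == "higher":
            lo = guess + 1
        else:
            hi = guess - 1
    return guesses

# For any secret in 1..100, binary search needs at most 7 guesses.
print(binary_search_play(42))
```

Swapping the auto-player for an `input()` loop turns it into the interactive version you would actually play in the canvas.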
Amazing.
Yeah.
Yeah.
I think the performance isn't quite there yet
versus Replit, Lovable, or Bolt, which do everything Salim said, the Stripe and every other
integration. But again, these things all verticalize very quickly. Let's move on here. And
we saw, you know, the co-founder of Cursor come on stage and spend time with Greg Brockman,
the OpenAI president. Dave, what do you think about this? How important was this?
Well, incredibly important. So it was not just a little time. It was a huge amount of stage time
in one of the biggest, you know, live streams in history.
So, so it was very important.
And, of course, what happened is, you know, OpenAI was going to buy Windsurf and essentially
attack Cursor with an incredibly powerful competing product that's virtually identical in
functionality.
Actually, of the people here in the office, about half use Cursor and about half use Windsurf; they look
virtually identical.
And so that deal fell apart.
Microsoft torpedoed it
because of the intellectual property rights that Microsoft would have had.
So they torpedoed the deal.
And here we are just a couple weeks later.
And OpenAI is now saying, you know what?
We're going to work very closely with Cursor.
We're going to give them a lot of stage time.
And I think what we're starting to see here is the alignment
between the coding companies and their LLM partners.
Because previously, everything connected to everything,
so any LLM is available through any coding platform.
I think going forward, it's very likely that Cursor works closely with OpenAI.
You know, Windsurf is now part of Google, or sort of part of Google, half in, half out.
And then, of course, Microsoft wants VS code and they want to build their own thing.
And so you're going to see this vertical alignment.
And also, already, people all over Twitter, or X, are saying, hey, when I use it through its kind of native platform, through the canvas, it works much, much better than if I try and
access it through something like Lovable or Replit.
And so everyone's speculating that they're doing kind of what Microsoft always used to do.
They're hampering the people that aren't playing by their rules in various subtle kinds of ways.
And there's no way to prove it, but it's certainly all over the internet.
Yeah, I mean, I think if you look at this, OpenAI and Anthropic both have $3 billion in API revenue.
1.4 billion of Anthropic's API revenue, so about half, is Cursor and Microsoft Copilot.
And so they priced GPT-5 about 40% lower than Sonnet.
So they're coming after Anthropic, basically, and they will undercut them on price,
and now the performance is roughly equivalent.
They're just basically trying to kill Anthropic's revenue.
All right, the AI wars continue.
Thank you. Here's another one. This is an important part of the story from the GPT-5 announcement.
And we're going to hear Sam Altman speaking about AI saving lives.
One of the top use cases of ChatGPT is health. People use it a lot. You've all seen examples of people getting day-to-day care advice or sometimes even a life-saving diagnosis.
GPT-5 is the best model ever for health. And it empowers you to be more in control
of your healthcare journey. We really prioritized improving this for GPT-5, and it scores higher than
any previous model on HealthBench, an evaluation that we created with 250 physicians on real-world
tasks. So I think a lot about this, right? And the AI models are at this point, I think,
better than most physicians, but they're only as good as the data you feed them. And that's the
biggest challenge. Can you get access to the data that truly tells your story?
All right. So, comments. So Sam, Sam did a very brief introduction to kick off the event
yesterday, and then he did a much longer segment with a woman who was a cancer survivor who had
really done her own self-diagnosis and completely changed the course of her own treatment by
talking to ChatGPT and getting very, very good advice from ChatGPT. And I think Sam chose to do
that segment himself, largely I think because, one, it's a very emotional human segment.
I thought it was pretty well done, too. But also because it's going to prevent the regulators from
ever saying slow down or stop. Like, if you're going to save lives that would otherwise imminently
have ended, you cannot slow down. You have to keep moving. And I think that's very
important as a mission for OpenAI to keep the throttle. You know, there are two drivers to keep the
throttle going. One is the incredible health care benefits. The other is the threat from China.
And so both of those are, you know, right front and center. Salim?
I think this felt more like PR to me than anything else because I think you could do this
with many of the models rather than this. Okay, maybe incrementally better than the others.
I think integrated broadly into somebody's health care regime is where we'll see the real value of
something like this rather than this immediate thing. But I do take Dave's point. I think that's
exactly right. I think they're pushing hard to kind of show they're trying to add a lot of value.
Yeah. Let me throw in something on the personal front here. So one thing that I do, and I've talked
about it openly on this, is I'm chairman of an organization called Fountain Life. And when
folks come in for what we call an upload, we fully digitize them, right? We get 200 gigabytes of data
about you, full body. Hold on, didn't you just do yours? I did it, I did it, uh, I did it a few
weeks ago, just got my results back. I had 200 gigabytes of data. So for me, what was important was
I reduced my non-calcified soft plaque, which is the dangerous plaque that can give you a heart
attack in the middle of the night, down by 20 percent, the lowest it's ever been. I got my liver fat from
six percent down to one percent, which is fantastic. But what happens is, in my Fountain
Life app, we're running this on Anthropic's models right now, but maybe we'll go to GPT-5. We're running
much of the other programming on Gemini. But here's the point. I can query all my data.
You know, the Fountain Life system pulls in all my wearables, right? So Apple or my continuous glucose
monitor. And I can ask a question. I just asked a question there the other day. I said, listen,
And there's a point at which my deep sleep increased significantly.
What was I taking?
What was the supplement or medicine that increased it?
And being able to explore stuff like that is amazing.
So the best AI, you know, healthcare models in the world are great,
but they're directly a function of do you have enough deep data about your physiology over time to understand what's going on?
I mean, ultimately that's critical.
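The kind of query Peter describes, when did my deep sleep jump and what changed, boils down to a simple before/after comparison once the data is in one place. Here's a tiny sketch with entirely made-up numbers (no real Fountain Life or wearable data):

```python
# Hypothetical wearable data: date -> minutes of deep sleep per night.
deep_sleep_minutes = {
    "2025-07-01": 48, "2025-07-02": 51, "2025-07-03": 49,
    "2025-07-04": 72, "2025-07-05": 75, "2025-07-06": 70,
}
supplement_start = "2025-07-04"  # date a new supplement was first logged

# ISO date strings compare correctly as plain strings, so we can split on the date.
before = [v for d, v in deep_sleep_minutes.items() if d < supplement_start]
after = [v for d, v in deep_sleep_minutes.items() if d >= supplement_start]

def avg(xs):
    """Simple mean of a list of numbers."""
    return sum(xs) / len(xs)

print(f"before: {avg(before):.1f} min, after: {avg(after):.1f} min")
```

A real system would correlate many signals at once, but the core of "what was I taking when my sleep improved?" is exactly this kind of join between a time series and an intervention log.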
And Emad, you've been thinking a lot about this.
I think my liver fat went from one to six percent these past few weeks.
You're heading towards foie gras, honey.
Well, I mean, Salim, have you come through Fountain Life yet?
I haven't.
I need to find the time to do it.
You're the godfather to my kids.
You got to come through.
I have done the heart test where they check whether you have soft plaque,
and Lily was like, with your diet, you must be on the verge of a thing.
Go get it done.
And we got it done.
And they said, and they said,
You're whistle clean, we got nothing.
That's great.
So the challenge is...
So then you had a steak and...
I went to town.
One quick point, right?
Your body is incredibly good at hiding disease, and you don't feel a cancer until stage
three or stage four.
70% of heart attacks have no significant precedent.
So you have to look.
You need to get the data.
I have a schedule.
I mean, first I want to get this shoulder sorted out.
Now I can go and do other stuff.
All right.
Well, for one, for the Fountain for sure.
And Emad, I think you came through, didn't you?
I haven't been through yet, no.
I nearly got there.
I will.
I will.
We will have to get healthy and we all need data.
I think this will be really interesting, though,
because the models themselves are getting good and, again, better than any doctor.
Like, they mentioned the health benchmark that they have.
Doctors scored 20%, and the latest models they have scored 60%, 70% on that.
So they're better than any doctor.
But the really exciting thing is, like, we built a healthcare model
called II-Medical, which we released open source, and we've got a much better version coming,
which outperformed every single model except for GPT-5 and o3. And it works on a Raspberry Pi. It works
on anything. So by next year, I think we'll be at the point where the key thing is you get
the right data, especially like Fountain Life has so much, and they just have AI just going
constantly, because what you want is for it to figure out stuff proactively as you feed
it the data. And now we're seeing these models being able to just detect breast cancers five
years in advance and other things like that. When you talk to radiologists, they're like,
we don't have any software that's actually checking all the radiological scans over time.
Wouldn't it be nice if that happened? And now you have the capability of doing that, which I think
will save so many lives before the AI even makes a diagnosis. And I love having a very deep bench of
data for me over the course of eight years. And, you know, right now I'm going from annual
uploads to quarterly uploads. And ultimately, it
is all about the data.
Well, I think that if I just say one quick thing,
like right now, everyone on this
should be trying to get as much data as possible,
because the models are coming.
And the more data you give these models
about yourself, the longer you will live,
and the better you will live.
Before now, we didn't have the right models.
Now we have the right models.
And again, they'll be available via OpenAI
and then also open source.
So for all the folks building stuff around this,
here's my desired end state.
I want to get it to a point where you're about to drink a coffee and it says,
hold on, wait 10 minutes.
I'm still metabolizing the donut.
Give it time so I can optimize your digestion.
I think that's when things get really interesting.
Well, I want the AI to say, warning, pull up and don't eat the donut.
Separate problem.
All right, let's continue on here.
Here you see another demo that came out of the GPT-5 announcement,
an executive assistant for all of us.
And, you know, I use Outlook right now from Microsoft, and this got me thinking about moving to Google Calendar.
So let's play the demo.
And we're giving ChatGPT access to Gmail and Google Calendar.
Let me show you how I've been using it.
I've already given ChatGPT access to my Gmail and Google Calendar, so it just works.
And it's easy here.
But if you hadn't, ChatGPT would be asking you to connect right now.
Let's see what ChatGPT is doing.
Okay.
That was pretty quick.
Okay, so ChatGPT has pulled in my schedule tomorrow.
And, oh, without even asking, ChatGPT found time for my run.
I don't think I was invited to the launch celebration.
We'll get you on there.
We'll get you on there.
ChatGPT has found an email that I didn't respond to two days ago.
I will get on that right after this.
And even pulled together a packing list for my red eye tomorrow night
based on what it knows I like to have with me.
It's been amazing to see that as GPT-5 is getting more capable,
ChatGPT is getting more useful and more personal.
Right. I found that impressive.
I have an amazing chief of staff, Esther, that many of you know,
and she's incredible, but I think she could use this and I could use this.
Thoughts?
I'm really coming around to Emad's theory that they deliberately undersold it,
because this is cool-ish, but you know,
finding an email that you didn't open two days ago.
You don't need AI for that.
But we are using this stuff for business planning inside.
I'm the chairman of a couple of companies that have hundreds of employees.
And knowing what everybody's doing and why they're doing it is immensely challenging.
And we're having a field day with this in very high-level strategic planning,
in understanding performance and understanding everything going on.
It's an incredible unlock at the executive management level.
Sure. And again, for me as a watcher, you know, kind of frustrating to see it, you know,
planning out her run when I know it can actually plan entire business units. But still,
the point is, it is very, very capable. So I don't know, I was frustrated, but I get it.
All right. You know, the opportunity to now eliminate what Erik Brynjolfsson calls
white-collar drudgery, right? There's a lot of cruft that we do just to get through. I think
this solves for a lot of that, and I think this will amplify the capability of a lot of
people. I think chiefs of staff rise up a whole level because you could use this effectively
and do it a lot more. Oh my God. Well, like the standard behavior in corporate environments is
individual people desperately want to help move the company forward. They want to contribute. They
want to have maximum impact. And they want to know that the executive team knows they're doing
that. And it goes horribly wrong when either they don't know exactly what they should be doing
or they do something amazing and nobody notices.
And this completely unlocks and solves those problems.
So when Donna and Nick ping me and go,
what dates are you available for the next WTF episode,
and you have 14 that intersect,
it'll figure it out with your calendar,
and I'll be able to get help with that.
You will.
In fact,
it'll get scheduled without your permission.
Well, we're kind of dancing monkeys anyway, right?
You would just be told, okay, be here at this time.
But interesting, right?
I do what's on my calendar.
It's very funny. When I'd gotten to know Larry Page and Sergey Brin very well,
Larry was on my board at X Prize in early days.
And there was a point at which they said, by the way, we fired our executive assistants.
And I said, what do you mean you fired your EAs?
Well, we learned that if we don't have an EA, no one can put anything on our calendars
without our permission.
And then like a decade later, I was scheduling a podcast with Elon, and I said,
Elon, who should I schedule with?
And he goes, me.
I said, don't you have an EA?
He goes, nope.
So maybe that's the mistake we're making.
All right, let's go on to the next topic here.
This just reads AI revenue models.
So GPT-5 is available now for free, including its most advanced models.
At the same time, Gemini's advanced models are $249 a month.
Grok Heavy is at $300 a month.
How do you think about the pricing situation
here? ChatGPT is at 700 million weekly active users on their way to a billion, probably within the
next six months. Dave, thoughts? Yeah, no, they really slashed the price. It shows up for the
end user, but also in the APIs, which I think we have on the next slide. We do. I'm going to
head to that slide here. Yeah, here you go. Yeah. This is what Emad was talking about earlier.
They just absolutely, the cost per intelligence came way, way down. I think you said 40%. I had it
about half of where we were a week ago.
That's a big, big deal.
And it's, you know, it's more than you would expect on the curve.
And again, they didn't really sell it yesterday in any big way,
but it is a big step on that Pareto frontier.
This is a huge competitive move.
GPT-4.5 was $75 on input and $150 on output.
Wow.
As compared to a buck 25 on
input and $10 on output per million tokens.
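To make those per-million-token rates concrete, here's a quick back-of-the-envelope sketch. Only the prices come from the discussion above; the workload sizes are made up for illustration:

```python
# Compare API cost under the quoted per-million-token prices:
# GPT-4.5 at $75 in / $150 out, GPT-5 at $1.25 in / $10 out.

def api_cost(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of a job at the given per-million-token rates."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Hypothetical workload: 2M input tokens, 0.5M output tokens.
old = api_cost(2_000_000, 500_000, 75.00, 150.00)  # GPT-4.5 pricing
new = api_cost(2_000_000, 500_000, 1.25, 10.00)    # GPT-5 pricing

print(f"GPT-4.5: ${old:.2f}  GPT-5: ${new:.2f}  ratio: {old/new:.0f}x")
# prints: GPT-4.5: $225.00  GPT-5: $7.50  ratio: 30x
```

On this particular mix, the same job drops from $225 to $7.50, which is why Alex frames it as roughly an order of magnitude on the cost frontier.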
I would say GPT-4.5 was never quite on the cost frontier anyway.
What I see in this with this almost an order of magnitude reduction in the cost frontier
is unlocking new use cases.
And those I would expect to be qualitatively different.
So for example, favorite use case: if tokens for LLMs are suddenly an order of magnitude
cheaper, that means that, for example, scientific discoveries or mathematical discoveries
that require searching lots of possible completions of sentences, of theorems, etc., then you can do
10 times more searching, and that makes a qualitative difference.
You can brute force it in that sense.
Yes.
Exactly, exactly.
I've got to give a shout-out: there are probably thousands and thousands of engineers
out there that listen to this podcast. And if you tried writing code through any of these
models, any of these really great models, either the Anthropic ones or the new GPT-5 or Gemini
2.5 Pro, if you tried a month ago, you have to try again today. It's just night-and-day different
in terms of being able to build something without even looking at the code, in terms of getting
exactly what you asked for. I'm using mostly Gemini 2.5 Pro Deep Think to do the planning, but then
I'm putting it into either GPT-5 or Claude Sonnet 4 on Max to do the coding, and it's working like
you would not believe, and night-and-day better than just a month ago.
How many of the frontier models do you have open at a time, and are you trying the same thing
on each of them, Dave?
Yeah, I keep them all open, actually.
Of course.
But, you know, look, it's 250 bucks a month.
It's not going to kill you, and you can turn it off any time.
But I keep them all open, and I don't usually try Grok for code.
I do everything else.
I'm not sure why.
Maybe I should.
Alex or Emad, how are they getting these cost reductions?
I think a lot of it, this is based on public information shared by the Frontier Labs.
A lot of it comes from optimizing the inference stack.
So moving to faster Blackwell GPUs, I think, is one factor.
Low-level optimizations in the tech stack
at inference time, distillation of smaller models with fewer parameters based on higher-quality
data, algorithmic innovations, architectural innovations. These all compound. Some of them are 50%
improvements. Some of them are 2 to 3x improvements. But collectively, as is now the lore in the
industry, we're seeing order of magnitude per year cost reductions.
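As a rough illustration of how individually modest gains stack up to an order of magnitude, you can just multiply the factors together. The specific numbers below are assumptions matching the ranges Alex mentions, not published figures:

```python
# Illustrative compounding of independent inference-stack improvements.
# Each factor is hypothetical, chosen to match the "50%" and "2-3x" ranges above.
hardware_speedup = 1.5   # e.g., moving to faster GPUs
distillation = 2.5       # smaller distilled model, similar quality
algorithmic = 3.0        # algorithmic / architectural gains

# Independent multiplicative improvements compound.
total = hardware_speedup * distillation * algorithmic
print(f"combined cost reduction: ~{total:.1f}x")
```

Three unremarkable-looking factors already compound to more than 11x, which is how labs reach order-of-magnitude-per-year cost declines without any single breakthrough.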
But are they, how much money are they losing on this per transaction?
It's difficult to know from the outside, but I would also say that the matter is somewhat
confounded by the enormous capital expenditures that are going into this space.
So it's not necessarily even a reasonable question to ask how much is being lost.
You have to sort of factor out the capital expenditures.
As we've discussed previously, we're in the process of tiling the Earth's surface with data
centers.
This is an enormous capital expenditure.
So it's a little bit difficult to separate out the amortization
of CapEx from the OpEx of just day-to-day inference and electricity.
I can definitely tell you they're definitely not incinerating money.
There's a lot of FUD on the Internet about them:
hell, they're incinerating money,
they're losing huge amounts.
They're not.
They're operating at about break-even or better.
And in the context of what Alex just said,
the order of magnitude improvement in cost per compute that just came from the GB200s
from NVIDIA, that would put this way over the top.
I was talking to Gemini earlier today about, you know, what do you think they spent training GPT-5?
And it came back with this insane number of billions of dollars on H100s.
I said, but I don't think they used H100s.
And it said, oh, okay, well, if they used GB200s, it would be more like 60 million.
Wow, okay.
But, yeah, it's about a factor of 10 reduction in the cost of the compute, and they're passing some of that through.
And, you know, the GB200s are just coming off the line and starting to get into production, so it'll be a little while.
So when I saw the pricing, I thought they were doing this for competitive advantage and taking a huge loss.
What I'm hearing you guys saying is that's not the case.
They're really running at maybe break-even, but passing on massive savings to the consumer.
Salim, this is what hyper-deflation looks like.
And it's an interesting thought experiment to ask, assuming this is sustainable,
and I have no reason to think that it isn't sustainable.
What does hyper-deflation, right now at inference time for frontier models, look like once it starts to spread to the rest of the economy?
And that leads to Peter's abundance state, I think.
You know, I just want to say something for those listening.
I feel smarter during these episodes, getting a chance to speak with Dave and Salim and Alex and Emad.
And I hope you do too.
I mean, that's the reason I do this.
We put about 20 hours of deep research in per week trying to find the most relevant content to share with you.
And then, how do we make it actually understandable, connect the dots, and deliver sort of a
distilled CliffsNotes version to help you stay ahead.
Selfishly, I do this because it's a blast.
Salim and Dave, what about you?
Well, I think the curation that goes into this, right,
where you're looking across the spectrum and then picking out what are the most relevant things,
right?
Dave talks about the actionability of it, but I think the fact that we can curate the very
important bits for our viewers, I think is the most important part and the most fun.
And we get to see that first.
Yeah.
I always have my kids in the back of my mind when we're doing these podcasts, because they're going to live their entire lives in the post-AGI world. And one of my kids was talking to one of the guys here in the office and said, all your dad ever talks about is AI. Yeah, but the whole time you're growing up, did I ever talk about AI once? I mean, I never mentioned it, until suddenly it's going to change your life. I mean, you must get on top of this right now. You must have a plan. It's for their own good. And so I'm always thinking that in the back of my mind:
how many listeners out there need this information in order to remap what they're doing.
And to be inspired, right?
I mean, our goal here is to inspire everyone to be in the thick of this and to find your own moonshots to understand.
So if we just connect the dots on one thing, right?
Yeah.
The fact that GPT-5 is now free is crazy.
Having built into it the best doctor in the world, one that can diagnose anything, on a much better basis, instantly for you,
is a profound uplift.
And this, I think, Alex, the point you were making earlier.
Exactly.
When we talk about abundance in all of its many facets,
taking 700 million people and suddenly giving them access to state-of-the-art AI,
I think becomes transformative.
It'd be interesting.
I'm looking forward to it.
Here's the thing I want to watch.
How does OpenAI's user growth go from here,
given that they've made it free?
I thought you were going to go in a different direction.
I agree that that's interesting.
but another is, in some sense, this is the greatest A/B experiment that macroeconomists should be all over,
because prior to yesterday, most of the world didn't have access to frontier AI,
and starting yesterday, a fraction of the world, call it, you know, a tenth of the world,
does. What does the before and after look like? Do we see dramatically different outcomes in different dimensions?
You know, Sam didn't have a huge part in the event yesterday,
but he did a lot of postgame interviews, which I watched.
And in one of them, one of the interviewers said, you know, imagine college and education for me in, say, 2035. And he said, 2035. Like, college in 2035? He said, if it exists, I mean, we need to coin a term for it. Maybe this is like an intelligence shock that's hitting the world.
Yeah. Oh, I hope so. I mean, just to the college thing: my son is 13, and I'm hoping the university system implodes in the next few
years. Your boy and mine. Um, and by the way, I'll make a
callout. I was talking to my son, saying, I'm going to go do WTF with my moonshot mates, and he goes,
have you reached a million subscribers yet? And I go, why? Because, well, then you'll actually have a great
podcast. So if those of you haven't subscribed yet, please help us get this to a million
subscribers. Subscribe, share it with your friends. It's not about view count, I think it's more about
quality. And I think if a smaller set of people gets much more value out of it, I think that's
better. All right. I can tell you, when I'm walking around MIT, a huge fraction of people I
bump into have actually watched the pod. So we got a quality audience for sure. Yeah, I think
the closing-out is, you know, there's a cap on human intelligence, but there isn't on artificial
intelligence. So everyone will have abundant intelligence. And you can expect that next year a zero drops off
here, and then the year after another zero drops off, and we're seeing that. Insane. It's very
crazy. That's insane. So I want to hit a couple of things. OpenAI is eyeing a half-a-trillion-
dollar valuation, which is pretty extraordinary. It's one of the highest-valued private
companies, along with ByteDance, SpaceX, and Ant Group. I wouldn't say much more here other than, you
know, how will they go public? When will they go public? And will this be the largest IPO ever?
You know, we've seen OpenAI's GPT-5, but they also unveiled their open models. I don't want to go into
this in too much detail, but Alex, do you want to lead us on this one? Actually, Emad, you're the
open-model champion around the globe. Wait, wait, wait, hold on one second.
Can we just go back to the previous slide just for a second?
Okay.
So, OpenAI made $10 billion.
It was making about $10 billion a year.
Microsoft is making about $300 billion a year in revenues.
And so OpenAI is valued at half of Microsoft.
So I just want everybody to digest that ratio.
$3 trillion.
No, no, I mean revenues.
It's $10 billion versus $300 billion in revenues.
Okay.
So, this is very big, it's very lofty, but that feels overpriced to me.
Anyway.
Well, Sam's projection is $100 to $250 billion in revenue in, what is it, two years from today?
Which I don't doubt that that's entirely possible.
There's only two versions of the world, actually.
There's a version of the world where open AI easily hits that target, and there's a version
of the world where Google destroys them and wipes them off the face of the earth.
Those are the two possible outcomes.
I mean, talk about capitalism at its finest, right?
I guess.
Look, SpaceX is $13 billion in revenue.
And what's its valuation?
Like almost a trillion or something like that?
Half a trillion?
210 right now?
Oh, that's right there.
Okay.
But when they own Mars, it'll go up a little bit.
There we go.
So, you know, there was an interesting note that Elon
pushed out on X, which was,
when is OpenAI going to buy Microsoft?
Okay.
Fascinating.
All right.
You might continue on with the open models here.
I think that, I mean, it's pretty significant.
A lot of people worried about Chinese open models going everywhere.
Open AI have released a really solid model.
It's a bit weird.
It has to be said.
But the main thing is, this model costs $4 million to train.
And it's better than any model that we had this time last year.
That's extraordinary. Next year, it will cost $400,000 to train. How did you know it was $4 million
to train? Did they release that? They said 2 million H100 hours. And the 20 billion parameter model
that runs on your laptop was 10 times cheaper. That's 2 million hours at $2 an hour. And that's like
from scratch or was there a distilling from a big model? No, it's from scratch. It's 80 trillion
tokens, 80 trillion words. So when we back it out. Dave, to your point,
I think the footnote there is where do those tokens come from?
And I think it's reasonable to assume in the style of, say, Microsoft Phi models that these are
tokens that were generated through some synthetic process from a much larger, much more expensive
in terms of fixed costs model, in which case, whether you call the total pre-training cost,
just the marginal cost for training on the back of a much larger parent model or teacher model,
I think that's the key distinction.
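As a quick sanity check on the training-cost figures quoted above, here is a back-of-envelope sketch in Python. The $2-per-hour H100 rental rate and the 10x figure for the smaller model are the numbers mentioned in the conversation, not official disclosures:

```python
# Back-of-envelope for the open model's quoted training cost.
h100_hours = 2_000_000         # reported compute: 2 million H100-hours
usd_per_h100_hour = 2.0        # rental rate assumed in the discussion

full_cost = h100_hours * usd_per_h100_hour
laptop_model_cost = full_cost / 10  # the 20B model was said to be ~10x cheaper

print(f"larger model: ${full_cost:,.0f}")        # larger model: $4,000,000
print(f"20B model:    ${laptop_model_cost:,.0f}")
```

Which recovers the roughly $4 million figure for the larger model, and about $400,000 for the laptop-class one.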
We should do a whole podcast on just that topic
because in the broader sense, AI that helps create the next AI
is an incredible force multiplier for just humanity.
And it's a good example because when you just distill the training data
and create some synthetic data using the prior model,
you knock 90 to 99% off the cost of creating the next iteration.
It's just crazy economics, how it feeds back.
Like no technology previously, other than maybe robots building
robots someday. But there's nothing that feeds back like this. We need a new term, you know,
that supersedes Moore's law here because the speed of this is extraordinary. And we're witnessing
the evolution of something that I think we're going to look back at. I can tell Alex is about
to say something brilliant. I know that look. I'll point out a couple of things here. One,
we do have this already. It's called education. Distillation is what humans use to take
years and years that researchers and teachers spend accumulating and then convey this in a concise
lesson to a student. So we as humans do distillation as well. It's very efficient, very
economical. And so it's perhaps not that surprising to see distillation give us radical economic
efficiencies in these open weight models. That's first point. Second point, just to go back,
Peter, to your earlier comment, having these supply-chain-safe, if you want to call them that,
open weight models is transformative for so many applications that are highly regulated,
that are very sensitive to supply chain risks in finance, in health care, in government.
Now we have American trained models that can be embedded in all sorts of mission-critical
internet disconnected systems, and that is going to be transformative.
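Since distillation keeps coming up, here is a minimal sketch of the core mechanism: a student model's softened output distribution is matched to a teacher's via KL divergence. The logits and temperature below are made-up toy values for one vocabulary position, not taken from any real model:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, optionally softened."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how much the student distribution q misses the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a 4-token vocabulary at a single position.
teacher_logits = [4.0, 2.0, 1.0, 0.5]
student_logits = [3.5, 2.2, 0.8, 0.4]

# A temperature > 1 softens the teacher's distribution so the student also
# learns the relative ranking of unlikely tokens ("dark knowledge").
T = 2.0
teacher_p = softmax(teacher_logits, T)
student_p = softmax(student_logits, T)

loss = kl_divergence(teacher_p, student_p)
print(f"distillation loss at this position: {loss:.4f}")
```

In real training this loss is minimized over billions of teacher-generated tokens, which is why the student can be trained for a small fraction of the teacher's cost.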
Yeah, just one final thing on this. This model only has five billion active parameters. And so it runs
faster than you can read, even on a MacBook. And I think the big thing is everyone's talking
about billion dollar training runs. I actually don't think that's true at all. I think you
will have a GPT5 level model by two years max that will cost under a million dollars to train
end to end. And nobody's got that in their numbers. All right. They'll run on anything.
The expensive part was the journey to get there.
I completely agree that at some point, we're going to discover, and I've made this point previously,
the perfect architecture, the perfect sort of microkernel version of a foundation model
that's relatively small parameter count, that's fully multimodal.
And if we knew what that were today, we could radically collapse training costs.
I think what this actually shows is we don't even need that.
We need to have a trillion good tokens.
And if we've got a trillion good tokens, then you can train a frontier model for less than a million dollars next year.
And so what do you do with that? Do you then embed that into all sorts of devices and humanoid robots and moving cars and anything? Yeah.
Everything, everywhere. Everything becomes intelligent with a built-in model.
I think that's where this one goes. Yeah, so your question exactly defines the future
entrepreneur. What am I going to do with all that? If I could, for a million, which is seed money,
build a GPT-5 level model,
what else can I build?
And this is going to be the age of abundance,
where it's limited by people's imagination.
If you can imagine something genuinely useful that people want,
the cost of creating it is near zero.
Well, you didn't have to create the model.
One entity needs to create that model open source once,
and the economies of scope mean it can be used anywhere.
Maybe I can stop arranging the room for the damn Roomba.
That would be a great starting point.
Let's not go there, Salim.
Hey, everybody, there's not a week that goes by when I don't get the strangest of compliments.
Someone will stop me and say, Peter, you've got such nice skin.
Honestly, I never thought, especially at age 64, I'd be hearing anyone say that I have great
skin.
And honestly, I can't take any credit.
I use an amazing product called One Skin OS01 twice a day, every day.
The company was built by four brilliant PhD women who have identified a 10 amino acid peptide
that effectively reverses the age of your skin.
I love it, and like I say, I use it every day twice a day.
There you have it. That's my secret.
You go to OneSkin.co and write Peter at checkout for a discount on the same product I use.
Okay, now back to the episode.
All right, we have a back end of this WTF episode, which is to look at all the other
companies in the AI wars that continue: Grok, Gemini, Meta, Nvidia, Apple.
I'm going to try and move us through this.
There is some important data I need to share with everybody. You know, this is what we're watching and what we're keeping
in tune with. Hopefully you are too. Let's jump in. The first is, again, a quick look at Humanity's
Last Exam, the HLE benchmarks. Alex. Yeah, I think what we're really seeing here is two models,
Grok, with extensions and derivatives of various sorts, and GPT-5 and its derivatives, leading the pack.
I think if you pull back that headline, what you're actually seeing here is the power of tool use and the power of parallelism, with GPT-5 leaning heavily on search and other tools, and Grok 4 Heavy leaning on the power of having multiple parallel agents collaborating. And zooming out to a 10,000-meter perspective,
I think what this points to is what we were just discussing, a world in which it's not just the core foundation.
model, but arrangements, not even necessarily scaffolding, but the ability to integrate both
these microkernel-type foundation models with each other in teams of agents, and the ability
to integrate them with powerful tools in their environment. That's going to turn out to be
one of the next big shocks in terms of how we're able to challenge the frontier for HLE and other
hard benchmarks. By the way, people listening, our subscribers,
if you get a second and you want to do something fun, just get on to GPT-5 or Grok or wherever
and just ask it to give you 10 example questions from Humanity's Last Exam.
I mean, I'll just share a couple of them here that I asked for.
And so here's one in the category of classics.
Here's a representation of a Roman inscription originally found on a tombstone. Provide a translation for the Palmyrene
script. A transliteration of the text is the following, and then you have to translate that.
Here's another one. What is the rarest noble gas on Earth as a percentage of all terrestrial matter
in 2002? Okay. All right. Here's one. I'm going to ask our geniuses here in physics.
A point mass is attached to a spring. The spring constant is k. It oscillates
on a frictionless surface.
If its amplitude of motion is doubled,
what happens to its total mechanical energy?
A, it doubles, B, it quadruples,
C, it triples, or D, it remains the same?
I'm not going to ask you to answer that.
Hopefully quadruples, I would expect.
Yeah, I would say. Good one.
Yes, correct?
Okay, here we go.
I just had a shiver flashing back to my physics courses.
Please, for God's sake, let's not do that.
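For the record, the answer follows from the total mechanical energy of a mass-spring oscillator, E = ½kA²: doubling the amplitude quadruples the energy. A quick numerical check, with arbitrary illustrative values for k and A:

```python
def spring_energy(k, amplitude):
    """Total mechanical energy of a mass-spring oscillator: E = 1/2 * k * A^2."""
    return 0.5 * k * amplitude ** 2

k = 50.0   # spring constant in N/m (arbitrary)
A = 0.1    # amplitude in meters (arbitrary)

e1 = spring_energy(k, A)
e2 = spring_energy(k, 2 * A)
print(round(e2 / e1, 6))  # 4.0 -> answer B, it quadruples
```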
All right, last one.
Consider a balanced binary search tree, like a red-black tree, with n nodes. What is the
worst-case time complexity for searching a given key? And the answer is O(log n). Okay.
Order log n. Oh, order log n. Yeah, there you go. I'd like to point out on this one,
the open source models OpenAI just released scored 19 percent and 17 percent, and the
17 percent is the 20 billion parameter model that will run on anyone's laptop.
Crazy.
Amazing.
Bring that into your college exams, everybody.
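Since that last exam question came up, here is a minimal sketch of why balanced BST search is O(log n): each comparison discards half the remaining tree, so a full balanced tree of 1,023 nodes is searched in at most 10 steps. The tree-building helper is an illustrative construction, not part of any particular library:

```python
import math

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def build_balanced(keys):
    """Build a height-balanced BST from a sorted list of keys."""
    if not keys:
        return None
    mid = len(keys) // 2
    node = Node(keys[mid])
    node.left = build_balanced(keys[:mid])
    node.right = build_balanced(keys[mid + 1:])
    return node

def search(root, key):
    """Return (found, comparisons): each step halves the remaining subtree."""
    steps, node = 0, root
    while node is not None:
        steps += 1
        if key == node.key:
            return True, steps
        node = node.left if key < node.key else node.right
    return False, steps

n = 1023  # a full balanced tree of height 10
root = build_balanced(list(range(n)))
worst = max(search(root, k)[1] for k in range(n))
print(worst, math.ceil(math.log2(n + 1)))  # both are 10: worst case is O(log n)
```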
All right.
We had Elon pipe up.
He said, great work.
So here was the tweet he's referring to:
Very proud of us at xAI after seeing the GPT-5 release.
With a much smaller team, we are ahead in many ways.
Grok 4, world's first unified model,
and crushing GPT-5 in benchmarks like ARC-AGI.
So we're going to have this continuous, I don't know if it's an ego battle, a financial battle, whatever it might be,
where everybody's just trying to one-up each other.
And, of course, his next tweet was, Grok 5 will be out before the end of the year,
and it will be crushingly good.
So, comments on Grok?
He just tweeted, he just tweeted saying Grok 4.20 before the end of this month, number one.
I think, Peter, one of the take-homes here is that whoever is defining the benchmarks wins.
It's like, you know, you create the evals and humanity wins.
It's amazing how starved the research community is for compelling new evals, as discussed previously.
And so to the extent that we can create more evals,
as I think your community has also chimed in historically with some wonderful ideas for abundance-oriented benchmarks or evals, the frontier labs will, I think, race to achieve them.
All right.
Most of the Polymarket predictions have Google winning by the end of the year.
And for good reason, what we've seen is extraordinary.
And here's the title of the slide: Demis Hassabis, in a word,
relentless. In only two weeks, they've shipped or achieved, and I'll read the list here: Gemini
3, and we'll see an example of that. Gemini 2.5 Pro, free for university students. Alpha Earth,
amazing, we'll see a demo of that. Aeneas, deciphering ancient text. Gemini won the gold medal in the
International Math Olympiad. Storybook, Kaggle Game Arena, Jules, NotebookLM Video
Overviews, and Gemma, which passed 200 million downloads, and this is Google's lightweight, open source, open weights model.
I mean, really impressive work.
Yeah, well, so Demis has 6,000 people in AI R&D.
OpenAI is up to a little under 2,000 now, but these guys at Google have been working on it for years.
So they've got about a factor of 10 more person-hours put into it so
far, and they're all operating on things in parallel. So they're now unleashing it all.
It was all just kind of sitting there in the lab until Open AI put the competitive pressure
on them. Now something has shifted in a big way at Google on a couple of fronts. One,
they're unleashing all the things they've been working on. The other is they proactively reached
out to a bunch of our companies, including Blitzie, which is, you know, Blitzie is a particularly
hot company. But I don't know how they found it, probably through all their big data. But the Gemini
people came over to our office proactively and said, we need to meet with you. So they're really
reaching out trying to get the businesses to move over to using Gemini. And that was also really
evident in the GPT5 rollout yesterday is the call to companies saying we're here, we're open,
we want to partner with you. And we're cutting the price point to make it easier to do. And,
you know, we're open for business. So I think that's a new thing. I hadn't seen anyone
proactively reach out to our companies until this week.
Amazing. All right. Let's take a look at a few of these examples coming out of Google.
This is Google's Genie 3. It's World Models for Gaming. Let's Play the video.
I was blown away by this. I found this probably one of the most impressive things I've seen in the last week.
What you're seeing are not games or videos. They're worlds.
Each one of these is an interactive environment generated by Genie 3, a new frontier for world models.
With Genie 3, you can use natural language to generate a variety of worlds and explore them interactively,
all with a single text prompt.
Let's see what it's like to spend some time in a world.
Genie 3 has real-time interactivity, meaning that the environment reacts to your movements and actions.
You're not walking through a pre-built simulation.
Everything you see here is being generated live as you explore it.
And Genie 3 has world memory.
That's why environments like this one stay consistent.
World memory even carries over into your actions.
For example, when I'm painting on this wall, my actions persist.
I can look away and generate other parts of the world, but when I look back, the actions
I took are still there.
And Genie 3 enables promptable events, so you can add new events into your world on the fly,
something like another person, or transportation, or even something totally unexpected.
You can use Genie to explore real-world physics and movement.
all kinds of unique environments. You can generate worlds with distinct geographies, historical settings,
fictional environments, and even other characters. We're excited to see how Genie 3 can be used
for next generation gaming and entertainment. And that's just the beginning. Worlds could help
with embodied research, training robotic agents before working in the real world, or simulating dangerous
scenarios for disaster preparedness and emergency training. All right, I'm going to pause it
there, but holy cow. I mean, first of all, the simulation theory
just took a huge jump forward. Boom. I mean, you know, this blew my mind. I actually
showed this to a friend who spent the last two, three years building metaverses, you know,
and his jaw literally dropped. He said, I don't even know where to start.
The fact that you can have a responsive environment that tailors itself depending on where you look,
and all that's generated on the fly in real time,
he couldn't cope.
I've just never seen his mind broken like that.
Yeah, it's a diffusion transformer,
similar to all of the video models like Veo and others.
And again, we're kind of seeing the breakthroughs coming in this,
especially because Google has such an amazing data set.
I think that you'll see
a video model like this as well from xAI.
This is what Elon's going to be putting those 10,000 Blackwells on his video model towards.
But the fact that it's real time now gives you a real idea
about that. Similarly, we've seen real-time video generation from Wan and others now. So every
pixel will be generated in a few years, which is going to be cool. And again, it makes you wonder what
will be made. And what about Meta? You know, Zuck has wanted the Metaverse forever, and of course,
this is delivering the Metaverse. On the one hand, on the other hand, this is billions of dollars
of CAPEX that's been allocated to video gaming or to Metaverse software that suddenly is in danger
of having been rendered irrelevant
if this can all be just the output
of a single model.
A thousand voices in the video gaming industry
just cried out in anguish
if this is all just a prompt away.
That's the response I got.
Yeah. I mean, with Veo 3
potentially crushing Hollywood
and this potentially crushing the video game industry
or reinventing it, accelerating it,
making it possible for anybody
to create magically compelling
video games.
This is the Star Trek
Holodeck. This is the Matrix. This is the key node, potentially, in the tech tree of our civilization that
unlocks general purpose robotics and general purpose autonomous vehicles. Yeah, because they can train
inside of that. That's right. Extraordinary. Absolutely extraordinary. All right, here's another
extraordinary gift from Google, and this is Google's AlphaEarth, maps in real time. This turns
massive satellite data into unified global maps, views of land and coastal areas at 10-by-10-meter
precision, tracking deforestation, crop health, water use, and urban growth. Take a quick look
at this video. This is how our new AI model, Alpha Earth Foundations, interprets the planet.
Different colors in this map show how different parts of the world are similar in their surface
conditions. So similar colors mean similar things, like two deserts, two forests. The model
understands the unique patterns that distinguish any ecosystem, so it's able to use those
learned patterns and quickly find matching patterns in other places in the world.
This allows it to tell the difference between, say, a sandy dune on a beach and the deserts
of the Sahara.
It used to take months to years for scientists to accurately map the world.
But with our data set, they can do it in minutes.
Much like Google Search has indexed the web.
With Alpha Earth Foundations, we've indexed the surface of the planet.
And we're making this available through Google Earth Engine for the
years 2017 onward. All I can say is, just in time. Thoughts? What's interesting here, I think, is this is what's
called an encoder-only model. It takes 10-meter-by-10-meter patches of Earth's surface and converts them to
high-dimensional vector representations. Encoder-only models were very popular in natural
language processing prior to the advent of so-called decoder-only models like the GPT series.
I think the elephant in the room here is once we have encoder-only models that cover the Earth's surface, we're about to get decoder-only models. And what that'll enable in practice is, right now with these encoder models, you can convert arbitrary land masses or ocean masses to vectors and do a bit of regression on them and maybe a bit of light prediction. With decoder-only models, you'll be able to take a few square kilometers of land,
extrapolate out visually, what does the future of this land look like? And you'll be able to do
searches of interventions. If I put a parking lot here or I put a hospital here, what's going to
happen in all likelihood to development in the area? You'll be able to do urban planning as a matter
of a tree search, in the same way in which AlphaGo or AlphaZero or MuZero are able to play chess.
That's, I think, going to be the real economic unlock.
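The "similar colors mean similar things" idea maps directly onto embeddings: each patch becomes a vector, and similarity between places is just similarity between vectors. A toy sketch with made-up 4-dimensional vectors (real AlphaEarth embeddings are much higher-dimensional, and these numbers are purely illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings for three 10m x 10m patches of Earth's surface.
sahara_patch = [0.9, 0.1, 0.0, 0.2]
beach_dune   = [0.8, 0.2, 0.1, 0.3]
rainforest   = [0.1, 0.9, 0.8, 0.1]

print(cosine_similarity(sahara_patch, beach_dune))   # high: similar surfaces
print(cosine_similarity(sahara_patch, rainforest))   # low: very different
```

Finding "matching patterns in other places in the world" is then just a nearest-neighbor search over these vectors.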
Amazing.
See, this is the application usage where I think all these things start to really shine,
where you can take all that capability and apply to something like this.
It'll completely transform how we look at the world.
My mind is kind of blown with this one.
Yeah.
Yeah, I'm really glad you said that, Alex, because I really did not get the implications of this until you explained it just now.
I do appreciate that.
You know, on the prior slide, too, I had a meeting earlier today with Satya and Mahajan, the CEO of Dateline here.
You know, all these companies are thinking,
what's my moat, what's my moat, what's defensible,
what's going to give me recurring revenue for the next 20 years?
And like, it's just not a way to think anymore.
If you look at the rate of change,
it's all about small, nimble teams and great team dynamics.
Overall, there'll be far, far more company success than ever before.
But you can't expect to sit still.
You have to reinvent yourself all the time.
Amen.
I mean, I think agility and passion-driven building and understanding the root-cause problems that you want to go solve, those are the fuel for the future.
It's not setting up regulatory, you know, blockage.
All right.
Next up is a video of Zuck on Meta Superintelligence Labs.
Superintelligence for everyone.
Let's take a look.
I want to talk about our new effort, Meta Superintelligence Labs, and our vision to build personal
superintelligence for everyone.
I think an even more meaningful impact in our lives is going to come from everyone having
a personal superintelligence that helps you achieve your goals, create what you want to see in the
world, be a better friend, and grow to become the person that you aspire to be.
This vision is different from others in the industry who want to direct AI at automating
all of the valuable work.
This is going to be a new era in some ways, but in others it's just a continuation of historical
trends. About 200 years ago, 90% of people were farmers growing food to survive. Today, fewer than 2%
grow all of our food. Advances in technology have freed much of humanity to focus less on subsistence
and more on the pursuits that we choose. And at each step along the way, most people have decided to
use their newfound productivity to spend more time on... All right, so he's out pitching hard. He wants
to get to superintelligence first. What could possibly go wrong?
He is. The poaching continues, and I love this.
So Zuck contacted over 100 OpenAI employees, and 90% of them turned him down.
Why?
Because they think Open AI is closer to AGI than META.
That's got to sting.
Emad, what do you think about that?
I think he has a very different definition than Sam Altman does.
I think one of the reports was that he was talking about how
AI could make Reels a better product.
So I think it's a very different view to the type of ASI that we're talking about.
They should just call it meta-intelligence.
But, you know, I think it shows, like, you see their billion-dollar offers.
People still don't move.
I think everyone feels that we're getting close to that AGI point.
And you want to be where it's going to happen, you know, because what even is money after
that?
We're going to find out soon.
I mean, that's a very important point.
I mean, in this post-abundance world,
we're living in a post-capitalist world as well.
Money has very little meaning.
Emad, you and I, and Alex, you and I, have spoken about that at length.
That's right.
And I think, Peter, as the cost of talent is increasing, and it would appear that it certainly is,
that's going to force Frontier Labs to start competing based on algorithmic insights and ideas.
And I think that's a net positive for the economy and the world.
Amazing.
All right.
I love this next one.
The Zuck poaching effect.
So Sam just announced $1.5 million bonuses for every employee over two years.
He's now officially made every employee at OpenAI a millionaire by giving them over a million dollars.
That compares to 78% of Nvidia employees who are also millionaires.
David asked if that included the baristas.
Do we answer that?
I don't know, but we'll be at OpenAI in a couple of weeks and we'll ask.
Okay, it'll affect your tipping at the coffee counter, I guess.
Oh, my God.
Peter, I would expect this to create a bloom of seed funding of startups in the next year or two.
It's just going to be absolutely enormous.
I'm already starting, with some of the startups I advise,
to see the beginnings of an absolutely enormous wave.
Alex, that is such an important point, right?
This is something that America does so well.
We create, you know, these deca-billion and centi-billion-dollar
companies and trillion-dollar companies, and because of stock options and stock distribution,
we make all the employees super wealthy, and they turn around and invest in other individuals.
And that doesn't exist in a lot of countries.
Dave, you've spoken about this.
Well, there's two things that are different this time.
But you're right.
That is the engine of America, and it works really, really well.
This time around, it's so fast, and the teams are so young.
So that's unprecedented, and I could see some things that go wrong with that, but it's, you know, it's a field day.
right now, so might as well savor it. Also, it's much clearer now how you're supposed
to work with OpenAI, Anthropic, or Google. How's that? Well, I mean,
they've made it very, very clear that they want partners in all these categories, especially
complicated, regulated categories, or categories that have proprietary data. Here's the API,
here's how we want to work with you. The pricing is going to be super low. We want you. For those
three companies, it's really clear. It's not as clear with Grok yet, and I don't think anyone knows how to
work with Meta, if there is a way. But for the three other big guys, it's just a field day for,
here's how we want to partner. And please just bring in the revenue, change the world. We're all
happy. So I really am cheering for Sam in this battle too because, you know, Marc Andreessen built
Netscape, coolest company ever, and got absolutely obliterated when Microsoft woke up. They just annihilated
him and changed the course of his life. He did well in the end anyway, but complete life change.
Sam is that guy. He woke up Google. He's got a little shaky relationship with Microsoft right now.
I think he pushed Google over the edge. I think they were awake already in that regard.
Yeah, yeah. Well, so now he's got them all coming after him concurrently, and he's got to outrace them.
And it'd be a great American success story if he can stay ahead of that and survive.
Can't wait for the Hollywood movies that are coming out all on these subjects.
Yeah. I think one of the really interesting things is that crypto has basically been legalized in America,
almost fully, in the last week.
And so I think next year, crypto x AI is going to be the most ridiculous thing you've ever seen,
because these start-ups will go with a few smart people.
They'll get massive traction by leveraging these models.
And then anyone will be able to buy them pretty much instantly.
And so we're just at the start still of the bubble, I think, versus what we're going to see.
It's going to be the biggest bubble of all time.
Well, bubble has a negative connotation to it, in your mind.
Of course.
But, you know, we're just at the start now.
Like, this is the final hurrah of the current financial system.
Or societal system as well.
I really think, if you just take a step back and try and visualize Sam's life, for real,
the biggest companies in the world are offering your direct reports $1 billion to walk out the door.
You have to fight that. At the same time, Mira Murati and Ilya
Sutskever, two of your founders, are trying to raise $10, $20 billion to compete directly with
the thing they built at OpenAI.
Well, did they get it?
They did raise.
They did.
Does it get any harder for an entrepreneur than where Sam is right now?
And he's like bulletproof.
He's just fighting his way through it.
It's something that the movie will be really cool, I think.
There's so much to work with.
This is a great testament to the fact that if you keep pushing products,
right? And keep doing or launching new things and keep innovating. You can stay ahead. And Facebook showed us that. Yahoo showed us that. Google showed us that. All in their era, they just kept breaking boundaries. And so the only thing now is, can you break those boundaries and break the status quo? And relentlessly keep doing that. And differentiate yourself from the competition. Yeah, I think it's sort of an interesting economic experiment. In the past, I've compared the AI buildout that's happening in the U.S. to 1930.
and the prelude to the Manhattan Project.
It's sort of an interesting thought experiment to ask
what would have happened if nuclearization
and the Manhattan Project hadn't been a nationalized effort
but instead had been a private sector effort
where blue-chip companies were all competing with each other
to see who could build the first atomic weapon
and how much would they be spending
to poach the top scientists from each other
to build that first atomic bomb
that has such strategic import over the future light cone.
I think we're living in some sense a civilian version of that thought experiment.
Amazing.
Actually, the really interesting thing is it's not hard to build the models,
if you know how.
The "if you know how" is really, really rare.
And so that's why they're willing to splash these billions on top of that.
And it'll be interesting to see what they come up with now as these things get commoditized.
All right.
On the OpenAI train, Nvidia and OpenAI announced their first
European data center, in Norway. This is a $2 billion OpenAI data center with 100,000
Nvidia GB300 superchips. It'll host 230 megawatts of capacity, expandable to 520 megawatts,
so half a gigawatt of capacity, powered 100% by renewable energy out of Norway. Let's take a quick
look at this video.
The launch of Stargate Norway marks a new chapter for AI infrastructure in Europe. We're entering a
new industrial era. Just as electricity and the internet became foundational to modern life,
AI will become essential infrastructure. Every country will build it. Every industry will depend on it.
AI is no longer hand-coded. It is trained. It is refined with massive compute. It is deployed
into factories, research labs, and digital services. Stargate Norway will be powered by GB300 superchips
and connected with NVLink. It is designed to scale to hundreds of thousands of GPUs
and support the most advanced models in training, reasoning, and real-time inference.
All right. There you have it. Emad, analysis, please.
Yeah, I mean, I think this is part of the big sovereign AI strategy, because your comparative
advantage as a country will be how many chips you've got and how much intelligence you have
when most of your work is digital. We've seen OpenAI go
very aggressively on this front. In fact, this week, they announced that they're going to be
rolling out ChatGPT to all federal workers in the U.S. at the cost of $1 per agency per year.
So I think the land grab has really begun.
They couldn't say, they couldn't have said free, huh?
Yeah, I would add, Peter, there's, I think, a less obvious angle here. Peeling back
the details on the announcement, this new data center is planned to be powered with
hydropower, which is intrinsically scarce. You either have access to it or you don't.
It's not that easy as a nation-state to create a lot more hydropower. So that
means there is very literally a land grab here, and this is Stargate planting its flag in that
hydropower. To the extent that Europe has a policy of binding power to certain energy sources, there's only a
finite amount that's available to reprogram to AI. So, a real land grab. And we'll see geothermal
energy as a land grab, and we'll see other areas. I want to move this forward here. We saw a couple
of interesting announcements coming out of the White House. So Apple announced a hundred billion
dollar U.S. investment. This is increasing their total investment to $600 billion. And I don't know.
Finally, we're seeing Apple come back to the U.S.
How much of Apple's products are overseas manufactured right now?
Anybody have an idea?
It's got to be overwhelming majority, like 90 plus percent.
Yeah, huge.
Comments, Dave?
Well, look, in most of the countries that have, like, a Samsung in Korea, the government-industrial integration is very, very tight.
The U.S. has never really had that before. This is the first time. But, I mean, it obviously
works really, really well. It got Japan on the map, and then it got Korea on the map, and now it's
gotten China and beyond on the map. And so, you know, Trump is the first president to really take
this to its limit. He's a business guy, so he knows how to do it. And it's obviously going to
work really, really well. It's not super hard to figure out. You just need to do it.
I would also maybe add, I think there's, again, going back to this idea of a tech tree existing
for civilization, it seems clear that there's an innermost loop to the tech tree that's at the
intersection of fabs and electricity sources and drones and rare earths. And to the extent that it's
possible to co-locate as high a density as possible talent and infrastructure for building
all of these, I think that has the potential to lead to an economic explosion for the U.S.
for the world. Amazing. One more article coming out of the White House here on AI, and that is
Trump demands Intel CEO resignation over China ties.
Trump labels the CEO highly conflicted over $200 million plus in past investments
in Chinese tech firms and his relationship with the Defense Department.
For me, this has echoes of the J. Edgar Hoover anti-communist campaigns from the FBI.
Emad, do you have any opinion on this, being a non-American?
Yeah, well, I mean, look, this is just
posturing, right? Like, I think this whole U.S. versus China AI thing is completely overblown,
because everything gets commoditized soon anyway. Actually, to be honest, we should have had the push
for open source, given that China wants to get into all our systems. Then they would have actually
put proper money behind it. I think it's completely wrong there because, again, the correct
view is this is abundant and it's going to come to everyone everywhere. You can't keep a lid on it
at all. How do you keep a lid on math?
I threw this into the deck, actually, just to spark the conversation. Right now, of the chips that are driving this entire AI revolution, two-thirds, 66% market share, go through TSMC, a single manufacturer.
Crazy.
That's utterly insane and not sustainable.
So my guess is that the White House is thinking about this and talking about Intel every day.
It's no coincidence that Trump decided to tweet about, you know, one CEO.
The China thing, I don't know what he's thinking about there. But, you know, Lip-Bu's a 65-year-old
guy, and Intel must succeed. It's just an incredible national priority.
It's an incredible asset, right?
I mean, it defined the last 50 years.
Mm-hmm.
And, yeah.
So, so anyway, the point is the White House is talking about it.
We need balance in chip manufacturing desperately, and we need a lot more volume of chip
manufacturing, so.
I mean, if you're at Intel, what you should be thinking about is, how do you leapfrog?
Well, they have, you know, their 18A tech, that is, 1.8-nanometer or 18-angstrom, and it's absolutely fine.
They need to get the yields up.
And then they need to build more fabs, which means federal help.
And so I just think that if someone running that company can get friendly with the current
administration, that it's all unlocked and it'll explode and succeed wildly, which is what
America needs.
And I don't know.
I hope they figure out the relationship between Lip-Bu and Donald Trump quickly.
Yeah. I mean, actually part of this was because Intel was trying to sell its fabs to TSMC.
Yeah, which gets complicated.
I mean, that would be devastating for the world, you know, really.
There's no way that can get through.
But I get it, right?
It's because all the losses at Intel come from the fabs, so they would immediately monetize a huge asset.
The remaining Intel would be hugely profitable the next day.
So that's the allure of that transaction.
But then you have one company controlling the entire destiny.
Yeah, there's no way.
That makes sense.
All right.
I want to close out with this slide, which I find telling, especially on the back of the Intel conversation: we're still early in terms of buildout.
So here we see a slide showing infrastructure CAPEX as a percent of the U.S. GDP.
So the railroads were 6 percent of GDP back in the 1880s.
Telecom was 1 percent back in 2000.
And today, AI data centers in 2025 are at 1.2 percent.
We're still early.
And Alex and Emad, we've talked about
how we're about to turn the planet into computronium.
We're building data centers every place.
And maybe the solar system.
We'll see.
And maybe the solar system.
Selim, you've got an event coming up soon.
Talk to me about it.
On August 20th, we have our next monthly EXO workshop.
Our last two or three have sold out.
People are absolutely loving them.
It's $100 to come bring your company and we'll teach you how to build an EXO.
We actually have a great ad which we'll get a link to and post in here, which is they created an ad where an AI reads out a real review by a real person, but it's an AI reading it out.
It's super funny.
Okay.
It's fun.
And for those interested, I've got some comments interested in the Abundance Summit in March.
Applications are closed at this moment.
They'll be reopening in September.
but you can get on the wait list by going to www.abundance360.com
and let us know that you're interested.
We'll have all of our moonshot mates at the Abundance Summit as well.
Let's take a quick look around the horn.
Dave, what's happening for you in the next few weeks?
Well, the biggest thing by far is we'll be together at OpenAI in, what, 11 days, 12 days?
I'll be there the whole week, actually.
And I, God, there's so much going on in that building.
So really looking forward to that.
We're having a fun podcast with the chief product officer, Kevin Weil, at OpenAI.
Looking forward to that conversation.
Emad, it's 2 a.m.
Do you know where your children are?
You're a nuclear power source, buddy.
Thank you for sticking with us through the wee hours of the UK.
That's too much fun to sleep.
It is.
And what's on your plate over the next month?
We've got some big releases coming up.
In particular, I've been looking at the economics of the AI age, so it's going to be wild.
I'm going to be releasing a bunch of stuff around that.
I've seen what you're going to release, and it is stunning.
You know, dare I say, you know, just earth-shattering.
Alex, how about yourself, pal?
Oh, my goodness.
Well, I think we're in a time, although on an exponential curve, every point looks like the knee in the curve or the inflection point.
So one has to be careful of such anthropic bias.
I spend most of my time advising tech startups and making sure that the benefits of AI are
evenly distributed throughout the economy.
And every day is an adventure and an opportunity to smooth out the singularity, as it were.
All right.
Well, everybody, thank you for joining us on this episode of WTF and the GPT5 announcement.
We'll be coming back to you with an episode again next week.
Please tell your friends about what we do. Our mission here is to help you understand
how fast the world is going, to inspire you, to give you the motivation to create your own moonshots,
and to make this understandable. And actually, what was the word you used, Alex? Riveting.
Riveting. I'm at the edge of my seat.
An amazing time. The most amazing time ever to be alive. All right, to all of you, thank you
for a fantastic conversation. Every week, my team and I study the top 10 technology metatrends that
will transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport,
energy, longevity, and more.
There's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and
our careers.
If you want me to share these metatrends with you, I write a newsletter twice a week,
sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else,
this report is for you.
Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech.
It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it.
To subscribe for free, go to Diamandis.com slash metatrends to gain access to the trends 10 years before anyone else.
All right, now back to this episode.
Thank you.