Moonshots with Peter Diamandis - US vs. China: Why Trust Will Win the AI Race | GPT-5.2 & Anthropic IPO w/ Emad Mostaque, Salim Ismail, Dave Blundin & Alexander Wissner-Gross | EP #214
Episode Date: December 9, 2025
Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends
Emad Mostaque is the founder of Intelligent Internet ( https://www.ii.inc ). Read Emad's Book: https://thelasteconomy.com
Salim Ismail is the founder of OpenExO
Dave Blundin is the founder & GP of Link Ventures
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Grab dinner with MOONSHOT listeners: https://moonshots.dnnr.io/
*Recorded on December 6th, 2025
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
China is accelerating its push to become independent of NVIDIA,
with Cambricon planning to triple output to a half a million accelerators in 2026.
I fully expect, as I mentioned on the pod in the past, that we're just going to see.
Having an open source model that you can test as you're developing it
really closes the feedback loop a lot more aggressively here.
And most of the Chinese models are optimizing around this very sparse MoE-type structure,
with DeepSeek and similar arches with Muon-type acceleration.
So you're getting towards one architecture
that they can just engineer and output industrially.
And who's the best at industrial manufacture?
The challenge with China is people don't trust it.
I think we're going to see a Cambrian explosion,
no pun intended, of architectures coming out of China
now that China has been effectively decoupled from the US tech stack.
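As a rough illustration of the sparse mixture-of-experts pattern being described, here is a toy top-k routing step. The sizes, the gating function, and the linear "experts" are all made up for illustration; this is not DeepSeek's (or anyone's) actual configuration.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through a sparse mixture of experts.

    Only the top-k experts (by gate score) run, so compute per token
    stays small even as the total expert count grows.
    """
    logits = x @ gate_w                      # one gate score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a fixed linear map, for illustration.
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, m=m: x @ m for m in mats]

y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # output keeps the model dimension, but only 2 of 16 experts ran
```

The point the panel is making is that once most labs converge on one such template, the remaining work is industrial scale-out rather than architectural search.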
So stepping up a level, is this good for humanity or not?
I think the big question is,
What does the finish line look like?
Now that's the moonshot, ladies and gentlemen.
Anyway, I'll get us going a second, but what a fun day yesterday.
We were all in Seattle.
Salim, we missed you.
You were in Brazil.
Good morning.
You arrived from Brazil at 6 a.m. this morning.
Yes.
And got an hour and a half of sleep in, so I'm foggy as F.
All right.
Well, hey, that means we all have a shot at you.
But I'm just back from Rome and Emad's in London.
Let's kick off with the covering-the-world thing we've got going here.
Yeah, fantastic.
And Alex is just back from San Diego.
Yeah, and God knows.
Yeah, I just got back from Vietnam and Japan yesterday.
Look at us.
Globetrotting, gentlemen.
It's like, you know, no time for sleep.
It really is.
I mean, I feel like we're going 24-7.
I don't know about you guys.
Well, look, if you want to transform the world, you have to go out into the world, right?
And I think that's what we're all doing.
I think that's a good opener, too, because, Peter, you just got back late last night from Seattle, and that'll come out in a couple days.
Yeah.
If we just mentioned the whole world that we've covered in the last week, that's pretty cool.
Globetrotting.
But a lot going on.
Shall we jump in?
Yeah.
Is that enthusiastic, yes, everybody?
Yeah, let's get in.
Let's go.
Game time.
Make it happen.
Engage.
I'm trying to find my neocortex.
It's there someplace.
Don't worry about it.
It'll show up.
All right, everybody, welcome to moonshots.
This is the conversation that's changing the world.
Hopefully, we can help you get ready for the future.
And this is the news that if you're not watching the crisis news network
and you have time to watch moonshots,
we hope we'll deliver to you sort of what's going on,
what's happened the last week in AI, robotics, data centers, energy.
It's a lot.
Here with all of my incredible moonshot mates,
we have a, let's see, a 5x
increase in capabilities here today because not only do we have AWG, we have Emad as well,
Salim, DB too. It's going to be amazing. All right. I'm going to jump into our first stories.
We're going to start with where AWG was last week: NeurIPS 2025. So, Alex, this is you?
Yeah. Yeah. NeurIPS this year was a bonanza. I've called it in the past the Woodstock for AI. A few
observations. It had more than 29,000 registrants this year, which is almost a 50% increase over
last year. It was enormous. Alibaba, the Chinese lab, had 146 papers accepted, including
a Best Paper Award. Anecdotally, the language that I heard the most in the hallways was Mandarin.
There was a sense that the frontier labs have all the resources, the academic labs do not,
and the frontier labs were at NeurIPS mostly to recruit academics.
Again, this is sort of a sense of the conference, if you will:
the American frontier labs, and the frontier research coming out of those labs, have largely gone dark.
So what was being shown on the research end was largely coming from academic,
or academically resourced, labs that lack the resources.
Even though frontier labs were there on the sidelines to recruit,
the publications and the oral presentations at this point are largely not coming from frontier labs,
except for Chinese frontier labs.
So what does NeurIPS stand for, first of all?
Let's start with that.
So NeurIPS, formerly NIPS, stands for Neural Information Processing Systems.
So it is the largest AI conference in the world.
It's held once per year every December.
And it is where some of the most striking AI research historically
has been published. It's also the place, the one time per year where all of the frontier
labs are all under one roof. And you get really a sense of the pulse of the AI space just
from being there. Some of the most interesting meetings happen in the hallways. Optimus Gen 3,
humanoid robots there in force. You also see a sense of the vibes, the zeitgeist of AI,
right now. So you can see these are a bunch of photos and video I took from the conference. We've talked
on the pod in the past about how the Chan Zuckerberg Initiative has now pivoted to solving all disease
with AI. And that solve-everything mentality, that math, science, engineering, medicine are just
going to be solved imminently with AI, was very much on the show floor, to the point where it
made banners. It's really, I think, such a wonderful, wonderful way to tap the zeitgeist of the space.
Amazing.
It's interesting to see that a lot of it is hardware and much more than you'd expect.
There's a lot of hardware, and the feeling of the moment is that robotics in particular is the next big thing after agents.
I should also mention a lot of the attendees watch and are fans of Moonshots, and I think there would probably be a lot of interest if we were to do a recording in the future from NeurIPS or ICML or ICLR.
Well, it's a global thing.
I don't know what those TLAs stand for.
It's kind of like the Olympics. It's in a different city all over the world every
session. So it just happened to be in America this year. But where is it next year, if we want
to record? I don't think they've announced that yet. It's in San Diego again. Is it really?
Back to back? That's easy. NeurIPS 2026, here we come. All right, let's move on.
There's something interesting that I think I should point out. So at ICLR, which is another similar
conference, the number of first-author-affiliated Chinese papers went from 9% in 2021 to 30% this year.
And the US number went from 52% down to 36%.
I think we saw something similar with NeurIPS, but it would be good to crunch those numbers.
I think we can't emphasize that dynamic enough.
And, Emad, I'd be curious to hear if you have a different take on this.
But my take on the floor was that American frontier labs have basically gone
dark and are largely no longer publishing all of their internal results.
Chinese labs are continuing to publish in the same way that they're continuing to push
open-weight models.
And so the gap, the research publication gap is being filled in part with Chinese labs.
I've got a question for you on that.
So I know exactly why the U.S. is going dark.
You know, everybody working on these big frontier models is a former Googler.
Not everyone, but almost all of them.
And the billion-dollar signing offers are a real problem for, and we saw that yesterday at Microsoft, too.
It's just recruiting warfare.
So Google has explicitly gone dark, you know, after being very open and publishing everything for years.
So I know why that's happening, but why are the Chinese still being super open?
Well, it's because the Chinese are backing open source now, aren't they?
It's like deploy, use open source to make it more efficient, and then that's how we'll win.
And that's how the models propagate.
Yeah.
Strategically, it's great differentiation.
If there are really strong American frontier models that are largely hidden behind APIs, then under the spirit of commodify-your-complement, you release lots of open-weight models.
And it's a land grab at this point, right?
If a lot of nations and entrepreneurs and companies are beginning to use the Chinese models, they have a foothold.
It's also, I think, there's an integration angle. So again, under the
banner of commodify your complement, a classic economic strategy, there is much more to AI than just the models. There's
integration with society and the economy. So strategically, if you're at a disadvantage
on the model front, you open-weight, open-release all of the models and focus your attention, focus your
economy, on deeply integrating all of those open-weight models, and that becomes the competitive
advantage. Every week, my team and I study the top 10 technology metatrends that will transform
industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum
computing to transport energy, longevity, and more. There's no fluff. Only the most important stuff
that matters that impacts our lives, our companies, and our careers. If you want me to share
these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute
read via email. And if you want to discover the most important metatrends 10 years before
anyone else, this report is for you. Readers include founders and CEOs from the world's
most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you
if you don't want to be informed about what's coming, why it matters, and how you can benefit from it.
To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years
before anyone else. All right, now back to this episode. All right, let's jump into our first article. This one's
from NeurIPS: Google's Titans and Miras are helping AI have a long-term memory. Google's
Titan is a new architecture with deep neural long-term memory that updates itself in real time.
Do you want to jump into this, Alex?
Yeah. So one of the many blockades to radical progress in terms of advancing AI models
is obviously the context window size. It would be miraculous.
if we could have, say, a billion tokens in context or a trillion tokens in context.
If we could have the entire web in context or the entire human genome in context,
imagine the reasoning powers we'd gain and all of the problems that we could solve.
The problem is, as folks in the space have long used to motivate just about every recent paper on arXiv,
the quadratic complexity associated with increasing the number of tokens in the context.
It's really painful to increase a conventional vanilla transformer past a few million tokens in context.
So we resort to techniques like RAG, retrieval-augmented generation, and other techniques to try to effectively increase the amount of working memory, if you will, that an AI model can keep.
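To make the quadratic wall concrete, here is a back-of-envelope sketch. The 4096 model dimension and the FLOP formula are illustrative simplifications (counting only the attention score matrix), not any specific model's numbers.

```python
def attention_cost(n_tokens, d_model=4096):
    """Rough FLOP count for the QK^T score matrix alone: n^2 * d.
    (Ignores the projections, the softmax, and the value mixing.)"""
    return n_tokens ** 2 * d_model

# Quadratic growth: 10x the context costs ~100x the attention compute.
small = attention_cost(128_000)      # a GPT-4-class context window
large = attention_cost(1_280_000)    # 10x longer context
print(large / small)  # → 100.0
```

That factor-of-100 for a mere factor-of-10 in context length is why a billion-token vanilla transformer is impractical, and why architectures like Titans look for a different memory mechanism instead.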
So Titans and Miras, these are Google's latest attempts to get past that bottleneck.
And as we've talked about in the past on the pod, there are a variety of techniques, architectural techniques, to try to break the context window limit, like recurrent neural networks, popular example.
They don't have any explicit context window limitations, but they forget.
So the approach that Titans and Miras propose is sort of biologically inspired, distinguishing between short-term memory and long-term memory.
It's sort of ironic given that the attention mechanism itself that's powering basically the whole economy at this point was originally designed for differentiable long-term memory.
We found ourselves back there.
The idea is to use surprise, a numerical metric of surprise, to decide what to commit to long-term memory and what not to.
And it turns out it scales really well without catastrophic forgetting.
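Here is a toy illustration of the surprise-gating idea, using a simple prediction-error threshold over a running mean. The real Titans work uses a gradient-based surprise signal on a learned neural memory, so treat this only as a stand-in for the concept: commit to long-term memory what deviates from expectation, skip what doesn't.

```python
def surprise_gated_memory(stream, threshold=2.0):
    """Commit an item to long-term memory only when it is 'surprising',
    i.e. far from the running mean of everything seen so far.
    A toy stand-in for Titans' gradient-based surprise signal."""
    memory, mean, n = [], 0.0, 0
    for x in stream:
        surprise = abs(x - mean)      # prediction error vs. expectation
        if surprise > threshold:
            memory.append(x)          # novel → remember it
        n += 1
        mean += (x - mean) / n        # update the running expectation
    return memory

# A mostly-flat stream with two outliers: only the outliers get stored.
stream = [0.1, -0.2, 0.0, 5.0, 0.1, -0.1, -4.0, 0.2]
print(surprise_gated_memory(stream))  # → [5.0, -4.0]
```

The appeal is that memory grows with the amount of genuinely new information, not with raw sequence length.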
Just to give people a sense of this, the models historically, right, GPT-4 and 4o had about
128,000 tokens as a context window. Two million tokens, which we're talking about here,
is about 3,000 pages of text, 16 novels, if you will. And you mentioned the human genome,
right, which is 3.2 billion base pairs. Two million
tokens only covers about 0.06% of the entire human genome. So will we actually get to a near
infinite context window someday? Is there a strategy for getting there? Absolutely. And I think
these papers and others, when Demis Hassabis talks about there being only one to two major
AI advances at the level of transformers left before we achieve what he'd characterize as
AGI, I think breaking the context window ceiling is probably at least half of one of those
great advances, and I think we will absolutely get there pretty soon.
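As a quick sanity check on the back-of-envelope numbers in this exchange (the words-per-token and words-per-page ratios are rough rules of thumb, not exact conversions):

```python
# Back-of-envelope check of the context-window numbers discussed above.
tokens = 2_000_000
words = tokens * 0.75              # common rule of thumb: ~0.75 words per token
pages = words / 500                # ~500 words per printed page
genome_bp = 3_200_000_000          # human genome, base pairs

print(int(pages))                          # ≈ 3000 pages of text
print(round(tokens / genome_bp * 100, 2))  # ≈ 0.06% of the genome
```

So even a two-million-token window holds only a sliver of a single genome, which is why "near-infinite context" is treated as a frontier problem rather than an engineering tweak.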
Emad, how do you think about this?
Yeah, I mean, I think I agree with Alex, and also this fits with Google's hardware architecture,
so the massive kind of toroidal TPU pods that they're doing that can handle huge amounts of memory,
gigabytes, probably terabytes of memory in one instance.
By making it more efficient, like from the graphs that we see on Titans and Miras,
everything else kind of just slopes straight down.
It goes almost continuously as you scale without the additional complexity overhead,
because you used to need almost exponential amounts of compute as well as memory to handle that increase.
Being able to just capture everything in one go means that you don't need to store stuff to files anymore.
Like you could have an entire picture of every email the organization has ever sent
just in the memory at once.
So it can track everything and figure out
the interconnections dynamically.
And again, I think that's what helps you
break through to that next level of performance.
And it's pretty exciting.
And I think a very logical approach
that they've taken here with the surprise element there as well.
Right now, transformers are a bit brute force,
to be honest.
This reminds me of Ben Goertzel, who back around, I think,
2010 or so, launched the OpenCog project,
which was an open-source effort at recreating
a mind. So they had different modules of what constitutes a mind, like memory, pattern recognition,
sensory adaptation, etc. And they were trying to replicate each module in software and then
improve the software. Typical Ben, very, very early for the timescale. But as we look at
this, and we're building like memory and we've got the processing power, it feels like this new
generation will get to that point. And essentially, without realizing it, we're actually
growing a mind.
All right.
I can't wait because, frankly, I would love to have perfect memory.
And when we add augmented reality glasses and an always on version of Jarvis, having an
assistant there that's able to constantly remind you of everything you've ever known and
everyone you've ever met is going to be super handy.
All right.
Let's go on to OpenAI.
You know, in the last pod
we talked about their code red response to Google's growth and dominance.
Well, some news is coming out.
I want to announce today that it looks like GPT-5.2 will be coming online next week.
And this chart we're showing, which shows benchmarks for GPT-5.2 and Gemini 3 Pro, this is still hearsay.
This was put up on X.
Don't know if, in fact, it's true.
but just for fun, you know, we're going to see once again, are we leapfrogging model on model on model
and we'll see XAI come out again with something, I'm sure, very shortly thereafter.
Alex, the catchphrase I heard over and over again at NeurIPS this week is that this is a rat race.
There are so many employees at the frontier labs who are just grinding away competing at what they view as a rat race
to achieve the Frontier Max in this case.
I fully expect, as I mentioned on the pod in the past,
that we're just going to see leapfrogging on a nearly weekly basis.
And we're seeing it. Absolutely.
Yeah.
Until we get to the finish line.
And I think the big question is, what does the finish line look like?
It's interesting that Sam went out and made this Code Red announcement,
which got picked up by all the media everywhere, right?
And it's kind of an interesting strategy because he's putting the organization on alert.
He's letting the world know that he's put it on alert.
He got a lot of negative press from that, but he doesn't care.
He just wants to refocus the organization.
It really is an interesting management strategy.
There's a saying that a good crisis is a terrible thing to waste.
I love that.
That's fantastic.
Leverage that, right?
And I think that sends a message throughout.
You know, look, Google did this exactly the same thing, right?
Sergey Brin said, I'm parachuting back in.
We're going into founder mode, and we're going to solve the AI thing, and they've done it.
And this is now the leapfrogging rat race, which is good for the consumer in so many ways.
That's amazing.
Yeah.
You know, we're going to be sharing the conversation we had last night on our next pod with Mustafa Suleyman, who's the CEO of Microsoft AI.
And just foreshadowing that, one of the things we
talked about is safety. And if you're in a rat race and everyone's just trying to leapfrog
everybody else, you know, it feels like safety goes to the sideline. And it's just an
accelerationist point of view over and over again. Emad, any thoughts on this code red?
Yeah, I think the code red makes sense because the competition is intensifying. And there's only so much
attention that consumers have, as it were, for these kinds of models. This benchmark chart I'd be very
surprised at, because, for example, it has Video-MMMU, and GPT-5 can't understand video at the
moment. So there'd probably be a brand-new architecture underlying that. However, all these numbers
will be hit in the next six months, probably, or a year. And that's why, like, we're running out
of benchmarks. Well, Peter asked me, you know, should we put this out there? Because we have no idea
if this is real or not. It's just a leak, an internal leak. And we'll know next week, we could
look really stupid if this is totally wrong. But I wanted to actually make sure we put
it out there in case anyone wants to trade on Polymarket, because right now, for the end of
year, you know, you can buy ChatGPT or OpenAI at like six cents on the
dollar. So if that Humanity's Last Exam number is right, that's just mind-blowing. So we'll find
out next week, but I wanted to at least give everybody a chance to see it and make their own
guess on whether this is real or not. And like Emad said, we'll hit these numbers within six
months, no matter what, you know, somewhere. You know, the other thing that was really interesting
last night at Microsoft is that we'll start adding a column to all these charts that has Microsoft's
numbers independent of OpenAI. That's the mandate now. I think, I think Mustafa was really clear
that, yeah, we're going to be another column on every one of these charts and another line on
Polymarket. Yeah. Well, I guess one other point just to note, you know, Sam has got a lot of capital
to raise in order to implement the buildout that he's announced. And I think this kind of,
like, I'm willing to do whatever it takes to stay out front, is part of the strategy of being able
to raise capital and get ready for an OpenAI IPO. That's a great point. That's a great point, Peter.
This is as much for investors as it is for the general public and for the employees.
Yeah, but I think the bottom line for all of our subscribers
is: expect this on a week-by-week basis, which is what makes our conversations on this
pod so interesting. It's like it is watching, I don't know what the equivalent race would be.
I mean, it is a horse race, but it's continuous with a billion dollars per day plus going
into fuel this crazy competition. I think that the closest analog I can think of is this is like
a world war with multiple campaigns and multiple fronts and multiple thrusts.
and initiatives. Yeah. If you only have time to look at two numbers on this chart, look at the
first and the last, because the first one is the one that's most correlated with self-improvement
and accelerating AI. Well, just to read them for those listening here. Well, I mean, it's
speculation, but the high bar on Humanity's Last Exam is, without tools, 37.5%, which is really good,
and that's Gemini 3 Pro. Then with tools, it's higher than that. It's
closer to 50%. Here, the speculation is that they've leapt all the way up to 67.4%,
which would be crazy. I mean, again, only Alex can answer those questions as far as we can tell
on this podcast. I think Emad would do a good job as well. But what is... I mean, seriously, they're
damn near impossible. What does "with tools" mean, for those who don't know? Well, so then the AI,
it doesn't just answer in one pass. It's allowed to actually use a whole variety of calculators,
And really any kind of a tool that doesn't give it the answer, it's allowed to use in its chain of thought.
And it can iterate many, many, many times.
And so it's not just the standalone LLM.
It's the LLM accelerated or enhanced with other software, which is perfectly fair if you're trying to solve a world problem, cure a disease or whatever.
That's why they use that benchmark in addition to the raw benchmark.
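The "with tools" evaluation loop Dave describes can be sketched minimally as follows. The model interface, the tool names, and the message format here are all hypothetical stand-ins, not any lab's actual harness.

```python
def run_with_tools(question, model_step, tools, max_iters=5):
    """Minimal 'with tools' loop: at each step the model may either
    answer or request a tool call, and tool results are fed back
    into its context so it can iterate."""
    context = [question]
    for _ in range(max_iters):
        action = model_step(context)            # model decides what to do next
        if action["type"] == "answer":
            return action["text"]
        result = tools[action["tool"]](action["args"])  # run the requested tool
        context.append(f"{action['tool']} returned {result}")
    return None

# Toy 'model': asks the calculator once, then answers with its result.
def toy_model(context):
    if len(context) == 1:
        return {"type": "tool", "tool": "calculator", "args": "37 * 91"}
    return {"type": "answer", "text": context[-1].split()[-1]}

tools = {"calculator": lambda expr: eval(expr)}  # stand-in for a real sandboxed tool
print(run_with_tools("What is 37 * 91?", toy_model, tools))  # → 3367
```

The key property is the feedback loop: the model can call tools many times inside its chain of thought before committing to a final answer, which is why the "with tools" scores run well above the single-pass numbers.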
And then the last line is the one that Alex loves for good reason.
Self-improving is one thing, but then self-financing
is another. And Alex, you can talk about that better than anyone. I'll hand it to you for that.
Vending-Bench 2 and Vending-Bench Arena I love, to the extent that we have any sort of economic
Turing test or economic benchmark for agents' ability to autonomously deliver a return on capital.
That is what we have right now. And as I've mentioned in the past, maybe just project this
into the ether, I would love far better benchmarks for measuring economic autonomy than
vending bench, but vending bench is what we have right now.
Because it's coming.
Actually, there's another thing that just crept in, didn't make it into the slide deck,
which is there's a trading competition with the AI bots, I think, on the crypto side of things.
And there was a mysterious model that actually made a profit reliably all the way through.
And Elon Musk just announced that was Grok 4.20.
Nice.
So that just literally snuck in.
The leapfrogging continues.
The leapfrogging. And so he said, that's how you pay for all the GPUs: just let Grok 5 run wild on the stock market. So you're going to compete against Elon and his million GPUs. Dollars are the best benchmark. All right, Anthropic is making news once again. There's a lot of excitement and energy. And right now it's still a bit of a rumor, but they may have an IPO as early as 2026. So Anthropic is negotiating a new funding round that could value them at $300 billion, as revenues are projected to reach
$26 billion next year. That aligns them alongside OpenAI, which is also exploring a future IPO.
So if we've got, you know, OpenAI going public to access capital and Anthropic going
public, I have to imagine xAI is also going to go public sometime in the near term.
Thoughts on this one, gentlemen.
Well, I mean, comment. Go ahead, sir.
No, go ahead.
I think if anything, xAI is relishing not being public, given its history. But I think more broadly,
the worst-case economic scenario for superintelligence is that it, maybe not the worst, the next worst case
is that it remains decoupled from the human economy and that an intelligence explosion happens
not on the publicly traded markets and that insiders and early employees and the machines themselves,
sort of skyrocket in terms of real wealth, but are largely decoupled from retail investors
and the rest of the human economy, that would be, I think, a highly suboptimal economic outcome,
whereas if we get enough IPOs from Anthropic, from OpenAI, from SpaceX, from some of these
other firms that are achieving hyperscale on land and in space, I think that is probably the best case
from a macroeconomic perspective for the economy. And once those happen,
assuming they happen, I think it's far clearer to see how the economy grows past the so-called
debt crisis, how we achieve hypergrowth, real hypergrowth over the next three years.
Dave?
Well, having founded and taken a company public, this is a big, big step.
And when Dario, the CEO of Anthropic Dario on the day, gets interviewed, he says,
I actually never visualized myself being a CEO at all.
I'm surprised I'm in this position.
Then when you become a public CEO, it's a whole another level.
So I'm guessing he'll grow into the role, but it's a big deal.
And so why do it and why would Elon be relishing not doing it?
Well, once you're in the public limelight, all your dirty laundry and your code reds become crystal clear in your stock price every day.
It's very hard to do what Sam does right now, you know, roam the world selling the story when your dirty laundry is right there on every stock ticker.
So it's another level.
But they have to do it because you can raise $10, $20 billion.
privately, but you have to tap into the public markets. Yeah, you got to be talking about much bigger
numbers, and the only way to do that is through the public markets. And it gives you a currency
for acquisitions, which is important. I think it also increases trust in a company if it's a public
company versus being a private company. Yeah. Sam was at Davos last year, and I was, you know,
following him around on the streets of Davos. And I mean, just the back-to-back-to-back. Were you stalking him?
He actually had... you couldn't miss him.
He had an entourage as big as a president of a nation.
But he's just doing meeting after meeting after meeting saying, give me another
billion dollars.
Give me another billion dollars.
And, you know, you can see how there's no way that can scale to what's happening next in AI.
I mean, and part of what's coming out in the news right now is a lot of the deals that
Sam and OpenAI have announced are options and not actual deals, which is fascinating.
Yeah, I think you're at this fascinating time now, though, whereby the dollar benchmarks are starting to accelerate; the revenues ramp up, like Anthropic's at 10 billion of revenue just a few years in. It's amazing. And they're actually catching up to OpenAI. But then the competition is going to get intense. Like, Opus 4.5 is $25 per million tokens. Grok 4.1 is 50 cents. Will you see substitution occurring? Or will you see these models actually just
being used to literally make money.
Like I said, I can easily see Elon in particular just say, Grok 5 is going to pay for itself
and all the GPUs by being the best hedge fund in the world and just let it loose on the stock
market.
And Macrohard is going to replace all the SaaS.
Yeah.
So it's an interesting.
You know the other thing that happened this week concurrent with this?
It's not in the deck, but the cost of HBM memory, the memory that drives AI skyrocketed.
It went way, way up.
And OpenAI in the news said we've reserved 40% of the world's supply of memory for our own data center work, you know, in Abilene, Texas at Stargate, which is crazy.
You know, normally memory cost comes down at a rate of almost half per year.
And so to see it go the other direction is the first bellwether that, hey, there's going to be a huge shortage of compute.
And if you don't go public and raise the capital and lock up the supply, you're going to be left compute starved.
Yeah, I mean, the same thing is going on in fundamentals, right?
I didn't put it in the deck, but there's a copper crisis right now.
The price of copper is going through the roof because of wiring these data centers.
All right, let's move on to our next story here.
Wait, quick point.
Yeah, please.
If their next-year revenue projections are $26 billion and the valuation is $300 billion.
That's only 10 times revenues.
Palantir is trading at 111 times revenues.
So that's cheap in that sense.
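For the record, the arithmetic behind that multiple (using the rumored figures from the discussion, which are not confirmed numbers):

```python
# Rumored figures from the discussion above.
valuation = 300e9          # $300B round valuation
revenue_2026 = 26e9        # $26B projected next-year revenue
multiple = valuation / revenue_2026
print(round(multiple, 1))  # → 11.5, a bit above the "10 times" quoted

palantir_multiple = 111    # stated in the discussion
print(round(palantir_multiple / multiple, 1))  # Palantir trades ~10x richer on this basis
```

On a forward-revenue basis that is closer to 11.5x than 10x, but the comparison the panel is drawing still holds: it is an order of magnitude below Palantir's multiple.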
That's a good point.
It's a good point.
No, it is reasonable.
It's crazy big numbers, but it's perfectly reasonable.
I think Salim is saying it's overly reasonable.
It should be at a higher valuation, and it will be.
It'll probably spike after an IPO.
All right, here's our story.
OpenAI finds confessions can keep language models honest.
So a new "confessions" method trains models to openly admit when they're hallucinating or when they've broken
instructions. The method encourages models to self-report mistakes instead of hiding them. So I am
completely curious here. It's like, okay, so you use this methodology, do you actually get two
reports? Here's my answer, and here's my confessions. Emad, what's going on here?
Yeah, I mean, the whole thing with next token prediction, right, is that the models kind of go along
and then they don't have the self-reflection
and other things like that.
I think that what you find is
when you've actually got the right prompts,
the right planning and the right loops,
you get very interesting things occurring,
particularly as the hallucination rates are dropping now as well,
because the models used to jump a lot
and skip these behaviours
because they didn't have the self-reflection,
they didn't have some of these other things.
So I'm not very surprised by this,
and again, I think what we'll eventually see is
what we saw with the DeepSeek V3.2 math paper, a concept called a meta-verifier, where models learn from their mistakes.
So rather than just checking against a very small baseline, are you being honest, where have you made mistakes?
Having that as a verification loop is very similar to how humans learn, and that's what causes big leaps in some of these more frontier areas of thought as well.
You know, what I found shocking, I asked one of the models, you know, how often are you
hallucinating or providing wrong answers on average? And I found a couple of studies, and one of them
was that 25% wrong answers on everyday user questions. Another one said GPT-4o and Claude
3.7 Sonnet hallucinate an average of 15% to 16% of the time. And I never expected that.
When I'm asking my questions, I'm assuming, and I think the majority of everyone,
perhaps not you, Emad and Alex, I'm assuming that they're correct.
If it's really, you know, 10 to 25% hallucination, that's scary.
It's basically the same as a human doctor, right?
And the interesting thing, though, is that it's dropped.
So GPT-5 dropped it from 18% down to 3%.
So it's a lot lower in the last generation of models.
Yes, go ahead.
Can I give you a dramatic example of this?
I think it's worse than a human doctor
because the models are actually trying to please you,
so therefore they're saying whatever.
I had a TV where the power went bad,
and ChatGPT said, oh, if it's this model here
and it's making a buzzing sound.
Yeah, you mentioned that in the last pod.
So I asked it then, okay, who do you know that can fix this?
And it gave me names of three local TV repair shops that were completely hallucinated
with phone numbers, with phone numbers, websites, names, addresses.
And I started calling them.
All the phone numbers didn't work.
And then I went and looked up.
None of them existed.
So this is a big problem.
If I lift up a level for this confession thing, I think it goes back to the earlier point
that it's great to have a feedback loop.
It gave me somewhat chills because I went to Catholic high school and the idea of
confessions is somewhat chilling. Who's the priest is my question when you do this type of model,
but I think the feedback loop is very powerful to have. Wow. Wow. I think it's also worth
mentioning the 50-year-old notion from economics of Goodhart's Law, which is that when a measure
becomes a target, it ceases to be a good measure. And the way that these models are trained
certainly and superficially rewards various sorts of behaviors that might be construed
as dishonesty. And being able to avoid Goodhart's Law,
clever ways to avoid Goodhart, I think are admirable on the part of OpenAI.
And maybe the final solution looks a little bit less like naively optimizing
just next token prediction objectives or reward maxing on RL objectives to solve math
problems or programming problems.
It looks a little bit more like some sort of multi-objective optimization problem,
where maybe there's some blend of a Goodhart-avoiding honesty reward and an accuracy reward
and an ethics reward that maybe almost start to look a little bit like separation of powers
in what we see in government systems. There's an executive, in some systems, a legislative,
judiciary, et cetera. I'm going to have to bookmark that comment, Alex, because you lost me
at multi-objective optimization problem. On that note, I'm going to move us forward here.
You have a good heart, Salim.
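For the curious, the multi-objective blend Alex describes might be sketched like this; the reward names, values, and weights are purely illustrative assumptions, not any lab's actual training objective:

```python
# Hedged sketch of a multi-objective reward: instead of maximizing a single
# accuracy score (which, per Goodhart's Law, will be gamed), combine several
# independent "powers" into one blended objective.

def blended_reward(honesty, accuracy, ethics,
                   weights=(0.4, 0.4, 0.2)):
    w_h, w_a, w_e = weights
    return w_h * honesty + w_a * accuracy + w_e * ethics

# A response that games accuracy but fails honesty scores worse overall
# than a balanced one, even though its accuracy term alone is higher.
gamed    = blended_reward(honesty=0.1, accuracy=1.0, ethics=0.5)
balanced = blended_reward(honesty=0.9, accuracy=0.8, ethics=0.8)
```

The "separation of powers" intuition is just that no single reward term can dominate the total.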
All right.
So again, it's like the next release.
Google releases Gemini 3 DeepThink, which uses parallel reasoning, testing multiple
solution paths at once.
That makes sense to me, right?
An upgrade from 2.5 DeepThink.
And it's excelled.
Once again, Humanity's Last Exam, GPQA Diamond, and ARC-AGI-2.
Going to our benchmark expert, Alex.
This is a template for how revenue is going to scale to justify the trillions of dollars of
CapEx. It won't just be faster models or better models or stronger models. It's going to be
lots of agents, fleets of agents, millions of agents, many millions, billions of agents, all running in
parallel to solve problems. That is, in my mind, and certainly based on the architecture of Gemini
3 DeepThink, which isn't just a faster, better, singular
model, but it's also scaffolding to have fleets of agents, fleets of Gemini 3 agents that are all
running in parallel to solve a given problem. That's the deep think part of Gemini.
Sounds like quantum computing to me. You know, it's like I'm going to run. No, I know that,
but there's the concept I'm going to run this problem in multiple parallel universes and bring back
the answer. I'm going to run this problem in, you know, a billion or a trillion agents and
bring back the best answer. Parallel, yes, but quantum computing
wishes that it had the economic utility of Gemini 3 DeepThink.
I'm using that.
I'm with Peter on this one.
Go ahead.
Go ahead, Salim.
I'm with Peter.
I'm with Peter on this one.
It feels like that kind of parallels.
But I think the broader point you're making, Alex,
is that when you have millions of agents,
each specialized.
If you take something like the Manhattan Project,
you have thousands of people, each with a deep specialty connecting together,
and the hive mind then solves the problem.
And we're going to see the same with agents.
Is that a metaphor that works?
Yeah, like when Dario speaks of a country of geniuses in a data center, this is what it looks like.
We're going to have billions of agents that are all going to be, individually, probably pretty expensive,
even though the cost of intelligence is going to zero.
But collectively, yes, this is going to generate trillions of revenue if we have so many agents.
And this is how we pay for all those data centers.
Well, yeah, I think that's a really important point, Alex.
Everybody talks about the cost of intelligence going to zero, but it's not actually going to zero.
It's going down to a low number.
But concurrently, the fleets of agents are getting so much bigger so quickly.
And, you know, we saw earlier in this pod, too, the process of iteratively expanding the context window to, you know, hundreds of millions of tokens and running many, many iterations to get rid of the hallucinations.
Those forces are going in the other direction.
And it's working far better than anyone thought it would.
And so it's very unlikely that the cost of intelligence is going to go to anywhere near zero.
Everybody's going to want more intelligence and more intelligence and more.
So there's going to be an acute shortage.
And I only mention that because many business leaders are out there saying,
let me just wait and see what happens.
And you're going to be starved of access.
And you can see this already.
When the new models come out of Gemini and they add another level of deep thinking,
it works incredibly well.
But you wait three, four, five minutes to get your answer.
And very often it says, we're experiencing unexpected loads right now.
Sorry, we're offline.
Like, how's that possible if the cost of intelligence is going to zero?
Well, it's not.
The cost per token is going way, way down.
But the use cases are expanding on at least three different dimensions
on this ridiculous curve the other direction.
And everybody's going to want it because it works so well.
So what does this actually mean other than we have yet another faster model
able to hit the benchmarks a little bit better,
and next week we'll be announcing the next better model.
I think the models are getting to the point now
where again ensembles of these models,
doing different things,
are just genuinely useful for real-world, advanced, complicated tasks.
I'm really looking forward to using Gemini 3 DeepThink.
2.5 Deepthink was pretty good,
but right now I think the only one usable for real frontier stuff
is GPT 5.1 Pro, which again does something similar.
Ultimately, what you want is you don't want a task that's really complicated to be done in seconds
nor minutes.
You want to be iterating on a task for a period of time giving input and feedback, and the model
just not making mistakes.
Like Gemini 3 Pro still makes mistakes in math when I'm using it, you know?
And GPT-5.1 Pro doesn't.
The model usage, again, I think the DeepSeek math paper is fascinating for this.
It's the first open source model that gets a gold on the IMO.
And for each of the problems, they use two billion tokens.
So about two billion words.
That gives you an idea of how many more tokens you can use.
Yes, and it works.
And it works.
I mean, if you want to experience this firsthand, just go to GPT 5.1, soon to be 5.2,
ask it to do something complicated for you that it can't quite do.
And then just keep asking it to try, on the order of about a thousand times,
back-to-back, and it will eventually get it right. Like, why does that work?
Dave, what's an example of a hard thing to do? Oh, I do this like all day long. In
fact, right after this pod, I'm back, I have a fleet of Kimi K2 agents scouring the
world right now, working on hard problems. But, you know, if I ask it to
translate legacy C code into Python, and then, you know, it comes back slow,
make it faster, improve the algorithm. Or, you know, more down-to-earth, do you do
that with your students? Do you do that with your students' work?
Carter tried. Try it again.
Well, that's the difference. That's where everybody's analogy breaks because there are a limited
number of humans and students, but there's an unlimited number of AI agents. And so
deploying a billion in parallel to work on something is out of your normal range of
intuition, but it just flat out works. You need to expand your intuition to this new world
we're moving into. I think that's one of the most important things that we can say coming out
of this, which is we're about to enter a new world where there is a near-infinite
amount of intelligence to be thrown at things by anyone.
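A back-of-envelope sketch of why "just keep asking it" works: if a single attempt succeeds with small probability p and success is cheap to verify, then N independent attempts succeed with probability 1 − (1 − p)^N. The `attempt` function below is a toy stand-in for a real model call plus a checker:

```python
import random

# Best-of-N retrying with a cheap verifier: even a 1% per-try success rate
# becomes a near-certainty over a thousand independent tries.

def attempt(rng):
    return rng.random() < 0.01          # 1% chance any single try is right

def retry_until_correct(n, seed=0):
    rng = random.Random(seed)
    for i in range(1, n + 1):
        if attempt(rng):                # cheap verifier closes the loop
            return i                    # number of tries it took
    return None

p, n = 0.01, 1000
success_prob = 1 - (1 - p) ** n          # ~0.99996 for 1000 tries
tries = retry_until_correct(1000)
```

The independence assumption is the idealized part; in practice retries of the same model on the same prompt are correlated, so the real gain is smaller but the shape of the argument is the same.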
So then the interlude.
Sorry, please, sorry.
Go ahead, Emad.
Yeah, I think one of the super interesting things on the context window, just realizing
it, is, again, the stuff that we get wrong. When we're trying to solve problems,
usually the only stuff that survives is the stuff that we get right.
Like at NeurIPS, it's papers full of all the stuff people got right.
Being able to actually have scientific method, strategy, and things like that,
where the context window includes everything you got wrong
for all the different models.
Fascinating.
We know that will do better,
because often it's what we got wrong
that actually guides us to find the mistakes.
Yeah, sure.
I mean, if you had asked everybody in the community
three years ago, is that going to work?
90% of people probably would have said, yeah, I really doubt it.
But it does.
Now everybody agrees.
Everybody at NeurIPS, I'm sure.
It just flat out works.
But that means you need massive, massive numbers
of parallel agents
and even, you know, even any given agent needs many iterations.
It's just a huge amount of compute, but it just solves problems.
It's incredible.
All right.
Our next story here is the reason AI became so efficient.
So AI got more efficient mostly for huge LLMs, with training becoming 22,000 times more efficient, while smaller LLMs only improved by 10x to 100x.
So what's the story here, Alex?
Yeah, this is sort of a finger in the eye to those armchair theorists who say that the small guys,
the small labs are going to benefit from algorithmic advances.
So this is a study that found that 91%, this is a study out of MIT,
91% of algorithmic efficiency gains between 2012 and 2023 were the result of only two things:
one, the switch from LSTMs to transformers, and two, the switch from Kaplan scaling,
named after my former office mate in the Harvard Physics Department, Jared Kaplan,
at Anthropic, and switch from Kaplan scaling to Chinchilla scaling.
Those two things.
So, LSTMs to transformers.
LSTMs were recurrent networks, prior to transformers.
What are LSTMs?
Long short-term memory.
So prior to the transformer revolution,
LSTMs were the favored language model.
I remember the old days prior to transformers, prior to GPT,
when Andrej Karpathy had his char-RNN language model that stunned people by being able to generate code.
There was a life prior to GPT.
But just those two algorithmic transitions, LSTMs to transformers and Kaplan scaling to Chinchilla scaling,
those two were 91% of the efficiency gains.
And what that says is that this story that, well, we're just stacking small winds on top of each other
and that eventually somehow algorithmic efficiency gains are going to enable smaller labs
to have some sort of advantage relative to larger labs, this suggests that's just not true
and that most of the algorithmic efficiency gains are actually accruing to the large labs that are able to scale out the most.
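For context, the Kaplan-to-Chinchilla transition mentioned above can be sketched numerically. The C ≈ 6·N·D compute rule of thumb and the roughly 20-tokens-per-parameter Chinchilla ratio come from the scaling-law literature; the exact constants vary by setup, so treat this as an order-of-magnitude sketch:

```python
import math

# Under the Chinchilla recipe, a compute budget C ≈ 6 * N * D (N parameters,
# D training tokens) is spent with roughly D ≈ 20 * N, i.e. parameters and
# data scale together, rather than scaling parameters much faster than data
# as the earlier Kaplan allocation suggested.

def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    # C = 6 * N * (tokens_per_param * N)  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# For a 1e21 FLOP budget: roughly 2.9B parameters trained on roughly 58B tokens.
n, d = chinchilla_optimal(1e21)
```

The efficiency gain in the study comes from this reallocation: the same compute buys a better model when data and parameters are balanced.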
There's one other thing in this paper that's noteworthy, and please don't read it, Alex
summarized it perfectly. That's everything you need to know. It's way longer than it needs to be,
classic MIT work, but it's a very, very good summary at the beginning of the document of why
this work is so important, because we're putting an immense amount of societal energy into
scaling the hardware. And Elon Musk talks about, you know, Tennessee all the time and Stargate
and huge amount of thought and research and discussion on our podcast about these massive data
centers because they're so visual and they're so expensive. But the software
side of it is very under-researched and very under-analyzed. So they're taking a first
shot at trying to give us better insight into the future rate of improvement of the software
side of it, because that's where it's not as expensive, but the lift could be enormous. And so I
think this is a really, really good focus area, and I'm really glad MIT's on top of it. But, you know,
Alex's summary is all you need to know about the work so far. Amazing. So the inner loop,
The inner loop here, then, is energy to GPUs to agents to intelligence, and therefore all of that scales, and the demand for adding intelligence to everything is so infinite that it'll be a long time before we run out of that.
That's why we're tiling the world with data centers.
So I think there's some YouTube viewers somewhere with a drinking game or a bingo game for how many times we can say, tile the Earth with compute, or disassemble the
moon, or whatever it is. So drink your whatever, or cross off your
bingo card on this one.
All right. There's another one. So robots are part of the loop.
Where are the robots in the loop? Here we go. This next article here I find really important.
This is about visual chain of thought. And it's the notion that chain of visual thought
methods are now able to give us a better understanding of images. And this visual thought delivers
three to six percent gains in reasoning performance. The image here is asking
the question: is the wall behind the bed empty, or is there a painting hanging on the
wall? And what you see then is the analysis, the ability for an AI to understand what it's
seeing. At the same time that we're bringing about augmented reality glasses and we have
humanoid robots coming online, this is going to be fundamental. I want my AI to
understand what I'm seeing. I want it to be able to remember during the course of the day
where I left the keys, or who I ran into, or recognize a face and give me their name.
How fast is this accelerating? I think this is,
if I may, even more profound than just garden-variety acceleration. If I ask all of you, or I request, don't think about pink elephants.
What's the first thing that happens? You start thinking about pink elephants, not in terms of text tokens in your mind.
You're not probably thinking in terms of language. You're using your visual cortex probably to create a mental image of pink elephants.
And the ability to visually reason is something we've talked about on the pod in the past.
We're finally, a few weeks later, a few weeks after this was predicted to happen, we're starting to see major gains in reasoning performance by models that can include visual tokens in their chain of thought, not just text tokens. And we're going to see a lot more of this.
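A schematic of what an interleaved visual chain of thought could look like as data; the field names and placeholder token IDs below are hypothetical illustrations, not any lab's actual format:

```python
# Toy sketch: reasoning steps alternate between text and image tokens,
# so the model can "look" mid-chain instead of reasoning in text alone.
# Token ID lists are placeholders standing in for encoded image patches.

chain_of_thought = [
    {"modality": "text",  "content": "Is there a painting behind the bed?"},
    {"modality": "image", "content": [10452, 88, 3021]},    # crop of the wall
    {"modality": "text",  "content": "The region above the headboard..."},
    {"modality": "image", "content": [771, 9240]},          # zoomed-in patch
    {"modality": "text",  "content": "Answer: yes, a painting is visible."},
]

def count_modalities(chain):
    # Tally how many steps of each modality appear in the chain.
    counts = {"text": 0, "image": 0}
    for step in chain:
        counts[step["modality"]] += 1
    return counts
```

The structural point is just the interleaving: visual tokens participate in the chain of thought itself, rather than only in the input.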
Emad, how excited are you about this?
Yeah, I'm not surprised at all by this. It's very exciting. I think it's actually something fundamental to reality.
Models are the things with the best math that approximates reality. And we've seen some interesting things
before. Like, originally we built Stable Diffusion, and then from Stable Diffusion we extended it to
3D using the same knowledge. We found out, actually Harvard did a study, that an image model understood
3D. Then we extended that out to video, the same thing. Somewhere it actually had a concept of physics
in there. And in fact, if you look at the latest image model that's top of the charts now, FLUX, by the
Black Forest Labs team, my former colleagues, it started with a language model, that then got a video model, that
then became the best image model in the world.
And you see now, for example, Luma recently raised
900 million from HUMAIN and others
to build world models where you input all this data,
image, video, text, et cetera.
Because text is low dimensionality.
If you actually want to understand and reason,
you need to have all the different types of data.
But the latent spaces are actually very, very similar
across them all in terms of their understanding of the universe.
So you can go from a text model to a video model, actually,
just by adding the right types of data,
but the underlying structure doesn't change, which I think has big
implications for, again, the actual nature of reality itself, because each of those is modeling
a different part of reality.
Can you imagine going back to AlexNet when they were putting this together and showing
them this capability?
We're trying to recognize the number seven.
That was a conversation we had yesterday.
I mean, it's truly extraordinary.
It is.
And to experience it firsthand, you know, take a screenshot of something you're doing on your computer, dump it right into Gemini and say, help, what's going on here?
You know, it's incredible that that works.
It would have shocked anyone 10 years ago.
Nobody would have believed you at all.
And here it is.
Go ahead.
Go ahead, Emad.
Well, the crazy thing, I think it's always worth coming back to this, is that if you told someone 10, 20 years ago, they would have thought it'd be like this massive logic tree, right?
We have to remember...
Is it this or is it that, right?
They're just ones and zeros.
They're literally like a movie file
and you push words in or images in one side
and it squeezes out this stuff on the other side.
The reasoning isn't actually reasoning at all
in the way that we think about it.
And again, I think that says something profound
about the way our brains work and the universe works.
But the static group of ones and zeros can do that.
I think what's important to realize here
is where we're going.
All of us are going to
have an AI with visual capability always on, helping you, supporting you. And I think that's a vision
in the future. People say, well, I don't want to lose my privacy and so forth, but it's going to be
watching what you eat. If you want to turn on health mode, it'll tell you, you know, eat more
of that, don't eat that, or there's a staircase over there, go take the stairs instead of taking
the elevator. I mean, the ability for an AI to be your always on, you know, visual Jarvis assistant
is going to be profound throughout our lives,
increasing our efficiency of what we do
and what our objectives are.
Yeah.
Salim, do you want to add on that?
Just to build on that point, Peter,
I'm expecting in a year or 18 months
some sensor that's in your stomach saying,
hey, you're about to eat that donut,
wait 10 minutes because I'm still metabolizing the coffee.
Okay.
And creating radical efficiency
and all these very little things
that we never thought about much
is going to be one of those.
areas that we're going to add a ton of compute against.
So I took a screenshot of our podcast as we're speaking and gave it to Gemini just to prove the point.
And I said, hey, are these guys having fun?
And it completely interprets the scene.
It knows exactly what we're talking about.
And it says, yeah, it looks like it's fun if you're a tech enthusiast.
You like futurism or you enjoy brain food.
It's probably not fun if you dislike technical jargon or you want casual entertainment.
Okay.
That's probably true.
The point is, it completely knows what we're doing from just that screenshot.
And this is going two different directions, too.
It's making the AI more in touch with humans and the way we live.
But the data is not specific to that.
It can also go the other direction where you feed in genetics data, you feed in satellite image data,
and it can then get intuition in those domains where nobody that you know has intuition.
So it's going in both directions at the same time.
So if you kind of study what's going on with vision on this slide, you can develop some intuition
about what it's very soon going to be capable of
with medical imaging, with satellite imaging,
with other types of sensors that we're not familiar with.
I'm still reeling over last week's comment from Alex
that we're taking brain scans and running them through AI.
I mean, that's going to just generate some unreal insights.
Emad has thrown some cycles at that previously.
I know, Emad, we were catching up with some of your former colleagues
from the MedARC days at NeurIPS.
fMRI wants to be its own modality.
It does indeed. I think all the modalities. Again, we just tile the world for some reason, right?
All right, we're going back to one of the stories we opened up with, which is the response China is having, or the leadership it's providing.
So China is accelerating its push to become independent of NVIDIA with Cambricon planning to triple output to a half a million accelerators in 2026.
So this is a response to U.S. policy. It's always that way.
As soon as we restrict a country from buying a product or service that we're providing,
especially if it's fundamental to the lifeblood, they will develop competition.
And without question, I think the competition will, you know, there's a huge amount of intelligence resident in China.
Don't forget the Chinese sort of educational system excels at math and compute.
So I would not expect that they would deliver anything sub-NVIDIA.
Emad, do you want to kick us off here?
Yeah, I think necessity is the mother of invention, right?
We also saw the Moore Threads IPO this week in China.
They raised just over a billion dollars.
They're another competitor to NVIDIA in GPUs.
It was 4,000 times oversubscribed.
So $4 trillion of demand.
Now, obviously, that's a bit much.
but again, you're going to see more and more of this stuff ramping,
particularly for the specific Chinese models,
because having an open-source model that you can test as you're developing it
really closes the feedback loop a lot more aggressively here.
You don't need just to build for one vendor, you can build for everyone,
and most of the Chinese models are optimizing around this very sparse MoE-type structure,
with DeepSeek and similar arches, with Muon-type acceleration.
So you're getting towards one architecture that they can just
engineer and output industrially.
And who's the best at industrial manufacturing?
Yeah.
China has been.
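A minimal toy sketch of the sparse mixture-of-experts routing mentioned above: a router scores every expert per token, but only the top-k actually run, so compute per token stays small while total parameters stay large. This is an illustrative pure-Python sketch, not the DeepSeek architecture:

```python
# Top-k expert routing: score all experts, evaluate only the best k,
# and combine their outputs weighted by (normalized) router scores.

def top_k_experts(router_scores, k=2):
    # Indices of the k highest-scoring experts for one token.
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return sorted(ranked[:k])

def moe_layer(token, router_scores, experts, k=2):
    active = top_k_experts(router_scores, k)
    # Only the selected experts are evaluated; the rest are skipped entirely.
    weights = [router_scores[i] for i in active]
    total = sum(weights)
    return sum((w / total) * experts[i](token) for i, w in zip(active, weights))

experts = [lambda x, m=m: m * x for m in range(8)]   # 8 toy expert functions
scores = [0.05, 0.1, 0.02, 0.4, 0.08, 0.03, 0.3, 0.02]
out = moe_layer(10.0, scores, experts)               # only experts 3 and 6 run
```

With 8 experts and k=2, only a quarter of the parameters are touched per token, which is the sparsity that makes these architectures cheap to serve.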
You know, it's important to remember,
NVIDIA used to supply 95% of China's advanced AI chips.
And when that supply got cut, you know,
there was a red alert going on in China.
And I'm sure the government orchestrates and supports and says,
okay, we need our own NVIDIA, or multiple NVIDIA companies, in China.
Alex, your thoughts?
Yeah, we don't have a slide for this, but I would definitely encourage the audience to read
the national security strategy that was just released in the past 48 hours.
It's most certainly eye-opening, and I think spells out a pathway for tech decoupling
between the U.S. AI tech stack and the Chinese tech stack.
I'm reminded during the Cold War, the Soviet Union had what for those years would have amounted
to an independent tech stack
and was experimenting with all sorts of crazy architectures
like ternary computing and other,
to Western eyes, unconventional choices.
I think we're going to see a Cambrian explosion,
no pun intended, of architectures coming out of China
now that China has been effectively decoupled
from the U.S. tech stack
and maybe many of those innovations
will end up one way or another
benefiting the overall world, benefiting the U.S. tech stack.
I think we'll see a lot more experimentation coming
out of China post decoupling.
So what's the implication of this?
I'd like to spend an extra couple minutes on this,
because China's going to be going as rapidly as possible
developing its models,
it's developed fully its energy ecosystem,
you know, 10x further than the U.S. has.
We're going to talk a little bit about China's desire
to put data centers in space.
I mean, any thoughts on the long-term implications
of this complete parallel development?
between the U.S. and China? I think we see an intelligence race, and that will lead to diversity.
We're going to see so many different architectures that are all competing in the U.S., in the West.
We have a whole handful at this point of Frontier Labs that are all vertically integrating with their own chip
architectures, many of them in partnership with Broadcom or other lower-level infra providers.
Now we're starting to see the same happen in China. I think in the end, this heterogeneity that we're seeing
in terms of tech stacks is only going to further accelerate the race that we're already in to the
finish line. And again, I would pose the question, what is the finish line that we're racing
toward? Because we're going to go much more quickly with this level of integration.
So stepping up a level, is this good for humanity or not?
I think all other things being equal, more experimentation can can probably be better for humanity,
query whether it's good for the U.S. or not, query whether it's good for interoperability or
or not, but all other things being equal, more experimentation is probably better.
Imad and Salim, I'd love to hear your thoughts here.
I think the good news is that more technology development is generally better for the world.
If I think about the counterposition between the U.S. and China, I think a lot of the future
will depend on where you end up with trust, right?
And the challenge with China is people don't trust it.
Now, people are losing trust in the U.S. on a week-by-week basis.
there's that to be considered.
But I think over time, the concept of do you trust Google or do you trust chat
GPT in terms of what the future of AI is going to be?
A lot of it is going to come down to where do we place our trust.
If you're an African nation over time, where will you put your trust, right?
Salim, really important.
I saw a tweet today to entrepreneurs saying if you're building something that increases trust,
to double down. If you're not, then stop doing it. I think trust as a scarce asset is a really
important thing to optimize. And I got a shout out to Jerry McColsky here. You made that phenomenal
comment that scarcity equals abundance minus trust. It's just like amazing. Yeah. I think that you'll
see just like China flooded Africa with smartphones, with TCL and others, you know, these chips
will be very aggressively priced.
To put Cambricon in context, they raised about $2 billion.
Their market cap is $100 billion right now,
and their 590 chip is roughly equivalent to an NVIDIA A100,
the 690 about an H100, but at about half the cost.
And it's much more power-efficient as well.
Does this hit NVIDIA's bottom line?
Not for a while.
For a while, they'll all be used locally,
but as they ramp from 500,000 accelerators to 5 million to more,
and again, China has the full end-to-end supply chain as well,
then you'll see it flooding in a few years' time.
Again, this is in the acceleration phase here.
To put the 500,000 in context,
I think there were about 4 million hoppers sold
and about 10 million blackwells coming.
So in a few years, you can expect even just Cambricon,
someone that no one's really heard about,
They're already at 80% of a generation back
Nvidia chips.
In a few years, you'd probably expect them
just like Tesla and BYD to actually be fully competitive.
And now you're seeing BYD displacing Tesla.
All over again.
Yeah.
It also comes down to the engineering versus legalistic thing, right?
The U.S. is lawyers managing immigrants and engineers,
and China is all engineers with kind of an authoritarian state.
Where will this play out?
This episode is brought to you by Blitzy,
autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents
that think for hours to understand enterprise scale code bases
with millions of lines of code.
Engineers start every development sprint
with the Blitzy platform, bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task,
Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5X your engineering velocity? Visit Blitzy.com to schedule a
demo and start building with Blitzy today.
Emad, you're kind of in Europe, or you're in a formerly European nation.
And we've had this conversation on this pod over and over again about Europe having been
in a sort of an AI winter, or ice age, as the case might be.
So here we see EU to open bidding for an AI gigafactory in early 2026.
Europe is finally making a serious move.
to close its compute gap with the U.S. and China by green lighting an AI gigafactory bidding in early 2026.
Imad, what does this mean?
I mean, every nation needs sovereign compute because their intelligence of their nation will be dependent on the number of GPUs, right?
The EU with their regulatory acts has kind of held AI behind.
But, I mean, recently we saw Yann LeCun is now hiring for teams in Paris.
You know, we see teams in Germany.
We have the stable diffusion team.
others. There's a lot of talent there. It's just they've got to cut the red tape. And we're seeing,
I think, a change in that. And now the UK, Europe and others are like, well, this is the future.
We have to cut the red tape. But the US is still far, far ahead. Can they move fast enough? I saw,
you know, they're going to relax GDPR to give access to data. Finally, I mean, GDPR was just a
chokehold on entrepreneurs. I mean, how are you feeling in the UK right now?
So I think, again, you've seen a step change just in the last few months, and as the agents hit next year, proper agents, not these stochastic parrot agents, everyone has to change.
No country has an option but to change and to go all in on this, because if you don't, then you're going to be left behind.
You'll be outcompeted by your peers.
Well, I'll tell you, just observing without judging, there's a square mile in Palo Alto, a square mile in
San Francisco, and a square mile in Cambridge. And the gap between those three places on Earth... Cambridge, Mass, yeah, not the other Cambridge, not the original Cambridge. The gap between those three square miles and the rest of the world is getting wider and wider at an incredible rate. And I have always the same observations that Emad has. There's amazing talent all over Europe and all over the world. And shouldn't this proliferate out to all that talent? But when I observe, you know, Peter, that meeting we had with Richard
Socher, I heard you go, holy shit, like 12 times during that meeting.
Yeah, yeah.
And there is nothing on the planet, and we'll talk about that soon.
Not that we can say anything about it, but hey.
We can't.
We can't right now, but the gap between what's going on and those three square miles of the
Earth and the rest of the world is mind-blowingly big and accelerating very, very quickly.
So I think, you know, building data centers around Europe is way too little, way too late,
unless it's done with some other, in combination with some other force that I don't
know about yet. But just observing it, that gap is accelerating very quickly.
Europe is best at public-private partnerships. The problem is that in this world, speed is
the ultimate high-order bit, and speed is not the strength there. I mean, you and I have had
so many meetings throughout many of the European nations, Salim, and the energy isn't
there, right? The drive, the absolute, you know,
My favorite Joseph Campbell quote is like a man whose hair is on fire seeks water, right?
I mean, that's what we're seeing right now in the hyperscalers here.
It's like, you know, this code red.
It's like, you know, everybody jumping in.
It's 24-7.
It's not, what was it, it's not 996.
It's, I don't know what it is.
It's 6 to 12, 7 days a week.
It really is that way.
Yeah, that's just it. I happened
to make a point of this with Mustafa, you know, yesterday, and that'll be on our next podcast.
You'll see it. But, you know, he just name-dropped: well, you know, when I was talking to Sam
the other day, and then I, you know, I saw Demis, you know, last night. And then, you know,
Sam and I were thinking about Dario. It's all first names. You know, all of these are just first
names to him. And so it's not, you know, it's not corporate. It's not, you know, data center
investments made by the government. It's this very small group of people that are on a first
name basis that have now, no joke, $500 billion budgets to build this out. So that's what's
really happening. We saw the same thing in the space industry, right? In the United States,
we gave birth to Blue Origin and SpaceX and Virgin Galactic and a whole bunch of entrepreneurial
space companies. And in Europe, it's the military-industrial complex, creating Ariane 5 and a few
other smaller rockets, but they can't compete with the current, you know, entrepreneurial space
industry. The only way to compete is because the government buys local. And that simply makes
the entire space-based services that are launched out of Europe more expensive. It just can't be
supported. Well, and it's changing so quickly, you know, it has to be on a first name, informal
basis, at the rate of change, you know, in Massachusetts, this will drive Alex crazy. But in
Massachusetts, you know, we have a very good relationship with our amazing governor.
And she said, you know what I need to do?
Put together an AI task force.
And so the timeline on that kind of action is like three orders of magnitude slower than the evolution of AI.
So in Europe, in Europe, they would say sometime in Q3 of 2026, we'll start the discussion to create that task force.
Actually, there's a very interesting thing.
There's an initiative called NextFrontier AI to build frontier AI labs.
And again, it's well-intentioned.
It's like they'll give 12 teams €25 million over the next two years to see if they can
accelerate up and get there.
€25 million per team?
Can I just, yeah.
Can I double down on this just for a second?
We are working with one of the biggest companies in Europe.
I have to apologize to our European listeners.
I don't want to make fun of the situation there.
It's serious.
And just spend two years in the middle of it and then go back home.
That's the way to solve the problem.
Look, I spent all of the '90s living across Europe, right, in like five different countries.
So I've got some kind of personal thing here.
In terms of the ability to live and have a great life, Europe is amazing.
But in terms of technological progress, it's not really the place to be.
You have to move to the West Coast and others.
We are working with one of the biggest European companies, it's one of the biggest global companies on transforming their metabolism.
And we finished one of our major sprints with them.
And they said, well, we need to start another one right away.
And this was back in February.
And they said, let's have the first meeting about it in October.
And this is the problem.
The people aren't seeing that the metabolism of everything that is happening needs to accelerate by 100 times across Europe.
for them to jump. And it's culturally blocked, you know. There's an old adage: in Europe,
you work to live; in the U.S., you live to work. Yeah, yeah. Which may not be the very best thing.
Now we're going to live to compute. Yes. All right, let's jump into jobs
and economy. A few interesting articles this week. The first one is Michael Dell's $6.25 billion
investment in America's kids. So it's called Invest America. And it'll give every child
born after January 1st of 2025 an investment account of $1,000. It'll be deposited to build
financial security. Let's take a listen to Michael and Susan Dell, describe what they're doing.
We're making a $6.25 billion investment in America's kids through our charitable funds.
Next year, every American child will be able to get an investment account.
powered by Invest America.
We've seen what happens when a child gets even a small financial head start.
Their world expands.
The real power of these accounts is that anyone can contribute.
Parents, relatives, friends, everyone can help shape a child's future.
To philanthropists, companies, community leaders,
if you want to be part of something truly meaningful for our kids, for our communities,
for our country, join us.
So I, you know, celebrate them for that effort.
There have been a number of players who have talked about similar situations.
Of course, being able to invest versus just save is a part of what's made America great.
Now the question becomes, is this too little too late, right?
Or is it just flat-out irrelevant, given that all education is moving to AI?
You know this better than anyone, Peter.
Well, I mean, one of the questions became, you know, maybe what's going to happen is every single kid will have an AI agent that is out there generating revenue for them, right? Where was the conversation we had about this? No, it was the conversation that Ilya had in his recent pod, saying, you know, one of the problems is if you've got an AI agent that's doing all this work for you, generating revenue, supporting you, representing you, the question is, does the human
fall out of the loop and become irrelevant, and is it important instead to ultimately merge
with AI? But that's a different conversation.
Yeah, that's a different conversation. But I think, and I'm 100% sure of this topic, you know,
having taught at MIT, Harvard, and a little bit at Stanford. There was a huge push to move all
educational materials online when the internet exploded. And it kind of worked and kind of didn't
work. You know, it made all the material available, I mean, surprisingly. I'm 100% sure that now
that there's an AI face and voice that matches your personality on top of that, it's
going to absolutely take off. Sure. And traditional education will be completely irrelevant
imminently, because it matches your accent, your favorite voice, your favorite
star, your self-image. Well, I think this is more than just for
education, right? I think this is more about how do you provide a financial stability. We've
talked about this in the pod before that, you know, this was like a few episodes ago on the
data from F-I-9, that the majority of the world is absolutely concerned about not being able
to be employed and the cost of living. And if you have a seed kernel of capital when you're
born that is growing by the time you're 18, does that give you some additional stability?
Saleem, you were going to say? I have two thoughts about this one is I really, really love the fact
that it has every child born and it's kind of essentially universal. It's creating a wealth
floor for every kid, which means every kid from day one will be thinking about,
huh, investment and how do I think about it, et cetera.
I didn't come across the concept of investment until I was like 16 or 17, right?
If I'd had that way earlier, I'd be a way richer person today than anything else.
And I think that by adding the employer add-ons and encouraging people to contribute,
you're creating a community effort here.
So I think that it may be late to be doing them, so at least it's being done and I've got
to applaud them completely for doing it.
Well, I think also it's just money, so it'll be pivoted over to access to compute.
Let's say, look, education, oh, it doesn't have to be tuition.
You can also get, you know, the GPUs that you otherwise couldn't afford.
That's the currency that matters ultimately.
It's just hard for folks to acknowledge that and understand it.
Emad?
Well, yeah, I mean, it used to be that capital is what compounded, right?
You give people money early, and that $1,000 will have become $20,000.
Now it's compute and cognition that compound
as you move to self-learning systems.
Like, your capital becomes irrelevant,
as the compute can get capital quicker than anyone.
It's all about, again,
how do you build that whole cognition architecture around you?
So I think this is great,
and then the other thing is we've got to give people
access to frontier compute as young as possible as well,
in a way that makes them able to compound the benefits from that.
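To put rough numbers on the compounding being discussed, here is a back-of-envelope sketch in Python. The return rates are illustrative assumptions, not figures from the episode, and the "implied rate" line just shows what annual return the "$1,000 becomes $20,000" remark would require over 18 years.

```python
# Hypothetical illustration (not from the episode): what a one-time
# $1,000 deposit at birth compounds to by age 18 at assumed annual
# returns, plus the rate implied by "$1,000 becomes $20,000".

def future_value(principal, annual_rate, years):
    """Future value of a single deposit under annual compounding."""
    return principal * (1 + annual_rate) ** years

def implied_rate(principal, target, years):
    """Annual rate needed for principal to reach target in `years` years."""
    return (target / principal) ** (1 / years) - 1

for rate in (0.05, 0.07, 0.10):
    print(f"{rate:.0%}: ${future_value(1000, rate, 18):,.0f}")

# $1,000 -> $20,000 over 18 years implies roughly 18% a year, far above
# historical equity returns, so read it as compute/cognition compounding,
# not ordinary capital.
print(f"implied: {implied_rate(1000, 20000, 18):.1%}")
```

At a more conventional 7%, the $1,000 grows to about $3,400 by age 18, which is why the later "hypergrowth" caveat matters for the account being material.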
I would just add: this, to my eye,
looks like the beginning of universal basic equity. We've spoken on the pod about UBI, UBE, UBS.
This looks like universal basic equity, where every person in the economy will have an equity
stake in the economy. And if we do see hypergrowth, macro hypergrowth over the next few years,
then $1,000 in a 530A account, which is, again, what this Invest America is, thank you, Brad
Gerstner, for helping to conceive this idea. What $1,000 in a 530A account
now could grow into a few years from now, if we experience hyper-hypergrowth, could be quite material
to a person's living circumstances. Yeah. And Brad's brilliant, and he's agreed to come on the pod and talk
about his moonshot, so we'll make that happen, probably in early 2026. Along these lines, our next story
here: college students flock to a new major, AI. Okay, not very new for us, but AI majors are
exploding in popularity, with schools like MIT and UC San Diego launching AI-branded
degrees. So Dave, I'm going to go to you first. Everything is AI at MIT these days, right?
It sure is, yeah. So, in MIT lingo, this is Course 6-4, which was only added just a minute
ago, basically. And it's already almost caught up to 6-3, which is core computer science, in terms
of people who are majoring in it. And I'll tell you, when you talk to the students, they say the
curriculum sucks. There are two or three great classes. Well, because you're trying to build an
entire major, and there's only two or three classes so far. You know, it takes the school too long,
to build the material
because they're used to this much slower time scale.
So I'm sure it'll fill in
because the demand is so high,
but as of right now, there's just a couple of classes
and then a whole bunch of garbage,
which is frustrating the heck out of the students,
by the way, everybody wants to move to this.
And for good reason,
no matter what you're trying to achieve in life,
whether it's biotechnology or space travel or whatever,
the way to achieve it is via AI.
So if you get a good grounding in AI,
you're actually then empowered to do virtually
anything. So it's the perfect thing to study anyway, especially when you're young and you have
time on your hands and you can really grind through these complexities. So I'm really glad this
is happening. I just want the curriculum to move much faster to catch up. The interesting note here
to add is that AI-related job postings in the U.S. were up 50% year over year from 2024. So
count on that. That's going to continue. Any other thoughts on this one? I mean, it feels kind of obvious.
The biggest challenge I've got is it shouldn't just be in college. I mean, we should be seeing
this in high school as well. We're going to see a lot of people that are going to skip college,
I think. That's a debate that we've had. So if we can get you started in high school to think
about AI, think about the world you're going to be inhabiting and inheriting and how do you
use this technology to create your vision and your passion? Emad? Well, I'll tell you, if there's
one actionable thing, talking to every school administrator, every high school
principal, every college administrator, it's this: approve the applications. When the students say,
I want to study this on my own, I don't want to study that, just say yes. Just approve it.
Let them, let them carry themselves forward. Don't hold them back.
Intrinsic motivation. Intrinsic motivation. Unleash them. Just sign the document.
My favorite Joseph Campbell quote is, follow your bliss. Let them follow their bliss.
Yeah. That's a good one too. Yeah. I think it's fascinating because the fundamentals of
AI are not actually that hard.
Like, it's not easy math, but it's not like that hard mathematics.
My take is, like, if you have a semi-vocational course where you do fast.ai,
which is a fantastic intro into the math and the basics for programmers,
Andrej Karpathy's video series on YouTube, and then you just vibe code and build, and the entire
class, like, implements some of the latest research every month, that will
put you way ahead of everyone. I think that CVs and qualifications move to: show me what you have
built and done with AI. Yeah, so if you're listening to what Emad just said and you're a student,
take exactly what he said. Take the Karpathy material. Karpathy is the one guy from that original
OpenAI crew who is not a self-made billionaire, because he's building education for the world right
now. He's given up, he could be a billionaire tomorrow if he just signed some documents.
He probably is anyway, actually, from his OpenAI stock. But putting that aside,
he's building out the best educational platform you could ever imagine. Just go find him
online. Then tell your high school teacher or your college professor, I want to study this
instead. Can you allow me to do that in replacement for this class I would have taken?
Brilliant. That's the solution. But what you just said is antithetical to the concept of a university
structure, or a high school, or a high school program. Yeah. I think, just quickly,
I think that if you implement the stuff together and discuss it, and again, that fits with it, it's way better than doing it by yourself.
For sure, but I'm just saying you have to literally hijack a high school curriculum or university curriculum to do that because it's not going to offer today.
To Dave's point earlier, Peter, Lilly, my wife, has been pressured by all the local parents to have a day of just AI mind shift for all the teenagers.
So we're going to do that, pilot it out, and see how it goes.
I heard that you've stolen Max Song, my Strike Force member.
For a day.
Yeah.
I should also point out, Peter, I mean, just want to look at this for a minute
from the perspective of economics.
Right now, AI engineers are complementary good or complementary service to AI compute.
The cost of intelligence is going to zero.
And so right now, pursuing careers and majors in AI, highly complementary.
But, but recursive self-improvement is
also potentially imminent.
And to the extent that recursive self-improvement gives us soon AI engineers, we might start
to see AI itself become a substitute for AI engineering labor, in which case maybe this rush
to major in AI at MIT and UCSD may reverse itself, unwind itself, and everyone goes back
to majoring in the humanities like they used to.
And we had the conversation in the past about, you know, should you learn to code?
And there comes vibe coding.
And one of the conversations we had yesterday up in Redmond, we'll hear about it later this week, was the importance of studying philosophy.
All right, let's talk about the next job boom, which are in data centers where the gold rush is for construction workers.
So the AI data center construction boom is making welders, electricians, and supervisors earn between $100,000 and $225,000,
and these AI companies need that kind of labor.
There's a national shortage of 450,000 skilled trade workers, and it's significant.
You know, this is an alternate career path where you don't come out with hundreds of
thousands of dollars in debt.
You come out with the ability to earn immediately.
How long will this opportunity last
before Optimus 4 or 5 or Figure 6 comes in and does this work for us?
I don't know.
Maybe it's five years, 10 years, something in that realm.
Thoughts?
Agreed.
I think that is the multi-trillion dollar elephant in the room.
As with college majors flocking to AI, in this case, right now,
if you can pursue a career in the so-called skilled trades to facilitate tiling the Earth
with compute, drink your whatever again,
I think that that's a potentially very promising local strategy.
But of course, five to ten years out, and I agree, Peter, with your timelines, we're
going to see humanoid robot substitution effects.
All right.
Let's jump in.
Yeah, go ahead.
The economics and dynamics of AI data centers are really similar to fracking, actually, when
you think about it, even the financial structures and these booms in these industrial kind
of areas.
So I think it'll last longer, but let's see.
I find this next article.
So Amazon is expanding its network after talks with USPS stall.
So you may not know this, but the U.S. Post Office is one of Amazon's main delivery carriers.
So the U.S. Post Office is delivering Amazon packages last miles in rural areas.
It's been a significant, about a $6 billion per year contract between the two.
and that contract is breaking down.
My prediction is the U.S. Post Office will be put out of its own misery, and Amazon will get a contract from the government.
Today, the U.S. Post Office has about an $80 billion per year operating budget, and it's losing $7 to $10 billion annually.
Thoughts, comments, gents.
What happens when we have last mile of robotic delivery services?
Sure. I think we have to prepare for that imminent future here, and that is probably best done by the private sector.
Drones. You know, we saw an article probably about a month ago that Amazon is now giving its drivers augmented reality glasses, right? And it's saying to the drivers, okay, these glasses will warn you if there's a dog in that apartment building or that house, they'll show you where to drop the package, and so forth. And I think what's really going on here is that Amazon is collecting all of the last-mile, or last-100-meter, data
and being able to train its future robots, right?
Autonomous trucks, autonomous robots,
doing that last 100 meters. I keep wanting to say 100 feet.
I hate the fact that we use feet and pounds in the United States.
It really drives me up the wall,
and it drives me up the wall that science fiction writers are using that as well.
Damn it, we should have switched over to metric back in the '60s.
Pain in the ass.
Anyway, yeah, this is going to be an interesting battle.
What's your over-under on how long the post office lasts?
Well, this is an interesting bellwether because it should have been privatized probably 30, 40 years ago.
Everybody knows that, but it's written in the Constitution and, you know, nobody wants to mess with the Constitution.
But it's so obvious.
Like space travel is getting, or space launches are getting privatized.
It's not in the Constitution because space didn't exist when the Constitution was written.
So it can just move over to SpaceX and Blue Origin.
By the way, look at the FedEx line, right?
I mean, FedEx had such an amazing lead.
Fred Smith was such an extraordinary entrepreneur,
and it's been just slowly on a decline.
All right.
My guess is U.S. Post Office has, at max, five years left.
I don't know if anybody wants to go.
Yeah, I'd say about the same.
It'll take an act of Congress.
Yeah, and two-thirds ratification of the states.
It's a structure.
Like, this is a good case study.
It's not that big a deal.
But it's a great chance to learn, like, what are we going to do that's blatantly stupid because of legacy structure and how is that going to get fixed?
So, yeah.
All right.
Let's move on to space, a fun subject.
There are four space stations under development in the U.S. today: Vast, which is being launched by SpaceX; Axiom Space; Starlab; and Blue Origin's Orbital Reef, just throwing this out
because it shows finally we're going from government to truly commercial inhabitation.
Alex, do you want to add anything here?
Yeah, it's not a coincidence that there are four separate private space stations that are about to launch.
These are actually all causally related to a NASA program.
When we speak of privatizing the government, NASA in 2021 started the Commercial LEO
Destinations (low Earth orbit) program, with ultimately $1.5 billion in funding.
And the SpaceX surge to space that we saw was in part the result of another analogous NASA program to try to commercialize space launch capabilities.
Yeah, that's correct.
Yeah, in fact, the commercial cargo program was what saved SpaceX, right?
SpaceX had three launch failures of their Falcon one.
They got the fourth one finally to orbit after Elon literally borrowed money to be able to put that together.
And in Christmas, what was it, 2008, he won a billion-dollar-plus contract from NASA to go forward with Falcon 9, which is today the most successful launch vehicle on the planet by, like, an order of magnitude.
That's right.
So the commercial Leo destinations program was spun up in part because the International Space Station is going to need to be deorbited sometime soon.
And there was a desire for private U.S. space presence to succeed the ISS.
So I'm very optimistic about all of these and other private space stations.
I think we're going to see an exponential rise of humans in low Earth orbit sometime soon.
You know what I'm excited about as well, Alex?
Jared Isaacman.
I cannot wait.
So Jared Isaacman is back on the docket to be our NASA administrator.
I'm not sure when the congressional hearings finalized.
Do you know?
Several days ago.
Oh, is he in finally?
Well, there has to be a vote.
Okay.
The hearing was several days ago.
Okay.
So I've been texting with Jared, and he's agreed to come on the pod as soon as the confirmation is done.
So excited about that.
He is brilliant, absolutely brilliant.
I've known him for a long time.
I took him to Russia to watch the Soyuz flights from some of our commercial launches there.
All right, continuing on.
Wait, I have a quick comment.
Yeah.
This space station thing reminds me of a date. I remember one of our NASA
astronauts telling us the most interesting date in the world for him was October 31st, 2000. It was on
that date that the first human being lifted off for the International Space Station, and since that
date we've always had at least one human being off planet. And so the first molecules
are kind of drifting off this thing. So this next article is a bit of a surprise: SpaceX is
considering a 2026 IPO. I mean, I've had this conversation with Elon. He was
always resistant to take SpaceX public for a number of reasons. When you're a public company,
you have to disclose all the details. He doesn't want to disclose all of his details and how he
operates to his competition. But the other thing in particular was, you know, if you're a public
company and you're spending a whole bunch of money to build Mars vehicles to go and colonize
the Martian surface, is that something which your shareholders are going to support? So,
Listen, I'm a SpaceX investor.
I've held it from the very beginning.
I would love to see it public.
I always thought that what Elon was going to do was spin out Starlink
and take that public and keep the launch capability.
That was the conventional wisdom.
Yeah.
I do think...
Okay, I'm not sorry.
No, I mean, like, Elon's a smart guy, and he's got a million GPUs.
And so they'll have more AI lawyers than anyone to attack the stupid people that come after them.
But I mean, seriously, this is like SpaceX, xAI, Tesla, all will basically have full AI teams top to bottom.
Like you can criticize their strategy.
They all just clap back at you.
You can sue them for the silly stuff and the AI will just clap back.
It's a big difference in the way that you can actually run public companies.
Yeah.
You take SpaceX public and you take X public and Elon, you know, leaps over the trillion dollar mark in terms of personal net worth.
Okay.
Maybe just a comment quickly.
I'm also not sure about the historic story that Starlink would spin out and do its own IPO.
I think with the rise of orbital data centers, I think that muddies the water somewhat
in terms of Starlink as pure communication service versus Starlink as a predecessor
to orbital data centers and putting compute up there and not just comms capabilities.
So in that sense, I think I could imagine a scenario where orbital data centers are actually pulling
all of SpaceX to go public, not just spin-off Starlink.
Yeah, and that's a relatively new part of the conversation.
That's right. It's very recent.
I love this competition. You know, it's fun. My mission has always been to open up space,
and I, you know, built so many companies on the space theme. And it does the nine-year-old in me
so proud and gives me such contentment that two of the wealthiest humans on the planet
are battling it out to open the space frontier. So Blue Origin plans to start flying cargo to the
moon in early 2026 using its New Glenn heavy-lift rocket, which, by the way, recently did a launch
and full recovery of its first stage. And it's backed by a multi-billion dollar NASA contract for
targeted human landings in 2028. The government has always wanted dual suppliers. So, for most
of the American spaceflight industry through the '70s, '80s, and '90s, it was, you know,
Boeing and Lockheed Martin competing for this. Here comes SpaceX, which becomes the dominant
player. And now, you know, the government wants the number two. And it looks like it's going to be
Blue Origin, which is, which is super exciting. Alex, thoughts on this? This was basically the plot of
season three of the television show For All Mankind. I love that show. It's a wonderful,
wonderful show. The three-way race, in the case of season three, was a three-way race to
Mars. In this case, it's a three-way race between SpaceX, Blue Origin, and China to land humans,
again, on the surface of the Moon by 2028 or earlier. And I think this resumption of a space race,
which was dormant for 50-plus years, and maybe also had collateral downsides for the rest of the
economy in terms of overall innovation, it's coming back to life. We're back in the space race
again, and one might hope we'll see a lot of growth in innovation come out of it. The nine-year-old
in me is so happy. Just to put some numbers and size against it: NASA's 2025 Artemis budget,
Artemis is their lunar program, their human lunar program, is $7.8 billion. Let's
compare it to the Apollo program. So in 1966, NASA's budget was about, actually, the Apollo budget
was about $3 billion of NASA's $5.9 billion budget. So Apollo was half of NASA's budget. And if you
adjusted the Apollo budget to today's dollars, it would be about $35 to $40 billion. So
compare that to the $7.8 billion that we're spending
in Artemis. We were spending about half a percent of the US GDP on the Apollo program back in the
'60s. Pretty impressive. And the reason we don't have to do that anymore, of course, is commercialization
and technology. We brought the price down, you know, orders of magnitude. I find this next article hilarious:
Sam Altman enters rocket business to compete with Elon and SpaceX. You know, what's fascinating is
Elon goes for BCI and Sam goes for BCI.
Elon goes for space and Sam goes for space.
There's probably a few other areas.
Any particular thoughts on this one, gentlemen?
Well, I'm really curious to see how the code red interacts with, you know,
Sam has cut a deal in every single dimension related to AI,
including, you know, Jony Ive with the wearable device and Stargate with the data center
and Broadcom with the chips, you know, the TV.
So he's cast it out in every direction and more power to him.
But I think that's Sam personally versus OpenAI, right?
It's a mix.
It's a mix.
The bigger deals are OpenAI, and then there's about 300, 400 personal deals, you know,
that are all the use cases and components.
But it's a massive matrix, this Samoverse, a massive network of connected parts.
But now you've got this code red where, hey, wait, the thing that matters at the middle of it
is these AI benchmarks.
and we're now off the chart on Polymarket, code red, code red.
So I'd be very curious to see what that means.
Because, you know, he has a lot of talent, but there's still a limited supply.
It's not infinite.
And so that all gets drawn back into the middle.
That's going to cut some of the things on the edges.
To give some more detail here, the company he's in discussions with is a company called
Stoke Space, and it's founded by two former Blue Origin propulsion engineers.
As with every, you know, launch company these days, it needs to be fully reusable.
It's a two-stage, fully reusable rocket.
It's never flown, right?
They're using something called a ring-shaped aerospike engine, which also has never flown.
So it's a little bit of a risky bet.
But if Sam wants to enter the orbital data center business, I think having space launch capability matters.
And of course, let's not forget Eric Schmidt.
is also in the rocket business.
Yeah, I think this vertical integration by hyperscalers into spaces
is probably an inevitability at this point
where we're certainly not going to get our Dyson Swarms drink without that.
But I also think, you know, imagine near-term future,
are we going to get a meta-space station?
Are we going to get an anthropic space station?
Maybe clusters in space for everything.
Just like we're getting just like we're getting
hyperscaler fusion plants.
That's right.
Yeah.
Facebook Space Station.
No, I'm not going there.
You've got to be Facebook to control the light cone of humanity, eh?
Exactly.
At least it's a better place to put a name than a sports stadium.
Maybe Instagram Space Station, I'm not sure.
Oh, God.
You guys can be grocers here.
All right, orbital compute energy will be cheaper than on Earth by 2030.
So again, I still find this kind of challenging just because we have so much solar flux on the earth and don't have to worry about launch.
But if we can really get the cost of launch down to $100 per kilogram, which is the projection with Starship, versus $500 to $1,000 per kilogram today, perhaps. Who wants to jump in on this one?
I'll just comment.
It could also be even cheaper than this once we get compact fusion online.
A lot of these orbital compute projections are assuming that solar is the primary power source for orbital compute.
It doesn't have to be.
Once compact fusion is cheap enough, and there's no reason to expect it won't be, we can tile low Earth orbit, at minimum, with compute as well.
And it won't require all of these expensive solar panels.
So we're going to throw fusion reactors into orbit?
Yes.
Wait, so the theory there is launching a fusion reactor is cheaper than just a solar panel?
A solar panel in space is about 10 times more
effective than it is here on
Earth, but it's still cheaper to launch
a fusion reactor? It's maintenance.
I just won the
bingo game. We said: launching a fusion
reactor into space.
Salim wins.
No, so, I mean,
solar and space is
still fusion-based. It's just using the fusion
reactor at the center of our solar system.
So I think the question is, where do we
want fusion power to be located
for space-based compute?
And as for fusion reactors,
there have been a number of deep space probes that NASA and other organizations have launched
that are using fission, for example, for ion-based propulsion.
It's not like nuclear energy is that foreign for space.
Fission thermal has been used by deep space probes for decades.
It's not like we don't know how to do it.
What's going to be new is compact fusion in particular.
We've put fission-based energy in space for decades.
You know, Alex, I'm looking at the numbers here, and it says
current terrestrial average is $12 per watt, and we're talking about $6 to $9 per watt in space.
That's not enough of a difference for the level of complexity.
Yeah, you're going to need at least a 10x drop in that.
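To make the "not enough of a difference" point concrete, here is a quick sanity check on the quoted figures, taking the $12 per watt terrestrial and $6 to $9 per watt orbital numbers at face value (they are as quoted in the conversation, not independently verified):

```python
# Back-of-envelope check of the quoted power-cost figures.
# All numbers are as quoted in the conversation, not independently verified.

terrestrial = 12.0                    # $/W, quoted terrestrial average
orbital_low, orbital_high = 6.0, 9.0  # $/W, quoted range for space

best_ratio = terrestrial / orbital_low    # optimistic end of the range
worst_ratio = terrestrial / orbital_high  # pessimistic end of the range

# The advantage works out to roughly 1.3x to 2x, far short of the ~10x
# drop argued to be necessary to justify the added complexity.
print(f"orbital advantage: {worst_ratio:.2f}x to {best_ratio:.2f}x")
```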
Yeah, so maybe it is compact fusion.
I think if we're actually mining the moon, dare I say, disassembling the moon to build, you know, the beginnings of Dyson swarms.
So we've already won the drinking game.
Yeah, well, hey. Maybe that occurs. But, you know, I love the fact that it's now orbital data centers that are driving humanity's expansion to space. That's amazing. I never would have guessed it. It was going to have to be something. I mean, if you look at all the sci-fi plots, it was either going to be the discovery, again, in For All Mankind, without spoiling too much, it was either going to be ice on the moon, or the discovery of microbial life on Mars, or something like that, that had to motivate space exploration and development. Who knew that it was going to be data centers? Well, it had to be something.
It had to be. I remember when I was in college, I was at MIT and I was running SEDS, and I put together this brochure on why open the space frontier, and I used to have to rationalize, like, better materials there. I mean, there was always this very soft rationalization, but this is real industry, a real base. People were trying to figure out, like, what could we manufacture in space that has value here on Earth? Guess what? What you didn't see coming? This is the moment. This is the moment you've been waiting for since you were an undergrad. I mean, you should be like a kid in a candy store
on this, but it totally makes sense. And it always drove me nuts when, you know, like Romance, our buddy, when he graduated, he went to Ford and was working on wire harnesses and rearview mirror motors for Ford. And like, why do we need another electric component in a Ford? Why don't you work on space or something foundational that changes humanity?
And it's like, well, because it's kind of like your iPhone now.
A new feature for this massive installed base is economically incredibly valuable,
even though it's marginal for society.
So it sucks up way too much great talent.
And something really important like space data centers doesn't get worked on.
But you need an economic crack that starts the whole process.
And this is it.
We finally have it, you know, after.
And it is so much better of a storyline than For All Mankind.
You know, actually it's a lot later in this history.
I'm still betting, I'm still betting on asteroid mining.
I mean, everything we hold of value on Earth, metals, minerals, energy, real estate, exists in infinite quantities in space.
So, you know, those nickel iron asteroids that are worth trillions of dollars in platinum group metal
or those carbonaceous chondrites, we're going to mine for oxygen and hydrogen for fuel.
Can I burst your bubble there, Peter?
Oh, don't do it.
I think the AI transformation of materials science will route around all of those scarcities.
I don't know.
Or maybe it'll accelerate it.
I don't know.
Peter, when you were founding SEDS, did you, I'm guessing, not foresee the plot twist that the killer app for space would actually be getting enough compute available to do generative cat videos doing funny things?
That I was not able to project that far ahead, to be honest.
So who knows what the asteroid belt will actually end up being used for?
So, you know, lest we...
We'll assemble it for compute, obviously.
Lest we leave the Chinese out of this: China, a company called Cosmospace, is planning to build AI data centers in space.
They're putting up a super computing cluster with three modules.
One's got a 100-megawatt-level energy module.
The other is a 10-terabits-per-second comms module.
And the third is an exaflop-level, 10 to the 18th operations per second, compute module.
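To put a 10-to-the-18th-operations-per-second module in context, here's a rough sketch of how long a frontier-scale training run would take on it. The 10^25-FLOP training budget and the utilization figures are illustrative assumptions of mine, not numbers from the episode:

```python
EXAFLOP = 1e18           # ops/sec, the quoted compute level of the module
TRAINING_FLOPS = 1e25    # assumed frontier-scale training budget (illustrative)

for utilization in (1.0, 0.4):   # ideal vs. a more realistic utilization assumption
    seconds = TRAINING_FLOPS / (EXAFLOP * utilization)
    print(f"{utilization:.0%} utilization -> {seconds / 86_400:.0f} days")
```

At perfect utilization that works out to roughly 116 days for the assumed budget, and closer to 290 at 40%, so a single exaflop-class module is in the ballpark of one serious training run per year.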
Alex, what do you think about this?
I think we're seeing a race to build Dyson Swarms.
It's as simple as that.
It's not just a race to the moon.
It's not just a race to tile the Earth.
It is a race to put as much AI accelerated compute into low Earth orbit as possible.
And Cosmospace emerged from nowhere.
I had never heard of them until several months ago, and I don't think this otherwise obscure Chinese provider is going to be the last story we hear about Chinese orbital AI compute.
We're going to see probably a dozen different vendors from China.
We'll see, as just discussed, a dozen hyperscalers in the West, and the concern maybe becomes
like overpopulation on Mars, making sure that all of these Dyson swarms remain interoperable.
Can we just point out to all of our listeners: if you've been following us on Moonshots, this conversation about orbital data centers did not exist four months ago in any way, shape, or form. It was there, I'm sure people were speaking about it, but it's now become a weekly conversation over the last three months. It literally came
onto the scene with a vengeance. It's extraordinary. Technology clearly works. It's proven technology
now, so it's just a question of launch costs. That's the only missing link, you know,
and it looks promising.
Did they figure out the heat dissipation side of things?
I think that was still outstanding.
Yeah.
No, no, there you go.
I mean, I'm shocked.
There is still an issue of how you get efficient heat dissipation on the back end.
In fact, that was a subject.
It was proposed as an XPRIZE this year at Visioneering.
So they'll figure it out.
It'll make more sense.
But the thing that always gets me with this is it's horribly insecure.
Like, you had the proposal by Eric Schmidt and others that countries in the race could go after each other's data centers.
Space data centers, I think, will go up and just start disappearing, honestly.
Yeah, I'll give you the counter argument.
And I totally agree, by the way, so I don't want to, I don't, but just to give you the counter argument,
the hardware depreciates in three years anyway.
So you only need it to be secure for three years.
So I think the counterargument is that the U.S. Space Force will basically guarantee enough safety that you can get three years of hard work out of it before something bad happens, and that's all you need to pay it off.
So what about solar flares? One of our subscribers asked that question. What happens when you have solar flares hitting these? And what happens when there's an EMP that hits them as well? Right, all of this gets knocked out.
Actually, I've got a very interesting thing.
So we were training on thousands of A100s a few years back, and we kept getting errors exactly at the time of solar activity, because it was basically messing with the ECC memory.
Wow.
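For context on why ECC memory is the thing radiation messes with: error-correcting codes store extra parity bits so that a single flipped bit can be located and repaired from the parity checks. A minimal Hamming(7,4) sketch of that principle follows; real ECC DRAM uses wider SECDED codes, so this illustrates the idea, not the actual hardware scheme:

```python
def hamming74_encode(data_bits):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7; parity at 1, 2, 4)."""
    code = [0] * 8  # index 0 unused so indices match the 1-based convention
    for bit, pos in zip(data_bits, (3, 5, 6, 7)):
        code[pos] = bit
    for p in (1, 2, 4):          # parity bit p covers data positions with bit p set
        for pos in (3, 5, 6, 7):
            if pos & p:
                code[p] ^= code[pos]
    return code[1:]

def syndrome(codeword):
    """XOR of the 1-based positions of set bits: 0 if clean, else the flipped position."""
    s = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit:
            s ^= pos
    return s

cw = hamming74_encode([1, 0, 1, 1])
cw[2] ^= 1                 # simulate a radiation-induced flip at position 3
pos = syndrome(cw)         # the syndrome pinpoints the flipped position: 3
cw[pos - 1] ^= 1           # flip it back; the codeword is repaired
print(pos, syndrome(cw))   # -> 3 0
```

The Wissner-Gross point stands either way: a strong enough particle shower produces multi-bit upsets that outrun the correction capacity, which is why the errors surfaced during solar activity.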
We'll find out how these things hold up in space.
You know this better than anyone, Emad, but a little inside scoop on the million-GPU compute clusters that are being built now.
They have errors here on Earth too.
And you have to solve that problem
in order to have coherent training anyway.
So the error rate goes up a lot in space,
but you have to have a process for backing off.
Right now everybody does checkpoints and rollbacks, but you can't invest an hour of a million GPUs, at millions and millions of dollars, and say, oh, wait, we have an error, we're going to roll back all of those GPUs for an hour. So you have to do it unit by unit. And so that work is well underway. So presumably that'll work fine in space, too, and you can tolerate the error rate.
All right. The other comment, of course, is those models just want to learn. So for
AI compute workloads, to the extent we're doing training or inference, they can be structured
to be fault tolerant.
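The unit-by-unit rollback idea Dave describes can be sketched as a toy simulation: every worker keeps a cheap local checkpoint each step, and when one unit hits an error, only that unit restores and replays instead of rolling the whole cluster back. The `Worker` class and failure probability here are illustrative, not any real training framework's API:

```python
import copy
import random

random.seed(0)

class Worker:
    """One data-parallel unit with its own local checkpoint (toy sketch)."""
    def __init__(self, wid):
        self.wid = wid
        self.state = 0       # stand-in for this unit's model/optimizer shard
        self.checkpoint = 0

    def save(self):
        self.checkpoint = copy.copy(self.state)   # cheap, local, per-unit

    def restore(self):
        self.state = self.checkpoint

    def step(self, fail_prob):
        if random.random() < fail_prob:           # e.g. a radiation-induced error
            raise RuntimeError(f"worker {self.wid} hit an error")
        self.state += 1                           # stand-in for one training step

workers = [Worker(i) for i in range(4)]
for _ in range(10):
    for w in workers:
        w.save()
        try:
            w.step(fail_prob=0.1)
        except RuntimeError:
            w.restore()            # only the failed unit rolls back...
            w.step(fail_prob=0.0)  # ...and replays its step; others keep going
print([w.state for w in workers])  # -> [10, 10, 10, 10]
```

The point of the sketch is the contrast with a global checkpoint scheme, where that same error would have discarded a step of work from all four workers instead of one.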
Yeah, Microsoft was actually the leader in that,
and now Google's the leader in that.
It's just seamless the way that it flips.
It'll be interesting once we get up to space.
Our last topic here.
We're going to dive into robotics just for a few stories.
Here we are.
After an AI push, the Trump administration is now looking to robots.
So robotics is the focus of a 2026 executive order to accelerate U.S. robotics development.
We're seeing this in China, right? China is crowning its winners. It's got huge investments into the robotics industry. In fact, on our last pod, we talked about the fact that there is, at least what the Chinese are calling it, a robotics bubble going on today, with over 150 Chinese robot companies. I find this one fascinating.
After the acceleration, major national robot strategies are coming online, and robotics firms are likely to get tax credits, subsidies, and protection against trade measures, something along the lines of the CHIPS Act once again.
Thoughts, gentlemen?
I'll comment that the zeitgeist at NeurIPS this year was that humanoid robotics is the next big thing for AI after agents.
I think this is how re-industrialization of the U.S. and the West happens. I think this is probably the best path for radically increasing economic growth and automating the two-thirds of the services sector that relies on physical intervention. This is instrumentally convergent with the future that we want.
Yeah. Just to remind folks from our last pod, we talked about the fact that China installed 54% of the world's total robots last year. So again, a massive, massive push.
I want to show a couple of quick videos here just for fun to close us out.
We saw in the last week a little bit of Optimus versus Figure competition.
Elon posted this image of Optimus walking. Let's take a look. Here it comes, running along, jogging.
And then we had Brett posting this one of Figure running across.
I have to say, they look pretty natural compared to where they were six months ago.
Which one did you like better?
Let me play this again.
Here comes Optimus.
Optimus coming along.
I don't know.
It kind of looks like Figure's doing a better job running, to me.
What do you guys think?
I think they're both incredible.
And I'd also throw out maybe a request to the audience for the show.
If you're interested in supporting robot athletics in the United States,
either as a host or as a vendor or in some other capacity, please reach out to me.
I'd like to do what I can to ensure U.S. dominance and Western dominance in general with
humanoid robots via robot athletics.
Yeah, well, let's take a look at Chinese dominance with this video.
Then we can talk about it.
So we saw last week the T-800 humanoid robot from Engine AI in China.
This is five foot eight inches tall, incredible capabilities.
They put out a new video that I wanted us to take a look at here, because it's a little bit shocking.
All right.
We are 100% sure this is real, by the way?
Yeah.
This is, they claim it's real, and this is a follow-on.
But so here we are with this T-800 robot basically kickboxing, but check this out when it goes up against a human opponent.
Like, wow.
Kind of scary.
I'll go back to my standard comment that having a robot doing kickboxing is not a great marketing message.
Yeah.
I think calling it at T-800 is also not a great marketing message.
What about all the skulls that they put in there on their...
Can I do a little rant here?
Yeah, we love your rants, Salim. Go for it.
Look, a human being.
has evolved for four billion years, which is an optimized strategy.
Well, 200,000 years as a human being.
Whatever, for survival.
We have the human structure to survive and be able to quickly pick fruit off trees.
We have opposable thumbs and whatever.
I mean, a wheel is so much more energy efficient than walking.
It's ridiculous.
I think, for God's sakes, at least put those little wheels in the bottom of the robot, like the kids with the wheels in their sneakers, so they can be more efficient, as battery power seems to be a huge limiting factor in this.
So why don't we have a wheel along with the leg so they can do both when it's needed?
This just having robots copy human beings seems to be the stupidest thing in the world.
It feels to me like when we first had TV, we took radio announcers and put them on television reading the same scripts.
We're going to be doing a podcast.
We're going to be doing a podcast from Figure's headquarters in Palo Alto in January.
You're not invited.
Yeah, I don't know. I think this is really interesting, you guys, this kickboxing robot.
Yeah, this is really interesting. So the entire chest cavity of the robot is actually a battery here. But this is really interesting because of the 450 max joint torque; that's the really interesting part. Basically, this thing can punch harder than a gorilla, like four times a Mike Tyson.
Do you really want those to be walking around in the streets? Like, are they going to have to put regulations on the max joint torque?
Also, if you actually look at the full video, which is real, they even show a behind-the-scenes one. And if you look at the previous video, there's something called sim-to-real, which can basically model human actions in a robot. So we have almost full Real Steel, the movie with Hugh Jackman, teleoperation capabilities now in robots, and soon it'll be policy learning. Like, this robot can do freaking kung fu with greater...
UFC is coming. Punch through?
UFC is coming. We're going to see the Tesla bot, you know, basically Optimus versus the T-800. I mean, it's going to be Olympic-level sports. It's going to be amazing. Olympics versus UFC. I think that would be exciting. But we have to actually ask: do we actually want regulations around the max joint torque of humanoids in the street? Because I'm fine with these being in the fighting arena, I think it's fantastic, or in industry, but I don't really feel comfortable with them walking around.
The challenge comes. No, no, the kitchen. They're going to be in the kitchen.
The challenge comes when they enter warfare, right?
I mean, this is Terminator in sort of the pure sense of robots on the battlefield.
And it's a scary direction for us to take humanity.
Yeah, I think it's all of the above.
It'll be warfare.
It'll be in the kitchen.
It'll be on the street.
And you'll see, I think, governance and governments at different levels, whether it's municipal or national or international, regulating the parameters of what the rules of engagement are, making an omelet versus fighting a war.
All right.
I want to just do a, you know, a thanks to C.J. Trueheart who gave us our first song on the moonshot mates.
This is an outro piece called The Exponential.
But before I play our outro piece, gentlemen, it's been a blast to spend time with you guys again.
I love this.
Emad, it was wonderful to have you as a fifth here today. Grateful for you.
What's your week ahead look like, Emad?
Lots of policy work and more agent stuff.
We've got lots of releases coming.
It's exciting times.
And how was Japan?
You were there for FI. Japan.
That was fantastic.
Huge amounts of corporate and government interest in using AI to help accelerate the way forward.
And so, again, hopefully some announcements about that soon.
Yeah, and Dave and AWG you're about to hop your flights back to Boston, I gather.
I think collectively we covered 12 countries in the last week and a half in this group.
So it'll be nice to be home for at least a week.
Nice.
And same for you, Salim, a chance to stay home.
Yes, I'm here for a bit.
I just got back.
So I'm preparing for the big online meaning-of-life session, if people are interested.
Come armed with any question you have about life.
And let's see what my frame makes of it.
Any question.
When is it, Salim?
It's December 17th, 11 a.m. Eastern. We'll go for several hours on metaphysics and philosophy.
Yeah, and when Salim says go for several hours, like six to eight hours, get prepared.
Well, it's a big topic. You know, there's a lot to cover.
It is for sure.
This is an example: we start off on a conversation of what is truth, and break it down into a two-by-two framework, just for the sense-making.
It allows us to have a decent conversation.
What do we mean by that?
Well, on Monday, I'm heading up to the Buck in the Bay Area to talk about longevity and
AI, my favorite one-two punch.
And with that, let's listen to the music of C.J. Trueheart as we wrap this episode,
gentlemen, see you on the next episode of Moonshots.
Thank you to all our subscribers.
If you haven't subscribed yet, please do.
We're now putting out more than one episode a week just because things are moving so rapidly. So if you want to know when the episodes drop, a quick hit of subscribe. And let's listen to C.J.
But something's shifted in the code
Now watch the numbers grow
One becomes two, becomes four, the doubling never ends
Deceptively flat at first, then vertically ascends
Can you feel it building the pressure in your chest
This is the moment where the future manifests
We're rising exponential
Straight up to the sky
Every second accelerating
This is the time to be alive
Digitize, disrupt, democratize the dream
Nothing's ever been this fast
We're breaking through the scene
On the curve of infinite
Rocket boosting now
Now the inflection point is here, and we're never coming down.
All right, if you're a music producer using AI and you want to give us an outro, just go ahead and let us know.
And please, when you're watching this and you have questions, post them in the comments.
We are going to do more AMAs in the next couple of sessions.
Gentlemen, moonshot mates, Dave, AWG, Mr. ExO, Emad, thank you, guys. Have a fantastic week.
Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead.
I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport, energy, longevity, and more.
There's no fluff.
Only the most important stuff that matters, that impacts our lives, our companies, and our careers.
If you want me to share these metatrends with you, I write a newsletter twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else, this report is for you.
Readers include founders and CEOs from the world's most disruptive companies and
entrepreneurs building the world's most disruptive tech.
It's not for you if you don't want to be informed about what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to Diamandis.com slash Metatrends
to gain access to the trends 10 years before anyone else.
All right, now back to this episode.
