Moonshots with Peter Diamandis - GPT 5.2 Release, Corporate Collapse in 2026, and $1.1M Job Loss w/ Alexander Wissner-Gross, Salim Ismail & Dave Blundin | EP #215
Episode Date: December 13, 2025. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified. Recorded on December 12th, 2025. The views expressed by me and all guests are personal opinions and do not constitute financial, medical, or legal advice.
Transcript
OpenAI releases GPT 5.2.
The capabilities are just shockingly different than they were a few weeks prior.
OpenAI has just unveiled GPT 5.2, which it's billing as its most advanced frontier model yet.
The value that we see people getting from this technology and thus their willingness to pay makes us confident that we will be able to significantly ramp revenue.
The fastest scaling consumer platform in history, we're almost at a billion users. That just blows my mind.
A lot of change is coming rapidly.
I think the biggest challenge is people are not projecting properly how rapidly this is going to tip.
I think 2026 is going to see the biggest collapse of the corporate world in the history of business.
In 2025, we had 1.1 million layoffs, which is the most since the 2020 pandemic.
71% of comparisons between a human performing this knowledge work and the machine resulted in the machine doing a better job.
at more than 11 times the speed of the human
and at less than 1% of the cost of the human professional.
So knowledge work is cooked.
Now that's the moonshot, ladies and gentlemen.
Speaking of alien creatures,
I was touring Colossal yesterday with Ben Lamm.
I'm an advisor and an early investor in this company,
and Colossal is amazing.
They've got something like 12 different species at different stages of de-extinction, right?
They brought back the dire wolf.
They're going to be bringing back the saber-tooth tiger.
I can't wait for that.
Of course, the woolly mammoth.
They created the woolly mouse, right?
So they've been able to identify the genes that correspond to particular phenotypes, right?
Like length of hair, length of snout.
and it's fascinating what they're doing
and their ability to actually find the closest living relative
and then snippets of DNA.
So they have DNA going back as far as 1.2 million years.
They haven't been able to get DNA older than that.
But that's still pretty incredible.
But being able to actually like...
Didn't Ben say that we couldn't restore animals
if the DNA was older than like 10,000 years?
Well, for example, the woolly mammoth DNA that they've gotten ranges from like 10,000 years to 1.2 million years, right?
And they've got to identify, that's not a single species.
That's a whole spectrum of a species, right?
Because there's evolution going on all that time.
And so they're trying to figure out, okay, which genes drive the phenotypes, like the tusks and the woolly mammoth hair and its cold tolerance and all of those things,
and they're reconstructing a simulacrum, you know, an approximation of the woolly mammoth.
Anyway, the programs are amazing.
And Ben is such an incredibly good CEO.
I'm excited.
He's going to be one of our moonshot closing speakers at the Abundance Summit this year.
So we're going to go deep with how do you go from zero to $10 billion valuation in four years?
And how do you do that with no bio background at all?
Ben had none, right?
He was the CEO of Hypergiant, the software company.
Incredible.
So, Salim, your multi-armed robot can shear the woolly mouse,
and then we can make sweaters in time for the holidays out of it.
I'm very excited.
And we can all wear them on the pod.
By non-humanoid robots.
All right.
All right.
I think it's time to jump in with enthusiasm.
Yes.
All right.
Welcome to Moonshots.
Another episode of WTF just happened in tech.
This is the news that hopefully impacts you, inspires you,
gives you moonshot thoughts and gets you ready for the future, because that is one of our
primary goals. How do we prepare you for what's coming next? A lot of AI news. Today is a special
episode that we pulled together to celebrate the release of GPT 5.2, but we'll get to that
in just a moment. I wanted to hit on some of the top-level hyperscaler updates
and battles. So just a few headlines here. We'll be discussing them through the pod here today.
ChatGPT was the most downloaded app in the iOS App Store in 2025. Congratulations to them.
They're nearing 900 million active users. Gemini is catching up. Anthropic jumps to 40% enterprise share.
Amazing. Accenture is going to be training 30,000 people on Claude.
Elon has let us know that Grok 4.2 is coming very shortly, in the next few weeks, and Grok 5 in the next few months.
As we said a moment ago, OpenAI has released GPT 5.2.
That's going to be coming up in a moment.
And interestingly enough, Google launched its newest deep research AI agent the same day that OpenAI dropped GPT 5.2.
a little bit of PR battles going on between them all.
All right.
One other piece of data on the downloads here to give people a look at the scoreboard.
ChatGPT received 902 million downloads.
Gemini is at 103.7 million downloads and Claude has received 50 million downloads.
Any comments on these opening headlines before we jump into
GPT 5.2? Well, I'm in shock this week at the capabilities. We'll look at the benchmarks in a minute,
but the benchmarks really undersell the last two weeks. The capabilities are just shockingly different
than they were a few weeks prior, and we'll get into it. But also, the big, big change is the race is
on. You know, when GPT-5 kind of disappointed everybody, the Polymarket odds on Google running away
with the rest of this year went to like 90, 95%.
Now, kind of as Alex predicted,
it's a closer horse race.
You know, Google's still on top of the stack,
but apparently Sam had something in the tank.
And who knew?
So we'll get into that too.
But I'm just absolutely like, no exaggeration.
The things that I got done in the last week
that I couldn't have done three weeks prior,
just coding and building things,
are... it's just, I'm in shock. So are they pulling their punches? We discussed that in the past, right?
Where they're releasing this much, they know that, you know, we're going to have Grok coming out next,
so let's then release the next segment to compete directly there. They are totally pulling their
punches. They've absolutely been holding back, I think because they're starved of compute and
they're afraid to roll out, you know, addictive capabilities that they just can't deliver on. But
You know, Alex experienced this, too.
Like, yesterday we were, you know, going crazy with 5.2 trying to see what it can do.
And then it's like, sorry, you're done for today.
We're out, we're out of compute, sold out, no gas in the tank.
And so the competitive pressure is forcing them to code red, you know, come out with things
when they normally would want to hold back and wait until they can find the data center compute
and wait until Chase Lochmiller finishes Abilene.
But they just don't have that choice with the competitive pressure on each other.
Yeah, maybe just to comment.
And I think, at this point, if you're OpenAI and you have your purported code red and you're in a hurry, you're in a bind: GPT 5.1 came out only a month ago.
And you need to rush something to market, to put at ease perceived competitive pressures.
I think there are only approximately three levers you have.
So one lever to Dave's point is compute.
You can increase the total amount of compute allocated to given models. And that, of course, comes at a cost. It comes at the cost of compute scarcity. It comes at the cost of longer response times to prompts. The second lever that you have
is safety. So you can turn down the safety. You can make models more sycophantic. And that's
a way to improve. Can we get a benchmark on the sycophantic models?
There are a bunch of benchmarks for sycophancy. Or compromising your ideals to win the market
in general.
Yeah.
Right.
So call it the safety knob is the second knob that you can turn if you're in a pinch.
The third knob that you can turn is the post-training knob, which can be done on relatively
short notice.
So you can pick particular benchmarks that you want to really post-train your models to do really
well on.
And I suspect all three of these (more compute; maybe, maybe not, some turns of the safety
knob; and post-training on select benchmarks) is exactly
what we're seeing in this cycle, now that we have a real horse race.
I found it fascinating.
We've got probably the fastest-scaling consumer platform in history.
We're almost at a billion users.
That just blows my mind.
It's starting to eat the operating system.
I mean, like when you start to get on the order of magnitude of a billion downloads,
at some point you have to ask the question,
is this AI user interface basically cannibalizing the entire OS itself?
At what point, sometime soon, is every pixel that shows up on a mobile device being AI-generated?
I think we're not too far from that.
Well, that was definitely the backstory, too, when we were at Microsoft last week with
Mustafa Suleyman.
Is that podcast out yet?
I'm not sure what the order of releases is.
Coming out shortly.
Well, look forward to that one, because what Alex just said is clearly in the minds of Microsoft.
They're going to do everything and anything they can to get on this chart that we're showing
right now.
and they have a lot of assets that will come up in that pod
that'll give them a really good chance of getting there.
But it's for exactly the reason Alex said.
The OS, the whole base of Microsoft,
the revenue driver for the last 30 years,
is at risk now.
And you've got to move to the new thing.
It's not just OS, right?
It's the entire app ecosystem.
I mean, the end goal here is for these hyperscalers
to capture the user as the only AI you need to use.
The so-called core subscription. That certainly is OpenAI's stated strategy: to become the default, quote-unquote, core subscription for consumers. Anthropic's strategy apparently is to focus on enterprise APIs and codegen; xAI is focusing on brute-force scaling and maybe benchmarking; and Google is focusing, maybe in a more balanced way, on total stack domination: balanced pre-training and post-training.
So I think in a real horse race, which is what we're finding ourselves in among the top four frontier labs,
we're starting to see differentiated strategies coming to market.
Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead.
I cover trends ranging from human robotics, AGI, and quantum computing to transport, energy, longevity, and more.
There's no fluff. Only the most important stuff that matters, that impacts our lives, our companies, and our careers.
If you want me to share these metatrends with you,
I write a newsletter twice a week,
sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends
10 years before anyone else, this report is for you.
Readers include founders and CEOs
from the world's most disruptive companies
and entrepreneurs building the world's most disruptive tech.
It's not for you if you don't want to be informed
about what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to diamandis.com/metatrends to gain access to the trends 10 years before anyone else. All right, now back to this
episode. All right, let's jump into the core story here today. OpenAI releases GPT 5.2. We spun up this
pod for our subscribers the day after the release so we can go into detail. What does this mean?
You know, we heard OpenAI's code red, and here's the result. Alex, take it away.
Yeah, I've been waiting for this all day, Alex. All right. Dave, you want to lead us?
Or Alex here.
Oh, no, I just want to say that these numbers, when they go from 80 to 90, it really
understates the impact on what you can do.
You know, with a benchmark, when it goes from 10 to 40, it looks like a big gain on a line
chart.
But when it goes from 80 to 90, it doesn't look like a big gain.
But what you can do, like firsthand, is just mind-blowingly different.
And I'll tell you some of the things I've done in a minute.
But I've been waiting all day to hear Alex's interpretation of the chart.
For those who are listening versus watching, there's a chart of the benchmarks comparing GPT 5.1 Thinking
against GPT 5.2 Thinking.
And with that, if you don't mind sort of speaking the percentages as well, Alex, as we're going
through this, that would be great.
Okay, sure.
So maybe some high-level comments, and then we can do a detailed play-by-play.
So high-level comments, one, keep in mind what I said a couple of minutes ago.
So really, if you're OpenAI and you need to rush an impressive model release to market,
there are probably only three knobs you have.
One, you can turn up the compute; two, you can play safety games; and three, you can do
post-training on particular evals, particular benchmarks.
So that story, maybe not the safety story, but the other two knobs, I suspect is what we're
seeing here.
So walking through this chart benchmark by benchmark: we have SWE-bench Pro, which is a software
engineering benchmark. We see a modest improvement between 5.2 and 5.1, perhaps attributable
mostly to compute and a little bit of additional post-training and/or distillation. We have Google-Proof
Question Answering (GPQA) Diamond, a modest increase from 88.1% with GPT 5.1 to 92.4%. Again, so far pretty
modest. We have CharXiv reasoning, a larger increase. This is scientific chart reasoning,
could be post-training, not a benchmark that I pay super close attention to.
Then we get to Frontier Math. Frontier Math tiers 1 through 3, which are easier math problems,
and then one of my favorite benchmarks of all time, Frontier Math Tier 4,
which is research-grade problems in math that are supposed to take professional mathematicians
several weeks to accomplish. I often point to Frontier Math Tier 4 and progress on Frontier Math Tier 4
as indicative that research-grade math is being solved.
So focusing on Frontier Math Tier 4,
we see Gemini 3 Pro getting approximately 19%,
GPT 5.2 Thinking getting 14.6%,
and GPT 5.1 Thinking getting 12.5%.
So this is actually a win.
In my mind, this is a win for Google
and a loss for OpenAI.
OpenAI has had a month to attempt to scale up, to beat Google in this horse race on hard open, or rather hard closed, math challenges of professional-mathematician grade, but nonetheless couldn't beat Gemini 3 Pro.
And it's not as if these problems have been a state secret.
In fact, OpenAI actually sponsored Epoch AI's creation of the Frontier Math benchmark.
So OpenAI has had, in some sense, privileged access to all of Frontier Math and still couldn't beat Gemini.
So I think that's pretty instructive.
Moving down the list.
AIME, the American Invitational Mathematics Examination, 2025: scoring now 100% with 5.2 versus 94%, suggestive of post-training.
Then we get to the second set of benchmarks that I think are super interesting.
Arc AGI 1 and 2.
ARC being the Abstraction and Reasoning Corpus, and of course, AGI being AGI.
So for those who don't pay super close attention to ARC AGI,
ARC AGI, sort of a visual reasoning challenge,
testing whether problems that humans find relatively easy,
sort of a visual problem-solving slash program synthesis challenge,
but machines historically have found exceptionally difficult
as sort of an arbitrage between human minds and machine minds.
we see here some big, big differences.
So for Arc AGI 1, the first version of the prize, we see that's just saturating at this
point, 72.8% with GPD 5.1, 86.2% with GPT 5.2.
Arc AGI 1 is cooked at this point.
Arc AGI 2 is nearing the point of saturation.
So huge change from 17.6% with GPT 5.1 to plus.
50%, 50 plus percent, 52.9% with GPD 5.2 thinking. So in my mind, this smacks of post-training.
So that's the obvious strategy. Take a moment, just for those who don't know what post-training
is, because it's an important one of the three knobs that you spoke about, and it's
important for folks to understand what it means. Sure. So let's reason by analogy to the way
humans in sort of a conventional Western upbringing learn. So you have the sort of the baby,
the infant-like learning, that's approximately pre-training. So the P in GPT stands for pre-trained.
Pre-training is unsupervised training. You're feeding a model, just information about the world,
and giving it the goal of predicting what comes next. There's not much of a supervision angle to it.
not unlike a human newborn where it's just taking in information via lots of sensory feeds
and trying to make sense with very little guidance.
Then there's mid-training and post-training.
So think of these phases of training as being not unlike attending primary school,
secondary school, where you receive explicit supervision, you're receiving grading,
you're being given particular assignments.
And there are many ways that you could be graded.
You could be graded very granularly, like a thumbs up, thumbs down, grade A, B, C, D, F.
And there are other ways that you can grade.
For example, you can be given more of an open-ended assignment and graded on how well the ultimate final product of that open assignment is.
So this sort of mid-training and post-training really became popular with the o-series of reasoning models from OpenAI,
and everyone has since adopted reasoning models and post-training,
not just to make humans happy, which is another form of post-training,
like pleasing your teacher,
but also showing that you can via reinforcement learning,
via other mechanisms, solve hard problems and reason about hard problems.
This is where post-training shines.
This is where almost all of the alpha, if you will,
in increasing model capabilities over the past year or so has come from,
not from pre-training.
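As a loose illustration of the post-training idea described above (purely a toy, not any lab's actual pipeline), post-training can be sketched as sampling answers, grading them against an eval, and shifting probability mass toward graded-correct behavior. All names and numbers here are invented:

```python
import random

# Toy post-training sketch: the "model" is just a probability distribution
# over candidate answers to one benchmark item.

def evaluate(policy, correct, trials=1000, seed=0):
    """Benchmark score: fraction of sampled answers the grader marks correct."""
    rng = random.Random(seed)
    answers, weights = zip(*policy.items())
    hits = sum(rng.choices(answers, weights)[0] == correct for _ in range(trials))
    return hits / trials

def post_train(policy, correct, steps=50, lr=0.5):
    """Reward-weighted update: reinforce graded-correct answers, renormalize."""
    for _ in range(steps):
        p_correct = policy[correct]
        for answer in policy:
            reward = 1.0 if answer == correct else 0.0
            policy[answer] *= 1.0 + lr * (reward - p_correct)
        total = sum(policy.values())
        policy = {a: p / total for a, p in policy.items()}
    return policy

# "Pre-trained" model: mostly wrong on this benchmark item ("C" is correct).
policy = {"A": 0.7, "B": 0.2, "C": 0.1}
before = evaluate(policy, "C")
policy = post_train(policy, "C")
after = evaluate(policy, "C")
print(f"score before: {before:.2f}, after: {after:.2f}")
```

The point of the toy matches Alex's observation: targeted post-training can move a specific benchmark number dramatically without any new pre-training.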
So getting back to the benchmarks, ARC-AGI 1 and ARC-AGI 2.
The R in ARC is reasoning; these are benchmarks designed to test the reasoning
capabilities of models. And we see a huge jump. We see frontier-level, state-of-the-art performance
by GPT 5.2 on ARC-AGI 2. Reasoning is well on its way to having been solved at this point.
And I think we'll cover this probably in the next slide, but the costs are collapsing as well.
Maybe talk about that in a minute.
And just to wrap up then, for purposes of narrating this chart, the final benchmark here,
which is perhaps the most interesting of all, is GDPval.
So GDPval, gross domestic product eval, was created by OpenAI with the idea of having
an eval that measures AI's ability to automate knowledge work in the general human service
economy. So we're seeing a jump from GPT 5.1 at 38.8% to GPT 5.2 now at 70.9%. This is the clearest indicator
in my mind that the human knowledge work economy is cooked. You heard it here: it's cooked.
This is 44 different occupations that OpenAI selected. And by the way, this is all open source.
You can go on GitHub and you can read all of the tasks for
GDPval now: 44 different human occupations, 1,320 specialized tasks, like creating PowerPoint
presentations or Excel spreadsheets, sort of prototypical knowledge work. It's cooked.
It's automated. And 5.2, probably again due to elaborate post-training, can win almost 71% of
these tasks. What does that actually mean? 71% of comparisons between a human performing
this knowledge work and the machine, 5.2, performing the knowledge work resulted in the machine
doing a better job. And that was, by the way, at more than 11 times the speed of the human
and at less than 1% of the cost of the human professional. So knowledge work is cooked.
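To make those GDPval numbers concrete, here is a back-of-envelope sketch. The $400 and 8-hour human baseline figures are invented assumptions; the 71% win rate, 11x speed, and under-1% cost are the figures cited in the episode:

```python
# Back-of-envelope on the GDPval claims, with a hypothetical human baseline.
human_cost = 400.0        # dollars per task (hypothetical)
human_hours = 8.0         # hours per task (hypothetical)

machine_cost = human_cost * 0.01    # less than 1% of the human cost
machine_hours = human_hours / 11    # more than 11x the human speed
win_rate = 0.71                     # machine judged better head-to-head

# Even paying a human to redo every machine loss, the expected cost per
# acceptable deliverable is far below the all-human baseline:
expected_cost = machine_cost + (1 - win_rate) * human_cost
print(f"machine-first pipeline: ${expected_cost:.2f} vs all-human: ${human_cost:.2f}")
```

Under these made-up baseline numbers, a machine-first pipeline with human fallback costs roughly $120 per task versus $400 all-human, which is the economic pressure the discussion is pointing at.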
Okay. You know, I figured something out on that last line this week, too. Because I'm, you know,
I'm chairman of about a dozen companies and I'm like, guys, what is holding you back? Why have you
not deployed this? You can cut costs dramatically. You can automate. You can expand your
market share. And they're all like, yeah, I don't know, we're really struggling. I'm like,
oh, it's driving me nuts. What's going on? So a couple things that I finally figured out.
One of them is, you know, one of the companies is working entirely in Java. And when you turn
this loose in Python, where it had a lot more training data, it can build virtually anything.
It just blows your mind. And it really sucks in C still. And I don't think they're going to
fix it because they just don't care.
We've moved off of C anyway, and there's not
enough training data. And Java's somewhere
right in the middle. And so when they benchmark it,
they're like, well, let me try and take my legacy thing
and see if it can just immediately fix it
and it struggles. But if you just
say, no, scrap it. Rebuild it entirely from
scratch in Python. You come back an hour later, and it's
done. So they're stuck there. And also
the other place they're stuck is in operations.
They're saying, well, look,
the way we pick up a customer service
request is in an email that's in an Outlook folder,
and that has all these security whatevers on top of it,
so it's struggling to open and read the emails,
and so we're giving up.
Like, don't you think you could maybe fix
that front-end interface in maybe a day,
and then try it on the rest of the process,
and just turn it loose? It would immediately crush
the problem. So they're stuck on these little
edge-case issues. And I'll tell you, it also
comes up, you know, with that ARC-AGI benchmark,
the one that was specifically designed
to be things that a human finds
relatively easy and intuitive
and the AI is still struggling with:
ARC-AGI 1.
And I had countless conversations
around academia
with people who desperately want to say
there's still something missing.
There's something fundamentally missing
in this great AI brain
and it hasn't been solved yet
and the proof is ARC-AGI 1.
And you're like, okay, boy, do you look foolish now
just three weeks later, five weeks later
because it's basically saturated, and it's going to be completely saturated imminently. And on GDPval,
you know, if you remember, Elon has spoken about one of the companies he's going to be starting,
Macrohard. And his mission is basically to go into a company, simulate all of your employees,
and deliver that as a service back to the company. A lot of change is coming rapidly. I think
the biggest challenge is people are not projecting properly how rapidly this is going
to tip. Our next slide here is the GPT 5.2 ARC-AGI update. We spoke about the numbers in the
table just recently. Here we see it charted out, where GPT has had a 390-fold efficiency
improvement over o3 back from 2024. Anything you want to add to this, AWG?
Yeah, so we've spoken several times on the pod about hypothetical 40x year-over-year hyperdeflation. We're seeing 390x year-over-year hyperdeflation on
visual reasoning for ARC-AGI. This is unprecedented. And this level of
hyperdeflation in terms of the cost of intelligence will not stay contained to the data
centers. It will not stay contained to these still relatively narrow benchmarks. I know
they brand themselves as generally intelligent benchmarks, but they're still relatively
narrow in the scheme of things. It's not going to stay contained. Hyperdeflation is going to
spread outward from these sorts of benchmarks to the rest of the economy. That's comment one.
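As a quick sanity check on what sustained hyperdeflation means, here is the arithmetic using the hypothetical 40x-per-year figure discussed on the pod and an invented $100-per-task starting cost:

```python
# What 40x year-over-year cost deflation implies for a $100 task.
# Both numbers are hypothetical, for illustration only.
cost = 100.0       # dollars per task today (assumed)
deflation = 40.0   # year-over-year cost reduction factor (claimed)
costs = []
for year in range(1, 4):
    cost /= deflation
    costs.append(cost)
    print(f"year {year}: ${cost:.6f} per task")
# By year 3, cost is 100 / 40**3, roughly a sixth of a cent per task.
```

The observed 390x figure would compress the same collapse into a single year, which is why Alex argues it cannot stay contained to benchmark tasks.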
Comment two, just focusing narrowly on ARC-AGI. One of the lovely things about the ARC-AGI 1 and 2 benchmarks
is that they don't just focus on raw performance. They also focus on cost. And if it costs us
$100 trillion to solve a hard problem, well, if it costs more than the entire human economy to solve an
important problem, then it almost doesn't matter. But if it's incredibly affordable, you know,
to your mantra, Peter, about abundance. If abundance is unaffordable, what's the point? It has to be
affordable abundance. And the way we get there is exactly what the ARC-AGI organizers do, which is
you measure on a scatter plot, performance on the vertical axis, and cost per task on the horizontal
axis. And that shows you what progress looks like. You want progress that looks like points in
the scatter plot going up and to the left: greater performance at lower cost. And in fact, going
back to my earlier comments, if you see a frontier lab hypothetically just increasing compute spend
but not actually making efficiency gains, that shows up in these plots too. So you can see,
for example, if you look at ARC-AGI 1, although it's probably a little bit difficult to read here,
if you squint, you can see that GPT 5.2 is on sort of the same
extrapolated slope as GPT-5 Mini, suggesting that maybe, at least as it pertains to ARC-AGI 1,
there hasn't actually been major algorithmic or efficiency progress.
It's just like more compute being spent on the same tasks.
And so it feels smarter, but it's actually because you're putting more work into it.
As the aphorism goes, you're lifting with your back, not with your legs.
But with ARC-AGI 2, there is, in fact, radical improvement.
So we're seeing progress.
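A minimal sketch of the "up and to the left" reading of those cost-versus-performance scatter plots; all model names and numbers below are invented for illustration:

```python
# "Up and to the left": a point counts as real progress only if no other
# point is at least as cheap AND at least as good. All data is hypothetical.
def pareto_frontier(points):
    """Return names of (cost, score) points not dominated by another point."""
    frontier = []
    for cost, score, name in points:
        dominated = any(
            c <= cost and s >= score and (c, s) != (cost, score)
            for c, s, _ in points
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = [  # (cost per task in dollars, benchmark score %)
    (2.00, 75.0, "big-compute run"),   # same score as below, 4x the cost
    (0.50, 75.0, "efficient run"),     # genuine efficiency progress
    (0.50, 60.0, "older model"),       # dominated: same cost, lower score
]
print(pareto_frontier(models))  # -> ['efficient run']
```

This is the distinction Alex draws: a model that only spends more compute for the same score sits on the same slope as its predecessor, while a frontier shift means dominating on both axes at once.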
Well, this is a benchmark that I think a lot of people can relate to, the next one here,
GPT 5.2, writing benchmark comparison, long form, creative writing, and emotional intelligence.
Again, we're seeing improvements across the board.
Alex, one more interpretation here.
Spiky.
This is very spiky. We saw that sort of interesting three-dimensional plot on when we are going to
reach AGI, and again, spikiness was the descriptor for it. That's right. That spider plot was
purportedly comparing humans with AGI's or strong models in general. What we're starting to
see here is increased spikiness and spiky competition between the different frontier models.
So just a little bit of context. The long-form creative writing benchmark evaluates a model's ability
to basically write a novella, an approximately 8,000-word novella,
as judged by Sonnet 5, and the emotional intelligence benchmark measures how well
a model can grade short fiction. And so what we're seeing here is no single
model dominating all the benchmarks. We're seeing, for example, that with long-form creative
writing, Anthropic's Sonnet 4.5 wins and does the best job at writing an 8,000-word novella.
What do you use? What do you guys use for writing? I mean, I've been using
you know, Gemini 3 Pro. It looks like Claude, you know, Sonnet 4.5, is the one to go to.
What are you all using? I've been using Gemini 3 Pro, and I found it to be really amazing to just craft,
but I'm using it mostly for business documents, so that's a little different.
That's the same for me. I use 3 Pro for almost all of my writing.
Yeah, I'm using K2 for huge volumes of stuff on my little fleet
of Nvidia chips that I hijacked.
And then I'm actually using Gemini to de-spyware it and to proofread it.
And I'm using Claude Opus.
My Opus expenses went from 200 bucks a month, to a thousand bucks a month, to I'll easily
crack 20 or 30K this month.
But I'll also generate more code this month than my entire life up to this date.
So it's a bargain at 20K, but my expenses are going through the roof on Anthropic,
and I'm happy with it, actually.
De-spyware.
What does de-spyware mean?
Well, Alex warned me that when you use a Chinese open source model,
it can inject evil things into the code that it returns to you.
It's actually public information.
We're not breaking news here. Maybe just to expand on this.
So two comments.
One comment is there have been very well-publicized studies, outside of the pod,
that found, for example, that prompting certain open-weight models with topics that are
politically sensitive for certain countries results in those models emitting more vulnerable code.
That's something to be wary of.
So I would say, more broadly, for creative writing, et cetera,
none of these models is so strong that I can ask them to just do a good job doing all the writing.
What I find inevitably is I end up having to do 80% of the work, and the model functions
more like a junior editor, as it were, and I still end up doing the majority of the writing.
Similarly, to Dave's point, with codegen: I would certainly not trust codegen models
to not insert vulnerable code. It's definitely... So, well, when you told me that a week ago,
I was like, you know, Alex, I'm just going to see the code, and I'll also
see if it's injecting anything evil in there. I'm not super worried about it. Let's go.
So here we are a week later, and it's generating volumes that no human being could ever look
at. I was completely wrong. And it worked. The code just flat out works. I don't even have to look at it.
It's passing every eval. It's building the interfaces that I want; it's doing everything I wanted
it to do without my needing to look at it. So now I've actually got GPT 5.2 proofreading right now,
but I think what I need to do is just turn off Kimi and pay the 10x higher
price, actually the 20x higher price, to run it on GPT 5.2 instead. Yeah. But I'm going to have to do that
because I don't know how else to make sure I don't end up filling my entire world with spyware. This is a real
challenge. If you have basically intelligence being dumped into the world, then there is this
implicit tradeoff between do you want intelligence cheap or do you want it to be safe?
Yeah. And we've talked about this as a potential strategy for China: making
open-source models available to the world. If it becomes the base, once you've
built everything on it, then it's there from the beginning. I don't want to impute a dystopian
point of view to all the Chinese model makers, but it is a concern. I think we're going to
see a move to sovereign intelligence. I think this is the long-term trajectory we find ourselves
on. Every sovereign entity is going to want their own sovereign, trusted stack.
Well, how do you feel about France? So Mistral's Devstral 2 raises the bar in open-source coding
tools. So what do you think about Mistral? Dave, are you playing with them at all?
You know, it's funny, I saw this chart, and I had kind of forgotten all about them. And I guess my
read on the chart was, oh, it exists. But the headline, you know, the headline
says it raises the bar, but it's actually below,
I mean, only by a notch, but it's below Kimi and DeepSeek.
I guess you could probably trust it more,
because Europe is very trustworthy.
But other than that, it was like, what's the news here?
It's the headlines.
Europe, slow but trustworthy.
And also, it's not, I mean,
there's, I think, this sense for a variety of reasons
that Mistral is somehow like the EU's sovereign AI stack,
or sovereign AI model,
but its roots are all very much American. All of its early funding is from blue-chip American
VCs. Its founding team came from DeepMind and Meta. Yes, it's like raised a large amount of money
from ASML most recently, and my understanding is Europe is very interested in using Mistral as
sort of an AI emissary to the rest of the world, but its technical roots are deep, deep in the
U.S. It's sort of this bizarre world that we find ourselves in, where a Paris-based
frontier lab, or neo-lab, however they brand themselves, is right now the only real
counterweight to Chinese open-weight models. There's one thing I thought was really interesting
here. As it's getting close: once you have open source systems beating closed systems,
then you move innovation from the lab to the community level. And there's no
catching up with it once you get that flywheel going. So I thought this was a big deal. They
need a little bit more improvement, to Dave's point.
But I think once they get there, it's going to be huge.
Is that true for AI open source models?
I know it's true for a multitude of fundamental, just plain software projects.
We've seen that before.
Alex, do you think that's tricky?
It's tricky because you have to ask, what are the primary limiting factors to increase in capabilities?
And it's compute more than talent.
There's lots of talent in the world, but compute is still.
pretty scarce. So the community has lots of talent, but in my mind, they don't have compute.
They're compute starved. This isn't like Linux, where you can sort of say lots of eyeballs
make all bugs shallow. In this case, the way you make the bugs shallow is by investing trillions
in CapEx. Well, this conversation is critically important. And Alex, you can help the world
a lot because every corporate executive in 2026 is going to need to choose something. And, you know,
there's only two types of execs out there: people that are familiar with this,
and they've already kind of got their landscape figured out,
and then the other 99% that are going to get slapped in the face in 2026
and have to react and they're late to the party.
But you saw the benchmark earlier.
Everything that every one of your employees can do can now be done by AI.
What are you going to do?
Just sit there and ignore it?
So 2026 is the turning point,
but these choices are really tough on this chart.
Like, an executive says, well, God, I can go open source
at 1/20th the price, but I get 72.2
units of capability, or I pay more and get 77.9. What does that mean? It means a lot. Anyone looking at the chart
would say, oh, what's the big deal? It's only five units. But the reality is the capability
difference, in terms of, you know, your economic value, gets massively, massively bigger as this
goes up even a little bit. And so it's a tricky, tricky situation in 2026 for pretty much all
of corporate America, the corporate world. I think, if I had to spitball this one,
it's going to take some sort of regulation to move the dial on this right now. If you hang out
with all the Silicon Valley firms that are using open weight models, they're all using
Alibaba's Qwen at this point. And Mistral and DevStral, that's great, but in the
mind of a typical Silicon Valley firm that needs to host its own models, it's probably too little, too late.
They're all using Qwen. They're all fine-tuning Qwen. And it's going to take an executive order or an
act of Congress or some sort of regulatory measure to turn off the cheap Chinese open weight
intelligence before they're incentivized to move over to Mistral or DevStral or GPT-OSS.
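Dave's cost-versus-capability point can be made concrete with a quick sketch. The 72.2 and 77.9 scores are the ones from the chart being discussed; the 1/20th price ratio and the convex value curve (including its exponent) are illustrative assumptions, not measured data:

```python
# Back-of-envelope for the open-vs-closed model choice Dave describes.
# Assumption: economic value of a model grows convexly with benchmark score,
# because harder, higher-value tasks only become feasible near the top.

def economic_value(score, exponent=4.0):
    """Assumed convex mapping from a 0-100 benchmark score to relative value."""
    return (score / 100) ** exponent

open_score, closed_score = 72.2, 77.9   # scores from the chart under discussion
open_cost, closed_cost = 1.0, 20.0      # assumed 1/20th price ratio

value_ratio = economic_value(closed_score) / economic_value(open_score)
print(f"score gap: {closed_score - open_score:.1f} points")
print(f"assumed value ratio: {value_ratio:.2f}x at {closed_cost / open_cost:.0f}x the cost")
```

The takeaway is not the specific exponent, which is made up, but the shape of the argument: a few benchmark points can translate into a disproportionately larger value gap for work near the model's capability ceiling, which is why the "it's only five units" reading of the chart can mislead an executive.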
But Dave, I think one of the points that you made is that the CEO and the board of directors of a
company are in extremis, in a sort of paralysis, not knowing what to do, right?
And their lunch is going to be eaten by the small startup that says,
oh, there is an interesting business here, so we should go and enter.
And it builds an AI-native approach at 1/100th the cost and 10x the innovation speed.
And so what do they do?
Who do they turn to to help them reorganize their company?
And it's a risky move to bring in an outside consulting firm. I don't think it's going to be the big consultants.
I mean, there are going to be AI-native companies out there.
We're going to be having a pod conversation with one company called Invisible that does this very shortly.
And there are others.
The right way to do it, you said it earlier, is to scrap what you've been doing and actually start with a fresh stack.
And that is so hard for any company to do.
Salim?
Yeah, this is right in our wheelhouse.
Essentially, we're working with some very big companies, and Dave, you're exactly right.
They're totally paralyzed. They're flailing. They have no idea what to do.
And if they bring in one of the traditional consulting firms, they just push them faster down
the old path, right? And so that doesn't work at all. And so what needs to happen is they need
to take their capability here, create a new stack on the edge that's completely built AI
native from the ground up, and then little by little deprecate the old and move functionality, capability, and
resources to the new. The political and emotional stress of that is causing most of them
to do nothing. Yeah. And so out of the, say, 20 major companies we're working with, maybe three
are doing maybe 50% of the right thing. And most of them are just like, we're going to keep pushing
this old model and seeing where we get to. Surely we can catch up because we've always been
able to get there before. And the answer is you absolutely cannot. And so this is a huge deal.
And when you say we, you mean OpenExO is doing some work with these companies out there.
Yeah, we have like 42,000 people talking to companies around the world, and so we're kind of
aggregating the information from all of that. It feels like 1999. I think 2026 is going to see
the biggest collapse of the corporate world in the history of business. All right, you heard that
here first. No doubt. And we should maybe do an end-of-year
perspective and some predictions, but for all of the madness we've seen in 2025,
this is the slowest it's ever going to be. 2026 is going to be 10x to 50x
to 100x crazier. So I don't even know where to start. I've got model fatigue and I've got
benchmark fatigue right now from dealing with all this. If you hire Salim to help you with your
strategy, one of the things he'll tell you is to read Clay Christensen's The Innovator's Dilemma,
which exactly addresses this question.
And what that book will tell you to do,
and Clay Christensen's Foundation will tell you to do,
is go find Link Studio, Y Combinator, Neo,
go out there and find your AI development partners.
Try and do a deal with them
where you either invest in them
or you become a development partner customer for them,
pull them in, give them revenue
because their market cap will go way up,
they'll all become wealthy,
but they'll then hire the talent,
but point them at your internal problem
and have them solve it inside your organization
as an outside very tightly bounded startup company
that's growing like crazy.
That's the only way you're going to get the talent
focused on your internal problems.
You can't hire the talent directly anymore.
You've got billion-dollar signing bonuses
all over the place.
And by the way, Salim will tell you,
we'll tell you to go read
Exponential Organizations 2.0,
which is our book,
which actually walks through step by step what to do.
How to do this, yeah.
I actually had a couple of really interesting conversations with Clay before he passed away.
And one of the things he very honestly admitted was that The Innovator's Dilemma
works really well for identifying the cracks in the structure,
but it's not that great on the prescriptive side or at trying to predict.
For example, in his model, Uber is not very disruptive.
And I said, but Uber is very disruptive;
it fits right into the wheelhouse of our ExO thing.
And he goes, yeah, that means our model's wrong.
And when we drilled into it, what we realized was that The Innovator's Dilemma assumes
that the verticals, like transportation, energy, health care, and education, stay in those
verticals.
So Uber, as a transportation company, may disrupt a little bit of transportation, but without realizing
it, it's also disrupting health care delivery and restaurant delivery and food delivery,
and can go horizontal across a lot of these.
And so the old verticals are essentially collapsing, like the old newspapers with their printing presses, and utilities, and so on.
And to Alex's point, it's all going to become one category called compute.
Well, if you don't want to do what Salim's suggesting, the other choice is to do a $20 billion acqui-hire plus $14 billion of new payroll.
And that's the other way to solve the problem.
Or, I tell you, the other thing I'm seeing is unbelievable: executives at that level
are looking at everything, looking at the world, and going, yeah, I'm just going to retire right now.
And so this is unbelievable.
Execs are falling off the cliff going, ah.
This is the most fun time in human history.
How could you?
Not if your company's diving into the ground at warp speed.
I actually respect that.
I tell you why.
What they're doing is they're basically saying, I can't navigate this new world.
I'm going to check out and let the younger generation navigate this, because I can't do it.
Well, it's really honest.
It's very honest, right, at least.
The worst thing in the world is the old fuddy-duddies that were running the world on the old model who won't get out of the way.
And we're seeing that much more in politics, and to some extent in the corporate world.
All right.
So there's this massive sea change happening.
Talk about billion-dollar salaries, talk about the innovator's dilemma.
Our next story here is meta's shifting AI strategy is causing internal confusion.
So meta is at an inflection point, right?
And after mixed Llama 4 results and a reported $14 billion AI talent spending spree, you know,
Mark is looking at considering whether an open source strategy can still compete with closed,
vertically integrated rivals like OpenAI and Google.
Dave, what do you think about this?
I think they're doing exactly the right thing.
Actually, the other backstory here, which I guess is validated, maybe it's more rumor than validated,
but they're getting heavily into distillation
of other people's models
to accelerate the inference time speed.
And what's exciting about that
is if you look at where we are in human history,
intelligence in a box was invented just days ago,
you know, well, really two years ago,
but it's brand new in the world.
And now we're in the hyper-experimentation phase
of how do we make it bigger, better
by running many, many agents in parallel,
by expanding the context window
and dumping in tons more data
and by iterating chain-of-thought
reasoning over and over and over again,
and we're getting ridiculous gains,
but we're brand new in that game
and so what meta has realized is look
we're behind in the foundation model race
we do need to rebuild and catch up
but that's not going to happen overnight
but where we can potentially get ahead
is by raw inference time speed
and having many many more agents
working on things in parallel
And I believe that that will also lead to self-improvement, which will get them back on the map.
And so I think that they're directing all their research energy now into how do we make this blazing fast and be the world leader in distillation?
That's my thing.
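For listeners who want the mechanics behind the distillation Dave is describing: the standard recipe (Hinton-style soft targets) trains a small, fast student model to match a large teacher's output distribution rather than just hard labels. This is a generic sketch of that loss with toy logits and an arbitrary temperature, not Meta's actual recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution.

    Minimizing this pushes the small, fast student to mimic the big teacher,
    which is how distillation buys inference speed."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# Toy example: a student that roughly tracks the teacher scores a lower loss
# than one that disagrees with it.
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [3.8, 1.2, 0.4]))  # student close to teacher
print(distillation_loss(teacher, [0.5, 4.0, 1.0]))  # student far from teacher
```

Minimizing this loss over many examples is what transfers the teacher's behavior into a student cheap enough to run with many agents in parallel, which is the strategy being attributed to Meta here.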
Incredible. I'm blown away by the $14 billion hiring spree.
Just like that number, I can't process that number.
Well, remember, they've got just a massive cash cow and cash generating engine.
And, you know, Mark has basically said, this is the race.
If we don't spend the money now to get, you know, towards number one, it's, it will just slowly, slowly go away.
So Dave, what you're saying is...
And what's cooler than cool is that he's already decided to use every single penny of it plus debt on top of that to try and win this race.
And Wall Street has said, that's fine.
No damage to the stock.
Go for it.
We love, we love what you're saying.
That's just a beautiful thing.
So what you're saying is they're moving from focusing on the open source foundation model to putting all of their chips on the agent strategy.
Well, I think there's so much innovation there too.
Yeah, I think they're in a bit of a tricky situation.
So I know the key players: Zuck's undergrad advisor, before he dropped out, was my postdoctoral advisor, and Nat Friedman, who's helping to lead this new lab with Alexandr Wang, was my first roommate at MIT.
I'm pretty familiar with the key players in this particular story.
And I think there are three strategies that Meta could be pursuing, and/or has been pursuing.
So one strategy is commoditize your complement: drive the cost of generative AI to zero.
That was the Llama strategy they were pursuing.
Problem is, Llama 4 was a disaster, and the Chinese open weight models are flooding the market and doing a much better job.
The second strategy, more conventional and perhaps what Wall Street would expect out of Meta,
is to use strong AI to improve Instagram and other Meta products.
I would have to imagine many executives at Meta would like to see all of these new AI resources being used
to just improve Meta's other existing products.
That's strategy two.
Strategy three is to compete directly with the frontier labs, with closed source, API-based models, to be the first to superintelligence.
So I think what Meta has to struggle with, hopefully not like an internal civil war,
is deciding which of those three strategies they really want to pursue.
My guess is there are constituencies with different interests within Meta that want to pursue
each one of those three.
I cannot believe Mark is not all in on number three.
I mean, being first to superintelligence, that just feels like Mark.
Yeah.
And I think very often the cover story is, look, we're going to enhance existing products.
We're going to use our internal data.
You know, we've got a huge amount of internal posts that we can use as training data.
That's all kind of cover story for the real goal: winning the race to AGI and ASI.
By the way, everybody, I want you to realize as you're hearing these stories about Google, about
meta, it's all about business model innovation on top of all of this, right?
Google going from an ad-based search company to now an AI-based company that's delivering a
whole slew of different products.
I mean, this is where companies fail: Blockbuster did not change their business
model, even though they twice had the opportunity to buy Netflix.
So how do you actually disrupt your own company and shift its business model?
Otherwise, it's game over.
The innovator's dilemma, to Dave's point earlier.
But it is, I think, also ironic: Sam Altman has said publicly that he'd much rather
have a billion users without a frontier model than vice versa.
And yet what we see from Meta is the exact opposite strategy.
Meta already has their billion-plus users, but they would much rather have
a frontier model at this point.
The grass is always greener at the other frontier lab.
That's funny. That's a good phrase. All right, our next story here is Google DeepMind
to build a materials science lab after signing a deal with the UK. So we've heard about this as well; another company
out of MIT and Harvard called Lila is doing something very similar, where you're basically,
you know, it's all about the data, and if you've consumed all the data, you need to go find new data. So
imagine having a, you know, lights out robotic capability where the AI is putting forward a scientific
hypothesis, designing experiments, and then at night, robots in the lab are running the experiments
to get the data to either confirm or modify your hypothesis. And like, let's do that a thousand
times or 10,000 times faster than humans can do. It's, I think we're going to see multiple
companies. I think every frontier lab is going to need to have this kind of data mining.
We're data mining nature, understanding what's going on. In particular here, they're focusing
on materials science. Lila is looking at biological sciences. Thoughts on this, gentlemen?
I don't know if there's a Polymarket on this, but Demis is really leading the race to being
the coolest guy on Earth. He got his Nobel Prize in chemistry. Now he's going to crack materials science.
And you kind of could see this coming, because, you know, AI can
allow you to be a world-leading expert in anything. And, you know, he's the master of the biggest
AI compute in the world, and algorithms, and TPUs. And he also isn't
one of the corporate leaders trapped in the political fray. And he's a beautiful
soul. Yeah, we're going to have the coolest guy benchmark. Okay. Well,
what's great is you want somebody with that purity at the edge of this, which is fantastic.
There's a couple of things I thought came across for me, having kind of hunkered around in physics labs during my degree.
If you have a fully autonomous lab, this is like the biggest breakthrough in scientific progress since the scientific method was invented.
Because we talked about dark kitchens and dark factories and now we have dark labs.
Holy crap.
I can only find just a handful of people, like Demis, or Alex on this pod.
There's like 10 or 12 that I could name
that can tell you the implications, you know,
in all these other, you know, in biotech,
in material science, in chemistry, and math.
You know, Alex is talking about solving all math.
It's just such a small group of people
who see where this is gonna take us
and how short that timeline is.
So it's good to see Demis doing materials science.
This is AI-assisted science and AI native discovery.
Alex, you wanna close us out on the subject?
This is what comes after superintelligence.
What comes after superintelligence is solving math (drink), science, engineering, and medicine.
And yes, math is being solved.
We've spoken about that, perhaps ad nauseam at this point on the pod.
We haven't spoken as much about AI solving all of material science.
And there are like a dozen companies.
It's not just Google.
It's not just Lila.
It's not just periodic.
There are a dozen companies that are all laser-focused on solving material science.
And that's going to give us so many upsides.
It's also, when we talk about recursive self-improvement, about
having better semiconductors, having better superconductors for fusion power.
Materials science is the foundation upon which everything else is built.
Computronium, here we come.
And the innermost loop accelerates again.
And by the way, for our new listeners and new subscribers,
if you hear Alex saying drink, there's been a bingo game sort of invented for terms
that are repeated on a regular basis.
You'll be hearing it.
All right, let's move on to our next story here.
And I don't know how I feel about this story.
I sort of feel like I don't want to like overblow, you know, overexpose what's been
overblown.
But this is a story of an AI native character called Tilly Norwood.
And she's an AI native actress that's freaking out Hollywood.
So Tilly Norwood is an AI-made actress created by a London studio to star in films and social
media. Built over six months with GPT, Tilly went through 2,000 design versions, and her YouTube videos
have garnered over 700,000 views in October. We saw this also in the music business, where
fully AI-native bands and music tracks have been created and people don't even realize they're
listening to something that's just fully AI generated. She has her own agent. Yeah. And
reportedly around 40 different
contracts for movies and other
development projects. I would say
this is consistent with my
modal hypothesis that over the next 10 years
we're going to live out the plot of every
sci-fi movie ever made.
In this case, actually,
I don't know if you saw the movie Simone.
This was the plot
of the sci-fi movie Simone where
an AI actress develops
a life of her own and takes over.
It has Al Pacino in it, and it's a fun movie.
But like we're going
to see AI actors and actresses take over potentially,
or at least we'll discover how much humans
crave authenticity in their entertainment.
There's no doubt in my mind that humans do not crave
authenticity as much as we think we do,
and we will just watch whatever is interesting and entertaining.
And I was at the Washington Post
when every reporter there was saying,
you know, the Post will be fine
because people will want genuine great reporting
from great reporters who are struggling in the field
to find the stories.
Guess again.
That was right before, yeah, gone, just gone.
And in just a couple of years, too.
The timeline was so much shorter than they ever would have thought: from top newspaper in the world,
multi-generational, in the family for three generations, to gone.
Jeff Bezos bought it for cents on the dollar in just, what, three or four years?
So that's going to happen here, too, and no doubt in my mind, it's going to happen with music,
it's going to happen with movies, it's going to, yeah, it's inevitable.
This is an AI performer working 24-7 appearing in unlimited projects, never aging, never burning out, never needing to renegotiate contracts.
I mean, this is the Screen Actors Guild's worst nightmare.
I had dinner a couple of nights ago with a dear friend on my XPRIZE board who used to be the head of two of the major studios, and then an actress who's another dear friend,
and we were talking about this.
And it is scaring the daylights out of the industry.
Good.
I mean, it's...
Well, no, good, because they'll react.
I don't mean, I'm not...
I wish nothing but good to happen to the people that are in the industry.
But good that they're scared because then they'll react
as opposed to getting crushed.
I didn't mean to throw them under the bus.
Well, the question becomes then what's the response, right?
Are you, as an actor, going to license your persona?
Because that's the way you can make money in the end.
Because if you don't, then the industry, or, you know, the next-generation
industry, will simply create a Tilly Norwood who is actually cuter than you or more handsome
than you, able to...
Doesn't age.
Doesn't age.
Oh, yeah, there you go.
Doesn't age.
That's a huge one.
I'll tell you one thing that a lot of people...
I wonder when you'll have one of these winning the Oscar, right?
Because in theory, they should be the best performer.
We have a lot of those benchmarks.
When will the first AI win a Nobel Prize, right?
When will the first AI, you know, build a billion-dollar company?
Well, Demis already did it because he's kind of half AI anyway.
That's done.
It's squishy also.
There have been, by my count, at least two Nobel Prizes.
There was Demis with AlphaFold in chemistry.
And then there was also Geoff Hinton et al. with restricted Boltzmann machines in physics.
The squishy thing here is you can always do a secret cyborg, as
some would say, and wrap AI talent inside a human meat body, and the human claims the credit for it.
So it's unclear, again, like how much humans crave authenticity?
Does this become a separate category in the Oscars like animation?
Is this sort of an increment on top of animation that's real-life animation?
Or is this an actual labor substitute?
I don't know yet.
I think a lot of that thinking, though, is a little bit misguided, in that what they'll actually
be looking for is a feature-length movie in a theater where it's all AI, and that's what
they're going to use as their bellwether for the threat. But that's not what's going to happen.
If you look in the data, short form video is taking over the movies anyway, and video games are already
miles ahead of movies. We had these conversations. My kids don't go to the movies. They watch YouTube
videos. Exactly. So Tilly will end up being a star in every video game and also every TikTok
clip. And they'll say, well, that's not a threat. Yeah, across platforms.
and the actors will say, well, that's not a threat to me.
I'm a real actor. I do Shakespeare and, you know, whatever.
Like, well, no, it is a threat to you because the audience has moved and the budget has moved,
and that'll undercut you.
So they're looking at the wrong bellwether.
When Tilly shows up in 5 billion TikTok posts, that's when you know you're dead,
long before it hits you in your long-form movies.
And you just got to look at the video games, too.
A related story to this is that OpenAI is working with Disney to bring Disney characters
into Sora 2, right?
So that's...
Yeah, they just announced that.
Yeah, it's a fascinating.
A billion dollar investment and licensing.
I think there's going to be a certain fungibility
between classic IP assets and generative everything.
And so, maybe in the short to medium term,
it's reportedly a three-year licensing agreement
that OpenAI and Disney struck.
Maybe the short-term remedy is existing actors
can license their visage out as an asset to customers
who want to do sort of fan pics.
If you were like a really popular star,
like a Peter Diamandis, you know,
what's the thing you should do right away?
I signed my rights already.
Get your avatar out there.
Get it built and out there right away.
Get your Tilly Norwood equivalent, Peter or whoever, out there right away
so that personality can grab mindshare before, you know,
the true synthetics take over.
Yeah.
It really is going to be a race for neurons, right?
If you look at the general public, you know, per Dunbar's number,
people only really care about 150 people and hold them close.
And so the question is, are one or ten of those going to be synthetic actors?
And once you get to a point of popularity, it's going to be hard to replace you.
For what it's worth, maybe to tie a bow on this: the Dunbar limit of 150 people, that was like in the ancestral environment. If the number is valid at all in the post-social-media era, you can maintain light, casual associations with thousands of people. Yeah, but Dunbar's number is basically sort of the human tribe, and I've done this when I was running Singularity University. It's the number of people you can actually
remember their names, go deep with, and so for sure, you can have a Rolodex of 22,000 people,
but Dunbar's number in terms of who you feel connected to closely is a real number.
I'm with Alex on this one.
What I noticed was, once you have Facebook, Facebook essentially acted as your RAM for Dunbar:
you could move people in and out of that spectrum very easily without really noticing.
And you have the opposite effect also where once you kind of start to connect with enough people,
Peter, you've probably had this.
I remember walking down University Avenue in Palo Alto right after one of our
one-week executive programs, and this guy stops me, and he goes, hey, Salim, nice to see you.
And I'm like, have we met?
He said, I just spent the week in the classroom with you, right?
And it's like, wow, our brains are so maxed out now against the limits of that.
We need technology to expand that capability.
And it's already done that to one extent.
And we can move things in and out.
The question is, what do we do when we have all these synthetic AI levels?
going through that.
So, oh.
This episode is brought to you by Blitzy,
autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents
that think for hours to understand enterprise scale code bases
with millions of lines of code.
Engineers start every development sprint with the Blitzy platform,
bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5X your engineering velocity? Visit blitzie.com to schedule a demo and start building
with Blitzy today.
Our next story here comes out of the White House.
Trump signed an executive order curbing state AI rules.
So this is a decisive federal power grab over AI regulations.
Trump's one rule executive order is going to preempt state level AI laws.
It's like, nope, Washington, D.C. is going to win over everybody.
It's not California laws or Texas laws.
It's Washington, D.C.
I mean, ultimately, I think this is what the EU needs as well.
It needs top-level direction.
It's going to be harder there.
Any particular thoughts on the one rule here?
It's absolutely, positively necessary.
I hate it when this happens, because variety across states is one of our best assets, but we've got to do it.
On the other hand, New York just passed a law that says
you can't use the likeness of somebody who's deceased in an AI
without going to their descendants.
What about all these Einsteins floating around already?
How are you going to keep it out of New York?
There's no way to just launch it across the country
and then New York users get blocked somehow?
I mean, it's just unworkable.
I'm going to claim I'm one of Aristotle's descendants,
and you can't use his likeness.
I mean, how far back do you go?
How are you going to track down Aristotle's heirs?
I had two thoughts when I saw this.
One was when I saw one rule, I very quickly thought about one ring to rule them all.
And I just love the politics of this, where a huge amount of the effort from the Trump
administration was saying, let's push everything down to states' rights.
And now we're going totally the opposite direction.
And I think it's a necessary thing.
I agree with Dave here.
It has to be done because if we don't get uniform AI treatment,
where the hell are we going to get to?
Also, I mean, there's an interstate commerce angle here.
Models are being trained in one state and inferenced in other states.
In my mind, and I read the executive order in the past 24 hours,
the EO is ensuring a national policy framework for artificial intelligence.
I think it's both reasonable under the interstate commerce clause
and also necessary for international competition.
It's not at all obvious how a patchwork of state-based regulations results in anything other than total chaos.
I mean, this is a piece of the overall White House strategy on energy, on data centers, on chips.
It's all aligning everybody to make the U.S. as competitive as possible on the global stage and to accelerate as fast as possible.
It is a race to superintelligence.
And this is just part of the...
Can I make a radical prediction here?
Yeah, of course.
Over the next five years, the entire U.S. Constitution will evaporate.
Every clause is starting to just melt away.
Look, the right to privacy, the Fourth Amendment: gone, right?
We're going to see the whole thing.
It needs to be rewritten from the ground up, and it's going to be interesting to see how that happens.
Well, AI will do the job of that.
Boom, that's what you need.
Instead of the founding fathers, it's the founding models.
For the record, I don't buy that prediction for one second.
Good.
We can put some money on it.
That's...
Poly markets, baby.
All right.
Let's move to a conversation on the economy.
And, you know, this is data just to support what we already know.
OpenAI finds AI saves workers nearly an hour a day on average.
So workers using OpenAI tools have saved between 40 and 60 minutes a day.
The survey of 9,000 people in 100 companies found that 75% say AI
makes work faster or better.
And over a million businesses today
are using OpenAI tools.
I'm going to couple this story with our next one, which is layoffs. In 2025, we had 1.1 million layoffs announced, which is the most since the 2020 pandemic.
All right.
Dave, do you want to jump in on this?
I was talking to Scott Perry, the CEO of LendingTree, a public company, yesterday actually, and he said 20,000 incredibly talented people in Seattle are now cut loose from Microsoft and Amazon, and it's the best hiring opportunity for tech talent he's ever seen in his life. These are really, really solid, great people that the mega tech companies have just cut out because AI is automating, improving, enhancing. You know, coding is one of the biggest early beneficiaries, and my top coders are 10 times more productive, so I don't need nearly as many. So that's where the layoffs are coming from. We'll look back on this and say, wait, that was a bellwether. Why did I not notice this little thing? When you see what happens in 2026, you'll say, when did this all start? Well, right now. This is when it started.
What do you predict for '26, Dave? Continued acceleration of this?
Yeah, the capabilities will be, you know, able to eliminate on the order of 80 to 90% of all jobs,
but then the rollout and the percolation is dependent on regulation and also corporate bureaucracy.
And so it's tough to predict how quickly people will react.
My guess is that it'll get a very slow start, everybody's very stodgy, but then everyone's a sheep.
And when somebody in your industry is an early adopter and their stock goes up 10x just because they're an early adopter,
then your board beats you up like crazy and says, what about us?
And then the sheep effect flips in 2026.
So by the end of 2026, everyone's in absolute panic mode.
And then they're wishing they started at the beginning of 2026.
You know, I think there's going to be, this is one of my predictions,
I think there's going to be an absolute need for all the medium size and large companies
to bring in a re-skilling consultancy.
Some type of a program, which could be fully AI-based, that provides some kind of a safety net for your employees: you're going to reskill people before you fire them, and if they aren't able to be reskilled, then they're let go.
I also think that's a huge business opportunity for an entrepreneur out there
to build that kind of capability.
Totally, totally right.
In fact, if we look in our portfolio,
the companies that are quote-unquote forward deployed are killing it.
And if you couple that with what we just said,
there's 20,000 highly talented people in Seattle that just got cut loose.
If you're growing your business, a lot of the younger companies, you know,
20 to 23-year-old leaders are afraid to be forward-deployed because they've never done it
before.
They don't have any management experience.
What do you mean by corporate experience?
Well, hire those 20,000 people, train them on how to be AI forward-deployed consultants
or delivery people, and then get them embedded back into corporate America: at State Street Bank, at J.P. Morgan, at Walmart,
they'll hire your people instantly to get AI deployed inside their organization
because they can't get that talent.
But if you grab those people, retrain them very, very quickly
on your own AI training platform and then get them redeployed into corporate America,
your growth rate, you'll be sold out every time you have a meeting,
you'll generate a sale.
So I agree with you there.
The founders, the really young founders, are afraid to do it. They want to just launch their software on Hacker News and hope that the world sucks it up. There's just this big gap between there and where corporate America starts, and it's never going to fill if you don't get forward deployed.
I don't think this is a skills issue. This is a cultural problem. The problem in corporate America, with all the structural impediments in a big company, is that you need a mindset shift at scale to even adopt this.
I think the large companies and the medium-sized companies, to be very specific about my prediction here, are going to need to hire a very specific kind of consultancy, right? A company that comes in, and their job inside your company, and I think every company is going to need a version of this, is reskilling. So that when you go to work for a company, you know there's a reskilling safety net there for you.
Yeah.
But what I'm saying is it's not just reskilling. It's a mindset shift, it's a cultural change. It's a cultural change that has to take place, and that's actually much harder. And I want to say two things about this. There's a cultural and mindset shift at the CEO and executive level and at the employee level.
All of that. It goes through the organization. And we've actually been working on this for several years now, and I want to tell a quick story. Our second-ever client, when we finished one of our 10-week sprints,
realized that they had to lay off a thousand people in the company.
And they decided, what are we going to do?
Because we're a family-owned business.
We want to really provide for these folks.
What do we do?
We actually got them to give them a one-year UBI so that they could find their own passion,
find their own work.
And if they didn't, at the end of the year, the company would try to hire them back.
And it was an incredibly successful program.
I think we're going to see a lot more of that as we kind of transform the workforce.
All right, let's get into data centers, chips, and energy.
We're seeing data centers begin to pop up in countries around the world. I don't want to spend too much time on this, but Qatar, you know, QIA, the sovereign fund there, is investing $20 billion to launch a data center in Qatar, or Qatar, however you want to pronounce it, as a Middle East hub. We're seeing Microsoft and Satya just coming back from India, meeting with Prime Minister Modi there, committing $17.5 billion in India to expand an AI-ready cloud there in the region. I mean, this is going to be the case in all major nations, these partnerships taking place. The real story, when...
This is Alex's comment about tiling the world with data centers: every time, drink. Tile the earth with sovereign inference-time compute. Drink, drink, drink.
We're drinking coffee this morning, ladies and gentlemen.
Drinking water.
Synth-alcohol.
All right. So here's the story I want to dig into. In our last pod, we talked about China's sort of incredibly expanding role. So China is set to limit access to NVIDIA's H-200 chips despite Trump's export approval. So, you know, President Trump says to NVIDIA, okay, you can export these. And now the Chinese leadership is saying, no, no, no, you can't buy them. You need to buy Chinese-made GPUs. Fascinating, right? It's propping up its own chip economy. I think it's a smart move on China's behalf.
This is so fun and annoying at the same time to watch, you know. This is pure protectionism. The US never did it before, and now we're playing the game. But you know what happens: a country invents something, like an LCD TV or a car or whatever, and another country says, okay, what we're going to do is protect the home market, manufacture our own, then dump it on your market cheaply, and keep dumping until your companies collapse and the venture capitalists all run away, and then we're going to price it up. So what we did
is we embargoed the chips from China, and they're like, oh, shite, we need to build our own whole
supply chain. And as soon as they get it up and running, we're going to say, oh, no, no, it's
okay. Now we're going to actually allow you to buy the H-200s. And that entire thing you just
built makes no economic sense. And so China's saying, all right, I see what you're doing here.
I've played this game for a long time. We're not going to buy them. Like, but why?
You know, it's an incredible buy. Why would you not allow us to buy them? Because we already made a
massive investment in our own fabs. We're going to have to keep subsidizing that to get this up
and running because we know what you're doing here. You're going to let us buy them right up until
our stuff collapses. And then you're going to cut it off again.
This is a trust issue.
Big trust issue. Well, there's no trust at all between the U.S. and China right now.
Well, the same thing happened, right? The Japanese came over during Trump's first administration and spent a lot of time negotiating a trade deal. And then just a few months ago, the administration canceled that trade deal. And the Japanese are like, we're not negotiating another one because we don't know which way is up anymore. Every single time, it changes completely. So there's no trade deal, and this is really a big problem going forward. And I think what China thinks is, we don't want to play that game.
Well, there's no doubt that the outcome here looks like two completely separate ecosystems.
Europe is kind of a wildcard. It's interesting, and so is India; it's kind of a wildcard right now.
But there's no doubt the U.S. ecosystem is going to grow completely independent of the
China ecosystem because there's no chance of reestablishing trust after that chip embargo.
There's like no way that that's going to get mended.
That's right.
So sovereign data center, AI compute, to Alex's point.
It's almost like a second Cold War. It's a world we've moved to where there are spheres of influence, spheres of fabs, and spheres of compute, and the decoupling happened.
Yeah.
Okay, let's move on to power generation. There's a company called Boom. Many years ago it set out to build the first supersonic passenger airliner to replace the Concorde, and I was so impressed by the founder and CEO, his chutzpah, if you would, to take on this moonshot to build a supersonic consumer airplane. And it was like, I don't know how you get there, how much money is going to be required to build this. So it's a fascinating backstop that Boom had been developing, you know, supersonic engines. And now they've unveiled the Superpower turbine, which is able to provide 42 megawatts of natural gas turbine capacity to data centers.
And so this is, you know, a backstop business model for Boom.
And it's huge, right?
So this is moving power to the data centers.
It's a gas turbine strategy.
And we've heard before all the gas turbines have been sold out for some time.
Alex, you want to jump on this?
Yeah, I mean, as you were gesturing at, Peter, the wait times right now for gas-fired turbines for AI data centers are seven years in some cases.
So I think this is a brilliant strategic pivot by Boom.
It also, referencing comments from a minute ago, to the extent we're in almost a quasi-second Cold War, this is almost like a self-directed Defense Production Act-type move, pivoting resources
perhaps from turbines for supersonic consumer jets
to turbines for AI data centers.
Of course, there are synergies there.
But this is, I think it's a brilliant pivot.
And the irony is there's probably a much,
much larger addressable market for gas turbines
for AI data centers than there is for consumer
supersonic jets at this point.
I just hope for the sake of boom
that they retain at least some semblance
of the original supersonic vision
and just don't get overwhelmed by the AI data center business.
I just love that.
I need that audio clip. Hey, team behind the scenes, I need that audio clip, like, right away. Because there are so many companies, including Vestmark, one of the ones I founded, pre-AI, you know, manages $2 trillion of assets, 20 million lines of code, profitable, great business. And I'm like, guys, you've got to be an AI company, like, tomorrow.
Pivot, pivot, pivot.
Pivot, pivot, pivot.
Strategic pivots.
So there's a great case study.
You wouldn't think that a jet engine company is culturally going to pivot and become a power generation company. But when you look under the covers, it's like, well, what are our assets here? Well, we've got the blades, we've got the manufacturing, we've got the metal. You know, that's all it takes.
The age of AI has so much opportunity
that didn't exist the day before.
And you don't have to be that close to the center point.
You have to be adjacent and just pivot quickly
and you'll succeed wildly.
And so I hope these guys just crush it.
In fact, I know they'll crush it because like you said, Alex, I know personally the data center operators that, yeah, they'll spend anything.
And they're pre-buying, too.
They'll pay you up front for something that you're going to make next year.
There's a $1.25 billion backlog.
And it's a product they can deliver immediately, right?
This is on-premise power generation for data centers, which is so critical.
You know, Boom's been working on this for, I don't know, six, seven, eight years, and they've built a scale model of their supersonic airplane, and they're
trying to get advanced orders from all of the airlines. But to get through the FAA thicket
is so difficult. That's decades. It will kill you. But if you've got an actual business model
delivering revenue right now, I mean, I agree with you, Alex. I hope Boom actually delivers on
their original idea. I think this increases the probability a huge amount, right? And this is the equivalent of Amazon realizing, with Amazon Web Services, that it's got something it can offer to everybody else that makes very strong near-term profits.
Or Elon delivering Starlink now and a Mars colony in 10 years.
Yeah.
That's the sexiest looking gas turbine I've ever seen, by the way.
It's beautiful looking thing.
I'm sure after you run it, it gets a little dirtier.
1.25 billion in backlog.
Congratulations to the team at Boom for that strategic pivot.
Everybody learned from this story.
We should track this, you know, a few weeks or a few months.
What do you have, what are you building right now that's a cost center for you
that could become a profit center for you in the AI ecosystem?
That's the question.
All right.
On the energy side, China builds nuclear reactors at $2 per watt versus the U.S.
at $15 per watt.
Again, what's going on here?
Why is that happening?
Alex, do you have a thought?
Yeah, well, China does have more people than the U.S.
China does have a need for more energy.
If AI were not part of this equation
and China were to attain U.S. per capita energy footprint standards,
China would need more energy, in an absolute sense, than the U.S.
That part makes sense.
What doesn't make sense, if you look at the permitting processes required for nuclear energy in the U.S., it's a very different beast.
Obviously, the NRC regulates U.S. nuclear power deployments at the national scale.
But then on top of that, you have some states that de facto ban nuclear power entirely.
We have a patchwork of state and local regulations that make it extremely difficult to deploy nuclear energy. Here in Cambridge, Massachusetts, many people may or may not be aware of this, Cambridge has a nuclear reactor. It's not very well advertised. It's on Massachusetts Ave, on the MIT campus. But we have a working nuclear reactor and have had one since, I think, the late 60s, early 70s. But that's very much not par for the course in the U.S.
I wouldn't be surprised if sometime in the next two to three years we see some equivalent for nuclear energy of what we just saw with the White House AI executive order. We could even see it in the next few months.
I mean, the bottleneck is not physics.
It's permitting and execution.
And that's got to be cleared.
I'll give you a little side story related to this.
The MIT brand, here's the MIT brand.
The MIT brand is absolutely skyrocketing in this AI revolution.
But we found out that the MIT nuclear reactor is going to be exothermic and powering the campus. And I'm like, wow, because we don't have a single nuclear reactor in the state, you know; we can't get that approved. We buy our nuclear power from New Hampshire. But MIT can actually get stuff like that done now, which is crazy, how that brand has skyrocketed in impact with this AI revolution.
All right, I want to jump into robotics, a special, you know, hat tipping here to Saleem.
This is Salim's perfect robot.
It's got something like 14 different arms on it.
Salim, are you happy with this robot?
This looks awesome. Look at all the chickens it can move around very quickly. This is, yeah, love it.
Just, for those of you new to the pod, Salim and I have a running debate about, okay, why humanoid robots? Why just two arms? Well, Salim, you've got all the arms you could possibly put on a body here.
I just love all the wires sticking out of it also.
There is a serious story here too. In China, there's an image doing the rounds of a robot with six arms.
Yeah, I was going to bring that article forward as well.
Wait, what was that, a six-armed robot?
Yes, coming out of China. It's not about having a humanoid robot; it's about mimicry, about integrating into human spaces and kind of moving around where humans have been. So there's some case for it, but in general it's very easy to be 10x more efficient than a human being. We're very, very inefficient in most of the things that we do.
Yeah, I think evolution has, over billions of years, or maybe order of magnitude a billion years, done a search through body space. And there are lots of body shapes that aren't anthropomorphic humanoid bodies: you know, more arms, more legs, more heads, lots of different formats. And I do suspect, Salim, I'm not sure if this is your dream or your nightmare, but we will see a Cambrian explosion of lots of different body shapes tested.
All right.
It's not a dream or a nightmare.
It's just the most effective use case
for trying to get something done.
I'm moving this forward. Call-out to our listeners. I made that on Nano Banana.
Somebody, now that we know about the woolly mouse, make Salim's perfect robot for turning the woolly mouse hair into sweaters for us, and send it to us.
We'll put it on the next pod.
Okay.
That's a hell of a prompt.
All right.
Another form of robots are drones.
And I just found this anti-gravity drone, that's the name of this drone.
It's manufactured by a company called Insta360 in Shenzhen.
For those you don't know, Shenzhen is really sort of the entrepreneurial hotbed in China.
I've visited many times.
You can go there and every part and component you need is there to be manufactured.
So check out this video of an 8K 360-degree drone.
Talk about marketing genius.
So this drone user is using it with VR goggles, and he's on a platform suspended by a balloon at 5,000 feet altitude, and the drone is just flying a beautiful 360 view of him.
The dude standing on a platform suspended from a hot air balloon, that's way more interesting than the drone. That's ridiculous.
Well, it's like what are you going to do to capture someone's eyeballs, their attention, right?
You know, I think Salim is on to something here. Drones are a commodity, but the experience of being on a hot air balloon at altitude in a VR headset controlling a 3D drone, that's got to be some sort of consumer experience that one could build an enormous business out of. Maybe that's more interesting than the drone itself. Yeah.
Oh, all right.
Well, all right, let's move on to our next story.
Wait, if you have the VR headset, why do you need to be suspended up at 5,000 feet?
That makes no sense.
Well, for latency, right?
You want to see yourself suspended on the balloon at altitude.
It's more exciting or something.
All right, let's go to our next robot story.
And this is robotically automated vertical farms, which is an important part of our future food chain.
So, of course, out of China once again, what we're going to see here are these massive vertical farms that are operating 24/7, basically growing at the perfect light frequency, with the perfect soil and drip irrigation pH. The AI is checking to see if it's ripe, if it's ready for harvesting, and the robot arms are harvesting, and this is going basically 24/7 in a city near you. I mean, this is one of the futures: stem-cell-grown meats and vertical farming that help us bring food to individuals. I don't know if you realize this, guys, but half the cost of a meal that you have is food miles, transporting the food: you know, sort of Argentinian beef or Chilean red wine or Idaho potatoes. The average meal in the US travels 2,400 miles to get to your table.
Yeah. Wow.
This is something kind of incredible. We've been tracking
this for a while. You know, we've crossed over into economic efficiency for farming and
agriculture and food production. The calculation I've seen that's the most startling is, if you took 35 skyscrapers in Manhattan, turned them into vertical farms, that would feed the entire
city sustainably. So you think about the food security, logistics, trucking, all of that
stuff. And when you can automate the entire farm, the yield is something like seven to nine
times what you can get with horizontal farming, because you can give exactly the right frequency of light. By the way, you save 99% of freshwater, and 70% of our freshwater
goes to agriculture.
And no pesticides.
And no pesticides, no fertilizer, all of this stuff. The
benefits are kind of incredible. So we're going to see vertical farms next to every restaurant
over time just feeding the restaurant. This is amazing stuff. Yeah. It's probably also just
quickly worth pointing out that video, to my knowledge, was actually put out by the Chinese
government. And this is a new form of soft power, soft influence, broadcasting these visions, presumably ground-truth accurate, of radical forms of automation. I think we're going to see many forms of propaganda, of soft influence, as these amazing tech demonstrations of robotics in action start to hit the internet.
And by the way, a humanoid robot makes no sense in that factory.
I agree.
But a humanoid robot does make sense in this next story, again, out of China.
China is testing retail automation with humanoid robots running the shops.
So what do we have here?
You know, you're walking by, you look inside, you don't see humans, you see a robot behind the desk, and, you know, you want to go in and check it out. So this is the rise of the robot-run convenience store, taking humans out
of the loop. We've seen Amazon do a version of this, right, with their Amazon Go, where you walk
into the shop and you just pick up anything off the shelf and there's cameras, you know, noticing
what you took and noticing what you put back on the shelf. And then you're automatically
rung up as you walk out. But here we've got a two-armed, two-legged humanoid robot doing the store
clerking. I do think that this is going to be viewed as sort of like the atomic vacuum cleaner
moment of 2025. Do you really need a humanoid robot in a convenience store? No, there's probably a more ergonomic solution, like, as you say, Peter, Amazon's Just Walk Out technology. On the one hand. On the other hand, I would love to live in a world where every convenience store is filled with humanoid robots. The US should be doing this as well. I think it's fun. I mean,
I'm sure we'll see this this year, as soon as 1X with their NEO Gamma, or Figure. And we'll be visiting Figure at the end of January to record our next podcast with Brett Adcock. I just spoke to him yesterday. Super excited about going and seeing behind the scenes there.
Two counter-predictions. One is I think this takes at least five years
to have a convenience store operated by a humanoid robot. And by the time those five years arrive, we won't need convenience stores anymore, for various other reasons.
Ah, interesting. Everything's being conveniently taken to you by a drone.
Drone delivered. You know, with Brett Adcock, maybe he'll let us go behind the scenes for real,
like into the factory because with 1X, you know, there's too much proprietary stuff. They wouldn't
let us do it. But if they cleaned up a little bit, maybe we could have done it. But it's
incredible when you go back and see the actual robot construction. It's, uh, God, if we can get
footage of that. We went back and saw it, but we couldn't bring the cameras back there.
is what you're saying.
Yeah, too many secrets.
Another story here back in the U.S.: Boston Dynamics announces its plan to ship automotive volumes of humanoids, and this is from their product lead. I actually interviewed the CEO at FII.
So we're owned by Hyundai for a reason. We can ship automotive volumes of humanoids.
So there's a billion cars right now out there, and these are being manufactured at, you know, tens of millions.
Imagine, well, we've talked about this.
Elon plans to do this.
Brett Adcock plans to do this.
We've heard this from Bernt Børnich.
Now we're hearing this from Atlas, right?
The ability to manufacture at the millions and tens of millions.
Robots building robots.
We don't need billions of cars.
We do need billions of humanoids.
Yeah.
Two armed humanoids.
Salim, two armed humanoids.
Okay.
Don't be armist.
I'm staying silent on this one.
Here's a story that's fun.
Years ago, I had the pleasure of meeting an extraordinary entrepreneur, Eric Migicovsky, who built the Pebble watch. And he did this on a crowdfunding platform. Remind me, which one was it?
It was Kickstarter.
Yeah. This is an amazing story.
Yeah, he was running out of money. He had like three months of cash in the bank, and he was able to get funding for his Pebble watch. So he goes on Kickstarter and he says, hey, if you want one of these watches, fund me. And he went from one problem of not having enough money to another problem. I forget how many orders he had.
So Eric's a fellow Waterloo grad. And he was running out of money, as you say, even coming through Y Combinator. He talked to 20-plus investors in Silicon Valley, and nobody would fund it because hardware was kind of a bad word back then.
So he puts it up on Kickstarter, trying to raise 100 grand to build a prototype of his watch,
gets $10 million worth of orders.
That's right.
And it's an important point because it tells you two or three things.
One, the investors were wrong, fine.
Secondly, if you can do this, why do you need the investor at all?
But the third thing, which I think is the most powerful and one of the big inflection points, we talk a lot about this in Exponential Organizations, is that now that you can do this type of Kickstarter-type thing,
you can actually get market validation for a product
before you build a product.
And we've never been able to do that before
in consumer hardware or consumer products.
So this is an amazing inflection point.
Sony is actually launching anonymous Kickstarter campaigns and then funding the winners, because their product development has not been the greatest over the last couple of decades.
So they're kind of tapping into this modality,
which is really powerful.
So Eric goes from having one problem, of not having money, to another problem: he's got to deliver now on $10 million worth of orders. So he literally takes the first plane out of the U.S. to Shenzhen and basically builds the manufacturing chain in China to deliver this. And it was a great watch. I remember having it; I gave it out at Abundance 360 years ago.
A decade ago.
But then the Apple Watch came out and sort of crushed the marketplace. Well, Eric's come back and he's got something called...
Pivoting to AI.
Yeah, the Pebble Smart Ring. And for 75 bucks, you wear a ring that's got one purpose. It's got a
small little physical button on it. And when you press the button, a microphone records whatever you
want. So this is, you know, do you remember like waking up in the middle of the night, like remembering
something? You just push your ring and you whisper into it. Or you're meeting with somebody, you walk away from your meeting and say, okay, I need to call, you know, X, Y, Z as soon as this is over.
And it's sort of, you know, reminders and it's notes that go into your AI model.
It has one purpose, right?
This is not, you know, tracking your heart rate or your sleep.
It's tracking sort of bits that dribble out of your thoughts during the course of a day.
I love this.
And critically, like, where does the voice go? The voice goes from the ring to an on-device, hosted-on-your-phone large language model that then transcribes and analyzes. So what is this really doing? To the extent that a ring stays on you almost all the time, this is about adding a button to the human body that enables you to speak to a foundation model that's also on your body. And so, question to the moonshot
teammates here. How long until it's not just a button on your body that enables you to talk to a foundation model, but you're swallowing foundation models? How long to the first edible foundation model?
Well, injectable or sub-dermal.
You think it'll be injectable versus edible first?
I think it'll be edible. I mean, if it's edible, it's going to pass through your alimentary canal all the way out to the end.
So, you know, there's an interesting part of the skull, right? The mastoid bone in the back, behind your ear, that's this hollow area of bone. I think it's a great place to implant a permanent, you know, microphone and speaker. That's my prediction: we're going to be implanting a microphone and speaker at the back of your ear.
That exact thing was on Shark Tank, and Mark Cuban vomited.
Really?
You can iterate hardware much faster outside the body than inside the body.
Yes, I don't think it'll be invasive for a while. Yeah, I think we'll see swallowable foundation models in the next two years.
Bluetooth? Like, just Bluetooth in and out of your body to your phone?
Yeah, Bluetooth, but critically, locally hosted, very locally hosted.
Okay. All right, a few topics on space here. Let's move us along, guys. Chile becomes the first Latin American country to enable Starlink direct-to-cell. I mean, listen, Starlink is such the killer app for SpaceX, and the ability for Elon to potentially bypass the current phone industry, where tens and hundreds of billions of dollars have been put down in terms of, you know, 4G- and 5G-level distribution networks, now to be bypassed by Starlink.
Crazy.
But this is what I find.
This next story.
Take a listen.
I mean, can I just go back to that?
Can I just go back to that?
Just for a sec, Peter?
I think this is something, a very big deal because, you know, throughout history, this
is the failure of government.
The UN should have launched something like Starlink.
You know, they should be doing something like this.
The UN doesn't launch...
Hang on, hang on. They're fundamentally unable to, and it needs the private sector to do this type of stuff.
What I find incredible is that the demonetization and the dematerialization of technology now allow a private individual to do something like this that changes the world completely, in such a powerful way. And you can kind of say, well, governments, just step out of the way and let the private sector do everything going forward, because it'll navigate most of this with light regulation. We can navigate most of this stuff now. So I'm really, really excited by this.
okay can I ask you guys a question
because I was trying to look at the data behind this
you know the idea of orbital data centers
wasn't in the conversation how long ago.
I mean, we weren't talking about this a year ago.
We weren't talking about it nine months ago.
There was a guy at Abundance 360, March a year ago.
Alex probably published a paper on this about 14 years ago.
It was.
And if you were reading Accelerando, you had the blueprint for everything we're seeing now.
Sure.
But it wasn't.
But no, March a year ago, one of your abundance 360 guys was talking about it and he was going to do Bitcoin mining in space at that
point in time. And everybody thought he was insane. And we also thought we couldn't do the cooling.
So that was only March a year ago. So that's nine months. But there's a spike. So I know at that point
it was nothing. Yeah. But the last six months, really the last four months, all of a sudden,
every single player, we've got companies out of China. We saw at the last pod, we have now a company
out of Europe. And we have a dozen companies in the U.S. And then I found this video clip,
which I found fascinating because Google was not discussing it a few months ago. But here we are.
Over to Sundar.
How do we one day have data centers in space so that we can better harness the energy
from the sun?
You know, that is 100 trillion times more energy than what we produce as all of humanity today.
So we want to put these data centers in space, closer to the sun.
And I think we are taking our first step in '27.
We'll send tiny racks of machines and have them in satellites, test them out, and then
start scaling from there.
But there's no doubt to me that a decade or so away, we'll be viewing it as a more normal way to build data centers.
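As a rough sanity check on that "100 trillion times" figure, here's a one-liner comparison. Both constants are round-number assumptions (total solar luminosity and world primary energy use), not figures from the clip:

```python
# Rough sanity check: total solar power output vs. current human energy use.

SOLAR_LUMINOSITY_W = 3.8e26   # W, total power output of the Sun (assumed round number)
HUMAN_POWER_W = 1.9e13        # W, world primary energy use, roughly 19 TW (assumed)

ratio = SOLAR_LUMINOSITY_W / HUMAN_POWER_W
print(f"Sun / humanity power ratio: {ratio:.1e}")  # → 2.0e+13
```

That comes out around 2 x 10^13, about 20 trillion, so the quoted "100 trillion" is within an order of magnitude under these assumptions.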
I never thought I'd hear Sundar say tiny racks of machines.
That's hilarious to me.
I just love the schoolboy level excitement he's got there.
You can see him actually grinning.
He's like, oh, look, data centers in space.
This is amazing.
I love the "AI generated" label.
The big banner on top of that video was "AI generated."
It's like, we're going to always tell you that this scene
in deep space is AI-generated, as if it was not.
The reason, Peter, why you know, I mean, even though I may be a little bit glib
in saying, well, if you had read Accelerando, this would have been obvious to you almost 30
years ago, on the one hand.
The reason you know that this is a sudden phase change in the way the industry works is
Google's plans. This is public information: Google's plan is to launch these,
so it's TPUs, first of all.
Google's launching TPU-based data centers, obviously, on Planet Labs
satellites. It's not Google's own satellites, it's Planet Labs. So, you know, if Google's
hitching a ride via SpaceX on Planet's satellites, this is all of a sudden, I'll say, that's the
second point: sun-synchronous orbit is about to become very, very crowded. Sun-synchronous orbit is a
low Earth orbit in which satellites that want to always have sun exposure never pass behind the Earth,
never sit in its shadow, and always have solar power for their panels. It's going to be very crowded.
It's real estate, it's a limitation, and there currently are limits on how
close you can get to other satellites. That's going to be a real challenge,
because we've got, you know, a dozen companies all wanting to do this at the same time.
It's going to be a race, and how the FCC, which governs this, is going to decide who gets the
territory and who doesn't. In geostationary orbit, there's a very clear demarcation of, I own
these orbital slots over my country.
But low Earth orbit doesn't have that situation.
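For context on why sun-synchronous orbit is a specific, scarce band: a minimal sketch of the standard J2 nodal-precession relation, which picks out the single inclination, for each altitude, whose orbital plane rotates exactly once per year to track the sun. The Earth constants below are standard published values:

```python
import math

# Sun-synchronous inclination for a circular orbit at a given altitude,
# from the J2 nodal-precession formula.
MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6378.137      # km, Earth equatorial radius
J2 = 1.08263e-3    # Earth's oblateness coefficient
OMEGA_SS = 2 * math.pi / (365.2422 * 86400)  # rad/s, one node revolution per year

def sun_sync_inclination_deg(altitude_km: float) -> float:
    """Inclination (degrees) that makes the orbital plane precess once per year."""
    a = RE + altitude_km  # semi-major axis of a circular orbit
    cos_i = -2 * OMEGA_SS * a**3.5 / (3 * J2 * RE**2 * math.sqrt(MU))
    return math.degrees(math.acos(cos_i))

print(round(sun_sync_inclination_deg(600), 1))  # → 97.8
```

At 600 km altitude that works out to roughly 97.8 degrees, which is why sun-synchronous satellites all cluster in a narrow band of near-polar, slightly retrograde orbits, hence the crowding concern.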
Peter, you're making the case for the Dyson Swarm.
Again, the Dyson swarm: so we move out of GEO, we move out of LEO,
and Sundar himself in this clip was saying,
we want to get closer to the sun.
So we're sleepwalking straight into the Dyson swarm.
Well, Peter, to your prior point, too,
this was science fiction a year ago,
and now suddenly it's mainstream among the top CEOs in the country.
How does that happen?
But, you know, you look at Elon and his credibility.
You look at, you know, Alex, your credibility.
A lot of things that were impossible a year ago are going to be very easy a year from today.
And if your track record of predicting them is near perfect, then, you know, the credibility of these crazy-sounding ideas immediately catches on.
And you're going to see a lot more of that, I think, because the, you know, the capabilities are exponentially growing.
But, you know, some of these things are truly harebrained and some of them actually are.
Is there a clear line of sight on solving the heat dissipation problem for the satellite data center?
Yeah, radiate in the direction of the cosmic microwave background.
Yeah, the final answer shocked me, but for every square meter of solar panel,
it only takes the same square meter of radiant cooling, radiative cooling, which really surprised me.
We estimated it on Gemini, which was wrong, at 10x.
We thought you'd need 10x more area, and that was just wrong.
It's cooling at 1x, and I don't know how they do it, and it's all aluminum-based, so it's not weird, expensive metals or anything like that.
So, yeah, pointed into deep space, like Alex has been saying forever, and it's, for whatever reason, just flat-out working.
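That roughly 1:1 panel-to-radiator claim is easy to sanity-check with the Stefan-Boltzmann law. A minimal sketch, assuming 20%-efficient panels and a 300 K radiator with emissivity 0.9 radiating from one side into deep space (all assumed numbers for illustration, not figures from the show):

```python
# Sanity check of the ~1:1 panel-to-radiator area claim via Stefan-Boltzmann.
SIGMA = 5.670e-8      # W/m^2/K^4, Stefan-Boltzmann constant
SOLAR_FLUX = 1361.0   # W/m^2, solar constant at 1 AU
PANEL_EFF = 0.20      # assumed: fraction of sunlight converted to electricity
T_RAD = 300.0         # K, assumed radiator temperature
EMISSIVITY = 0.9      # assumed radiator emissivity

heat_w_per_m2_panel = SOLAR_FLUX * PANEL_EFF          # electrical power (ends up as heat)
radiated_w_per_m2 = EMISSIVITY * SIGMA * T_RAD**4     # heat rejected per m^2 of radiator
area_ratio = heat_w_per_m2_panel / radiated_w_per_m2
print(f"radiator m^2 per panel m^2: {area_ratio:.2f}")  # → 0.66
```

Under these assumptions you need roughly 0.66 m^2 of radiator per m^2 of panel, so a ~1:1 area ratio is plausible; hotter chips or two-sided radiators shrink the number, colder radiators grow it.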
So, I took all of the comments from our last two pods and ran them through one of the LLMs and said, okay, pull out the most interesting AMA questions.
Here we see a list of 10 of them, gentlemen.
Let's pick out a few to answer.
I'll start with one, which is how do you make these space-based AI data centers fault-tolerant, right?
There's sunspots.
There is the potential for, you know, disruption from a, even from an EMP at some point, God forbid.
Any ideas on making them fault tolerant?
Those are two very different faults.
Yeah.
Yeah, they're both disruptive.
There are lots of failure modes.
So I do think this is another multi-billion dollar company that someone should start.
There are many techniques right now ranging from switching from silicon-based electronics to maybe other semiconductors.
Yeah, like gallium arsenide, II-VI or III-V semiconductors that are more fault-tolerant and have different band gaps, to designing electronics that are intrinsically, at the design level, able to tolerate faults,
to just doing what right now is a standard protocol,
which is if there's a solar storm or bad space weather,
you shut down or you switch them to safety mode.
So there are lots of partial solutions here.
To my knowledge,
there isn't like the definitive industry standard solution
of what happens if you're in the middle of a training run.
I just hate to think about the idea of all the data centers in orbit shutting down
because there's a solar storm for the next 12 hours and we're getting hit by alpha particles.
But how do we solve that in general?
Like if there's bad weather or a blackout on Earth, you have diversification.
So if anything, again, like let's put space-based AI data centers throughout the solar system.
So if there's bad space weather in one part, there isn't in another.
That's a great point, actually.
I bet earthquakes and tsunamis and hurricanes are much bigger problem than solar storms.
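The "what happens mid-training-run" worry usually comes down to checkpointing: periodically persist state so that a forced safe-mode shutdown costs at most one checkpoint interval. A minimal sketch (the file name, counter state, and ten-step interval are all arbitrary illustration; a real system would checkpoint model weights and optimizer state the same way):

```python
import os
import pickle

# Minimal checkpoint-and-resume loop: a sketch of how a training run
# could survive forced safe-mode shutdowns (e.g. during a solar storm).
CKPT = "train_state.pkl"

def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"step": 0}

def save_state(state):
    """Write to a temp file, then atomically rename into place."""
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)  # atomic: a crash never leaves a torn checkpoint

state = load_state()
while state["step"] < 100:
    state["step"] += 1        # one unit of work (stand-in for a training step)
    if state["step"] % 10 == 0:
        save_state(state)     # periodic checkpoint; a restart resumes from here

print(state["step"])  # → 100
```

The atomic rename is the key design choice: a crash during the write can never leave a half-written checkpoint behind, so restart always finds a consistent state.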
All right, let's pick another one of these.
Hey, just to make a point, there's kind of a flaw in the question, too, because when you have a Skylab up there, you want it to be up there for 20, 30 years, and you don't want it to get hit and destroyed or anything.
But the space-based data centers need to be replaced every three years with new chips.
And so it's a constant launch, recycle, launch, recycle, launch, recycle thing.
So if somebody EMPs the entire thing and destroys it, then there's a war, of course.
But it was going to get replaced in a three-year cycle anyway.
It's not like Skylab.
Interesting.
One of the things we did at Planetary Resources, when we were looking at asteroid mining, was set up the software so we
would expect constant disruption, and the system focused on rapid restart, so it would
boot up extraordinarily fast. All right, I'll tell a quick story here, if I can, but I want you to choose
one of these AMA questions also. Sure. You and I were sitting in a hotel in
Dubai, and Richard Branson walks by, and he said hello, and we grabbed a quick drink, and he
said, Peter, how's my investment in, you know, Planetary Resources going? And you described
how it was going, the NASA contracts, etc. And Richard turns to me and goes, this is why
Peter's interesting, because in a random hotel lobby I'm suddenly having a conversation about
asteroid mining off-planet. It's just like, this conversation happens nowhere else in the world
except with Peter.
I love you so much.
It was fun.
All right,
Salim,
pick a question here.
Is this question bingo?
Should we expect G20 level initiatives for UBI within the decade?
I would hope it would be within a year.
It needs to happen very,
very fast.
I think it'll force the conversation.
Universal basic income, right?
UBI, soon to be replaced by UBS,
universal basic services.
But I think you shouldn't expect much from the G20 period.
I think that's the flaw in the question.
But in general, we're going to expect to see this rolling out in a pretty rapid way,
lots and lots of experiments being done all over the world on this, because they have to.
We have to move to something like that;
the social contract is completely being wiped out in the current model.
Dave, why don't you pick a question next?
Okay, I'll take number one.
How can AI lift up those who aren't international entrepreneurs?
I think, one, listen to the podcast, get subscriptions, play with the tools,
and then brand yourself as an AI expert
within your company, or if you're not going to be an entrepreneur, that's fine.
The demand for this knowledge inside regular corporate world
is going to go through the roof in 2026,
and if everybody around you knows you're the AI person,
and also don't be intimidated.
Historically, if you wanted to be a software guy,
you needed to be very, very softwarey.
That's not true with AI.
It's much more intuition-based.
You can build virtually anything with voice prompts.
So it's just knowing how it applies in your industry will separate you.
So just jump in the game.
Yep.
Amazing.
Alex, do you have one?
I'll take question number four for $10 trillion.
Is pure scaling enough or what comes after?
So I think the answer, I think it's a true question.
I think pure scaling probably is, by pure scaling, I'll construe the question to mean
we freeze all algorithms. No new algorithms are allowed to be developed in AI, but we're allowed to shovel more and more compute, especially inference time compute, into the existing algorithms.
I do strongly suspect that if we froze all the algorithms we have today, no new architectures, but we get lots more compute coming online, the existing architectures combined with scaled compute will be enough to give us AI smart enough to tell us what a perfect algorithm would be, to the point
where we get our highly coveted AI researcher, recursive self-improvement, the final algorithm,
and we can just ask our scaled algorithms what comes after.
So in summary, my answer to question number four is, yes, I think probably pure scaling is sufficient.
Is it all that we need?
No, of course. In the real world, algorithmic development is continuing, and we're going
to get both.
But could we live with pure scaling at this point?
My guess is probably yes.
All right. Let's answer one more here. Number three, how do the Moonshotmates prepare day to day for each podcast episode?
Yeah, I think we can share that. So let's see. Alex, you're constantly providing the team with an incredible list of all the breakthrough stories.
You're searching. How many AI stories per day do you think you generate for us to look at?
Oh, gosh. Order of magnitude 20 important stories per day.
I'm also, at this point, like, I spend so much time just reading, reading primary sources, archive papers, et cetera, living in the zeitgeist of the moment.
Because, after all (drink!), the singularity comes around only approximately one time per planet.
So it's a special time.
I do also, at this point, you know, probably should say, I'm also turning all of these stories, in addition, obviously, to research for this show, into a
quasi-daily newsletter, just trying to help. Follow Alex on X. He puts out some incredible
daily sort of interesting AI rants, I would say, or AI visions. Follow me on X. Follow me on LinkedIn.
It's a genre I'm trying to popularize. I'm calling it Sci Non-Fi. It's written in a style
inspired by Charlie Stross's Accelerando and others, written in the style of science fiction,
except it's all grounded in what's actually happening. So Alex
generates, you know, on the order of 150 stories a week, I'll generate probably 20 or 30
stories a week. We get some from Salim, some from Dave. All this gets sort of put into different
categories. We then sort of cut it down to the top 30 stories. I typically spend about 10 hours
sort of playing slide shuffle, working with Gianluca and Dana, who are incredible
members of our team. And then we do research on those stories to get the details and think about them.
And I'm probably spending a good 15 hours of my week focused on this. How about you, Dave and
Salim? Well, everything you just said, you know, I lean entirely on Alex's internal feed,
which now you can get on X, you know, it's a digest of the same thing. That's brand new as of the
last week or so. So take advantage of it. But I've been reading that internally for what a year now,
I guess, or more, which is very time-consuming, but I need to know it all.
The only other thing I do is I route all the really big stuff over to the venture capital
team and say, what are the business implications of this, which we need to know anyway to run
our venture fund.
And then I try and bring those stories back into the Moonshots feed so that we can talk about
not just the technology, but what it means to investors, to business people, to people
career planning, and all that.
Salim.
I spend, I source a few stories, but nowhere near as much as the rest of you, but I think
the, I spend a chunk of time the minute you guys release the deck, I look through it and then
find it's changed again, and so I have to restart again. So I'm always playing catch up with
the slides that you, and then Peter, in the last night, God knows what you do, but you change
it all again and I have to re-research it. I spend a few hours a week looking up the terms
in the papers that Alex surfaces
because half of it is Greek.
And then I'll also ask my community members,
my OpenExO community.
So there's a hive mind reaction to some of this,
which I think is very powerful,
similar to Dave asking his team.
Just again, to let our subscribers know.
Overall, though it's just sucking up more and more time per week,
but it's such a powerful and important thing.
It's the most fun thing we do.
Come on.
It is super fun.
But what no one ever warned you about, Salim, is it's like
the singularity of covering the singularity.
It's a singularity of time suck.
It's just, it's a black hole.
It's a black hole, Dyson swarm forming around my own.
Singularity wants your attention.
So we hope for all our subscribers and listeners that you guys appreciate it.
We put a huge amount of work because we care about this deeply.
I need to give a quick plug.
Quick plug.
I'm doing my meaning of life session next week.
We've already, we're almost sold out.
It's going to be pretty amazing.
It's going to go for several hours.
starting 11 o'clock Wednesday. Come armed with any question you have about life, and judge me by
how well this framework answers that question. All right, let's get to our outro music here from
David. "Drink All"? I think it's the perfect name for a drinking game. That can't be real. Oh my God, it's a
bingo card. I love it. So this is a bingo card, and you can see "tile the earth." Have your glasses of
water ready. Yeah, I do. Cybernetics.
Okay, let's listen to this. Where's the humanoid robot entry?
Six-arm humanoid robots. Robots down the bottom. And cloud computing on the bottom left.
Okay, let's take a listen to David's outro music. Thank you, David, for producing this for us.
And again, if you're listening and you are creating music videos and you want to create an outro song for us, send it over.
We'd love to listen to it and perhaps select it. All right, let's take a listen.
Take a sip when Peter says
Good morning, gentlemen
Two if he name-drops "just got back from" again
Drink when Alex says
Better benchmarks or vending bench
Finish a glass if he whispers Dyson swarm at last
Moonshot bingo
Moonshot bingo
Tile the earth with compute dream
Moonshot bingo
Moonshot bingo
Close the feedback loop, chug and sing
sip when someone says we'll cure every disease
Two when Dave mentions startups or singularity
Drink when Salim drops "insert my usual objection"
Sip when you hit the phrase rat race or leapfrog successively
Moonshot bingo.
Moonshot bingo.
Tile the earth with compute.
Drink.
Moonshot bingo
Moonshot bingo
Moonshot bingo
Little data centers
Scullet quick
Oh
One sip for every code red, two for humanity's last exam.
Three when Alex is solving math.
Yes, that old plan
Big up when anyone says
Universal Basic Services
Pass out when Peter yells
That's a moonshot, ladies and gentlemen
Dang drop and dang
Moonshot bingo
All right
Amazing
That is awesome
That's a moonshot ladies and gentlemen
You know
This is again a tribute to the creative nature
Of all of our subscribers
Thank you guys
And also the tools out there
To allow you to do things like this
Guys, I think that's the best yet. Amazing. Have an amazing weekend. Yeah, super creative. Take care, folks.
Every week my team and I study the top 10 technology metatrends that will transform industries over the
decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing to transport,
energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts
our lives, our companies, and our careers. If you want me to share these metatrends with you, I write a newsletter
twice a week, sending it out as a short two-minute read via email.
And if you want to discover the most important metatrends 10 years before anyone else,
this report's for you.
Readers include founders and CEOs from the world's most disruptive companies
and entrepreneurs building the world's most disruptive tech.
It's not for you if you don't want to be informed about what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to diamandis.com/metatrends
to gain access to the trends 10 years before anyone else.
All right, now back to this episode.
