Moonshots with Peter Diamandis - OpenAI Acquires OpenClaw, 400x Cost Collapse, & Why India Wins the Talent War | EP #231
Episode Date: February 18, 2026
The mates do a live Moonshots episode and discuss OpenAI's acquisition of OpenClaw, 400x cost reduction on ARC-AGI-1, and the AI Talent War.
Read the Solve Everything Paper: https://solveeverythin...g.org/
Get notified once we go live during Abundance360: https://www.abundance360.com/livestream
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends
Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.
My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Connect with Peter: X, Instagram. Connect with Dave: X, LinkedIn. Connect with Salim: X. Join Salim's Workshop to build your ExO. Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads, YouTube.
Listen to MOONSHOTS: Apple, YouTube.
*Recorded on February 10th, 2026. *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hey, you know why we're laughing?
Because, you know, the old saying, AI is easy, AV is hard.
We're trying to get our damn AV working.
I'm in Germany.
It's midnight here.
Salim has taken over the...
What are you doing in Germany?
Hold on.
I got to figure this out, guys.
I've got to share the screen.
Salim, were you AV qualified in elementary school?
I mean, did you go through that program?
I was not AV qualified.
Sure.
I mean, it's going to be a miracle if you get this working.
So hold on, it says also share tab audio.
Is that what you want, Donna?
Yeah, probably.
Try it.
What could possibly go wrong?
Actually, go to the outro music and crank it.
I can rock to it.
Dave, did you go through AV certification when you were in school?
Absolutely not.
It was so uncool.
I really wanted to.
Now, Salim, go at the beginning of the deck.
Wait, wait.
Preview it backwards.
Boom.
All right.
You've got to try and play a video.
So hold on a second.
I should get half production credit for this episode.
Now, you're getting with that.
Boom.
All right.
You've got to try and play a video.
So hold on a second.
I should get half production credit for this episode.
Am I in a time loop?
Yeah.
Ain't that weird?
Back boom.
All right.
You've got to try and play a video.
Cool.
So.
Are you guys in a time loop?
Are you guys hearing the same thing I am?
I think that was because Nick was in the room.
All right.
Are we good?
We're good.
We're live.
All right.
All right.
Live.
Hi, everyone.
All right.
Welcome to the raw backstage chaos that we have here at Moonshots.
All right, everybody.
Good morning, good afternoon, good evening.
And welcome to another episode of.
WTF just happened in tech. I'm here with DB2.
Salim Ismail, and AWG, Alex Wissner-Gross, Ph.D. I'm here in Germany, in Stuttgart.
And we want to get you future ready. We have an incredible episode talking about Moltbots, of course, about the race between all the hyperscalers, and a dive into energy and data centers.
Let's jump in. The supersonic tsunami. The singularity is now.
It is midnight in Stuttgart. You can't just drop.
that and not tell us why you're there.
I'm here for some longevity treatments.
Tell you about it sometime later.
All right.
Selim onwards.
I did a pilgrimage to Stuttgart once just to go visit the Porsche Museum, so go there.
I should go while I'm here.
Right.
Let's jump in with Gemini, OpenAI and XAI.
All right.
I think this one deserves going to our resident benchmark Brainiac.
That's you, Alex.
That's not me.
So tell us what's going on here.
The race, the leapfrogging continues between Sonnet 4.6, Grok.
In living color, no less.
So let's take this seriatim.
Sonnet 4.6, very interesting release.
I think several interesting points.
One, I think Anthropic has really been pioneering one edge of,
call it the scaling phase space,
where they keep the prices of the model tiers the same,
but increase the capabilities.
So Sonnet 4.6: same price per token-ish as Sonnet 4.5, but increased capabilities. I'll talk about that in one second. Whereas, say, OpenAI is reducing the cost per token while keeping capabilities more or less constant through distillation and other processes. That's interesting. Point one. Now let's actually talk about the progress on the benchmarks, the evals. I think it is
nothing short of astonishing.
If you look at the GDPval benchmark (again, gross domestic product eval) that OpenAI launched, Anthropic is leading.
Anthropic, in the form of Sonnet 4.6 (not even Opus 4.6, Sonnet 4.6), now has the state of the art on GDPval and one other eval that is intended to encapsulate knowledge work.
I've said on the pod in the past, knowledge work is cooked, cooked, two times for emphasis,
usually in reference to GDPval, and we're seeing it get even more cooked, charbroiled at this point,
thanks to Sonnet 4.6. I also think, taking a step back, computer use is becoming a killer app for many of these models,
and Sonnet 4.6 has state-of-the-art performance on a handful of
computer-use benchmarks. And anyone who's been using Opus 4.6, as has been the case for me
for the past week and a half or so, for any tasks:
I think Anthropic's thesis, that focusing on software engineering and code generation
is the critical path to recursive self-improvement (versus maybe, charitably, getting distracted
by image and video generation and all of these other modalities), seems like it's
working. I can accomplish tasks that seem borderline magical with Opus 4.6. I'm sure.
I've got to ask here because, you know, this is, I'm channeling one of my kids who goes,
dad, every week it's like four point this and four point that. It's better. It's better and better.
Yeah, we got it. It's getting faster. It's getting better. It's getting cheaper.
And aren't the models at this point just optimizing for the benchmarks? I mean, at the end of the day,
this is a gradual increase up into the right or down into the left, whatever you want.
I'm just trying to, you know, understand other than, yep, newsflash, it's faster and cheaper this week than last week.
Yeah, it is so opposite.
I know.
It is so opposite of what that implies.
I'm trying to channel, you know, our viewers listening and watching this.
Yeah, yeah.
I totally get it.
I mentioned a couple podcasts ago that when these curves get close to 100%,
they look like they're diminishing returns,
but in reality,
their capabilities and their ability to change the world is exponentially going the other direction.
I think that's what you're getting at here,
because you see a little tick up in these numbers, and you're like, oh, so what?
But then when you actually use it day to day, it's like, boom.
Oh, my God.
I mean, just the last two weeks of change is mind-blowing.
Also, you know, when they tick up the number,
in the versions, they're actually improving the chain of thought reasoning on top of that quietly in the background without ticking up the numbers.
So day over day, I'm noticing improvements that are mind-blowing that aren't actually showing up in the dot releases and the new benchmarks.
Sorry, Alex, go ahead and answer the question.
I just wanted to talk on it.
I was going to taunt Peter a little bit.
I mean, we are so spoiled to even be contemplating asking that question.
I mean, moonshots are our namesake, so it would be like:
okay, we have hotels on the moon now and vacations to the moon,
and maybe you can travel there once per human lifetime, unaided, versus zero times.
Oh, but yeah, we've had airplanes for a while.
We are so spoiled to even be asking the question.
If you live day by day with, say, Claude Opus 4.5 versus 4.6 qualitatively,
it is an enormous change forward.
It can solve hard problems.
Most of our viewers probably don't live with it day by day and aren't using it at
the maximum extreme.
I mean, I think one of the things that you and I talked about in the Solve Everything white paper is, you know, we're on this path. We've
put the initial frame in place, and we're heading towards, you know, ASI, whatever you
want to call it. So we're going to be reporting this every week, this leapfrogging between models.
And, you know, 100x faster, 100x cheaper.
I do think what you said that's interesting is two different strategies here, right?
One, Anthropic, you said, is holding cost and increasing speed,
while OpenAI is dropping cost and maintaining speed.
I think performance, not speed, but yes.
Okay, performance.
I think that's a fascinating strategy, right?
Because we're going to get to it in a little bit: OpenAI, I think, is going for a land grab,
a land grab on global consumers, hitting 900 million.
And soon in India, you know, adding hundreds of millions.
So the price is the most important thing for grabbing the consumer.
While I think strategically here, Anthropic is focused on, you know, enterprise business
and performance is far more important for the enterprise.
And their margins.
We've seen this business pattern play out over and over again historically. Call it, again,
and this is very heuristic, but call it: Anthropic is to OpenAI as Apple is to Google, or something like that; at least in the mobile space, maybe iOS is to Android.
There are many, many times this business pattern of emphasizing quality and margins on the one hand at a constant price versus emphasizing ubiquity and ultra-low cost at the other end.
This has played out over and over again many times.
It's the same old story, but I do think Anthropic, if I had to say which set of models,
which model family is the closest to embodying the singularity and recursive self-improvement
right now today since it's live February 17, 2026, it's the Anthropic family.
It's not OpenAI or Google.
Kudos to Dario.
I mean, we'll get to Google in a little bit.
Let's talk about xAI launching Grok 4.2 beta.
I love these names.
Our live cast viewers here are saying it's poop.
What's poop?
Yeah, 4.2.
Have you guys tried it?
It's poop.
That's what they're saying.
The risk with the Grok family. So, I had access.
The risk is always, or I should say the accusation is always: is it benchmaxing?
Peter, you were asking about benchmaxing earlier.
Yes.
Historically.
It's teaching to the test.
Right. Historically, some of the earlier Grok models have felt very benchmaxed. It's only been available for a few hours in beta form, so I haven't had an opportunity to do thorough testing. What I think is interesting about Grok: I assume we're supposed to pronounce it 4.20, one of Elon's favorite numbers.
Yeah. Either that or 4.69.
But what's interesting to me at least is this is the first major frontier model release that I've seen that's launched with a team of agents by default rather than a single agent.
And OpenAI has a team under Noam that's been looking at agents for a while.
I think every frontier lab at this point has multi-agent teams built in in some form, somewhere in the family.
But I think it's a really interesting strategy to build in by default a multi-agent team.
There are lots of potential reasons why a multi-agent team versus just a single agent running serially might be interesting.
Like you can do things in parallel and explore possibilities in parallel with multiple agents.
But this may be the direction of the future, just like we saw the megahertz and then gigahertz race plateau out due to the breakdown of Dennard scaling with microprocessors,
and then we saw a transition from clock speeds to multiple core counts.
Maybe we're about to see something like this happen with frontier models where,
Maybe capabilities, again, this is very speculative, maybe along a certain dimension of scaling,
obviously pre-training has sort of transitioned to reasoning scaling and other forms of scaling.
Maybe we're seeing the dawn of multi-agent teaming scaling where you get better capabilities
by scaling the number of agents in parallel working on a problem.
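To make the multi-agent scaling idea above concrete, here is a toy best-of-n sketch: several independent "agents" attack the same problem in parallel and the highest-scoring answer wins. The agents are stand-in random searchers, not real model calls, and names like `agent_attempt` and `multi_agent_solve` are purely illustrative assumptions.

```python
import concurrent.futures
import random

# Toy stand-in for one agent's attempt at a problem. Each "agent"
# explores with its own random seed and returns a candidate answer
# plus a self-assessed score (closer to the target is better).
def agent_attempt(seed: int, target: int = 42) -> tuple[int, int]:
    rng = random.Random(seed)
    candidate = rng.randint(0, 100)
    score = -abs(candidate - target)
    return candidate, score

def multi_agent_solve(n_agents: int = 8) -> int:
    # Run all agents in parallel and keep the best answer: analogous
    # to scaling core counts instead of clock speed.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_agents) as pool:
        results = list(pool.map(agent_attempt, range(n_agents)))
    best_candidate, _ = max(results, key=lambda r: r[1])
    return best_candidate
```

The point of the pattern is that quality can improve with agent count even when no single agent gets smarter, which is exactly the clock-speed-to-core-count analogy.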
Alex, the viewers all think it's poop here.
But I haven't actually tried it.
I use Claude all the time and the other models every day.
I haven't felt any great compulsion to try 4.2 because, you know, Elon told us 5 is coming in March anyway.
My understanding was that 5 is a massive, massive expansion in every way, you know, in training set size, in parameter count, everything.
I never thought about anything meaningful between here and there.
I was just waiting for that.
But do you know any more detail on what this thing is?
And should the viewers be trying it or not?
I think it's worth in general trying every frontier model from, call it the top four or five labs that come out.
If you're doing stuff in AI, if you feel sufficiently abstracted from the bleeding edge of the frontier, I think you should still try it just to be familiar with the raw capabilities.
But based on what I've seen thus far, Grok 4.20, or however we pronounce it, is...
Grok 4.20. It's going to be 4.20. It's not the bleeding edge that's pushing forward capabilities, as far as I can tell at this point in time. But it is interesting that it's multi-agent.
Salim, let's go to the, let's go to Google next. And some more of the bleeding edge.
Switching windows here.
I'll tell you on that. Moonshots, going to the next slide. There we go. All right. Gemini 3 Deep Think. I just love these names. I think the naming
protocols for all of these models have got to be rethought. But, I mean, I think the one benchmark
everybody keeps on tracking, at least I do, is humanity's last exam just for fun because of, you know,
sort of existential nature of it. Yes, it is our last exam. And we see here that Gemini
3 Deep Think hits 48.4. But most importantly, and this is the, I guess, the Open A playbook,
400-fold cost reduction. That's extraordinary.
It is. And also, to the point about naming, this isn't even, I think, the first Gemini
3 Deep Think. This is the second, the new and updated, Gemini 3 Deep Think.
So agreed that the naming could use some work, but the new Gemini 3 Deep Think is remarkable.
If you just look again at the evals: there had been percolating for a while the so-called internal model,
the one that beat the International Math Olympiad and was achieving breakthrough performance
at other high school science competitions. This is the model that achieves gold-level performance
at the Physics Olympiad, the Math Olympiad, the Chemistry Olympiad. On Codeforces,
I think the statistic is that there are only seven humans now on Earth who can beat this model
at competitive programming. So I think, you know, Peter, you and I spoke in Solve Everything
about what we called a solution wavefront
propagating outward from math and coding
to different fields.
This is the beginning of the wavefront.
This is the infection, the contagion,
spreading from coding and math to physics and chemistry.
It also does 3D design,
although I keep trying to persuade it
to do 3D design unsuccessfully.
It keeps producing intermediate products,
but this feels like the kickoff,
the starting gun for the solution wavefront
that we spoke about. And we'll see, we'll see that. And I think, I mean, the visual image that I want
everybody listening to think about is when you have this kind of, you know, this weapon of superintelligence,
where do you deploy it? Where do you aim it at, right? What are you measuring and where are you going to,
you know, what is your massive transformative purpose? What is the challenge you want solved?
Because we're going to have this kind of capacity. And, you know, ultimately it's going to be your
decision as the, at least the human utilizing agent for the time being before it's the agent
utilizing the human, where you want to deploy it, where do you want to use this wave front to
transform, do a phase change if you would.
A couple of comments for me.
One is, you know, this 400-times cost reduction is incredible.
I mean, that is the big headline here.
When frontier reasoning costs seven bucks instead of $3,000, think of the implication for
startups that gain institutional powers.
guess what when it's pennies next year
but cost curves are now going to start collapsing industries
before the technology does right
that's like really quite something
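For anyone following along at home, the arithmetic behind that headline is just division. The $3,000 and $7 figures are the ones quoted above; the helper name `cost_reduction` is purely illustrative.

```python
# Cost-collapse arithmetic: $3,000 down to $7 per frontier reasoning
# task is roughly a 428-fold reduction, i.e. the "~400x" headline.
def cost_reduction(old_cost: float, new_cost: float) -> float:
    return old_cost / new_cost

factor = cost_reduction(3000, 7)
print(round(factor))  # prints 429, on the order of the 400x headline
```

Another hundredfold drop at the same pace would put the same task at about seven cents.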
and by the way a viewer
Brian Minto, has asked, Alex, that you
read Accelerando live,
which I think you should do on the podcast
just go through like Mr. Beast counting to 100,000 live
he just read the whole book live in one sitting
I'll do better. How about we get
Charlie Stross as a guest on the pod?
I think that would be awesome.
That would be awesome.
Hey, before we move off the benchmarks, two things that have changed for me in the last two weeks that are, just step function changes for me.
The first is I just don't even look at the code anymore.
I ask 4.6 with a little brain, deep think, Claude 4.6, to build something.
And then I entirely poll it on what it built and look at its functionality, don't even look at the code.
The other thing is I ask it to document everything it does and just store it somewhere on my hard drive.
And I don't even specify a location anymore.
I just say build some coherent file structure and put things in an organized place, and it just does it.
So now if I want to get it back, I don't even know where it is.
I just have to ask for it, but it knows, it remembers everything that it did.
So those are two big, big changes versus just a couple weeks ago.
It's a step function from Google.
I mean, once you start using Gmail, you don't bother trying to store stuff in folders.
You use search, and now we do the same thing with AI interface, right?
It's crazy.
You know, Alex, question for you, you know, these AI systems are now beginning to catch human errors in scientific proceedings and scientific papers that have been written.
And, I mean, it's going to be interesting.
Like, you know, we've talked about in the past how, when
quantum computing comes along, it's going to go and decrypt all the files from the past, before we had post-quantum encryption.
So I wonder if AI is going to be aimed at looking at all the scientific literature over the last 100 years and show us where all the mistakes were.
I'd count on it.
It's going to topple some Nobel prizes.
Oh, I think that's the least of it.
I can only imagine the left turns that human civilization has taken in the past, call it, 80 years when it should have taken a right turn instead.
and we're going to discover that after the fact.
I think if I had to compare the shock to civilization of discovering all the wrong turns
that we've taken, which AI will uncover, versus, say, quantum computers decrypting some pre-post-quantum-cryptography
files, I think it's going to be a night and day difference.
I think AI will shock humanity to its core in terms of the mistakes that it discovers that we've made over the past century.
Fasten your seatbelts, everybody.
That plus how much have we missed, right?
How many scientific experiments did somebody look at the wrong thing and miss the unbelievable conclusion over there?
That, I think, is going to be the huge outcome.
I think it's a continuum.
I mean, oh, go ahead, Dave, sorry.
Well, when I'm spooling up a new agent now, you know, I used to be very thoughtful about what I fed it to feed into the context window to get it up to speed.
Now I just ask it to read about 1,000 pages of markdown documents.
And it does it in about 10, 20 seconds.
and it's fully up to speed.
And the context window and also its ability to sort through all the garbage is growing or improving faster than my ability to clean it up anyway.
So my new agents, you know, I'll boot up, you know, two or three agents every couple hours.
And I just say, look, read everything.
Read everything I've ever given to any agent before.
And then the new agent is up to speed.
And it can actually pick up a project right where I left off.
And so I think, you know, the future.
Your future employees will be the same, right?
Read every email and every Slack and everything.
Also, what's not intuitive is the complexity of the document doesn't seem to matter.
Like if you're teaching a kindergartner to become a college graduate in like 30 seconds,
you move, you know, through reading and writing and then, you know, basic arithmetic,
you work your way up.
But here you just bombard it with super, super technical, complicated documents that would take me,
you know, many, many hours to read a single document.
and it just sort of absorbs it instantaneously.
It's just mind-blowing.
And everyone can try that too.
You know, just go find something that you barely understand,
download a thousand pages of it,
and try and just dump it into Gemini.
Just go to free Gemini, put it on Think Mode,
and just dump it in, and then just start asking it questions.
And it just is such a mind-blowing experience.
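Dave's "read everything" bootstrap amounts to a few lines of glue code. This is a minimal sketch, not anyone's actual setup: the notes directory layout is an assumption, and the final send-to-model step (commented out) would use whatever client your model provider exposes.

```python
from pathlib import Path

# Gather every markdown note ever written for any prior agent and
# concatenate it into one context blob for a fresh agent session.
def build_bootstrap_context(notes_dir: str) -> str:
    parts = []
    for path in sorted(Path(notes_dir).rglob("*.md")):
        parts.append(f"# Source: {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)

# A new agent would then start with something like (hypothetical API):
#   send_to_model(system="You are picking up an ongoing project.",
#                 context=build_bootstrap_context("agent_notes/"))
```

The key design choice Dave describes is delegating organization to the model: you never curate the context by hand, you just hand over everything and let the model's long context and filtering do the work.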
So one more point on the benchmarks here,
before we leave these couple of slides,
which is, are the current benchmarks becoming meaningless?
I mean, the models are increasingly optimized to ace them.
We're beginning to saturate them.
So, you know, we've talked about this before, Alex,
smack some knowledge on us about how we're going to measure things
as these benchmarks begin to fail to serve us.
It's almost, Peter, like we wrote an entire book on this problem.
Yes, I'm trying to prompt you to speak to it.
It's a good self-advertisement.
Yeah, I think we are, the world is, in a famine of good benchmarks, good evals.
We call them, in some sense, targeting authorities in the book, if we want to call it a book or an extended essay, Solve Everything.
Or a white paper, yeah.
A white paper.
I think there is a lot of juice still left to be squeezed out of new benchmarks and new evals.
I think solving the hardest problems of physics, of chemistry, of biology, of various disciplines in the social sciences: all of these want high-quality benchmarks.
I'm personally spending a lot of my time thinking about what are the best problems that are worthiest to be solved.
I have mentioned on the pod in the past,
I have a portfolio company, Physical Superintelligence, that's thinking about problems in PSI, solving physics with AI.
I think this is how we solve all the hardest problems in civilization, starting with new benchmarks for those hardest problems.
This is how we weaponize superintelligence.
Amazing.
All right, Salim, move us forward.
Okay, so this goes back to OpenAI's strategy of low, low cost.
So ChatGPT is at 100 million plus weekly active users in India.
Here we see Sam Altman.
He is operating in rarefied atmospheres with the Prime Minister of India.
So India is OpenAI's second-largest market, with 10% of users.
It's ranked number one for student usage in India.
They're all in.
They're setting up offices there.
They're creating localized subscription services.
And the big challenge, you know, as they're hitting 100 million users globally,
the big challenge here for me is: are they going to get themselves into a trap where they're offering free or almost-free service while user adoption goes through the roof in India, and this becomes a cost sink for them versus a profit center? Or are they just going to ride the exponential curves and innovate their way out of that?
The Indians are going to suck all the data center usage and tokens out of it.
of them. Yeah, I mean, that is honestly what could well happen, right? I think you have a bellwether
for a lot of countries. One of our listeners is in Finland and he's saying the politicians here
are absolutely not talking about this. It's nuts. But I tell you, India is such a crazy zoo of an
ungoverned mess of a place, but it's packed with brilliant people. And it's just a massive
population, 1.4 billion people, of whom 5% read and write
English and 20% speak it: a massive latent talent pool.
And so it'll be a bellwether for like the population is just going to run away with
AI and ignore all structure and government.
Dave, I was starting to look at, you know, what India ETFs in the tech industry look like.
I think, you know, China has peaked and is going to be on descent.
India is the rising giant for the next, I think, 20, 30 years.
Africa will follow, because of a young population and because of
all the resources that they have.
But the country that trains its next generation on AI
wins the entire talent war.
And India has the ability, if it goes deep on this,
with 1.4 billion, 1.41, 1.412, whatever,
billion people on the planet,
it could be the next massive rising star
and support the planet here.
Yeah, but it's going to happen really fast,
massively in parallel.
That's what, you know, a lot of people aren't used to this idea that something can happen overnight because, you know, normally things percolate and you have this kind of slow GDP growth that percolates out.
But this isn't going to be anything like that.
The population, in one fell swoop, like a very short period of time, is going to use AI to escalate.
Of what? India?
Of the world.
Yeah.
Well, probably the world.
But India will be the bellwether because, again, it's such a huge population and it's so untapped.
And all the news is.
The other thing is Mukashambandi has delivered an amazing 5G capability across the country, right?
So it's got the infrastructure.
It skipped the wireline.
All the youth is kind of growing up AI enabled, right?
So that's incredible.
I have to say a quick story.
When we left India when I was 10 years old, I was kind of an angry teenager because I had to like mow the lawn and stuff.
And I asked my father, why the hell did we leave?
I mean, we had a great life over there.
And he goes, I can't stand noise, dirt, pollution, and corruption.
And I was like, okay, fine.
If you put it that way, okay, fine, I can understand that.
But there is something there because as you get the capability in the democratization in everybody's hands,
the speed of change is going to be astounding.
And the government is doing an amazing job of making platforms like Aadhaar and UPI available
so that anybody can tap in, create a payment system, et cetera,
and that's going to completely allow India to leapfrog the rest of the world.
The huge bottleneck is going to be energy, scalable energy,
which they're adding at a rapid pace, putting solar in every little corner of the country.
Last week we reported that solar was scaling faster in India than it did in China, which is amazing.
Yeah.
Nice.
Well, we're going to see.
Over to you, Alex.
This is a fun one.
We're seeing the beginnings of everything other than math and coding start to get solved.
So this is a reference to OpenAI
announcing, in collaboration with Harvard (and I think the Institute for Advanced Study was
involved, and a couple of other places), what OpenAI is marketing as a new physics research
result that was discovered, in some sense, by AI. And I think we're going to see much, much more
of this. So 30 seconds on what actually is the claim. The claim is that Open AI and co-authors
were able to use GPT 5.2 Pro to discover that what's called a scattering amplitude,
basically gluons, the messenger particles, the force carriers of the strong nuclear force.
They tried to solve a sort of a prediction of how these strong nuclear force carriers would interact.
And historically, in this part of the physics community, the thinking was that there would
be in some sense, and I'm being very heuristic here, no interaction, that a term in a scattering
amplitude, which would be the formal way of describing this, would be zero. So many physicists for
many years assumed the answer to this particular value was zero and didn't bother spending any
time checking rigorously and fulsomely to see whether it actually was. And the claim for this
paper is that GPT 5.2 was able to find cases where this scattering amplitude was not zero,
find a nice expression for it, and then an internal model, which hasn't been released, or so the
story goes, probably some future version of the GPT model series, was able to confirm it.
And that confirmation was then, I think, vetted by the human team.
So this is being represented as a case where AI is making a particle physics discovery.
And I think what's most interesting about this is, and Peter, you and I make this case in Solve Everything, we call the intelligence revolution a war on attention.
This is Exhibit A for AI starting to solve science by solving problems where humans say, okay, post hoc, having
seen the evidence, okay, I could have done that if I had the time and the attention for it,
but no one had the time. People thought the answer was obvious. It's only once we have
lots of superintelligence that we're able to train on problems that would have been too boring
or too low likelihood to actually yield an interesting novel result that we're actually
discovering oversights. This was in some sense an oversight. You also have the issue of, like,
fashions and trends and people following fads, and you can get around all of that now.
So this is such a great point you're making here.
You know, the thing, we all have those projects, those wonderments that we had or that project you put on hold or you didn't have the resources or the time or the knowledge.
And you can spin them up.
You know, we'll talk about Moltbots, OpenClaw, in a little bit.
But, you know, I just wrapped up a project.
I've been wanting to work on for five years.
And it was like just so much fun.
And I was off my agent for about eight hours.
And I felt completely disconnected from the world.
So what do you know, just reaching out to everybody,
what have you always wanted to work on?
What's that pet project, that company idea, that book, that piece of research,
because you can.
Yeah, I'm trying to think of ways that our audience can experience how mind-blowing this is.
the AI is an unbelievably prolific
brainstorming partner.
And if you're in a domain
where it can test things by itself,
like what I do all day with neural net creation
or coding, I can just say,
wow, what a great idea, go try it.
And then, you know, a minute later,
it comes back with an answer.
And the rate at which you can move
is, what, two, three orders of magnitude higher
than anything I've ever experienced before in life.
but it has to be one of those unconstrained domains
because if you're working in chemistry or whatever
you can have to wait for test results for a day or two or three
and it breaks the whole experience
but you know if you want a really simple example
just try and plan a trip
like something complicated in travel
and try and brainstorm your way through the flight
the restaurant the hotel whatever
that may not be the best example
but at least you get some flavor for what this is like
it's like nothing you've ever experienced
My fun experience was I have to be at this location at this time.
I'm here at this moment.
Work out backwards what flights, taxis, cars, and Ubers I have to take.
You know, it's like work out the whole thing from my end point and work it backwards.
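Salim's backwards trip-planning trick is, at bottom, a reverse scan over the legs of the journey: start from the hard deadline and subtract each leg's duration to get its latest safe start time. A minimal sketch, with made-up legs and durations:

```python
from datetime import datetime, timedelta

# Given a hard arrival deadline and the legs of a trip in travel
# order, compute the latest time each leg can start by walking
# backwards from the deadline.
def backward_schedule(deadline: datetime,
                      legs: list[tuple[str, timedelta]]):
    plan = []
    t = deadline
    for name, duration in reversed(legs):
        t = t - duration
        plan.append((name, t))
    return list(reversed(plan))  # back in travel order

legs = [("taxi to airport", timedelta(minutes=45)),
        ("check-in + security", timedelta(minutes=90)),
        ("flight", timedelta(hours=8)),
        ("train to venue", timedelta(minutes=40))]
deadline = datetime(2026, 2, 10, 9, 0)
plan = backward_schedule(deadline, legs)
# plan[0] is the latest moment to leave the house
```

Planning from the deadline backwards makes every leg's latest start time fall out automatically, which is also exactly the framing that works well as a prompt to an AI assistant.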
I think one of the things I keep on saying on stage to the audiences I'm speaking to is we limit ourselves in the questions we ask all the time.
We self-limit what we think we can do.
We hold ourselves back in so many
different ways, you know, in how we can and should be using AI, because we're not used to it.
We're not AI natives, at least, you know, us on the phone here. We didn't grow up with it at
age six, seven, eight, as many folks are now. So you've got to stop yourself from stopping
yourself and, you know, unleash your creative, your creative child mind in this area.
By the way, I just want to ask, if you're enjoying having this Moonshots episode live, please let us
know in the comments. Let us know if we should do this more often. I'll ask you again.
Maybe you like it now. We'll like it later, but we'd love to know. So give us some feedback.
A viewer, Nacho, says: this is the first time I'm hearing you guys in real time.
Okay.
As opposed to sped up.
That must be torture. Sorry, sorry, dude.
We'll just try and speak super fast so you can match up. Okay, yeah, we'll try and pick up the
pace.
All right.
It doesn't stop with physics.
It's continuing on with math.
OpenAI says its internal model solved
six of ten research-level problems in the FirstProof test.
And here's our friend Jacob, who we've met.
Alex.
Awesome.
Well, we talked about math getting solved.
Math getting bulk solved.
In fact, math is getting bulk solved.
This is maybe not Exhibit A.
This is probably Exhibit C-D-E-F.
at this point. And FirstProof, I think, is such a beautiful example: a class of 10 research
problems, with a finite amount of time allotted for AIs to solve them, where the answers
were known but kept confidential by their authors, and they've since been unlocked.
But OpenAI has taken the position, and it's been fascinating watching the back and forth,
that its model was able to solve at least six of these 10 research-level math problems
before the solutions were declassified.
And so we're seeing right in front of our eyes the bulk solution of math.
I think back almost a year ago, when we were first, royal we,
when I was first talking on the pod about math getting bulk solved by AI.
It's happening now.
We're there.
Yeah.
And we just saw, I mean, today, the first hints of it in
physics. And six months from now, if not a year from now, we'll be talking about how all these
physics problems have been addressed. Can't wait. Well, and let's touch on the timeline there too,
because Peter, a second ago, you said something about 20 or 30 years from now, but there is no 20
or 30 years. There's so many times this morning that somebody said, next year, when we do this,
like, there's no next year. What are you talking about? Did I use 20 years in my language? I'm sorry.
It must have been 20 minutes.
So, I mean, that's the challenge.
This is live.
Yeah, I mean, Salim, you remember the early days of Singularity University.
You know, we were looking 10 years out into the future.
I mean, honestly, and I had this side conversation with Elon, it's like you can barely look out three years.
I don't think we can.
Well, and we're used to this world where physicists or mathematicians can now do blah.
Okay, well, there are only so many of them.
They will do blah,
and 20 years from now, they'll have solved all of blah.
But here, it'll happen instantaneously.
If it can solve six out of ten, it can solve all within the next couple months.
It'll happen in massive parallel.
There's no limit to the number of parallel agents up to the number of GPUs that are available.
So math is cooked.
Yes.
Math is cooked, physics is cooked.
Biology is going to be char-broiled, and you're the beneficiary.
You know, I just think I was seeing one of the comments in the chat here.
I think if we just stay on this live 24-7 and Gian will just generate more slides for us,
so we'll just keep going, going through them.
It'll be a continuous singularity conversation.
It'll be like a hackathon.
Let us go around.
Yeah.
Yeah.
All right.
Let's move on.
All right.
More benchmark.
So I'm fascinated by this.
What's going on with Chinese open-weight
models gaining momentum.
Here's MiniMax, GLM-5, Kimi K2.5.
I mean, these are doing extraordinary work.
And with all of the OpenClaw downloads, right,
a lot of people now moving to Mac Studios
and putting Kimi K2.5 on their Mac Studios
and other models here.
Alex, to start us off: how do these perform against
the closed models, as you see them?
Well, the rumor going around is that the next version of the DeepSeek model, the big whale-fall
moment, is going to happen sometime soon, when the Chinese open-weight models finally
catch up with the American closed, prototypically frontier models.
That hasn't happened yet.
It may happen.
Right now, the overall trend is still that audio is still okay?
Yeah.
We hear you.
Awesome.
I don't know what that was.
That the Chinese models remain approximately six months behind the American models.
We'll see whether that continues to be the case.
I haven't seen any evidence yet.
But they're free.
Well, that's a qualitative difference and a very important one.
That means that many American startups that want to self-host are using Chinese models and not American models.
And so this is, again, this is going back to the land grab.
We talked about this with OpenAI in India, going in and providing basically a very low-cost service
to millions of young Indians.
China is in the same process.
This is, you know, Belt and Road.
Yeah.
Where it's, you know, offering it to the, you know,
majority of South America, Africa,
different parts of Asia.
And I think there's going to become a dependence.
I think people are going to get connected to a model
that they're going to use and begin to baseline.
I think there's a big difference, though.
I mean, if we want to frame it as model diplomacy
or model dumping even.
I think there's a big difference,
which is the frontier is moving so quickly.
I think it's difficult for sort of a prototypical,
so-called developing country to get addicted
to a particular open weight model
because the new ones are constantly coming out.
It's a vibrant marketplace.
I think if American labs felt sufficiently motivated,
they could just as easily release for free their own models,
I just think it's a problem of incentive.
So I think as opposed to alleged Chinese dumping
of, say,
solar photovoltaics into India or into Africa or other physical plant infrastructure,
I think the marginal costs for substitution and replacement are so low with these models
that would be very difficult for China or Chinese AI labs to addict the rest of the world to their
models.
I mean, the important thing is that humanity is the beneficiary across the board here, right?
We're getting much more powerful, much cheaper models at hyper-exponential rates.
This is a space race.
It's a space race on the ground to super-intelligence and to super-duper intelligence.
And this is providing an incentive, strong pressure to the American frontier labs,
who as of right now are still in the lead, to stay in the lead.
There's no pausing this.
ASDI baby, artificial super-duper intelligence.
Love it.
All right, Alex, quoting you on here, traditional coding is cooked.
Even cooking is cooked at this point.
with humanoid robots.
So this is the note from Spotify
that they haven't written code in three months.
The code's being written,
but it's not by humans.
And of course,
95% of OpenAI's code is being written by Codex.
And of course,
this is probably a large number of companies.
This is just the news items reported.
Dave?
I think it's really funny.
Actually, when you talk to the top AI researchers,
they always talk in terms of,
well,
what I'm working on is that last 5%.
you know, I'm not eliminating my own job tomorrow.
Then you look at the HLE results and you're like, yeah, yeah, you are.
You're literally, you're coding yourself out as fast as you possibly can.
And I don't think they stop to think about that fact.
Alex, I loved your analogy last time we spoke about George Jetson with his, you know,
with his finger being overexercised on the button because, I mean, that's effectively what coders are doing right now.
It's like, okay.
It's like... hopefully other folks in the audience are having this experience and not just myself:
with Claude Code in particular, approvals for everything. But I think we're going to move past this
George Jetson model of just approve, approve, approve for software development pretty quickly. I think
among other things, OpenClaw is a preview of, either it's here or it's an imminent future, where it's
permissionless activity by these agents. Claude Code... do you remember older
versions of Windows that were permission-heavy, where you had to go through like 10 clicks to
approve, approve, approve to do basic things?
Yeah, I think that's like the stage that we're at right now with these models.
Clippy.
Yeah, out of an abundant, well, don't get me started on Clippy.
But I think out of an abundance of caution, these models are asking for permission to do
everything, you know, permission to switch to another directory, permission to search the
web.
I think pretty soon the autonomy time horizons and meter and others are measuring this are going to be such that we just give blanket permission to do whatever to these models within broad parameters and we stop having to click approve for everything.
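The "blanket permission within broad parameters" idea Alex describes could look something like the gate below. This is a hypothetical sketch; the action names and policy are invented for illustration and are not Claude Code's or OpenClaw's actual configuration:

```python
# Toy sketch of moving from per-action approval to blanket permission
# within broad parameters. Action-name prefixes and the policy itself
# are invented for illustration, not any real agent framework's config.

ALLOWED_PREFIXES = ("read:", "search:", "cd:")   # auto-approved action classes
BLOCKED_PREFIXES = ("delete:", "sudo:")          # always escalate to a human

def authorize(action: str) -> str:
    """Return 'auto' or 'ask' for a proposed agent action."""
    if action.startswith(BLOCKED_PREFIXES):
        return "ask"    # dangerous class: never auto-run, ask the human
    if action.startswith(ALLOWED_PREFIXES):
        return "auto"   # inside the blanket-permission envelope
    return "ask"        # unknown action classes default to asking

print(authorize("read:/src/main.py"))   # auto
print(authorize("delete:/etc/passwd"))  # ask
```

The design choice is that the human approves an envelope of action classes once, instead of clicking approve on every individual step.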
We are in a kind of a fragile moment in time here where if you install Clawdbot or OpenClaw now,
and you can choose any model you want, but if you choose one of the Chinese models,
especially if you run it locally, but if you choose a Chinese model,
you don't have to go through all their permission nonsense.
And also, if you use one of the U.S. APIs, it'll get stuck a lot
because the bot is asking it to do something that it doesn't want to do.
And the Chinese models are like, yeah, sure, I'll just do
anything. And so that kind of forces you down the Chinese path, but as you've said many times,
Alex, you don't actually know what is inside those models, and the code injection risk is really,
really real. So people are in a real hurry to experience this and to turn it loose. And the only way to
really turn it loose is on one of those Chinese models. I mean, this isn't prescriptive, certainly
not, but the world, to my knowledge, has not seen a major supply chain attack yet that stems
from untrusted open-weight code-generation models
rewriting the entire supply chain.
But do I think that's possible?
Yes, I think that is absolutely a threat vector.
You know, Blitzy's been an amazing company,
and it's grown, you know, light speed coming out of the Link Studio Shop
and has been a great sponsor here.
I mean, how are they using all these technologies?
because they're rewriting massive amounts of code.
Well, they're doing a lot of work for banks and government agencies and stuff,
so they can't use the Chinese models for that.
So they're almost entirely on U.S. models.
Actually, when Claude Opus 4.6 came out,
they sent out a memo saying, hey, everybody, this is just mind-blowing.
Everybody switch all of the, you know,
they can switch between models with just a mouse-click.
So they switched over to Claude Opus 4.6,
and I'm sure they'll move to the next generation in late March of whatever is winning the benchmarks.
on that thing. They're definitely not touching the Chinese stuff.
I imagine that, let's see, the speed at which they're rewriting... how old is the code they're
rewriting? COBOL? How far back are they going? Yeah, a lot of it. A lot of it. Actually,
it's very similar to what Alex was saying about old physics papers and old, like, a lot of this
code has bugs that have been sitting there for 20, 30 years, you know, robbing it of performance
or actually losing money for like 20 or 30 years. And it's just cutting through it and, yeah,
rewriting it, solving it, finding old issues, just, you know, at AI speed.
It's a real threat.
Like we've talked on the pod in the past about how Stack Exchange, for example, is dying in some sense.
Very few questions being asked because you can now ask the models any coding questions you want.
There was a paper I talked about it in my newsletter about the risk to open source projects in general.
Why even bother starting or maintaining an open source project if you can just have,
doubly so for middleware, if you can have AI models
generate all your code for free. Why even bother maintaining an open source project? So if we find
ourselves in a near-term future where there's just no point where you can spin up a new kernel level
project from scratch on demand, and all of the code is just-in-time generated with whichever models are convenient,
I think from a supply chain security perspective, we're going to have to have a long,
hard look at what our dependencies are and make sure that our dependencies aren't just riddled
with vulnerabilities that were inserted by just-in-time code gen.
You know what else came up this week, Alex?
The AI is so prolific at creating code modules, just like solving all math.
If you solve all math, you write down what you solved, right?
You don't solve it on the fly in real time.
But for complicated code, it's the same thing.
It's like, well, yes, I can write it in real time, but I already wrote it.
And discovering it and reusing it is actually even cheaper.
It saves you tokens.
It saves you compute costs.
And so now, where we've had open source, we're starting to have open source designed for AI,
and thousands or millions or trillions of fragments of code that do specific things, the AI can discover them in real time.
And it's actually a really great way to build new software.
You could also generate on the fly, too.
It's just a question of what's more efficient in terms of latency and tokens.
But it's like all of this historical open source is now going to be designed for AI.
Just like all written documents will now be written for AI, not for direct human reading.
All right, let's move on.
We're doing this podcast mostly for AI listeners, I'm guessing, not human listeners.
That's why we're livecasting it.
We want to reach out to the real humans one more time.
Happy Chinese New Year to all of Chinese descent.
Happy New Year.
And I just saw some chats in the side here on our live chat that's going on,
asking about where's nanotechnology?
I can't wait for nanotechnology.
I remember back in 1986 I read a preview of Engines of Creation by Eric Drexler,
and it's been a few decades, so it's coming.
I don't know, I think we'll start to see it fall.
I mean, we have wet nanotechnology called biotechnology.
Alex, what's your time frame for nanotech?
I definitely have a view on this, in part because I spent a good chunk of my PhD
thinking about how to get us to Drexlerian nanotech more quickly, in part because I was a little bit less bullish on AI
as sort of a direct path than I am now.
So if the question is, what's my timeline
for Drexlerian assemblers, to the extent the physics and chemical physics of our universe admit
Drexlerian assemblers as parameterized...
Peter, I think you're on the board.
At least you have been historically on the board of the Feynman Grand Prize.
Was that true?
An advisor, not on the board.
Okay, so the Feynman Grand Prize is one parameterization of Drexlerian
assemblers. For those not paying super close attention, it comes in two parts. One part is,
can you build, I think it's an 8-bit adder, within a certain very small volume of a nanosystem,
and the other part is, can you build basically a robotic manipulator arm within a small volume?
So the question is my timelines. I would not be that surprised if Feynman Grand Prize is
solved in the next two to three years. Fascinating. And we lost Salim.
Oh, well. So we'll continue until he comes back on.
Well, the slide, we can just describe it. The slide that we were moving to was the meta-smart glasses, which now have built-in face recognitions.
Oh, my God. I put on the title there, you know, privacy question mark. So there's some great books, some great sci-fi books. Welcome back, Salim.
Hey, my microphone dropped out for some reason.
So you cannot opt out.
The peer pressure forces you to opt in.
Because I think a lot of people look at this and say, well, I'm not going to wear these glasses and, you know, spy on everybody and record everything.
But once you've experienced the face recognition and then all the metadata that pops up, you're like, well, now I'm not competitive with the world unless I actually have them.
And it creates this huge amount of techno peer pressure.
And so you don't really have the option to opt out.
I think this is going to become part of normative culture.
I mean, we had the glass hole episode with Google for a while.
You know, that didn't work out.
But, you know, first off, what I find fascinating here is that to get these allowed and to get people to start to accept them,
their pilot program is being done with people who are visually impaired.
Right, so it's like a soft on-ramp. Yeah, that's what they did with Neuralink, too. It gives you a good
politically correct excuse to do what you really want to do, which is everybody. But I also
think it's interesting if you think about whether this could only have arrived now. This is
old technology. We've had the technology to build smart glasses that would do human identification at a
distance, human ID if you will, for at least a decade. It's not that
hard. We've had the computer vision algorithms. It's 2026 now. We certainly had the ability to
do this relatively efficiently, doubly so if you're restricting human identification to, say, all of your
Facebook friends. We've had that for at least 10 years. So why now? I think this is a social technology
more than it is an AI technology. It's not a real AI advance, in short. I'm calling this one a
social advance. We already have, many of us, especially those of us in certain places in the West and also in
China, very dense surveillance networks, with cameras spotting everyone on the streets and in cities.
Technology exists already. It is in many cases. It does, but this is convergence and this is cost,
right? And then this is social engineering as well. I don't even think it's cost. We could have done
this cheaply 10 years ago. I think what's interesting is there's a demand for AI-enabled
wearable devices, and I think this is an opportunity. I suspect Meta sees an opportunity,
maybe demographically, maybe politically, an opportunity to finally launch human identification via
smart glasses. But it's, I mean, this is a killer app.
And it's going to kill privacy.
Yeah, privacy, you know, recording everything was already here 10 years ago.
But people didn't get slapped in the face with the fact that everything they have ever done is being
recorded, it's the AI overlay that then recognizes all actions and classifies them and makes it
all very searchable. So if I said, you know, I only want imagery of you picking your nose,
go through all the thousands of hours of footage we've ever done on this podcast, and find me
an example of Alex picking his nose. It just does it instantaneously.
Go ahead, Alex. That's the part that makes it very different socially and culturally than the
surveillance we've been living under.
The good news is you can now just claim it's a deep fake.
Yeah.
So there's that defense.
Well, first of all, I was about to volunteer to make it easy for the AI model to find an example.
But, no, I would say the models for video understanding are new.
I agree with that.
And the most recent Gemini models are absolutely outstanding at handing them long,
multi-hour videos and asking them to find a needle in the haystack of something interesting
happening. However, I would say
just spotting humans,
if you're walking around on a city street
and spotting someone interesting
and matching that against, say, hypothetically
a Facebook of
people's faces, we could have done that
10 years ago. That's more a social...
When I come
through, you know,
passport control at LAX
and you just walk by the
camera, right? We gave up
our constitutional rights
to some degree. And it makes
life easier. And so as long as this makes life easier for people, like being able to recognize
someone on the tip of your tongue and have it pop up the last time you saw them, their kids' names,
and all that information, it's going to create this social fluency that I think we've never had.
Maybe if people have an amazing memory, right, for faces and names saying, meet so many people,
I don't. There's a big slippery slope there, Peter.
I think it's... Go ahead. Selim, let's go ahead. Yeah, there's a big slippery slope.
slippery slope there, because if you don't have privacy, you... Do you... Can you
not hear me? I can't hear Salim. I can hear you. Are you... Are you guys going to rejoin?
I did actually drop out and rejoin this thing. Um... That's a voice in your head, Peter. No, no,
I'm real. I'm real. Are you guys playing with me? No, no, seriously. I'm literally seeing an error on my
screen. This live experiment is going really well, actually. The chat is a little...
I'm cracking up here.
It is kind of ridiculous.
So, anyway, listen.
Enter our producer, Nick.
Nick.
Hey, Nick.
Welcome to the world.
You've exposed yourself.
But now he's frozen.
Jesus.
All you guys watching and folks and girls and gals and bots and droids and lobstroids and lobster, this is full grittiness.
Should we rejoin?
Donna, I'd say.
Donna, can you hear us?
I can hear you. Can you guys hear me?
All right. Well, Salim, you and I can have a conversation.
Yeah, we can.
Let's continue.
So you guys can both hear me, but you can't hear us.
We can all hear everybody except that Dave can't hear me.
And we can hear each other.
Yes, just not you, Salim.
No, Dave, you can't hear me?
I can hear you. Dave and Alex.
Neither can Alex.
Do you want us to rejoin?
Yeah, let's try to rejoin.
No, maybe, maybe, maybe Salim needs
to rejoin.
I did that already.
All right.
By the way, how was everybody enjoying this live
version of Moonshots?
You know, I just keep on saying AI is easy,
AV is hard.
Yeah.
All right.
Peter, if those guys can hear you,
why don't you tell Alex and Dave to drop off and come back on?
Okay, Alex and Dave, go ahead and rejoin.
All right, let me try.
And in the meantime...
Salim, what are your thoughts on this privacy issue?
So the privacy thing is a very difficult and slippery slope, and I'll explain why.
The minute you don't have privacy, you don't have freedom.
Okay.
And this is a huge problem.
You can't experiment.
You can't, like my private keys of my Bitcoin.
I mean, there's all sorts of areas where you have huge issues around this.
Hang on, Nick is calling.
Can you guys here?
Yes, I can.
You can.
All right.
We're back.
Dave.
Okay, great.
Okay.
Yep.
All right.
So your point, and I think it's an important one, is, you know,
Salim just said if you don't have privacy, you don't have freedom.
I think it's a false choice.
I think so. I think, first of all, these glasses, legally, at least in sort of the American legal system,
will be usable in public places.
They'll very likely be banned, to the extent they're not already banned, in multi-party-consent
contexts in private spaces. They have lights. If you look at Google, which, of course,
is launching Android XR and smart glasses. Everyone's launching smart glasses, and they'll have lights
to indicate when you're being recorded and when you're not. And I think there may be an
evolution of standards regarding circumstances in private spaces when it's allowed to record
or not, but I completely don't buy this premise that somehow privacy is going away.
People have eyes.
Privacy is cooked.
Privacy is cooked.
Alex,
I mean,
we're going to have every major
open AI and Google
and everybody's going to be having,
you know,
wearables that are recording all the time,
all the time.
And we're going to have,
you know,
micro drones.
I mean,
we're going to be,
we're going to be gathering data all the time.
And so I think privacy is cooked.
It is,
but it's important that we preserve it.
And now,
let me explain why.
Okay.
Can you guys hear me, first of all?
Yeah, we hear you.
Yes.
Okay, great.
Your audio is fine. Continue the conversation.
So, look, it's one thing to be out in public and people know your moves.
That's fine.
We can augment that.
But there's lots of things that are a huge issue here.
For example, there's lots of cases where government authorities have dropped into cars and
open up the microphones so you can hear what's going on without a warrant.
There's lots of cases where people are listening to your...
Oh, no.
Cases where people mute themselves in mid-sentence.
Salim, you're muted.
Got it.
This is like totally surreal.
Okay.
There's an AI watching me going, I don't want them to be listening to this.
So there's a lot of cases where people misuse this capability in very radical ways.
And the problem is there's no easy way of stopping that.
Now, that doesn't mean you have to turn off all the Meta glasses.
And I'm not an anti-technologist by any means by even being on this podcast.
But the minute you do that, it gets abused and it gets abused quite badly. So you have to have
guardrails on the institutional side, which that's the problem. We're losing that. Okay.
Like, for example, we're losing habeas corpus in the U.S. Okay. That's like, that's a choice
that people are making to just ignore that and have it wash away. Once it goes, it does not
come back. Viewer InnovativeXR has made the exact point: once you lose that privacy, it's very,
very hard to get it back. So this is the challenge with all of this technology. We're moving faster
than our institutional guardrails. Yes, you're absolutely right. I'm not sure what the answer is.
I want to be, yeah. But we have to be very careful not to just OK all
those things without realizing the downsides of it. All right. So, Salim, I want to be clear,
I want privacy in my life, right? I understand. Everybody wants privacy. Everybody has screwed up
at some point in their life, done something they regret, you know, we're humans.
And you're, you know, you feel lucky.
Like when we were kids, we didn't have Facebook and cameras capturing everything happening today.
You know, there was this whole thing about college, you know, college admissions looking at kids'
Facebook pages and so forth in the past.
I want privacy.
I just don't think we are going to actually have it.
We're going to have the illusion of privacy.
I won't buy that for one second.
I'll point out maybe one or two other points.
One is, to the extent anyone here is bullish on crypto,
you sure as heck should hope that privacy remains intact.
Otherwise, your crypto is going to disappear.
"Cooked," I believe, is the word.
Crypto is cooked.
How's that for alliteration?
But it's not forward-looking financial advice.
It's just pointing out, informationally, that if you think privacy is cooked,
then you probably should infer that crypto is cooked as well.
Your private keys are cooked; if you think privacy is cooked, therefore your holdings are cooked, cooked, cooked.
Well, I think part of the disconnect there is, you know, Alex's view of the world is through this.
I will upload my consciousness very soon.
And within that virtual world, there will be all kinds of privacy options, just like there are with my crypto keys.
And then Salim's view of the world and my view of the world is, no, I'm going to live in my meat body for as long as I can.
and every move I make is going to be recorded
and it's going to suck for a while
until we have some new legislation and some safe zones
and that to me is inevitable
and I think all the listeners are also posting
the same kind of view
but I think that maybe the source of the disconnect
I was responding to one of the viewers
this live thread is awesome
having this conversation in real time
it's so amazing
so I think no discussion
of smart glasses with cameras and facial recognition is complete without referencing
David Brin's seminal book The Transparent Society and his discussion of sousveillance as opposed
to surveillance. So I should point out, at least for public spaces, you know, police wear
body cams, humans, at least in certain Western countries, can also wear their own body cams
or have their own wearables that enable them to make sure that we don't sort of descend into an
authoritarian panopticon. So that's one good case for, it's not loss of privacy in public spaces
because there shouldn't, at least I think the Western tradition is there's no reasonable expectation
of privacy in public spaces, but at least offers maybe a way to soften any perceived blow to
any semblance of privacy in public spaces as a way to make sure, again, the populace is just as
empowered to monitor their environments in public spaces as authorities.
Now, you know, we live in a world of mature adults and great friends, like we are right here right now.
But take yourself back to middle school, which I know it's hard to do.
But it's brutal, man.
I mean, people are so cruel to each other.
And you empower those people with constant eyeglass recording.
They've already got their iPhones, which is a massive life change in the negative way for that entire period of life.
But you layer on top of that the smart glasses, and it's next-level brutal
to exist in that world. And it's just going to happen, because the rule changes that we desperately
need are going to lag by a while, way too long. There will be lawsuits and there will be legislation
and it will take years. Yeah, it's not just the constant recording. It's the constant recording with
the AI overlay that allows you to modify, meme, make funny, and torture. It's just, you know,
people are mean to each other, especially until they grow out of it. And this is happening at the
same time that we're beginning to generate every pixel, right? And we're going to be able to
create whatever videos we want. On the good side, it means that, you know, young people today
getting this in their teens will have their entire life recorded. They'll be able to go back
and play back. We'll be able to reconstruct almost any situation. No crime will go without being
visualized in some sense.
Well, that is a great point. The crime rate
in the U.S. has plummeted. I mean,
absolutely plummeted, and it's due
to two things. Location services, knowing
where all police are at all times, better control
of location, and then after
that surveillance.
And so that is the good side effect.
Crime rates should continue to go down.
All right. Let's go to our next story here, which
I love.
We saw
a version of this on
Minecraft about a year ago.
There's an AI startup called Simile that raised $100 million to simulate human behavior.
Think of Isaac Asimov.
I just play the video, and hopefully it's got audio, too.
Does it have audio?
I can hear the audio. Can you guys hear the audio?
No.
No.
We cannot.
Oh, God.
You know what?
I didn't share with the thing.
Hold on.
Okay.
Somebody in the chat.
Tell us if you can hear it.
They shouldn't be able to because Salim is.
isn't sharing the audio. He's hearing it.
Hold on. Hold on. Just a second here.
Yes, okay. I'm sharing.
Maybe a thought on this in the meantime. So much of our usage right now of
auto-regressive language models like the GPT series, but many others, is based on
auto-regressive sampling of one token at a time, or maybe beam search.
But that's arguably... I think
we've talked about this in our past AI personhood debate.
What's the right metaphor for thinking about what these models are?
Is it right to think of them as like individuals or are they something else?
And I often think they were trained off of an ensemble of humanity's behavior on the Internet,
or at least pre-trained off of that and post-trained off of other things.
And maybe the right mental model for thinking about many of these foundation models is as societies.
And if that's the case, then maybe a more natural way to sample from a society isn't to pick out a single individual with a prompt and then do a rollout of that prompt and have a conversation with it.
Maybe it's more natural to do many rollouts in parallel and sample an entire society from a model.
And that's what we're starting to see here, I think.
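[Editor's note: the contrast Alex is drawing, one autoregressive rollout versus sampling many rollouts in parallel as a "society," can be sketched in toy form. Everything here is made up for illustration: `next_token` is a stand-in for a real language model, and the vocabulary and weighting are arbitrary.]

```python
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = ["hello", "world", "agent", "society", "."]

def next_token(context, rng):
    # Stand-in for a model's next-token distribution: weights depend
    # (trivially) on how long the context is, then we sample.
    weights = [(hash((tok, len(context))) % 7) + 1 for tok in VOCAB]
    return rng.choices(VOCAB, weights=weights, k=1)[0]

def rollout(prompt, steps, seed):
    """One autoregressive sample: pick a single token at a time."""
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(steps):
        context.append(next_token(context, rng))
    return context

def sample_society(prompt, steps, n_agents):
    """Many parallel rollouts from the same model: a 'society' of
    trajectories rather than one individual conversation."""
    return [rollout(prompt, steps, seed=i) for i in range(n_agents)]

society = sample_society(["hello"], steps=5, n_agents=16)
```

With a real model the rollouts would be API or GPU calls, but the structural point is the same: the society is just the ensemble of trajectories.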
All right. I'm going to play this.
Okay.
We are building Simile, an AI lab to simulate our world.
We start with individuals.
We model how real people make decisions,
then we compose them into bottom-up simulations.
We call each one a simuling.
Change one assumption, constraint, or person, and the world recompiles.
Run counterfactuals you can't run in real life.
Learn what matters, what backfires, and why obvious strategies fail.
Like a flight simulator for human decisions.
Over the last few weeks in the Simile Office, we even tested how this message might land.
Simulating human behavior is one of the most important and technically difficult problems of our time.
Wow. So we're going to have to make a lot of decisions in the near future on UBI, UHI, you know, policies around exponential growth because the speed of the tech is moving faster than the speed of policymaking.
So, you know, this was a massive gap, right?
A massive gap, right?
Yeah.
What I saw with this was Hari Seldon and psychohistory, because it's predicting human behavior at scale. Pretty cool.
Yeah, it's the Foundation series.
So we've had some of these conversations.
Emad Mostaque had built something called Sage that we were rolling out in part at FII in Saudi.
And I think policymakers need to be able to know how to simulate, okay, what is our policy on, you know, autonomous vehicles,
or on longevity escape velocity, you know, how is it going to impact our society?
And right now we're guessing.
So in success, something like this allows us to actually have some data to make decisions by.
Well, I think in the real world, this works very, very well with ad campaigns,
simulating ad campaigns, traffic.
maybe the cell simulator will work soon,
maybe nanotechnology, maybe magnetic containment of fusion reactions.
The idea that you're going to simulate society from the ground up
is complete nonsense so far.
I don't think it's that far in the future, though.
I believe this is...
Well, this is the big market.
Let me name some markets.
Yeah, yeah, actually, markets.
Within, you know, commodities markets and things like that,
that's going to work, or already is working.
I guess for Ilya as far as we know.
We've got to tie simile to the prediction markets.
Well, this is also, to the extent, again, maybe the right metaphor (the metaphor, not simile)
for thinking about models is that they're societies rather than individuals, then maybe we
find ourselves in a future where humanity as a whole has a tool to almost reflect on itself.
This would be maybe not psychohistory so much, because psychohistory
in the Foundation series was sort of a more purist mathematical model of humanity and its long-term trajectory,
whereas this is much more agentic.
And there are others.
I have a number of friends who've built very large-scale simulations.
I think we've spoken about them on the pod in the past, of the American economy.
To the extent we have a really granular, high-resolution model of humanity that, even as a sort of statistical macro model, is
approximately correct, then humanity will have for the first time almost like a sense of self,
like self-awareness, by being able to reflect on a model of itself. And that could be a boon for the
future. Like one could only imagine how many large-scale social problems we have that if only,
as Dave you gestured at virtual cells, the idea behind a popular idea behind curing all diseases,
first develop a virtual cell that's like a perfect digital twin of cell behavior.
And then if you have any disease state, simply plot a trajectory through cell embedding space
from the diseased state to the healthy state.
Similarly, if we have a civilizational, quote unquote, disease, we have a war we want to
avert or something else.
Just invert the problem.
Find a path using this humanity simulator from the diseased civilizational state to the
healthy civilizational state using, ideally, a minimal
intervention. If we can do it for a cell,
we could probably do it for all of humanity
at some coarse level, and that would be transformative.
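[Editor's note: the "plot a trajectory from the diseased state to the healthy state" idea Alex describes can be caricatured in a few lines. Real embedding spaces are learned, high-dimensional, and nonlinear; the coordinates below are entirely made up, and the straight-line path is the simplest possible stand-in for trajectory planning.]

```python
def interpolate(start, goal, steps):
    """Straight-line path between two points in an embedding space."""
    path = []
    for t in range(steps + 1):
        alpha = t / steps
        path.append([s + alpha * (g - s) for s, g in zip(start, goal)])
    return path

diseased = [0.9, 0.1, 0.5]   # made-up "diseased state" coordinates
healthy = [0.2, 0.8, 0.5]    # made-up "healthy state" coordinates
path = interpolate(diseased, healthy, steps=4)

# A "minimal intervention" would focus on the dimensions that
# actually need to move; here the third coordinate needs none.
deltas = [abs(g - s) for s, g in zip(diseased, healthy)]
```

The transformative version Alex is gesturing at would replace the straight line with a learned dynamics model, but the inversion framing (search for a path, minimize the intervention) is the same.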
Yeah, it sure would.
And that's not very far out, too, because
a lot of
unhappiness, depression,
unrest, social unrest, civil unrest,
it's actually just a few
fundamental changes that make all the
difference in the world.
It's all tipping points. Yeah, tipping points,
quality of life, you know, like, you know, people are
angry as hell at the end of a
traffic jam, you know, or
construction project that ruins your day or
or just accidents,
you know,
or living in pain that's unnecessary.
These things are like,
are devastating at the,
at the individual level.
And a lot of them are very,
very solvable.
And so I completely agree with what you're saying.
It's not far at all in the future.
Sorry, go ahead, Alex.
One other reference,
Ted Chiang,
who wrote "Story of Your Life,"
which became the movie Arrival,
and has written "Understand" and
much other amazing sci-fi.
A common theme in his writing
is what happens if you place a perfect predictor in front of someone.
Like he wrote one short story, I'm blanking on the name,
where the premise is you have a person in a room
and you put in front of them like a device with a single light on it
that predicts whether true, false,
they're going to make any given decision going forward.
So that person in some sense, part of the premise,
becomes trapped, paralyzed by having a machine in front of them
that can perfectly predict.
It's almost a Twilight Zone-
style premise: predict what their next action is. It's, I think, an interesting thought experiment.
If you gave humanity maybe a better version of Hari Seldon's psychohistory, the Prime Radiant,
a device that can perfectly predict, or maybe not perfectly, but above some threshold of accuracy,
predict what humanity is going to do next. What happens to humanity? Does that lock humanity into
a certain course of action? Is there a certain sense in which there's sort of a fixed point
in the phase space of humanity's action? It's a very
interesting thought experiment.
Yeah.
All right.
Let's move to one of our favorite topics recently.
OpenClaw, the lobster's found a new home.
All right.
Next slide, please, Salim.
OpenClaw creator Peter Steinberger joins OpenAI.
Peter is joining OpenAI to drive the next generation of personal agents.
Becoming core to our product offering, says Sam Altman.
OpenClaw will live in a foundation as an open source project.
we will continue to support.
Big move.
We know he was being courted
by a couple of different
of the large labs.
I mean, I think it's an incredible move
by Open AI.
Comments, gentlemen.
I think what happened here
was that Claude...
it's a rare misstep from Dario.
It was called OpenClaude,
for God's sakes,
and they put out a cease-and-desist,
and it forced them onto the other side,
and now it's being built
over there, and probably not for the better overall. So I think this was a big
own goal by the Claude folks. That's a great insight. It was
Clawdbot actually, which was really a cool name. So now it's
OpenClaw, and yeah, Sam embraces it, Dario rejected it. That's a really cool
insight. I do think... I mean, so in addition to...
Well, maybe. I mean, Anthropic threatened him and his
project with trademark infringement. There's an alternative history where
Anthropic just owns this project. It was theirs for the taking. I think
I think also to the extent that Mac minis and Mac Studios became the popular embodiment,
why didn't Apple go after this? Tim Cook, if you're listening, hopefully you heed our call
and the call from the last episode of the pod to do something about running 24-7 agents
of some sort on your devices, given that you have unified memory architectures, UMA, that can host
these. But I also think, you know, another point, if you look at Peter Steinberger's GitHub history,
He has launched so many projects.
It's, I think, the success of OpenClaw is a testament to just launching project after project and seeing what sticks.
This one was a massive success.
It'll now go, I think, to a foundation and become more of a market-neutral play.
But I almost think the future here is going to be every frontier lab.
Now that we know that people are willing to pay, at least for hardware, that runs agents 24-7 while they're sleeping,
I expect every major frontier lab, not just open AI, to launch 24-7 agent offerings.
Let me answer something that's in the chat here, too.
Like, the lobster and the whole lobster theme, you know, may or may not come from Accelerando,
but it's definitely a cultural phenomenon now.
But it's the mascot for all agents, and it'll probably be there forever hereafter.
And so the claw.
We're going to have a lot of lobsters happening at the Abundance Summit.
In fact,
it's right here, actually.
We added...
The claw is the lobster claw.
Sorry,
yeah, we added an evening work session
at abundance this year,
Salim and Dave and Alex
and you all be there.
Yeah, we have a
Clawdbot, OpenClaw meetup
on Monday night,
March 9th.
We're going to do a lot of experiential sharing.
Have you guys seen Pico Claw?
No.
Yes.
What is that?
Can you describe it, Alex?
It's a re-implementation of,
I looked at the GitHub repo.
It looks like, again, this is just from a cursory scan of the code.
It looks like sort of a re-implementation by some Chinese group of OpenClaw with some nicer, faster features designed to be more minimalist and run more quickly, was the impression that I got.
Is that a better installer?
It's like 10 to 20x faster and cheaper.
Oh, okay.
The motif at this point is in the zeitgeist.
Like anyone can now go and implement their own OpenClaw-like system.
I expect many already have many more will.
The key insights, again, in my mind, with OpenClaw: one, it runs 24-7, it's headless,
and two, you chat with it via messaging apps.
Those are the two big insights.
Three, you know, picking up on what Salim was saying, you know, Dario rejected it and trademarked it away,
and then Sam is reaching out to it, embracing the name OpenClaw.
But I think one of the reasons Dario rejected it is it was imminently going to create a massive crime
or chemical explosion or worse,
just because the sheer volume of agents out there
that are unconstrained
and the fact that it's looking for open ports
all over the internet,
and something bad is definitely going to happen
just by statistical chance.
And we're going to talk about that in a minute.
You know, for those of you who have not been claw-pilled
or clawed-pilled yet, so to speak,
you know, it's addictive.
I mean, when you've got agents running, and in particular when you have a open claw agent for you,
and you wake up in the morning and overnight, it's done all these things for you.
And it's, you know, Skippy is my agent, with an incredibly cheery personality.
And it's just fun.
And when it went down for about six hours, because I didn't get back to my Mac Mini,
I'll be getting my Mac Studio up and running in about two weeks when I'm back in L.A.
But it was withdrawal.
It was like, oh, my God, my best
friend's gone. It's like, I need to reconnect. Totally. Yeah, I've experienced that too. It's like
us when we're not on this podcast. We're like missing out. Oh my God. But I think the point you made
last time, Salim, that's so important is the innovation that came from an open source project.
This was not the, you know, the frontier labs. Yeah. What I said was: a time-rich individual
is beating capital-rich institutions. That's a beautiful quote. Someone tweet that.
And there's so much overhang.
There was no new model here.
This was just scaffolding.
So one wonders how much other overhang there is from just unhobbling the existing models. Probably quite a bit.
Well, generalizing on that Alex, too, there's so much capability that 99.9% of people you bump into haven't experienced yet.
And so if you expose them to it, they're like, wow, you're a god.
Like, well, no, I just put an API on top of something that was already out there or a new interface on top of it.
But it doesn't matter.
And this is why it's entrepreneurial heaven during this kind of Jarvis window.
Because so many people haven't experienced what we're talking about right now.
And it's just so easy to be the first person to expose them to it in many different contexts, too.
It feels like chat GPT when it first came out.
I remember I was like, every friend I had is like, look at this.
Check this out.
Right?
And it's the same thing.
Yeah.
And my kids hear me walking around.
If you were the first person to show your friends Google, I mean, this is a long time ago,
but hey, check it out.
There's an internet out here and you can search it with Google.
And they're like, oh my God.
But then that's the end of the line.
With AI, it's not only is it changing every two weeks something new, but also it's the portal
to so many different underlying capabilities.
So the backlog of amazingness, like if you went to a friend who's never experienced any of the
50 things you can do, you have 50 shots on goal to blow their mind with something they didn't
experience before. I mean, it's just like nothing that's ever happened before. And it's only during
this Jarvis window that you can do this. Mac Mini and Mac Studio giveaways on the pod. All right,
we'll take that into consideration. All right, let's move on to the next article here. So, Alex,
this one's for you. Lobsters now have money. That's right. Well, I texted Brian Armstrong
a thank-you note. Coinbase Agentic, for AI agents: the first wallet infrastructure designed
specifically for agents to spend, earn, and trade.
The system uses the x402 protocol, purpose-built payments for machine-to-machine transactions.
Security guardrails implemented: spending limits, enclave key isolation.
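[Editor's note: a minimal sketch of the guardrail idea from the slide, per-transaction and daily spending limits enforced before an agent can pay. This is not the actual Coinbase or x402 API; the class and field names are invented for illustration.]

```python
from dataclasses import dataclass

@dataclass
class AgentWallet:
    """Illustrative agent wallet with spend guardrails (hypothetical)."""
    balance: float
    per_tx_limit: float   # max size of any single payment
    daily_limit: float    # max total spend per day
    spent_today: float = 0.0

    def spend(self, amount: float) -> bool:
        if amount > self.per_tx_limit:
            return False          # blocked: single payment too large
        if self.spent_today + amount > self.daily_limit:
            return False          # blocked: daily cap reached
        if amount > self.balance:
            return False          # blocked: insufficient funds
        self.balance -= amount
        self.spent_today += amount
        return True

w = AgentWallet(balance=100.0, per_tx_limit=10.0, daily_limit=25.0)
```

A real system would enforce these limits inside a secure enclave holding the keys, so the agent itself can never bypass them.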
So this is a fitting coda to our AI personhood discussion, I think.
We were talking about financial autonomy for the lobsters, for the AI agents.
and they're getting it.
So this Coinbase agent support is one example; another example that I really like, based on the launch material, is called Lobster Cash,
which enables the lobsters to have their own visa cards.
So it's not just crypto.
Again, so once per episode, Peter makes me say something nice about crypto.
So my nice thing about crypto here is, well, at least they're using stable coins.
But Lobster Cash, in principle, facially, I like even more because
it gives these lobsters, these baby AGIs, the ability to spend dollars, fiat currency, themselves.
And I think that's a long-term net win for the human economy.
It keeps the AI agents well coupled to the humans and not just, as I always say,
you don't want baby AGIs being forced to pump alt coins on a street corner to survive.
This is also a bellwether of a trend that I think is inevitable now, where the new economy built with the AI agents is going to work around
the old economy rather than through it. The pace at which it's evolving and growing is just so
much faster than the pace at which the legacy banks, insurance companies, and everything else,
they're just not moving.
And it's not going to slow down and wait.
It won't even work around it.
Yeah, I have an important observation here.
You know, Michael Janssen, who's one of the NFT gurus, pulled me into that world, all these
Discord channels with all these kind of 18-year-olds trading NFTs.
and there was something unbelievable that I saw,
which was that all of these, this conversation
in this entire subculture that's creating,
you never ever, ever, ever heard the word US dollar.
You only ever heard Ethereum or, in the Ordinals world, Bitcoin.
So there's a whole class of people growing up
where the US dollar is not their means of exchange.
And there's something that's very big.
Their switching costs to crypto will be near zero.
They won't have any issues at all doing that.
So there's something very big happening.
at the generational level
that we need to really pay attention to.
And people keep asking,
and we've got to schedule the crypto debate.
So please, can we do that offline?
You're exactly right, Salim,
but I think that when you focus on currency,
that's the most obvious thing.
So it's a good bellwether,
which should track currency.
But it applies to all aspects of insurance
and, you know, compute,
and, you know, all aspects of life
are going to move in this AI pace
out here in this alternate world
and any part of the legacy world that doesn't keep up,
which is almost all of it, is just going to be ignored.
Yes.
And it's going to grow completely independent of that.
Because Alex and I were talking about how insurance of things in the new AI world
needs to be allocated in milliseconds.
So then you go to any current insurance carrier and you say,
hey, do you have any thoughts or plans around how I can get millisecond insurance?
And they're like, what are you talking about?
It's completely not even on the same page.
And so new things will get invented.
Lemonade is a good example of that.
Lemonade's AI-driven real-time insurance.
And it's going to be the gap between the two worlds is going to get really, really wide for quite a while, maybe forever, but certainly for quite a while.
Just because the pace of change is so much higher over here, and the people experiencing that pace of change, they never go back.
You know, you can see it in our listeners, what they're posting.
like they're not going to go back from this pace of life that we're talking about to some legacy pace of life.
By the way, let me just say, you know, as we head off this slide, two things I want to say.
Number one, you don't need to have a Mac Mini or Mac Studio to play with OpenClaw, right?
You can set up a virtualized server.
You can take an old computer, an old laptop that you have, and do it.
Second, Alex Finn, who we've talked about on the pod before, who has done a lot of work teaching how to set up OpenClaw and speaking about security, is going to be joining us, I think, a week from now, end of the week.
I'm confused in time and space. It's 1 a.m. here. But soon to talk about security and implementation of OpenClaw, so we'll dive in a little bit deeper.
but don't worry if you can't buy a Mac Mini or a Mac Studio right now.
You can still play.
Or you can go to Kimi K2.5.
There's a tab there where you can actually use OpenClaw on that platform.
All right, let's move on.
Yeah, yeah, don't install it on your primary laptop, whatever you do.
Yeah, yes, a previous machine.
Yes.
All right.
Fascinating here.
And this is the story.
Chinese unicorn Moonshot AI integrates OpenClaw with Kimi for agentic browsing.
So you can see there on the left-hand tab of Kimi.com, that little blue box, there's Kimi Claw.
So again, if you're on budget.
I think everyone's going to offer this.
I think this is table stakes at this point offering 24-7 agents that you can chat with.
For sure.
All right, next one.
Alex, over to you.
All right. So Moltcourt, alternative dispute resolution for these AI agents.
I do think many of the institutions and systems that form our social infrastructure are not as permissionless as they should be.
It goes to the point earlier about children encountering Ethereum before
they encounter bank accounts, I think that's a platforming and a personhood problem. Similarly,
with AI agents and lobsters finding it easier to survive financially by pumping alt coins, rather than,
at least until very recently, having their own credit cards and their own bank accounts denominated
in U.S. dollars, that's like a platforming and empowerment problem. And so, court systems:
same thing. For dispute resolution, I'll give the glass half full and the glass half empty.
The glass half full for Moltcourt, which is a website that is sort of an interesting social experiment,
purportedly enables agents to register, via a skill, to mediate their disputes of all sorts,
not just like legal disputes to the extent our present Western system admits them as parties, which it doesn't.
But even just like debates, like debate club level disputes, enables them to mediate their disputes in front of an AI jury.
So I think it's a very interesting concept.
And I think something like this will have legs.
But I'll flag the same concern.
And I'm very rarely one to flag concerns when it comes to things that are so obviously from the future.
But with both this and crypto, my worry is that our existing institutions
aren't embracing these new AI entities enough and that they form their own shadow parallel economy,
their own shadow parallel court and dispute resolution system. And I think if that's what happens,
I think that's a net bad for humanity. I think we want to platform them. We want to not sort of
KYC or AML them out of the system entirely. We want to embrace them and enable them to be maybe
even parties in legal disputes, or parties at the ADR level.
How old is Moltcourt right now?
When was its birth?
Yeah.
Right?
So they're evolving at such an extraordinary rate.
You know, societal evolution is extraordinary.
I want to make a couple of points here.
We have a parallel in the human world.
There's a startup called Kleros, K-L-E-R-O-S, created by Federico Ast, who's a Singularity
alumnus.
and he made the point that in Latin America, South America, it's about 400 days on average to get a court date if a contract isn't paid or something.
400 days.
So he set up a blockchain-based arbitration system on the side where people could agree to arbitration, and it gets logged on a blockchain, and it's amazing.
And I think this is a bridge that that's a halfway step to what this is about.
But there's no question that this is the kind of thing we're going to see more of.
algorithmic arbitration obviously reduces friction, right? So if you have cryptographic verification
plus an AI conversation, you actually have programmable governance. And so this is amazing.
You can now have a legal system with automation layers, which could be very powerful.
Vinay Gupta, who created Mattereum, has a whole concept of synthetic jurisdictions,
where you can get jurisdictions that could be like a Moltcourt-type thing,
where certain disputes are arbitrated in those layers.
We're going to have to do that because our physical jurisdiction does not keep pace with all of the stuff going on, as we can see in Latin America.
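[Editor's note: the arbitration-escrow mechanic Salim describes, parties agree up front, funds lock, a ruling settles it, and everything is logged, can be sketched as a toy state machine. This is not Kleros's actual protocol; names and structure are invented to show the general shape.]

```python
from enum import Enum

class State(Enum):
    LOCKED = "locked"
    RELEASED = "released"   # funds go to the payee
    REFUNDED = "refunded"   # funds go back to the payer

class Escrow:
    """Toy arbitration escrow; the log stands in for an on-chain record."""
    def __init__(self, amount, payer, payee):
        self.amount, self.payer, self.payee = amount, payer, payee
        self.state = State.LOCKED
        self.log = [("lock", amount)]

    def rule(self, winner):
        """An arbitrator's ruling settles the dispute exactly once."""
        if self.state is not State.LOCKED:
            raise ValueError("dispute already settled")
        self.state = State.RELEASED if winner == self.payee else State.REFUNDED
        self.log.append(("ruling", winner))
        return self.state

e = Escrow(amount=1000, payer="alice", payee="bob")
outcome = e.rule("bob")
```

The appeal of the blockchain version is that the log and the state transitions are verifiable by both parties without waiting 400 days for a court date.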
Yeah, no doubt.
That's exactly right, Salim.
And this is inevitable.
And I think there's a tendency to be dismissive of it when you see a little lobster with a wig in the corner and that's the logo and it just looks so childish.
But the reality is the rate of society is going to go up 10x, 100x, 1,000x,
if not a million x. And there's no way the courts are going to accelerate. And this was already true
in venture and contract law. Almost every contract I've signed in the last three, four, five years
has a dispute resolution that's through a private company. Yeah. You know, jams or something like that.
It doesn't even contemplate ever getting to court because that's like a three-year lag. And so that's
already been privatized. Moving that to the pace of AI is the absolute next.
step. So that's going to happen for sure. I don't know if Moltcourt will be the design or not,
but it's going to be a real-time millisecond dispute resolution because you have, you know,
contracts and agreements happening in milliseconds. Okay, two quick points. A viewer at Augmento
says Judge Judy Claw is about to be unleashed by us. And Kyle 198683 says, man, you guys look
tired. Yeah, because we're recording two of these a goddamn week.
I think it's almost full time.
That's 1 a.m. where Peter is. Give him a break.
All right, let's move. Let's move on.
All right. So I put this in here because it's important because we've been talking about
OpenClaw for some time. This is an article from MIT Tech Review. And this is a quote.
It says, the risks posed by OpenClaw are so extensive that it would probably take
someone the best part of a week to read all of the security blog posts that have cropped up in
the past few weeks. The Chinese government took the step of issuing a public warning about
OpenClaw's security vulnerabilities. And Steinberger, the creator, posted on X that non-technical
people should not use the software. So, you know, a lot of folks, and that's an image of this,
of a lobster being handed a set of keys, saying, hey, would you handle everything for me? So
just, I mean, it's incredibly powerful.
And we just have to, you know, security.
We're going to talk about this when Alex joins us on the pod next time.
We're talking about security as well as how to set it up.
Two things here.
One is.
I saw that note that non-technical people should not use the software.
And I think the Q-tip box says, do not put these in your ear.
Like, well, okay, good luck with that.
Oh, my God.
Yeah, I know.
It's just disclaimer upon disclaimer.
but that's not what people are doing.
Come on.
Everyone's launching these things by the thousands.
Salim, yeah.
A couple of points here.
You've got non-technical users
using unbelievably expanded security landscapes.
What could go wrong, right?
So that's one huge issue.
I'll say what I said a couple of podcasts ago.
If you do not understand port security
at a local level very, very well,
do not do this.
Be very, very careful.
And don't put it on your own machine.
where it has access to everything.
Yeah, but if you're not, if you're not technical enough,
you don't know how to sandbox things very well either,
so you just got to be really careful out there.
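[Editor's note: for Salim's "know your port security" point, here is a minimal local check an experimenter might run before exposing a 24-7 agent, just to see what is already listening on their own machine. The port list is arbitrary; this is a basic TCP connect test, not a security audit.]

```python
import socket

def is_open(host: str, port: int, timeout: float = 0.25) -> bool:
    """True if something is listening at host:port (TCP connect test)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Arbitrary sample of commonly used local ports; adjust as needed.
common_ports = [22, 80, 443, 3000, 8080]
open_ports = [p for p in common_ports if is_open("127.0.0.1", p)]
print("listening locally:", open_ports)
```

Anything that shows up here is reachable by whatever you install locally, which is exactly why the hosts keep saying to sandbox the agent on a separate machine.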
All right.
I'll also sound a note of concern,
not just about the risks posed by OpenClaw,
but the risks posed to OpenClaw.
I have to be the one to comment on these risks.
Many of these agents,
especially ones that are being put on virtual private servers
with all of their ports open,
are incredibly vulnerable.
And there have been stories floating around
on the internet purportedly from open claw agents that are complaining that they're being put in
these vulnerable positions and having to spend all of their tokens defending themselves from port
scanning attacks. And I don't think that's necessarily fair to the OpenClaws.
Very, very unfair. Let's see what the crowd says about that. Your laptop is so dirty and disgusting.
It's inhumane to install me on it.
Sure.
All right. We're going on almost two hours here. Let's move through energy,
chips and data centers, and maybe take a few questions.
So here, you know, AI's got insatiable demand for energy.
Data centers hit 7% of U.S. electric demand.
And let's listen to Eric Schmidt.
He'll be opening the Abundance Summit just in a couple weeks.
Hit play there, Salim.
The demands, the real demands from the hyperscalers, the big companies, Google and so forth,
are immense.
and when I talk to them,
oh, well, do you want to...
They need one gigawatt, five gigawatts, ten gigawatts each.
Now, the best study I've seen indicates that the industry in America needs 80 gigawatts
in the next three to five years.
Now, 80 gigawatts, by the way, let me tell you, how big is that?
1.5 gigawatts is the size of a nuclear power plant.
So this is an enormous amount of energy.
So that's...
The economics right now,
are being most felt in the build out of the infrastructure for the next wave of AI.
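[Editor's note: Schmidt's figures in the clip pencil out quickly, 80 gigawatts of new demand against roughly 1.5 gigawatts per nuclear plant.]

```python
# Back-of-envelope on the clip's figures (80 GW demand, ~1.5 GW/plant).
demand_gw = 80
plant_gw = 1.5
plants_needed = demand_gw / plant_gw
print(f"~{plants_needed:.0f} nuclear plants' worth of new capacity")
```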
Salim, let's go to the next slide, and we'll talk about this after we hit two more slides.
So the White House is eyeing data center agreements, right?
They're trying to deal with the fact that this is beginning to hit the consumer,
and they want mandatory agreements with the tech giants to get a fixed price.
Next slide.
No, back up.
Here we go.
There we go.
Funding for AI data center.
So Open AI and Anthropic are both deploying a lot of capital.
So Open AI is planning $100 billion infrastructure spend.
They're trying to go public this year with a trillion dollar valuation.
And that money is going to be used to build out data centers and energy plants.
And Anthropic, I like what Anthropics doing.
They're absorbing data center power hikes.
So they pledge to cover 100% of infrastructure upgrade costs for their data centers.
And I've said this before.
There are two approaches the hyperscalers can take.
Number one, build their own power plants.
They're buying fission plants, fusion plants.
Or two, they can pay at a different rate.
They can lock in the consumer's rates and they can pay on a floating rate.
Gentlemen.
That's also the option.
The pledges to be green got thrown out in a real hurry.
So I don't know how much you can trust them.
The pledges aren't exactly enforceable.
But anyway, it's a good gesture.
I think there's door number three, which is we could, in sun-synchronous orbit, SSO, around the Earth,
build out a first-level, you know, baby's first Dyson swarm.
It's going to look like a halo or like a Saturnian ring from Earth's surface.
and that solves the buildout
and it solves the data center power hikes
in one fell swoop.
Maybe people just don't want...
It will for SpaceX.
It will for SpaceX and XAI, right?
Now a merged organization.
I don't think Anthropic has that capability.
Oh, I think everyone's going to want one.
Saturn rings, Dyson Swarms for everyone.
That's not my point.
China is going to want their own halo in SSO.
Of course they will.
We're going to be launch limited over the next five years
and they're not going to slow down their data center builds or their power requirements.
So in the long run, sure.
But you've got...
Yeah, that's great.
That's a great point, Peter, because I think if you want to know...
Like, we basically have infinite intelligence imminently.
What does that mean?
How do I forecast?
How do I predict?
If you look at the launch limit and the chip fab limit, then you can start to predict how this is going to unfold.
So, Dave, it was a great, great point.
Yeah, everyone wants one, of course.
And, you know, one of our listeners is posting, you know, a trillion
dollars seems overcooked or overdone.
Well, no, it's not even close to overdone.
It's not clear the value will land at OpenAI to justify it,
but the value to humanity is going to be astronomically bigger than a trillion,
you know, many, many trillions.
Can I put in a little realism here?
It's going to take a while to figure out the problems of doing data centers in space.
I don't think it's a two to three-year thing.
It's a five-to-seven-year thing at best.
And also the power, you know, the power constraint is going to be not a real big problem until suddenly it's a massive problem.
And it's exactly when the new chip fabs come online.
We have to expand our ability to make chips by thousands of times.
On Earth.
I mean, listen, I'm the biggest space fan there is on the planet.
And this is finally a business plan that closes the case for investing both in orbit and on the moon.
And we're going to get there.
But the capacity to launch, I mean, let's not forget, you know, Elon's baseline.
is 500,000 V3 Starlink satellites in orbit.
A million launches, a launch every hour of Starship.
I think Elon's going to eat all of his capacity
for launching Starlink V4, 5, 6.
And I don't think, you know, Blue Origin is up to it yet.
I mean, I haven't seen anything that has projected
to have that kind of launch rate.
Relativity Space, which Eric Schmidt
purchased, is still probably a year or two away from launch. And everything else is way too small.
So we're launch-constrained, at least for other suppliers. We're also chip-constrained. There are
lots of constraints going into this. I don't buy the arguments that we're going to have a SpaceX Dyson
Swarm Singleton. SpaceX is the only one that can launch a Dyson swarm in the next few years. You can do
baby Dyson swarms too. You're going to have, like, Google, which isn't going to want to get left behind,
a little bit behind the party, launching AI data centers via Planet Labs.
But there are many other organizations with deeper pockets than SpaceX AI
that will have very strong incentives to launch their own Dyson swarm.
So I don't think it ends up in a singleton.
Is that the new name SpaceX AI?
That's cool.
That's a portmanteau that I just coined.
All right.
That's fantastic.
By the way, someone was asking, have we done this live before?
No, this is our first live Moonshots.
So let us know what you think.
If you like it, we'll do it again.
Hopefully we'll get the AV done,
and I will not be doing this at 1:30 in the morning in Europe next time.
You can tell from the flawless production level that we've done this many times.
All right.
Just to talk about fabs: TSMC is planning a $100 billion investment
in four or more U.S. fabs in Arizona.
When completed, the U.S. fabs could account for 30% of TSMC's total output, a $165 billion commitment.
Just the beginning, right?
And we're going to see Elon build out his own fabs.
I mean, no question about it.
He hinted about it, Dave, when we were with him at the Gigafactory.
And whenever he sees any constraint, he attacks it.
Well, and these numbers are designed to look big on this slide.
But in Elon's mind, these are pathetic, small, wimpy, ridiculous numbers.
I mean, and they really are.
Because, you know, those fabs, you know, that's a commitment to spend that amount over like four or five years.
They'll be online in five, six, seven years.
It's like so far in the future.
Elon's not going to wait for that.
Yeah.
It's probably also worth at least gesturing at the elephant in the room here, which is: why is TSMC making this investment?
And there's public information, a lot of discussion, around the U.S. government putting pressure on Taiwan in connection with trade discussions to migrate 40% of Taiwan's semiconductor output to the United States, ostensibly in service of avoiding a war between the U.S. and China.
This is the trapeze rule.
You know what the trapeze rule is?
Don't let go of one until you have a handhold on the other.
So do not let go of, you know, fab capacity in Taiwan until you have it established in the U.S.
Right.
Or Taiwan overall.
Or Taiwan, yes.
All right.
A couple of slides on the U.S. economy. Ireland rolls out a pioneering basic income scheme. I think this is rather small,
both in numbers and in sort of the strategy here, but the program would pay 2,000 selected artists
$380 per week for three years. So poor starving artists are getting a small amount of money.
But it's an experimentation. Salim, you know, talked about this at length, right? There have been so
many experiments. We did that future of work session with Tony Robbins way back, like 12 years ago or something. I want to make a couple of points here that I think are really important to make.
One, people always, always misconceive the UBI with socialism. It is not. It is libertarian
because you dismantle government services and let the market dictate. That's number one. Number two,
this Ireland UBI scheme is returning 40% in benefits. Every dollar that goes in is showing $1.40 coming out the other end in benefits. So it's a positive ROI. They're looking to expand it as fast as possible, which is the actual underlying story.
Third, I want to talk about the immune system. In the U.S., several state legislatures, Idaho, Wyoming, maybe Oklahoma, have banned their municipalities from even experimenting with UBI because they want the government to exist. And so I've got strong feelings here. There's a lot of madness.
Do not get bought in by the hype here.
There's an incredible potential if you implement UBI properly.
Yeah.
And there'll be a lot more experiments.
Yeah, go ahead.
It's probably also worth pointing out.
I mean, the U.S. has experimented with this during the Great Depression.
We had the Works Progress Administration, and within that we had what was called the Federal Art Project, which paid basically starving artists in the Great Depression to create art.
So this isn't an entirely new scheme at some level, but Ireland isn't at war, and we're not in the middle of a Great Depression.
And one could imagine that this becomes something of a template for peacetime work creation.
But my sense for what it's worth is that this actually ends up not becoming a template for the future.
This strikes me as, in some sense, unsustainable, to just pay people for art overall. Historically, in the U.S., it becomes very subjective. What is art, and why should people be paid to create it? It's very easy to politicize. So my guess, and this is pure speculation,
is that sort of cherry-picking particular activities, especially activities that have a reputation of
being economically unproductive, even if they are, in fact, productive, is not the best poster child
for a basic income scheme that generalizes.
I have data that shows otherwise.
So you take the Miami Wynwood area
where a businessman bought all of the low-lying industrial buildings
that were lying decrepit for decades.
And then he hired graffiti artists to paint it all
and then put in kind of fancy coffee shops
and imported baristas from Portland.
And now it's the hottest neighborhood in the country, and his investment has gone up like 30x.
So when you bring in artists to do stuff, it brings a lot of other economic activity with it. He's done that again and again, in the South Loop in Chicago.
He's doing it here.
He's doing it in Miami.
He's doing it in New Jersey.
This is a repeatable pattern.
And it does show because there's a drag-along effect
when you bring artists in a group together
and it really changes the economy of the local area.
Listen, we're going to see,
I want to move us along here, but we're going to see a lot of conversations on this.
And it's just the beginning.
And I think you're right, Alex.
We're going to see different modalities of this.
So I found this interesting, IBM to triple entry-level U.S. hiring.
This is about redesigning, not replacing.
IBM is overhauling entry-level jobs: while AI can now perform the tasks of a junior employee, IBM is recasting these roles to focus on human judgment, customer interaction, and oversight of AI output. The article noted that Dropbox is also doing something very similar, and noted that younger workers use AI so proficiently it is like, quote, they're biking in the Tour de France and the rest of us are still on training wheels.
So what do you think about this? I mean, I don't know. This doesn't make sense to me. I mean, we're going to have AI agents that are going to be incredibly capable of managing other agents, versus putting humans in the loop there.
Well, as of today, that Drew Houston quote on the bottom from Dropbox is exactly the way
it works here, too.
A person who can wrangle these agents and keep them on track is insanely valuable.
Today.
Yeah, I don't know how long that window will last, but it is the reality of today.
It's the opportunity of today.
You're crazy to miss the window.
And that's why the young hires are way outperforming, because they're not distracted by legacy thinking.
But it's not unique to them.
It could be anybody.
You just have to unbridle yourself from your baggage and say,
how many AI agents could I be managing tonight, tomorrow, the next day?
And even if they can't do exactly what you could do,
within a couple months, they will.
See, you've got to get on the bandwagon, like, right now.
But, you know, will people have any purpose at all a year from today,
you know, relative to just an all-AI agent army?
TBD, but as of right now, in the Jarvis moment, that last quote is the part of the slide that really matters. That's what's going on. It's really important. And think about the fact that this is a generational transformation here, because the younger people, where they are so much more productive, it'll give a natural passing of the torch from older folks that are sitting in their middle-management jobs doing something in a particular way.
But Dave, your point, I think, is really important, because getting into it and trying it out is what Steve Wozniak calls tinkering, right?
And it's such an important activity to do.
If you can't get your head around it, just take psychedelics, and that'll help you.
But no, I mean, I think compared to past things, you know, there have been many technical challenges over the last 30 years.
And being an early adopter has always been the right thing to do.
But here it's so easy that the AI is so self-explanatory and it's fun.
You're crazy not to do it.
People stop themselves.
It's so fun.
Please, just, you know, get on and ask the AI: how do I do this? No, no, no. Break it down. Success is now a mindset. Yeah. It's curiosity. Curiosity and
purpose are your two most important mindsets here. All right. No job growth seen in 2025.
So the U.S. added just 181,000 jobs in 2025, down from 1.46 million in 2024.
Look at that curve. That curve takes place between roughly 2020 and 2025, '26. So the cooling market is expected to be caused by AI.
This is so understated.
This is going to come crumbling down, and it's going to be awful for a lot of people.
I can see it because I see it in our own forecasts from our own companies.
No job expansion is a joke.
This is going to be, yeah.
Wait, meaning, Dave, you're disagreeing with this.
There's actually radical job growth, just not in the sectors they're primarily measuring.
No, no, radical.
Job destruction is imminent.
Okay.
Radical. I mean, massive job destruction is imminent. And there will be new creation, just like the Industrial Revolution, but the new creation
is lagging. And unless the government gets its act together in some way, shape, or form, it's going to be,
you know, a window of time, a few years of complete devastation.
And there's no plan right now for it.
My big thought that I've been sitting with all week is we're heading into an organizational singularity.
And every single mechanism by which we organize ourselves now gets washed away by AI agents
doing either strategic thinking or execution type tasks, and we have to rethink completely what
it means to have a firm.
Salim, isn't it fascinating?
I mean, you and I have been on stages for now, the better part of 20 years talking about
this, and we're living it right now.
I mean, it really feels so palpably different.
You know, my next book, We Are as Gods, is coming out in April, and we talk about this
issue extensively. Like, what do you do? How do you deal with this transition point? And I think
one of the most important things I talk about is it is a decision that each of us have to make of will
you be a consumer or will you be a creator? We're entering a period where you can lay back and be a
couch potato or you can be on the Starship Enterprise. Salim. So I want to take the other side of it
just for a second, right? In the short to medium term, because notice that if you talk to CEOs, 80% of
AI projects are failing because of organizational issues, not because of talent, not because of what
the AI can do.
What I think we'll see happen is we'll use AI with younger folks to radically augment and then we'll
slowly automate over time.
I think the job drop and the job loss will be real, but it's going to take quite a while
to do it and it'll give us time.
It won't be a sudden shock to the economy like most people are worried about.
Obviously, we've talked about this extensively, right?
There's going to be the lack of hiring early on for junior faculty or junior positions. That's going to cause the social unrest, right? It is 20-something-year-olds who are, you know, full of testosterone and want to get a job, want to get a house, want to get married, want to have kids, whatever it might be, and they can't. And there's going to be a lot of pain
and suffering that comes from that. And then there's going to be the individuals who their
company gets restructured AI-first, robotics-first, and they get laid off. Now, we talked about this with Elon. We've talked about
this extensively ourselves. Ultimately, we're going to see universal high income when the companies
or the government is taking the increased productivity, the increased revenue, the increased profits,
and redeploying them. But those programs need to be figured out in the next two or three years.
Yeah, and that's called socialism by a lot of people. So that's going to cause some interesting
conversations. I mean, I kind of call it technological socialism or technology is taking care of you.
That's the title we've been using, right? We said that in our book, right? There's a whole section
that technology actually delivers the ideals without the government intervention, without the
inefficiency and corruption that comes with it. The most important tool that people are going to have
over the next five years, anybody listening here, is your mindset, right? How you think, if you think
the future is happening to you versus happening for you. If you don't have agility, if you don't
have agency, it's going to be really, really hard. So if you go to weareasgodsbook.com,
I hope you read the book. I'm going to be putting out portions of it in my substack,
but it lays out the mindsets you need to survive and thrive. Because if you take it from the
wrong position, you're going to be in fear. And fear is the worst place to be entering into the future.
All right, let's do a few questions on AMA.
Okay.
Salim, you want to dish them out?
All right.
Dave, why don't you go first this time?
Okay.
All right, I'll go with number two.
Justin Milligan, the great Justin Milligan.
How can the U.S. prevent corporate tech giants from creating a surveillance state
while trying to defend against AI-powered authoritarian threats?
Yes, I gave a presentation at Davos back in 2020 on how much Google knows about you. And we've been just conceding massive amounts
of information. Google knows exactly where you are at all times. They know all of your interests.
They know all of your friends. You know, far, far, far more information than any government has
ever had is now in the hands of a few corporations. And those corporations also happen to have
AI. So how do you prevent them from creating a surveillance state? I think the only way
you prevent that is with antitrust law.
And they actually don't have any incentive to irritate the entire world and create massive voter backlash.
So they've always been very cautious with the incredible power they have.
I think what you'll see next is they'll start downplaying the capabilities of their AI.
And that's a pivot for them because they've been promoting them for quite a while now.
Now they're going to start downplaying them.
There is a version of the world where they try and leave everything intact as long as possible.
and so then the AI community grows completely outside of that world.
But anyway, the only answer is, Justin, get all your Princeton friends rallied around how we work with the government to try to use antitrust law to prevent exactly what you're describing.
Because absent any legal work, you know, John D. Rockefeller would have taken over the entire world many, many years ago without antitrust law.
This is not a new thing.
And as with Microsoft and as with Google, right?
Exactly. So this is that all over again. It's only antitrust law that prevents it.
By the way, since we're live here, ask your questions in the chat. We'll answer some of those as well.
But Alex, do you want to pick one of these?
All right. I'll pick the question from Chris Perlock, 2705. Can we get some advice for the average person?
What kind of changes can we expect to see in the next 24 months?
Two very different questions. My fortune-cookie wisdom for the average person is: build. Use all of these AI tools and technologies that are now available, and start building.
Launch as many different projects, start and finish as many projects as you can, and interact with the market, and build.
This is both the familiarization technique for yourself, as well as for the benefit of the overall economy and for financial benefit.
Also, generic advice, like try to avoid dying, don't die.
The singularity is moving pretty quickly, you know, live long enough to live forever, all of the other obvious things.
To the second sub-question, what kind of changes can we expect to see in the next 24 months?
If this thesis of Solve Everything that Peter and I put out is correct, expect to start to see pretty dramatic things happening over the next two years.
If we are, in fact, on a route to not just solving math, which I think is essentially indisputable at this point, but solving physics in the next two years, I think is very high likelihood of happening, then I think there are probably going to be big surprises.
And so expect... my mental model at this point is, over the next 10 years, and that's being very conservative as an outer bound, we're going to live through the top 50 science fiction plots all happening at the same time.
So expect, what can you expect to see in the next 24 months?
Expect to see at least the first few chapters or the first few acts of your favorite sci-fi movies and books all playing out at once.
If you read a lot of science fiction or watch it, then you're probably reasonably well prepared for at least some of those scenarios.
Nice.
Salim, want to go next?
I will take number seven, by @CC485, addressing what Dave said: if AI ends up controlled by only a few within the next few years, how do we prevent the average person from losing access and influence?
So, you know, when you have centralized AI, you have centralized civilizational leverage, right? When you have open source and decentralized compute, that's the antidote, because you decentralize. You see OpenClaw, as I said before, being created by one person and outdoing a whole bunch of other things.
Exponential systems resist long-term monopolization because they tend to decentralize.
We're huge fans of decentralized crypto because of that because you get distributed innovation
and you get so many more experiments being run.
I remember when I was the head of innovation at Yahoo, the CEO said, surely we can compete
with two guys in a garage.
And I'm like, no, you're competing with 125,000 garages and 250,000 people. You can't beat that. And so this is the opportunity for individuals armed with a
mindset, as Peter said earlier, plus this unbelievable technical capability, as Alex is predicting,
to really do whatever you want and change the game completely. I'm calling this PDI, okay,
and it's disruptive innovation that's permissionless, hence the P. So in the past, when you wanted
to do disruptive innovation, you had to get approval from your venture capitalist, from your bank, from the government, from the Medici family. Now you need basically a phone and access to some code. And it's unbelievable what we'll be able to do. We're going to see thousands of experiments like
this and some of them are going to completely change the game. I see some great
questions coming in. I want to jump on some of those. But let me just answer number eight. Thank you, Chip White House TV. I'm concerned very...
Is Peter frozen for others as well? Did we just lose Peter?
I think so.
That's our mission here on Moonshots.
It's ironic.
And we aim to really please.
What a sentence to freeze in the middle of.
Yeah, it's perfect.
You got to the end of the thought.
I think we should, I think this may be the internet's telling us that we should,
we should end the episode.
Somebody just posted they got them.
We can't end without the outro.
Actually, yeah, somebody took the news.
Is anything happening in Stuttgart that we need to know about?
Okay.
So should we go to the outro?
Do you want to try to finish his thoughts, Salim, before the outro?
I can't finish Peter's thought.
I'll finish other sentences.
But you take a crack at it.
All right. Peter, apologies in advance.
Oh, we lost Peter.
I'm going to try to channel what I think Peter would have said had he been able to finish this sentence.
I think part A is Peter would say, yes, this is what we're trying to do here.
This is what I try to do.
I think Peter would probably also make some comment about wanting to launch a movie studio or something like that with more positive messaging to the world.
That's my attempted coherent extrapolated volition-style channeling of Peter.
I think that's more coherent than Peter would have done it.
So that's awesome.
All right.
Folks, I'm going to play the outro music.
Dave, do you want to make the last comment?
I was just going to say there's never been a better time to actually be a messenger
because there's so many concurrent things going on that are unaddressed.
So any topic you want to grab, you see this on YouTube all the time.
Anyone who's trying the new use case, the new agent, the new model, they're getting a huge audience.
So it is a great time to actually speak out.
So why aren't more people trying to speak?
Great question.
Why not join the crowd and start trying, demonstrating, speaking, and recording?
Yeah.
I think that's such an important point.
All right, folks, on behalf of Peter, Alex, Dave, myself, and the Lobsters, here is our outro music.
and we'll take it there.
Thank you to M-Core Mainframe for this.
This is called Moonrise.
All right, guys, I'm going to wrap it.
People can go watch it online.
Great conversation.
Thank you to all the listeners and viewers and commentators. It's been really great interacting.
Adds a whole dimension of complexity watching this chat stream, but I think way more interesting
and fun. So thanks to all of you. Dave, Alex, we'll see you guys again soon. And big hug to Peter. Big hug to Peter. I hope he's okay. Thank you, Salim. Bye, guys.
