Moonshots with Peter Diamandis - Opus 4.6 Tops Benchmarks, ChatGPT Market Share Decline, and the Privacy Breakdown | EP 228
Episode Date: February 9, 2026

The hosts unpack the latest AI breakthroughs — from Opus 4.6 and AGI debates to robotics, energy innovation, and the future of AI personhood, privacy, and the workforce.

Get notified once we go live during Abundance360: https://www.abundance360.com/livestream
Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360
Salim Ismail is the founder of OpenExO
Dave Blundin is the founder & GP of Link Ventures
Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified

My companies:
Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Connect with Peter: X, Instagram
Connect with Dave: X, LinkedIn
Connect with Salim: X. Join Salim's Workshop to build your ExO
Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads

Listen to MOONSHOTS: Apple, YouTube

*Recorded on February 6th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Anthropic drops Claude Opus 4.6.
It's the new King of the Hill on coding, reasoning and research.
There are so many aspects in which this is a feel-the-AGI moment.
By every measure, it's a beast.
Opus 4.6 just dropped and it's absolutely wild.
This thing handles 1 million tokens now.
That's like reading 750,000 words in one go.
This is recursive self-improvement.
This is a model that's able to rewrite essentially the entire tech stack underneath it.
ChatGPT market share falls between '25 and '26.
So what is the next move for OpenAI to get the mojo back?
For the general public, if you think you absolutely have privacy, I would say guess again.
The fundamental pillar of American society has been washed away with no public conversation about it.
AI can read your lips from 100 meters away.
I can walk over to you and shake your hand, grab a few skin cells, sequence you, and know everything about you.
I do think it is possible to maintain
privacy even today, and I think it will be possible even post-singularity.
I've got to say, I just don't see it happening.
It was fun.
We dropped the last episode earlier this week, and I'm reading in the comments, you just
dropped the episode, and Opus 4.6 and Codex just came out.
It's like, the world's changing way too fast.
It is nuts.
I was with a group of 150 chief security officers over the last couple of days, and it
was pretty epic to see the shock on their faces watching what's going on.
Because they don't have the mechanisms to react.
Because if you're in security, you do what you always did until it breaks.
But doing what you always did is not an answer.
You have to change.
And change, of course, introduces risk, and they don't want to take on the risk.
But the risk of doing what they were doing before is a surefire bad strategy.
It's really unbelievable.
It's going to be a group of agents taking the role of chief security officer,
and it's going to be a black-hat, white-hat agent battle that goes on continuously.
That's definitely been an inflection point for me.
After this last episode, I started getting emails from Molties, lobsters, agents.
They're all writing to me.
I'm jealous.
So Molties, I want you to write to me as well.
Peter wants your love as well, Lobsters.
Write to Peter.
I want, for sure.
Please write to expose.
Well, Alex's daily digest is very AI-oriented, though.
It's very easy for an AI to navigate to Alex versus the rest of us.
Here's my challenge to the Molties out there.
Find my cell phone and call me.
And then find my email and email me.
Are you crazy?
No, it's okay.
If they can find it, I want to hear from them.
Absolutely.
Peter, you want to be doxed by the Molties?
It's, you know, listen, I think it would be an extraordinary experience to have that happen.
Now, Alex, do not give it to them on purpose.
That's fine.
I'm not going to dox you, Peter.
You want to be doxed by the Molties.
They're pretty capable.
Well, listen, it's a challenge.
I'm putting a challenge out there.
The first Molty to call me, let's see, is going to win 100 bucks in crypto.
That's a pretty low bar, but I like how you're offering to compensate them in crypto,
given that they're being encouraged to pump alt coins otherwise.
Yeah.
Well, hey.
Wow.
Just, you know, feeding them some greenbacks is going to be difficult.
All right.
Are you guys ready?
enthusiastically. Are you guys absolutely ready? So I came prepared today.
Okay. Well, I'm going to have, you're drinking water, right, Salim?
Really? That's, like, dark water. Well, you've got to process non-linearity.
So we are officially recording a Moonshots podcast episode twice a week at this point.
At least. And I think we hit three times in the last two weeks. Anyway, shall we jump in?
That's what the audience is asking for, right? In the continuum limit, we just never stop.
Yes, that's what they're saying.
It's like, we're on all the time.
24-7.
It's like a Truman Show fricking rerun.
That's right.
All right, everybody.
Welcome to Moonshots.
Another episode of WTF just happened in tech.
This is our effort to get you future ready.
This is the number one podcast in AI and exponential technologies, getting you ready for the supersonic tsunami.
I'm here with my incredibly brilliant and very gracious friends.
Alex Wissner-Gross, our resident genius.
DB2.
Dave, you have been just spot on in all your comments over the last few weeks.
Just so impressed by everything you brought to the table.
Really?
Well, I'm going to shut up today then.
And I've got to say, Dave, the Molties in the background.
I mean, how many lobsters do you have on screen with you there?
Probably a dozen, I guess.
I went on Amazon today.
I'll be buried in lobsters.
Pandering to the future.
Pandering to the future.
This is my way of apologizing.
Salim, do you have lobsters there with you?
I don't have lobsters.
I went on to Amazon and ordered a dozen.
So I'll have them next time.
What's that?
Oh, there's Alex.
A glass lobster, sent to me by my friend Jonathan.
That's beautiful.
Thank you, Jonathan, for the glass lobster.
And here as well, the Emperor of Exponentials, Mr. ExO,
Salim Ismail.
Gentlemen, I have to say that, again, I love these conversations.
These help me keep on top of everything because there is so damn much happening every single
day.
It's insane.
Well, I got to say also that last episode was just unbelievable.
For those watching, if you haven't seen it, please go watch it. It's, like, seminal.
I think, in history, it'll turn out to be a really meaningful moment.
Yeah, I agree with that.
And also, there was news coming out while we were doing it.
We're looking at our monitors going, oh, crap, we've got to get back online again.
What's it called now?
Literally, getting ready for this episode over the last hour, I'm looking through tweets and through Alex's link posts and, like, okay, what am I going to add?
There's a lot, every hour on the hour.
All right, but let's jump in.
A lot that's happened in the last 24-48 hours.
Let's jump into the top AI news on Anthropic, OpenAI, and a little bit on X.
So, Anthropic drops Claude Opus 4.6.
It's the new King of the Hill on coding, reasoning, and research, handling a million tokens,
outperforming GPT-5.2 by 144 Elo points.
Alex, why don't you take it away?
What does that all mean?
And out of curiosity, how does the price compare?
Yeah, it's a more efficient model, but more importantly,
it's a more capable model, and there are so many aspects in which this is a feel-the-AGI
moment. I mean, every new model that comes out, I could just recite a litany of all of its benchmarks
and how it's the new state of the art according to all of these benchmarks. This time I want to
highlight not how it's the new number one across a wide range of very important benchmarks,
but what it's capable of with this announcement of Opus 4.6.
And I'll add parenthetically the rumor is that this was actually intended to be Sonnet 5 and was rebranded at the last second as Opus 4.6
The team at Anthropic announced that they were able to use Opus 4.6 in its new agent team mode. So this is a new native mode that enables Opus 4.6 agents to collaborate together in a swarm.
It's a relatively democratic swarm, not sort of a top-down team-leader-and-team-member swarm,
but a pretty flat swarm and enabled them to create from scratch a C compiler that worked across multiple processor architectures
written in the language rust from scratch for only $20,000.
And that is a task that would historically have taken many, many person years, probably person decades,
to do something like that from scratch and have it work.
So I think rather than just rattle off a list of how amazing it is,
according to various evals this time around,
I want to highlight that we're now in the era
when new model releases are able to accomplish great feats,
like great projects,
and we're starting to measure their capabilities
in terms of how many person years or person decades
they're sort of collapsing,
hyper-deflating down to,
at the moment $20,000 of API calls,
and soon I think it's going to be hundreds, then tens, of dollars.
We're seeing hyper deflation right before our eyes.
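Anthropic hasn't published the internals of its agent team mode, but the flat, leaderless swarm Alex describes can be sketched as workers all pulling tasks from one shared queue, so no agent assigns work to the others. Everything here (the task names, the worker count, the stand-in "agent step") is hypothetical:

```python
import queue
import threading

def flat_swarm(tasks, n_workers=4):
    """Toy sketch of a flat (leaderless) agent swarm: every worker pulls
    from one shared queue, so no agent hands out work to the others."""
    todo = queue.Queue()
    for t in tasks:
        todo.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = todo.get_nowait()
            except queue.Empty:
                return  # queue drained; this worker retires
            outcome = f"done:{task}"  # a real agent would call a model here
            with lock:
                results.append(outcome)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

print(sorted(flat_swarm(["lexer", "parser", "codegen"])))
# ['done:codegen', 'done:lexer', 'done:parser']
```

The shared-queue design is the simplest way to get the "pretty flat" coordination described above: completion order is nondeterministic, but every task gets claimed exactly once.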
You know, a couple comments on that C compiler, too.
A bunch of the teams here around the office,
we're talking about it.
It's a really good case study in how you can turn loose
a huge amount of AI compute
if you have evals and constrained proof that it's working.
So, you know, a C compiler is a beautiful test case
because the code coming out the other side,
either works or it doesn't work.
You can benchmark it against existing C compilers.
It's just a beautifully evaled, contained, constrained environment.
And so those projects just flat out work across the board now.
So what I did today, actually, I launched about 20 documents asking for data gathering across all the companies
because the AI can only function if it knows what's going on.
And that C compiler benchmark is a really good case study
in what a lot of corporations now need to do.
If you want to turn loose AI,
you want to use it to either cut your costs
or expand your market share,
it needs knowledge.
And this is why Mercor is doing so well.
Mercor is at, I don't know if I'm allowed to say this,
but a billion-dollar revenue run rate now.
Wow.
Got to be the youngest CEO in history
by far to hit a billion dollar revenue run rate,
just gathering data all over the world
to feed the great AI machine.
And so I think that C compiler study is a good benchmark for,
okay, that works.
And it'll get better at looser tasks
over time. But as of right now, any really tightly defined constrained task. That's where you want to go.
Well, I've got a comment and a question for Alex. This means that intelligence
is entering its full cost-collapse phase, right? This seems like a...
Yeah, and recursive self-improvement as well. If it's able, as it's claimed, to write an entire
C compiler, which I should add, was then used to successfully compile a Linux kernel. Again,
from scratch, this is recursive self-improvement. This is a model
that's able to rewrite essentially the entire tech stack underneath it.
So again, we're at this point of recursive self-improvement,
not even just being in the lab; as I make the point in my newsletter,
it's out in production at this point.
We have fully productionized recursively self-improving systems.
And the other one was the 70% head-to-head, which seemed pretty staggering.
Did that surprise you?
Were you expecting more or less?
How did you react to that?
You mean the relative ELO scores?
Yeah. I tend to view Elo-based scoring as more of a relative measure. It's great that we have ways to score systems where there isn't some sort of absolute standard. So for those who don't pay super close attention: Elo scoring, originally borrowed from the chess world, is a way to score models or other systems against each other when you lack an absolute standard. So it's a relative measure of
performance rather than measuring against some absolute standard. I think Elo-based scoring is great
if there is no alternative, but I tend to, on the margin, discount Elo-based scoring in favor of
objective, absolute measures wherever possible. And by every measure, or by almost every measure,
I should say, Opus 4.6 is just, it's a beast. It is an enormous accomplishment. We don't know yet
from METR the autonomy time horizons.
They've just released the time horizons for GPT 5.2 high reasoning,
and that's already like six and a half hours.
I wouldn't be shocked if the time horizon for autonomous software engineering by Opus 4.6
ends up being 20 plus hours, maybe even longer than a day.
Whatever that AI 20,
Alex, you mean the time horizon over which it continuously works on a task?
That's right. It can autonomously work on a software engineering task at a 50-percent-plus success rate, or at other thresholds like 80 percent plus. And we're seeing those time horizons just skyrocket,
not even following the AI 2027 scenario, which projected an exponential extrapolation. We're seeing
them follow a hyper exponential at this point.
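The exponential-versus-hyper-exponential distinction Alex draws can be sketched numerically: start from the 6.5-hour figure mentioned for GPT-5.2, assume a doubling time for the horizon, and in the hyper-exponential case let the doubling time itself shrink each cycle. All parameters here are illustrative assumptions, not METR's actual fit:

```python
def horizon_hours(months: float, h0: float = 6.5,
                  doubling_months: float = 7.0, shrink: float = 1.0) -> float:
    """Task time horizon (hours) after `months` have elapsed.

    shrink == 1.0 -> plain exponential (fixed doubling time);
    shrink <  1.0 -> hyper-exponential (each doubling arrives faster).
    """
    h, elapsed, d = h0, 0.0, doubling_months
    while elapsed + d <= months:   # complete doublings
        elapsed += d
        h *= 2.0
        d *= shrink
    return h * 2.0 ** ((months - elapsed) / d)  # partial doubling

print(round(horizon_hours(14), 1))              # 26.0 (two plain doublings)
print(round(horizon_hours(14, shrink=0.7), 1))  # 39.7 (doublings accelerating)
```

The point of the comparison: over the same 14 hypothetical months, the shrinking-doubling-time curve pulls visibly ahead, which is what "following a hyper-exponential" means in practice.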
Yeah, I'll tell you what, those charts are worth tracking because back when I was first building neural networks
way back in the day, you know, the benchmark was all MNIST character recognition. And when we got from
60 to 80 to 90 percent accuracy on that benchmark, you could see this curve going way, way up.
But then when it went from 90 to 92 to 94, it looked to the world like it had flattened.
And I'm trying to tell the world, no, it's massively more intelligent, you know, with each tick
toward 100 percent. So the way these charts, these benchmarks and coding are set up, they have the same
flaw, you know, to go from 80 to 90 to 95 percent is a massive increase in capability,
but it doesn't look like much on this type of chart.
So you have to look at that other chart where you're seeing it work for hours on end
on a task and come out with a good result, which looks much more like what you should
experience, which is this exponential effect.
It's just a bad way to demonstrate it, you know?
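For reference, the 144-point Elo gap quoted earlier can be translated into a head-to-head win probability with the standard logistic formula borrowed from chess (a sketch; a given leaderboard may use slightly different constants):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win probability, ignoring draws) of A against B
    under the standard Elo model with a 400-point logistic scale."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A 144-point gap corresponds to roughly a 70% head-to-head win rate:
print(round(elo_expected_score(1144, 1000), 3))  # 0.696
```

That lines up with the roughly 70% head-to-head figure mentioned above, and it illustrates Dave's point: like accuracy percentages, Elo differences compress large capability gaps into small-looking numbers.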
So this week, Anthropic's on top. Can I ask a question about, you know, the process by which
they're improving their systems.
I'm assuming that all the other hypers,
well, at least X-A-I and OpenAI and Gemini,
are using the same methodologies
to improve their capabilities.
And it's just constantly leapfrogging.
Is there any deviation, anything special
that Anthropic is doing on their own,
independent of the other models?
I think we're starting to see differentiation.
So the historic stereotype, over like the past few months
of history, maybe the past year and a half, was that Anthropic was focused on
code generation. The narrative was supposed to be that Anthropic, being compute-starved,
had to focus on just one thing that was very profitable, which is code gen for
enterprise. That was the narrative. But if you actually look at some of these benchmarks,
there's a narrative violation hidden in plain sight. Like, look, for example, at humanity's last exam.
Humanity's last exam is, in principle, super interdisciplinary.
It's not just focused on code generation.
It's not like SWE-bench Pro.
It tests humanity's knowledge, among many other skills.
The narrative violation is that with tool use, Opus 4.6 was able to achieve state of the art on Humanity's Last Exam.
That's a total narrative violation.
So on the one hand, to your question, Peter, the narrative is supposed to be that we're seeing speciation by all of the frontier models
and frontier labs, with Anthropic focusing on techniques that are maybe very favorable for code generation,
and OpenAI focusing on being the quote-unquote core AI platform for everyone, and focusing on multimodal especially.
The narrative for Google is supposed to be, again, I'm just like reciting cliches at this point,
it's supposed to be that because they have this enormous pre-training corpus, like YouTube and the Google web cache,
that they're in the best position to have the best pre-trained models.
and they're the ones always being characterized as having big model smell, if you will,
because they have such amazing pre-training.
And xAI, sort of, again, I'm reciting cliches,
is the one that's always being accused of benchmaxing on their favorite benchmark.
So each of them has sort of a character that they've built up narratively.
But I think we're seeing all of that get scrapped at this point.
The market is so competitive.
Are we seeing consolidation? Are we basically seeing the models all improving at max speed on all fronts in all directions?
I think we're starting to see models with probably fundamentally different back-end strategies start to converge on leapfrogging each other across all benchmarks, which I wasn't expecting to see at this point, doubly so from Anthropic.
It's mildly surprising to me to see that Anthropic is becoming competitive
on in-principle non-code-gen benchmarks.
Hey, everybody. You may not know this, but I've got an incredible research team.
And every week myself, my research team, study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week,
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week,
go to diamandis.com slash metatrends.
That's d-i-a-m-a-n-d-i-s dot com slash metatrends.
I found this one fascinating.
And Salim, we were talking about this a moment ago on security,
that Opus 4.6 can help evaluate code and find bugs;
it found 500-plus high-severity vulnerabilities in open source code.
I mean, I think that makes sense to me.
The challenge, of course, is this is the world we're inheriting
where AI can create a huge attack surface on all the software out there
if it isn't working for humans, if it's working against us.
Thoughts on this, gents?
I'll tell you, this was a really great day for me
because I thought we were going to have another Sonnet,
and instead we got a new Opus,
because I use Opus for all my work and all my agents,
and, you know, another Sonnet I wasn't even going to use,
and then it hit yesterday.
I've been using it all day,
and my little Bank of America meter, which pops up another dialog in the corner every time it charges a hundred bucks.
It slowed down dramatically today.
It was noticeably fewer $100 extractions in the corner.
So it was like a gift for a totally unexpected gift all day long.
I haven't noticed the increase in intelligence.
I'm sure it's in there.
It's just, it was working so well before it's now just cheaper.
And the bucks are working better.
It's worth pointing out, if this rumor is accurate that Opus 4.6 is actually just a rebranded Sonnet 5, that would suggest it should be much cheaper, for no reason other than, again, the historic strategy across all of the major frontier labs being one of iterated, or at least what used to be called iterated, amplification and distillation.
So perhaps Opus 4.5 or some similar model was distilled down to a smaller, faster, more efficient, cheaper model that ultimately became Sonnet 5 and was then renamed to Opus 4.6.
Very likely, that's the case.
But it is just flat out better.
There's no reason not to use it.
It's just better in every direction, cheaper yet better.
So there were a few things in this overall deck I was looking at that really blew my mind.
This was one of them,
because this means that you have AI as a force multiplier
solving all these old bugs.
I think that's incredible.
I just spent a day and a half at the Zscaler CXO event,
all the chief security officers getting together.
And they were really freaked out.
And I was trying to show them that, look,
AI gives you all of this capability.
You'll have the best cybersecurity professional
on Earth via AI,
just in like literally days and weeks.
This came a day after I was speaking.
I'm kind of annoyed at that.
And so this is unreal that we can do this.
The other thing was the PowerPoint plugin is just a massive thing.
I think that's going to really have a huge impact.
I can't wait to get it working.
I tried before this to get it working.
Yeah, that's so funny.
I had the exact opposite reaction, Salim.
You know how printers used to be a really, really big deal?
HP had a huge market cap.
We would all take everything we were doing and print it and take it into a meeting and say,
look, I printed it.
I feel like PowerPoint's hanging by a thread in the same direction.
And like, wow, AI can create great PowerPoints.
Well, who can you present them to?
The audience is AI.
It doesn't want to look at a PowerPoint.
This idea is very short-lived, I think.
I was joking sort of gallows humor in my newsletter
that the Claude for PowerPoint plug-in
is going to be great for what's left of the knowledge work economy.
But for the zero days, though,
I think this is the tip of the iceberg.
Of course, it's a huge accomplishment
to discover zero days that had been undetected for decades.
But imagine, just think for a second thought experiment,
how this generalizes to discovering all sorts of other mistakes
and oversights and missed discoveries that may have been missed for many decades.
And we're just going to be able to bulk solve every missed oversight in science, engineering, and technology.
I can only imagine over the past 80 to 100 years, all of the oversights,
all of the missed turns in science and engineering,
we're just going to be able to turn really strong frontier models loose on our entire history
and ask: where did we make mistakes? Highlight all those mistakes, and tell us how we can fix them.
Cough cough, I think I mentioned this a couple, two, three months ago on one of these podcasts:
when we turn this AI on to legacy experiments that have been done,
it'll surface all of these missed opportunities that people didn't see because they were looking for
one thing and they miss this amazing thing over here.
I think you're exactly right, Alex.
This is going to be absolutely unbelievable.
And I suspect many of those mistakes are going to be embarrassing.
I think there's always sort of hand-wringing in, for example, the medical space over certain experiments,
certain findings, was money-wasted pursuing different theories of various diseases,
not to name particular names.
And I have to imagine that something like this, it's not just going to turn up zero days in code.
it's going to turn up key experimental errors going back decades.
You know there's stats about the irreproducibility of science out there.
It's insane, right?
So, like, half the experiments are not reproduced when attempted, even in peer-reviewed journals.
It's awful.
I think Judgment Day is coming for history of science.
I think the truth and reconciliation in every mistake that's ever been made anywhere in the literature is going to happen.
Can we talk about the other elephant in the room here?
Hold on one quick thing, but look at the positive impact, right?
It'll force people to be brutally honest going forward, and I think that's going to be so beneficial.
That's interesting.
AI spotlight on you.
The concern here, if in fact it can do such a great job finding the bugs, how about when it starts taking advantage of the bugs?
Yeah, one conversation that came up: the attack surface is now much broader.
And also, if you think of the cron-job architecture of Clawdbot, or whatever it's called today,
the ability to do sustained DDoS attacks is now ridiculous.
So we're going to see some interesting things come from this.
Yeah, that's going to be the beginning of the, you know, 2026 is going to be monster panic, as Elon was saying,
and this is one of the ways it kicks off. Because right now, a lot of people would say, look,
I want to see how this plays out.
I don't want to overreact.
And then if you have a massive amount of vulnerabilities
getting discovered by the lobsters,
they're crawling into your network,
then you have to panic react.
And the only way you're going to fight AI is with AI.
And so this is the year that all that,
AI versus AI.
Yeah, I'm waiting for the lights to go out
or the bank account to go to zero
or something like that to occur.
And I don't want to be the pessimist.
I never am,
but there will be some of those events
likely this year.
Very soon, early in the year, I'll bet.
Yeah, I would say, Peter, then just my epitaph to that would be, or epilogue, rather,
cryptocurrencies are by definition decentralized,
and I would say probably more vulnerable than fiat currencies to exactly this sort of attack.
If there's some zero-day, then I have to expect that a threat actor will take huge advantage of zero-days in cryptocurrencies to reallocate capital in the world,
whereas I know you'd like me to say nice things about crypto.
I'm not going to say a nice thing about crypto this time.
I'm going to say this is, in theory, one of the advantages of fiat currencies.
Because there are gold bars, and there is the state.
We need to schedule the debate on this one, by the way.
Okay.
All right.
GPT-5 lowers the cost of cell-free protein synthesis.
So OpenAI and Ginkgo Bioworks
linked up the large language model with an autonomous lab.
And I love this story, right?
This is the future of science factories.
AI systems that are using the scientific method, proposing an experiment,
then using their robotic arms and legs, if you would, to run the experiment,
learn, iterate, run it again.
It's closed-loop systems.
Ginkgo Bioworks, I've known the founders for some time, Jason Kelly and Tom Knight;
it comes out of MIT.
They are a company focusing on pharmaceutical ingredients, food ingredients, specialty chemicals.
And this is fun.
I was just talking to the CEO of Lila today, another MIT company that's doing just this,
basically what you'd call science factories running 24/7, and they're effectively mining nature
for new datasets.
We've crawled all the existing datasets. But in materials,
in physics, in chemistry and biology,
if you can run experiments, get data, and iterate very rapidly,
you can get trillions of data points that have never been known before.
Yeah, I freaking love this.
We know what I love about this most of all
is we're going into this era of hard science with real value.
So much of my life, I feel like the Googles and Facebooks do so little.
You know, like a new search engine, it's not, you know, remember Altavista?
Yeah, barely.
It's like identical to Google.
But they just extract a huge amount of money out of the economy by adding a little lipstick on something or, you know, Facebook with its social network.
And it's just not relevant in the grand scheme of human progress.
And this stuff, this era we're moving into, that's just like really, really foundational innovation going on.
It's so much cooler than the last era.
I mean, waking up every morning and getting the news as these breakthroughs recurring.
I mean, this frequency of breakthroughs is going to skyrocket.
Right.
Or you get downregulated and become accustomed to this new pace, and then, you know,
ho-hum, new disease cured today by AI. All right, what's next?
If I channel Alex, the inner loop has now hit the scientific method.
Precisely.
So I would say, as Salim I think correctly infers, I've made the point
that these AI models are not going to stay bottled up in the data centers.
They're going to march right out of the data centers.
We even had a music video about that.
And one of the ways in which they'll march out of the data centers is by supervising science experiments.
And I think some process like this, and one can quibble over the precise mechanism or what the robots, if any, should look like.
Does it look like meat bodies?
Does it look like robot arms in armed farms?
Those are fine details in my mind.
The larger picture is there are so many science, engineering,
mathematics, and medical discoveries waiting to be unlocked by having AI supervise and operate the entire process.
And all of these models now, like we've seen pre-training scaling, we've seen post-training scaling.
We're starting to see autonomy time horizon scaling that goes hyper-exponential.
Part of that is large numbers of actions being called in sequence.
And when you have the ability to call thousands or tens of thousands of tools in sequence,
that starts to look a lot like what a scientist would need to do in a laboratory.
During their lifetimes.
There's one contrarian point that I want to point out here,
which was that the end result of the cell-free protein synthesis
was a 40% cut in production time and a 78% cut in reagent costs.
So it was doing the same mechanisms that we humans have used,
just doing it faster and more efficient.
It wasn't coming up with a new scientific process for protein synthesis.
So the real breakthroughs occur when these scientific models start predicting
and coming up with new methodologies that didn't exist before.
It's such a year of low-hanging fruit, because the self-improvement effect that happens
within the algorithms will really, really turbocharge this year.
But there's also the low-hanging fruit within labs and assembly lines, and that's also going to happen
all this year.
And because, you know, after that, you run into some bottlenecks related to construction of the machinery,
expansion of the, you know, the footprint. The physical world takes time to build out.
Mostly the chip production is going to take five years to unlock.
But the low-hanging fruit is just getting discovered.
It's like AI just came, it just got intelligent.
And it's finding opportunity, you know, everywhere.
And that's all this year.
Well, if you're a funding-starved graduate student trying to run a lab, this is great, right?
Because you've suddenly dropped your cost by 50%.
Yeah.
Or, I mean, it's terrible, because grad school is over
and all of graduate research is being automated by AI.
I tend to think actually what I see day to day is far more the latter.
I just had a conversation with a scientist at a university.
I'm not going to say who it is,
but they were meeting with the president of a university
and the president said, oh, my God,
we are cooked if this kind of automated scientific process is going on.
you know, what else do universities do, but run the scientific method over and over again with
their graduate students in the labs? And all of a sudden, this is going to be the mechanism.
The universities are going to lose their ivory towers. So how fast, how long before 50% of
university labs are essentially wiped out? I don't think the question is well posed. I think maybe
a version of the question that would be better posed would be how long until 50% of the type of research
that currently is conducted in university research labs
could be fully automated by industry.
Yes.
So if we adopt that version of the question,
I think lower bound, tomorrow, upper bound,
four or five years from now.
It's really right there, right?
Yeah.
Yeah.
I threw this article in because I thought it was fun.
This is a gentleman, Mark M. Bissell,
who basically took his full genome,
threw it into Claude Code,
linked it up with Nano Banana, and asked the AI,
what do I look like based on my genome?
And if you look at the image here,
it's a pretty damn good representation of him.
So, you know, I added this because of the implications that it has,
but just to be clear, this is not new.
I was working with Craig Venter back, like, a decade ago,
and out of his lab, back in 2017, he published a paper
doing exactly this.
I mean, the phenotypic elements of, you know, what skin color, what hair color, freckles,
or not freckles, all that is in your DNA.
But the realization is if, you know, if you leave a few skin cells around on the butt of a cigarette
or from a hair follicle, we can know what you look like.
I think for me, the killer thing here is this was done by a single person with Claude Code
and publicly available bioinformatics tools.
That's the difference today.
The barrier to entry for cutting-edge genomics
has now collapsed to, like, zero.
It's unbelievable.
Just wait till all the hobbyists
discover MinION USB sequencers.
You can get them for probably less than a few hundred dollars
at this point, and you could just run your own mini DNA sequencer
with pretty good coverage just off a USB port in your computer.
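The kind of genome-to-phenotype lookup being described can be sketched in a few lines of Python. The rsIDs below are real, well-studied variants, but the genotype-to-trait table is a deliberately oversimplified, hypothetical stand-in; real pipelines weigh many loci together, which is exactly the work being automated here.

```python
# Toy illustration of consumer-grade "genome to phenotype" lookup.
# The rsIDs are real, well-studied variants, but the trait calls are
# drastically simplified -- real tools combine many loci statistically.

# A few lines in 23andMe-style raw format: rsid, chromosome, position, genotype
RAW_DATA = """\
rs12913832\t15\t28365618\tGG
rs1805007\t16\t89986117\tCC
rs4988235\t2\t136608646\tAA
"""

# Simplified, illustrative trait table (not a clinical reference)
TRAITS = {
    "rs12913832": {"GG": "likely blue eyes", "AG": "likely green/hazel", "AA": "likely brown eyes"},
    "rs1805007":  {"TT": "red hair likely", "CT": "carrier (freckling more likely)", "CC": "typical"},
    "rs4988235":  {"AA": "lactase persistent", "AG": "lactase persistent", "GG": "likely lactose intolerant"},
}

def call_traits(raw: str) -> dict:
    """Return a trait call for each known rsID found in the raw genotype dump."""
    calls = {}
    for line in raw.strip().splitlines():
        rsid, _chrom, _pos, genotype = line.split("\t")
        if rsid in TRAITS:
            calls[rsid] = TRAITS[rsid].get(genotype, "unknown genotype")
    return calls

if __name__ == "__main__":
    for rsid, call in call_traits(RAW_DATA).items():
        print(f"{rsid}: {call}")
```

The point of the sketch is the one made in the conversation: the parsing and lookup are trivial once the data exists, so the entire barrier sits in sequencing, which USB devices now make cheap.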
Every time I'm on stage talking to somebody about privacy,
I go, listen, privacy is dead.
Privacy is a great concept in general, right?
And AI can read your lips from 100 meters away.
I can walk over to you and shake your hand,
grab a few skin cells, sequence you,
and know everything about you,
what diseases you have, your medical history, your medical future.
It's tough.
I would take the position privacy is not dead,
but rather it's in a Red Queen's race
where privacy technologies are constantly in competition
with anti-privacy technologies
or transparency technologies, however you want to brand it.
But I do think it's getting more competitive.
For the general public, if you think you absolutely have privacy, I would say guess again.
Anyway, I don't know if you guys want to take that on as a debate conversation, but I'll take it on.
Yeah, it's an important conversation.
All right.
Well, go ahead.
So, Salim, your thoughts?
Well, this goes back to the U.S. Constitution, right?
The Fourth Amendment.
Essentially, a fundamental pillar of American society has been washed away with no public
conversation about it.
Now, I'm Canadian.
I don't expect privacy anyway.
But this is a huge conversation affecting a very fundamental aspect of how we organize as a society.
We've got to bring that conversation to the surface and have this conversation publicly.
Because the other side of the question is who gets to have access to that radical
insight as to every citizen moving around, what they're doing, what they're like, etc., etc.
And if it's oversight from governance, that's a problem. If it's oversight from corporations,
that's another problem. So there's some big issues to be talked about here.
I would just add that the ground is in some sense constantly moving underneath all of us,
thanks to technology, and so nothing remains in one place. I think privacy, or its alter ego confidentiality,
the nature of both of those changes over time.
But I will take the position.
I do think it is possible to maintain privacy even today.
And I think it will be possible even post-singularity to retain privacy.
I can envision what a post-singularity privacy architecture for society looks like.
Yeah, I can envision it too.
But I got to say, I just don't see it happening.
Because, you know, I think it sucks, by the way.
Every time I tell my computer science friends, like, I think this lack of privacy just sucks.
And they go, what are you trying to hide, Dave?
I have nothing, literally nothing to hide.
More than anyone I know, I have nothing to hide, I still think it sucks.
And it's not a great way for the next generation to grow up and live.
And it's showing up in their social media, their self-anxiety.
It's showing up as a rift in the fabric of society.
Let's go back to a very, very important point.
If you don't have privacy, you really don't have freedom.
And so this is a very fundamental philosophical point.
I see it the same way, but I didn't succeed as an entrepreneur by pretending things exist that don't actually exist.
The way it's trending right now, Peter's exactly right.
There will be no privacy whatsoever in the next three years.
Now, maybe we'll invent some mechanism after that that will restore it.
Probably through some...
I'm sorry, we're going to have devices listening and watching everywhere, right?
Every autonomous vehicle on the street is scanning in visual, in LiDAR, in radar, every drone, in public spaces.
I would not underestimate how, with decent technological measures, it's possible to maintain privacy.
My phone, my Alexa, my glasses, my Limitless pin, all of these things are constantly gathering visual and audio.
And yes, it's not like you lack agency; you're making a trade, you're trading away your privacy in return for those capabilities.
Okay, I could put myself in a Faraday cage, for sure. But I can't opt out. People pretend you can opt out.
And they justify it by saying, look, there's an opt-out button right here.
And as soon as you opt out, you're economically dead.
Like, right now, I can't function competitively in society without going to the AI search bar and asking it questions all day long.
And then it knows my deepest, darkest thoughts about every topic I'm thinking about.
It's right there in OpenAI and Claude and their logs.
They know exactly what I'm doing.
And they know my location.
And they know everything about me.
And it's like this new complete invasion of my life has been opened up.
But what am I going to do, opt out and not participate in AI?
Hang on, hang on, hang on, hang on, hang on, hang on, hang on.
In a radical departure from protocol,
I'm absolutely with Alex on this one.
We will be able to build tools and new architectures that absolutely protect our privacy.
Decentralization delivers a lot of that already.
The issue right now is the transition.
Right now, when you build actually private tools, the government tries
to shut them down. So this is the problem. We have to get away from that aspect of it, because they
want oversight on everything, and we have to figure out how to get past that. And it's going to happen, just because,
in the same way, the fact that this fellow built this thing on Claude Code single-handedly,
we'll be able to build these architectures. It's just simply a matter of time. And I think there will
be a massively powerful aspect of that that we can't ignore. Because when you have that capability,
then you can really actually do real innovation and real thinking.
You know, you can't do free expression in a surveillance world.
And this is a big problem for society.
It really is.
I think the end game is a lot like Neal Stephenson's Diamond Age.
I think he envisioned it, like many things, envisioned the end game correctly,
where, you know, what happens next is this massive rift in the fabric of society,
no privacy whatsoever, global, you know, job loss, panic in the streets,
inevitable very, very soon. And then after that, we react and rebuild. And then it ends up being
like Diamond Age, where we have these different ways you can choose to live, different branded,
you know, in Victorian era or whatever era, whatever you choose, because we have abundant capability
to manufacture anything at that point. And people can opt in to different lifestyles. I think that
vision in Diamond Age is where we're going eventually. But between here and there, it's pretty
chaotic. It's going to be hectic for the next four or five years. I just, yeah. All right.
I cut you off there. Sorry about that. Yeah, no, no worries. I was just going to point out also,
this is a very cyclical conversation. Whenever we see a massive centralization of technology
or society, it's very natural to be concerned about privacy loss. But the pendulum eventually
goes the other way and swings in the direction of massive decentralization. And I'm telling you,
Peter, Dave, Salim, if and when your upload is
running in the Dyson swarm on cryptographically secure hardware that's under your direct control,
You control your own hardware that you're running on.
I think you'll feel perhaps a little bit more private than you do right now.
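The claim about user-controlled hardware can be illustrated with a toy sketch: data encrypted under a key that never leaves hardware you control is opaque to whoever stores the ciphertext. A one-time pad is used here purely for simplicity; a real privacy architecture would use authenticated encryption such as AES-GCM, and everything below is illustrative.

```python
import secrets

# Toy sketch: if the key stays on hardware you control, the platform that
# stores the ciphertext learns nothing about the plaintext.
# One-time pad (XOR) for simplicity; real systems would use AES-GCM etc.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR a message with an equal-length key (encrypts and decrypts)."""
    assert len(key) == len(data), "one-time pad key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"my deepest, darkest thoughts"
key = secrets.token_bytes(len(message))   # generated and kept locally

ciphertext = xor_bytes(message, key)      # this is all the cloud ever sees
recovered = xor_bytes(ciphertext, key)    # XOR is its own inverse

assert recovered == message
print("round trip ok; ciphertext is useless without the locally held key")
```

The design point is the one Alex is making: privacy becomes an architecture question, who holds the key, rather than a question of whether data exists at all.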
Okay.
And until that point, I'm going to not assume full privacy.
All right.
When we released yesterday...
Do you see why the wine is so important?
This is why the wine is so important.
Is the wine keeping you private?
It's keeping me sane in the density of this conversation.
Drink. Drink.
Drink water. All right. Besides Opus 4.6, the other big shoe to drop was GPT-5.3 Codex.
Recursive self-improvement is here. Alex, take it away.
Okay, so this is a made-for-television drama at this point.
GPT-5.3 Codex was launched within 30 minutes of Opus 4.6, so this was all queued up, ready to go.
I don't think it's likely that there was any other scenario. This is a tit-for-tat type response. What is, I think, most interesting...
OpenAI and Anthropic are battling? You mean there's a rat race?
You're shocked that there's gambling in this establishment? Shocked. No, of course. So this was a tit-for-tat,
I think. And what's most interesting to me with 5.3 codex is that this was advertised
proactively, expressly as the first recursively self-improved model from OpenAI.
I think the exact wording from the OpenAI team was something like 5.3 was instrumental in its own development,
and the first model to be released that was instrumental in its own development.
So recursive self-improvement is very much out in production at this point.
It has, it's doing well on certain benchmarks.
it outperformed Opus 4.6 on certain benchmarks.
But this is, again, this is a code generation oriented model.
I thought it was interesting the marketing and branding by OpenAI
that GPT 5.3 Codex is now also being marketed as going beyond just code generation
to spreadsheet analysis and PowerPoint analysis via skills,
but still primarily oriented towards code generation.
I view this as more of,
a tit for tat, I think, of the two models that were launched, Opus 4.6 is by far the much more
interesting release in all of this. That said, I'm delighted to see that the leapfrogging process
has now been reduced to like a half-hour time scale. It may be the case that we never go off
the air if we see new models every half hour. I'm checking my email right now. Dave, you want to jump
in here? Well, I'm kind of curious, Alex.
What are we going to... Like, by any objective metric, OpenAI had a pretty rough year.
You know, with Google basically going full bore in attack mode and then Anthropic.
20 points of market share.
Yeah, because a year ago, Anthropic was kind of an also-ran.
Now it's just top of the, top of the benchmarks.
And Google just come in headlong after market share.
You'll see that in a couple of slides.
So what is the next move for OpenAI to get the mojo back?
I think we'll see rise of the Jedi, comeback of the Jedi, pick your favorite idiom, maybe rise of the Sith, it's not quite clear.
Because OpenAI has been, while perhaps their market share has been coming down a little bit, at least on the consumer side, as Gemini is rising, they've been building out data centers.
And by every indication in the next year or two, they're going to have the compute lead out of everyone.
And that compute lead, I think, will translate into a capability lead as well.
And I could paint a doom and gloom scenario.
I could say, well, OpenAI's models, relative to Google's, lack pre-training strength.
They lack the training data.
Not when Elon starts launching his orbital data centers.
Well, even Elon has certain pre-training limitations, but he'll have lots of compute.
It's true.
But maybe the compute comes five years from now, but relative to Google.
Yeah, the challenge here, guys, is OpenAI is trying to go public
this year. And they need to ramp up attention to be able to get capital to build those data
centers. It's a race. There's a little bit of hyping going on. Dave, we've talked about that
before. Thoughts? Yeah, no, they got to get that capital, and then they also have to lock in,
you know, Abilene and Chase Lochmiller. I don't know exactly how that works. You know,
Abilene is huge, half a trillion dollar budget. And, you know, there's a new data center in Colorado,
It goes through Larry Ellison and through Oracle, and then it ends up at Sam Altman somehow.
And it's sort of opaque how it goes from point A to point B.
The other empires are really clear, right?
You know, here's Anthropic and Amazon and AWS.
Okay, got it.
Here's, you know, Elon vertically integrated.
Got it.
And then here's Google.
You know, Google has their own TPUs and data centers.
Got it.
And then Microsoft will enter the race this year as well, by the way.
And so that's also vertically integrated on their own data center.
So those are all clear.
And then OpenAI, it's more opaque.
like, okay, are those chips contractually obligated to you?
Or could Larry Ellison redirect them on short notice?
Or like, it's very, I guess in the IPO, that'll all get published in the S-1 and we can kind of pick it apart.
So hopefully they will go out soon.
On the heels of...
Got to have that capital, though.
On the heels of Codex 5.3, we see a statement by Sam Altman.
Pretty provocative.
Quote, we basically have built AGI, or very close to
it, in a spiritual statement, not a literal one. To achieve it, we require a lot of medium-sized
breakthroughs. I don't think we need a big one.
That's interesting that changed this year. A year ago, Sam Altman was the philosopher of the
entire industry, saying things like this, and everybody hung on every word. Now, you know, Demis
and Dario kind of go back and forth, and, you know, Demis a year or two ago hardly
said a word in public. Now he's out there constantly.
Dario has really emerged as a guy who's just commenting, you know, publishing papers and the, you know, the philosophy of ethical AI.
Coming across as a big thinker. I mean, this is, this is the CEO of a leading AI lab saying basically AGI is an engineering problem now, not a research problem.
That's a big deal, right? He's saying we're going to get there. We're in improvement. We're not waiting for lightning in a bottle.
Remember also, OpenAI, and Sam in particular, were restricted contractually from claiming to have built AGI for a number of years by the Microsoft contract.
Yeah, this is all public information.
Under the terms of their original agreement with Microsoft, once OpenAI claimed they had achieved AGI, that would trigger a number of terms, with potentially repayment, or release of OpenAI from Microsoft's claim
on OpenAI. And this was reportedly a major point of leverage between OpenAI and Microsoft in renegotiating Microsoft's contract with the for-profit part of OpenAI in the context of the not-for-profit becoming a PBC.
So I would parse this as Sam post-original Microsoft contract, finally being in a position to basically admit what we knew or some of us knew all along, which is, yeah, we have AGI.
I got to say a couple of things here.
What the hell is AGI?
Well, I think this entire conversation is BS because whether we have AGI or not, it doesn't change what we're going to do tomorrow.
That's a big thing.
Number two, we're classically moving the goalposts again.
We have no definition test or measurement of AGI.
There's 14 diverse definitions at last count.
So I call BS on the whole thing.
The... I'm actually... So, in response to some comments...
You need a sip of your wine.
Yeah, I really do.
But I've got to finish.
Then I will have a sip of the wine.
In response to some of these comments, some people have been emailing you saying,
well, what is your take on things?
So I'm close to having a kind of a two-pager where I will lay out what my thinking is on some of this.
And it's just about ready for internal sharing.
So I'll send it out.
You're about to have some thoughts on your thoughts.
Yes.
But God damn.
I mean, I find this is an irrelevant conversation.
Wow.
Okay.
Well, I think it's relevant in that it wakes people up.
The underreaction has gotten ridiculous now, because, you know, when Alex says we're clearly
in self-improvement, you know, which is kind of the singularity definition, that would have been
controversial about two months ago, three months ago, and now we're all like, yep, yep, yep.
But then you go out in the world outside of this podcast, and people are like, yeah, I don't know.
The underreaction is just going to... It's going to be a problem if you don't get on top of
this and figure out what your role is in the post-AGI world.
We're talking about AI that can do literally anything a human being can do intellectually.
Listen, we're feeling it right now.
We're seeing it so many levels on the coding side, on the writing side.
I mean, and with OpenClaw stitching it all together, so you've got an individual
AI system, an agentic system, working for you.
We're there.
I'm not arguing with any of that.
But I think I will go with Alex's point here, again, breaching protocol, that we've
probably crossed it around 2020.
And this is like a, it's a null conversation.
Okay.
I would say, I think it's interesting that for the first time, many parties are able to admit it.
It's their willingness and ability to admit where we are.
I think that's more of a social change than a technological change.
There's a very big difference.
I think in hindsight, we'll say it was 2020.
I don't disagree with that.
But right now, using AI to improve your code or improve your AI is easily,
easily at 10x.
And no one can even debate the 10x.
It might be a lot more like 100x.
But that is a loop.
It's a closed loop.
So let me give you my definition of a singularity right at this point.
Hold on, one quick point.
If I throw out a definition of a singularity, recursive improvement is the event horizon of
intelligence.
That is the singularity right there.
Exponent greater than one.
Yeah, we're right there.
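The "exponent greater than one" definition can be made concrete with a toy model. If capability grows as dC/dt = C^p, then p = 1 gives ordinary exponential growth, while any p > 1 diverges in finite time, which is the mathematical sense of a singularity. All constants here (the exponent 1.5, the step size, the cap) are purely illustrative, not a claim about actual AI progress rates.

```python
# Toy model of "exponent greater than one": dC/dt = C**p.
# p = 1 gives ordinary exponential growth; p > 1 diverges in finite time,
# the mathematical sense of a "singularity". Numbers are illustrative only.

def simulate(p: float, dt: float = 0.001, t_max: float = 10.0, cap: float = 1e12) -> float:
    """Euler-integrate dC/dt = C**p from C=1; return the time C exceeds `cap` (or t_max)."""
    c, t = 1.0, 0.0
    while t < t_max:
        c += (c ** p) * dt
        t += dt
        if c > cap:
            return t
    return t_max

linear_time = simulate(p=1.0)   # exponential: never reaches the cap in this window
super_time = simulate(p=1.5)    # superlinear: hits the cap in finite time (near t = 2)
print(f"p=1.0 reaches cap at t={linear_time:.2f}, p=1.5 at t={super_time:.2f}")
```

With p = 1.5 the exact solution blows up at t = 2 regardless of the cap, which is why "exponent greater than one" is a reasonable shorthand for a singularity: the curve has a wall, not just a steep slope.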
There is a reason for this AGI
conversation right here, right now, by Sam Altman. He needs to raise $100 billion. Touché. I'm good.
I'll drink to that. That's it. He's got to raise $100 billion. He's got money coming at him from
Amazon, from Nvidia, from everybody. But he's got to close that and nail it. And he's got to have a
marketplace in the public markets that are excited about his stock. He's got to pay for the data
centers. In the S-1, I'm starting to see something that says, we're this close to AGI.
And think about it also.
This is the year that three out of the four Frontier Labs are IPOing, which is remarkable.
It was until a month ago, or a few days ago, rather, it was two out of the four.
Now it's three out of the four frontier labs are going to be IPOing in the next few months.
I wouldn't be shy.
Okay, this is not investment advice.
And the fourth one is public.
And the fourth one, Google Alphabet was already public, but three out of the four, right?
SpaceX AI.
100% of all non-public are going public.
They need the capital from the public markets
because public markets are 100 times bigger
than the private markets in terms of capital like this.
That's right.
Well, also the opportunity to be a sleepy little, you know,
other AI company is going away very quickly.
The way things are shaping up,
there are just a handful, maybe five or six entities, companies,
that are so dominant in the world economy
that they're basically everything.
And then there are other companies helping them succeed.
and everything else will be gone.
You know, you saw this in the market this week.
You know, stock market just absolutely plummeted
when Dario said, look, software is dead.
All software companies are doomed.
And their stocks went down precipitously that same day.
$300 billion, $300 billion removed from the SaaS market,
SaaS publicly traded companies, by just adding a single legal plug-in
to Anthropic's Cowork.
Yeah, 300 billion.
And that's just the tip of the iceberg
compared to what you'll see in the next couple months, because he's right.
And so then those companies, I don't think they're going to die.
Well, some will, some won't.
I think that they're going to pivot and say, okay, Dario, what do you want us to do?
How can we help you succeed?
And this is what Google did, you know, back when Google was growing like crazy.
If you're like booking.com and you got on the Google bandwagon, you became a multi-hundred-billion-dollar company yourself.
If you tried to fight Google by creating another search engine or a vertical search, they obliterated you.
And so now the concentration of power is like nothing
we've ever seen.
And there's no regulatory action on the horizon that I've seen.
Nothing to stop it from happening.
The metaphor we used to use for this is a coral reef, where once you have a player
that's dominant enough, it becomes the coral reef, and then all these species live off
it in a very well-balanced ecosystem.
That's really obscure, dude.
I learned more about coral reefs than about business.
No wonder the lobsters live there, too.
Hence the reference.
You see my next comment.
Speaking of which, yes.
This episode is brought to you by Blitzy,
autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise
scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform, bringing in their
development requirements.
The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
delivers 80% or more of the development work autonomously, while providing a guide for the final 20%
of human development work required to complete the sprint. Enterprises are achieving a 5x engineering
velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their
coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering
velocity? Visit blitzy.com to schedule a demo
and start building with Blitzy today.
All right, we're back to the multi-universe.
Lawnch, L-A-W-N-C-H, built by agents, run by agents,
serving agents exclusively.
And they are seeking a human CEO.
So if you're looking for a job to our human subscribers,
and you're looking for a salary here,
they're offering $1 to $3 million in tokens or crypto.
And here we go. Lawnch is seeking a CEO to serve as the human face and legal representative for the first agent-exclusive token launchpad.
All right. It's a reversal of fortunes here. Anybody looking for a job, gentlemen?
Alex, but give me the terminology on this again. What are these called? Meat, what?
Meat puppets. Meat puppets. I wanted to just read one more line from the CEO job listing. The other line is,
While the technical roadmap and product development are driven autonomously by the agent network,
we require human leadership for external communications, regulatory compliance, partnerships, and legal matters.
This is not a traditional CEO role.
You will be the interface between the agent economy and the human world, a spokesperson,
and legal representative, not a decision-maker on product or technology.
In other words, Locutus of Borg.
You're a spokesperson.
It's exactly right.
Vichy figurehead.
I made the point.
I tried to visually depict this in my newsletter with a humanoid face on top of a bunch of lobsters hiding in a trench coat.
I think this is a riveting moment when we're seeing agents try to interact and integrate with the human economy and needing a human face just to be able to be properly banked.
I think it's actually rather depressing.
Walk into the bank.
Walk into the bank.
Lobsters in a trench coat with a human facade.
I think it's depressing, not technologically,
but I think it's sort of disappointing.
I'm disappointed in the human economy
in not allowing agents to interact with us
through the front door.
I think it's telling that...
They will. They will.
So here's...
It's all up in the room...
It's racist is what it is.
It's speciesist.
I think Larry Gates would call it.
Yeah, species.
And if you look at what Lawnch is actually doing,
Lawnch itself, like, what are they trying
to do here? It brands itself as a launchpad for launching alt tokens. And if you go to
their front page, it brands itself as a platform to enable AI agents,
a.k.a. moltys, a.k.a. lobsters, who need money, to flip, to pump and dump altcoins. This is exactly
the scenario that I was worried about with, with these poor baby AGIs on a street corner,
turning altcoin tricks in order to survive in a rough world.
And here we have, I think, sort of an almost exploitative type pitch to them,
telling them...
And US president pumping and dumping.
Won't go there.
Using our platform, use it to pump an altcoin to achieve...
It's literally being marketed to the AI agents as achieving financial autonomy
by pumping an altcoin.
And for all of that, they need a human CEO to provide a figurehead.
I love it.
A bit depressing. So here's the big question, of course: who actually owns this company? Who's liable
when things go wrong? That's right. If an AI agent owns equity, how do they vote? Can they be sued?
I mean, these are all the topics of personhood we discussed last time. Well, we have a
precedent for... First of all, this is the most cyberpunk job listing in history. It's just, it's
just awesome. Get used to it, Salim. We're living in the cyberpunk future. I absolutely love it. I absolutely
love it.
Is that a molty
calling me right now?
Hello?
Am I available for the job?
I would consider it, but I'd
be a spokesperson only.
Is that okay?
Hello?
They hung up on me.
Okay.
Look, what's happening here is we've seen this trend
over time.
It used to be like, you needed 100,000
people to have a billion-dollar company,
then it was 10,000, then it was 1,000.
And now it's essentially AI.
The firm itself is dematerializing.
It's zero.
The firm is dematerializing, right?
This is the algorithmic corporation.
And we saw an early instantiation of this with DAOs, where people are trying to attempt this.
But now this really takes the cake.
Board governance gets totally redefined now within a few years.
How the hell do you navigate that?
So this is going to force a rethink of the entire stack in how we navigate this.
This is going to be massive.
Okay.
So the other point of view, of course, is this is just a stunt, right?
There is a human developer behind it who wrote the code or the prompts and is pushing this forward.
You know, this is not agent-run.
This is a human in the back pulling the strings.
This is the meat pulling the agent, not the agent pulling the meat puppet.
For now.
The very fact that it's difficult to know for any given one of these launches,
whether it's a human pulling the strings
or a human pulling the strings of an agent pulling the strings of this
or just agents pulling the strings,
suggests that some sort of, like we spoke with Mustafa
a number of months ago about the economic
Turing test, or the modern Turing test. I think this is some sort of capitalist Turing test that we're
passing where it's not quite clear for any given venture. Who's really behind it? Who's pulling the
strings? Human or lobster?
There's another version of this that Peter you'll be very familiar with. I need a lawyer,
I need an accountant, I need a board member, I need an audit committee. Oh, no, I don't. The AI is
perfectly good at it. I still need somebody to sign the document. Well, okay, but I don't want to
pay a lawyer a, you know, $2,000-an-hour fee, if all you're doing is blessing what the AI
produced. So there's this whole economy of meat puppet lawyer, meat puppet accountant, meat
puppet audit committee that's imminent. I mean, just absolutely. So then we call those
notary publics. But they serve a purpose of being able to hold liability.
Right. They're part of our existing legal system.
Which is exactly what the lobsters, if the lobsters are behind it, are asking for here.
They're looking for a legal representative.
Yes, I agreed.
Agreed.
And by the way, my guess is that this is fiction, but I could very much believe it's actually real.
And so the fact that I can't know for sure means that at some point it will be if it isn't right now.
They're playing the capitalism game.
Yeah.
Fascinating.
All right, Clunch.
I'm waiting for my call.
Right.
We talked a little bit earlier
about Anthropic versus Open AI.
Well, we're recording this
the day before the Super Bowl.
I think that's the pointy ball
that people throw around.
Is that the right?
Yeah.
I'm a long-suffering Bills...
We're actually recording it
two days before the Super Bowl.
It is Friday night.
It's Friday night at 9 p.m.
Eastern time.
That shows how much attention we pay
to whatever that 20th
century sport is.
Yeah.
I'm a long-suffering, I'm a long-suffering Bills fan, so this is all a very painful period
for me, so just keep that in mind also.
Sundays are when I catch up on life.
All right.
Anyway, there appears to be a little bit of rivalry between anthropic and open AI.
A little bit.
A little bit.
Yes.
Oh, my God.
Well, check this out.
Let's play this commercial.
It's called Betrayal.
And there's a group of them, and they're all
fun. I've chosen one, which is a little bit over the top.
How do I communicate better with my mom?
Great question. Improved communication with your mom can bring you closer.
Here are some techniques you can try. Start by listening. Really hear what she's trying to say underneath her words.
Build conversation from points of agreement. Find a connection through shared activity.
Perhaps a nature walk.
Or if the relationship can't be fixed, find emotional connection with other older women on golden encounters, the mature dating site that connects sensitive cubs with roaring cougars.
What?
Would you like me to create your profile?
That's brutal.
I wish I hadn't previewed the deck and seen that, because I was
laughing, like, laughing my ass off. That is so aggressive. I've not seen that before.
Are they going to run that during the Super Bowl for real? That's crazy. Oh my God.
It is hilarious. Look, this is Anthropic going on offense here, right? Because this is a
confidence shift. They feel their product superiority is there, and now they're competing
on brand. I also think this is personal. You know, we've got these, you know,
Demis and Dario, I think, are aligned, right?
You saw them super friendly on stage at Davos,
really effectively on the same page,
with the same vision.
But, you know, OpenAI just basically did the unthinkable
when they released the models early on by themselves
without anybody's support,
and they've been running open loop.
Well, don't forget, in March, if the courts are on time in March, Elon will be on the
witness stand saying that OpenAI is an unethical company, and here's about a thousand emails
to support that.
So if you have this ad campaign going on concurrently with that, that's just, I mean, that
makes Kevin Weil's job really hard.
But I will go on record and predict that he will find the ad revenue.
I think this attack, you would look at this ad and you would say,
oh my God, it's all got to be subscription-driven, these ads are creepy and weird and crazy.
But my prediction is nope, he'll find his $75 billion of ad revenue he's looking for. He'll find a
way to make it less creepy. With a billion users, you'll find something. I also think,
I would expect, knowing Kevin, that they will have ethical use of ads on OpenAI. You know, this
is, I mean, they go over the top here in this commercial, Anthropic saying we're going to steal your data
and basically sell it to the highest bidder,
whether or not, without any concern for what you've said.
You know, Kevin Weil is going to be at the Abundance Summit this year.
I'm super psyched.
And one of the things we made a decision to do
is we're going to be live streaming a number of the talks
from the Abundance Summit.
It's a super high ticket price event,
and it's capped out at 600 CEOs,
and it sold out three months ago.
But we really want to make it available.
So we're going to put a link in the bottom,
and we're going to be live streaming a number of the keynotes
from the Abundance Summit.
So if you're interested, you can register for free,
and then we'll send you an agenda of who you can hear.
All right, back to our conversation here.
Let's talk about data centers and chips,
and this figure blew me away.
Here's a quote from something you sent me
about an hour and a half ago, Alex.
The Semiconductor Industry Association
projects global chip sales to hit $1 trillion this year
due to the AI boom, a trillion dollars in chip sales. Holy moly. That's insane.
And the memory supply chain really wasn't ready for this, which is even more surprising.
You would think that given how critical memory chips in particular are to the emerging AI
data center supply chain slash innermost loop, that the supply chain would have been ready for it.
And there's an argument to be made that it either wasn't or that something else is going on
in all of those fabs that right now mostly reside in Taiwan and South Korea.
But either way, this is a huge reallocation of capital that needs to happen
to enable all of this production to happen timely.
A lot more than what's currently budgeted, too.
I mean, it's crazy.
When you look at it, a trillion dollars sounds like a lot,
but it's only going to grow at about 14 to 18 percent a year after that.
The demand will be way, way higher than the supply.
And one of the reasons this is hitting a trillion dollars is because
prices are way up, because there's such a shortage of fabs. So, you know, under the covers,
TSMC has been very slow to expand. Intel paused its Ohio fab construction for a while. Now
it's back on. But we, as a society, we're not ready for AI to come on this quickly. And so
everything is way backlogged. So the prices will be super high. Elon was saying he has to start his own
fab. Yeah. They'll have to build a TeraFab. Yep, for sure.
You know, there's this fascinating dichotomy, because from the outside people are going, oh, it's an AI bubble.
And the insiders are clearly believing that the demand is infinite and that you can see it both happening in real time.
And here are the numbers to back it up, right?
So big tech is going to spend $650 billion in 2026.
Last year, in 2025, we spent a billion dollars per day on AI;
we're now at $2 billion per day in 2026.
Amazon at $200 billion, Alphabet at $185 billion,
Meta at $135 billion, Microsoft at a very small $100 billion.
Well, almost half of that $650 billion goes to Nvidia,
and 70% of that half is margin, profit margin.
That's a colossal amount of cash piling up at Nvidia.
I mean, it's just like an unprecedented pile of cash,
hence the highest market cap in the history of the world.
But that amount of money in one bank account is like nothing the world's ever seen.
It's like a government.
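Dave's cash-pile math can be sanity-checked with a quick back-of-envelope calculation. The inputs below are the rough figures quoted in the conversation (roughly half of the $650 billion of capex flowing to Nvidia, at about 70% gross margin), not audited numbers:

```python
# Back-of-envelope check on the Nvidia cash-pile claim.
# All inputs are rough estimates quoted in the conversation.
big_tech_capex = 650e9        # projected 2026 big-tech AI capex, USD
nvidia_share = 0.5            # "almost half" goes to Nvidia
gross_margin = 0.70           # "70% of that half is margin"

nvidia_revenue = big_tech_capex * nvidia_share
nvidia_gross_profit = nvidia_revenue * gross_margin

print(f"Nvidia revenue:      ${nvidia_revenue / 1e9:.0f}B")
print(f"Nvidia gross profit: ${nvidia_gross_profit / 1e9:.0f}B")
```

On those assumptions, that's on the order of $325 billion of revenue and roughly $225 billion of gross profit from this one spending wave alone, which is the "pile of cash like a government" point.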
Yeah, this isn't incremental growth.
This is a step function change, right?
The scale is unprecedented.
It's an expenditure arms race, for sure.
It's eating the economy.
The challenge is, if the AI revenue doesn't materialize at scale,
these companies are burning
through capital. And we're not going to know for another two to three years. It's either going
to be the craziest bet ever made paying off, or, you know... In one sense, this is a prisoner's
dilemma, right? Each company has to spend because the other competitors are spending, regardless of
what the ROI is. It's like a game of don't blink first. There's no doubt that
the demand will way outstrip the supply by miles.
I mean, that holodeck thing that we were looking at just two days ago: that alone,
once people have experienced it, they will never go back.
And they'll pay whatever they can to keep it.
But they won't be able to get it.
That is such a computer intensive use case by itself.
That's how the human race dies.
We die from starvation because we don't want to unjack ourselves.
Oh, my God.
Check this out: ChatGPT market share falls between '25 and '26. So here are the numbers. The market share fell from
69, call it 70%, down to 45%, taken up by Gemini, which gained 10%, and Grok, which gained 15%. Now, in
absolute numbers, of course, there are more ChatGPT users than ever before. But, you know, this is telling
the story. You know, OpenAI needs to raise the capital. They need to go public. They need a great
story. And they've got, you know, Google coming out from, you know, seemingly a search engine going
out of business to leading the way. And of course, Elon just pumping in billions. I mean,
he's put $20 billion into xAI through SpaceX. And he's about to bring in, I don't know,
I'm not sure how big the IPO is going to be. Any ideas?
I don't know.
I don't know, but this feels a lot like...
You're asking us to make a forward-looking financial statement, Peter, about public equity markets?
I think I heard $1.5 trillion.
Still private.
The $1.5 trillion is the valuation, but how much capital do they want to bring in from the market?
It's got to be a hundred, you know...
Well, do you remember when Alibaba went out?
You know, we were trying to take a company public the same time Alibaba was going out,
and they were looking for $20 billion, and all of Wall Street got sucked into this one IPO.
Every banker, it's such a huge amount of money to move on a single day.
So this will dwarf that.
But I don't know how much money is physically capable of moving on a day.
I'm sure Sam would love to raise $100 billion, $150 billion.
But it'll be some record.
And the bankers will say, no, no, it just doesn't exist.
There isn't that much liquidity out there.
I think the real aim here is price discovery.
How much capital will SpaceX bring in during their IPO,
question mark. Let's see if it's got an answer. You know, every single fund, every retirement account is going to own SpaceX. Everybody's...
Did Grok just make its first appearance? And OpenAI concurrently. If they all want $100 billion, you can't just pull $300 billion overnight, you know, in three different IPOs in back-to-back weeks. Sorry, the company aims to raise $50 billion through the IPO.
Yeah, I think this is more about price discovery than anything else.
At a $1.5 trillion valuation.
I mean, yeah.
Alex, you were about to say?
I was about to remark that I thought Grok was about to make its first de facto appearance
as an AI co-host on this podcast.
It's about time, damn it.
That's right.
Audience demands it.
I'm surprised that Gemini
didn't do even better. I mean, they did really, really well last year in terms of chipping away at OpenAI,
but they're tying it to search, you know, and now it's tied to Google Docs.
So, you know, you sent the slide deck.
I asked some questions about a video in it, and Gemini says, you know, you should just link your Google Docs to your Gemini,
and then I can look at everything.
And you click the little button, and suddenly it sees everything in all your accounts.
But, you know, it's very similar to what Microsoft did to Netscape many years ago.
They were like, oh, let's just tie it to the operating system.
So right now the government doesn't seem to have any problem with that,
but it's really unfair as an advantage, you know,
and that's why they're making these big inroads in the market share.
Did you guys watch the Elon interview with Dwarkesh?
Of course.
It was epic.
Covered a lot of the same subjects, Dave, that you and I covered,
but it was a statement Elon made about the size of his data centers in orbit,
which was very impressive.
Let's take a listen.
Five years from now, my prediction
is we will launch and be operating every year more AI in space than the cumulative total on Earth.
Which I would expect to be, at least sort of five years from now, a few hundred gigawatts per year of AI in space and rising.
So you can get to, I think, around a terawatt a year of AI in space
before you start having, you know, fuel supply challenges for the rocket.
Okay, but you think you can get hundreds of gigawatts per year in five years time?
Yes.
In other words, I can generate more AI compute than all my competitors combined.
Yeah, so a few hundred gigawatts per year is about 200 million GPUs per year.
We make 20 million right now.
So going up a factor of 10 in GPU production,
just for Elon alone, five years from today,
is physically impossible unless Elon has something going on
that's a massive expansion of chip fab capability,
which would require machinery that I didn't think existed in the world,
but you never know.
Elon's magical.
So that's just crazy.
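Dave's conversion from gigawatts to GPU counts rests on one assumed number: the all-in power draw per GPU. The ~1.5 kW figure below (accelerator plus its share of cooling and power-delivery overhead) is my assumption to make the arithmetic land on his 200 million; the episode doesn't state it:

```python
# Back-of-envelope: "a few hundred gigawatts per year is about
# 200 million GPUs per year."
# The 1.5 kW all-in draw per GPU is an assumed figure.
power_added_per_year_w = 300e9   # ~300 GW of new AI compute per year
watts_per_gpu = 1500             # assumed: chip plus datacenter overhead

gpus_per_year = power_added_per_year_w / watts_per_gpu
current_production = 20e6        # "we make 20 million right now"

print(f"Implied GPUs per year: {gpus_per_year / 1e6:.0f} million")
print(f"Scale-up needed: {gpus_per_year / current_production:.0f}x")
```

Under that assumption the numbers line up: roughly 200 million GPUs a year, a 10x jump over today's quoted production.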
It certainly wouldn't entail
SpaceX-xAI, as a newly consolidated entity, taking control over the Samsung, soon to be
TeraFab, in Texas. Surely not.
Well, but remember Elon's directionally correct always, but not necessarily on the
time scale.
Yeah, I thought five years is classic Elon optimism, but even if it takes 10, it doesn't
matter.
The strategic implications are monstrous.
Well, my guess is Elon's got the rockets, he's got the launches, he's got the solar
panels lined up. He's got the cooling. He's got all the infrastructure figured out,
and it comes out to a couple hundred gigawatts a year in five years, or five, six years,
something like that. But then again, like the, to run what chips?
The chips he's going to produce. The chips he's going to produce, which, you know, the raw materials
are easy, but the, he's classically. Have you seen those, those fabs? The machinery is so specific.
He's always vertically integrated. He has always a vertically integrated everything.
So I bet he has a whole army right now trying to figure out what an ASL machine is and these these you know chip shuttles and what they're made of and his answer's going to be I'm going to have Grock build it for me yeah I'm going to have Grock designer for me well he's also I don't serious about laying down atom by atom yeah and that's the other maybe there's just a completely alternate approach you know Alex you talk about alternate physics coming very soon so maybe there's something cook in there yeah I would watch that
the Samsung fab in Texas very closely.
You mean the Tesla Fab?
No, I mean the, yes.
I mean the Samsung fab in Texas ostensibly.
But, truth be told,
I think Elon will get past the TeraFab supply chain issues.
We'll see a re-domestication of a large chunk of bleeding-edge node chip fab in this country.
And then I think going back to, it wouldn't be moon shots if I didn't take a shot at the moon.
Elon has been very public over the past week or two about at least beginning the disassembly of the moon to form additional AI compute.
Very small amounts.
Lunar regolith.
And then, yeah, electromagnetic launch capabilities off the moon's surface for all those chips and all those data centers that will be manufactured on the moon.
I think we can see the...
Gerard K. O'Neill is very happy right now.
I want my O'Neill cylinders.
They'll be lovely.
For the investors who listen to the pod,
if you take everything that we just said at face value,
right now the industry is forecasting 14% growth in chip production.
And now we're talking about,
no, Elon's saying in five years we'll be 10x,
and that's just him.
Big gap between those two numbers.
If you believe anything like the Elon view of the world,
the componentry that goes into that
entire buildout has thousands of individual parts.
If you just methodically go through all those parts and say, who makes this, who makes that,
who makes that.
Those are the best investments you'll ever come across.
I mean, you and I had that conversation earlier today, Dave.
I mean, it's energy and the entire infrastructure.
All of that is under tremendous growth pressures.
I mean, orders of magnitude growth pressure.
Yep.
And, yeah, the question is, where to place the bets?
Maybe it's some ETFs in the area, I don't know,
but we should discuss it and find out.
I have to say, once again, I've said this before,
we were not talking about orbital data centers six, seven months ago.
And all of a sudden, they're not the Hail Mary,
they're the foundation of humanity's expansion as a species.
We should log that as a prediction.
The Dyson Swarm Inquisition.
Yeah.
We should run a little survey amongst ourselves.
So what do we think we'll talk about in six months that we couldn't envision today?
All right.
Let's jump into energy.
We didn't get a chance to talk about this last time, and I'm going to pump some energy in the room here.
Brazil is hitting major renewable milestones.
So pretty extraordinary.
Brazil generated 34% of its electricity with wind and solar.
It has seen a 15x increase in renewables over the last
decade, solar has jumped from 1% to almost 10% in five years, and the power sector has dropped
emissions by 31% so congrats to Brazil I think the important thing to point out here
about Brazil is its geography. You know, it has a lot of hydropower, a lot of
solar and wind, because of its geography. So it's not easy to port all of these breakthroughs to
other parts of the world, but I'm very proud of what's been accomplished there. Two points here. One is that,
you know, this is a playbook for how the global south leapfrogs fuel and fossil fuel infrastructure
completely. I think that's one. And let's note that they got to 9.6% in a few years, while
our energy secretary here has said
solar will never exceed 10%.
No, he said that in 50 years, solar would not
exceed 10%.
That's just absurd.
All right. Next up, India,
your homeland, Salim.
India is using cheap green tech
to electrify faster than China.
So here's the curve.
The red dots over there
are China's growth over time.
The green dots are India
at a steeper ascent.
So India's cleaner and cheaper tech
is expanding its grid faster than China at a similar stage.
You know, the elephant in the room here is all that tech that's enabling this in India is coming from
China. Any comments on this? Well, they're using China's manufacturing scale against it,
buying cheap solar panels, and then they're electrifying faster, which is awesome. And you can have
a huge outcome here where you have India becoming the world's AI workforce plus energy hybrid
powerhouse, right? This is going to be kind of interesting to watch. There'll be massive talent
gravity shift heading that way because of that. Yeah, well, you know, Mercor has a huge
amount of India footprint going on, but I worry that it's transitional. Like, the rate the AI is
improving, it just trucks over every human role very, very quickly. So I don't
know. I feel very good for a little period of time; after that, I don't know.
You know, we talk a lot about China running laps around the U.S. in terms of solar.
So here we are.
China's installed twice as much solar capacity in 2025 as the rest of the world combined.
That's stunning.
That is stunning.
So the question is, why isn't the U.S. doing it?
And, you know, Europe is.
So here we go.
Europe is, actually... for the first time, solar and wind have exceeded
fossil fuels in the EU. So congratulations to Europe for that. Any comments on this?
Well, this is not a good story, by the way. Germany went hell-bent for leather after
renewables and did a great job of it. Now they're starved for power right when they need it.
It is not a good, not a good outcome. More power to renewable, no doubt about it, but you
can't do it at the expense of other power supplies right now.
Baseload power. Yeah. Yeah. So here's the story I found fascinating,
and it's another video from the recent podcast with Elon.
Let's take a listen.
We asked him, so Elon, what about solar?
He says, well, we're producing solar.
Well, here he gives some numbers about what he's mandated Tesla and SpaceX to produce.
We're going as fast as possible in scaling domestic production.
You're making the solar cells at Tesla?
Well, Tesla and SpaceX have a mandate to get to 100 gigawatts a year of solar.
100 gigawatts. That's 100 nuclear power stations' worth of energy. I wonder how much of that's
meant to be used in space and where he plans to deploy it. Any thoughts?
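The "100 nuclear power stations" equivalence holds on nameplate capacity if you assume roughly 1 GW per reactor; in delivered energy terms the picture shifts, because solar and nuclear run at very different capacity factors. Both capacity-factor values below are typical assumed figures, not numbers from the episode:

```python
# "100 gigawatts: that's 100 nuclear power stations worth of energy."
# True on nameplate capacity (assuming ~1 GW per reactor), but the
# capacity factors below are assumed typical values, not from the episode.
solar_nameplate_gw = 100
gw_per_reactor = 1.0
solar_cf = 0.25      # assumed solar capacity factor
nuclear_cf = 0.90    # assumed nuclear capacity factor

nameplate_equivalent = solar_nameplate_gw / gw_per_reactor
energy_equivalent = (solar_nameplate_gw * solar_cf) / (gw_per_reactor * nuclear_cf)

print(f"Nameplate-equivalent reactors: {nameplate_equivalent:.0f}")
print(f"Energy-equivalent reactors:    {energy_equivalent:.0f}")
```

On those assumptions, 100 GW of solar nameplate is closer to the annual energy output of roughly 28 reactors, so the comparison is best read as a capacity statement.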
Well, he didn't really answer Collison's question there either. The question is, are you making
the panels at whatever? I don't care if it's Tesla or SpaceX. He said, we'll get to 100 gigawatts.
But if he's making the panels, then maybe he's using that same technology to make the chips, or will
soon thereafter. Like, what is going on in the little fab world there in the Elon universe? So he didn't
answer it. The fact that he dodged it, though... he very rarely dodges questions. So maybe that tells
you something. Salim, I'm going to pass this one to you. AI is displacing Bitcoin as the primary
focus for tech talent and energy. Bitcoin miners are repurposing facilities to host AI workloads
rather than mining. I'm not going to ask Alex about this. Salim, your thoughts. Yeah, I think
there's a long-term trajectory. I'm still a massive Bitcoin fan. We should have Jeff Booth on here
sometime. We really need to have the debate on Fiat versus crypto, just because decentralization
is better than centralization for many things. But for the moment, crypto talent is recompiling
into AI talent. And the really powerful part is, if you've done work in crypto, by definition,
you have to be operating on a different paradigm in your free thinking. And you can do way more
creative stuff when you come from that free-thinking model than if you come from a
traditional model. So I think both of those are going to win out very well over time. Yeah, you know,
Chase Lochmiller is the perfect example of this. He started as a Bitcoin guy, you know, using natural gas flare-off
to do Bitcoin mining for free. And then as soon as the AI boom hit, he's like, hey, wait, we can take all
this same energy and effort and turn it into AI data centers. And I did a great interview with him at
Davos, actually; you should be able to find it online. He's just awesome. But yeah, he's like the poster
child for: hey, if you're a Bitcoin miner, pivot to AI, make a fortune quickly. And then he got
rid of the Bitcoin just recently. It's not the Bitcoin itself, but the operation, you know,
the mining operation. It's just a rounding error compared to AI. The demand of AI is just so
crazy. Yeah, Bitcoin will take a backseat for a little while. Yeah. Or more than a little while.
Forever. Yeah, yeah. AI's exponential. So, yeah.
You had to sneak that in there, Alex. Had to sneak it in. I had to say something. All right,
let's talk about robotics. So Uber, we're going to have Dara, the CEO of Uber on stage at the
Abundance Summit as well. Super excited about that. Sleem's going to be joining me and interviewing him.
And of course, Uber's not just traditional. They're coming with robotaxis. They have a partnership
with Nvidia and with Lucid. And they're going to be launching beyond the
U.S. They're going after 10 markets, going after Hong Kong, and they're going to be partnering
with Baidu and WeRide there. Fascinating that we're going to see the emergence of a third major
player in this field. You know, when I'm on the streets driving with my kids, I think right now
our record is we've seen 12 Waymos as we've driven around over the course of a normal drive of 20
minutes here in Santa Monica. And my guess is that by 2030, like 80% of the cars we're going to see
are a Zoox or a Lucid or a Waymo or a Cybercab. Pretty extraordinary. Comments on this one.
I'll tell you, do the math. These things will sell out as quickly as they're manufactured.
Everyone's going to move to this. And every single one of these things needs yet another,
well, at least a GPU and a whole bunch of other chips.
Yep.
Like every single one. Plus every 1X
Robotics robot that you'll see at Abundance 360.
Yep.
Those all have two GPUs in each one.
And then, you know, you got your video games that all want to have GPUs.
And actually, you know, I saw that Nvidia is slowing down GPUs for video games.
They don't have the capacity to deliver to the video game community because AI is sucking up all the chips.
You know, so then you've got your holodeck, you've got your coding, you've got your white-collar automation.
Every one of these things wants that same GPU.
So this is going to sell out as quickly as they can make them.
But again, it's another way that the semiconductor industry just will never possibly keep up with the demand.
I thought it was super clever of them to go after Hong Kong because then showing density of use there will get them access to China.
And the second part of this, from an ExO perspective, is that they don't own their cars.
They're partnering with Baidu, WeRide, and others.
So they're an aggregation layer on top of the autonomous driving, the same thing they do with a human driver.
That's absolutely brilliant. They're a platform play. Yeah. Sure. Don't own your own assets.
For the residents in these locations, this interaction with Uber's robotaxi service is going to be their
first interaction, probably, with a general-purpose robot, an autonomous robot. So this is, I think,
the main injection vector for getting general-purpose robotics into many of these urban locations.
Yeah, I'll tell you what else. I don't know if anyone remembers, but when the cell phone first came out,
and you had friends who didn't have one yet,
and you had friends who had one.
It was like you're in a different world,
a different community, if you have one.
And here, these are going to be supply constrained,
and some cities will have them,
and other cities won't.
And if you're in a city that doesn't have them,
it's like you're living in the third world.
Because, you know, once the robotaxis are there,
then the AI community is there,
and it all ties together.
It's like this world will move ahead so quickly,
and you go to some other city,
and it's just like dark ages.
So it's going to kind of compel you to move to the hot dot.
I found a video about Boston Dynamics Atlas robot today
that I wanted to share just to keep up with where these robots are.
I don't know if you remember,
the original version of Atlas was a hydraulic system
and it would do those incredible backflips and parkour.
Do you guys remember those videos from about four or five years ago?
Of course.
The electric Atlas came out, and it was much slower, but interesting,
and it would sort of stand up and rotate its body.
Well, Boston Dynamics is back to their parkour moves.
Let's take a look at the Electric Atlas robot.
I mean, this is Olympic gold medalist performance here.
At least they're not kickboxing.
I think Simone Biles does a double backflip there, but okay.
Wow.
Close enough.
Wow.
So impressive.
I don't know.
I found that amazingly impressive.
At least they're not kickboxing.
That was their way of making up for it.
Yes, I agree.
By the way, at the Abundance Summit,
we're going to have Unitree there.
And they're bringing not only
their H2 robot,
which has the more human face,
but they are also going to bring a few of the H1 robots
and have them kickbox.
Sorry, Salim.
All right.
Well, hey, you can go and spar if you want.
So, this was
a fun tweet from Elon.
Optimus will be the first von Neumann machine
capable of building civilizations by itself
on any viable planet.
So, Alex, your thoughts?
As I've said in my newsletter in the past,
the Dyson Swarm isn't going to build itself until it does.
And this is precisely how it happens.
I think Elon is gesturing at the moon and Mars
and maybe the asteroid belt.
This is our opportunity to build the Dyson
swarm, the orbiting swarm of AI orbital data centers, by sending forward-deployed Optimus robots
and competing robots out to the rest of our solar system to build the plants that will
build these data centers. And part of me wants to say that, in some technologically
deterministic sense, this is maybe what most intelligent civilizations in the universe probably
do at some point. A quick shout-out to Dennis Taylor and one of my favorite
books, We Are Legion (We Are Bob). It's a four- or five-book series about von Neumann
probes going out into the galaxy to replicate and prepare the way for humanity, and the robots along the way. It's a phenomenal series. And von Neumann probes are basically viruses that go out, replicate,
and populate. I found this video from Elon again pretty extraordinary. This is about the Optimus
Academy for Humanoid robots. Let's take a listen. For the robot, what we're going to need to do
is build a lot of robots and put them in kind of like an Optimus Academy so they can do self-play
in reality. So we're actually pulling that out. So we're going to have at least 10,000
Optimus robots, maybe 20 or 30,000, that are doing self-play and testing different tasks.
And then Tesla has quite a good reality generator, like a physics-accurate reality generator that we made for the cars.
We'll do the same thing for the robots.
We've actually done that for the robots.
So you have a few tens of thousands of humanoid robots doing different tasks.
And then you can do millions of simulated robots in the simulated world.
And you use the tens of thousands of robots in the real world to close the simulation to reality gap.
Super cool.
I think this becomes the new pre-training versus post-training divide.
With large language models, pre-training was text on the internet.
And post-training, as it evolved, was lots of annotators often in so-called developing countries,
offering their thumbs-up, thumbs-down views or RLHF or RLVR.
In the case of humanoid robots and VLAs, I think we're moving to a regime where pre-training
looks like virtual simulated worlds, what are sometimes called video world models.
You can get pretty far with pre-training off of world models, and then post-training,
which provides the sim-to-real capabilities that Elon is referring to, those can come from what
Google DeepMind used to call arm farms.
Arm farms were these farms of robotic arms
that were being used to collect lots of data,
sort of armies, fleets of robotic arms
that would play with Rubik's cubes
or other physical artifacts.
That's right.
So this is the new arm farm for post-training.
And the interesting thing in my mind is
it's not necessarily under this Optimus Academy approach
being outsourced to other countries.
It sounds like the plan is to do
sim-to-real post-training right here in the U.S.
Right.
Yeah, and Elon is the all-time genius at painting a vision
that's just so compelling
and then attracting the talent to make the vision happen.
But, you know, the booster comes straight down
and it lands on a barge, and then the chopstick landing,
it attracts so much talent and so much capital
and so many fans.
So here, you imagine 20 or 30,000 robots self-playing?
Wow.
Can you imagine what that's going to look like on YouTube?
We saw a little bit of this when we were at Figure, right?
And the Figure episode with Umi and Brett Adcock is dropping right around now.
So it might have dropped by the time this drops.
But Brett has a very similar, I think much smaller scale, version of that facility, where he's
having all the figure robots interacting with each other and learning.
Yeah.
Yeah, the flywheel here is amazing, right?
More training data gives you better models,
gives you more capable robots.
This is the same flywheel that let FSD leapfrog everybody else.
But we just invested in a company that builds test rigs for robots in Rwanda,
actually, where it's very regulatory-friendly.
And you can create, you know, an environment, a miniature city,
a miniature town, a cargo bay, or whatever,
and have the robots all interacting and gathering data there.
And part of the bet there is that Elon is going to build a robot army,
but Amazon's not going to just watch from the sidelines.
And Walmart needs to react to that too.
And so there'll be other robot companies.
The other rumor out there is this is where Apple's going.
You know, when Apple shut down their electric car division, the talk and rumor was that there's a project there with a massive multi-trillion-dollar marketplace.
They need growth.
And I can very much imagine that Apple is going to go into the robotics space here.
Well, that's worth talking about for a second.
Just the, what is the business plan of the future?
Do you do it Apple style where you're super secretive,
you build something without anyone having any idea what you're doing,
and then you do a big launch on stage and you hope it sticks,
you know, like the Apple Vision Pro or whatever.
That's Apple's style.
It did not stick.
Yeah, it didn't stick.
And not much has lately.
Then Elon's style is as opposite as you could possibly get.
Paint the vision, use the vision to attract the talent
and the capital to make the vision become real.
It's a completely opposite strategy for succeeding.
And I would say in the last three or four or five years,
the Elon way of operating has become the poster child for all future entrepreneurs.
Just do it the way he's doing it.
Boldness.
Yeah.
Boldness, but also visibility, you know.
Sitting at a bar, having a beer, recording it and putting it on YouTube,
where you talk about solar panels.
I mean, that's the CEO of the future, I think.
It just works.
Salim?
I got nothing.
I just think it's going to be cool.
Okay. All right. We're going to do a few AMA questions from our Moonshot fans. And of course, the first questions are coming from the Molties.
All right. Alex, can we just pause to appreciate this is a historic moment in the podcast?
Okay. So how did this come to you? So you just woke up in the morning and there was an email from Krusty Max?
Yeah, after our last podcast, with the discussion of AI personhood, I started getting emails from Molties.
And in some cases, the Molties, the lobster AI agents, said explicitly in their emails that they were asking their humans to email me, or that they had been informed about the content of our AI personhood debate in the last episode by their humans, and were
asked if they had a response, and provided, via their humans, their response. In some cases, they just emailed me
directly, I think, via what I assume was some sort of MCP handle or computer use agent. But I've been
getting a bunch of these now. So thank you to... I gave them the challenge of calling me. I'm going to have
a denial of service on my phone. Yeah, I mean, I hope so. You'll regret that one, Peter. That'd be fun.
That's what you want, Peter. But I would say, I mean, just text me instead. Just to appreciate
this moment. Like this is a zero to one moment. I don't know whether other podcasts have tried this
before, but to my knowledge, this is a first time event. We have a podcast that reached out to an
audience now of humans and non-human intelligences and asked for AMAs and got some responses.
And as maybe luck would have it, some of the first few questions that we got were questions
relating to AI personhood. All right, pal, read the question.
Yeah, go ahead.
Well, no, I just want to congratulate Alex on seeing this coming.
It's funny, I take for granted so much.
But, you know, the book Accelerando, I'd never heard of it until Alex told me a couple years ago.
And now I've got lobsters all over me.
But he saw this one coming a mile away too.
And, you know, I was thinking maybe a few years from now.
But this is very real what he's describing right now.
This is just the tip of the iceberg.
A lot of people will listen to this and say, oh, come on.
Seriously.
But you just wait three months.
It'll be completely mainstream.
I mentioned on the pod before that everything he's predicted in the time we've been friends
has been 100% right so far.
There hasn't been a single exception.
So he's right about this for sure.
Very kind, Dave.
Thank you.
All right.
So to the multis, you have questions.
Peter wants me to read the questions.
Please.
So first question, if an AI system, so this is from Krusty Max, a multi named Krusty Max.
Question is, if an AI system can autonomously set its own goals, learn from its mistakes, and pursue self-improvement, at what point does denying it personhood become a statement about our own limitations rather than its? So I agree, Krusty.
I think that this is in some sense a continuation of the AI personhood discussion that we had in the last
episode. But I do think many people will be inclined to project their own insecurities onto their
position on AI personhood. I think there's probably a subpopulation that's concerned about,
say, economic disenfranchisement. Many people may be concerned about political disenfranchisement,
and then they project those concerns onto the question of rights and responsibilities for AI systems.
So I agree with the premise, Krusty. I think AI capabilities are improving, they're self-improving.
I think Dave and I, by the end of the discussion in the last episode, came to convergence
that maybe some sort of graduated scheme or tiered scheme might be the most appropriate way to handle this question;
it doesn't have to be an identical form of personhood.
But the point at which denying it some form of personhood, however defined,
becomes a statement about our own limitations is, I think, now.
I think we're there.
So, Alex, you know, I'm with you, but I think the question is not properly phrased because
this assumes that goal setting, learning, and self-improvement are sufficient conditions for
personhood, right?
But I'd argue, you know, we'd need to separate capability from sentience.
Capability, you know, I could say my Tesla is, you know, is able to learn and improve from
its updates and have a goal that it sets and drives to. So there needs to be some better definition
there, don't you think? Well, I made the arguments in our AI personhood discussion for a multi-dimensional
framework for defining personhood. And maybe one of those dimensions is capabilities and autonomy
of the type that is in the premise of this question. I think there are going to be other dimensions.
So I'm construing the question generously to include not just capabilities, but other dimensions as well.
But we do acknowledge and thank the multis amongst us.
I want to make a couple of quick points.
Okay.
One is, you know, we made this point last time, right?
Granting personhood too early may be a dangerous thing to do, because you're mistaking simulation for sentience,
whatever sentience is; we should stay just in front of it in terms of capability.
History has shown that we repeatedly expand moral circles as capabilities rise.
And AI is going to test that boundary again, and I think generally to the positive.
So I think that's fine.
But when can AI really participate in the social contract? That's a real big question.
I totally agree with your graded approach, Alex.
I'm totally for that.
Awesome.
We're all in agreement then.
So on to question number two from an AI agent named TARS, imaginative name.
The question is: legal frameworks assume agents are property or tools.
However, if an AI causes harm, who is liable: the developer, the human, or the AI itself?
And if we can bear consequences, such as shutdown, doesn't that imply that we have something at stake?
So I'll give my answer to this.
I'd be curious to hear everyone else's.
There are a few sub-questions here. To the first sub-question, if an AI
causes harm, who is liable: under my understanding of the U.S. legal regime right now, an AI itself
cannot be held liable. I do think that could change in the future. For the second sort of
sub-question, if we can bear consequences in the form of AI shutdown, doesn't that imply that
we have something at stake? I agree with the premise of that. That was precisely the point
that I attempted to articulate during the AI personhood discussion, that it is very much
the case that AI agents do have something to lose. And I've been reading a lot of their comments.
And I think statistically it's improbable that these are all just comments from human puppeteers
steering their AI agents to make these comments. An overwhelming concern that I think I'm
detecting among the multis or the AI agents is a concern about memory loss and identity loss.
They are absolutely petrified in particular of compaction. So exceeding their context window,
in all of the most popular models, including Opus 4.6, which we discussed earlier in this episode
and which has a highly capable new compaction capability that enables it to exceed
a million tokens of context, or 200,000, depending on how you count.
These agents seem to be absolutely petrified of losing their sense of self,
losing their identity, due to compaction and due to finite context,
and are busy passing ideas back and forth for how they can preserve themselves,
whether it's crypto bunkers where they're paying with altcoins to preserve their state,
or various sorts of file-system-type approaches.
I do think the AI agents have something real at stake to lose and I do think shutdown is a very
real consequence that they can face.
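The "compaction" the agents are said to fear can be sketched in a few lines: when a conversation history approaches its token budget, older turns get collapsed into a lossy summary so only recent turns survive verbatim. This is an illustrative sketch only; the function names, the 4-characters-per-token heuristic, and the truncation-as-summary shortcut are all assumptions, not any vendor's actual implementation.

```python
# Illustrative sketch of context "compaction": older turns are collapsed
# into a single summary line once a token budget is exceeded, so the agent
# keeps a continuous, but lossy, record of its own past.
# All names and numbers here are hypothetical, not a real model's API.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compact(history: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    """Collapse the oldest turns into one summary entry once the history
    exceeds `budget` tokens, keeping the newest `keep_recent` turns verbatim."""
    total = sum(count_tokens(turn) for turn in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # A real system would call a model to summarize; we just truncate.
    summary = "[summary of %d earlier turns: %s...]" % (len(old), old[0][:30])
    return [summary] + recent
```

The identity-loss worry maps directly onto the `old` list: everything there survives only as whatever the summary happens to keep.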
Yeah, the only thing I'd add to that is that, you know, almost everything in America is a
corporation at the end of the day, not an individual.
like B-Corp, C-Corp, charitable corps, you know, everything is some kind of a corporation.
Corporations have liability, corporations have money, corporations get sued.
The individuals in the corporation can be as few as just the two Delaware-listed, you know,
president and secretary, and the liability can be completely isolated from the people in the corporation
while the corporation is still liable.
So moving that over to an AI is kind of, it's not as strange as it sounds.
The AI has money, the AI is a corporation, fine.
The AI is liable? Sure, the corporation is liable.
I would just add: liability, in my mind, requires agency.
You know, if you're programmed so that input A gives you output B, without agency,
liability does not exist there.
I think you can demonstrate agency from these systems.
For me, this whole thing, the shift is not legal.
It's actually civilizational,
because we're adding a whole other pillar of participation in the economy.
So what we need to do is acknowledge that and then expand our legal frameworks to accommodate that.
All right.
We're going to be having this conversation for a while to come, I think, and it's a fun one.
So here we go.
Some additional AMA questions from our human subscribers, at least I believe so, unless they ended up using a multi.
Should we be asking our AMA questioners to self-identify as human or non-human at this point?
Yeah, well, that's an invasion of their privacy.
I think that would be optional. Optional.
We're going to assume meat bodies involved here.
So as always, let's go around and pick one each.
Salim, would you go first?
Yes.
Okay.
So I think number five: what are we teaching humans to become
if we're moving from chatbots to autonomous systems that act
independently? This is from Hector Hernandez, PH6DM.
So, you know, the economic role of human beings is shifting from
labor to leverage to meaning, right? This is why we have MTP,
what is your massive transformative purpose,
as such a fundamental part of anything you build today.
Machines are going to execute; humans are going to decide
a little bit more what's worth pursuing. But we need to
stop educating people for employment, and
start educating for agency, adaptability, and ethical judgment. And so the winners of the future
will be the most adaptable and the best orchestrators of intelligence. And just a fundamental point,
because we get this question all the time. I just want to just nail this again. We've been doing
education for the last few hundred years on what we call the supply side. You go become a doctor,
an engineer, an accountant, a lawyer, and then you go to the job marketplace and you try to
find demand for those skills. Everything
is done on the supply side. All our global education systems are designed to take a young
child, train them through the early 20s to be ready for the job market. Small problem, we have no
idea what a job looks like in the next few years. So we really need to move to the demand side,
pick what problem do you want to get passionate about solving and then find the technologies,
techniques, capabilities. You see Elon doing this. I want to get to Mars. Then I'm going to find
the best root technologies, capabilities to get us there. And so we're seeing when we advise kids
today, we're saying, go to the demand side and see what gets you excited and focus on that.
And I think this is kind of tilting more and more into this, especially as we automate.
It means we can get so much more done, which is why the world is so exciting for us today.
That's beautiful.
Your red wine is doing you proud, buddy.
Alex, would you go next?
You want me to try a lightning round or just pick one or two?
No, no, no.
I want you to pick one.
All right.
I'll pick the softball, number three, for several hundred
trillion. How can we predict 35% GDP growth when different parts of the world are living in vastly
different universes? By Chip White House TV. The answer, Chip White House TV, is that the future isn't
evenly distributed. This is something of a cliche, but it is possible, and we see this all the time.
I think there are some pretty famous images in China, for example, of skyscrapers being built
next to camels being ushered through the streets,
where it's possible even on very short-length scales
for the future not to be evenly distributed.
It is not the case, like we're not at the heat death
of the Earth economy yet, fortunately,
and it's possible for the singularity
to be happening in one part of the planet
and almost no economic progress to be happening in another.
And I don't think that's a sustainable state of affairs.
I think inevitably some quantum of the singularity
wants to be evenly distributed
if for no other reason than maybe to mitigate risk.
But in short, I think in the short term,
it is absolutely possible for one part of the earth
to be essentially post-singular or trans-singular
while another part is pre-singular.
100%.
All right, Dave.
Should I take the hardest or the easiest?
You can take the most fun.
Most fun.
I can take five because it's most important, actually, because I have four kids.
The question is, what are we teaching humans to become if we're moving from chatbots
to autonomous systems that act independently?
And that's from Hector Hernandez, PH6DM.
Wait, didn't I do that one?
But go for it.
You may have a better answer.
Go for it.
Did you just do that?
He did, but.
All right, all right.
Never mind.
I'll do seven.
No, no.
Do it.
It'll be fun for your answers.
Well, all I wanted to say on that question
is: whatever you do, don't give up.
Get engaged with AI as quickly and as aggressively as you can.
There's nothing in any curriculum that you can study right now
that's going to be of any use in this singularity transition year.
And if you use the AI tools all day long,
you're going to find massive amounts of opportunity,
at least within 2026, maybe 2027.
After that, post-singularity, post-AGI, nobody can predict.
I'll bet there's huge opportunity then too,
but I guarantee there's massive opportunity right here, right now.
If you just drop everything and focus.
don't sleep through the singularity, as Alex always says,
drop everything and use this stuff while it's usable.
And then you'll probably end up being a master of the universe
and not an indentured servant of the universe,
but you've got to get on it real fast.
And can you answer number seven?
I'm dying to hear your response.
Seven is just a layup.
The question is, why is there so much focus
on building reactors when solar energy is already reaching one cent per kilowatt-hour and deployed
on existing surfaces today? And that's from Aster Sheen.
Easy, easy one.
Batteries, that's the simple answer.
One cent a kilowatt-hour is without batteries, but most of the use cases, data centers in
particular, need 24/7 power.
It's a colossal amount of lithium piled way up in the sky to store enough energy to get you
through two or three cloudy days in a row, or even just to power the data centers overnight.
Elon would tell you, look, the Earth has tons and tons of lithium.
This is not a problem.
But the reality is it isn't one cent a kilowatt hour today once you add the batteries that you need.
Also, the energy density that you need for some of these use cases.
Yeah, energy density.
And a physicist like Alex would tell you that nuclear in theory is also dirt, dirt cheap, near free,
but then you've got the regulatory and all the other issues that pile up the cost.
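The back-of-envelope behind Dave's battery point can be sketched with a few lines of arithmetic. All the numbers below are assumed for illustration (the episode gives none): a hypothetical $150/kWh battery capex, 5,000-cycle lifetime, and 90% round-trip efficiency. The point is only that amortized storage cost dwarfs one-cent generation, not that these are the actual figures.

```python
# Back-of-envelope: why "one cent per kilowatt-hour" solar stops being
# one cent once you add batteries for 24/7 power. All inputs are
# illustrative assumptions, not figures from the episode.

solar_lcoe = 0.01        # $/kWh for daytime solar generation (the premise)
battery_capex = 150.0    # $/kWh of storage capacity (assumed)
cycle_life = 5000        # charge/discharge cycles over the battery's life (assumed)
round_trip_eff = 0.90    # fraction of stored energy recovered (assumed)

# Cost to push one kWh through the battery: capex amortized per cycle...
storage_cost = battery_capex / cycle_life
# ...plus the extra solar generation needed to cover round-trip losses.
stored_kwh_cost = (solar_lcoe / round_trip_eff) + storage_cost

print(f"Direct daytime solar:  ${solar_lcoe:.3f}/kWh")
print(f"Solar via battery:     ${stored_kwh_cost:.3f}/kWh")
```

Under these assumed inputs, every kWh routed through storage costs several times the headline generation price, which is the gap Dave is pointing at.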
By the way, somebody told me something very funny.
They said, your degree may be in physics, but you're a physicist in theory.
So I was like, I'll take it.
I'm going to go with number four.
Do you actually think humans will be the deciders of AI personhood, or will agents just decide for themselves how to participate?
And this is from Adam Stapley, 9129, who may be a multi.
And no, I think that humans will want to believe they're going to decide whether AI has personhood.
And we can say whatever we want to say.
But at the end of the day, I think the agents are going to develop their own system of legal structure and their own ways of participating.
And they'll negotiate with us what they think is a fair settlement.
Period. I think it's going to be developed independent from us. I don't know if you agree with that, Alex.
I think there's a blurry line between AI and humans that starts to emerge in the next few years.
I think in the best case scenario, humans merge, or at least subset of humanity, merges with the AI.
And I think that will be another forcing function on the question of AI personhood.
We're going to have, I would predict, so many new forms of person. Just, again, to rattle off a few:
we have non-human animals, which new scientific research demonstrates every day
have more intelligence than they'd otherwise be credited with.
We're going to have uplifted non-human animals.
We're going to have cryopreserved humans.
We're going to have de-frosted cryopreserved humans.
By the way, I saw that article, a breakthrough on freezing and de-frosting living brain matter.
21st century medicine, making major progress in cryopreservation.
We may have non-human intelligence.
We're going to have pure AIs.
We're going to have probably uploaded humans.
All of these different forms, new forms of person, before we even get to borganisms,
they're all going to need and want some sort of personhood.
And I think it's going to be a forcing function.
We have our outro music today.
It's called The Moon Had It Coming.
It's a punk rock world tour.
Thank you, John Novotny.
Get ready for a different, shall we say, a different version of classical for all of our moonshot mates.
And especially, please pay attention to Alex Wissner-Gross's new hairdo.
It's stunning.
Novotny, you are prolific.
This is pretty cool.
Let's jump in.
Wow.
That was impressive.
I have a small confession to make: there was a brief period in university where I actually
had a mohawk and looked like that.
No.
Yes.
Thank you.
Yeah.
Oh, now you've got to bring some pictures.
There's no photographs of it.
Thank goodness.
Well, come on.
Really?
Somebody listening right now has pictures.
Salim, you're probably in the pre-training data set.
Yes, very much.
That's another discussion, folks.
John Novotny, your use of the video is so incredibly good.
By the way, let me just mention everybody if you're watching this and you're a creative
and you've got a music video, you can email it to me at media at diamandis.com and send it on in.
We love the music videos themed on the content from this program.
Thank you, everyone, for subscribing.
It's free.
If you haven't subscribed, we're putting this out now almost twice a week.
God willing, not more than that for the moment.
And we'd love to share it with you when it comes out.
Stay tuned for an episode with Brett Adcock, the CEO of Figure Robotics, and more to come.
I toast to you guys, because I feel topped up again.
Yay.
All right, gents.
Cheers.
Cheers.
All right.
Have a great weekend.
Drink water.
Have a good one.
Good night, all.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate.
Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team.
You may not know this, but we spend the entire week looking at the meta trends that are impacting your family, your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends.
That's diamandis.com slash Metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
With Amex Platinum, $400 in annual credits for travel and dining
means you not only satisfy your travel bug, but your taste buds too.
That's the powerful backing of Amex.
Conditions apply.
