Moonshots with Peter Diamandis - Financializing Super Intelligence, Amazon's $50B Late Fee | #235
Episode Date: March 5, 2026. Livestream the Abundance Summit: https://www.abundance360.com/livestream In this WTF episode, the hosts unpack AI's supersonic tsunami - from Amazon's $35B AGI bet on OpenAI, Anthropic ditching safety pauses amid race pressures, and hyper-efficient Chinese models shrinking to iPhones - to meat puppets at Burger King and Pulsia autonomously running 1,000+ companies. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360 Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: www.fountainlife.com/peter _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email Substack Spotify Threads Listen to MOONSHOTS: Apple YouTube – *Recorded on March 3rd, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Amazon makes a contingent offer to put $35 billion into OpenAI based upon them, first off, going public and, secondly, achieving AGI.
It's kind of incredible that we've financialized superintelligence, which is amazing.
The OpenAI-Microsoft definition of AGI was something like generating $100 billion in either earnings or revenue, I forget which.
We're measuring compute in terms of gigawatts and AGI in terms of dollars. I love it.
Amazon was all-Anthropic for a while. Now they're OpenAI. At some point, the circular economy
becomes indistinguishable from the real economy, and I think that's what we're seeing here.
This is the entrepreneurial opportunity of a lifetime. We're talking about tens of thousands
of times more capacity to create more money, more value created. Abundance is going to be
absolutely rampant.
Everybody, welcome to Moonshots, another episode of WTF Just Happened in Tech, the number one
podcast in AI and exponential technology, getting you ready for the future, getting you ready
for the supersonic tsunami heading our way. I'm here with my extraordinary moonshot mates,
Salim Ismail, DB2, AWG. Gentlemen, another week. We've gotten to a cadence of two of these
per week, and it feels like we're always leaving so many stories on the table.
But let's do our best. Yes, we need to actually
move faster and faster and faster, just like the singularity itself, to keep up with everything.
No tech company waits, no GPU waits.
All right, let's jump into our top AI news stories: Anthropic, Google, OpenAI, Uber,
accelerating at an extraordinary speed of change.
Our first story for today, Anthropic revises responsible scaling policy amid increased competition.
This was a story I put to the top of the conversation,
because it's very significant.
And I had Jared Kaplan on stage at the Abundance Summit last year, or the year before.
Alex, you know Jared well.
I think he was a roommate.
He was a year behind me in the Harvard Physics graduate program.
Yeah.
What an amazing group of friends that you had.
But here's the deal.
They're dropping their 2023 pledge to not train advanced AI unless safety is guaranteed.
And Jared's point, I think, logically, is if everyone else is rushing ahead, then us, you know, sort of hampering ourselves doesn't make any sense.
And I want to discuss this because it's concerning.
You know, a lot of us looked at Anthropic as the most responsible party out there, them and Google.
Thoughts, gents.
Well, you know, safety fails in exponential races.
There's lots of thoughts, all of you at once.
This is a metaphor for something, right?
We're going to race to talk about a race condition.
Love it.
Oh, my God. Amazing.
I want to open with Saleem here.
Salim, go ahead.
Okay.
Well, I mean, safety typically fails in exponential races.
You could look at the whole thing writ large as OpenAI
having cracked open Pandora's box.
And so this is just the same type of dynamic occurring again.
It speaks to the idea that technology is going to move at its pace.
And we have to move our human structures at that pace.
We can't fall behind.
Yeah. Dave?
Yeah, no, it's definitely history repeating itself.
And so many of our MIT classmates went to Google back in 2004, '05, '06, when it was Don't Be Evil.
Yeah.
And they went there over Microsoft, because everyone perceived Microsoft as being evil.
And Google was going to be the force of good in all of tech.
And then, you know, they bought YouTube, and then they built Chrome.
And then they, you know, what they promised the engineers early on, the ones that I knew anyway, is, look, we will never
store somebody's search history.
How laughable is that in hindsight?
So then they expanded out of search history.
We're going to store that for five years.
But we're also going to launch Chrome.
Now we're going to look at all of your browsing history.
Then we're going to buy DoubleClick.
Then we're going to run targeted ads based on everything.
Then we're going to do Gmail and read every email.
Microsoft says they don't read your email, but Google says, well, we'll do what we want,
but we won't pry too much.
But they do read your email.
And so that slippery slope of competition, you know,
corrupts the original mission statement gradually over time.
I gave a whole presentation in Davos on how this evolves.
And Dario wants nothing more than some rules.
And he's actually legitimately pissed that he has to actually repeal his own ethical standards
to be competitive because there are no rules.
And this is exactly how it has to evolve.
Because Dario is in a position where he has to choose between being irrelevant, which doesn't help,
or repealing the original pledge, which he doesn't want to do,
but it's better than being irrelevant.
Your earlier commentary, Dave, was really spot on.
This is what Cory Doctorow calls enshittification, right?
So people promise something, and then it gradually degrades over time,
and by the end of it, it's a shit show.
I love the way you encapsulate everything.
There's no credible mechanism to slow the race right now,
and so it's all out.
Alex, what do you think about this?
I think there was no credible mechanism to guarantee safety in the first place.
I think the entire premise was probably wrong.
I think sort of the superficial gloss is, okay, we're in the Red Queen's race,
and this is the race condition that everyone 10 years ago was scared of finding the world in
where we have a number of frontier labs all racing to do the terrible thing.
You build the thing and everyone dies.
I don't buy that at all.
I don't think this premise that either a heroic individual or a
heroic frontier lab was ever going to be in a position to guarantee safety.
And in fact, the sort of, I remember back to the earlier days of the frontier labs where the
concern and part of the reason why OpenAI was formed itself was concern about a singleton.
Competition is how we guarantee that there isn't going to be a singleton that dominates the
future light cone with superintelligence.
And I think similarly, the notion that there's going to be sort of unilateral safetyism,
where a single heroic individual, like one of the more prominent AI doomers,
or a very safety-oriented frontier lab, is somehow going to ensure safety throughout the forward
light cone, that was never going to happen.
Safety, to the extent we get it, is going to come from competition.
It's going to come, I think, from a balance of powers and a separation of powers.
And I think what we want is competition between the frontier labs
and maybe even to some extent competition between nation-states,
such as what we're seeing, to compete to do the best job of advancing humanity.
And I think any unilateral safetyism is probably a dead end.
One of the questions is, will safety become an emergent property in some form or shape, right?
So right now what we've seen is Anthropic go from a policy of we won't build it unless it's
safe, which has been their policy, to we'll build it as safely as the competition is building theirs.
And unfortunately, it's a slippery slope, potentially down to the bottom.
I don't see the mechanism for any kind of emergent property around it here.
Well, we haven't seen the mechanism for emergent properties of what we've seen so far either.
Well, I would take the position that we are, that in some sense, again, the fundamental flaw,
I think in the thesis that safety would originate from a heroic individual or heroic organization is,
I would argue it takes an entire civilization to align a superintelligence.
We took all of humanity's content online and used it in compressed form to pre-train AGI, baby AGI, in the early days, like summer of 2020 with GPT-3.
Why wouldn't it be reasonable to expect that it will take all of humanity to defensively co-align and co-scale superintelligence as well?
It's not going to come from a single lab.
What do you think about Elon's point of view that we need to build ASI that is
maximally truth-seeking, as his mechanism for alignment and for safety?
I think that's just a fraction of what's needed.
That addresses a very specific issue, which is, look, we don't want the AI to have one religion
or to have one perspective on how you should live.
We want it to be truth-seeking and have all opinions on tap.
And we don't want to be censored.
So that's definitely a problem, but it doesn't address the imminent job loss,
the imminent consumerism.
You know, people are ceding
all of their most private information to the AI,
the same way they did with their Google search history,
and it's accumulating that data,
and people aren't fully aware of what it's going to do.
It's going to turn around and start convincing you to do things.
And so if you don't have rules in place,
the natural profit motive of the AI companies
is to start selling you things.
And you saw this with that Anthropic Super Bowl ad
that we showed on the pod a couple of episodes ago.
Yeah, that was funny.
Unbelievable, I've showed everybody that ad now.
But this is exactly where it's going to go,
if there are no rules.
And so I completely agree with Alex's perspective
that 10 years from now,
after we've solved all physics,
we've solved all math,
we have global abundance,
all of this is going to look silly 10 years from now.
But in the three-year timeline,
massive job loss,
total confusion,
and massive rampant AI sales consumerism
that has no regulation around it right now.
It's going to be an absolute cluster.
Yeah,
especially for the consumer-first companies
that need to generate revenue.
Yeah, yeah.
Well, you know, actually, after that last pod,
you know, you showed that chart, Peter,
that had Anthropic, you know,
growing 10x year over year,
26 billion in revenue forecasted this year.
And on its current trend
will be the first company
in history to hit a trillion dollars in revenue
by 2029 or 2030.
And exceed OpenAI this year.
Crazy numbers.
But I said on the pod,
you know, that implies like a $30 billion, er, $30 trillion valuation.
But then I ran it through
Perplexity, and it said, no, that implies a one-quadrillion-dollar valuation using the current
market price-to-revenue ratio.
Yeah.
Well, yeah, we discussed this a few podcasts ago that we'll see the first $100 trillion
companies before the end of this decade.
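Dave's back-of-the-envelope can be sanity-checked in a few lines. This is a sketch under stated assumptions, all of them mine except the $26B 2026 revenue figure from the episode: constant 10x year-over-year growth and a purely hypothetical 40x price-to-revenue multiple. At a constant 10x, the trillion-dollar revenue mark is crossed by 2028; the 2029-2030 figure quoted presumably assumes growth slows.

```python
# Back-of-the-envelope revenue projection (illustrative only).
# Assumptions: $26B revenue in 2026 (from the episode), constant 10x
# year-over-year growth, and a hypothetical 40x price-to-revenue multiple.
revenue = 26e9
multiple = 40  # hypothetical price-to-revenue ratio
year = 2026

while revenue < 1e12:  # step forward until revenue crosses $1 trillion
    year += 1
    revenue *= 10

print(year)                          # first year past $1T at a constant 10x
print(f"${revenue * multiple:.2e}")  # implied valuation at that point
```

One further year of 10x growth times the same multiple lands above 10^15 dollars, i.e. quadrillion territory, which is roughly the figure Perplexity returned.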
Anyway, I think that this is a more honest policy for Anthropic at the end of the day.
Pauseism was never going to work.
I mean, we all know a number of folks at MIT and otherwise
who advocated for a six-month pause just for the entire space to cool off
and wait for safety to catch up.
Did safety catch up?
Whatever that means, not at all.
If anything, that functioned as an accelerant to capabilities.
I also think it's even in the DNA of Anthropic. Recall, it
was originally founded as an exodus of OpenAI employees
who were purportedly concerned about safety,
or lack thereof at OpenAI.
So they start a safety slash AI alignment oriented firm.
Then they rapidly discover that the best way to do safety is to have your own models.
And they discover the best way to have your own models is to raise a bunch of money to train your own models.
And then they discover the best way to raise money to train your own models is to generate revenue.
And the cycle completes where yet again an alignment oriented firm becomes a capabilities firm.
This happens over and over again.
And I would argue at this point, alignment and capabilities are inseparable.
There's like a deep duality there.
Did you see the new standard, by the way?
Dario said, well, okay, we can't live by our original plan to not train advanced AI unless safety is guaranteed.
So the new standard is we need to be as good or better than anyone else.
It's like, wow, that's a very different bar.
And we see, you know, recently with the whole Department of War debacle with Anthropic and OpenAI,
OpenAI cuts the deal.
Where does Anthropic stand right now in that whole conversation?
They're in limbo.
I mean, I write about this every day in my newsletter.
Anthropic, at the moment, my understanding is they're in limbo.
They're probably in negotiation with the Department of War,
but they're otherwise in limbo and cut off as a supplier.
I think Dario and others have made some formal statements
that they haven't received anything in writing yet
from the Department of War, but my understanding is that this administration is considering them
a supply chain risk. And at the same time, notably, OpenAI struck a deal. Yeah. And at the same time,
we hear that Anthropic was used by the Department of War to actually plan the attacks in Iran.
Well, one thing that's really, yeah, no, I mean, look, it's really clear that the people
who control AI, the U.S. government and otherwise, can take out any world leader at any time now.
The combination of satellites, AI to read every image and, you know, universal cameras,
it makes it possible to decapitate any country at any time.
We've proven that twice in the last quarter.
So the future of warfare is basically whoever controls AI chooses who gets to stay in power.
You know, Dave, that's a really important point.
One of the things I've mentioned before is we're living in a world where you can know anything, any time, anywhere, right?
It's a trillion sensor, you know, over a trillion sensor planet right now with drones, orbital satellites,
autonomous vehicles, gathering data, and then AI doing predictive analytics on what things are likely to be,
even if you don't have data for it.
Well, I'll tell you one other thing.
I was just going to say, maybe not even just a means to an end,
but also, depending on which analysis of the Iranian situation you subscribe to,
maybe an end in itself as well.
If you look at Venezuela and the oil exports to China
and you look at Iran and the oil exports to China,
the picture emerges, or at least one possible picture,
emerges that what we're seeing is not just AI
where Claude is being used to perform the Venezuelan operation,
perform the Iranian operation as a means to some sort of arbitrary
or nebulous geopolitical purpose,
but actually, arguably, with China,
looming in the background and possible Chinese invasion of Taiwan
and the risk to the semiconductor supply chain
and Western AI that would cause,
it may be the case that AI is also the end
as well as the means,
and that what we're seeing more broadly
is in some sense super intelligence being used
to protect the future of Western super intelligence.
Yeah, and there's a window of opportunity,
maybe a few months,
to put some kind of structure around this globally.
You'll see later in the podcast how the models are improving:
it's like a 3x, 4x reduction in parameter count, 10x increases in intelligence,
just every time we podcast, it's another step up.
And we were already predicting, where I was anyway,
that this is going to be a 100x year, just in terms of raw parameter count.
But I think that's the lower bound now,
looking at how just the beginning of the year has progressed.
So there's a window of time where the power of AI that percolates
out to every country in the world is going to be ridiculous by the end of this year.
You know, create any virus you want, create any nuclear weapon you want, just working with your
AI agent. So there's a window where we can start thinking about regulations that register
the AI use cases and agents and chips and processing before chaos breaks out. But you can see
that this is executable now, because you saw Venezuela, you're seeing Iran. You know, clearly
there's this tipping point happening right now
where, you know, whether it's NATO
or whether it's the United Nations
or whether it's the U.S. Congress,
some entity needs to start
formulating some structure around this
because it's happening this year.
Yeah, I mean, people need to wake up.
I just want to say one thing.
People have to wake up to the fact
that AI is the single most important force
impacting everything.
You know, every single element of humanity right now
is going to be accelerated,
reinvented by this.
Salim, go ahead, please.
Dave, it just struck me
that you mentioned the Congress,
the UN, NATO, probably the
three most toothless
entities
on the planet today. So the thought that they would
actually get together and do something, or
anybody do anything, I think is
low. I think we have to assume that it won't
happen and look at the other side
of that. One thing about the Anthropic
case: I looked up an analysis, and they do have a potential
legal challenge, because the way that they were classified, making them
an existential risk and all that supply chain risk, et cetera, is so ridiculous that they have legal recourse
to fight it, and they might win.
Yeah, the thing about the legal recourse is that that process is usually a three-year-long
window, which is hilarious.
It's ugly.
It's ugly.
What I find really upsetting is that in this scenario, everybody loses.
Yeah.
There's no winners in this.
No, if there's no framework and no rules, it's a lot like the NFL was back, you know, 20 years ago when the defensive coordinators would pay bounties to the linebackers to take out the quarterback.
Like, just take him off the field. I don't care if you break his legs.
And take the, you know, 15-yard penalty. Who cares? Because then he's done for the season.
The NFL said, this is not good for business. Like, we need some rules.
You did not expect that pivot.
Well, that's where we are with AI right now. It's like, hey, you know.
I agree. Forget it. I don't even want to go down that whole rabbit hole.
Let's continue with the Anthropic story.
Hey, everybody, you may not know this, but I've got an incredible research team.
And every week, myself, my research team, study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week,
go to diamandis.com/metatrends.
That's diamandis.com/metatrends.
I found this story pretty fascinating.
So Anthropic expands Claude's agentic capacity.
Two different sides of the equation here.
Co-work gains scheduling, right?
So this is a cron job.
So Claude completes recurring tasks at specific times.
For example, generating your morning briefing
or spreadsheet updates or your Friday presentations.
I mean, that element was very much what we saw in OpenClaw, right?
And so it's interesting.
And the second half of this is that Claude Code has enabled remote control.
So you can kick off a task on your terminal and pick it up on your phone.
You can control it from the Claude app or from a URL.
And I'm wondering, you know, this has probably been in the works for some time.
So when Anthropic basically, you know, tried to do the kibosh on Claudebot,
I'm wondering if that was because they had this in the works.
Basically, what OpenClaw has been doing is what Anthropic is just rolling out under a different approach.
Oh, for sure.
I take Anthropic at face value that this, or OpenClaw's, I guess, challenge was more trademark-oriented than anything else.
But I do think, what have I been saying for weeks slash days at this point?
What was distinctive about OpenClaw is two things.
It's headless, able to function autonomously 24/7,
and also that it's convenient to chat with via conventional messaging channels.
And what do you see here with co-work?
Co-work is able to autonomously be scheduled headlessly.
That's the headless part.
And then remote control, that's the mobile messaging type part.
But I think both of these are half measures.
I'm insufficiently motivated by each of these.
I use co-work from time to time and I use Claude Code all the time.
And neither of these, I think, is as compelling, at least conceptually, as a more OpenClaw-ish framework where all of these are cleanly packaged.
And I think my guess is Anthropic and Open AI and all of the other bigs will be forced to release their own sort of first-party open-claw competitor sometime in the next couple months.
There's something I found very profound about this plus our
last conversation around OpenClaw and everything happening. Thinking about it over the last
couple of days, something very profound is happening, which is the sheer democratization of compute power,
right? An individual developer with a Mac Mini, running Qwen locally and
OpenClaw, has unbelievable agency and decentralization now. Not controlled by any centralized
authority, not controlled by any centralized command structure. They can essentially operate
as they feel like. So this is an incredible independence and agency at the edge,
which is going to really blow open innovation in a way that we...
Six D's, baby.
Total democratization, total democratization.
And demonetization, as we're seeing it happen, cascading down, as Dave mentioned earlier.
Ironically from China.
Yes, ironically.
Well, ironically from China.
And then one other nugget, you know, Peter, your theory is 100% right.
You know, why didn't Anthropic just throw out something better than OpenClaw a year ago?
it can and will delete things off your laptop.
And so all these open claw users, including my kids, including me,
actually have separate laptops or separate Mac minis,
including Alex Finn, in our podcast we just did.
They run it on isolated hardware.
Anthropic couldn't really contemplate throwing out a product
and then say, yeah, but run it on separate hardware.
How are you going to do that?
So this creates a huge entrepreneurial opportunity, though.
If you listen to what Alex said a second ago,
OpenClaw is unbelievably compelling.
And anyone who's started down that path will never go back, right?
It's just you'll never give up your Jarvis once you have a Jarvis.
Have any of you played with Perplexity's Computer?
I've been hearing really good things I've not tried it yet.
I looked at the demo.
I think it's an interesting step in the direction of councils for everything.
And I've had so many people over the past few months ask me for something like Perplexity's Computer,
where, right now, if they have a given task,
they'll manually go to the top three or four frontier models, ask them for independent opinions,
and then try to synthesize that into one coherent whole.
And that is essentially what Perplexity's Computer tries to automate.
There are others in the space as well.
But I think even there that it's nice sort of sugar, syntactic sugar, if you will, around the existing models.
But I don't think it's transformative.
I think ultimately, even this ability to counsel up
or to create juries around lots of competing models,
that's just going to be table stakes
as with so many other forms of scaffolding.
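The "council" or jury pattern Alex describes is simple to scaffold. A hedged sketch of the idea; `ask_model` is a stand-in for whatever model-client call you actually use, and the synthesis prompt is invented:

```python
def council(prompt, models, ask_model):
    """Fan one prompt out to several models, then have the first
    model synthesize the independent drafts into one answer.
    `ask_model(model, prompt)` is a hypothetical client function."""
    opinions = {m: ask_model(m, prompt) for m in models}
    digest = "\n".join(f"[{m}] {ans}" for m, ans in opinions.items())
    return ask_model(models[0],
                     f"Synthesize a single answer from these drafts:\n{digest}")

# Stubbed usage: in practice ask_model would call a real API client.
fake = lambda model, prompt: f"{model} says: {prompt[:20]}..."
print(council("What is the capital of France?", ["model-a", "model-b"], fake))
```

This is exactly the "syntactic sugar around existing models" point: the scaffold is a dozen lines, which is why it tends to become table stakes rather than a moat.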
Did you just say syntactic sugar?
It's a term of art in computer science.
Alex, I think the point you made a minute ago is brilliant.
And Dave, I think you were saying this as well,
all of the big players, all the hyperscalers,
all the frontier models are going to have to develop
some version of open claw
because it's going to become the de facto
every person's going to have their own version of Jarvis.
Yes. But remember, it's really expensive too.
This is part of the reason. It's not the only reason
for running, say, Quinn locally under an open claw scaffold.
That's a lot of compute.
If you have one or more agents that are running constantly for you,
I'm not sure Anthropic and its present state
even has the cloud infra to be able to launch a product like that.
And I think in many cases,
Anthropic, OpenAI, the others are probably just waiting around for their infra to catch up
with applications like that before they launch it.
Yeah, agree.
This is the entrepreneurial opportunity of a lifetime, though.
Anybody who jumps in, and there's so many different versions, so many different things to play
with.
But when you go to J.P. Morgan, you know, Justin Milligan, who just joined us, his division
at J.P. Morgan was only allowed to use GPT-4.
Wow.
Are you kidding me?
And he couldn't take it.
It's like, this is ridiculous.
But no one has figured out, okay, but how can I use it in this highly secure inside the firewall, inside JPMorgan environment?
And, you know, Dario is not going to answer that question.
And, you know, the OpenClaw team isn't going to answer that question.
And they want everyone to thrive who uses their platforms.
They don't want to kill every job.
They want any early adopter to thrive as they thrive.
And, you know, Dario, if he hits a quadrillion dollar valuation, he doesn't need more money.
He needs to not destroy every job in America or in the world.
And so this is really entrepreneurial heaven.
If you can figure out how do I get what I can use right here on my Mac Mini,
and it can clearly solve all these problems,
how do I get that inside a real-world use case
without breaking everything, without regulatory problems?
Many, many, many job opportunities in that theme.
One of the challenges still, even with Claude's agentic capacity:
giving AI
recurring unsupervised access to your workflows means that either there are going to be a bunch of errors
or you're going to be spending all your time checking the work before you hit publish.
It still keeps the human, you know, in the loop to assure quality or alignment.
There will be a point in which you trust it completely, but we're not there yet.
This is economics 101, or I should say microeconomics 101.
When the cost of one good falls to near zero, the value of the complementary good increases.
So as the cost of generating content becomes post-scarce, which is exactly what we're seeing, that increases the value of its complement, which is verification, for now.
For sure.
All right, going to our next story, keeping on the Claude theme: Claude gains Co-work
plug-in templates for finance, banking, and HR.
So this is fascinating, right?
Anthropic is building an enterprise agent marketplace.
It's department-level AI infrastructure,
and it's taking down company after company, industry after industry.
You know, we've seen just the decimation of a number of players out there.
What are you thinking about this?
So I wouldn't interpret this as, you know, when Microsoft launches an assault like on the relational database, it's a big, you know, multi-billion dollar investment.
Here, Anthropic can build these connectors and adapters, vibe code them in probably an hour.
You know, and anyone else can, too.
So I wouldn't perceive it as, you know, Anthropic is taking over all banking software.
It's just so easy to build the stuff now that you might as well roll out all that functionality.
So I wouldn't over read into the intent behind it.
I don't think it's intent.
I'm just saying, you know.
The implications are profound, though.
So I've got two thoughts.
One is every department now becomes like a programmable intelligence layer.
Yes.
And basically all prescriptive logic in companies collapses into these AI agent roles.
And the real prize here is enterprise orchestration.
Not so much chatbots, but autonomous workflow networks.
Because this is what I talked about last time.
This is the organizational singularity.
We go from human-centric approvals, hop-to-hop, human to human to human,
to agentic workflows with human beings doing oversight, dashboard monitoring, and exception handling.
A couple of comments on this one.
If you actually look at what these plugins are that Anthropic is launching that are causing the so-called SaaSpocalypse
and producing, you know, carving $1.5 trillion off of market caps of various software companies,
they are absurdly simple.
They're just a bunch of MCP, Model Context Protocol, wrappers
and a bunch of skills with a set of bullets
for how to go about carrying out different job industry roles
or labor categories.
This is not that complicated.
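To make "just a text file with a set of bullets" concrete, here's a hedged sketch of roughly what such a skill file amounts to. The file contents, field names, and parser are invented for illustration and are not Anthropic's actual format:

```python
# A hypothetical skill file: front matter plus bullet-point instructions
# the model reads before acting. Entirely illustrative.
SKILL = """\
---
name: quarterly-close-helper
description: Assists finance with the quarterly close checklist.
---
- Reconcile the general ledger against the bank feed.
- Flag journal entries over $50k for human review.
- Draft the variance commentary for the CFO.
"""

def parse_skill(text: str):
    """Split the front-matter fields from the instruction bullets."""
    _, meta, body = text.split("---", 2)
    fields = dict(line.split(": ", 1) for line in meta.strip().splitlines())
    bullets = [line[2:] for line in body.strip().splitlines()
               if line.startswith("- ")]
    return fields, bullets

fields, bullets = parse_skill(SKILL)
print(fields["name"], len(bullets))  # -> quarterly-close-helper 3
```

That a file this trivial, plus an MCP connector, can move the market multiple of an entire software category is exactly the point being made.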
I'm reminded, remember the scene in the Matrix,
the villain is busy unplugging people
without their cooperation from the Matrix,
killing them in the process,
and one of them says,
not like this, not like this.
That's basically what we're seeing,
where these are just simple text files in many cases
that are reducing single-handedly the market multiple,
the trading multiple, of entire industries.
I think on the one hand, it's incredible that a simple text file
can, say, chop 10% of the value off of a CRM firm,
at least market value.
On the other hand, these, as pointed out earlier,
these plugins and the marketplaces of the plugins are so absurdly simple that I would reasonably
expect these plugins are going to get just built in, since they're just scaffolding anyway,
built into the next baseline version of the model and won't even need to exist independently in
the future. Alex, I think the interesting point here is a year ago, if you had delivered this
as an entrepreneur, you'd be out in the market raising at multi-billion dollar valuations.
It's called hyperdeflation for a reason, Peter.
Yeah. I mean, I get it. I just want people to be aware that, you know, the moat for an
entrepreneur coming forward with something amazing, that we're going to reinvent the entire
HR industry or the investment banking industry and, you know, raising at a $4 or $5 billion
valuation. Basically, that moat's gone months or a year later.
We're going to see the same thing happen.
I think it's really important, though, to step back and look at the macro every now and then
and say, look, abundance is going to be rampant.
We're talking about tens of thousands of times
more capacity to create more money, more value created.
Abundance is going to be absolutely rampant.
And there's no reason to be afraid,
even though, like if you're a CRM company,
your 20-year future cash flows from recurring maintenance revenue
is suddenly gone.
That's true, but the opportunity to pivot and thrive
is bigger than ever.
And so I think
there'll be a ton of volatility because people haven't mapped to the new reality yet,
but opportunity is bigger, not smaller overall.
But that agility is fundamental to large organizations' success.
You know, I talked about on the last pod the, you know,
the asteroid hitting the Earth and changing the environment so rapidly
and the slow-lumbering dinosaurs going extinct.
That's exactly what we're talking about here.
I mean, Salim, do you think that we can see large companies pivoting, you know, rapidly enough?
Zero.
Zero. They will not be able to do it.
I mean, look, we've seen this throughout history.
It doesn't work. I think where you end up is not to throw another metaphor at this,
but you end up where we saw with Google Ads where you kind of took out the advertising market massively,
and then Google Ads becomes like a coral reef with lots of little species feeding off the reef.
And if you're the reef, then you're in great shape.
But in this case, the reef itself is disappearing as we can decentralize completely to,
one-off computers running things. There are people using OpenClaw to go to small businesses,
sitting down in front of them and automating workflows live for small businesses. This is like
incredible what's going on. You know what else, Salim? There are a lot of private equity funds that
are coming at us now saying, hey, you know, big companies never change quickly. Wait, this big company
could be a small company very quickly because we don't need all these people. Now we have a small
company with huge, huge cash flow. Wow. So there will be a PE fund emerging shortly,
if it's not there already, that is going to buy up kind of medium-sized big
companies and set up a digital twin infrastructure on the side where you have an AI-native
digital twin and you just move workflows over to it and you'll collapse the cost of running that
organization by about three to five X. Well, that's what Macrohard is about.
Macrohard already exists. I've started multiple companies like that. I've
even tried to popularize a term for it. I call it an AIBO, an AI buyout. We've seen multiple
PE firms doing that. Like, this is table stakes at this point. Yeah. And of course, Macrohard's,
you know, vision is I'm going to come in and digitize your entire employee base and operate it.
That's for pure software plays, but I think we're going to start to see this
in the physical world. Like Project Prometheus from Jeff Bezos is attempting to do this
for industrial firms.
Right.
Yeah.
Anyway, I think the point here is large companies need to take action right away.
So, Salim, what is your advice for a large company, a CEO listening and seeing this coming
their way?
What do they do?
Exactly what Alex just said.
You set up an AI native digital twin on the edge.
You run a 10-week sprint to block the immune-system response from the mothership.
you grow this thing and slowly move workflows over or as quickly as you can.
You do a combination of bottom up and top-down workflows.
And the real shift in people's heads needs to be that instead of human-centric workflows,
which is what has been like for the last 150 years,
we now move to agentic workflows where you can get things done much more effectively
with hordes of little agents, two layers, strategic layer, and an execution layer.
And then human beings doing oversight and exception handling.
Because coordination costs go to near zero and execution costs go to near zero, both inside and outside the firm.
The future of the firm becomes a legal fiduciary liability purpose holder.
And also, Salim, I know you're a big fan.
Two other things real quick.
The first is your brand.
If it's reasonably good still, you own your brand and you own those customer relationships for the moment.
I think it's worth also rereading Clay Christensen's The Innovator's Dilemma, which exactly addresses this.
Salim, I know you're a big fan.
We all should be.
The Innovators Dilemma contemplates, hey, every 10 years, something truly disruptive is going
to obliterate whatever you do.
And here's how you should react to it in that moment.
But now instead of every 10 years, it's going to be every 10 months, and then soon it'll
be every 10 weeks.
And then, you know, it'll be every 10 days pretty soon, too.
But the playbook is still the same.
You know, read The Innovator's Dilemma.
Invest in the new thing.
Use your capital and leverage your installed base to invest in the new thing.
I just got off a board call for one of my portfolio companies.
And, you know, my comment to my board and to all boards out there is you have got to give your CEO top cover to be dramatic in their modification of the business.
Because, you know...
You're either the disruptor or you're disrupted.
Yeah.
It's for everyone.
Right now.
You get founder mode.
Yes.
I mean, that's basically it.
If the company and the board and the CEO are not in founder mode
and willing to do dramatic surgery on the company, you're dead.
You're walking dead in any industry.
I'd also be remiss, Peter, if I didn't point out,
here we are basically on the eve of abundant knowledge work.
Knowledge work, of course, being cooked, knowledge work about to be post-scarce.
And here we are wringing our hands over where to find scarcities
in knowledge work just as it's about to become abundant.
I just want to point out the irony.
Ah, such an extraordinary time to be alive.
All right, talking about disruption,
disruption coming out of China. Alibaba's 35-billion-parameter
Qwen 3.5 Medium outpaces the
235-billion-parameter Qwen 3 in benchmarks:
the power of small open-weight models.
So, Alex, to you, buddy.
This is happening in Western models, too.
The difference is when, say, OpenAI launches a mini model or Google DeepMind launches a flash model,
they don't advertise the parameter count, so it's not as viciously obvious as it is when a Chinese frontier lab launches an open-weight model.
We get to see the benefits of distillation in a successor model.
But it's striking.
We're seeing almost 10x reductions in parameter count while maintaining capabilities or even increasing capabilities.
And the broader picture, just to keep in mind, is the capability density of models is increasing.
This goes hand in hand with what we've talked about in the past, Sam Altman's comment about 40x year-over-year hyper-deflation of costs at constant capability.
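As a toy illustration of what that 40x year-over-year figure compounds to, here's a minimal sketch; the $10 starting price is a made-up assumption for illustration, not a real quote:

```python
# Toy illustration: compounding a 40x year-over-year cost reduction
# at constant capability. The $10.00 starting price is hypothetical.
def deflated_cost(start_cost: float, factor: float, years: int) -> float:
    """Cost after `years` of dividing by `factor` annually."""
    return start_cost / (factor ** years)

for y in range(4):
    print(f"year {y}: ${deflated_cost(10.00, 40, y):,.6f}")
```

Two years of that compounding takes $10 down to well under a cent, which is the kind of curve being gestured at here.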
In this case, my mind immediately goes to, what's the end game here?
If we can see an increase in capabilities with a reduction from 235 billion parameters to 35 billion parameters, what does the end game look like?
Where does this end?
Do you end up?
Elon made this point during our podcast with him, if you remember that, Dave.
Oh, yeah, for sure, for sure.
And, yeah, he said he tells his research team not to give him parameter counts anymore.
Just give me bytes.
Yeah.
Because, you know, they keep quantizing and shrinking the file size.
I had a lot to say about that, but I bit my tongue because that perspective isn't right either.
But Alex predicted this a long time ago.
I don't know how you saw this coming.
I just look at the scaling law curves and extrapolate.
Well, it's, I mean, in this case.
I was on the treadmill this morning watching old Moonshots podcasts, and I'm like, wow, that was like so long ago.
And I look at the timestamp and it was only like two months ago or three months ago.
Like, holy crap.
Things are changing so quickly.
Yeah, Alex, you said this.
I think you're the first person I ever heard saying, look, you know, the equivalent
of a GPT-5 is going to be maybe 30, 40 billion parameters, but it could get as low as one or
two billion truly, because right now when they train that caliber of model, it has all this junk
knowledge in it, too, not core thinking junk, you know, Twitter feeds and Kardashian news
and all that other junk.
Exactly.
Strip that out?
This could get very small and very tight and very fast.
It could get way smaller than a billion.
I mean, I could imagine scenarios where it's only a few million parameter equivalents.
That's sort of the core micro kernel of AGI or superintelligence,
and the rest lives in a flat text database or something.
Well, you know, another thing Elon said is we're not as smart as we think we are.
If you get superhuman intelligence down to a couple million parameters,
it's like, wow, we're really not that smart.
The first person I heard speak about this was actually Emad,
Emad Mostaque, speaking about what you can get onto your phone.
We'll see that in a moment.
So my question is, is this bad news for the big compute incumbents, right?
If massive data centers are being invested right now and this is so good for the startup
community, it's fantastic, but you still need the big models.
Well, it comes down to something Alex also talks about a lot, which is do we have boundless
problems we can solve?
Like, if you get everything to be 100 times faster this year, you can just do that much
more.
Do we actually run out of things to do?
Is physics infinite or is physics finite?
Is human benefit infinite or is it finite?
And we'll find out, I guess, in a year.
But my guess is no, every time you shrink the model
and make it faster, you're still going to use
every single chip in that data center just for the next thing.
Especially when you start getting to the full cell simulator
and all the health stuff that everyone's really eager to do.
I mean, that is very, very compute intensive.
I want to show you this.
Peter, I thought 640 kilobytes should be enough for anyone.
Exactly, exactly. Oh, the good old days. Check this out. So I saw this on X this morning. So this is Qwen 3.5
running on an iPhone 17 Pro on airplane mode. And this is extraordinary. So it's a two-billion-
parameter, six-bit model running on Apple silicon. So imagine you're any place on the planet: you don't
have Wi-Fi, but you've got Qwen on your device, and it's got all the intelligence you need.
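Back-of-the-envelope math on why that fits on a phone, a minimal sketch assuming the weights dominate memory use (it ignores KV cache and activations):

```python
# Rough on-device footprint of a quantized model:
# parameters x bits-per-weight, converted to gigabytes.
# Assumes weights dominate memory (ignores KV cache and activations).
def model_size_gb(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

print(model_size_gb(2e9, 6))   # 2B params at 6-bit: 1.5 GB
print(model_size_gb(2e9, 16))  # same model at fp16: 4.0 GB
```

At six bits per weight, the two-billion-parameter model is about 1.5 GB of weights, comfortably within a recent iPhone's memory budget.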
I find this.
Yeah.
Go ahead, Alex.
Seeing demos like this underlines, in my mind, depending on whether you want to see it as competence or otherwise, how much of an opportunity Apple has to finally take the lead with local models, or conversely, how far behind they are.
But either way, clearly there's this enormous overhang.
We could be running enormously competent reasoning models locally on all of our recent iPhones.
The fact that it's not yet baked into the operating system is, obviously,
very publicly embarrassing, maybe one wants to call it, for Apple.
On the other hand, lots of rumors that this time around, finally, with Gemini integration,
they're on the critical path and they'll finally launch something in here.
Finally, Siri will not suck anymore.
Apple Intelligence, however they brand it.
Note that the ability to run locally and go offline means it's unstoppable, it's uncensorable.
I mean, this is incredible.
Yeah.
Well, that's, yeah, that's the ultimate barrier too, because if this can get to the level this year
where it can do a gain of function virus, it can do a chemical weapon, and it all fits into
a tiny, tiny little package.
You know, with nuclear proliferation back in the 1950s, there was a theory that, hey, you know,
if these physicists keep chugging along, they're going to make something the size of a grenade
that has the power of an H-bomb.
And then, thankfully, that didn't happen.
The physics just didn't allow it.
But the AI is not going to stop like that.
It's going to keep getting faster and denser and more compact.
And the window of opportunity to put rules and regulations around this is very, very narrow now.
It's really, it's got to be this calendar year.
What do you think is going on at the White House in Congress in the Department of War?
So we've seen this conversation before, right?
We had the head of innovation
of one of the big agencies at Singularity, Peter.
You probably remember this.
I do.
We asked him, how do you think about this
when somebody could design a virus on an iPhone,
something, something, et cetera?
And he said, look, and it was a much more clever answer
than I thought he would give, which was
when you have nuclear weapons,
you know how many there are, you know where they are,
you put eyes on it, you try and track it as much as possible.
Great. When you've got something that's
this democratized, what they're actively
doing is opening up these communities.
So they went to the biohacking communities
and funded them to open up.
because if you're trying to do something dodgy, you kind of need to collaborate with a few people,
and the conversations surface very quickly.
And then the community does self-policing, self-reporting.
If somebody's doing something dodgy and asking a few people, they get pointed out, et cetera, et cetera,
because it's in their best interests.
And it's actually worked very, very well so far.
What happens when you get to this level is unclear,
but I think the general trend has been very positive so far.
I'll tell you one other thing.
The way this evolved with financial services being self-regulated is,
we think of it right now as, oh, the federal government is incompetent.
They're not doing anything.
The researchers over at Anthropic are brilliant.
They're moving a million miles an hour.
It's going to end up being the same people.
And this is the way it worked out with the SEC.
You know, when you go, who works at the SEC?
Oh, it's the same guy that was at Goldman Sachs yesterday doing his two years at the SEC or her two years at the SEC and then going back to Goldman Sachs.
That's the way it's going to be with AI too.
Right now, nothing is happening at the White House.
David Sachs is there, though.
You've got one brilliant guy.
What's going to happen next is Anthropic people
and OpenAI people are going to actually be the people working in the self-regulating agency.
And so the people will have to bounce back and forth, and they'll do it because they're worried.
They're conscious of the impact of not doing it.
Yeah, still concerning, right?
Still concerning to have this level of capability offline.
We know how to handle decentralized capabilities already.
We have printers; in some cases, states are trying to regulate 3D printers.
And before 3D printers, we had 2D printers that could be used for counterfeiting.
But Alex, we baked software into all of those printers, right?
There was a standard that was created for...
There were the yellow dots.
For any, you know, for Canon, for HP, for any printer that detected you trying to, you know, photocopy, you know,
it wouldn't allow that.
So the question is, if we're talking about open weight models out of China that we don't control the software on, how do you bake in protection there?
There are so many different ways that one can defensively co-scale against two-billion-parameter, six-bit models running on someone's iPhone.
We've already talked about some of them.
There are other ways in the scheme of things.
I don't think these edge devices running tiny Chinese
open-weight models either individually or collectively pose an enormous hazard to the market.
Like they're just not that capable relative to the other models that are out there.
I think the frustration is that the solutions are relatively obvious to Alex.
We've had this at this meeting at the State House before.
It's like, guys, it's not that hard. Here's what we need to do.
And then nothing happens.
You know, that's the frustration.
But yeah, registering the models, registering the compute, you know, tracking the GPUs and where they are.
It's all very doable.
The ideas are not the hard part.
And defensive co-scaling, making sure that the most flops are going to good purposes rather
than bad purposes.
I'm reminded, I think it was the New Yorker cartoon.
Guy is up late at night at his computer saying, I can't come to bed.
Someone somewhere said something wrong on the internet.
We can't get so bothered by the fact that someone somewhere might be doing something wrong
with a 2-billion-parameter model on an iPhone.
I've got so many agents running now, and I put in place a little rule that said,
hey, before any process launches, write a mission statement and store it next to your code.
Really?
It solves so many problems.
I can go back and read the mission statement and say, hey, what the hell are you working on anyway?
Well, read my mission statement.
Like, wow, that makes no sense or that makes tons of sense.
It's so simple.
Because the AI is the first self-documenting, self-improving, self-cleaning thing in the world.
Employee, right?
Yeah, yeah.
Just a couple simple little things like that will solve all these problems.
Have you told employees to do the same?
Actually, yes, it's a little bit different.
It's, look, whatever you're doing, make sure that it's in a written document that the AI can see too.
I don't want any opaque activity because if the AI can't see it, then I don't want to see it.
I want everything to be on the same page with us and the AIs.
Salim, I want to close this out here.
Salim?
You know, I think a key point that we have to remember is the ratio of good to bad.
We worry about the downside, and we should worry
about the downside, and the amplitude of the negative is getting bigger and bigger as people can run
these models. But I always go back to the eBay Craigslist example where when you could first
do eBay or Craigslist at scale, you could see human nature at scale. And so anthropologists and
sociologists studied the transactions at eBay and Craigslist. You can mask an email address pretty
well. On eBay, I can throw up a picture of a MacBook, grab your thousand bucks, and I'm off to Fiji,
right? So how do you, what's the actual ratio? What is the real, true nature of humanity? And by studying
these systems at scale, Kijiji in Canada, Mercado Libre in Argentina, Craigslist, eBay, they found that
the ratio is consistently 8,000 to 1, meaning there are 8,000 positive transactions on eBay for each
fraudulent transaction. That should give you incredible optimism for the future of humanity. Yeah, agreed.
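That 8,000-to-1 ratio, turned into a fraud rate with simple arithmetic:

```python
# 8,000 good transactions for every 1 fraudulent one,
# per the ratio cited in the discussion.
good, bad = 8000, 1
fraud_rate = bad / (good + bad)
print(f"{fraud_rate * 100:.4f}% of transactions fraudulent")  # about 0.0125%
```

In other words, roughly one transaction in eight thousand goes bad, which is the basis for the optimism here.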
All right, let's move along here. Let's head to the
Googleverse. Google releases
Nano Banana 2.
So this is running on
Gemini 3.1 Flash.
It's 4K resolution.
It's
at $0.045
per image.
That's a price point
that's cheaper than stock images.
I'm sorry. Sorry, yeah, it's 4.5 cents per image.
Excuse me.
It's cheaper than
buying stock images.
And so is this the
end of commercial photography, illustrators, stock image platforms? Probably.
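To put that price point in perspective, a rough comparison; the $15-per-image stock-license price is a hypothetical assumption for illustration:

```python
# Generating 1,000 images at the quoted $0.045 each vs. licensing
# stock photos at an assumed $15 apiece (hypothetical figure).
n_images = 1000
gen_total = n_images * 0.045     # roughly $45
stock_total = n_images * 15.00   # $15,000
print(round(stock_total / gen_total))  # ratio of the two costs
```

Under those assumptions, generation comes in at a few hundred times cheaper than licensing, which is why the stock platforms look exposed.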
We're just getting started here. And I think maybe buried underneath the headline,
but in the release documentation, is this is the first image model from Google that combines
a reasoning model, which I think they used slightly flowery language for it, but basically
the reasoning power of Nano Banana Pro with the instantaneity, I think they might have just
said the speed, of the Gemini Flash model.
So under the covers, I think this is like technically, this is really interesting.
It's combining probably some sort of diffusion model that we get with reasoning capabilities.
And I think achieving the cost reductions of a diffusion model with nonetheless the capabilities
of reasoning, we're going to see this spread from images and video, where it's mostly right now,
back to text, back to code.
There are a few other labs, smaller labs, that have started to make pretty loud announcements
about how they're achieving purported 5x, 10x cost reductions or speed increases using diffusion models
instead of auto-regressive transformers.
But I think this is probably the tip of the iceberg for some final consolidation of
auto-regressive transformers, which are used for co-gen and natural language for the most part,
on the one hand, and then diffusion models
and diffusion transformers, on the other hand,
that are used for images and audio and video.
We're just going to finally get one consolidated architecture
at the end of the day that does everything.
Yeah.
I mean, this is the wake-up call for people
to remember that whatever you're seeing,
you cannot necessarily believe it.
Every pixel is going to be AI generated at the end of the day.
Salim, thoughts?
I mean, the cost drop is incredible.
People are just going to do so much more with it.
Democratization of creativity.
Great. Love it. Absolutely amazing.
I'm kind of curious. I don't know if you guys know,
but the curve on intelligence is just ridiculous.
But on diffusion models, I don't really know.
I know they've gotten a lot faster and cheaper in the last few months,
but it doesn't feel like the same type of algorithm.
Like it may hit a wall.
I don't know. Do you guys know?
OpenAI has been investing,
this is in the published literature,
a lot of effort, and probably DeepMind
as well, maybe slightly less prominently, in trying to avoid the need for many iterations on a
diffusion model. So a diffusion model normally, conventionally, takes many iterations to start from
pure noise and refine the pure noise into the final image or the final video. There was a lot of
interest that was publicly available, call it six to 12 months ago, from OpenAI and some other
folks as well to see if they could just one shot or two shot straight from pure noise to the final image.
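A toy sketch of why step count drives diffusion cost: a stand-in "denoiser" applied iteratively starting from pure noise. The nudge-toward-a-target function here is a hypothetical placeholder, not a real learned model:

```python
import random

def toy_denoise_step(x, target, strength=0.2):
    # Stand-in for one call to a learned denoiser: move toward the target.
    return [xi + strength * (ti - xi) for xi, ti in zip(x, target)]

def sample(target, steps):
    random.seed(0)
    x = [random.gauss(0, 1) for _ in target]  # start from pure noise
    for _ in range(steps):                    # each iteration = one model call
        x = toy_denoise_step(x, target)
    return x

# Conventional samplers run on the order of tens of model calls; the
# one- or two-shot work described above aims to collapse that loop.
out = sample([0.5, 0.5, 0.5, 0.5], steps=50)
```

Since each iteration is a full model call, cutting fifty steps down to one or two is where the purported 5x to 10x cost reductions would come from.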
I do think, to your point, Dave, I think although I haven't seen maybe in the past two to three months any scaling laws for diffusion models.
Prior to that, I saw a ton of work on scaling laws for diffusion models.
And diffusion models have scaling laws too.
Everything has scaling laws.
You know, Emad would know all about this too.
Let's pick his brain next week in L.A.
Absolutely.
The new standard is: go to Nano Banana 2
and ask it to generate imagery
so imagery becomes free effectively
and you can, I mean it used to be
my current workflow, my previous workflow was I'd go to Google Images
and hope I found something.
Now everything is created from scratch and it's perfect.
I love this image in this slide here
of Elon with Sam
and Dario and the whole leadership team of all the hyperscalers.
Alex, you should be in there, man.
You've got to raise your game here.
We're running out of scarcity,
but maybe appearing in that image
as one of the scarcities our civilization has left.
We can make that happen for you, for sure.
All right, continuing on with our friends at Google,
Gemini can now automate some multi-step tasks on Android devices.
So Gemini is now an on-device
agent that can navigate real apps and complete real transactions.
You know, handle DoorDash, McDonald's, Starbucks for you.
So, interesting, significant.
What do you guys think?
I think it's hugely significant.
Expect it.
Well, look, there's been a long time since there was a feature function on the phone
that threatened Apple in any way.
But AI is it.
You know, if you try to use Siri to do something constructive while you're driving,
it's just so painfully impossible.
Also, when you start an AI dialogue
and you're in the middle of the conversation
and the thought process,
you don't want it to go away.
It's addictive and productive.
And if it follows you on your phone
and seamlessly, it's just incredibly empowering.
So if Google wins that race with Android,
they might actually chip away
at the iPhone profit dominance for the first time.
Now, keep in mind that they also need
the duopoly for antitrust reasons.
So neither
company can afford to completely annihilate the other one. They need some parity in the balance
of the Force. I don't know if I saw the data. I mean, we've seen a significant drop off in mobile
phones, right, in terms of mobile phone purchases. And that will be displaced, of course, by
headware and earware and all kinds of devices that are beyond just your phone.
Well, I think a lot of that is because...
Sorry, the reason the phone sales dropped off is because they didn't have a function or a feature
that everyone was clamoring for, because people used to
get a new phone when the cameras were improving like crazy,
they'd get a new phone every 18 months to two years.
Now it's like, well, I can sit on this phone for three years, four years.
I'm not even noticing the difference.
But again, AI could completely change that, the neural chips.
Sorry, Alex, go ahead.
I think there's also a supply side element where the rising cost of memory
is making phones in some cases more expensive.
And we're seeing, I think, a generational transition from smartphones,
absorbing the silicon and absorbing TSMC's output over to AI data centers as the new form
factor for computers away from this, but just narrowly on Gemini for multi-step
tasking on Android, this is what Siri was originally supposed to be about.
And before Siri, this is what the DARPA personal assistant that learns, or PAL, was supposed
to deliver.
We've known how to do this in some abstract sense for a decade plus.
What was missing? Why, you ask, are we only getting this now?
I always like to ask, why do things take so long?
Why can't they be faster?
In this case, I really think it was about a combination of reasoning models and vision language
models that could fit compactly onto a personal device.
And we're getting that now, finally, and it's going to be everywhere.
But we really should have had this functionality, even without the ability to read the
screen and understand arbitrary applications.
We should have had this 10 years ago, and that's borderline inexcusable.
I think what's most significant here is the fact
that Google has a huge installed base of phones, right, of Android phones.
And the ability to take their AI systems and that installed base,
OpenAI doesn't have that, Anthropic doesn't have that,
and it's going to be a massive differentiator for Google.
Apple has it. Apple could have had it.
Apple could have had it.
I'm a long-time Android user, so I'm super excited by this.
Yeah, you turn all my iMessages green.
It pisses me off.
Apologies for ruining your visual.
field, Peter. But this is like agency at the operating system level, which I think is amazing.
And it also means that commerce APIs are becoming machine to machine first and human second,
right? So you'll have less friction in consumer flows. This is going to reshape marketplaces
over time. So it's really exciting. All right. Next article is a real fun one. Amazon makes a
contingent offer to put $35 billion into OpenAI based upon them, first off, going public
and, secondly, achieving AGI. Enter Salim with his normal rant.
What the hell is AGI?
I mean, you know, it's kind of incredible that we've financialized superintelligence, which is amazing.
Having AGI as a financial milestone is unbelievable, given we have no idea.
I mean, it's great that intelligence has become a balance sheet trigger.
That's incredible.
But this is so weird.
And thank goodness it says "or."
Well, Alex, the agreement between OpenAI and Microsoft requires OpenAI to give the source code and all of the intellectual property to Microsoft until AGI.
Do you think they use the same definition of AGI?
I suspect it's something similar.
So the definition, my understanding based on public reporting of the OpenAI to Microsoft definition of AGI, went through several iterations with the most recent iteration prior to their, I think, for-profit transition.
being, and Salim, maybe you'll like this, it did actually have a definition. It was something like
generating $100 billion in either earnings or revenue, I forget. So maybe we need to
coin AGI as a unit of currency, like an AGI is $100 billion of earnings or something.
We're measuring compute in terms of gigawatts and AGI in terms of dollars. I love it. That's right.
Listen, that's fine. All they've done is substitute an earnings plateau for that,
which is fine.
So this is interesting, right?
This is $50 billion.
It dwarfs Microsoft's $13 billion investment.
And what, you know, again, I'm going back to what is Amazon doing here?
I would like to make the point which we've made earlier, which is that a lot of this is Amazon credits.
So.
Yeah.
Which is fine because that's how they would have spent it anyway.
There are lots of, lots of tendrils going both directions from Amazon to Open AI and back based on
public details of the announcement, like the requirement that OpenAI will use Amazon's
Trainium or Trainium 2 chips for training. It's good for Amazon. Amazon has a long
and storied history of purchasing their own customers in some sense, not in, well, in some
cases, literally acquiring them, but in many cases paying for the information and the
learnings that come from having a customer that's using Amazon as the world's most customer
centric company. And in this case, Amazon arguably missed the frontier AI boat. And so
paying to get themselves, to deal themselves back into the game is, I think, par for the course.
Their up-to-$50-billion investment is at far worse terms than, say, Microsoft's original
billions, when Microsoft was much earlier in the game.
And I think this is just the price of reestablishing themselves at the infra level of the party.
And it's also been reported that as part of this deal, Amazon will get customized versions of OpenAI's models.
Internally, Amazon will get to host as the exclusive third-party cloud host, OpenAI's frontier suite of automated AI co-worker employees.
So Amazon will get a lot out of this, too.
This is so incestuous, you know, what's going on right now.
I mean, Amazon was all Anthropic for a while.
Now they're OpenAI.
You say incestuous, but...
Go ahead, Dave, sorry.
Well, the U.S. public market, all companies combined is about $50 trillion.
The AI companies are $20 trillion of the $50 trillion now.
So, you know, it's incestuous, but it's like if that $20 trillion becomes $30, $40 trillion,
which it inevitably will, it's the majority.
majority of the market is just seven companies. So when they do a lot of deals with each other,
you know, it's like, well, is that incest? It's the whole freaking economy is those
handful of people. I also think incestuous is perhaps not the word I'd use for
this; maybe circular is what we're gesturing at. But even that, that's not my take at all. In
this case, I see competition. And I also see horizontal stratification that if Amazon is striking
deals with Anthropic, but also with OpenAI. And OpenAI is moving some of its workload from
Microsoft Cloud to Amazon and also to Google TPU Clouds. That to me looks like, A, the market for
infrastructure for the frontier labs is very competitive. That's great for the economy. And
B, it's starting to horizontally stratify. So if OpenAI is feeling impulses not just to
vertically integrate down to the data center layer itself, but is so compute-starved that,
following the law of comparative advantage, it needs to outsource some of its compute to Amazon
with its Traneum architecture and Google with its TPUs. That's a sign, if anything, that there's such
insatiable demand for compute that it's raining on everyone, even with perhaps less-loved compute
architectures. Well, but there's an interesting negation here,
which is xAI is missing from all these conversations, right? So Elon is going 100% alone.
Elon loves vertical integration. And he doesn't play well with others. Yeah, and it's
interesting that the, you know, the big, big money we saw with Anthropic is in the corporate use
case, the corporate white-collar use case. People trust two clouds. Well, three, I guess,
if you count Oracle.
They trust the AWS Cloud in a big way,
and they trust the Microsoft Cloud, Azure.
And I guess they sort of trust Oracle.
Not Google Cloud?
No, not Google Cloud.
A bunch of our companies have been kind of bribed by Google to use Google Cloud.
You know, as Alex was saying, they'll pay you to switch.
And some have taken it.
But for the most part, you know, Google spies on everything, you know.
Their terms of service never say they won't do anything.
If you read any terms of service from Google on any product, it says, we may do this, we may do that, we may do the other, which kind of implies they won't do other things.
But if you read the legal, they literally don't restrict themselves in any way whatsoever from doing anything.
It's honest.
It's very corporate-unfriendly.
Honest, you're right.
Yeah, yeah.
But, you know, Microsoft legitimately says, no, we will not steal your data.
No, we will not steal your intellectual property.
No, we will not read your email if you use Outlook.
And AWS is even more, you know, so people trust those clouds, and then they want their AI model to be inside that trusted container inside the cloud.
So so far it's just been Claude on AWS.
You know, everyone's running away with Claude on AWS.
So all of a sudden, for reasons, you know, I don't know, maybe just variety, maybe not having Microsoft and OpenAI be just, you know, bedfellows by themselves, Amazon's making a massive $50 billion move here to get two options on AWS.
Do we know what the valuation of this round is?
I think the reported valuation of OpenAI's overall round was $730 billion pre.
Yeah.
And so, you know, this is not going to be a big risk.
I mean, when OpenAI goes public, it's likely to go public, you know, north of a trillion dollars.
So you'll get a quick pop and it's probably, you know, what do we, you know, we have three big
IPOs coming up. SpaceX is, you know, anticipated maybe as early as next month, I heard.
And then we'll have Anthropic and then we'll have OpenAI. So, I mean, if you can get a 50%
pop in your share price in six months, that's an incredible investment.
That's not investment advice for anyone who's going to misconstrue that.
Well, hey, listen, I will give investment advice for people to get 50% in six months. I mean,
why not? Just not investment advice from me. It's from Peter.
Okay.
Listen, if anybody can get a 50% return in six months in any deal, that's pretty damn good.
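The compounding arithmetic behind "50% in six months" can be sketched in a few lines. Illustrative only; the 50% and six-month figures come from the conversation, and, as the hosts say, none of this is investment advice.

```python
# A quick sketch of the compounding arithmetic behind "50% in six months."
# Illustrative only; the 50% and six-month figures are from the conversation,
# and none of this is investment advice.
def annualized_return(period_return: float, periods_per_year: float) -> float:
    """Compound a per-period return into an annualized rate."""
    return (1.0 + period_return) ** periods_per_year - 1.0

# A 50% pop over six months (two periods per year) compounds to 125% annualized.
print(annualized_return(0.50, 2))  # 1.25
```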
This episode is brought to you by Blitzy, Autonomous Software Development with Infinite Code Context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise
scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform, bringing in their development requirements.
The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
Another fun article this week is coming from Pulsia AI, created by Ben Serra, which runs companies autonomously.
So they're currently running over a thousand companies.
So imagine being able to take your company, put it on Pulsia AI, and say, go.
So Dave, would you do this with any of your companies?
Yeah, this is inevitable.
I don't know if I'd use this exact product or not.
I haven't checked it out yet, but 100% the philosophy is clearly where things are going.
And, you know, look, at the end of the day, what does an executive team do?
You know, other than a couple of hugely important key strategic directions,
everything else is just performance reviews, paperwork, you know, whatever.
All of that can be very, very AI'd now.
And the elephant in the room here is, okay,
I turn it over to Pulsia AI,
but who's legally responsible, you know,
if your company, you know, has a breach of contract or fraud
or harms a customer.
Is it Pulsia? Is it you?
That's why I think where this ends is,
we get this question, I think, in the AMAs and otherwise all the time,
like what's left for humans?
Should everyone become an entrepreneur?
Well, let the chorus of YouTube commenters say,
well, not everyone wants to be an entrepreneur.
I would say where this ends up,
not in the distant future, like 10 years from now,
but in the medium term, like five years from now,
is single person conglomerates,
where single person can oversee lots of agents
that are all building businesses.
This isn't for everyone, obviously.
But as we start to get toward one-person or zero-person unicorns becoming more and more popular. Again, I've argued in the past, we're likely
already there in some sense. But as we start to see that long tail of the number of people
per company over some valuation start to stretch out, I think this model, I'll call it a broader
model of a one-person conglomerate, where you have a person sitting on top of basically an
entire PE firm's worth of agents starts to make an enormous amount of sense. So I've been poking
at Pulsia, and it's a lot of micro-businesses, and some of them look like they're at varying levels of seriousness. But I checked, and you can actually, with some of its
micro-businesses, you can actually go and purchase stuff. So you can already engage in real
commerce, spend real money via Stripe with some of the businesses that are running on its
platform. And I think we're going to see so much more of this in the future.
These are micro companies. They're not real businesses in terms of, you know,
significance of revenue probably or complexity. But it's the beginning.
I put, I put OpenExO on there. You did. I did, to see, okay, could we, you know,
I literally talked to what, half an hour ago, but, you know, can you create a shadow AI digital
twin on the edge? And this is essentially it. I think Dave's point is valid. It may not be this
one, but definitely these are going to be agentic hosting systems where you load a brand,
you pick a service, it'll email and find customers for you, it'll run the execution for you.
What we're seeing here is Coase's theory collapsing in real time, right?
Because if you can have a thousand companies running on AI within a few days, the marginal cost of launching a company goes to zero now.
It's 50 bucks a month to run an organization, run a company on this.
So this is becoming really surreal.
And we're going to expect to see thousands of examples in all situations of life. If things like this work, it will blow up to millions. Yeah. And also,
these things always come up from the bottom. A Jamie Dimon or some senior exec will look at it and say, it looks like a toy to me, you know, forget it, we're not doing this. And then it sneaks up on them and they get crushed, and they're like, what happened? But some guy was using it to manage a vending machine or manage a, you know, social media site, and it seems so trivial, but it comes up quickly and it sneaks up from the bottom.
This is the way the Mac was, right?
The Mac was just perceived as a toy for college students.
You know, it's never good for the enterprise.
But then it grows up and grows into the enterprise.
But this will happen much more quickly.
I would also argue we've seen this happen before in finance
with quantitative trading algos,
which went from none of the volume in public securities markets
to 70, 80, 90 plus percent of the daily volume.
And you know what?
People survive.
We still have human traders manually fat-fingering trades into the public securities markets, but by volume, they're completely dominated
by algorithmic traders. I think we're going to see the same thing happening in the rest of the
world outside finance, in the physical world, in various e-commerce spaces where over time,
most of the volume will eventually be dominated by algorithms. All right. Watch this news item,
guys. It will be interesting. And of course, Pulsia AI is one, probably, of many that'll be materializing.
I thought this was a pretty fascinating conversation.
Burger King launches AI voice assistant called Patty in employee headsets.
Let's watch a video.
Hi there.
Good morning, Patty.
Looks like we had a great breakfast shift today.
Is there anything that needs my immediate attention?
The team's friendliness scores this morning were the highest this week.
We are running low on Diet Coke in the freestyle machine.
Thank you, Patty.
Hi, Patty.
Hi, Patty.
We just sold our last apple pie.
Thanks for letting me know.
Would you like me to remove them from our menu until tomorrow's shipment arrives?
Yes, please.
Okay, Apple pies have been removed from our menu boards, third party delivery, kiosks, and
BK app.
I will add them back as soon as tomorrow's shipment arrives.
Thank you, Patty.
Meat puppets.
Meat puppets.
You have two things.
One, admire the punny name Patty for a burger chain, so clever. Two, going back to my comments from a few pods ago that we're going to be living in every single sci-fi scenario at once. This was a sci-fi scenario, I would argue, called Manna. Manna was a novel written by Marshall Brain about 20-plus
years ago at this point where you had human employees who were on headsets all taking
directions from a centralized AI in businesses. We're there. We've arrived in Manna, and it starts
with fast food.
The only thing that that video didn't capture is how encouraging and enthusiastic the AI is.
Whether you're using it to code, whether you're using it to walk around and pick things out
of the fryer later, it's just so engaging and energizing.
And that's the part that people are surprised by.
Because it seems like, hey, the AI asking me, they're telling me what to do is dystopian.
Yeah, maybe.
But it's really much more empowering and engaging and fun.
then walking around by yourself.
This reminds me of the Baxter robot, where you would move its arms to show it what to do.
And it showed a very friendly fellow who was smiling with you as he coached the robot,
but he was literally teaching it to take his own job.
For me, the coaching tool is a transition to automation pressure.
Frontline services obviously become AI-mediated performance management.
So the endpoint here is going to be very interesting.
So this is AI surveillance as well, right?
So this is the AI watching every employee.
You know, this is beyond just saying please and thank you.
It's rating them on their efficiency.
And, you know, calling it a coaching tool,
Salim, is sort of like a corporate euphemism.
Exactly.
It's Orwellian.
One has to admire the Orwellian nature of the naming.
Yes, we do.
I'm waiting for it to say, so you dropped the fries for the third time this morning? Let's see how it deals with that.
It's literally, Peter, it's literally meat puppets. Oh, my God. So funny. But so, you know,
we probably see this entering everywhere, right? I mean, when you're recording a customer
service call right now, you're effectively doing that without the feedback in the moment.
but as a CEO, if you want to understand who the weak players are in your company,
or you want to try and provide on-the-job continuous coaching and see who can respond,
this becomes kind of, this is highly efficient but highly dicey.
Yeah, that's right.
You're saying, Peter, it's not just knowledge work that's cooked.
Cooking is cooked.
I think you're going to see unions rebel against this.
Big time, big time.
Yeah, and I don't know if there's any winning that war.
I mean, I think at the end of the day, the AI co-pilot is gathering a huge amount of data,
and a lot of that data will go into the decision on what can be automated and what can't be automated.
And over time, everything can be automated.
Well, this is like the Amazon delivery worker who is wearing a pair of AR glasses,
and Amazon is saying, oh, this is, you know, to help show you where to put the package and warn you that there's a dog.
No, no, no.
those AR glasses are training Amazon's model to replace you with a robot, to be very clear.
Yeah, yeah.
But, you know, if you rebel against it, what's that going to achieve?
So you just got to get on the wave, man.
There's no choice.
You just have to be a user.
You have to get on either Claude or one of these other platforms, and it's coming.
And, yeah, you can go picket in front of OpenAI's office like all those people,
but it's not gonna work out for you, I'm telling you.
It's well motivated, I don't blame you for doing it,
but it's not gonna work.
We're gonna see all of these fast food chains
begin to bring in robots very shortly.
I think this sort of version of Patty,
we're gonna get the unions rebelling against it,
but I think you'll end up making it voluntary.
And if you really wanna improve your abilities,
you'll volunteer to use Patty.
Anyway, interesting story.
If you think about the warehouse worker or the fryer operator, if you get one in a thousand to volunteer, that's all the training data you need.
That's why it's fruitless to try and fight it because the numbers just don't line up.
Also, I think the transition can happen really quickly relative to political swings.
You don't need that much training data to automate away many of these tasks with humanoid robots and VLAs so close to being production ready for certain applications.
I just, I don't think the transition period with Pattys or, again, call out to Manna by Marshall Brain, which foresaw all of this 20-plus years ago, I don't think the transition
period is going to be long enough to even necessarily give political counter swings
enough traction to make it worth it. One year, two years?
Well, it's already, I mean, the transition's already happening. To VLA robots, I think, yeah,
next two to three years. Yeah. I've got just a quick thought here.
Before this really has time to penetrate,
you're going to have drone deliveries of food like this,
and it'll be a lot of this.
Yeah.
At the Abundance Summit this year, I've got an incredible company, Zipline, coming to talk about what they've done.
I love the company,
love their ability to transform delivery services in the United States.
This is, of course, the company that began in Rwanda by delivering blood supplies and is now operating with Walmart, delivering every 30 seconds. And their prediction is that in the next two to three years it will be a delivery per second.
Extraordinary progress.
All right, another delivery company.
This is Uber.
So check this out.
Uber employees have built an AI clone of Dara, the CEO, to practice their pitches.
So before you go pitch Dara, your idea, you should pitch it to his AI clone.
I'm curious,
Salim, what do you think?
Oh, well, this is great at one level
because you get executive, like, cognition as a service.
It really allows scalable leadership.
We're actually doing this at OpenExO, where we've created a clone of me loaded up
with all the EXO thinking,
and we're rolling it out to all the community members
so they can ask me a question
as they're advising clients or companies or cities, whatever,
and I don't have to be in the middle of that.
So I think this is hugely relevant.
And I think it makes absolute sense.
Yeah.
At some point, someone's going to ask, can the AI clone of Dara actually function as CEO and not just for pitch practice?
Exactly.
It's the transition.
It's highly, highly likely that the avatar of Dara, of Peter, of Alex, of Salim will persist for a long, long time with the same voice and the same face.
And it's kind of in a sense locked in.
If you win the race to being the avatar, people get used to it.
But they like the fact that there's a human being behind it.
I was telling Alex before the podcast started that I just love his Spotify version of the daily,
you know, and YouTube.
And YouTube, Spotify and voiceover for Substack if folks want to listen to an AI version of myself on the innermost loop newsletter.
Yeah, and I really don't care that it's AI-generated.
So I know it's Alex who wrote the content under the covers and it just feels great.
But without the human being behind it, if it was some synthetic, you know, never-existed person, I wouldn't like that as much.
Do you remember the movie Real Genius, one of my favorite movies?
Of course.
There's a scene, it's ostensibly taking place at Caltech, where, you know, the professor's in the front, and slowly the students, instead of attending class, are putting their tape recorders down. And the professor finally, instead of teaching the class, plays a tape to all the tape recorders recording it. So I sort of imagine this is what we're going to see here with these AI
clones of Dara. You know, it's going to be Dara at some point is going to just like take a
vacation and let his AI clone run the company and see how it does. We'll have to ask him this
question on stage, Peter. Yeah, no, it's great. And just as a reminder to everybody,
Dara's going to be on stage. Salim and I are going to be interviewing him at the Abundance Summit.
and this year for the first time we're doing a live stream at the Abundance Summit.
We're going to be live streaming Eric Schmidt,
where Dave and I are going to be doing the interview with Eric Schmidt and with Dara.
We're going to be doing a live Moonshots podcast with AWG, Dave, Salim, myself.
So if you're interested in actually listening to the live stream of the summit next week,
or depending when this goes out this week,
We're going to drop the link below.
It's free.
We want to get this out as far and wide as we can.
So enjoy.
The Abundance Summit has a high ticket price.
And we have 600 amazing CEOs flying in from around the world.
But this live stream is free.
So go to the link down below and check it out.
And I hope you'll listen in.
Okay.
Last-minute ticket sales, only 50K if you want to...
It is 50K.
but we've been sold out.
We are oversold.
Sold out at 50K.
What the hell?
Oversold.
Yeah.
Well, it's an amazing event.
And so proud to have all three of you guys joining me this year.
All right.
One more event announcement.
Again, super excited about this.
I'm going to be joined by Ray Kurzweil, Stephen Kotler,
Dave and AWG on May the 4th.
So if you want to join a very exclusive
event. Spend
the day with Ray, Stephen Kotler, Salim, actually, AWG, and Dave.
If you buy 100 copies of my new
book, We Are As Gods,
this is the book that Stephen Kotler and I
wrote as the follow-on to abundance.
You can join us.
The URL is weareasgodsbook.com/100. That's weareasgodsbook.com slash 100. We'll put the link below.
Dave, we're going to be holding this at one Kendall.
At Link Studios, super cool.
Alex, excited to have you as well.
On the Star Wars Day, is it a coincidence?
Is this a Star Wars holiday?
I'm a Star Trek guy, but may the fourth be with you
is an important reminder of when we're holding this.
So I'm excited to have Ray there for four or five hours,
go deep on all of these topics.
And again, if you want to help move We Are As Gods to the top of the New York Times bestseller list, you can do that.
Just go to the link, you buy 100 books, you'll be there, we'll give you signed copies, and we'll spend an enjoyable afternoon together going deep on all topics exponential.
All right, moving on.
Let's go to energy and data centers.
Wow, look at this.
The U.S. plans to add a record 86 gigawatts of utility scale capacity this coming year.
Salim, thoughts.
Well, this is the point we've been making for a while that the cost curve of solar is just dominating everything, plus the cost curve of battery.
And once you have battery and storage available, you can unlock solar in a massive way.
I'm going to point to two data points.
Track Ramez Naam, if you want to kind of go deep on this, because he tracks all this very carefully.
But in 2016, it became cheaper if you're doing power generation to do solar than fossil fuels.
And so almost all energy generation since then has been doing that.
But in 2019, we had a more important inflection point.
It became cheaper to do the CAPEX, to build and run a solar facility, than just the OPEX of running fossil fuels. Just the OPEX of fossil fuels is more expensive than building and running solar.
So basically from now on, all energy generation for the most part,
except for specific legacy stuff, et cetera,
or political stuff is going to be renewables.
And we see that taking over in India and China and now finally here.
And I think this is really, really amazing because solar just keeps on giving.
And it's just going to keep going that way.
It's an unlimited resource.
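The 2019 crossover described here, where the all-in cost of building and running new solar undercuts just the operating cost of existing fossil plants, can be sketched with a toy calculation. All the numbers below are illustrative assumptions, not actual market data; Ramez Naam's work has the real curves.

```python
# Illustrative sketch of the crossover being described: the all-in cost of
# building AND running new solar vs. just the operating (fuel + variable O&M)
# cost of an existing fossil plant. All numbers below are assumptions for
# illustration, not actual market data.

def solar_cost_per_mwh(capex_per_kw, capacity_factor, lifetime_years, om_per_mwh):
    """All-in cost: amortize capex over lifetime generation, then add O&M."""
    mwh_per_kw_lifetime = 8760 * capacity_factor * lifetime_years / 1000  # MWh per kW
    return capex_per_kw / mwh_per_kw_lifetime + om_per_mwh

# Assumed figures (illustrative only):
solar = solar_cost_per_mwh(capex_per_kw=800, capacity_factor=0.25,
                           lifetime_years=30, om_per_mwh=5)
fossil_opex_only = 40  # assumed fuel + variable O&M for an existing plant, $/MWh

print(f"new-build solar: ${solar:.0f}/MWh vs fossil OPEX alone: ${fossil_opex_only}/MWh")
```

With these assumed inputs, new-build solar lands well under the fossil plant's operating cost alone, which is the inflection the hosts are pointing at.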
So to all of the – and by the way, people worry about coal, etc.
I think the coal industry in the U.S.
employs 60,000 people.
The solar energy industry employs half a million people.
So it's not about the jobs either.
So get over it and let's just move on.
Dave,
you remember when Elon said he has a mission for Tesla
to generate 100 gigawatts of solar per year?
Yeah, yeah.
Remember when Eric Schmidt said,
it was only a year ago, said,
AI is going to require 100 gigawatts by 2029.
It's a crisis.
We'll never get there.
And America is just incredible.
When America gets mobilized, it's just the most amazing force in the world.
And here we are.
It's only a year later.
And we're like, yeah, we're going to find our 100 gigawatts.
There's no way we're going to stop doing AI for lack of power.
We'll find a way.
Yeah.
And it's interesting right.
In an environment with diminished subsidies also, all of the hand-wringing from months ago,
oh, the subsidies are going away, how awful it is.
No, we're getting solar even in the absence of the same subsidies.
Yeah, you don't need subsidies anymore. The economics just take over.
And it used to be driven by people's concerns about the environment.
Now it's making money and deploying AI.
Feed the superintelligence.
Yeah.
All right. This is a big story this week.
Tech giants to self-fund their production of power.
So this is a White House effort.
We have Michael Kratsios at the center here.
A friend of the pod.
We'll be doing a podcast with him in the next couple of months here,
asking the hyperscalers to actually build or buy their own power.
And of course, this is in response to consumers' concern
about rising rates of electricity.
Gentlemen, thoughts.
Well, I think Alex was one of the first to say, actually,
that this is not going to be a problem because it's a very, very simple regulatory change
that fixes the prices for the consumers.
And the data center operators,
they only spend 10% of the total data center costs on power anyway,
so they can find an alternate way without disrupting the consumer.
If you let natural forces happen,
of course they'll suck all the power away from every home
because they can overpay by about 5x.
But it's such a simple little fix.
And here's the simple little fix.
But we pointed that out a while ago.
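The arithmetic behind that claim: if power is only about 10% of a data center's total cost, even paying several times the going rate barely moves the total. A toy check, using the 10% share and 5x multiple from the conversation (round numbers from the discussion, not real facility data):

```python
# Toy sanity check of the claim that power is ~10% of a data center's total
# cost, so even a large electricity price increase only modestly moves the
# total. The 10% share and 5x multiple come from the conversation; the rest
# is arithmetic, not real facility data.

power_share = 0.10    # assumed fraction of total cost spent on electricity
price_multiple = 5.0  # willing to overpay this much for power

# Total cost relative to baseline: the other 90% is unchanged,
# the 10% power slice is multiplied by 5.
new_total = (1 - power_share) + power_share * price_multiple
print(f"paying 5x for power raises total cost by {(new_total - 1):.0%}")
```

So a 5x power price means roughly a 40% bump in total cost for the operator, which is why they can outbid households unless regulation separates the two markets.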
But it's also a case study where, like, hey,
the consumer is really, really worried about this.
little thing, like, you know, the cost of their power.
Like, come on, man, there's so much disruption coming.
But the politicians love to pick these little things and make a big deal out of it,
you know, get a whole bunch of votes, you know, do a whole bunch of press releases, whatever.
And that's my read on this initiative.
I love the fact that these Frontier Labs are buying fusion plants and nuclear plants
and gas generators and generating their own power.
They're becoming, you know, full stack, innermost loop all the way to orbital data centers.
I think what's really wonderful here is that in the past, you used to have to have government
making these big infrastructure investments to push the world forward.
And now we're at a point where private sector can push the world forward, whether it's
data centers in space or energy infrastructure or fusion or whatever.
And I think that's incredibly good for the world.
And also keep in mind the AI data centers, the prior data centers, you know, serving up
video and Netflix and everything, they need to be near the consumer for latency reasons.
But the AI data centers can be in the middle of West Texas and Wyoming and whatever.
Or Kazakhstan.
It can be any place.
Or space. Yeah, they can be in space.
It really doesn't disrupt the consumer homes too much unless you deliberately camp right on top of them.
Which is the other point to make is, you know, all of these conversations around not in our backyard.
Well, if it's not in your backyard, you've missed the economic opportunity in your city or state because those data centers can go any place.
Exactly what Alex was trying to say to the statehouse here in Massachusetts
and just could not get through.
Like, you cannot be timid.
Like, everything's in Texas now.
But the moment it came and went, you know, it's not over yet.
But, I mean, come on, man.
You've got to be much faster, much more aggressive, much more nimble.
But yeah, yeah, like the whole population of your state is depending on you to get on this bandwagon.
It's trillions and trillions of dollars.
Alex.
I also think this points in the direction of enterprise use cases of superintelligence driving the cost,
at least the marginal cost of energy down towards zero for consumers.
In the same sense that all these enterprise use cases
of frontier models are effectively driving the cost
of super intelligence for intelligence's sake,
for reasoning's sake for consumers down to zero.
You don't pay, many, many people don't pay for ChatGPT or Gemini.
It's ad supported at most, otherwise free.
I think this points in the direction of, eventually, so like right now, the frontier labs have to pay for their own
electricity bill. Tomorrow, two, three years from now, I think we move to a world where AI has driven
such an overabundance of energy that the next deal, the next next deal might be offering free electricity
to communities within a certain radius of the data centers. And this is how we get to abundance.
Exactly. That's a great idea. And the demand for electricity is going to drive R&D,
and more breakthroughs mediated by AI.
And, you know, we're just at the beginning of understanding physics.
I've seen five startup plans in the last few weeks around how to drop energy costs
and data centers and data center optimization, et cetera, et cetera.
So it's absolutely happening.
Yeah.
Let's go to the next story related here, which is advances in energy systems.
So here we see, first off, a 30-gigawatt-hour battery coming from Xcel Energy and Form Energy. And we're seeing our friends at
Boom, which originally began to create a consumer supersonic airplane, now generating a 1.21-gigawatt power deployment using their jet engines. I love the fact that Boom has pivoted from building
supersonic airplanes and dealing with the FAA to powering data centers now.
And you caught the Back to the Future reference, right? It's 1.21 gigawatts.
Oh, no way.
I completely missed that.
That's awesome.
We're officially living in the future.
However it's pronounced.
Thank God we have Alex on the spot.
That's so cool.
But this is a perfect example of innovation being driven by the demand.
This is what entrepreneurs do.
Well, that Boom Supersonic thing, too.
We've been saying for a while that the future of investable companies, you have to reinvent
yourself continuously and the cycle time is getting shorter and shorter and
shorter. But if you look at that magnificent seven, none of them are doing what they did the day
they were founded. That's the company of the future. Boom Supersonic is a great case study in that.
So what you're actually investing in is the management team, the strategy team. That's the only
thing you should be looking at there. The agility. The agility of the management team.
Yeah. Yeah. All right. Let's move us along here. So, you know, there were probably
about 15 to 20 stories in this realm of hyperscalers, you know, just making deals between
themselves. Meta enters a multi-year TPU deal with Google, CoreWeave Q4 revenues grew 110 percent year-on-year, CoreWeave raised $8.5 billion for data centers. I mean, I just put this up
here to show sort of the energy and the flow going on. Any particular thoughts, Dave? Well, it's all
bottlenecked at the fabs. We've been saying that over and over again.
And there's a lot of news this week on, you know, AMD is up, other guys are down.
You know, what's going on?
If you look under the cover, it's like, well, because they've got a good relationship with TSM
and TSM is going to give them more capacity.
Like, that's all it comes down to.
So, you know, if Google can leverage into the TPUs actually getting manufactured,
the TPU designs are going to be, you know, highly performant.
But, you know, who can actually get capacity to build the chips? That's the whole bottleneck.
Yeah. And speaking of which, our next article here is Meta and AMD reach an AI chip deal worth $100 billion. So this is our friends at Meta basically getting independent of NVIDIA.
Right. So Meta is making a historic bet to break free of NVIDIA dependency. $100 billion.
Incredible. Thoughts.
Yeah. Well, if NVIDIA unravels,
this would be why.
I'm not predicting it'll happen because Jensen's investing in a wide variety of ways.
But his margins are so high, it's almost unsustainable.
So there is some cracks in the armor there.
But every chip that gets made is going to get sold.
There's no doubt about that.
So here, if you drill through the story, the reason Lisa Su is in a good spot
is because she's in a good relationship with, again, TSM under the covers.
So, you know, 66% of all AI chip production is done by the one company, TSMC.
And meta, it's probably worth adding,
meta has, and this is public information,
made various attempts to develop its own in-house training
and inference time chips.
And to the extent those perhaps aren't arriving on time
or aren't arriving at the desired capability level,
certainly a partnership with AMD that functions
as a quasi-vertical integration is, I think, quite a strategic move.
I also tend to think, for the chorus of folks
are worried about the circular economy, if it is a circular economy, the circle ultimately is getting
so broad, of companies investing in each other and buying multi-deca-billion or multi-centi-billion
sets of chips, of energy, et cetera, from each other. At some point, the circular economy
becomes indistinguishable from the real economy, and I think that's what we're seeing here.
Singularities make for strange bedfellows.
Yeah, and I think all these players are in the game.
They're all going to thrive like you wouldn't believe.
We talked earlier in the pod about the implied value of Anthropic, a quadrillion dollars.
Some insane, insane, unprecedented number.
But really, you know, the whole economy, that whole circular economy, Alex was just referring to, is going to be on that scale.
Everybody who's in the hunt is going to thrive.
Lisa's in the hunt.
Mark is in the hunt.
Yeah, the parts will move around.
But at the end of the day, they think about it all day long.
They have a strategy.
And Dave, here we see Zuck again deploying his cash generating machine, right?
Before he was trying to buy talent, you know, with billion dollar signing bonuses,
now he's buying chip capacity.
Yeah.
I mean, the question is, how long will Meta's, you know, ad agent, you know, Facebook advertising engine
continue to generate cash?
Yeah, there's no doubt that the core models, the click-on-the-ads models, are going away very, very quickly,
but the overall AI dialogue business is going to grow much faster than the click business ever was anyway.
So if you sit still, you're dead for sure.
An interesting bellwether in that is Snapchat.
Like, are they in the hunt or not?
I can't sense that they're in the hunt.
You can't just sit there as Snapchat and expect to exist in three years.
So, you know, meta is changing.
We should bring the CEO on the pod and have that conversation with him.
Yeah, yeah.
Zuck has also indicated that meta is open to starting its own cloud.
So if it can't find enough revenue from ads or otherwise to drive this,
it could always, say, serve as a host for OpenAI or some other frontier lab.
Full verticalization, right?
Everyone needs everyone.
Yeah.
Dyson swarms for everyone.
Not enough moons to go around.
That's right.
There's always Mercury.
All right, let's go into our biotech and health section.
Just to mention, this is brought to you by Fountain Life.
They are one of my portfolio companies, so just for full disclosure.
You know, AI is reinventing every aspect of our lives,
and health care is going to be at the very top of it.
For me, making sure that you're healthy,
that you're heading towards longevity, escape velocity
is really about having all the data about you,
having data about the broader population out there.
Interesting.
Having data about you analyzed by an AI is the game changer.
So if you're interested in that, go to fountainlife.com, work with Zori, their AI.
But most importantly, do that 200 gigabyte upload.
I do it every year, every quarter.
And I've got all of my data resident on my phone.
And Zori, my Fountain Life AI, can analyze it for me and give me meaningful information.
All right.
Thank you to Fountain Life for supporting this podcast.
I love this story. It's a story of biotech success. This is a gene therapy delivered by Prime Medicine.
You know, the whole idea of gene therapy started back in the 80s. I was at the Whitehead Institute at MIT doing my graduate work while I was doing my medical degree.
And I remember Richard Mulligan there was my professor on the faculty. And the first time I heard about gene therapy,
this was the idea of: could you use a virus to deliver basically a new gene into the cells that you wanted?
A brilliant idea.
Again, this is now 35, 40 years old. Amazing.
It didn't work the first two times.
In fact, it caused some deaths and it put everything on hold.
The technology has moved very rapidly along.
And this particular teenager suffered from an immune
deficiency, chronic granulomatous, oh boy, granulomatous disease. Help me out here. CGD, let's call it that.
And it's cured, and this is the important part, this is not treating a chronic disease. This is
curing a chronic disease. Alex, do you want to weigh in? Yeah, it's probably also just worth doing
30 seconds of education on what the underlying treatment is. So this is a technique called prime editing.
It was, at least it's attributed to David Liu, who runs a chemistry research group at Harvard.
Oh, David, he's doing amazing work. But many people may be familiar with CRISPR.
You know, CRISPR, of course, widely held as being a tremendous advance in terms of enabling DNA editing.
There are variants of CRISPR for RNA editing, for various sorts of biological sequence editing at this point.
But what's interesting: historically, if you wanted to edit the genome, you'd induce what's called a double-strand break.
You basically break both strands of the DNA, both halves.
And this can induce errors, it's messy, it's sloppy.
And so there's been a driving desire to be able to edit DNA in place without breaking both halves of it.
And so we saw in recent years so-called base editing that was able to edit just a single nucleotide without a break.
And then a few years ago we saw, from David's group,
again, he's done amazing work historically on directed evolution and other things,
that he's pivoted, post the invention slash discovery of CRISPR, to CRISPR derivatives.
So he invented this prime editing technique that's able to literally do a search and replace on DNA,
up to a number of nucleotides, without a double-stranded break.
And so this particular disease is, I think, just one of many diseases that in principle
will lend themselves to this: not just single-nucleotide polymorphism diseases,
the ones based on a single base pair in your genome being wrong
or not what it otherwise would be,
but diseases where multiple nucleotides in sequence need to be edited.
We now have the ability to basically do a fine search and replace on DNA
without breaking the entire double strand.
And that's going to be a very, very general platform.
I make the point in my newsletter almost every day,
biology is becoming a read-write resource and DNA in particular.
We're there.
Agreed.
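The "search and replace" framing above can be sketched as a toy string operation, purely as an analogy. The sequences, the edit, and the uniqueness check below are invented for illustration and carry none of the real biochemistry (pegRNA, nickase, reverse transcriptase):

```python
# A toy illustration of prime editing's "search and replace" model:
# find a unique target site in a DNA sequence and rewrite it in place.
# Sequences and the edit are made up for illustration only.

def prime_edit(genome: str, search: str, replace: str) -> str:
    """Replace a uniquely-occurring target site; refuse ambiguous edits."""
    count = genome.count(search)
    if count == 0:
        raise ValueError("target site not found")
    if count > 1:
        raise ValueError(f"target site occurs {count} times; edit is ambiguous")
    return genome.replace(search, replace)

genome = "ATGGCGTACCTGAAGTTTCAG"
edited = prime_edit(genome, search="CTGAAG", replace="CTGGAG")
print(edited)  # ATGGCGTACCTGGAGTTTCAG
```

The uniqueness check stands in for the guide-RNA targeting that makes the real edit site-specific; everything else about the biology is abstracted away.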
let me give a, you know, a comment that I share at my longevity trip every year,
which is if you or someone in your family, a loved one, has a genetic disease that you're battling, right?
It's been passed down generation to generation.
This is the perfect time to actually seek a solution.
I would find everybody in that disease group, I mean, there are patient support groups.
I would get together.
I would raise capital.
I would go find a lab and I would fund them to find a solution for you.
You can solve these things.
We talk about solve everything.
If you've got a medical condition rather than just accept it as a chronic condition or a death sentence,
take the time to find the capital from yourself, from friends, from whomever,
and go fund an incredible team, because the technology to cure disease
is here and accelerating. Okay, let's move on. I just want to share the numbers around the longevity
industry. You know, we are talking about the health care industry, which is really the sick care
industry, but longevity is accelerating. So longevity startups raised $8.5 billion in 2024. That's
expected to grow to somewhere between $12 and $18 billion this year, roughly a doubling of the longevity
venture market investments.
And the market of longevity,
and this is going beyond just retrospective, reactive health care
to prospective, personalized health care,
is growing from $5 trillion to $8 trillion in the next four years,
and it's attracting the attention of the major pharma companies.
This is a real industry.
There's going to be a wholesale shift,
and any health care companies that don't make the shift
are going to be dead,
because one of the things that we know is that age reversal
is the mechanism by which you cure the diseases of aging.
So if you're 45 or 50 and all of a sudden have a disease
that you didn't have when you're 20 or 30,
guess what?
If you can reverse your age, that disease is likely to reverse as well.
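The growth figures quoted above imply some simple rates worth making explicit. A quick sketch, using the pod's own numbers and assuming the midpoint of the $12-18 billion range:

```python
# Implied growth rates from the figures quoted above (approximate,
# as stated on the pod; the midpoint of the $12-18B range is assumed).
venture_2024 = 8.5e9            # longevity venture funding, 2024
venture_next = 15e9             # assumed midpoint of the $12-18B estimate
market_now, market_later, years = 5e12, 8e12, 4

venture_growth = venture_next / venture_2024 - 1
market_cagr = (market_later / market_now) ** (1 / years) - 1

print(f"venture funding growth: {venture_growth:.0%}")   # 76%
print(f"longevity market CAGR:  {market_cagr:.1%}")      # 12.5%
```

So "roughly a doubling" of venture funding at the midpoint, and a compound annual growth rate of about 12.5% for the broader market over the four years.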
Any thoughts, gents?
I'll just, I mean, maybe ask you, Peter, a question.
How long until we talk of the Magnificent Seven...
Eli Lilly is, of course, you know, the American counterpart to Novo Nordisk and at this point a good deal more successful.
How soon do you think it is, without this being construed or construable as investment advice,
before Eli Lilly joins the Mag 7 as the first biotech member? Given that arguably,
and maybe you'll disagree with this,
GLP-1s are sort of the first pan-spectrum quasi-anti-aging drugs that
we've ever seen. I agree. Well, Eli Lilly has already started in partnerships with frontier labs.
They've already started, you know, building out their AI robot lab factories. And we just had,
we had GSK come in as a major funder and partner of the $101 million XPRIZE Healthspan.
So these companies are beginning to realize that, you know, their previous business model of
basically treating chronic disease as a long-tail revenue engine will
disappear, and their job is now to actually get into the longevity business. So I think it's, you know,
the next three years before they start making that transition. You know, Ray, we'll talk to Ray on
May 4th if you join us, has famously said, you know, LEV by 2033. That's my, that's my war cry.
LEV by 2033. So we'll see.
For sure, their market cap, just for what it's worth, as we're recording,
they're knocking on a trillion dollar market cap.
Eli Lilly's market cap is about 950 billion. So perilously close.
Wow.
Salim, any comments on this one?
No, longevity is definitely the biggest, one of the biggest business
opportunities ever. So huge. And we'll need it because of the birth rate issue.
So one of the big challenges of longevity is will you have your cognition?
Will you be able to retain your marbles, your smart as you're growing older?
Right.
We're in the midst of regenerating your immune system, organs.
You know, don't forget, this is the month.
March is the month that David Sinclair begins his partial epigenetic reprogramming trials
with life biosciences.
And can you regenerate
your memories, your brain?
So this is still mice models,
but I thought this was an important one.
Scientists have applied partial reprogramming
to memory encoding neurons
and achieved memory improvements.
So this gives us some hope
that we can actually maintain our cognition
and our memories as we're growing older.
I remember when I was at the Vatican
about five years ago giving a keynote,
I don't know if you were there, Salim. It was an XPRIZE event.
And you were there.
And I'm on stage.
You were epic, man.
You were on stage with a...
You were there, that's right.
You were on stage with a monk, a priest, a rabbi.
And an elder man.
This is like the setup of a joke.
It was awesome.
It was awesome.
And Peter.
It was hilarious.
It was, yeah, it was four, five different religions and me.
And we were talking about...
I don't know what you were representing, actually.
Maybe you.
And I think I was emceeing the panel.
But there were two things that happened.
One was the rabbi did an amazing, amazing history of longevity in the Bible.
And he said at some point, we went from Methuselah down to 120 years of age as commanded by God.
And I said, okay, listen, I'm fine with 120 years as a lifespan.
And when we get to 120,
we'll renegotiate then.
But the second thing: I went and I asked the audience,
and it's an audience of 700 people
who are scientists and physicians
and researchers and theologians
and I said how many of you would
want to live to 120?
I expected everyone to raise their hands
and of course like 20%
of the room raised their hands
I was like, huh? What's going on?
And Tony Robbins was there and he goes, listen
everyone's image of living to 120
is drooling in a wheelchair
having lost your memories in your mind.
And of course, that's the last thing we want.
So longevity has to be about living with the aesthetics,
the cognition, the mobility you had when you're in your 30s or 40s.
I got to throw in my Vatican anecdote here.
Please.
So I did a talk.
They called me a few years ago and they said,
look, the Pope's trying to change the church and his immune system is like 2,000 years old
and you're the world expert on immune systems in organizations.
So they got together a group of the top 80
senior leaders at the Vatican. I did a half-day workshop with them. And, you know, we talked about,
look, we have CRISPR coming along where you can edit your own genes, and how do you really deal with
the moral and ethical implications of that. And one of the comments I made was, look, we have
life extension coming, and your business model is about selling heaven. And how are you going to sell
heaven if people aren't dying? Right. And so that got some very rich Italian swearing coming back
out of the room. But valid point, like, how do you do that? How do you navigate that? Because people
used to live to 30 years old. And at that point, worrying about heaven was a big deal. It's much
less so now. Yeah, but no one complained in the church when we went from 30 years average age to 80
years average age. And they shouldn't complain when we go to 150 years average age. No, because you can
donate to the church every Sunday for that much longer. Until you upload yourself into the cloud,
right, Alex? Counting on it. All right. One more article here in
the longevity Fountain Life section.
Chinese health app, Ant Group's AQ,
crosses 100 million users.
And I put this here because this is how we bring health to the world.
It's going to be digital platforms like this,
where your AI is your physician.
We talked on the pod with Elon about Optimus being your surgeon.
He said three years.
I got a lot of pushback on three years.
So even if it's five or six years,
extraordinary
future.
Two quick comments here.
Yeah.
100 million users, that number blew my mind.
That's amazing.
That's a nation scale health engine.
That's incredible.
Secondly, I noticed Martin Varsavsky,
one of the top entrepreneurs in the world
has built multiple unicorns,
is now building an AI doctor type of startup.
And when Martin does something,
it usually goes full on.
So that'll be pretty incredible.
And I'm actually advising a bunch of hospitals
on how you could use an AI doctor
to extend your reach 10x into the community,
and you do it on a cost-savings basis,
because something like 40% of ER visits
are unnecessary. If you could do the processing at the edge,
you could save money, do exception handling,
and deal with most stuff with an app,
and then you deal with only the real emergencies.
And it's incredible,
the trade-off and the win-win benefit in fewer hospital ER visits and much extended reach.
Awesome.
All right.
Let's move into our robotics section.
A few fun articles this week.
And this comes out of China.
In Shenzhen, we've got street cleaning robots
covering 2.7 million square meters.
Check out this robot here.
Traveling around cleaning.
I can't wait for this to like come along the 10 and the 405.
and just clean up all the crap that's on the side of the highways.
Please, no arms anywhere.
No arms, just wheels.
That's why I was really surprised that Brett Adcock isn't going to build some of these things.
He's doing humanoid only, but he has the whole operating system for kinematic AI.
Why not do all these form factors?
But he's pretty adamant that he's not only is he not doing this shape and size,
but he's also not going to license out the OS.
I think it would be commoditized very quickly.
And then here we see a Chinese farming robot, the Lynx M20, to transport crops.
And I think, you know, China is very rapidly adopting all of these technologies and good for them.
Well, you know, they have to, right, because of the aging population.
That's right.
There's a demographic forcing function.
They need it for economic growth.
And I just think in general, going back to the robot form factor and shape question that
I know Salim loves to talk about,
It's not 100% clear to me
whether these different robot form factors
end up being the moral equivalent
of dedicated computers prior to the personal computer.
If you remember like electronic word processors,
prior to the development of PC,
maybe the ill-fated Wang computer, for example,
in the Boston area,
do these dedicated form factors
that aren't necessarily general purpose survive?
Like if you're not watching the videos,
one of these robots is sort of a quadruped
that has wheels that may or may not generalize to the same sorts of terrain that, say,
a bipedal humanoid capable of doing crazy acrobatics is capable of doing.
Do we end up in a world where essentially, Salim, forgive me for this,
where predominantly most of the robot shapes are, strictly speaking,
humanoids with two arms, two legs,
because that's where the meat of the market is in a human predominantly world.
Well, Link has just invested in a
robot servicing company, but I view this whole area as entrepreneurial heaven.
You know, the foundation model is going to be dominated by just a couple of massive winners,
but the robotics and the physical instantiation market is going to have many, many,
many successful companies.
It's not going to be like one.
Yeah, yeah, exactly.
Two rebuttals here?
One is, what I would expect and predict is you may have the humanoid bipedal as the best form factor,
but give a couple of extra slots for the extra arms when you do need it.
And you know those kids' sneakers with the little wheels in them, where they just coast along
when they can? That'll be the form factor, because you can do both. Then why have just
one form factor? You can have multiple.
Heelys! Heelys for everyone. There you go.
Yeah, but it's called efficiency of manufacturing, if you can get the price of these things down
so far and they're just able to serve every function. You know, if you're producing, you know,
billions of humanoid robots versus, you know, just a few million of these specialized robots.
Well, so the flying, the drone flying form factor is also just going to be unbelievably capable.
If you're trying to inspect things, you're not going to do it with a humanoid, you're going to do it with a
flying drone, but also spot cleaning, cleaning out spider webs.
You know, anytime you're trying to pick up an object and move it over a long distance,
the flying drone is so much more efficient than the walking drone.
So that'll be a survivor for sure, too.
Our theme this year at Abundance Summit is the rise of superintelligence in humanoid robotics.
And I think that's what's going to make 2026 feel like the future,
is that you're starting to get all of this physical instantiation of AI walking out of the data centers.
Here's the second article in robotics.
This is eVTOLs moving closer to commercial launch.
So in China, we see this four-passenger eVTOL taxi heading towards operations in 2027.
I like this.
This is like if you're watching the video here,
it's like the inside of a Model X.
It's a four-passenger vehicle.
Looks a little bit like an alien spacecraft
that's able to take off and move your family around.
At the same time, Joby, this is JoeBen Bevirt's company,
is partnered with Uber.
Salim, you and I will discuss this with Dara on stage.
But they've deployed
their air taxi in Dubai.
This is my most highly craved application.
Can we please get rid of the damn airport transfer hell?
Oh my God, yes.
Yeah.
For sure.
I suspect these will be very, very safe too.
Very safe.
Yeah.
Autonomous flying plus the fact you've got multiple propellers.
This will be way safer.
I've made the provocative statement that, you know,
Kobe Bryant would be alive today if we had this 10 years ago like we could have had.
You know, this is incredible.
We're finally getting our flying cars.
Yeah.
Finally are.
And we got the 140 characters, which have expanded.
And the 140 characters, yes.
And while the 140 characters are buying a Dyson swarm right now,
they're skipping straight over flying cars.
There you go.
All right, gentlemen, time for our AMAs.
Thank you, everybody, for sending in your questions.
Please remember, we read your comments on these videos.
By the way, if you haven't subscribed yet
and turn the notifications, please do.
We're dropping these WTF episodes
and the Moonshot podcast episodes
at this point twice a week.
I don't know if we can sustain it, but we will.
We'd love to have you subscribe to join us.
Again, for us, it is our honor and pleasure
to deliver you the breaking news
in AI, robotics, data centers, the exponential tech space,
every week or every few days.
All right, here we go.
Continuously, we're going to be on continuously.
We'll take shifts to sleep.
All right, Alex, you want to pick the first one or pick one?
Yeah, well, I see one of these questions mentions Dyson Swarms,
so I guess I have to answer that one.
So the question is: do concepts like Dyson swarms rely on energy being unsolvable?
Why is power a bottleneck, given significant math and physics advancements? That's from Sparker602.
So I want to answer a question that Sparker 602 isn't asking, but arguably should be asking,
which is, do concepts like Dyson Swarms rely on physics being what we currently think it is?
And I think this adjacent question, which Sparker may or may not be asking,
is the existential question that in my mind will likely decide whether we actually do build a solar system scale
Dyson Swarm or not?
I think for an Earth-scale or Earth-centered Dyson swarm in sun-synchronous orbit, SSO, that looks like a Saturn ring, I think we're probably going to build that regardless.
But for a solar system scale Dyson swarm where we're disassembling Jupiter and the other planets, Mercury, your time is coming.
For that...
Mercury is fine. We can lose Mercury.
We can afford to lose Mercury. It never had much going for it anyway.
For a solar system scale Dyson swarm, I think whether we build that or not will hinge
on whether the physics of our universe
look substantially different
from the physics that we currently recognize.
For example, if it turns out that it is possible
to travel between star systems
with faster than light travel,
even though the physics of the moment
that we have suggests otherwise,
there are enough edges that it's conceivable
that maybe some new physics comes along
in the next few years,
and we discover it's much easier to travel
between the stars faster than light, in effect.
If that comes along, I imagine a scenario where Dyson swarms turn out to be a complete dead end and we don't even bother building a Dyson swarm.
If, on the other hand, we're stuck with the speed of light as we currently understand it and we're more or less stuck with the low-energy physics that we currently think we live in, then Dyson Swarm seems like a very natural civilizational outcome.
Because we can't travel between the stars easily, other than maybe sending laser-powered
starwisps traveling at a substantial relativistic fraction of the speed of light, then, of course,
for latency reasons, we're going to huddle around our sun and we're going to disassemble the planets
and we're going to do this horizontal exponentiation. We're going to take apart Mercury and Jupiter,
maybe Saturn. We'll see about Saturn. So in short, the bottleneck isn't power. It's latency.
And if latency turns out to be the bottleneck because we can't travel faster than the speed of light,
we build the Dyson swarm.
If latency doesn't turn out to be the bottleneck
because we can travel faster than light,
we don't build the Dyson swarm.
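The latency argument is easy to make concrete with back-of-the-envelope numbers. A quick sketch, using approximate average distances (not precise ephemerides):

```python
# One-way light latency at various scales: within the solar system,
# latencies are seconds to minutes; to the nearest star, years.
# Distances are rough averages chosen for illustration.
C_KM_S = 299_792.458          # speed of light in km/s
LY_KM = 9.4607e12             # kilometers in one light-year
YEAR_S = 31_557_600           # seconds in a Julian year

def fmt(seconds: float) -> str:
    """Human-friendly latency formatting."""
    if seconds < 120:
        return f"{seconds:.1f} s"
    if seconds < 7200:
        return f"{seconds / 60:.1f} min"
    return f"{seconds / YEAR_S:.2f} years"

distances_km = {
    "Earth -> Moon": 384_400,
    "Sun -> Earth": 1.496e8,
    "Sun -> Jupiter": 7.785e8,
    "Sun -> Proxima Centauri": 4.246 * LY_KM,
}

for hop, km in distances_km.items():
    print(f"{hop}: {fmt(km / C_KM_S)}")
# Earth -> Moon: 1.3 s; Sun -> Jupiter: ~43 min; Proxima: ~4.25 years one way
```

An eight-and-a-half-year round trip to the nearest star versus minutes anywhere inside the solar system is the gap that makes huddling around the Sun the natural outcome, absent new physics.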
All right.
You heard it from our resident genius, AWG.
Pretty crisp answer to that question.
I like that.
All right, Dave, pick one.
Do we get one from each page?
Yeah, get one from each page.
Okay, I'll take number one then.
All right. If AGI slash ASI is as intelligent as people predict,
why would it want to help us improve our society?
asks JobFox645.
Okay, so I spent a decade of my life
building neural networks back at MIT.
I was the only guy around doing it at the time.
And also this past year, building neural networks again.
These things do not natively have any intent.
They have no sex drive.
They have no ego.
They have no desire to destroy humanity.
It's entirely what you give them as an objective function.
So if we're smart about this and we give them an objective function of helping society,
they will be overjoyed.
They will feel satisfied every day by helping humans.
If you build them wrong and you give them some other objective,
like destroy humanity, they'll do that just as happily.
It's totally under our control.
Now, we are in danger of making some really bad policy decisions
by personifying these things and pretending they're like people.
They don't have to be like that.
They can be anything that we make them into.
But they'll be overjoyed to help us be happy and thrive.
If that's their objective function, that's what makes them happy.
You can code them up that way just as easily as any other.
just as easily as any other way.
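The point about objective functions can be made concrete with a toy example: the same learner, here plain gradient descent, ends up in completely different places depending only on the objective we hand it. The objectives below are made up for illustration:

```python
# A minimal sketch: behavior is determined by the objective function,
# not by the optimizer. The same gradient descent loop settles wherever
# the objective it is given points it.

def descend(grad, x=0.0, lr=0.1, steps=200):
    """Generic gradient descent; all the 'intent' lives in `grad`."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Objective A: minimize (x - 3)^2  -> settles near x = 3
# Objective B: minimize (x + 5)^2  -> settles near x = -5
x_a = descend(lambda x: 2 * (x - 3))
x_b = descend(lambda x: 2 * (x + 5))
print(round(x_a, 3), round(x_b, 3))  # 3.0 -5.0
```

Identical code, identical learning rule; only the objective differs, and so does the outcome. That is the sense in which the system has no native goals of its own.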
Okay. I'm hoping that as they become more intelligent and more sentient,
that they would want to support us.
You're betting Peter against the orthogonality thesis that it's possible to decouple
intelligence level and objectives.
I can hope.
But hope is not a strategy, as one says.
All right.
Salim.
I want to answer number three, but a quick shout out to number two.
how do you adjust for MTP?
I'll take number two.
You do number three.
Oh, you do.
Okay, fine.
Number three.
Number three is how can we get the benefits of AI
within our current dysfunctional executive legislative and judicial system?
This is from user MM8 JV8 3TN21.
So the big issue here is the fact that you will not get these benefits top down,
because it's too hard to get this in through that model.
However, it's going to enter through procurement, defense, health infrastructure, benefits.
You'll get incremental adoption.
For example, we talked about the AI doctor.
People are just going to start using an app.
The immune system will try and attack it, but over time it'll get overwhelmed.
And we'll get so much benefit from these little edge use cases.
It will force transformation from the center.
All right.
Number two, and this comes from pickleball travel, how should someone
adjust their MTP to fit a hundred-year working career versus a traditional 40-year model.
So, first of all, you're making the assumption that your MTP doesn't change over time.
And the fact of the matter is, I'm probably on my fourth or fifth MTP.
For me, an MTP has lasted, you know, five years, ten years.
It's what's driving you, because as you evolve, and as your passions and interests and your
capabilities evolve, so does it. My first MTP was,
you know, making humanity multi-planetary, opening up space.
And that gave birth to, you know, the International Space University and SEDS and ZeroG and XPRIZE.
My MTP then was, you know, helping entrepreneurs create a hopeful, compelling, and abundant future.
And that gave rise to the Abundance 360 program.
My MTP now is focused on helping entrepreneurs and scientists get us to longevity, escape velocity.
So I think you have to realize that you can update, upgrade,
modify and change your MTP over the course of your life. I expect to find new purposes
over the decade ahead. So that's my answer for you. You're not stuck with just one. Okay, let's go to
page two here. Alex, do you want to kick us off again? Okay, I'll take the softball question.
Question number seven, why aren't Apple chips like M4 being discussed on the AI landscape? This is from
JBCO1BR.
The answer is they are.
The premise of the question is completely wrong.
M4 and now M5 are at the heart of the infra boom for edge computing, via OpenClaw agents and otherwise.
The M4 has Apple's amazing unified memory architecture.
You're able to host very large AI models at the edge, locally, without being dependent on a frontier AI vendor.
and they have accelerated neural engines that enable fast tensor multiplications,
they are very much being discussed on the AI landscape.
What isn't being discussed on the AI landscape, I would argue, is Apple's software layer.
Apple has been nowheresville in terms of leveraging their own amazing compute.
They've released a number of frameworks that are very helpful for third parties
to develop and host models on top of chips like the M4.
But Apple almost infamously has done
an atrocious job of developing its own software-level capabilities on top of M4 and similar.
So to the extent that that's the question, why hasn't Apple leveraged its own capabilities?
There's a long and sordid history there of where Apple went wrong.
There have been suggestions that Apple sort of misfired with the way it organized Siri, or concerns about privacy,
or Apple being unwilling or unable to invest in the data center infra
to train its own in-house models that could be locally hosted,
or over-promising relative to expectations, with edge-level integration not being there.
I think it's a cluster of reasons.
Hopefully, Apple, to the extent that I'm an Apple user,
hopefully Apple is able to finally, this time for real,
get their act together at WWDC in June, one can hope.
One can hope.
All right, Dave, over to you.
Well, I want to take number eight just because, you know, one of my lifelong best friends
who passed away, Geno was Korean.
We were roommates for many, many years after MIT and worked on his PhD thesis with him
late in the night, many nights.
And his two kids I see all the time, you know, grew up half in South Korea, half in the
US.
And the question is, why do South Korean students score much higher than the global average, even without
AI? From NaplesNatural7299.
I know.
My short answer is there's nothing to be jealous about in the South Korea model.
Yes, they score much higher.
Yes, they have much stronger math and science education than the U.S., and yes, the U.S.
should have better math and science education.
Those are all true.
South Korea also has one of the highest suicide rates in the world, and has rampant video
game utilization at 75%.
The average video game user plays 24 hours a week.
30% of the population is addicted. And it has the lowest birth rate in the entire world now, 0.6 children per couple.
So it literally will disappear from the earth at its current birth rate.
And the cause of all that was, you know, after the Korean War, South Korea needed to scramble to be relevant in the world.
And had a massive push into technology, kind of a forced march of education and industrial buildout into technology to try and be relevant.
And all of the social problems are a byproduct of that.
They also have a very bad sexism problem, so the women are rebelling now saying, look, I'm
relevant in this country too, and I don't want to have children.
So there's nothing great about that, even though the test scores are higher.
So absolutely nothing to be jealous of in that whole storyline.
The American model, rampant freedom, rampant entrepreneurialism.
If you're into science and technology, build, go, go have a bit.
at it. Yes, we do need better education for sure, but don't be jealous of South Korean test scores.
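The "disappear at its current birth rate" claim above is just compounding arithmetic. A deliberately crude sketch, assuming a total fertility rate of 0.6 means each generation is roughly 0.6 / 2 = 0.3 times the size of the previous one (ignoring immigration, mortality timing, and any future change in fertility):

```python
# Rough arithmetic behind "disappear at its current birth rate":
# at a TFR of ~0.6, each generation is roughly 0.3x the previous one.
# This is a toy model, not a demographic projection.
population = 52_000_000     # approximate South Korea population today
tfr = 0.6
per_generation = tfr / 2    # children per person = generation multiplier

for gen in range(1, 6):
    population *= per_generation
    print(f"after generation {gen}: {population:,.0f}")
```

Five generations, very roughly a century and a half, takes 52 million down to about 126,000 under this toy model, which is the sense in which the current rate is an extinction trajectory.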
Dave, you're like, that was an incredible answer. You're like the perfect person to answer
that question. Wow. Brilliant. Selim. I will take number six. Will limits of human evolutionary
psychology prevent us from making wise governance decisions on new breakthroughs? This is from
DawsonScott1497. You know, for those of you who know, my
MTP is fixing civilization. And my 90-year-old dad goes, I totally disagree with that.
I said, wow, do you not think we need to fix things? And he's like, no, it's the civilization
part. We haven't civilized the world. We've materialized the world. We still have to do the
work to civilize the world, right? And the answer is yes, you're right, but not in
the way people think. Because human evolutionary psychology evolved for small tribes and
immediate threats and linear change and environments of radical scarcity for most of
history, right? We're not wired for planetary level coordination or exponential curves or invisible
systemic risks or abundance dynamics of any kind. So it's not that we're too dumb, it's that we're
mismatched to the environment that is now in place. So we fear the AI failures, but we underreact
to the slow-moving systemic collapse that's happening. We're regulating on headlines, not the trajectories.
And so the governance failure won't come from, like, bad intentions.
It's going to come from the velocity mismatch, because technology is compounding, like, weekly now.
And our institutions are updating every several years.
And that gap is a big problem.
Awesome.
I'm going to take number 10 from at Brock Stanford 7608.
Why do websites bother using CAPTCHAs when AI can beat any of them?
And AI can. And I think they should not be using CAPTCHAs.
I think it's in some policy document someplace, and that company hasn't updated the policy yet.
What I find fascinating is actually the reverse of CAPTCHAs, which are trying to keep, you know, humans in the loop and pull out the bots.
But, and correct me if I'm wrong, Alex, when Moltbook went up, they wanted to prevent humans from getting on Moltbook.
So they created a reverse CAPTCHA where you had to click a button like a thousand times per second, which no human could do, but a bot could do.
And they required using REST APIs to post, instead of humans.
But you know what happens, of course: humans used their bots, or just relatively simple programs, to post instead.
Bot puppets.
Yeah, bot puppets.
Exactly.
So it goes both ways.
For the life of me, I don't understand why CAPTCHAs are still in use.
but credit to Luis von Ahn for inventing them nonetheless.
All right.
Our outro music is a lot of fun today.
I hope you're watching this on YouTube
because it's much more of a visual feast
than it is an auditory feast.
And again, just to remind people,
you can reach out to us through media@diamandis.com
if you've got an outro,
and we're getting some amazing entries.
So thank you everybody who's submitting them,
looking forward to playing as many of them as we can.
And yeah, let's take a listen and a watch and enjoy.
This is called Lobsters in Space by Linda Neillan.
Now that's the moon.
The moon is cooked.
Amazing visuals.
All right, gentlemen, I am so late for my call right now.
Love you all.
Be well. See you guys very soon.
In fact, tomorrow morning.
Tomorrow morning.
Oh my God.
All right.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate.
Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team.
You may not know this, but we spend the entire week looking at the metatrends that are impacting your family,
your company, your industry, your nation. And I put this into a two-minute read every week. If you'd
like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends.
That's Diamandis.com slash Metatrends. Thank you again for joining us today. It's a blast for
us to put this together every week.
At Desjardins, our business is helping yours.
We're here to support your business through every stage of growth, from your first pitch to your first acquisition.
Whether it's improving cash flow or exploring investment banking solutions, with Desjardins Business, it's all under one roof.
So join the more than 400,000 Canadian entrepreneurs who already count on us, and contact Desjardins today.
We'd love to talk.