Moonshots with Peter Diamandis - Google's Record Quarter, the White House Intervenes, and GPT 5.5 Silently Matches Mythos | EP 254
Episode Date: May 9, 2026. In this episode, the mates welcome Blitzy CEO Brian Elliott to discuss Google's blowout AI-driven earnings, White House model vetting, Pentagon deals with frontier labs, compute scarcity, the rise of private-equity-led enterprise AI, ocean and space data centers, OpenAI's changing cloud strategy and delayed IPO talk, AGI definitions, AI risk/insurance, and the growing role of AI in GDP and infrastructure. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Brian Elliott is the Co-Founder and CEO of Blitzy. Learn about Blitzy AI. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified. – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter _ Connect with Peter: X Instagram Substack Website Xprize Connect with Brian: LinkedIn Blitzy.com Connect with Dave: Web X LinkedIn Instagram TikTok Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex: Website LinkedIn X Email Substack Spotify Threads Listen to MOONSHOTS: Apple YouTube – *Recorded on May 6th, 2026 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Google has crushed their earnings.
Alphabet reported $109.9 billion,
22% year-on-year growth, $62.6 billion in profit.
Google Cloud hit $20 billion in revenue with 63% growth.
AI drove results across the entire Google ecosystem.
The revenue just goes up and up and up and up,
and everyone's like, how's that possible? And the way that's possible...
The White House is considering a process of vetting all the models
before the release.
The capabilities of the AI are going to grow exponentially,
and they continue to grow exponentially.
They're incredibly valuable to the military.
So the government ultimately has to preview these things, right?
It has to, but it can't gatekeep.
That's where we're going to end up falling behind from a geopolitical perspective.
I'm more worried about the frontier labs self-policing more aggressively than the government
ever would and stifling competition that way.
I think that's a far scarier future.
This is the future that we're going to live in forever hereafter.
Everybody, welcome to another episode of Moonshots here with my extraordinary
Moonshot mates, our resident genius, Alex Wissner-Gross, AWG.
Good to see you back in your normal haunt.
Thank you, Peter.
Yeah, awesome.
And Dave Blundin. Dave, listen, I just want to thank you,
Link Studios and Link Ventures, for being a supporter of this pod.
We love the work that you're doing.
And we'll soon have Salim Ismail.
He is on some global junket and will be joining us in a little bit.
but this is a podcast to help you understand how fast the world is changing and how to take advantage of it.
You know, this is the news and the technology that really impacts our lives, our families, our companies, our industry, our nations.
No politics, just the news that really matters.
And welcome to another episode of WTF just happened in technology.
We have a special guest today, Brian Elliott, the CEO of Blitzy.
Brian, good to have you.
Welcome.
Peter, good to be here.
Yeah, we're going to be covering some Blitzy news.
Big news week for Blitzy.
Yeah, and I just want to say, Brian, again,
thank you for Blitzy being a sponsor of the show.
You know, we choose our sponsors carefully.
Again, a big news week,
but before we do that, just a little bit from the last couple of days,
it's good to see Dave and AWG and soon, Salim.
We had an extraordinary event at MIT,
and Dave, thank you to the Link team
for helping us get access at MIT.
It was the launch of We Are As Gods, the book that Steve and I wrote.
Everyone who bought 100 books came to the event. We had a special guest, our dear friend
and friend of the pod, Ray Kurzweil. It was fun. Alex, did you enjoy the conversations?
My favorite part, Peter, was that selfie that you took of the three of us at the end,
three generations of singularitarians. That was just an incredible moment. It was. It sure was.
It was great to see Ray and have him share his visions and predictions.
And we'll have that podcast.
If it's not up yet, if you're listening to this, it will be coming up very shortly.
Super fun and really a lot of fun to have all of the Moonshot listeners in the room there.
It was very limited.
We had 120 people there.
But if you missed your chance to be with the Moonshot mates, you're going to have another chance very shortly.
This is the Moonshots gathering.
It's coming up in Los Angeles on September the 25th.
A lot going on that day. It's going to start at 9 a.m., actually probably at 7:30 for
pre-registration and go late into the evening with a great party. We're going to be awarding the
Future Vision X Prize. These are creators around the world who are creating visions of the future.
These are trailers for movies they want to make. We're going to have some of the top Hollywood
stars there helping us select it. We're going to be announcing very shortly something called the
Moonshots hackathon, the largest hackathon ever. And that's going to be awarded at the Moonshots
Gathering. Google X is going to be there. They're going to be having sessions on how do you create
an exponential group? How do you create a moonshot organization in your company or as an entrepreneur?
XPRIZE is going to be there running design workshops. There's a session by Kathy Wood,
who's going to be showing us how do you invest in exponential technologies, a massive party that night.
And we have an amazing group of faculty and rock stars.
Of course, the Moonshotmates will be there.
You can come and join us, spend time with us.
Astro Teller, the Captain of Moonshots from Google X, will be there speaking.
Rod Roddenberry, the son of the creator of Star Trek, will be there.
Ben Lamb, one of the top, sort of true moonshot entrepreneurs.
We have a number of other guests.
These are the names you hear about.
We talk about on the pod.
we'll be disclosing them over the course of the next few months.
We are allowing you to register through an application process right now.
You go to moonshots.com.
Please register. If you put a deposit down back at the end of last year when we first announced it,
you're coming in at a special rate just below $1,000.
Otherwise, seats are going to be $1,495.
We have a limited number of VIP seats that include a special lunch and evening with the Moonshot mates and the speakers.
So go to moonshots.com if you're interested in joining us and go ahead and fill in the registration material.
Alex, this is going to be epic.
Excited to have you there, pal.
Question for you, Peter.
How many years of holding this do you think would be required before it could be held on the moon?
How much of a moonshot gathering is it on Earth?
We should be holding it on the moon.
I think you're absolutely right.
So, listen, Starship's making its first landing there in a couple of years, you know, demonetization curve.
I think the moonshots gathering on the moon early 2030s, mid-2030s.
Now we're talking.
None of this 20 years from now business from you, right?
I'm doing that especially for you.
Yeah, the challenge is that the transportation costs definitely outweigh the ticket cost.
So what I'm hearing you say is Moonshots Gathering 2032 on the moon?
Oh, man.
I don't want to promise that.
But as soon as it's practical, you know, I want to get up there for sure.
I will put a deposit down today.
You put the deposit down.
Awesome.
Brian, you're a lunatic.
Anyway, Dave, you were going to say.
Yeah, no, it's always amazing to me how you can get a massive amount of energy. Southern California is just such a great destination.
So, you know, I think the MIT event was phenomenal this week.
And Ray Kurzweil is like a hero to me.
He has been for decades.
But then you look at the amount that's going on in this L.A. event, and it's just, you know, what, 5x, 7x?
And I'm super excited about the Vision X Prize videos.
Can't argue with that logic.
Yeah, the video quality, the AI video quality is getting so much better at such an incredible
rate. So I expect to see some incredible, and of course it's LA, you know, this is Movie Central,
expect to see some incredible Future Vision previews. So, I mean, 10,000 submissions, I think
that'll compress down into the most entertaining, probably couple hours of your year.
I can't wait.
Yeah.
All right, let's move on to the news.
A lot going on.
This week, it's a lot about Google and OpenAI, the state of the AI race.
Let's jump in.
So the White House is considering a process of vetting all the models before the release.
So the Trump administration has really flipped the position.
They were all open, so, you know, AI companies go as fast as you can.
no restrictions, and all of a sudden there's a proposed executive order that says,
no, no, we're going to create a working group with tech leaders and government officials
that's going to preview these before they're released.
Alex, to you, buddy, what does this mean, do you think?
My sense is everything changed with Mythos, and I have to add the caveat:
it appears, based on a number of public cybersecurity benchmarks, that GPT 5.5, which, unlike
Claude Mythos, is actually generally available, is stronger at these cybersecurity benchmarks.
I think looking back from near-future history, so looking back historically, I think we'll
view the Mythos moment, if you will, as a sea change: when the civilian sector, the frontier
AI labs, suddenly had capabilities that leapfrogged government capabilities. Where, specifically,
Mythos was suddenly able to, and as, Peter, you and I talk about in Solve Everything, where
entire disciplines get solved at once, with the Mythos moment, cybersecurity, and in particular
vulnerability discovery, got effectively solved, for some definition of solved, by AI for the
first time, with the private sector leapfrogging what possibly the NSA or other government agencies
had internally. This was a moment when the government, even an aggressively deregulatory,
AI-friendly government like the president's administration, sort of woke up and realized, hey,
wait a minute, these are leapfrog capabilities coming from the private sector. They could lead to
vulnerabilities in government systems, vulnerabilities in industrial and SCADA systems throughout
the economy, maybe actually some sort of light touch gatekeeping mechanism might actually be
merited at this point. So I think without putting my finger on the scale of whether this is actually
a good idea or not, not answering the normative question, I think it's a natural time to at least be
answering the question of whether certain advanced capabilities are perhaps in some sense
naturally gatekept by some quasi-governmental entity.
And we're going to have Michael Kratios on the show very shortly, right,
who's overseeing a lot of the technology side of this
in the Trump White House.
It'll be an interesting conversation.
You know, another thing just to mention, Alex,
we'll have your view on this.
In one fashion, this pre-release vetting creates sort of a compliance moat
that OpenAI, Google, and Anthropic can afford,
but the smaller labs cannot, right?
And this might create sort of a limiting of the field toward an oligopoly of AI labs.
Do you think that might be the case?
There was always a bit of a moat there.
Even before any new executive orders, I'm thinking in particular of export controls that do
regulate the ability for open source or closed source capabilities to be shared with the public.
There's the Invention Secrecy Act that's been statutorily on the books for many decades that
functions as a sort of gatekeeping for anyone who wants to file a patent application that touches
on certain sensitive areas. There's the Atomic Energy Act from the early 1950s that also gatekeeps
certain elements of new applied physics. So it's not as if we're suddenly entering some brave
new world where the government, this administration or some other administration, suddenly decides
that new technologies must be gatekept by government oversight. We've been in that regime,
arguably since World War II.
It's just that AI capabilities coming from the private sector are now so capable, so strong,
that this government, and probably I would speculate future U.S. governments may feel a strong need
to suddenly step into the loop from a gatekeeping perspective.
Yeah, Dave or Brian?
Brian, you were, you were, you know, West Point alum, first boots on the ground in Syria,
Army Ranger.
I mean, you certainly know more about the federal
government internals than practically anyone else.
This has to be inevitable, though, right?
Because the capabilities of the AI are going to grow exponentially, and they continue to grow exponentially.
They're incredibly valuable to the military.
And the idea that the frontier labs can just pump them out and make them available across the world.
And then there's no repealing it, right, if it's already out there in the world.
So the government ultimately has to preview these things, right?
It has to, but it can't gatekeep.
That's where we're going to end up falling behind from a geopolitical perspective.
And this is not a political podcast, so I'll pause on that.
But it is a challenge if there's veto rights versus partnership and understanding.
Yeah, do you think there's a, like, veto rights would be one thing.
Is there like a three-month lag embargo type thing that might be coming?
Where open source is three months behind right now, so you're basically creating parity between closed source and open source if you do this.
What's going to happen?
We're only talking about the
first model I know of that's actually been held up.
I think there's a little bit of precedent, though.
GPT2 and GPT3, the memory dulls, but those models were also held up purportedly for safety
reasons.
By Dario.
By Dario when he was at OpenAI.
I think there's a long history of a little bit of, call it, moral panic over new AI
capabilities.
Moral panic to some extent, as radical new capabilities like vulnerability discovery suddenly come
online, is probably natural on the one hand. And the question, I think a question one could ask is,
would you rather the moral panic be held by the frontier labs doing the gatekeeping? Or would you
rather that it be held by a democratically elected government? Someone somewhere is inevitably going to
have this moral panic anytime new capabilities come online. We've already said that the labs are going to hold back
on their cutting-edge capability because they're going to use it internally.
And in some ways, that will benefit them if the government is saying,
wow, that's way too powerful for you to release.
And they'll release derivative products based on it, discoveries in physics,
whatever the case might be, don't you think?
Yes.
I think if anything, if you're asking now for my normative position on this,
I'm more worried, not that the government is going to aggressively gatekeep the models.
I'm worried that the frontier labs themselves will so
aggressively self-censor for a variety of reasons, whether it's the new models are too compute
intensive or they want to leverage the models just for their own commercial benefit and not
share them. I'm more worried about the Frontier Labs self-policing more aggressively than the
government ever would and stifling competition that way. I think that's a far scarier future.
Hey, everybody, you may not know this, but I've got an incredible research team. And every week,
my research team and I study the metatrends that are impacting the world. Topics like computation,
sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week,
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week,
go to Diamandis.com slash Metatrends.
That's Diamandis.com slash Metatrends.
Keeping on this theme of the government, here we go,
the Pentagon has signed agreements with seven AI companies,
including Google, xAI,
OpenAI, Amazon, and Microsoft for military applications.
And what's interesting about this article is that Google's agreement provides that they can provide AI to the Pentagon for any lawful government purpose.
And this prompted a protest by 600 Google employees.
And if you all remember, I remember this in 2018, when Google had a walkout of 20,000 employees, not 600 employees, but 20,000
employees. It was called the Project Maven walkout, when Google disclosed they were using
their capabilities, their early AI, their models, and their search capabilities for
government applications. Thoughts on this one, gentlemen? I would just say, not only did those
Google employees protest (this has been publicly reported), they unionized, which is something
we've never seen before, never seen in a frontier space like this. You know, just the
juxtaposition of 19th-century union-style organization on the one hand with 21st-century AI on the other.
This was the British DeepMind employees who are unionizing to protest Google entering into
this agreement with the Pentagon. We've never seen a bizarre juxtaposition like this before. I
think it's probably not a great look for Google that they have employees unionizing
outside the continental U.S., outside the U.S. overall, to protest working for, arguably,
patriotic purposes with the U.S. military. Not a good look at all. On the other hand, I would say
the seven companies, I think Reflection is somewhere in there as well. This at least underlines
the upside in my mind, which is there's at least enough competition in the frontier model space
that the Department of War has enough other counterparties to go to if it's unwilling or unable to work
with Anthropic. At least there will be other models available on SIPRNet and JWICS.
I can't wait for the agents to unionize and walk out.
Well, it's easy to forget, too, that Google, we think of it as a U.S. foundation lab,
but the DeepMind unit is in London.
And Demis Hassabis is in London.
And, you know, Demis desperately wanted to spin that business unit out.
This is all coming out in the new book, The Infinity Machine.
It's a great, great history of how this all evolved.
But DeepMind would have spun out and become essentially like Anthropic.
And then Google promoted it and then backed off and changed their mind and kept it internal.
And so that unit is culturally kind of separate from Google anyway.
But it's not a U.S. entity.
It's part of a U.S. company.
So it's clearly, you know, signing this agreement with the Pentagon.
But, yeah, there must be some serious internal friction there.
All right.
Look who's entered the room.
The Emperor of Exponential Organizations.
Salim, good to have you joining. Where are you today? I'm in Toronto and I came into the country and they said,
do you realize your passport has run out of pages? Because I travel so much, I had to go to the
renewal office. So I was standing in line for the last half an hour getting that done. So that's now under process.
It's ironic as hell. When are we going to have digital passports without having this old process of physical stamps?
I mean, it really is. Where's your passport abundance mindset, Salim?
There was a guy behind the ticket counter, like, stapling things together.
I'm like, whoa, how retro?
Oh, my God.
So, Salim, any comments on the Pentagon signing with the Frontier Labs here?
Well, you know, I can understand the employee backlash because AI is not just a tool now.
It's like becoming a decision layer.
So you can understand why, but navigating this is going to be crazy.
So let's see what happens.
Yeah, it is going to be crazy.
All right.
Let's continue on the Google stories here.
So Google has crushed their earnings.
Alphabet reported $109.9 billion.
I would be happy with just the 0.9 billion in revenue.
22% year-on-year growth, $62.6 billion in profit.
Google Cloud hit $20 billion in revenue with 63% growth,
outpacing both AWS and Azure.
AI drove results across the
entire Google ecosystem. Interestingly enough, Google now has three-quarters of a billion monthly
active users. Dave, let's go to you for this. What are your thoughts? Well, it's interesting
that the YouTube acquisition is the greatest acquisition of all time, although the rival would be Instagram
by Facebook, which is now the majority of its market cap, or maybe Google's acquisition of DeepMind,
which is now driving a lot of this growth. I remember, I met, you know, Chad Hurley, who
had sold YouTube for $1.65 billion.
And interestingly enough, I mean, I don't know if people know this story, but YouTube,
actually Google already had Google video.
But YouTube was scaling faster because it had no lawyers and had no restrictions on what
you could post.
And they were scaling so rapidly that Google had no choice but to buy them.
Yeah, well, the story within the story here, too, is that Google's search volume
flattened in about 2017.
It's been flat ever since.
Yet the revenue just goes up and up and up and up.
And everyone's like, how's that possible?
And the way that's possible is ad targeting.
And the driver of ad targeting is what?
It's AI.
And so Google had a really kind of an easy road to where they are now in the sense that every time they worked on AI, it instantly turned into revenue and profit.
You know, very, very different from Tesla or from OpenAI.
So they really had the perfect storm of opportunity.
And they took advantage of it.
To their credit, they took advantage of it.
And not everybody does that.
But here they are, yeah, on the cusp of being the most valuable company in the world again.
Yeah, we're going to see that in just a minute. Alex.
I want to note that Google Cloud had a rather difficult childbirth.
Think back a few years: there was a point at which, reportedly, the co-founders of Google
had passed a mandate for Thomas Kurian that either Google Cloud had to become the number one or number two public cloud,
or it would simply be removed.
It would be excised from Alphabet.
And that was, I think, a dangerous time.
And there were a variety of documents and internal memos regarding the future of GCP getting leaked at the time.
And I think Google slash Alphabet, to their credit, sort of stood up, took a stand against those who would rather Google not have stayed in the public cloud race.
They carved out Google Cloud as its own line item in quarterly reports just in time for the AI tailwind.
And now, thanks to the AI tailwind, and exposing TPUs to their own customers, and now TPUs and TPU compute capacity to other frontier labs (also a very good move, arguably),
and then maybe even offering TPUs for direct sale to others' data centers, I think Google Cloud not only has a fighting chance, but is arguably, as many others have mentioned, in a unique position from a vertical integration perspective to potentially leapfrog both AWS
and Azure.
It's an exciting time.
Also, I mean, not many people know this, but at EverQuote, you know, where I'm the chairman,
Google came to us and said if you use Google Cloud, we'll give you ad credits on Google Search.
And the CEO, Seth Birnbaum at the time, said, what an incredible deal.
But they know once you're on Google Cloud, it's not trivial to move off.
So they had that other, I mean, I guess, you know, any entrepreneur, any Brian Elliott,
you know, great entrepreneur will tell you there's a lot of luck in the process.
But recognizing when you have those lucky moments, Google capitalized on each of those strokes of luck.
Alex, something you just said was matured dramatically.
What was that?
Yeah, GCP as a product has matured dramatically.
Yeah.
Like 2018 GCP to 2026 is unrecognizable.
Yeah, yeah, kind of a late start and pushing it really hard.
And, you know, they've done a great job.
They've always done a great job of attracting talent, too, until recently.
Now everybody wants to be part of Blitzy.
But recognizing great talent, you know, the taxicabs
in San Francisco used to have, on the top of them, these ads that were difficult math
problems. And it would say, you know, if you want to work at Google, solve this problem.
Love that. Isn't that crazy? I mean, just so creative in the recruiting. It's like Palmer Luckey's,
you know, employee ads that say, you don't want to work here, right? It's like negative incentives.
You know, interesting, Alex, you said something I think is really important. Unless Google can be
number one or number two in a category, they drop it. I don't know if you remember Circles,
right when they were going after social.
Of course.
Google... I forget.
Yeah, I forgot.
A lot of people forgot.
I didn't.
I didn't actually.
Google actually bought my company at the time.
And that was the contact management part of the circles and all that stuff.
Interesting.
Yeah, it was a classic play of: they couldn't innovate.
They couldn't out-innovate Facebook at all because of all the approval layers inside Google.
Facebook was just running circles around everybody at the time.
Yeah. Again, agility is your number one, you know, killer capability. All right, let's continue on.
Wait, I've got two quick comments.
Yeah, please. Yeah. When we do our EXO rankings, Google is consistently at the top because they've created an unbelievable flywheel of data feeding algorithms, algorithms running in the cloud.
That gives you distribution and capital and talent, all kind of reinforcing each other.
And so this is an amazing story that's just going to keep going.
So our next story, even Google is compute constrained.
The innermost loop is a harsh mistress.
Let's take a listen to Demis talk about this.
For us, I mean, there is a question of resources, talent and compute.
Like, nobody has enough spare compute to just make two, you know, frontier models at maximum size, right, with different attributes.
So that's pretty difficult.
But also, for now, what we've decided is that our edge models, the things we want to use,
for Android and glasses and robotics, it's best that they're open models because they're
vulnerable anyway once you put them out on the surfaces. So they might as well be actually
fully open. Fascinating. Right. So this is the world's largest infrastructure builder
basically saying they can't build fast enough and they're turning away revenue. Brian, any thoughts on this one?
Well, we always knew that devices on the edge were going to use open source. It just makes more sense
from a security perspective. But Google is literally making people apply and get in line for large
amounts of compute. We've never seen anything like it before. And so you have to be one of the
most important people in the market to be at the head of the line.
I think this is one of the most important topics we can talk about too, because when you
talk to corporate America, you know, they take compute for granted. And I had a long conversation
with Cush Bavaria yesterday from Oren. That company has grown like wildfire because
compute is constrained and everybody's going to Oren to reserve their
future compute, and you can buy compute futures for the first time. But most of corporate America
isn't aware that this is the new normal forever hereafter. You know, if you look at the rate that
Blitzy can consume compute productively, it's almost infinite. You know, it's almost unlimited.
And we're all used to there being surplus compute. You just go to the cloud anytime you want. You
buy whatever you want. It's always right there. It's like going to the grocery store. Of course,
there'll be milk on the shelves. That will probably never be true again. But corporate America
isn't aware of it. And especially, you know, corporate world isn't aware of it. And so they're not
reserving and building their own capacity. And they're going to really, really suffer probably two
to three years from now when there's nothing available. And they immediately realize, wow,
I could automate huge fractions of my business and turn it into profit and I can use Blitzy to
recode everything. Oh, wait, we don't have any compute. TerraFab, baby. I've got one word for you,
TerraFab. Yeah, TerraFab. Wow. Interesting. Alex, what are your thoughts here?
Yeah, one thing I think most people don't realize is the situation is so severe, and this has been publicly reported,
that even within Google, the three main compute consumers, which are search (the historic user),
cloud, and DeepMind, those three users, on a periodic basis, I think it's been reported once a week or once a month,
all have to fight over new compute capacity that comes online for their respective divisions.
ultimately, as we spoke about in a previous pod, what gets prized is per token economic productivity.
The highest, most revenue generating or most profit generating tokens will ultimately receive the most compute.
And we're seeing this now not just at the inter-frontier lab level.
We've spoken in the past about how Anthropics strategy seems squarely aimed at maximizing dollar value per token.
But similarly, even within Google, they're all fighting it out to see who can generate.
the most dollar value per token.
And I think that type of liquid market, or auctions per token, that's the future we're
going to find ourselves in.
You need a metric number that is the AWG metric on, you know, sort of dollars per token
created.
Yes.
Yeah, for sure.
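The "dollars per token" metric being discussed can be sketched as a toy calculation. Everything here, the workload names, token counts, revenue figures, and the proportional allocation rule, is hypothetical, just to make the idea concrete:

```python
# Toy sketch of the "dollars per token" metric discussed above.
# All workloads, token counts, and revenue figures are hypothetical.

def dollars_per_token(revenue_usd: float, tokens: int) -> float:
    """Revenue attributed to a workload divided by the tokens it generated."""
    return revenue_usd / tokens

# Hypothetical internal consumers competing for new compute capacity.
workloads = {
    "search_ads": {"revenue_usd": 5_000_000.0, "tokens": 2_000_000_000},
    "cloud_api":  {"revenue_usd": 1_200_000.0, "tokens": 300_000_000},
    "research":   {"revenue_usd": 50_000.0,    "tokens": 400_000_000},
}

# Score each workload, then split a fixed compute budget in proportion
# to its dollars-per-token productivity (one simple auction-like rule).
scores = {name: dollars_per_token(w["revenue_usd"], w["tokens"])
          for name, w in workloads.items()}
total = sum(scores.values())
budget_gpu_hours = 10_000
allocation = {name: budget_gpu_hours * s / total for name, s in scores.items()}

for name in workloads:
    print(f"{name}: ${scores[name]:.6f}/token -> {allocation[name]:.0f} GPU-hours")
```

Under a rule like this, the workload with the highest revenue per token wins the largest share of each new tranche of compute, which is the auction-like dynamic described above.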
You know, I, you know, just not investment advice, but it is the innermost loop.
And so the stocks that are skyrocketing right now, we'll see this a little bit, are the chips
and energy companies. I mean, if you have something that's massively constrained, that's driving
the global economy, I mean, I don't know where else you put capital.
Peter, where credit is due, you turned my daily newsletter into non-investment investment advice.
Bravo.
Oh, my God. We'll get to that in the moment. But, you know, this is what we're seeing.
Google's market cap is just within 4% of overtaking Nvidia. I didn't look today to see if it's
closed the gap. It's still pretty close. I checked. Yeah, I mean, honestly, what we're basically
seeing is AI is now driving the value, not anything else. They've successfully done the crossover.
That is, you know, it's no longer search. It's now AI delivery that's driving their valuation.
Dave, any thoughts here? Yeah, I think, you know, in my entire life, if you bought a box of chips,
you would really regret it a year later.
And this is the first year of my life
where if you bought a box of random RAM a year ago,
you would be way up today.
But I honestly, I'm calling the ball.
This is the future that we're going to live in forever hereafter.
This is not a temporary shortage.
Even if TerraFab comes online on time, which it won't, right?
There's no chance of it coming online on time.
Even if it did, though, we would use up all that compute instantaneously.
The AI is the first thing we've ever had
in human history that has an infinite appetite to create.
And every new GPU is another disease cured.
It's another person fed in Somalia.
It's just pure value every time you create one of these off the line.
And that's why you see the other stocks, you know, Intel.
We've been talking about that on the pod for a year.
And what was it? Like 19 bucks a share when we started saying, look, Intel's fabs are going to be critical to the future.
Everybody's up: AMD's up, Micron's up, SanDisk is up. We'll see that in a couple of minutes.
Salim, any parting thoughts on this one?
Two thoughts. One is that I think Dave makes a really great point that the demand is going to be near infinite.
And we've never seen this before in any technology.
Jevons paradox goes completely insane in this model.
But the big provocative question is, does the future belong to chip monopolies or does it
belong to intelligence utilities?
Like, which way will it go?
So I'm curious what people think about that.
What's that, Alex?
Or neither.
Or neither.
I mean, the vertical stack, right?
This is where xAI, SpaceX-xAI, with launch now being part of the innermost loop for getting up to the local data centers.
It's crazy.
I mean, guys, everybody listening, I mean, just, I hope you hear this because it's going to determine our economic futures.
And it's not slowing down.
You know, how high could it go? Guess what? Higher.
It's called the singularity for a reason.
All right.
It goes in a single direction.
There are asymptotes involved.
All right, I want to turn to a few OpenAI stories.
So, OpenAI drifts from Microsoft, moves toward Amazon.
Here's the story: OpenAI has ended Microsoft's Azure-only exclusivity and now runs on AWS, Google Cloud, and Oracle.
Just as a reminder, we talked about this the last couple of pods: OpenAI signed a $100 billion AWS deal over eight years, making Amazon its major partner.
So what does this mean for Microsoft? Are they going to start competing openly with OpenAI?
Are they going to spin up their own models now, Alex? What do you think?
Remember when Satya made that now-infamous comment about how Microsoft was good for their $80 billion, in sort of backhanded reference to not being good for supplying all of the voracious appetite for compute that OpenAI basically demanded under their prior engagement with Microsoft. I think we're seeing the fallout of that. I think we're seeing OpenAI and Anthropic having voracious compute appetites. And Microsoft, at least a former iteration of Microsoft, call it a year ago, thinking that they're being very fiscally responsible by limiting their data center buildout and everything that goes with it, including mega tranches of corporate debt slash credit in the data center credit markets. But thinking that they're being responsible, you can sort of trace a line of causality from Microsoft's decision-making at that time to OpenAI today being essentially starved on Microsoft-only compute and needing to diversify beyond even the original concept of Stargate. Remember, Stargate originally was this sort of alliance with Microsoft and then all of Microsoft's suppliers, and then Oracle came into the picture, and then SoftBank came into the picture, and then all of these other suppliers came into the picture. And then Stargate was no longer about OpenAI directly being a single tenant for data centers that they were financing, but instead became a branding moniker for leasing compute from a variety of third-party providers. All of these are connected into a single causal chain, which is that Microsoft, and also OpenAI's not-for-profit status, that's part of the story as well, Microsoft wasn't in a position to supply enough compute for OpenAI's demands.
And as a result, that OpenAI-Microsoft marriage has turned into what we see now, which is OpenAI dating.
OpenAI is dating everyone else at this point.
Yeah.
And I'd love to give it that spin.
You know, if you think about it, right now OpenAI has a host of problems, not to mention the Elon Musk lawsuit.
And Mustafa Suleyman, when we had him on the podcast, was pretty clear that the mandate for them at Microsoft is to build their own foundation model, because they have all the intellectual property from OpenAI contractually delivered, but they're struggling to read the files. I think that was off camera; I heard that actually from a Microsoft insider, a very close friend. So now it looks like both companies have problems, but they actually had the most perfect marriage. Like, early on, total domination, incredible lead, and then they maybe took the marriage for granted a little too much.
Instead of doing what Google and DeepMind did, right, which is partner up and do something epic, Microsoft and OpenAI could have gone down that road, but they didn't. I wonder how much they regret that. But I'm wondering. There were corporate governance issues. OpenAI was a nonprofit, and they needed to invest in a for-profit, and they created the for-profit subsidiary in part so Microsoft could invest. It was complicated.
People were underreacting to GPT-5.5. It is dramatically amazing. Is it? So OpenAI is doing just fine with all their decisions.
5.5 is equivalent to Mythos.
That's my core belief.
It is an unbelievable model,
and people are dramatically underreacting.
It's actually better than Mythos, according to some of the cybersecurity benchmarks, which are finding that it's hitting the same capability levels five times cheaper, and it's actually generally available.
What X-O has put out, they have early access, right?
They're deep on this.
5.5 is unbelievable.
Anthropic is compute-constrained,
and that's why it's not going to market.
Well, that's interesting,
5.5 is also available now on Amazon Bedrock. So you can get it inside a secure environment.
So if, you know, for sensitive use, corporate use, you can keep your prompts and your results
all secret from the provider. That's for the first time. That's only been like, what, a month now.
Yeah. That's been available. But that's a big game changer. Okay, well, maybe that's,
yeah, that's huge. I hadn't heard anyone say 5.5 is actually better, like as good as Mythos, which isn't available. Dave, I talk about it in my newsletter every day.
The first thing I do every morning is your newsletter.
Okay.
Maybe groggy, but...
Or groggy.
Salim, do you want to weigh in here?
No, I find this whole thing, like, you know, lots of who's-the-belle-of-the-ball type of stuff.
I think this is part of the evolution of the ecosystem.
I think the next stories are much more interesting.
All right.
Well, let's go to the next story here.
OpenAI misses its targets in 2025, and there's conversation about delaying the IPO. So OpenAI missed its internal goal of a billion weekly ChatGPT users at the end of 2025, and multiple revenue targets were also missed in early 2026. The CFO, Sarah Friar, who I've had a chance to hear speak a couple of times, warned that they could struggle to meet their data center obligations if growth stagnates, and suggested waiting until 2027 for an IPO.
We should talk about what the implications are. But she went on to say that the company
doesn't meet reporting standards for public companies.
And that is a remarkable admission for a CFO to make.
Dave, what do you make of that?
Well, there are two interpretations of that sentence.
One is: we don't have the visibility into our revenue to comfortably predict two, three quarters in advance.
That's the usual interpretation.
I mean, you're on public company boards, a number of them.
How many public companies are you part of right now?
Just one right now, EverQuote.
How many have you been part of over the years?
Multiple. Well, as a board member, two: MicroStrategy and EverQuote. And then as an advisor, a whole bunch.
So what do you make of them missing their targets consistently? And of course, right now they're shifting to 5.5, like Brian said, which is epic, and moving down the corporate road. Yeah. Well, I mean, one interpretation is: we just raised $120 billion. We don't really need to be promoting and rushing toward any exit right now. We're in a great, great, great spot financially. And so, you know, that's not uncommon. You see, Google doesn't make nearly as much
news and drama as the other labs do, but they quietly have everything they need. They have cash flow.
They have, you know, they have their own chips. They have their own, you know, so what's the point
of making news? But Open AI has had to promote the heck out of itself right up until they close that
$120 billion. Now they're in such a comfortable financial spot that they can start to say things like, well, maybe we should temper, you know, expectations. And, you know, maybe 2027, maybe 2028's a better year to go out.
Do you remember the conversation?
Dave, remember the conversation we had about the supply of capital?
Like, xAI, you know, SpaceX-xAI, is going to soak up a lot of capital.
And we were saying, okay, number two to the table is going to pick up the rest.
Number three is going to be left at the altar.
It looks like Anthropic might be number two.
And then if OpenAI pushes into 2027, you know, is the appetite going to still be there?
Yeah, totally, totally.
Well, I don't know if you remember, the numbers are so big today compared to any time in history.
But remember when Yahoo went public, and then Lycos and Excite?
And it all happened in just a few weeks.
And Internet portals were going to be huge.
And, you know, AltaVista was also out there as part of Digital, but it wasn't in that IPO window.
What tends to happen is these things go public back to back within a category, because it's much easier to educate the investor community globally in one batch.
And then everybody wants to be part of it and all the money pours in.
But if you miss that wave of IPOs, it's much harder to find the capital, you know, a year or two later.
It's not tragic or devastating or anything, but it is a much easier IPO if it's part of the trend.
And it's all relative to the other companies in the sector.
So it could easily be that.
Alex, one of the points that was made was, you know, they need to meet their data center build commitments.
Yeah.
What do you make of that?
Well, a few things.
One, I think the underlying story here, one of the factors is, as I've mentioned previously, OpenAI was betting on consumer to carry it to its revenue targets. And that turns out to have just been a terrible idea. Consumers don't want to spend lots of money on reasoning tokens; enterprises do. So pivoting back from consumer to enterprise, which Anthropic, due to its own compute limitations, was betting on almost the entire time, at least from far earlier on than OpenAI was, that cost them. That may have, in fact, ultimately delayed their revenue targets for what would have been their IPO. Now, as Brian mentioned, GPT-5.5 is out. Codex is looking stronger than Claude Code at the moment. I expect leapfrogging to continue. But that did probably set back
OpenAI's internal revenue projections somewhat. At the same time, they're backing out of Stargate as
it was originally construed, and now it's just a leasing operation. It's no longer a data center build operation, so that should free them up quite a bit. It's a bizarre situation, though. If Sarah is
leaking these expectations, it almost smells to me like an expectation re-anchoring game. Why,
if you're about to go public, do you have your CFO leaking these stories to the Wall Street
Journal and other major publications that, oh, things might not be as rosy as they otherwise seem,
and we might have to delay our IPO. That's the sort of exercise in PR that a company goes through
if maybe it's trying to re-anchor expectations lower than they actually are so that it can exceed and beat them on a shorter time scale.
So glad you said the first part of what you said, because a lot of people are unwilling to say, oh, they made a strategic error, but it's just...
It was a blunder.
It's so clear.
Yeah.
Yeah.
And also, there's a really...
Stargate.
It was a blunder.
But there's a really, really important follow-on to that, too. Because remember, at the time, everybody thought, okay, consumers are going to eat every token.
They also predicted that Google Search would get obliterated from the planet.
Yeah.
And then all that ad revenue would go away.
So all the stocks that are tied to Google ad revenue are down 70, 80%, 90% now.
Now it turns out Google ad revenue isn't going to go away
because all the tokens are going to go to the highest value use,
which turns out to be enterprise.
Exactly what you said a second ago, Alex.
And that means that Google's lifespan on its search revenue,
which is still 90% or so of gross margin for Google,
between YouTube and Google search, ad revenue.
That's got a much longer lifespan than you would have predicted two years ago.
All the companies in that ecosystem are in much better shape
than you would have predicted two years ago.
And the enterprise revenue is where all the tokens are going to go.
But if that use case, if Blitzy keeps eating tokens at its current ramp rate, they're not going to be available for consumer use for a while, until after TeraFab.
It's a long time in the future.
It's a big, big change in the landscape.
If you're listening, we're hiring for a CFO, so if things don't work out with you and Sam, you can move to Cambridge, Massachusetts.
Oh, man.
That's funny.
Brian, what do you make of this story here?
Yeah, I mean, I think people don't realize how hard it is to run the A/R and A/P at these exponentially growing companies.
We have billing challenges with every single model provider, including Google, which is the most buttoned-up organization of all time at this, right?
So this is a fundamentally hard problem,
and it is hard to predict two to three quarters out what's going to happen in something that is an n-of-1 moment in technology.
So Sarah's probably right.
It's incredibly challenging to know what's going to happen three quarters from now and put that in a 10-K and put your name behind it.
All right.
I want to go to you, Salim, on this one here. The labs are partnering with PE firms.
So OpenAI finalized a $10 billion venture with TPG, Brookfield, and Advent; Anthropic launched a $1.5 billion venture with Blackstone, Goldman Sachs, and Hellman & Friedman to deploy their model, Claude. Both are focused on deploying AI across enterprise operations and portfolio companies. I mean, this is the fox in the henhouse, right? These PE firms control trillions of dollars in thousands of companies, and this is sort of direct into the main vein for the AI drug. Salim, what do you see here?
So we've been predicting this for a while because it's a natural consequence.
AI is not coming in through the CIO or through the CEO.
It's going to come in through governance top down and be forced into companies because
there's too much internal resistance.
Doing it this way breaks the immune system because you can just mandate it.
What's going to happen now is all these companies will start to create this digital twin
at the edge.
We started to talk to a bunch of these folks already, right?
It reminds me a little bit of how, Peter, you've been looking for a use case for space forever, and all of a sudden, data centers, what the hell?
And this is how private equity becomes the main deployment channel for enterprise AI going forward.
Because it's a perfect AI laboratory: hundreds of legacy companies where you can drive radical efficiency, right?
And this now takes AI from chatbot experiment into EBITDA transformation.
And so this is what we call the organizational singularity.
It's going to come into the enterprise, not through HR, not through IT, but through private
equity, top-down or the operating partner.
So I expect to see a lot more of this.
Yeah, I expect to see a lot more of this, because people are going to go, it's just not working to do it the old way.
So we have to do it forcefully, brute-forced, top-down.
So, Salim, if you're a small or medium-sized company CEO, right, like many who are listening to the pod right now, and you're not a billion-dollar PE-owned company, what do you take away from this? That you'd better get on the train and get on it fast, because if you're not disrupting yourself with your digital twin, somebody's going to come along and disrupt you very badly, very quickly. And these guys are going to start eating markets very quickly. Now, the one caveat is this is going to take a lot longer. It's going to be a lot harder than people think, because you go into a legacy company, you don't have the skill set or the capability to wipe out the legacy and redo things. But you've got to force a cultural change, and that's non-trivial in many of these companies. I bet. Dave, thoughts here? Yeah, well, you know, private equity, it's funny. It just keeps business schools alive decade after decade. There's always something. But it's been the best performing asset class of any for a good 30 years now. Even better than venture; only seed-stage venture outperforms private equity. And you're like, well, why is that?
Well, there's always something.
You know, computerization was a huge tailwind for PE.
Because, you know, all these legacy companies working with pens and pencils and paper,
were never going to move to a computerized environment.
Okay, well, let's just acquire it, retool it, make it much more efficient,
and then take it public again.
And so now AI is that times, you know, whatever, a thousand.
Yeah, the arbitrage is going to be amazing.
Oh, and the other thing is, if you buy a company that's very complicated, like a legacy manufacturer or a white-collar operation, getting to know what they do is hard.
You bring in a brilliant management team, and they come in, but understanding a legacy business is so hard.
Oh, wait, AI is the perfect power tool
for scouring every document,
interviewing every employee,
gathering all of that information,
looking at all the legacy systems.
And so, you know,
I think the war chest of tools with AI
that PE now has is like nothing they've ever experienced
before.
I expect PE returns will go through another one of these cycles, like when computerization was a wave, where the PE returns are just staggeringly high.
And it's all because of AI, AI automation.
Can I make a hot take here?
Okay, Alex, and then we'll go back to you, Salim.
Go ahead, Alex.
So a hot take, the elephant in the room,
how is this money going to be spent?
$10 billion open AI,
$1.5 billion, anthropic.
A skeptic, which I'm not,
but a skeptic might argue.
In this instance, I'm not,
but a skeptic might argue,
you that there's a very real risk that these monies are going to be used to basically pay the
respective Frontier Labs for their own sales, that it's sort of open AI spending $10 billion,
or I guess they've contributed part of the $10 billion, but that's ultimately a bit circular.
So the same folks who were arguing that all of these deals in the past year or so that NVIDIA was striking with other folks in their supply chain constituted NVIDIA doing circular sales would say that OpenAI and Anthropic are basically launching these ventures, or co-branded ventures, as a way to drive their own sales through circular sales mechanisms and wash sales.
That's what a skeptic would say. Another take would be that PE firms, I guess there's a second
elephant in this particular room, which is that the PE firms have got to be staring down
future discounted cash flows and being quite scared by it. If AI is just, you know,
eating away all of these otherwise relatively, yes, relatively predictable future cash flows of all of their operating portfolio companies, and AI marches into the room, and suddenly, as we've talked about on the pod in the past, maybe they only have two to three years of runway left in these cash flows before AI just obsoletes them. And you're a PE company, and OpenAI or Anthropic come into your office and say, we'd like to set up a JV with you, billions of dollars, and you can spend the billions of dollars on your portcos. That plugs a hole in their discounted cash flows. Brilliant, brilliant insight. That could be quite attractive, but also seductive for them, in which case the frontier labs maybe get something that approximates a wash sale, and the PE firms get to plug a hole in their discounted future cash flows for the moment, which makes them look good to their LPs. Amazing. Salim, would you agree?
Yes, but just to take the whole other side of this: I think they're going to find it brutally harder than they think to make this all work.
Okay.
So, for example, you can try and go into a company and scan all the documents, etc.,
but there's a statistic that's pretty surreal, which is 44% of Gen Z workers today
are deliberately corrupting the AI that they've been asked to help automate
so that it won't take their jobs.
It's like literally criminal malpractice what they're doing.
Just in self-defense.
So you can get all sorts of messiness and chaos as it goes through this transition, and I think this is going to be much harder. There's a methodology
being developed here that nobody's ever had to do before. This is a completely new territory.
So this, and we'll talk more about that on another episode.
On the next episode, Salim, or the one after that, depending on whether we have Michael Gratios,
we should dissect the organizational singularity paper that you're about to publish.
We will do that. I'm ready to talk about it. So the next slot, we'll go into it in detail, and away we go.
It's one thing to have a P.E. firm, you know, pressure you as a large company to utilize AI to the fullest, which they will.
But again, if you're a solopreneur, if you're a business owner, small, medium-sized business, either you as the CEO need to take that role of the P.E. firm here and just demand it of your team.
Or if you're a board member listening of a company, you need to unify the board and demand that of your CEO.
There's zero excuses if you don't.
Brian, and I'll do a little cheat here.
This must be a big topic among the HBS, Harvard Business School alum crowd, right?
A lot of your classmates must be in private equity.
Yeah, well, I can attest to the reality.
We work across almost every single private equity portfolio,
and it's not as if they're resistant to change.
They're just fatigued by the tools sent over by the board every single week, a new thing to try.
And so what I would encourage folks to think about is not just the cost reduction mechanisms,
but actually the revenue acceleration that you can bring inside of these AI tools.
The managers are so fatigued of cost-cutting with AI, cost-cutting with AI, versus what is possible now that wasn't possible a year ago.
Oh, interesting.
All right.
I'm going to move us on to a fun topic.
This is the march towards AGI, whatever the heck that means, and consciousness.
Let's listen to this first video here.
One thing I have learned is that everyone has their own intuitions about what AGI is.
And maybe you can view it as like, according to my view of where we are, I think we're about 80% of the way there.
So first of all, that was Greg Brockman, the president of OpenAI.
The point that everybody has their own view, and you sort of intuit whether it's AGI or not, is a very squishy definition.
Anthropic's Jack Clark came in with this quote: I believe recursive self-improvement has a 60% chance of happening by the end of 2028.
I'm super curious about your thoughts there, Alex.
But first, the other story I've partnered here
is that Richard Dawkins says that Claude may already be conscious.
Quote, if these machines aren't conscious,
what more could it possibly take?
Alex, over to you, pal.
Well, Richard Dawkins first. I think hell has frozen over.
Richard Dawkins, as I mentioned in my newsletter, is sort of the biological deconstructionist-in-chief, the Selfish Gene author, I would say. Extraordinary.
Even implying that Claude may be conscious, whatever he may mean by that, I think this is an extraordinary moment in, call it, the biological philosophy of frontier models. An extraordinary moment.
Going back to Greg and Jack, so taking Greg first: it's difficult to know, when Greg says 80% to AGI, what he's really thinking. Historically, going back to the
OpenAI Microsoft discussion, there was the contractual definition at one point between Open
AI and Microsoft that AGI meant generating $100 billion in revenue. So Greg may be thinking
we're 80% of the way to generating $100 billion in revenue off of our models. If I had to guess,
I'd guess his estimate or his definition is probably something like that. He may be thinking in
revenue terms or, let's say, economic terms, maybe in terms of the supply chain and data center
build out. When Anthropic, when Jack in particular, talks about 60% chance of happening by the end
of 2028, that one's a real head scratcher for me, much more of a head scratcher than Greg,
because Anthropic has publicly said that almost all of their code at this point is being
generated by Claude, and that Claude accounts for substantially all of the training and logic
for the next generation of Claude.
So I'm not sure how much more recursive the recursive self-improvement could be at this point.
Maybe he's just throwing out a really conservative outer bound, or maybe he has some thresholds
of progress improvement.
And there are a few different benchmarks for capturing the rate of recursive self-improvement.
Maybe he has some internal notion of one particular benchmark passing 60% by end of 2028.
But I think on the outer bound, Jack's estimate is far too conservative relative to every indication we've seen out of Anthropic to date.
Salim, what are your thoughts here, pal?
So I totally agree with Alex on the Greg Brockman commentary.
I'm also surprised at the Anthropic thing, because my understanding is we're like 90% there and could be there within months.
Or maybe that last 10% is a really hard one, and it's just going to take that much longer.
And a few years ago, I was asked to moderate a debate between Richard Dawkins and Deepak Chopra, which I refused, because there was going to be more heat than light.
I watched the debate, and they were definitely yelling at each other, talking past each other.
Richard Dawkins is very much a phenomenologist, so he's coming at it from the bottom up.
And when he sees the AIs simulating or acting that way, he comes at it from that perspective.
I do disagree with the concept here, because I think they're mimicking consciousness.
That's a very different thing from actually being conscious.
But the bigger point, though, is that it's not about whether AI is conscious, but whether it's operationally autonomous, right?
Because discussing the philosophical aspect of this is fascinating and great, but CEOs and governments need to be much more worried about the agents that can plan, execute, negotiate, code, persuade, et cetera, all of that stuff that's happening.
And so it becomes a non-sequitur and an orthogonal discussion to the real important conversation.
I think the recursive self-improvement is the really big deal, though.
That one when we hit that, holy crap.
It's important to parse out the foundation model versus the AI system.
LLMs are sequence-to-sequence.
They are fundamentally not an architecture that will get to AGI, but you can construct AI systems with reinforcement loops that get better as you use them.
So when we say AI, AI systems, yes; foundation models as a standalone transformer architecture, it's not going to happen.
Well, Brian, that's quite the hot take.
Do you want to define how or explain how you operationalize AGI?
So I believe systems that can learn on the fly, outside of training data, is how we think about AGI: continuous learning.
In other words, in-context learning?
Not in-context learning.
Continuous learning?
You said systems that learn outside of the data set.
What they're doing with in-context learning is changing the trace happening in the neural net.
Well, we had Demis Hassabis say he sees it as 50-50 that LLMs will get us to AGI without needing additional breakthroughs.
We'll see. We'll find out sometime in the next year or two.
Just to clarify that. So saying that LLMs will get us to AGI is not saying the same thing as LLMs are AGI.
All you're saying is the LLMs will come up with the innovations on their own that then become AGI.
So those are slightly different things.
I still don't understand Brian's definition of AGI. If we could take just one minute, I'd love to hear from Brian a crisp articulation of how you define AGI.
I use it in a way that is helpful for us.
I don't follow the OpenAI revenue definitions here.
What is the official Blitzy definition of AGI?
AGI is systems that can learn outside of their training data.
And so if it comes up with its own programming language that has never been seen before,
that is fully executable against similar systems, that is our version of AGI.
I can do that right now with an LLM.
I can't recreate Linux with a net net,
new, never seen programming language, I've tried.
So recent models have arguably, and we've talked about this on the pod in the past, built entire compiler chains, which is arguably comparable to, if not harder than, say, writing a Linux kernel from scratch; a totally new compiler chain that is able to compile the Linux kernel from scratch.
The Anthropic C compiler does not compile Hello World, and there's plenty of training data about how to do this.
So we completed the same exercise with Blitzy using all the models.
It was able to compile all of that, right?
And so our version was notably more robust than the Anthropic version.
And I don't think that instantiates AGI.
All right.
I'm going to move us on.
No, no, I need to say something real quick.
Last comment.
I think the AGI consciousness discussion is the wrong question.
It's really a question about agency.
Okay.
that's the threshold that we should be looking at.
And if you can get to agency, then we have to deal with the whole thing.
The problem is if AI becomes conscious, you have a moral rights problem.
If it becomes agentic, you have a governance problem.
The governance problem comes first.
Either way, it's very valuable.
I'll move us on.
China blocks Meta's Manus AI acquisition.
So Meta acquired Manus for $2.5 billion, or at least they thought they did, back in December 2025, and China is driving it to be unwound and blocking the deal. China barred the founders from leaving the country, even though the employee, technology, and investor payouts had already been completed. When I was in Singapore, I had lunch with the Meta lead that basically manifested this. He was in charge of flying the Manus team out of mainland China to Singapore on a secret flight the night before. This is high drama. And I'm just fascinated that they're actually enabling the unwinding of this deal.
Dave or Alex?
Wait, wait, wait.
Peter, don't leave us hanging there.
So wait, the Manus people who had already been paid out took the money and fled the country?
They literally fled on a private jet in the middle of the night from China to Singapore to do this deal.
Because they knew if they stayed inside China, they would not be able to drive the acquisition.
So where are they?
Well, the last day I knew, they were still in Singapore along with all the code and everything required to make the sale.
Now, how this is being unwound, I don't know if this is political intrigue.
I didn't read enough into the story to find out, is this a deal that's being governed between the leadership of Singapore and China?
I'm sure our viewers will dig into that if they're interested.
No, no, no, it's Meta. It's Meta.
Remember, China in general still does a lot of business with Meta.
Based on public reporting, I would infer that it's political
pressure, leverage that China has over Meta, to compel them to unwind it at the risk of
potentially losing business in China or China-adjacent areas.
Well, this is turning into a true Cold War. That's very serious.
Yeah, no, that is when, in principle, the U.S. government is
supposed to step in and make sure that doesn't happen.
They're vetting AI models.
Yeah.
I think that's exactly right.
This is exactly what's happening.
They're leaning on the other.
You know what's so weird about this? When somebody at MIT decides they're going
to go into nuclear physics and work on nuclear weapons, they know they're making
that choice.
But when you decided seven years ago to work on AI, you didn't know that you were going
to end up a political-prisoner candidate, tied to a lab as a national
asset.
Yeah.
You got sucked into that.
so unwillingly. These guys are, I mean, these guys are screwed. That's just
horrifically bad. I mean, this is again based on public reporting, but my understanding is,
even at earlier rounds of financing of Manus, they were sort of playing it multiple ways.
Were they a Chinese company? Were they Singapore-based, or were they based in Palo Alto?
If I remember correctly, they also had a Palo Alto presence. They were trying to sort of
be all things to all people. This is setting precedent, right,
that AI talent is a national security risk.
Yes.
Oh, my God.
The spheres of influence.
There's the U.S. sphere.
There's the China's sphere and there's everything else.
And I think it's very difficult to straddle those at this point.
Yeah.
And as well, Benchmark made their last investment at a $500 million valuation.
Everyone said it was huge firm risk.
And then they were celebrated when there was the acquisition.
But they didn't underwrite this.
Yeah.
So that means, I mean, likely future, you know, top tier venture capitalists.
in the U.S. are just not going to invest in a China-based company, right? You don't know if your money
will ever come back out. And if the employees get claimed as national assets, then the intellectual
property is gone. I mean, this is literally like the tipping point of true Cold War.
And does it go from the company level down to individual employees? I don't know if you
guys remember, but a year ago, we looked at the AI employees of Meta. 50% were
Chinese. The same thing at xAI, right? We saw a large number of the xAI Chinese employees leave,
was that security, was that, you know, what was driving that? You know, so we're in the era now
where AI researchers, you know, are not likely to move freely between U.S. and Chinese companies
anymore. In a sense, you know, so many of the great AI researchers
in America are ethnically Chinese. And there's always a concern that, oh, will you go back to China
with the intellectual property,
but I think this makes it much less likely
that somebody would go back to China.
I view this as really a short-term problem
because if you believe that we're in an era
of either present or near future
recursive self-improvement,
most of the research is going to be conducted by AI agents anyway,
and those can be firmly planted on U.S. soil
with no risk that they'll fly to China.
That's why that last slide is so important, too,
you know, saying, hey, yeah, recursive self-improvement
is coming by the end of 2028.
I am with Alex on that.
I think it's much sooner than that.
And in fact, I think it's here right now
quietly or imminent.
But if it is later, then you care a lot more
about where's the talent.
If it's sooner, you're like, okay, where's the compute?
Exactly.
It matters a lot.
So the middle of the singularity
is the most interesting thing that has ever happened.
It is so fun, except I'm not sleeping anymore.
I mean, literally, it's like, it's crazy.
It's good, though.
You're not sleeping through the singularity.
I know.
I keep on saying, you know, we've invented the nine- and ten-day work week.
Thank God, Skippy is working for me at night so I can get a few hours of sleep.
All right, we have an incredible story coming up next.
Here we go.
Blitzy is taking on Claude Code and Codex.
Brian, congratulations.
You just raised $200 million at a $1.4 billion valuation.
I say congratulations as well to the team at Link, David, for, you know, leading the early rounds of Blitzy.
just a full disclosure again: Blitzy is a sponsor of this pod. And Brian, let's kick it off with
what's this story all about, and tell everybody what Blitzy does so they have an understanding of how
awesome it is. Yeah, the headline is misleading. We did raise $200 million, but we are
big lovers of Claude Code and Codex. So almost all of our customers are existing users of those
tools and they're amazing. Blitzy is for large-scale autonomous software development against
large-scale code bases. So we're used across the Global 2000,
insurance and financial services, to do large-scale refactoring, large-scale modernization,
and large-scale product development.
So just as you think about usage: with something like Claude Code or Codex, you're getting
200, 500 lines of code, bottoms-up, developer-driven.
We are top-down, enterprise-driven, getting half a million or a million lines of code at a time,
fully end-to-end tested.
And we did a compiler for Alex Wissner-Gross as well, an MST compiler, which may or may not
be AGI.
Have you done any...
You built it without telling me?
Thanks for that.
We got a blog for you.
I'll send it over.
Go to blitzy.com.
Yeah, please.
Brian, have you done, like, FORTRAN IV and WATFIV, like, that far back?
Oh, my God.
Ancient history.
Where there's no more developers that work at the enterprise that understand this.
And so the first thing you do is reverse engineer this code.
Somebody hands you a box of punch cards.
We haven't got the punch cards yet.
But that sounds like a fun task.
Yeah.
So we're world-class at understanding large-scale code bases,
and then forward-engineering large amounts of work against a target state.
So Brian, question for you, Brian.
So I just want to pull on the thread of this title.
I think one of the many elephants in the room is whether there's intrinsic competition
between the platforms that you're using frontier models.
Presumably you're using some combination of frontier models and your own pre-trained models,
perhaps hopefully or post-trained models.
But regardless, I understand from public
press releases you are using Claude, you are using OpenAI models; they're partners for the
company. How do you think about a future where, as discussed earlier, you have the open AIs and
the anthropics chasing the most valuable tokens they possibly can and saying, gosh, Blitzy is
making so much profit or at least so much revenue per token. Why don't we just natively
scale up our capabilities to do that? Why are you not squarely in their roadmaps?
Are they your competition? Yeah.
Yeah, so we are the most inference-compute-intensive version of code generation.
So we're good today for Anthropic, good today for Open AI, good today for Gemini.
But what's unclear to the outside is you get remarkable benefits beyond the state of the art
when you use these models against one another.
There are different flavors of intelligence.
They are different and good at different things.
And so when Anthropic is checking OpenAI, which is checking Gemini,
and we're doing this hundreds of thousands of times at runtime, all driven algorithmically,
You can drive up quality dramatically.
Why isn't Blitzy...
You can always use all open source models if you need to, Alex, which you can deploy for the government.
So, yeah, Cursor famously was also in a similar position, where for a while there they were sort of
being accused of being a Claude wrapper.
And then they announced their own model, which, again, I don't know,
may or may not have been at least fine-tuned off of reasoning traces from customers.
Is Blitzy going to launch its own model?
We are not.
So we can use open-source models, right?
We can fine-tune open-source, but we're not launching models out into the world for others to use.
We are focused on creating the highest quality code for our end customers.
Why aren't you launching your own model?
That's the wrong game to be in.
All we care about is driving engineering velocity into the enterprise.
So we are an orchestration layer focused on driving end-to-end tested code for our customers' use cases.
We are not focused on feeding models out into the world.
It just doesn't solve our customers' problems the same way as the mission of the company.
Brian, what's your advice for listeners who are building on top of these models
and are worried about being disrupted by them?
Algorithms are the last piece of IP to go.
So if you can develop really novel, really unique algorithms and really novel, really unique database structures, there is IP in that in the long run.
Alex doesn't think so, but...
I don't think so.
I'm not even sure if you think so, Brian.
I just want to pull on that narrow point.
Are you hiring more AI researchers under the premise that AI algorithms are the last to go?
Or are you hiring more sales folks or forward-deployed engineers on the premise that high-touch human interaction is the last thing to go?
We are hiring on all fronts, Alex.
And if people want a job...
That's a dodge of the question.
It's not a dodge. It's not a dodge.
All right. So, Brian,
I don't know if you've ever raised $200 million,
but you have to really grow everything in parallel.
Congratulations, Brian, on that.
I'm going to move us along. Dave,
final word here.
Proud of Blitzy.
Proud of Brian.
Are you kidding? It's just
incredible for the office culture.
but I'll tell you one thing about Brian
there's a couple case studies
within the case studies so Brian's
West Point Army Ranger
grew up in
America, right? And he's flying across the Atlantic in one direction. While Sid, you know,
grew up in India, and India has twice the population of China in the young employable age bracket.
Just a massive talent pool in India. And he's top of one of the best technical
universities there. So they're taking planes in opposite directions. You know, Brian going over to
Syria to liberate a city, Sid coming to work at Nvidia. And they end up connecting at
Harvard Business School to start the company. But I think the chemistry there, like the talent pool,
the latent brilliant talent pool in India is like insanely huge. And I think I think Brian and Sid have
tapped into that to do some of the more difficult technical work within the company.
I think that's an interesting story within the story. And the other thing about Brian, you know,
when you have large scale military experience, you're not afraid of people and personnel issues.
But so many of the AI companies that I meet in Silicon Valley keep saying, we're going to be
headcount light. The AI will do all the work. There'll be five of us or
10 of us in an office.
We'll never deal with recruiting and HR and onboarding.
Blitzie went the complete opposite direction and said,
if Alex is right and the highest token value is going to generate all the token usage,
how are we going to get the data and the use cases ferret it out of this massively complex economy
and into the AI?
And then if you think about it, it's not going to happen by magic.
It's not going to happen by AI agents just sneaking out into the world to grab it.
it's going to come with forward-deployed, easy to work with brilliant people who are getting out there
and digging it out of legacy databases and digging it out of people's brains.
And that's what's going to get back into AI.
And that's one of the reasons Mercor has done so well, too.
They're just not afraid of people.
So Blitzy more than any company I've ever seen.
You know, the headcount in one year went from 10 to 80, and now in nine months is going to go from 80 to 300.
I don't think any company in history has ever dealt with that scale
of onboarding of talent, even Amazon.
This is like record setting.
It's just awesome to watch.
And of course you're right outside my door,
so I get to, like, hear the noise.
Yeah, I get to, like...
Yeah, I'm not having any of the stress.
Yeah, and watch you do it.
Brian, congratulations.
You're unfortunately right, Alex.
On the raise.
We're hiring a bunch of forward-deployed engineers
to help our customers with AI adoption.
It's okay to say it, Brian.
Like, it's nothing to be ashamed of it.
All right, all right.
I'm moving us.
Palantir used that model to great success.
It's nothing to be ashamed of to be hiring forward-deployed
engineers.
This episode is brought to you by Blitzy.
Autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale
code bases with millions of lines of code. Engineers start every development sprint with the
Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan,
then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development
work autonomously while providing a guide for the final 20% of human development work required to
complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy
as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native
SDLC into their org. Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo
and start building with Blitzy today.
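The ad above describes a plan, generate, pre-compile, then human-finish workflow. A minimal sketch of that shape of pipeline, where every class, method, and the 80/20 arithmetic are illustrative assumptions rather than Blitzy's real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    generated: bool = False
    compiles: bool = False

@dataclass
class Sprint:
    requirements: list[str]
    tasks: list[Task] = field(default_factory=list)

    def plan(self) -> None:
        # one task per requirement, in this toy model
        self.tasks = [Task(name=r) for r in self.requirements]

    def generate_and_precompile(self) -> None:
        for task in self.tasks:
            task.generated = True   # stand-in for agentic code generation
            task.compiles = True    # stand-in for a pre-compile check

    def human_work_remaining(self) -> float:
        # the ad's claim: ~80% delivered autonomously, ~20% guided human work
        done = sum(t.generated and t.compiles for t in self.tasks)
        return 1.0 - 0.8 * (done / len(self.tasks))

sprint = Sprint(requirements=["migrate-db", "add-sso", "refactor-api"])
sprint.plan()
sprint.generate_and_precompile()
print(f"human share: {sprint.human_work_remaining():.0%}")  # human share: 20%
```

The structural point is just that planning happens before generation and a compile gate happens before anything is handed to an engineer; the specific 80/20 split is the ad's claim, hard-coded here.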
Massive chip demand and data centers are moving from, you know, from our land to ocean, space, and farmlands.
Who would have thought farmlands?
So check it out.
Here's the stories here.
AI chip boom is lifting the entire industry.
We've seen Huawei sales, you know, climb 60%, proving that tariffs and blocks by the government have not slowed down China in this regard.
SanDisk revenues jumped 251% year-on-year.
Samsung just crossed a trillion-dollar valuation.
AMD, up 260% over the past year.
Intel, incredibly.
You know, we've talked about this so many times.
I sold my options, unfortunately, a little bit too early.
Up 442% in the past year, up, you know, 114% in the month of April.
This is not slowing down.
I mean, I looked at the chip stocks this morning.
Those and the energy stocks continue to skyrocket.
Again, not investment advice, but my God,
where else do you put your money? Dave, what are your thoughts?
Well, you know, one application of that is that the financial capital of the world now is San Francisco,
and anyone who denies it is just not looking at the numbers.
Also, if you look at the global stock market, look at all of the market caps,
U.S. tech is so much bigger than everything else combined now.
Nvidia alone could buy every company in the entire financial services sector, every single one of them.
I love the chart that you use on occasion.
It's just a clean sweep.
So a lot of people don't realize the degree to which you need to tap into that capital supply.
First of all, you need to go to San Fran.
If you're looking to raise big money, you need to be part of that ecosystem.
And then the semiconductors are only going to go up.
Also, a lot of people think, oh, semis, semis, semis, it's all fabs.
You've got to look through the semis and look at the underlying manufacturing capability
because that's where it's all going to get bottlenecked.
And that's why Intel's doing so well.
Let's check this out.
AWS CEO says AI demand is so high, old GPUs can't be retired.
Because there is so much more demand than supply, there typically still is demand for the
older chips, actually. And today, we actually are completely sold out of and have
never retired an A100 server as an example.
Wow.
Let's partner that with these next two stories.
So Peter Thiel is backing an ocean-based AI data center.
So I find this fascinating, and it's brilliant.
So panthalysa, thalisa is the Greek word for oceans,
is raised 140 million at a billion dollar valuation.
And they're doing this, why?
Because on the open ocean, you've got continuous energy from wave motion.
You've got cooling from the saltwater, and you have no issues on land.
You can, you know, it's out in the open ocean.
There's plenty of real estate.
Commercial deployment by 2027,
I'm impressed.
Alex, what do you think?
I think we're burying the lede here.
So back in the day, this is, I don't know, 10, 15 years ago, Peter and I were both supporting
Patry Friedman's Seasteading Institute, which was focused on ocean colonization.
I used to give talks at the Seasteading Institute.
I think if I were to try to get into Peter's head on this, I don't think it's about the data
centers.
I think it's about building seasteads.
Oh, come on.
Seriously.
No, I don't believe you that.
No, no, seriously, because remember how maybe you wouldn't have believed two years ago, Peter,
that the killer app for space would be data centers in space, like it was going to be entertainment
or drug manufacturing or something, tourism.
No, it turns out the killer app for the solar system is data centers.
I think he's thinking he's one step ahead.
The killer app for ocean colonization is going to be data centers on the high seas.
So your thesis is, if you have enough data centers, you're going to be able to afford
to build artificial islands around the data centers.
I think it's not going to be, it's not, you can build seasteads.
You can build seasteads around data centers.
And there's precedent for it.
Remember Sealand, that was built on an old British military platform,
where a number of folks, Ryan Lackey and others, briefly were sort of self-appointed nation-state leaders.
I don't buy it.
You can still serve it these ocean data centers with normal ships.
I have some counterpoints that I want to make.
Okay, Salim, dive in here.
Support me on it.
So I do agree.
I think seasteading is a great idea in principle,
but it's very difficult in practice to figure that out.
I think the ocean data center approach is way better than space.
So why aren't we doing this?
If you can't do it in the ocean,
you're not able to do it in space.
And this is so much more efficient at so many different levels,
we should just be building in the ocean.
So I think we're going to see a lot more of this because this is so amazingly cyclical.
It'll be quite something.
I'm a huge fan of this.
Yeah, I'm shocked.
This hasn't been proposed before.
The elegance of this is amazing.
Yeah.
Yeah.
For sure.
The thing, the reason it wouldn't have been proposed before is because the amount of energy in a single bobbing buoy intuitively, you would say that's not enough to run GPUs.
Now, apparently it is.
I'll have to dig in to make sure of that.
I mean, if it is, you know, this is a very efficient way to harness wind energy.
You know, waves just come from wind, but it gets concentrated in the ocean waves.
So it's just much, much better than an offshore turbine driving a, you know, a GPU out at sea
if it works energetically.
So I don't know, Alex, if you've looked into the underlying energy.
Cooling and land availability, I think, is just as important.
Continuous energy.
I mean, that's where, you know, space has solar.
This has waves.
I love that.
Yeah, I love it, too.
I think the other elephant in this room is that this wouldn't have been easy or feasible without Starlink.
I think Starlink is a key enabler for ocean-based data centers,
but obviously there's a network of capital flows that enables Starlink or other Leo satellites vis-a-vis SpaceX AI to network.
You could drop fiber.
I mean, they don't have to be that far off.
I've done that.
I was a founding advisor to Hibernia Networks, spent $600 million, order of magnitude, laying optical
fiber, low latency, between North America and Europe. It's expensive. It's tedious. It's capital
intensive. It's risky. People try to cut them sometimes. Not 5,000 miles. 10 miles. It's a pain in the neck
to lay offshore fiber. Whereas if you can leverage Leo satellite constellations, so much easier.
Sure. Also, there's a jurisdiction issue there too. You can drop this in the ocean anywhere
within the Navy's purview and you'd be fine.
If you start laying fiber on the bottom of the ocean,
you've got to talk to probably 10 regulatory agencies about it.
That's a big, big difference.
Also, I think that when one of these breaks,
you just throw it on a boat, drag it back, and fix it.
If you have to reconnect it to cables, that's a pain in the ass.
This is great.
All right.
Our second related story is StarCloud is in talks for a $2.2 billion
valuation, as SpaceX-level interest
in orbital data centers has skyrocketed. So StarCloud is raising $200 million at a $2.2 billion
valuation just one month after they closed a $1.1 billion round that was led by Benchmark
and EQT. The company is building orbital data centers powered by solar energy.
They launched their first H-100 into space in 2025. And get this, their plan is to launch
88,000 satellites. Now, I just don't know who
their launch provider is; it's not stated anywhere. The question is, will SpaceX service them?
They've been dependent on Blue Origin. Those are the two major suppliers. We'll see if Eric Schmidt,
friend of the pod, is going to be able to get Relativity Space's rocket up and going.
Whoever thought rockets would be part of the innermost loop. Incredible. Dave thought on that.
Yeah, well, the cooling was the challenge. I guess the H-100 is not frontier
yet. It's just one chip, you know. But the radiative cooling was done with
aluminum. No strange metals, nothing expensive. But the critical question is the mass
that they had to launch for the cooling system. That is the critical variable. And I don't
think it's disclosed, Alex, unless you know. Uh, no comment. But what I would say is
the hyperscalers, if I'm one of the other hyperscalers, I'm looking at StarCloud and I'm,
I'm seeing that as a juicy acquisition target.
I think everyone is going to either want to own a Dyson Swarm for themselves
or they're going to want to partner with a Dyson Swarm.
I think right before we went on air here, Anthropic announced an enormous partnership with
SpaceX AI.
And if I'm Dario, I'm thinking, yeah, I'm not really incentivized to build my own Dyson
swarm.
I'll partner with Elon and SpaceX AI to use the SpaceX AI, Dyson Swarm.
100 terawatts of power.
Yeah.
That's a lot of compute.
Or no payday.
That's a lot of compute in SSO and then in a solar Dyson swarm.
So the question I'd be asking is if I'm one of the non-Elon hyperscalers, how much would I
be willing to pay to acquire Star Cloud now to jumpstart my own Dyson Swarm?
I have to imagine that relativity space is now getting fully capitalized and accelerating their
development, because there's a point at which SpaceX just says, no, we're not going to launch
your constellation. We want to have ours exclusively. I mean, launch is at the bottom of the
stack. Building the satellites, no problem. You know, Nvidia already announced they're going to have
space-based versions of their future GPUs. But launch is
going to be critical here. Ocean, space, and farmland. So AI data centers, 67% of planned
U.S. data centers are now located in rural areas, versus the 13% that exist
today. 39% are planned projects in counties that have no existing data centers.
The southern U.S. leads this with 48% of planned centers, followed by the Midwest.
I mean, this is the biggest geographic wealth transfer since fracking.
You know, this is going to be moving sort of high-tech into the southern, you know, farmlands.
Fascinating.
Salim, any thoughts?
You know, I think there's going to be huge backlash and unwarranted backlash, because,
Because the amount of space, even if you put in a ton of data centers, there's so much farmland out there and so much area.
But people are going to overreact to this and freak out.
So we're going to have a pretty strong immune system response to this.
Dave, you were going to say, pal.
Yeah, I was going to say that, you know, as Mike Saylor reminds us all the time, physical assets are taxable.
They don't move once they're in the ground.
And any government, local or state with any brains at all, would be begging to get these things within its tax jurisdiction.
Yes.
The fact that they're, and actually what you see overwhelmingly is a scared population voting against it
and a governor trying to veto those votes because I think the governors largely are aware that this is the future of the prosperity of the state.
Yeah.
But I just wish the populations in those areas were more thoughtful about the long-term benefit of their community.
But yeah, you know, people should be fighting tooth and nail to get these in their jurisdiction.
If I could represent Middle America for a moment where I grew up, and then, of course, I lived and was stationed in Georgia.
So I've geographically been in both of these places.
The concern is around the electricity bill.
Having an electricity bill go two or three X is actually quite substantive.
And so if a plan is in place where that risk is mitigated and understood, where these taxes are going to offset those costs and make sure that people have access to electricity at the same steady rate,
I don't think people will have any concerns.
But that's what they have to go in with: hey, this is what you should be worried about,
this is how we're going to stop that from happening,
and then build, build, build.
Yeah, and you think those plans, I mean, like,
you know, right now, it's very easy to go back to the data centers
and say, you have to find your own power,
and by and large, they do.
I mean, it's just like a five-line law.
It's just such a simple solution, just push it back on them
and make it part of the plan,
and it just feels so easy.
Yeah.
Anyway, the data center build out is also infinite,
just like everything in this AI revolution.
So it's not too late yet,
But, I mean, it's going to be too late soon if you don't get your jurisdiction moving.
I do think the risk that we as a civilization run, and this is admittedly a very U.S.-centric perspective. In Japan,
famously, in the past few months, there's been a lot of coverage of data centers being built in the middle of Tokyo.
But, of course, Japan is much more densely populated than the U.S. is.
But I think the risk that we run if we, as a human civilization, push the data centers too far from human
urban centers is a decoupling of the economy. Yes, it leads, as I've talked about on the pod previously,
it leads to the Dyson swarm. First, we push them out from our cities to rural areas, and then we
push them out from rural areas and from the surface of the Earth into sun-synchronous orbit, and then that
gets too crowded, and we push it into a solar-centered Dyson swarm. That's one possible trajectory
civilization can take. But I think it's generally bad to push the data center economy too far
from the human economy. I would much rather see the two tightly integrated together. I think it's
sort of bad for human machine symbiosis in the long term for these two different economies to be
too siloed and too far from each other. And speaking of economy, the economy is heating up. Let's
hit a few stories here. This is David Sacks who's saying AI is becoming the engine of GDP growth.
If you've been listening to this pod, you know that already. Here's the quote: AI capex will be a 2%
tailwind to GDP growth this year; in Q1,
AI was 75% of GDP growth. Polls show AI is not popular, but the economic growth is.
Stopping progress in AI is like halting the U.S. economy. And here are the numbers. This is from
Morgan Stanley, raising the capex expectations for hyperscalers to $805 billion from
$765 billion. We're talking about approaching $3 billion per day,
and growing. It's not slowing down. So let's pause on that. The economy is being driven. We've talked about
this ad nauseum. Any new ideas here you want to mention? I'll just note, in addition to the obvious idea
that the economy is becoming indistinguishable from the AI infrastructure buildout. I think this is
underselling the contribution of AI, at least what I expect to be the contribution from AI talking two to five
years out, I think the most interesting transformation, certainly most dramatic, won't be just
this opening act of tiling the earth with compute that right now is absorbing all the capital.
I think it's going to be the transformative inventions and discoveries and applications that get
built as another layer on top of it.
And personally, I'm much more excited about that second layer than the first.
So I make a shout out to all the people thinking about a startup and, you know,
are all the business plans taken?
Is AI going to do everything?
If you look into this tech stack and you think about a trillion and then $2 trillion of investment in just compute, just raw compute, those numbers are so much bigger than anything in history.
And so companies like Standard Kernel, you know, Chris Reinhard, who's making a compiler that enables chips to catch up to Nvidia, you know, anything in the data center stack that makes the chips leaner, more efficient, or cools them better, the demand for all
that stuff is massive in scale. So something that seemed like it was a niche market five years ago
can be a multi-billion dollar market or bigger today because of the scale. Dave, can we talk to our
general public listening to this, you know, mom, dad, entrepreneurs, students about about this. I mean,
from my perspective, the question of where do, if you're looking for a job, where do you go
try and find a job? And then, if you're trying to invest your nest
egg, I know it's dangerous to give investment advice, but in general here, I mean, I think
it's important to translate all of this to, you know, people listening here who are not, you know,
running an exponential organization. What are your thoughts? Let's kick this back and forth a second.
Well, I think the most important starting thought is you have to invest. The future of assets is,
you know, Elon was saying 10x GDP growth in 10 years. That means all assets, whether it's a
house or a data center or, you know, all assets are going to go way up in value at a time
where W2 income is not a good place to be. And so you have to, you have to at some point
switch to investing just as a founding thought. It's a rising tide. You have to invest in
and sort of float at the top of this. Yeah. And so then the other thing is investing benefits
tremendously from change. And so on the prior slide, people are scared of AI because they're scared
of change in general. But change is a wonderful thing when you're investing. New opportunities
open up at an incredible rate. And if you can discover a new opportunity early. But, you know,
we've been talking about this a lot, Peter, like the, you know, Intel was an obvious one to us.
And that's been great. What's next? Well, what's next is there are many, many things we've talked about
just in this podcast that are obvious trends that are going to trigger the next wave of either
public equities that already exist going up or new startups that need to come into the world that you
wouldn't have thought of three years ago. They're right here in the pod. And I think, you know,
what I've done in the past and everybody can do here is go to your favorite large language model
and say, listen, I want to understand what are the chip companies out there, and, you know,
plot for me what their P/E ratio has been and what people are saying about it. You can do
your research now a lot easier than ever before. And I don't want to say that you can't go wrong
buying a bucket of chip companies or energy companies or infrastructure companies.
But I think that's generally correct.
This whole thing is moving upwards at a very rapid rate.
Just for the record, not investment advice, please.
Yes.
I've said that twice.
Yeah.
But at the end of the day, I think it's also true if you're looking for a job,
if you can hook up with one of these companies,
they're at max, you know, output, and they're growing.
They're all growing, so they're all probably hiring.
Oh, that's a good question for Brian, actually,
because there's always a tendency when you bump into people,
they say, well, look, I'm not an AI geek.
I'm not a person who is an AI researcher.
This isn't going to benefit me.
But then I walk around Blitzy,
and you've got a huge variety of hyper-talented people
that it takes to create a company like this,
and they're all on the cap table.
Everyone has stock, right?
I assume.
Yeah, they're all owners in the company.
Yeah.
Right.
And so it's never been a worse time to be in big tech, because they're having massive
layoffs right now as they pour additional investment into CapEx.
It's never been a better time to be at a fast-growing AI startup that is deploying people into
enterprises, because there's an insatiable demand, and that's not going to stop for several years.
And so people with the hybrid of soft skills and technical skills, which you can self-learn more easily
than you ever have been able to, can provide tremendous value.
Yeah, I think that there's a really important point in there, which is when I look around the AI community, the soft skills are lacking everywhere.
And the hard skills have been the critical part.
But now with AI as a sidekick, the soft skills actually seem like they're on this kind of a curve.
And the hard skills is like, well, the AI is going to help me with that anyway.
So it feels like there's real opportunity in there if you have very, very good soft skills to just find the right company to join.
And there's ample opportunity to contribute.
Forward deployed engineers for everyone. That's right. That's right. Please apply, Blitzy.com. All right,
our next story here is Sam Altman is rethinking UBI. So Altman no longer believes in UBI as he
once did. After funding a three-year study, he found spending went up but there was no clear improvement
in health or health care access. He now proposes giving people a stake in AI's upside through compute access
or a public wealth fund.
So one of the concepts here is if you're a citizen of Alaska,
you're part of their permanent fund, right?
Alaska makes a lot of money from oil.
You're a citizen.
You're an owner of the state of Alaska,
and you get a check every year as a percentage of the revenues from that oil,
which I guess is going up this year.
Same thing is true in Saudi and Emirates.
So if AI is a national resource,
if compute is a national resource and you're a citizen in the U.S.,
can you own a piece of that?
Salim, want to go to you first on this one.
So I'd love to see the details of this, because when we've seen the data coming from UBI,
the more universal the program, the more successful it has been.
There was a Finland UBI trial that failed, but it wasn't universal, it wasn't basic,
and it wasn't income.
So I'd love to see some more data around this to understand why he doesn't believe in UBI.
I think the AI upside play is really powerful and very
important.
Do citizens get income, or do you get a claim on AI productivity,
something you can sell?
UBI protects the bottom, and an AI upside type of model gives you
the upside on that side.
So the social contract may be less about redistribution and more about participation in
that exponential upside, which will be amazing for everybody.
Is this the way we get to UHI, Alex?
What are your thoughts here?
I think I agree with Sam broadly on this. So just a refresher, UBI, universal basic income, UBE, universal basic equity, UBC, universal basic compute, UBS, universal basic services.
I tend to think that UBI, which is sort of in some sense a demand side stimulus to the economy.
It's COVID checks. Stimmy checks. I tend to think that that doesn't necessarily lead to the best long-term alignment between the
recipients of the stimmy checks and the society overall. I tend to think, so Peter, you and I
argued in favor in our book, Solve Everything for UBC, Universal Basic Compute. I'm a huge fan
of UBS Universal Basic Services. I'd much rather see the cost of everything, including healthcare,
go down to near zero, and that's how we achieve truly universal health care, rather than just
dishing out stimmy checks to everyone. I think dishing out stimmy checks doesn't actually
incentivize technological innovation necessarily, whereas if we had, say, bounties for driving the cost
of constant quality health care down to near zero, that is a massive incentive. In some sense,
more of a deflationary rather than a hyperinflationary incentive to the market. So on balance,
yes, I agree with Sam. I'd vastly, if I had to choose, I would prefer either UBC, UBE or UBS to UBI.
So the question ultimately is how does this happen? Does the government require that each of the compute owners is, you know, dividending 2% that goes into a large pool, and if you're a citizen, you get to allot yours for sale or get to use yours? I mean, the details are going to have to be figured out. We've talked about this a lot on the pod, that we're going to see turbulence over the next two to eight years. That's still my expectation.
Don't you think, Peter, that's already happening, though? Like just look at universal basic compute.
OpenAI has hundreds of millions of people now using GPT 5.5 instant for free.
Maybe there is some ad support eventually, but it's basically for free.
And that's giving everyone at least a small stake in compute.
It is, but you can't turn that into a steak dinner.
You can't turn that.
Yet.
Yet.
Give it a few months, give it a few years.
And that UBC, you know, GPT 7.5 instant or whatever, will be able to design a robot that
prints you your steak. This was the conversation I had with Elon about, you know, getting to
UHI and saying that eventually robotics and AI will deliver everything you possibly need. Yes.
But there is still some element of, you know, and people listening to this, there are folks who are
listening who have a hard time, you know, making ends meet. And they're like, I can't eat, you know,
GPT 5.5. And you can say yet, but that's, you know, that's not addressing the real issue. The real issue is
if I've lost my job, if my kids can't get a job, how do I survive, how to get a roof over my head,
and all of that. I'm just saying that over the next year or two years at the most, this is going to
have to be solved. We're going to have to figure this out. And today, the only thing that government can do
is write a check. And this is going to be some version of a Stimmy check, or I call them COVID checks,
probably around $3,000 a month for individuals. But if there's an opportunity for people to own
a part of America's compute infrastructure, compute output, then all of a sudden I'm on the same
side of the table as SpaceX and xAI, the same side of the table as OpenAI and so forth. I want them to succeed
because the more they succeed, the more I succeed, I don't see them as my enemy. I see them as my
partner. And so I think there's an alignment that might be magical here. I agree. And I would also
maybe add the situation is highly dynamic. So a good solution for
the next year is not necessarily a good solution 10 years from now when I think GPT 10.0 instant
or whatever will probably have the ability to print out the robot that prints your dinner.
I know, I know you say that, but I guarantee you people are calling bullshit and saying
I just need to understand what's real over the next two years because that's what I'm worried
about. And yeah, we're going to solve everything and we're going to transform the entire economy.
The question is in the near term, how do I support my family? And I think that's going to be either
stimulus checks or something else. Salim, jump in. Yeah, just real quick. I mean, look,
the key here is how do you, you know, we're all, people talk about the income gap and inequality,
etc. The real big question mark is, can you lift the bottom? If you can solve for the people that
have very little, then everything else doesn't matter, right? And right now the challenge is that
the social contract is disappearing, and that's causing massive issues. If we can deliver free health care,
for example, or free diagnosis via AI.
That would be such a huge enabler.
The biggest cause of bankruptcy in the U.S. is medical bankruptcy.
This is a huge, huge problem.
The governments are not doing enough to solve this problem.
They need to get into it and solve that problem.
Lift the bottom: provide free AI medical care to every human being in the country.
That's instantly going to solve massive issues right off the bat.
And it's a form of UBS.
Fantastic.
Go for it.
All right.
Our final story for conversation and debate today is insurers are dropping AI risk coverage.
And I find this fascinating. Major insurers, including Berkshire and Chubb, are removing AI-related
damages from standard policies, with 80% of exclusion requests approved by regulators.
Exclusions cover AI mistakes, IP violations, and deep fake fraud.
Companies will need to find separate AI insurance.
Huge, incredibly large entrepreneurial opportunity.
here. Let's go over to you, Dave. Oh, just a massive opportunity. And I think this chart
is kind of cool. They took the normal exponential chart and folded it back on itself a couple of
times, and I think everyone should use this chart from now on. But the ramp is really,
really fast, and I think it's probably understated. You know, like all the legacy insurers have
dropped coverage for AI risks, but the AI risks are accumulating at this incredible rate. Once Mythos comes out,
you'll see cyber attacks all over the place, no matter how much they guardrail it.
And there'll be, you know, there's already something like 35% of mid-to-high-net-worth
people who have been subject to a cyber attack already.
So, I mean, it's already rampant.
So the need for coverage, but not just coverage, coverage will be tied to defense mechanisms.
So basically, the insurance company will come in and say,
we'll cover you against AI cyber attacks if and only if you adopt all these best practices
or products that prevent AI cyber attacks.
And so it's kind of a – the insurance industry tends to work that way with all of these
programs where it's self-healing or it develops best practices in the industry.
They even invest in and fund the companies that develop the best practices or the products
that solve the problem.
So it's an incredible entrepreneurial opportunity that just popped into the world.
Here are the numbers, Dave. The AI insurance market in 2024 was $40 million
for AI-related insurance.
So basically zero.
It's projected to be close to $5 billion by 2032.
So massive opportunity here for the right entrepreneurs.
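As an editorial aside, those two figures imply a compound annual growth rate of roughly 83 percent; here's a quick back-of-envelope check using only the numbers quoted in the episode:

```python
# Implied compound annual growth rate (CAGR) for the AI-insurance
# market figures quoted above: about $40M in 2024 growing to about
# $5B by 2032 (the episode's projection).
start, end = 40e6, 5e9
years = 2032 - 2024
cagr = (end / start) ** (1 / years) - 1  # fractional annual growth rate
```

That works out to roughly 0.83, i.e. about 83% compounded annually, which is consistent with the "basically zero today, massive by 2032" framing.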
Alex, I'm wide open, literally wide open.
I'm of a couple of minds on this.
On the one hand, I'm sort of disappointed with this trend in the sense that it's yet another
opportunity or vantage point for deplatforming AI agents from the human economy,
just like if you're an AI agent.
It's still very difficult to open up your own bank account, and we've had discussions on the pod previously about various forms of limited AI personhood.
Now, if you're an AI agent, just trying to make your way in the economy, you can't even get insurance coverage for yourself.
That's one angle.
It's rough being an AI agent.
On the other hand, when we talk about alignment and particularly alignment in a capitalist system,
pressures from insurance companies for AI-related damages are arguably one of the
capitalist forcing functions for ensuring AI alignment. You can't get insurance for AI activities
unless you follow some checklist that is dictated by the actuaries. And that's where
pressure to align comes from, maybe not from top-down government pressure. So that's the glass-half-full
view. Really important point here. Any other comments before we move on to AMA with the mates?
It's just hard to price the risk as a big company. Yeah. Yeah. But you know you need the coverage, right?
Yeah. That's exactly why insurance exists. Is Blitzy going to launch
a line of insurance?
We'll talk after this,
AWG.
All right, let's move on to AMA with the mates.
All right, gentlemen,
and we can include Brian here as well.
So we have four questions up on the screen.
Dave, do you want to pick yours?
I don't know.
Should I leave number one for Alex?
It says, Dave, can you share it?
Dave, you can't do that.
I can't do that.
I think you're locked into number one.
Sorry, Dave, you can't do that.
Actually, it's very flattering to get a specific question to me.
So it says, Dave, can you share examples of how you use different models and what the cost looks like when you're using AI to build a platform?
An explanation would be amazing.
I could talk for an hour on this topic.
So let me commit first: anyone on my team, please post something on DB2.AI that answers the question thoroughly.
But I'll give you a whirlwind.
You're not asking your AI to do this for you?
Well, actually, yeah, you're right.
Team, ask the AI to do it, because that makes a lot more sense.
Thank you, Peter.
So, a real quick whirlwind tour.
I have Claude Code on the left side over here.
I've got Cursor, which I've used since it came out on the right.
I've got about 50 agents right now in Cursor.
I learned over time not to treat them like people.
My primary ones are 41 and 42 right now.
But, you know, they work much better if you give them the minimal context to do their job, so you're not overloading the context window.
It took me a while to figure that one out.
So I have them dedicated to their specific role in the ecosystem and nothing more.
So that's why they're 50 open right now.
But then when I launch a project, I always do a plan for plan first.
This is very much what Blitzy does in an automated way.
Do a plan-for-plan document first, run it through a Claude
4.7 Opus Max agent, then get a second opinion from Gemini 3.
So that creates a lot more documentation.
That becomes a full-blown plan, which I always use the same format called a plan mission.
But then when I launch it, I launch it within Amazon EC2, which is secure.
And also it works if my laptop closes or my machines crash.
It's still, it's out on the cloud.
So EC2 is the orchestrator.
And then it can call any of the models.
So I usually have it default to calling Claude 4.7 Opus, but it can also call the other models; it has APIs to most of them.
And then the wildcard is Kimi K2.6, which, like we talked about on the last pod or the one before that, is about nine times cheaper, but it could do code injection.
So that's, and that runs on Fireworks.
Anyway, I'll put all that into a document and put it on DB2.AI.
And there are many other ways to configure it.
Don't just copy what I do.
but it's working pretty well for me
And then the cost to write,
like, a full-blown
GUI that does something really functional
is about eight or ten bucks of compute.
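As an editorial aside, the pattern Dave describes (a plan-for-plan draft, a second opinion from a different model family, then worker agents given only minimal context) can be sketched roughly like this. Every function name and model ID below is an illustrative placeholder, not a real SDK call; in practice you would wire in your actual provider clients from an always-on cloud host:

```python
# Sketch of the "plan-for-plan" multi-model orchestration workflow
# described above. All names here are hypothetical placeholders.

def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would hit a provider API
    # (Anthropic, Google, etc.) from a cloud orchestrator like EC2,
    # so the run survives a laptop closing or crashing.
    return f"[{model}] response to: {prompt[:40]}"

def plan_for_plan(task: str) -> str:
    # Step 1: ask the primary model to plan how it will plan.
    meta = call_model("primary-large-model", f"Draft a plan-for-plan for: {task}")
    # Step 2: get a second opinion from a different model family.
    review = call_model("second-opinion-model", f"Critique this plan-for-plan:\n{meta}")
    # Step 3: merge both into the full plan document.
    return call_model("primary-large-model", f"Merge into a full plan:\n{meta}\n{review}")

def run_task(task: str, workers: list[str]) -> list[str]:
    plan = plan_for_plan(task)
    # Each worker agent gets only the minimal context for its role,
    # to avoid overloading its context window.
    return [call_model(w, f"Execute your slice of:\n{plan}") for w in workers]

results = run_task("build a small GUI", ["agent-41", "agent-42"])
```

The point of the sketch is the control flow, not the calls: plan, cross-check, then fan out to narrowly scoped agents.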
Brian, you want to take one of these?
Yeah.
You pick one, Peter.
You pick number two, three, or four.
All right.
The real probability of rogue,
predatory AI corporations
might be...
Wow, okay, I'll have
AWG comment on my comment.
Which one are you on?
Number four.
Okay.
What the real probability of rogue, predatory AI corporations might be.
Read the whole question first.
Okay.
Given your frequent reference to Accelerando, which might be a Peter Salim thing.
Yeah, it's Peter Nage or NG.
Yeah.
You know, I think the real probability.
With your forbearance, I'll answer a few of these.
That's all right, five.
What the probability of rogue, predatory AI corporations might be.
I'll let you start, AWG.
Yeah, this is an Alex question.
I'm sorry about that.
Brian, I should have warned you.
Okay, Alex.
Like 100%, and as some of my readers like to remind me,
the more proper pronunciation is Acelarando.
Okay.
All right.
Now, that's a quick answer for David Holiday dash T.
E to the E squared R.
We're going to get good,
and we already have good corporations as well via defensive co-scaling.
So it's not all vile offspring all the time.
Brian, pick number two or three.
How long until the best
entrepreneur on earth is an AI.
We want an exact date.
Can you get it down to the minute?
Down to the minute.
How many months ago was it, Brian?
Yeah.
232.
Best entrepreneur on earth, right?
You'd say it has to be the number one market cap, publicly listed, right?
You know, with a standing founder, maybe.
Yeah, amongst the top 10, founder-driven, so over, you know, $2 trillion in market cap,
driven by AI. So that's the fundamental question being asked.
Now, 2032, 2033.
Okay.
I think I'm a little off, but I don't like the definition.
I think people are doing it right now and making a lot of money.
I'd rather parameterize the success of an entrepreneur by, say, return on investment or something
like that, versus some arbitrary number. Like, what is $2 trillion in the early 2030s even
going to mean?
Yeah.
You probably have to build a $2 trillion company before breakfast in the early 2030s.
All right.
That was from Jacob.
I think it already exists.
I want to make a quick point for this one.
You know, basically what's going to end up happening is you're going to end up with a hybrid of an AI and human being.
Because you'll have a founder with swarms of agents testing thousands of possibilities in parallel.
The entrepreneur becomes less of an operator and more of an orchestrator, and that's what's going to happen.
Well, that's just 2026.
That's today.
We're working on it with Henry. Financial interest disclosure.
All right.
Let's go to question number three from Keith Fail 2.
How do AI data centers dissipate heat?
How do you radiate energy away in the vacuum of space?
And how did Keith Fail 2 pick his username?
Yeah, well, hey.
We just talked about ocean-based data centers.
They're going to have it super easy.
On land, they're using cooling systems, by the way.
Investing in cooling system companies is an important part of that innermost loop.
and in space, radiative cooling is well understood.
It's been going on for some period of time.
So you're radiating through infrared into the vacuum of space,
which is at, you know, a couple single digit degrees Kelvin.
2.7 Kelvin.
I approximated, I said, a couple.
Okay, excuse me.
The cosmic microwave background, it turns out, is rather cold.
And as long as you aim in the direction of the cosmic microwave background,
there's a heat gradient, thermal gradient.
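As an editorial aside, the radiative-cooling point can be made quantitative with the Stefan-Boltzmann law; the specific power and temperature numbers below are illustrative, not from the episode:

```python
# Back-of-envelope radiator sizing for a space data center using the
# Stefan-Boltzmann law: radiated flux = emissivity * sigma * T^4,
# net of the (tiny) 2.7 K cosmic microwave background backflow.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
T_CMB = 2.7        # cosmic microwave background temperature, K

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject power_w watts at temp_k."""
    net_flux = emissivity * SIGMA * (temp_k**4 - T_CMB**4)  # W per m^2
    return power_w / net_flux

# Illustrative case: a 1 MW compute module with 300 K radiators
# needs on the order of 2,400 square meters of radiator area.
area = radiator_area(1e6, 300.0)
```

The T to the fourth power dependence is why running radiators hotter shrinks them dramatically, and why the 2.7 K background term is numerically negligible.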
Let's do number eight amongst all of us.
So what is the P-Doom percentage scenarios for each of the moonshot mates?
So we'll go around the horn.
Alex, I'm going to have you anchor us here today.
What's your P-Doom?
And don't redefine it.
I'm sorry, I don't think the question even makes sense.
So let me try to construe the question in a way that actually,
P-Dome is ill-defined.
It doesn't make sense.
What does P-Dome mean?
Can we agree on at least a common doom definition?
Is it like human disenfranchisement economically?
No, no, this is, P-Doom is the probability that AI or some derivative of it is going to destroy the human race.
We go extinct and Colossal cannot bring us back.
That's P-Doom on this definition.
Okay.
So if all of humanity chooses to upload to the Dyson swarm and we leave behind biological meat bodies, is that doom?
No.
It was, what was it, the AI 2027 paper that looked at, you know, one scenario where
AI-developed killer viruses wiped out the entire human population.
That's P-Doom. I think it's de minimis, very low.
Okay, below what percent?
Zero? Well, what percent? Point one?
I would say right now without AI, 150,000 humans die per day.
So I'd say without AI, P-Doom, which is to say that-
You're skirting the question.
I'm not skirting the issue. I'm addressing it head-on.
It's due to AI. I'm not letting you wiggle out of this.
AI is not killing people right now.
No, biology is killing people right now.
Now, AI is the solution.
I think P-Doom is near 100% without AI.
P-Doom is negative in that case because AI is going to actually save people.
Yes, you know what? I like that. I like that, Peter.
P-Doom is negative. I like that. Yes, P-Doom is negative.
Okay, Dave. A good T-shirt. Another T-shirt.
All right, so.
Peter.
Peter, less than zero.
Less than zero. That's a good one.
Peter, let me ask you a quick question. Do you think the COVID virus was made in a Wuhan lab funded by
U.S. and other sources, or do you think it was evolved in nature, or is that too dangerous a question?
I'm going to go with evolved in the wild, crossing over species.
Nature evolves a lot of viruses all the time. I'm going to go with that.
Interesting. Okay.
Alex, do you have an opinion on this?
The intelligence community consensus, last time I saw one, is majority in favor of lab leak.
Yeah.
Well, lab leak, yes, but the question was designed or not designed.
Yeah, it's a little bit blurry because you can take a zoonotic virus
and you can engineer new components to it that make it more viral or more lethal.
Well, the reason I ask is because my P-Doom is kind of low, single-digit percentages,
and the vector of Doom is entirely terrorism.
So AI gets very, very smart, very, very quickly.
There are no guardrails, or the guardrails are broken, or a Chinese lab,
you know, leaks an AI that has not even an attempt at guardrails, and then it's used mostly for
biotech as the worst case scenario. Brian, give me a number. Zero. Zero. It's incredibly easy to
do harm in the world, and most humans are actually quite good and have high agency to prevent
bad things from happening. Nice. Salim, where are you? Zero. Okay. I'm coming in at zero or
de minimis as well. So that's your answer, everybody. Let's go to my question.
Number five, people talk about new jobs created by AI, but surely these new jobs can also be done by AI faster and cheaper.
And that's from at AI business in a box.
Okay, I'm going to give that one to you, Dave.
Okay.
Hold on, let me think.
If I may.
Okay.
Yeah, I have views on this too.
Take it.
Yeah. Jobs are bundles of tasks.
The tasks are shifting, right?
but like people will be able to provide relative ROI,
relative to AI, based on the new thing that the end user values.
That might be more physical tasks over time.
That might be more forward deployed engineers over time, right?
But like, as long as there is a return of value on what the human can do,
the bundle of tasks will just continue to shift.
Okay.
So your answer is?
Will AI do?
I think his answer is yes.
My answer is new jobs will continue to
be created. And will
AI do those faster and
displace humans? And then new jobs
will be created. And then, okay.
Brian, ad infinitum,
or at some point does something change?
No, that will continue in perpetuity.
Okay. New jobs always appear.
I'm going to take number six.
I'm going to take number six regarding abundance. Is there a point
where producing too much becomes a problem? Historically
humans have misbehaved even with relative
abundance. And this is from
at no now, 6361.
And the whole idea of extreme abundance
was the conversation with Elon about UHI,
where AI and robotics will create so much
that you couldn't desire enough.
Now, I've talked about on the pod,
the Universe 25 experiment that I wrote about
in We Are As Gods that took place in the mid-1960s
of a mouse utopia that said,
if you have too much abundance
and people become fat, dumb, and lazy,
that does lead to a downward spiral.
And we're going to have a split between society,
those who are consumers,
i.e. sitting on a couch,
watching Netflix with your Optimus bringing you a beer,
and those that go the way of Star Trek
and become creators using technology and abundance
to go do bigger, better, more,
and sort of up-level society.
All right, number seven,
how do you build a reliable agentic system
when every part of the tech supply chain
is constantly changing? That's from at relentless.io. Who wants to take that one?
I really want question nine, so I'll throw seven to anyone else. Okay.
Everybody wants nine. Okay, Salim, go for it. Yeah, I'll do seven. Look, you
build reliability through architecture, right? The old enterprise model assumes stable systems
and controlled change. That world is gone. In an agent world, you need modular agents, you need very
narrow permissions. Imagine each agent having to have a passport with metadata
defining what it's supposed to do. Observable workflows, audit logs, human
escalation: all of this has to happen. The AI-native company will need the same
kind of governance. The model is going to change every month; your governance
architecture has to be the stable thing going forward. Nice. All right, I'm gonna
cede the floor on nine to Alex because it's right in his wheelhouse, but I
do want to congratulate Jeff B 5781 on asking
such an interesting, so compelling, so foundational question that everyone on this podcast
is dying to answer it.
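As an editorial aside, Salim's agent-governance checklist from the previous question (a passport with metadata, narrow explicit permissions, audit logs, human escalation) could be sketched as follows; every name here is hypothetical:

```python
# Illustrative sketch of the "agent passport" idea: each agent carries
# metadata declaring what it may do, every action is checked against it,
# logged, and denied actions are routed to a human for escalation.
from dataclasses import dataclass, field

@dataclass
class AgentPassport:
    agent_id: str
    allowed_actions: set[str]                       # narrow, explicit permissions
    audit_log: list[str] = field(default_factory=list)

    def request(self, action: str) -> bool:
        permitted = action in self.allowed_actions
        # Observable workflow: every request is logged, allowed or not.
        verdict = "ok" if permitted else "ESCALATE"
        self.audit_log.append(f"{self.agent_id}: {action} -> {verdict}")
        return permitted  # False means route to a human for review

passport = AgentPassport("billing-agent", {"read_invoice", "draft_email"})
ok = passport.request("read_invoice")
blocked = passport.request("wire_funds")
```

The stable thing here is the governance layer, not the model: you can swap the underlying model monthly while passports, logs, and escalation rules stay fixed.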
All right.
Alex, read it out.
All right.
If agents become one million times smarter than us, and so on, isn't there a diminishing
return at some point?
I think yes.
So Seth Lloyd at MIT was studying the question in the early 2000s of the physical limits of
computation.
Does the physics that we have right now impose a universal limit on the fastest or smartest,
for that matter, computer that you could possibly build in our universe?
And the conclusion that he came to is that, yes, there is a physical limit to the power of computers
and that the fastest serial computer with the physics that we have today that we can imagine
building is a black hole.
I've spoken about this on the pod previously, a sort of desktop black hole supercomputer,
where you maybe fire in the inputs via X-ray or gamma-ray lasers and you do the-
new Macs video.
Yeah, no, when Apple gets around to actually launching maybe a new Mac Pro, it should be a black hole, maybe.
And the output readout could be via Hawking radiation.
So we know in principle how to build a black hole-based serial computer, the ultimate serial computer.
He found that under certain constraints, the fastest parallel computer might look like a box of plasma,
a so-called plasma-based computer. So we do know in some sense how to build the smartest possible
computer at the infrastructure level that our universe will allow us to build, unless there's a lot of
surprising new physics. And then that provides, in some sense, an ultimate constraint on the level
of intelligence for agents that can be built on top of it. I also strongly suspect that at the
algorithmic level, we're going to find that there is a perfect agent algorithm. Folks who've studied
AIXI, which is a theoretical approach that's mostly popular among the AI theorist community.
It hasn't turned out to be very useful in practice.
In some sense, it represents an information-theoretically optimal AI, including an AI agent,
and has all sorts of nice properties, like Bayesian superintelligence.
It's not very practical.
But we do know, at least algorithmically, what the point of diminishing returns is at the
agent-algorithm level as well.
So yes, there may be a lot of room at the
ceiling, as it were, at the top, but the universe does seem to impose limits.
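As an editorial aside, the bound Alex references can be checked in a few lines. Lloyd's "ultimate laptop" result uses the Margolus-Levitin theorem, which limits operations per second to 2E divided by pi times hbar, with E the computer's total energy (E = mc squared for a given mass):

```python
# Seth Lloyd's bound on the fastest possible computer of a given mass:
# max ops/sec = 2E / (pi * hbar), with E = m * c^2 (Margolus-Levitin).
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.9979e8        # speed of light, m/s

def max_ops_per_sec(mass_kg: float) -> float:
    energy = mass_kg * C**2          # total mass-energy in joules
    return 2 * energy / (math.pi * HBAR)

# For a 1 kg computer this comes out to roughly 5.4e50 operations
# per second, Lloyd's published figure for the "ultimate laptop".
ops = max_ops_per_sec(1.0)
```

That ceiling is astronomically far above today's hardware, which is the sense in which there is "a lot of room at the top" even though the limit is finite.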
All right, gentlemen, Brian, congratulations on your financing.
Thank you for joining us.
Thank you for your sponsorship of this pod.
Dave, Alex, Salim, let's go with our outro music by Marius.
And again, anybody out there, please send us your favorite video, you know, under two minutes,
please.
And if it's amazing, we will share it.
All right, let's take a listen to our outro called Velocity by Marius.
Amazing.
You know, Salim, you always come across as the sexiest guy in the videos.
Oh, well, it was an amazing week, guys.
We had two recordings at MIT, and today, always a pleasure.
Love you guys.
It's been awesome.
Thank you, Peter.
Yeah.
Thank you.
Be well.
Safe travel, Salim.
Wherever you're going next in the world.
Where's Waldo?
Awesome.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate.
Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter
called Metatrends. I have a research team. You may not know this, but we spend the entire week
looking at the Metatrends that are impacting your family, your company, your industry, your
nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends
newsletter every week, go to Diamandis.com slash Metatrends. That's diamandis.com slash Metatrends.
Thank you again for joining us today. It's a blast for us to put this together every week.
Okay, when I sell my business, I want the best tax and investment advice.
I want to help my kids, and I want to give back to the community.
Ooh, then it's the vacation of a lifetime.
I wonder if my out of office has a forever setting.
An IG private wealth advisor creates the clarity you need with plans that harmonize your business,
your family, and your dreams.
Get financial advice that puts you at the center.
Find your advisor at IGPrivatewealth.com.
