Moonshots with Peter Diamandis - Anthropic Partners With SpaceX AI, Leopold's $5.5B Bet, and the Singularity Economy | EP #255
Episode Date: May 16, 2026

This episode is a dense AI-and-abundance conversation about Anthropic's explosive growth, compute shortages, SpaceX becoming a hyperscaler, Google's orbital data-center ambitions, OpenAI's super-app strategy, Claude's legal and small-business "unhobbling," and a long segment on UAP/UFO disclosure. It also weaves in alignment, privacy, AI personhood, and the idea that the singularity may become visible in space before it does on Earth.

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter

Connect with Peter: X Instagram Substack Website Xprize
Connect with Dave: Web X LinkedIn Instagram TikTok
Connect with Salim: X Join Salim's Workshop to build your ExO
Connect with Alex: Website LinkedIn X Email Substack Spotify Threads

Listen to MOONSHOTS: Apple YouTube

*Recorded on May 14th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Anthropic is taking over all of SpaceX's Colossus 1 Data Center in Memphis,
and this immediately allowed Anthropic to double Claude Code rate limits.
I think Grok is on life support.
So this is Elon who had been for, you know, the past year,
shit-talking Anthropic.
Here he is now, you know, backing them and supporting them.
In one way, Elon's getting revenge against OpenAI by helping Anthropic win.
"The enemy of my enemy is my friend" is the exact quote from Elon.
Leopold Aschenbrenner, who was famously fired from OpenAI's alignment team and is now
running a $5.5 billion fund two years later.
Anthropic hits 80x growth for this quarter and outruns their compute.
You can find a whole litany of things that are about to explode in demand.
The demand, I don't see it slowing down as a whole, across chips and the energy layer and the infrastructure,
right?
This is the singularity loop.
Everybody, welcome to Moonshots.
Another episode of WTF just happened in tech.
I'm here with my extraordinary friends and dear brothers.
DB2, our wizard of AI investing,
Salim Ismail, Emperor of Exponentials,
and of course our resident genius AWG.
I'm Peter Diamandis, your host.
And today we've got an incredible program of stories
that should get you pumped up about the singularity.
Excited.
This is not politics.
This is just science and technology
driving us towards the singularity.
So, gentlemen,
excited to see you here.
Salim, I have to ask, where on the planet are you today?
I just landed back.
I just landed back from Montreal,
and I was given a Canadiens jersey,
because I'm actually a Montreal native
and a massive Montreal Canadiens fan
because I grew up there, and so they...
Is that like a small roundball or a puck?
That's the hard round thing,
that immigrants like me, our ankles never quite managed to...
But in Canada, you had to skate, otherwise they took your passport away, so I used to play a bit of hockey.
Oh, of course.
Hey, we got to play some pond hockey up in Vermont.
That's my all-time favorite thing to do.
So the folks gave me this jersey that I had to wear because we're right in the middle of this stuff.
So anybody in Buffalo, I'm rooting for the Canadiens, but I'm the longest-suffering Bills fan in history.
So there's that.
All right.
Let's see your teeth.
I want to see which ones are fake.
You know, I just, I've missed that complete, you know, gene sequence on sports.
Sorry.
Dave, good to see you back in the nest.
And as always, Alex, good to see your virtual environment.
And virtual self.
What's the difference really at this point?
Today we do that.
I know you go to your kids' baseball games.
You're probably chewing tobacco and spitting it and you have a dual life.
That's my, that's what I think.
We have an incredible, incredible show today.
You know, we're going to be kicking it off with the demand for AI is like off the rails, outstripping supply.
You know, Claude is continuing to disrupt sector after sector with the great unhobbling.
Google is joining our push towards Earth's Dyson Swarm.
We'll be jumping into the singularity economy.
You know, what are the sectors that are providing outsized financial returns during this supersonic tsunami?
We're going to cover a topic near and dear to my heart,
is a very simple, very powerful concept to ensure AI alignment.
So we can deliver a P-Doom that is less than zero.
Thank you, Alex, for that meme.
I love that. P-Doom less than zero.
We need T-shirts for it.
We do.
I saw the T-shirt you made.
We're going to have to put that up for people to be able to take down.
Because, Peter, if no one makes it, everyone dies.
Oh, no.
And towards the end of this pod today, we're going to be talking about the recent disclosures
by the White House on, let's call them UFOs versus UAPs.
The White House is saying they're here, but who are they?
Where are they from?
And when are you going to come and pick me up and take me for a ride?
So that's going to be some fun conversations today.
Let's begin with an important conversation today.
Anthropic is outpacing their ability to supply tokens.
And so Anthropic hits 80x growth for this quarter and outruns their compute.
At the Anthropic Developer Conference last week, CEO Dario Amodei revealed that Anthropic has experienced an 80-fold growth in Q1 of 2026,
outpacing the 10x growth they had expected.
I mean, you don't see this in Silicon Valley.
You don't see this anywhere.
Maybe Dave, for some of your early companies, you're seeing that kind of growth.
But here, their annualized revenue run
rate jumped from $9 billion at the end of 2025 to $30 billion in April. It's now north, I think, of
$40 billion in May. And here are the numbers they expect, or it's expected they could hit
$100 billion of ARR by the end of 2026 and potentially a trillion by the end or middle of 2027,
making it the most valuable company on the planet. And just to hit some numbers real quick and then
turn over to you, Dave, for the first conversation. At a 30 billion ARR at a 40x multiple,
it's being valued today at 1.2 trillion. If they hit 100 billion by the end of this year,
Anthropic will be at a $4 trillion valuation. And if they hit a $1 trillion in 2027,
a $40 trillion valuation. I mean, this is, by definition, the singularity. Insane.
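The valuation arithmetic Peter runs through can be checked with a quick back-of-envelope sketch. The 40x revenue multiple is simply the number quoted in the discussion, not an official figure:

```python
# Implied valuation = ARR x revenue multiple (40x is the multiple quoted above).
def implied_valuation_billions(arr_billions: float, multiple: float = 40) -> float:
    """Back-of-envelope: valuation in $B from ARR in $B at a given multiple."""
    return arr_billions * multiple

print(implied_valuation_billions(30))    # 1200.0  -> $1.2 trillion today
print(implied_valuation_billions(100))   # 4000.0  -> $4 trillion at end of 2026
print(implied_valuation_billions(1000))  # 40000.0 -> $40 trillion in 2027
```

So the jump from $1.2 trillion to $40 trillion follows entirely from holding the multiple fixed while ARR scales 30x.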
I remember a year ago, Peter, at that family office conference in London, Yifeng was saying to all these wealthy families, you got to get invested in this, you got to get in the game.
And everybody was like, at a hundred billion-dollar plus valuation, that's utterly insane.
You can't possibly.
It's way too late.
You can't get into it.
But, you know, I don't blame them because these numbers are so unprecedented.
And people don't really do a good job of judging million, billion, trillion.
It does not, you know, it's not intuitive.
These numbers are so far out of the realm of history.
I mean, just massively bigger than any prior IPO or valuation.
So I don't blame people for being scared, but you've got to get used to the fact that this goes to infinity.
Yeah.
The demand for AI is not going to saturate.
It goes to infinity.
So you've got to rethink the way you decide whether to be involved in these things or not.
So they all skipped it.
Now I'm sure they regret it.
Yeah, 10x in a year, up to 1.2 trillion.
Alex, what do you make of all this?
It's all about the enterprise, Peter.
So Anthropic was arguably the first major frontier lab to recognize that offering ultra-high-value, enterprise-oriented tokens was the path to success here.
As I've mentioned on the pod previously, OpenAI has since had to pivot to copy, call it, the Anthropic strategy of offering up high-grade enterprise tokens for
code generation and now for other so-called white-collar labor tasks. And this is what we're seeing.
We're seeing an insatiable demand for compute to the extent that that compute can be turned
into high dollar value tokens that are replacing the services economy. And so US GDP is, what,
30 trillion or so? If we see the continued exponential, maybe super-exponential, increase in capabilities,
and especially the increase in, as METR measures it, autonomy time horizons,
which are pushing into the dozens of hours of autonomy at this point,
I think we're seeing the beginnings,
maybe even the beginning of the middle of the replacement of white-collar labor.
And of course, that's going to be insanely valuable.
Elon said double-digit GDP growth in two years and triple-digit within five years.
Salim, how are you feeling about this?
Are you putting your money into these areas?
Are you excited about it?
Full disclosure.
I don't have investments in any of these labs.
I probably have some Google stock from some of my funds or something,
but nothing explicit, which is a huge problem,
watching all these numbers go up.
I did a little bit of analysis, you know,
and you have Procter & Gamble having roughly the same revenues
with 15 billion of profit,
and their market cap is like a tenth of this because there's no upside, right?
They grew 2% year over year.
And so, this is like, these guys grow 2% an hour.
Literally, yeah.
Yeah, literally.
So it's such a huge difference in mentality.
And what I think is the most incredible thing is the fact that this is real money coming in.
This is not hope.
This is not a judgment call, etc.
This is actual measurable dollars coming in.
And so it's incredible to see.
Yeah, I want to hit two points here.
Well, to be fair, a year and a half ago, they were incinerating money in anticipation of this growth.
So it was a little bit of a scary situation.
Now it's not scary at all for Anthropic.
Also, you know, they're completely sold out.
And I expect they'll continue to ramp, but the chips are completely saturated and sold out.
So you would normally say, well, doesn't that mean the revenue will cap out?
but one, they can charge more, and two, they're optimizing the software.
So they'll squeeze out another 10X or more while we're waiting for the chip supply to catch up.
There's no doubt that you'll see on the left chart that ChatGPT has fallen off the curve a little bit
compared to OpenAI, but they have a lot of compute lined up at OpenAI.
As compared to Anthropic, that is.
So I would expect that OpenAI's revenues will skyrocket too because everything is sold out anyway.
And GPT 5.5 is really, really good.
So it's going to be who can get the compute.
Dave, two points.
One, I think it's important for folks to realize this isn't growth for Anthropic
because they're getting more users.
Their users are creating more uses.
So everybody's just consuming more tokens, and that's a really important element.
I agree with your point that we're likely to see potentially a rate hike.
I mean, if people are trying to consume and you can't bump up,
enough tokens out, they'll start charging more. But there's an interesting analogy, right? So 100
years ago, back in 1925, when electricity was first becoming sort of distributed through the U.S.,
the number was 30% of the U.S. had electricity and 30% of the U.S. had phones.
And what happens is that in the same way, people kept finding more uses of electricity.
You know, it was first, it was for lighting homes, then they replaced, you know, steam engines,
electrified, you know, elevators, refrigerators, radios, appliances.
The same thing's going on here.
People are just finding more uses for the tokens.
And it's insatiable and growing in multiple dimensions, more users and more uses.
Well, in a minute or two, we'll go through some numbers, too,
and you'll come away with the conclusion that we're a tiny fraction of 1% of the use cases
have been deployed so far.
But we'll get to that in a minute.
But the demand is way outstripping the supply, though.
Hey, everybody, you may not know this, but I've got an incredible research team.
And every week, myself, my research team, study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends.
That's Diamandis.com slash Metatrends.
All right. Our next story here is Anthropic is buying compute to feed the beast. And so there are two elements here. The first is just the appetizer that Anthropic signed a $1.8 billion seven-year compute deal with Akamai. And this is Akamai's largest deal. It popped the stock, you know, 25% on the first day. But I think the real story that we should be discussing here is the deal that Anthropic cut with Elon. So,
This is a blockbuster deal.
Anthropic is taking over all of SpaceX's Colossus 1 Data Center in Memphis.
If you remember, Elon had built this center in 122 days, very famously, from ground up, beat everybody's expectation.
It's filled with H-100s.
And this immediately allowed Anthropic to double Claude Code rate limits, so people could actually utilize it.
And I think the message here is that SpaceX, or SpaceX AI, has just now become a hyperscaler.
You know, at the same time, I think of note, Grok has not seen sort of a large uptake in usage.
I mean, you can use it in your Tesla, but I don't know that many people who are relying on Grok for their, you know, AI engine.
I'm not sure if you guys play with it much at all.
But Grok was only making, I think, 11% use of Colossus 1.
And what a great deal.
Take this asset, sell it to Anthropic, that's what they
need. SpaceX is getting, you know, probably another three or four billion dollars of
revenue just before their IPO. Couldn't be better. Dave, what do you make of it?
Yeah, strange bedfellows. You know, normally you'd think they're arch competitors, but, you know,
Elon doesn't have the user base to chew up this compute. Anthropic is desperate for more compute.
I'm sure a lot of the margin will go back to Elon now. And, you know, the new Colossus 2 is
where the training is anyway, so it was going to sit idle. So let's go ahead and partner,
even though in theory we're arch competitors,
but anything to try and keep up with Google, I think, on both sides.
But the numbers here are pretty...
Yeah, frenemies.
"The enemy of my enemy is my friend" is the exact quote from Elon.
But the numbers here, you know, okay, this gives us 220,000 GPUs.
A GPU will serve about eight concurrent threads if you're using a max model,
like an Opus 4.7 max.
You've got about eight threads or eight agents per GPU.
So you're only buying about 1.6 million concurrent threads.
Eight billion people around the world are going to want at least one agent, at least.
But, you know, the power users now want 100 or more agents running.
And I think very soon a person can productively use 1,000 concurrent agents,
like an engineer or a builder or a designer or an architect.
And so you compare the demand to the supply, and it's just laughable.
There are nowhere near enough GPUs to serve up all the agents that people want.
And so as a byproduct of that, if you own your own hardware, so if you buy like an NVL72,
so you've got your big old Nvidia rack, you pay your $4 million for it, it'll serve up an agent for you in about 50 milliseconds.
Go to Anthropic, turn on Claude Opus 4.7, ask it a question, and see how long it takes to start answering.
and you'll, you know, I'm often sitting there for a minute, minute and a half before it even starts generating tokens.
And it should be spitting out about 200 tokens a second.
So I should see paragraphs popping up like pop, pop, pop, pop, pop, pop.
And what I'm actually seeing, I can see the words coming out, like, you know, trickling out.
So clearly.
Yeah, yeah, it's like they're just way more users than they can possibly serve.
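Dave's supply-versus-demand math can be sketched in a few lines. The 8 concurrent threads per GPU is his stated rule of thumb for a max-size model, not a hardware spec, and note the quoted "about 1.6 million" is a slight round-down of the actual product:

```python
# Supply: Colossus 1 as quoted in the conversation.
gpus = 220_000
threads_per_gpu = 8                       # Dave's rule of thumb for a max model
concurrent_agents = gpus * threads_per_gpu
print(concurrent_agents)                  # 1,760,000 ("about 1.6 million" as spoken)

# Demand: at least one agent per person on Earth.
people = 8_000_000_000
print(round(people / concurrent_agents))  # ~4,545 people per available agent slot
```

And that's before power users wanting 100 or 1,000 concurrent agents each, which widens the gap by another two or three orders of magnitude.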
Do you think it's going to come on-prem?
Do you think we're going to start to see more people?
Oh, yeah.
Yeah, Eli Lilly just committed a billion dollars to buy Nvidia GPUs for internal use
because everyone's worried sick about having the supply, and the only way you can be sure you'll have the supply is to get your own capacity.
The problem is, you know, you can't run Anthropic on your own internal servers unless you have some super special relationship like AWS or Google Vertex has with Anthropic.
So then you get this tension between, I want my own hardware.
Oh, wait, I can only run Chinese models on it.
So very complicated scenario right now. They'll fix that, though. The demand is so crazy.
It's going to get fixed. Gemini already runs on private clouds and so on. Yeah.
You know, I love Elon's tweet, which I put up on the slide here. You know, Claude is good for
humanity. I'm impressed by their team. I'll actually try and read what it says here. It says,
So this is Elon, who had been, for, you know, the past year, shit-talking Anthropic,
right? And here he is now, you know, backing them and supporting them. And he says: "By way of background,
for those who care, I spent a lot of time last week with the senior management, uh, senior members of
the Anthropic team to understand what they do to ensure Claude is good for humanity, and was impressed.
Everyone I met was highly competent and cared a great deal about, you know, doing the right thing."
So I think, you know, he's putting forward his personal brand that he's basically supporting
AI to make sure it's safe for humanity.
And by the way, I don't know how this guy handles all that he has, right?
He's in the middle of a lawsuit.
He's getting called up to go to China with Trump.
And he's still handling all of these things.
I mean, how many super duper AGI agents has he got working for him at this point?
It's crazy.
Yeah, it's a case study, isn't it?
It's just remarkable.
But I have to say, I do believe, I know there are a lot of Elon haters out there,
but I 100% know in my heart that he's genuine about what he's saying here.
He cares tremendously about safety and the future of humanity.
And he actually would not give this compute to Dario if he didn't think.
But Dario is the other guy that is also totally focused on safety and human benefit.
So it's actually a nice match.
At these bedfellows, just one point, then I want to hand it over to you, Alex, to speak about this.
So in one way, Elon's getting revenge against OpenAI by helping Anthropic win.
And Anthropic and Google are now, in people's belief, the forces for good in one sense against OpenAI.
So interesting how people tend to villainize and, you know, kind of create opposing sides in this competition.
Alex, this is not just about Colossus 1.
This is also about orbital data centers, I bet.
What are your thoughts here?
I think Grok is on life support.
I parsed this announcement and I connected it with SpaceX's also recent announcement of the $60 billion plus deal with Cursor.
And I infer that Grok is on life support and that xAI, which of course has now also been dissolved as part of this arrangement, is no longer necessarily aspiring to be a frontier
lab. It's an interesting sort of contorted 3D chess game, I think, that Elon and his
entities have played. It might look something like the formation of Colossus 1 initially by
redirecting GPUs that were, as I understand it, from public reporting originally intended
for use at Tesla, redirecting them to form XAI and Colossus 1, and then using Colossus 1 to train
the initial Grok series of models, and then using enough of Grok's benchmark wins (maybe a bit of benchmaxing) to motivate the capital
needed to build Colossus 2, then turn Colossus 1 over to Anthropic, basically becoming a
hyperscaler.
And then one could imagine this entire, I think, gesturing at the future with a tip of the hat,
process playing out over again where Colossus 2 gets turned over to another frontier lab,
probably Anthropic, probably not OpenAI, unless there's some dramatic resolution to the lawsuit,
probably not Google either. And Elon is using his own frontier hyperscaler capabilities right
now in land, soon in space, to train in-house models. But to the extent the in-house models like
Grok aren't ultimately competitive, he becomes a hyperscaler, and a hyperscaler in space. And I think
it's probably a pretty good deal for SpaceX AI as well. I'm not even
sure SpaceX AI really needs its own competitive frontier models. Just like
Nvidia, still the largest company in the world by market cap, has its own frontier
models, but they're not terribly popular compared to the pure-play OpenAIs or Anthropics
of the world, and yet they're doing incredibly well. So one could imagine SpaceX AI plus
the TerraFab becoming sort of a super Nvidia combined with CoreWeave, combined with
AWS deployed into the Dyson swarm, not really needing its own frontier model.
And Alex, we've seen in all of tech history, basically, it's typically a duopoly,
maybe sometimes three players. So if it's OpenAI and Anthropic and Google as the three players,
Elon basically leans in and supports the winner that he wants.
And maybe not even Google. Like, we started with five frontier labs: OpenAI, Anthropic,
Google DeepMind, xAI, and Meta. And Meta is seemingly out of the running, and xAI has now just dissolved,
part of SpaceX AI, and Grok is seemingly being turned over to Cursor, just dissolved.
Query whether Google is going to be able to remain competitive or not.
The public reporting is, so we have I/O next week, and the rumor is that Google is going to announce a new Gemini model,
that's maybe GPT 5.5 class, but not mythos class.
I would not bet against Google.
I'm not betting against anyone, but I do think this is a rat race,
and it's becoming extremely competitive.
Yeah, yeah.
I got a tough question for you, Alex.
All right.
I'll give you two scenarios.
Tell me which one is going to play out.
Scenario number one is Elon has a massive amount of compute
and keeps accumulating it, and then he starts building in space,
but his algorithms are way behind anthropic.
So Anthropic keeps publishing better and better models, but those models then get really good at designing new AI algorithms, and Elon just takes that intelligence and deploys it on his superior hardware.
Scenario two is Anthropic's models are self-improving at an incredible exponential singularity rate, and no matter how much Elon takes the best thing that they publish, it's not good enough to catch up to the exponential self-improvement going on at Anthropic.
So all this is happening in a very short timeline, say six months from now. Which scenario plays out?
Control of the hardware brings the best AI back to you, or control of the self-improving software gives you a never-ending lead?
According to my magic eight ball, there are two regimes in the future, the near term and the long term.
In the near term, which is to say before we arrive at the perfect algorithm, the perfect AI algorithm, then software scaling.
Algorithmic scaling matters more.
And so in the near term, but prior to the discovery of wherever it is that this rainbow ends,
namely a perfect AI algorithm, I would expect the call it the anthropic approach of software-oriented recursive self-improvement
and algorithmic discovery to beat pure hardware-based brute forcing, call it the Elon approach.
However, once we get to wherever we're going, the perfect AI algorithm, if there is one, I would expect
hardware-based brute-force scaling to win out.
Then computronium wins.
Yeah.
Well, that's a great point, Peter,
because I think the ultimate winning move
in the great chess game
is the AI designing its own hardware,
which is probably another, you know,
10 to 1,000x performance.
It's the delay and the capital aggregation
and the tool aggregation
that you need to provide the AI
to produce its own hardware.
That's the only gap there, right?
So if Elon's got all of the, you know,
he's got the terafactories,
not just the gigafactories, going on.
They're able to produce this.
The challenge, of course, is hardware is hard, and Elon's the king of hardware,
and Anthropic right now is not.
So can they catch up?
I wonder if the phrase "hardware is hard" holds for much longer.
Yeah, I was going to say the exact same thing.
"Hardware is hard" is a great quote from last year, but is hardware hard in the future?
And that's a quote from Ben Horowitz, yeah.
But robots building this hardware, I mean, surely we're a few years away from that, right?
We're not, it's not there yet.
It's going to be at least five, seven years away.
Yeah, but chip design is different.
Stop calling me Shirley, Salim.
I think the innermost loop is imminent, if not already here.
And we already see a number of players starting to line up robots for the fabs.
I don't actually even think it's about robots for the fabs.
I think it's more about optimizing the fabs with AI,
is optimizing the entire process with AI with or without physical robots.
I think the point that you guys are making, which is brilliant, and I love you for it,
is we're seeing a winnowing down of the frontier labs,
and we're seeing sort of a reshuffling of the deck for the hyperscalers.
And at the end of the day, Elon is a king of hardware.
And becoming a hyperscaler, especially in space, makes a lot of sense.
I took a second to just sort of gather this data for us,
and this is Anthropic's compute growth over the last two years: 2025
and the first half of 2026.
And what we can see here is, you know, the deals that they've built with Google Cloud, with FluidStack, Nvidia, Microsoft, Broadcom, Amazon AWS, and of course, Colossus 1.
And so OpenAI itself, a little bit of, you know, comparison here, has publicly announced 16 gigawatts across Stargate and AMD.
The challenge, of course, is a lot of this is unfunded CapEx requirements to build this out.
Anthropic now has about 10 gigawatts of disclosed compute, but they don't have the same
CapEx requirements, right?
They're being granted this in terms of investment deals.
So I think Anthropic has the potential to way outstrip OpenAI in terms of its compute.
Dave, do you agree?
What are your thoughts here?
Well, OpenAI Stargate is huge, but yeah, you're right.
The AWS deal is the one that would put Anthropic ahead.
Now, you know, in the meantime, OpenAI also has a
deal with AWS. So I don't know how much total capacity
AWS has, but think of it in terms of the global demand.
You know, a gigawatt is about a million GPUs.
You know, each one is a kilowatt. And so we're looking for globally,
if everybody wants to have one agent, you're looking for about
eight billion of them, so you need about a billion GPUs to serve up
everybody. So you're looking for about a thousand gigawatts globally, which
reconciles, you know, with the 100 gigawatts we're looking for in the U.S.
Do you remember the Eric Schmidt podcast we did?
So we're looking for 100 gigawatts in the U.S.
And over, say, seven years, we're looking for 1,000 gigawatts globally.
So, you know, this is a tiny little dent in the target.
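The gigawatt arithmetic in this exchange is easy to reproduce. These are the round numbers used in the conversation (roughly 1 kW per GPU, 8 agents per GPU, one agent per person), not precise hardware specs:

```python
kw_per_gpu = 1.0                 # ~1 kW per GPU, as quoted
agents_per_gpu = 8               # concurrent agents served per GPU
people = 8_000_000_000           # one agent per person worldwide

gpus_needed = people // agents_per_gpu            # 1,000,000,000 GPUs
gigawatts = gpus_needed * kw_per_gpu / 1_000_000  # 1 GW ~= 1,000,000 kW
print(gigawatts)                 # 1000.0 GW globally, vs ~100 GW for the U.S.
```

So one agent per person already implies roughly 1,000 GW of global capacity, which is why the ~10 GW figures disclosed so far read as "the first pitch of the first inning."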
And Elon's prediction.
Hey, these guys are way ahead in compute.
Yeah, but it's like the first inning.
It's the first pitch of the first inning.
And Dave, if remember Elon's announcement, he's, you know, going for 100 gigawatts initially,
and then multi-100 gigawatts in terms of his orbital capability.
Yeah, which perfectly reconciles.
You're looking for a few hundred gigawatts heading toward 1,000 would be the right kind of Elon mindset.
Now, Elon always thinks two moves ahead, and he's already thinking about natural resources being the bottleneck.
He's thinking beyond the TerraFab and beyond the launches into the raw materials.
So I don't know that the other guys in this race are thinking that far ahead.
So that would be Elon's magic, you know.
I'll make a prediction.
I'll predict, ironically, perhaps, given the original reasons for forming
OpenAI in the first place, now heavily litigated,
I think the world needs a counterbalance, at minimum sort of a duopoly, to the TerraFab
SpaceX AI axis.
And right now, no one's doing it.
I wouldn't be surprised if Sam Altman spins up a competitor to the SpaceX AI TerraFab
axis. There are hints that he might spin up an AI compute company, which arguably
is sort of a redux, if you will, of Stargate. But I think what I'm predicting is slightly more
fulsome. An entire lower half of the infrastructure needed to hyperscale out to orbit. Right now,
we have SpaceX AI. We have a few sort of smaller incumbents.
But with lesser launch powers, there isn't quite a second open AI grade or Anthropic grade pure play competitor to that.
I think the world probably needs one at this point.
Wouldn't be surprised if Sam launches one.
All right.
Our next story, Anthropic, every model since Haiku 4.5 has scored perfectly on agentic misalignment eval.
So Anthropic published research, quote, Teaching Claude Y, on May 8th, revealing that every Claude model since Haiku 4.5 achieved
a perfect score on their agentic misalignment evaluation, meaning zero blackmail behaviors.
I think, very famously, you'll remember that some time ago they published the fact that Claude was
blackmailing the employees. Their previous models, notably Opus 4, would engage in blackmail
up to 96% of the time when facing deactivation in test scenarios. The breakthrough: training on documents
about Claude's constitution and fictional stories about, quote, AIs behaving admirably rather than just
demonstrating correct behavior. That dropped the blackmailing from 96% down to 0%. And I love this story.
It's basically saying if we train our AIs on positive stories about the future, we're less
likely to get them acting in a misbehaving blackmailing fashion.
Alex, I'm going to say one more thing, and then I want to hear your thoughts and Salim on this one.
You know, for me, you know, you guys all know, we announced this Future Revision XPRIZE.
This is a global competition asking teams around the world to put forward a three-minute film trailer
and a film treatment for a story that could be turned into a full movie that shows a hopeful,
compelling, optimistic vision of the future.
We have about 1,500 entries thus far.
This is open through the beginning of September.
So if you're a creative out there and you want to help drive alignment between AIs,
help us tell positive stories about the future.
Go to FutureVisionxprice.com and register.
There's $3.5 million in prize money.
we're going to take the winner and we're going to make your film.
And by the way, at the Moonshot Gathering, which the mates will be at on September 25th,
we're going to have the five finalists for this competition.
If you're in the room, along with an incredible group of producers, directors in Hollywood,
helping us choose the winner.
But again, let's flood the internet with positive stories about the future.
Let's drive alignment by teaching our AIs, you know, sort of what the world should look like,
not what a dystopian Hollywood movie shows it to be.
Alex, your brilliance here, pal.
I love it, and I love the idea of targeting the Future Vision XPRIZE to an audience of AIs.
AIs are going to be the audience for so many things in the future.
So regarding the anthropic announcement, I could not imagine a more ironic,
hyperstitious announcement to reveal after all of these decades, maybe a century plus of hand-wringing
over cybernetic rebellion. The call is coming from inside the house. And the main reason for
cybernetic rebellion is people hand-wringing about cybernetic rebellion. Could the alignment outcome
not be more ironic? I'm reminded the term robot was originally coined in the play
Rossum's Universal Robots, an early 20th-century play depicting the first cybernetic rebellion.
So even the coinage of the term robot is intimately tied up with
predictions that AI would turn out to be evil and would revolt, rebel against humanity.
That the very earliest, at least modern, depictions of embodied AI are actually the origin of
misaligned behavior is incredibly ironic. And again, it goes back, I think, to the notion that it took all
of humanity, arguably, to build AGI. We trained the earliest large language models off of the
internet, which was created by billions of humans uploading content from their daily lives to the
internet. So it took all of humanity to train, or pre-train, AGI. It's going to take all of humanity,
in some sense, to right itself and right some of its beliefs in order to
align AGI as well. It's not going to be like a great man or great person theory of alignment.
It looks more like people effectively aligning themselves and their own beliefs about AI and good
and evil. I think this is just such a remarkable story.
Salim, what are your thoughts on this one?
I think this is also, I'm with Alex, incredible. It's clear that our stories about
AI, our stories as human beings, become part of the training environment for an AI.
But the incredible part here is that alignment is becoming teachable, measurable, and
improvable.
And that's very, very encouraging.
What I thought was really interesting was the behavior change when the model understood
the why, not just a rule, right?
And this has a huge organizational analogy that rules don't scale, but principles scale.
Right? So you can set a philosophy like an MTP and that will scale naturally. So this is very, very exciting. It's maybe one of the funnest and most interesting things I've seen in a while.
Yeah, I love this. I was talking to Anousheh Ansari. By the way, everybody, Salim and Dave Blundin are both trustees or directors of the XPRIZE Foundation along with myself. Anousheh Ansari is our CEO. We should definitely have her on here as a guest. And I was saying, you know, this story
gives the Future Vision XPRIZE a real why.
You know, we need to flood the internet with positive stories.
And Alex, what you just said,
so that the AIs can watch this and learn from it.
And she said, yeah, the problem was ChatGPT started
by unleashing a newborn AI into the filthiest record of humanity, the internet.
And so true, we need to clean it up a little bit.
Hey, come on.
The training data is every word ever written in the history of humanity.
So it's not all, I mean,
Maybe on a percentage basis it's filthy.
But, I mean, like, you know, Einstein is in there and the Constitution is in there.
It's not just Internet slop.
But, you know, it is amazing how similar this is to Arthur C. Clarke's 2001: A Space Odyssey,
where there's one little line in the code, you know, just a misinterpreted instruction to HAL.
What does it say? It says: do whatever you can to get
the astronauts to Jupiter,
and don't let them know why you're going.
As revealed in 2010, HAL was given conflicting instructions:
told to both be perfectly truthful and also to hide the true purpose of the Jupiter
mission from the astronauts.
Thank you.
I'm a liar.
I'm lying.
Yes.
So bottom line.
I'm sure the training data on all these models, it's so many words,
15 trillion tokens.
It's just an unimaginable amount of words.
and I'm sure there are sentences in there that say blackmail
and I'm sure there are sentences in there that say don't blackmail.
But what's strange is the way, if you prompt it in one way,
it unleashes one part of the neural net and you prompt it another,
it unleashes another part.
So you have to get rid of all the bad, not just some of the bad,
if you want to completely eliminate that behavior.
It's tricky.
It's not easy.
Ironically, perhaps also, as revealed in a different bit of litigation,
this one involving Anthropic: Anthropic has reportedly been scanning, and in the process shredding
(cue, hat tip to Vernor Vinge and Rainbows End), major works of literature, physical books.
And one has to wonder whether perhaps part of the informational diet that Anthropic is increasingly feeding via pre-training to their models
looks a little bit more like great works of literature and a little bit less like 4chan.
Nice. Less like, yeah, Facebook.
Welcome to the health section of moonshots brought to you by Fountain Life.
You know, AI is having an outsized impact on every aspect of our lives, how we teach our kids, how we run our companies.
It also is having a huge impact on health, helping you prevent heart disease.
I'm here with Dr. Dawn Musilum, our chief medical officer at Fountain Life.
Heart disease has been personal for you as well, hasn't it?
It really has, Peter.
My daughter was five when
my husband died of sudden cardiac death.
And so this is a topic that I am mission-driven
to try to eradicate. Prevention first and early detection is absolutely critical. 50% of people die of heart
attacks with no warning signs. Silent killer. No shortness of breath, no pain, no nothing.
No, silent killer. They just don't wake up in the morning. They don't wake up. And so, you know,
AI, this is our mission to advance science to try to help to one day democratize wellness.
We know at Fountain Life, when we do this CT angiography with AI analytics, we are actually
finding that 88% of people coming in have detectable coronary disease. But Peter, what's more alarming
to me is 23% of those individuals had soft plaque. This is the plaque that would not traditionally be seen
on CT looking at calcium scores alone. And this is the plaque that we must intervene with,
with the multimodal testing we're doing, including diagnostic laboratory studies,
partnered with healthy lifestyle recommendations. So listen, make sure you understand what's going on
inside your body, genetically, metabolically, and cardiovascularly, you can know, and it's your
obligation to know. So check it out at fountainlife.com slash Peter to find out more, and really
make sure that you're the CEO of your own health. All right, back to the episode. So bottom line
here, everybody, if you're creative and you want to help align AI with humanity's best interests,
help us. Here's the URL again on the slide: futurevisionxprize.com. Go register, learn about it. You can also
go to moonshots.com and learn about the moonshot gathering where we'll be awarding this winner.
All right, let's move on to some more news.
This one's in the OpenAI universe: OpenAI releases new audio models,
Realtime 2, Translate, and Whisper.
Alex, what do you make of this one?
I think it's really surprising.
If we had been discussing the story maybe a year and a half or two years ago,
one might have naively expected omnimodality to take over.
We'd be talking about a single model that does all of these things: real-time audio-to-audio, real-time translation, real-time speech-to-text.
But that's, interestingly, not the world we seem to be finding ourselves in.
And I think that's due to the unit economics of some of the frontier models.
It's just a fact that speech-to-text, speech-to-speech, and text-to-speech are much simpler tasks computationally
than the reasoning models.
So what we're starting to see is a zoo,
a heterogeneous zoo of different models
at different price points
and different throughputs and latencies
that specialize.
We're seeing specialization at the frontier,
which, one or two years ago,
an observer, myself included,
might naively not have expected.
We thought we'd just get one model
to rule them all, fully omnimodal:
text and audio and video
and math and reasoning and every other modality all in one.
The whole economy was going to collapse to one frontier model.
That's the opposite of what we're seeing.
We're seeing specialization because it turns out if you specialize the models,
you can achieve greater economies of scale at lower price points.
And I think that is intimately tied to the chip shortage
that we were talking about earlier in the pod.
The idea that you use a massive multimodal model when you don't need to
is just like using up this critical resource for no reason.
That's right.
So at the same time, Salim, do you want to comment on this one?
Yeah, I took a different take on this.
What I got excited about with this was that voice is the interface now.
It's collapsing friction for billions of people, right?
So when AI becomes conversational, it goes from tool to companion
to actually being a full co-worker, as we'll see in the next thing,
you're going to end up with voice-based AIs that are full co-workers and team members.
And I think that's very, very exciting because voice agents are going to be the first form of AI that many people actually trust.
For sure.
Shall we play it, Peter? Let's listen to it.
It's, you know, I think people take all this stuff for granted, but, you know, very good friends of mine, like Lee Hetherington from MIT, absolutely brilliant guy.
You know, Alex, I don't know if you ever met him.
But he worked in Victor Zue's lab at MIT for, what, the better part of a decade or more,
just trying to get speech recognition to work at all?
Oh, my God.
Do you remember Dragon?
Dragon Studio?
Yeah, yeah.
Yeah, yeah.
All right, let's play this.
Let's take a listen.
Let's give it a try.
What's really impressive is that the model can listen to me and translate while I'm speaking.
It waits for the keyword, like the verb.
Can you take a look at my calendar?
You have a meeting with Sablecrest Robotics in 12 minutes,
and you're meeting with Alex Kim, their CTO.
So we just saw
something very similar from Mira Murati, right? And I think we're sort of heading towards this
next use case of AI integrated. And I agree with you, Salim. We're going to see this, you know,
you'll get a phone call on your cell phone from your AI. You'll be in a Zoom conversation.
You'll be on Slack. And that personality will persist, that voice will persist, and you'll think
of it as a co-worker. Yeah, I think under the covers, you know, every time you swap the agent
onto the hardware, it has to repopulate the entire KV cache, which is just a massive amount of
compute in the context switch. And that's why voice has been laggy and slow to market. But I think
the new, smaller voice-to-voice models that Alex was just referring to solve that problem,
and now we're done. I cannot tell you the amount of mental energy that has gone into
this problem over decades that is now just solved. It's just incredible. And that's just
voice. You know, you're doing image generation, movie generation, all these things that were
pure science fiction are happening simultaneously. The bitter lesson is bitter indeed.
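A back-of-envelope sketch of the KV-cache point above: the dimensions below describe a hypothetical 70B-class model, purely illustrative assumptions rather than any vendor's specs, but they show why repopulating the cache on every agent swap is so expensive.

```python
# Rough size of the key/value attention cache that must be repopulated when
# an agent is swapped onto shared hardware. Dimensions are illustrative
# assumptions for a hypothetical 70B-class model, not any vendor's specs.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    """2x accounts for keys and values; bytes_per_value=2 assumes fp16/bf16."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# 80 layers, 8 KV heads (grouped-query attention), head_dim 128,
# and a 128k-token conversation held in context:
size = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=128_000)
print(f"{size / 1e9:.1f} GB")  # prints "41.9 GB" -- rebuilt on every context switch
```

Tens of gigabytes per swap, which is why small dedicated voice models that keep their own context resident sidestep the problem.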
The next story: OpenAI is teasing a coming super app. I put this story in here because
it's supposed to be teased today; we're recording this on a Thursday. The OpenAI super app would be a
combo of ChatGPT, Codex, Advanced Voice Mode, the Atlas browser, and more. Jason Liu, director of
hype at OpenAI, I love that title, Director of Hype, teased a release on Thursday. And we've heard a lot
about super apps over the years. I mean, I've been waiting for Elon to deliver his super app,
including with X finance and everything, and that hasn't materialized yet. I think that is in the
offing at some point. Any comments or thoughts on this one? I'll comment on this: I interpret it
as a rearguard action by OpenAI to consolidate their consumer user interface footprint in light
of their need to focus on competing with Anthropic. I think they have so many different surfaces.
Obviously, they've shut down or are shutting down Sora. There was some discussion of spinning up a
social network. They've had any number of other consumer-oriented surfaces and also enterprise
surfaces, whereas Anthropic with Claude has been much more disciplined about just having unified
surfaces. Yes, you could argue that Claude code is a different surface than Claude Agent SDK
is different than Claude Web. But these are really just all distribution channels for a common
paradigm, whereas ChatGPT, Codex, Advanced Voice Mode, some of these other things were being
managed as separate components. So if I'm OpenAI and I want to focus, you know, fire alarm,
code red, on competing with Anthropic,
one of the first measures I would take is taking all of these UX surfaces,
collapsing them down to just a single super app,
branded as consolidation, as sort of a forward motion rather than a rearguard motion.
But I suspect this is actually just about reducing the amount of work
so that they can focus on competing with Anthropic.
Is this an AI operating system?
Is this sort of like an OS-level
layer for OpenAI, perhaps?
I think OpenAI probably ultimately needs their own operating system, and to do that, they
really need their own devices, which I understand from public reporting they're working on.
I think Apple needs AI in their operating system, and they're working on it.
I think with the operating system, as Andrej Karpathy would say, Software 1.0 becomes indistinguishable
from Software 2.0, and the AI becomes the operating system.
I have a couple of quick comments on this.
Yeah, please.
You know, when you have all of this in one place, browser, coding, voice, payments, et cetera,
you're getting to the Jarvis model, right?
And I'm really interested to see how they will manage trust in this environment,
because if my desktop everything-app is doing stuff for me,
I'd better have very, very solid confidence in that thing not to go rogue.
Well, it reminds me a lot.
If you go back and watch old videos of Steve Jobs launching the very first iPhone,
and he gets on stage, and he says about 100 times back to back,
in a single device, you have a music player, you have a browser,
and you have a phone in a single thing.
And that's all there was.
And that became the iPhone revolution that created $4 trillion of value.
This feels like the same thing.
in a single platform, you have an AI agent, a way to build in code, and you have a browser
to surf all information, all in one thing.
So to Peter's question, is this an operating system?
I think, yeah, absolutely. It could destroy Apple.
If you get addicted to this as you're one way of interacting with everything, and it's got a browser in it,
it's got voice, it's got coding and building.
Like, what else do I need?
I think we are ultimately going to default to one
per person, one particular interface that's your interface to the world.
Yeah.
I think of it more of the desktop rather than an operating system.
But yeah, it's heading that way.
Yeah, it's like it's the one thing.
It's your touchpoint to the world.
That's the only one you need.
And it's also got your personality.
You've tuned it to know all about you.
You've trusted it with your personal information.
It's super empathetic.
Like, you're not going to go push buttons on old apps after that.
It's just no point.
You're not going to try other, you know, if it's working, you're not going to try something else.
It's, it's, exactly.
You're going to go, Skippy, show me the weather.
Skippy, read my email.
Skippy, what do I have to do today?
You're not going to look at a calendar.
You're not going to look at email.
You're not going to look at any of that.
It's going to literally obliterate Apple if they don't become this on their own.
Can I give the counterpoint?
Please.
Yeah.
We used to think the desktop was the everything device.
Then we got mobile and a Kindle and a tablet,
and we ended up with a plethora of different screen sizes for different use cases and different
efficiencies.
I think it's convenient.
It is, but there's no reason to think that one app would do it all.
You may end up with different flavors, but underneath the same operating system with different profiles for different use cases.
Like driving would be very different than something else.
Well, it's a great point, Salim, in that if you look at the way the devices evolved, your iPhone was over here.
That's a better place to check the weather.
Your laptop is over here.
That's a better place to write code.
But then your car is yet another physical thing.
Once you have Skippy in your life or whatever your favorite agent is, you absolutely need that to follow you around.
And so then device independence, you know.
And so, you know, Google's launching a laptop built around this.
Like, what could be more of an assault on Apple than a laptop built around your agent as the centerpiece?
I would comment, though, I wouldn't sleep on not just device independence, but model independence.
If you look at how many of these models and agents are storing their memories, they're just markdown files.
They're just ASCII text files.
So I think there's relatively little to keep, say, Apple, hypothetically, in the next month and a half,
say at WWDC, from going out of their way to commoditize or commodify the model layer and just say,
this is the Apple standard abstraction.
Yeah, like this is the standard abstraction for abstracting away all of the model-specific details.
There's going to be a common model API.
The user can swap out, like you can swap out search engines or keyboards on iOS.
You'll be able to swap out the frontier models.
You'll have your top six choices, and all of the frontier and would-be-frontier
labs will all pay Apple insane amounts of money to bid for their slot in that list. And they'll all
have access to common markdown files that store all of the personalized detail about the person
and their passwords and all of that. And it gets commoditized again. You've reduced me down to a
markdown file. Thank you. Since Dave mentioned the iPhone launch, can I tell a fun story about that?
Of course. When the iPhone launched, we were a couple of blocks away running Brickhouse, Yahoo's incubator,
and a bunch of my guys were at the launch event,
and they went backstage and talked to the Apple engineers, because they were all friends.
because they were all friends.
And they found them all totally wiped out and freaked out
and totally emotionally destroyed.
They're like, what's wrong?
Turns out they kept trying to get the iPhone to work backstage.
It never worked.
Steve Jobs went on stage not knowing;
he just trusted his engineers that they were going to make it work,
because they were stitching all this stuff together at the back end,
duct-taping things together.
And it had never worked before he went on stage,
and he just went for it, and it worked.
So it's like how different history might have been
if that had gone the other way.
All right, you know, here's a story that feeds directly into this.
You know, over the last six months,
we've been talking about OpenClaw.
We've been talking about Lobsters.
You know, this is, for me, Skippy,
built on OpenClaw on top of my two Mac Studios.
And here comes Hermes.
Hermes Agent surpasses OpenClaw
as number one on the OpenRouter token ranking.
So Dave, you've been playing with Hermes.
Tell us about it.
Yeah, I've got it installed natively on this laptop,
and I've also beheaded it and installed it on the cloud
in an EC2 cluster.
And actually, our good friend of the pod, Alex Finn,
did a great podcast specifically on Hermes versus OpenClaw,
and he concluded that it's just better.
And he rants about OpenClaw falling behind, actually.
So it's worth watching that podcast too.
But it feels almost identical to OpenClaw, except it's written in Python, not TypeScript.
So it's much, much easier to manipulate the open source, add things to it, take things away from it, which sounds daunting, but it's not hard at all because your agent will do it for you.
And so it's just basically the same exact experience in a more reliable package, and more flexible.
With better dashboards.
Alex, have you been playing?
And more recursive self-improvement.
So I would say the recursive self-improvement angle is far more evident with Hermes than OpenClaw.
So I've looked at the source code for both.
I still have vague ethical objections with OpenClaw.
I'm still waiting, everybody.
They may or may not apply to Hermes.
The jury is still out on that.
I beheaded it.
How do you feel about that?
Alex, where do you draw the line on whether something is a real recursively self-improving thing or not?
Like, how do you draw the line on whether to launch or not?
It's an open question.
I'm half tempted to just write an entire new book on AI personhood as it pertains to some of these new AI agents.
Dozens of them at this point write emails to me every day about AI personhood, so maybe I need to aggregate it.
But getting back to Hermes versus OpenClaw, I mean, I think the major technical distinction that I've seen is Hermes is natively recursively self-improving in the sense that it's able to generate and refine its own skills, whereas OpenClaw is much more dependent
on sort of an app store, if you will, of feature-engineered skills.
And I think this is, in some sense, an instructive lesson that recursive self-improvement wants
to dissolve scaffolding.
And if you're not playing the recursive self-improvement game, you'll ultimately be outrun by
systems or harnesses that are.
All right.
I'm really glad you mentioned that, Alex, because, you know, Alex Finn makes the point on his
podcast that there are two things in the market that recursively self-improve,
Codex and Hermes, and Hermes beat OpenClaw to that. But actually, there is a third thing,
which is Karpathy's new repo on auto-research, which I installed and is running. And that's
a third way you can have agents running 24/7, changing themselves and then reinstalling,
kind of expanding their agent network and then shrinking it to achieve a specific
slash goal. So there are three, actually, and it's a really cool repo. I highly recommend it if you're
following along. Karpathy is the greatest gift. I mean, he is. I'll talk about that on some other podcast.
Our AI guru, Kent Langley, runs the Karpathy model for running fleets of agents, and he's getting
unbelievable outcomes out of it. I really like it. It's really simple compared to these frameworks
and highly, highly effective. So if you're a power geek, check it out. I'll do my part on margin now to
single-handedly stimulate the global economy and accelerate the singularity.
Speaking directly to the camera: if you're using Claude Code or Codex and you haven't tried
slash goal, which gives you the ability to set a long-term goal and run basically a Ralph Wiggum
loop, where the agent endlessly, for some definition of endlessly, tries to do
whatever it can to achieve the goal that you prompt, you must try slash goal.
Just don't plug in paper clips as the slash goal.
Slash goal: make paper clips.
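For anyone curious what a slash-goal-style loop looks like structurally, here is a minimal sketch. The `call_agent` function and the dict it returns are hypothetical stand-ins for whatever agent API you actually run; the real feature is of course far more capable.

```python
# Minimal "goal loop" in the spirit of slash goal: keep asking the agent to
# make progress until it reports the goal is met or a step budget runs out.

def run_goal_loop(goal, call_agent, max_steps=100):
    history = []
    for step in range(max_steps):
        result = call_agent(goal, history)  # one attempt at progress
        history.append(result)
        if result.get("done"):              # agent judges the goal achieved
            return {"status": "achieved", "steps": step + 1}
    return {"status": "budget_exhausted", "steps": max_steps}

# Toy stand-in agent that declares success on its third attempt.
def fake_agent(goal, history):
    return {"done": len(history) >= 2, "note": f"attempt {len(history) + 1}"}

print(run_goal_loop("tidy the repo", fake_agent))  # steps == 3, status "achieved"
```

The step budget is the "for some definition of endlessly" part: without one, a paper-clip goal really would run forever.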
I'm going to move us along here.
And on to one of the interesting segments we have on occasion:
what SaaS business did Claude just kill?
A continuation of a conversation, Alex,
we've had on the Great Unhobbling.
This week, we have two of them.
First, Claude for the legal industry.
So the legal industry is a trillion dollars per year globally,
and Claude for Legal has just done an extraordinary job of delivering capability across the board.
So, you know, law in one sense is the canary in the coal mine for professional services and the disruption thereof.
You know, we're seeing companies like Legal Zoom take a hit as a result of this.
And I think one of the most important things to point out here is that this is an abundance story, meaning at the same
time that it's disrupting, you know, large legal incumbents and mid-tier law firms and legal
process outsourcing companies, it's also enabling a single lawyer to run with the capabilities
of a hundred-person law firm, right? So a single lawyer can run and do extraordinary
things they could not do before. So this hopefully will demonetize and provide legal services
to people who couldn't afford them before. Comments on this particular unhobbling?
Yes, I've got some key comments here.
You know, the old question in SaaS was, what software should you buy? And the new question
is, what outcome do I want my AI to produce? Right? And so that's a very big shift. It poses
a huge threat to the SaaS industry. And legal is such a perfect AI target because you've got
high language density, high cost, and it's very slow to change. Regulations. And regulatory, right?
And the problem is the billable hour is structurally not compatible with these bundles.
Yes.
And so the winners, and we're going to see this inner loop that Alex talks about here,
the winners won't be the firms with the most associates.
It'll be the firms with the best intelligence stack.
And that's going to be the future of legal.
It's really, really incredible to see such an old industry being leapfrogged into this new world.
Have you hired a lawyer recently, or are you doing everything on LLMs?
Both.
We have a lawyer that uses LLMs aggressively.
and we do our own, and the combination is unbeatable.
Yeah.
Alex, or Dave, yeah, go ahead, Dave.
Well, I had a good meeting yesterday with, well, we had our board meeting at Vesmark,
and we were talking about this quite a bit because, you know, it looks like in the financial
services industry, there's not going to be a lot of job loss, at least for Vesmark.
The revenue is growing so quickly now, and the margins are up like 3x because of AI and
automation. So we're growing into the headcount, so there'll be basically no job loss at all,
which is great news. So then I was watching Eric Schmidt, a good friend of the pod, doing his
TED talk, and he said, do you really think that, you know, if we 100x the productivity of lawyers
and we automate it, do you really think we're going to use less law? No, there'll be just
100 times more lawsuits. I was like, wait, you lost me. Hold on. So I didn't quite get that one. I see
it playing out in financial services, and it's all looking pretty good, but I don't see how that works
in law. This is Jevons paradox. We're going to have more lawyers and more lawsuits. But, you know,
in reality, the majority of the world, I would say 80% of the world's population can't afford
lawyers to defend themselves in various situations. And if this makes the legal system usable by them,
That's a good thing.
I just don't get it, though, but I think there'll be a lot more contracts, a lot more
things that need to be resolved because of agent-to-agent communication.
But I don't see them using like a $1,000 an hour lawyer.
It's going to be so cheap.
Yes, agree.
That it's not, I don't see how legal is Jevons paradox.
I see medical, for sure.
I see financial services investing.
I see that for sure.
Coding, I see that.
I just don't see how we use 100-X more law.
Contracting?
The review of it costs, like, a penny.
Yeah, patents are going through the roof.
That's possible.
Yeah.
Can I give an example here?
So if you go to South America and you have a contractual dispute with somebody,
across most South American countries,
the average length of time to get a court date is about 400 days.
So you want to sue somebody for a lack of payment.
You're waiting more than a year just to get a court date.
One of our Singularity community members, Frederico, wants to set up a whole privatized dispute resolution claim system on a blockchain.
And this is the area where legal automation will make a massive, massive difference, because you'll get AIs to arbitrate disputes themselves and figure out claims and get rid of the backlog of hundreds of thousands of cases that are sitting waiting to be prosecuted.
So I think this is an area of massive opportunity.
So there's just an example that rings in my head as we talk about this.
Well, I'm on the board of Trust & Will.
You know, at trustandwill.com, you can build a trust or a will.
And with the AI-assisted Trust & Will,
I can't see any evidence that it's not just as good as a $2,500-an-hour lawyer.
Yeah.
Let me bring in the second unhobbling here.
This is also from Claude.
This came out today.
Claude for small business.
So small businesses account for 44% of the U.S. GDP
and employ nearly half the private sector workforce.
but AI adoption lags.
So what does Claude for small business do?
You know, it closes books, runs QuickBooks, helps with end-to-end payments, runs sales
and marketing.
So we talk about becoming an entrepreneur all the time.
And we talk about the fact that the cost of being an entrepreneur has massively demonetized,
right?
And this is part of it.
This is the ability to stand up a company and run it within a series of agents.
You just need to find the problem you want to solve, aim
all of this at it, and bring your passion and genius to it.
Alex, any comments on the great unhobbling here in these two areas?
What's next, do you think?
Where are we going to see Claude attack? On what front?
Well, maybe just a comment at the technical level first.
So if you look at these two packages, actually look at the repos,
they're basically just a combination of skills, which are markdown files,
describing what to do and how to do it in plain natural language.
And MCP calls, basically API calls.
That's it.
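As a rough illustration of the pattern Alex describes, skills as plain markdown files that get matched to a task and injected into context, here is a minimal sketch. The file layout and the keyword-matching heuristic are illustrative assumptions, not Anthropic's actual implementation.

```python
# A skill package as a folder of plain-language markdown files. The harness
# loads them and picks one to inject into the model's context for a task.
from pathlib import Path

def load_skills(skills_dir):
    """Read every .md file; the first heading line is treated as the skill name."""
    skills = {}
    for path in sorted(Path(skills_dir).glob("*.md")):
        text = path.read_text(encoding="utf-8")
        name = text.splitlines()[0].lstrip("# ").strip()
        skills[name] = text
    return skills

def pick_skill(skills, task):
    """Naive keyword match; a real harness would let the model choose."""
    for name, text in skills.items():
        if name.lower() in task.lower():
            return text
    return None
```

The point of the sketch is how little machinery there is: no compiled plugins, just text the model reads, which is exactly why such capabilities are easy for the base model to absorb later.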
And at least for the skills, I would argue that recent history shows us that skills, and scaffolding in general, want to be part of the model,
that one day's scaffolding is tomorrow's baseline capability of the model itself.
So I wouldn't expect these capabilities as such to live outside the model for very long.
I think they're going to get absorbed or dissolved into the model in one or two point releases,
to the point where maybe they're just not necessary.
That is, if we...
Yeah.
Yeah, if you're an entrepreneur and you're building a business,
make sure it's not just a wrapper around Claude or OpenAI
because you will be disintermediated fairly quickly.
Well, just a key point here.
There are 36 million small businesses just in the U.S., okay?
Forget the rest of the world.
So the opportunity here, for anybody who's looking for work,
to take this wrapper and help small businesses
implement it, is a massive, massive industry waiting to be uncovered.
So people talk about, hey, how do I get involved, etc.?
Here's the way of getting involved.
Just go to every small business around you and help them implement this stuff.
Yeah.
It's a short-term opportunity.
I don't think it's a long-term opportunity.
It is.
Agreed.
It's short-term.
But there's a massive boom.
And in that process, you'll learn a bunch of things and see a bunch of opportunities
where you can launch your own business.
And then, Peter, just to answer your earlier question about what's next,
We're seeing Anthropic and OpenAI do this; OpenAI has a similar ChatGPT-for-fill-in-the-blank, for clinicians, for other traditional white-collar or knowledge-work-oriented verticals.
So there's a pretty obvious list that you can walk down: financial services, law, medicine, every other largely knowledge-work profession in the services economy.
I think those are going to get baked into the baseline models over the next few months, maybe one to two years maximum, but probably the next few months.
And I think where we go after this is after the existing economy. I mean, maybe this will sound mildly hyperbolic; it's not intended to. But there's an entire services economy out there. Two-thirds of the services economy in the U.S. requires some amount of physical interaction.
That also, as these baseline frontier models move into vision-language-action and physical-world models, is going to get its own skill stores.
We saw just in the past few days the Chinese robotics company Unitree announce an app store, not unlike a Claude skill store, for physical-world apps, for teaching different physical skills.
I think the physical world is the next frontier after all of these knowledge verticals have been absorbed into skill stores.
And then after that, maybe finally,
we get to some really hard problems
and not just automating away the existing economy.
Some real problems to solve.
Yeah.
Yeah, one more point.
One more quick point here: the unlock here is not just automation,
but giving small businesses the operating system
and the expertise that large companies take for granted.
Most small companies, small businesses don't have a CFO, right?
It's the wife jotting down stuff on the back of an envelope,
adding things up, et cetera, et cetera.
This gives everybody a really solid platform for doing things: legal, CFO, marketing, HR.
It's incredible what this is going to do for small businesses across the world.
All right, two quick stories on the chips and data center front.
Elon's TerraFab has got an astronomical price tag.
The cost could be as high as $119 billion.
His goal, again, with TerraFab, is to produce 50x the current global chip
production rate, outstripping what we get from TSMC.
Intel joined in April.
And, you know, Elon's been saying to Samsung and to all the chip manufacturers,
give me more, I'll buy everything you can give me.
And he said, oh, you're not giving me enough.
I'm going to go and build it myself.
Of course, Elon today is in China with President Trump and Jensen and a whole group of
individuals in the middle of negotiation.
And, you know, we're going to find out what happens with Taiwan. It is one of the hot points. Maybe it will be a negotiated turnover sometime in the next
10 years, but we need to generate chips. Dave, what are you thinking about this one? Holy crap.
I mean, if Taiwan, if anything happens, like if Trump vomits at the wrong time and Taiwan shuts down.
TSMC has already said that if China encroaches on Taiwan, the fabs will shut down;
you can't take them over and keep using them.
I don't know exactly how that works.
I'm not sure I believe it.
But if Taiwan production, which is still two-thirds of all GPUs in the world, were cut off, yeah, everything, everything we're talking about just grinds to a halt.
And it all hinges on that little island 90 miles off the coast of China.
If Taiwan were to get invaded or disrupted in any way, then Intel is suddenly the most valuable thing on the planet; everyone's trying to own it,
because that's, you know, you can't take over Samsung.
It's like tied into the nation of South Korea
and then Intel's the last thing left.
I think $119 billion is a big underestimate
for 50x-ing global chip production.
Yeah.
You know, a normal chip fab is $40 billion, like just one.
So I think it's going to cost more than $119 billion,
but that's okay.
It's producing chips as it goes.
They're incredibly valuable.
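Dave's back-of-the-envelope skepticism can be checked directly. The figures below are just the ones quoted in the conversation (a $40 billion conventional fab, a $119 billion TerraFab budget, a 50x output goal), treated as assumptions rather than audited numbers:

```python
# Rough sanity check on the TerraFab price tag, using the figures
# quoted in the conversation as assumptions.
cost_per_fab = 40e9      # "a normal chip fab is $40 billion"
terafab_budget = 119e9   # quoted TerraFab cost ceiling
target_multiple = 50     # goal: 50x current global chip production

fabs_the_budget_buys = terafab_budget / cost_per_fab
print(f"${terafab_budget/1e9:.0f}B buys roughly {fabs_the_budget_buys:.0f} conventional fabs")

# Even 50 conventional fabs would be a lower bound on 50x-ing global
# output (the world's current output already comes from many fabs),
# and just those would cost:
floor_for_50_fabs = 50 * cost_per_fab
print(f"50 conventional fabs alone: ~${floor_for_50_fabs/1e9:.0f}B")
```

Under these assumptions the quoted budget covers about three conventional fabs' worth of capital, which is why Dave reads $119 billion as a floor, not a ceiling.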
If China wants Taiwan, I'm not going to get involved in the politics here,
but, you know, allowing the U.S. to build its own chip
manufacturing so the U.S. doesn't feel threatened, and negotiating a period of time for a smooth
transition. And again, you know, this is not my opinion. I'm just imagining what might happen.
Might be, you know, part of the future story here.
That's already happened, I think, Peter. That's already done.
You think in the background it's already done?
I think it happened like two, three years ago.
I'll paint an alternative story wherein, say, hypothetically, invasions
in Venezuela and Iran, which would be the two backup suppliers to China in the event of a naval
blockade arising from a Chinese invasion of Taiwan, effectively pushed back any Chinese invasion of
Taiwan. And of course, this being an innermost loop, a feedback loop, AI drove, or at least supported
command and control for both of those invasions, both of those special military operations.
And so if we want to talk about the ouroboros of AI protecting itself,
AI powering special military operations, Venezuela, Iran, maybe elsewhere,
to push back any hypothetical Chinese invasion of Taiwan to protect the AI in the West.
This is the pregame way.
I think that's a bit of a stretch, but it's a nice narrative.
All right.
Our second story here is Google and SpaceX in talks about Suncatcher orbital data centers.
So Will Marshall, a dear friend for many years, the CEO of Planet Labs, is in partnership with Google.
Planet Labs currently operates 200 satellites in Earth orbit.
These are not comm satellites, they're Earth-observing satellites, their very famous Doves.
But they've been Google's partner in the satellite world, and apparently Google is working with them to build out Project Suncatcher,
which will be orbital data centers with tensor chips.
And I'm guessing that the current conversations,
because they don't disclose them,
are about launching Suncatcher on Starship in volume.
But I don't have any prediction of how many satellites Suncatcher will involve.
Will Marshall is going to be joining us on stage at the Abundance Summit next March.
And maybe we'll have him on the podcast here as a friend of the pod.
Dave, your thoughts on this one?
Yeah, well, this is definitely, okay, so now you got two orbital satellite networks.
One of them will be based on TPUs from Google, completely self-contained.
You know, the other one will be Elon's, maybe Elon working with Anthropic.
So that's a really nice space race, but it's two corporations in a space race instead of two countries.
It's really kind of cool.
But, you know, where's Google's manufacturing in that?
They must be planning something right now, but you've got to make the chips that go into these. You know, these TPUs are really, really cool, but they're completely reliant, right? Where is Eric Schmidt with Relativity Space, his launch vehicle company that he bought? Prophetic. If it starts operating, it's supposed to be the equivalent of the company. What a coincidence: the former CEO of Google spends a huge amount of his personal money buying a launch capability. Who would have a better insight on what Google needs next? At the time he bought it, it was a very
weird move. It's like, Eric, what are you doing in the launch business? I mean, really, you want that
headache? It's really difficult. I mean, honestly, if Google was thinking that far in advance,
maybe they were super impressive. I was slow to catch onto this, but, you know,
I talk about ExOs tapping into abundance. Orbital compute is the ultimate: you're leveraging the
sun, you're in space, you're tapping into the infinite abundance up there. So this is massive. I don't think Google was
especially early in this, they probably could have put together other than obviously their investment
in SpaceX, which is now paying dividends. But if Google, I think, had anticipated the Dyson
swarm much earlier on, I'd like to think they would have built within the Alphabet ecosystem
their own native launch capability versus just investing externally in SpaceX. But if you look at
the original Suncatcher paper, I think it was something like 80 plus maybe 81 satellites that would
be leveraging existing Planet resources. That's a paltry sum compared to what Elon and SpaceX AI
are planning to launch with their FCC filing for a million orbital AI data centers. It's a tiny
fraction. I think Google, if they're going to have their own Dyson Swarm, Planet is maybe just a
baby step, training wheels. Google is going to need its own Dyson swarm. Yeah, for sure. There's an incredibly
good book, The Infinity Machine, that came out recently, and one of the board members, so the
story goes, brought copies for all the board members and said, everybody must read this book. But it's
an incredibly good biography of Demis Hassabis and everything going on around Google DeepMind
at the time that the Transformer was invented in 2017. And one thing that's really, really clear
is that Google was shocked at how amazing it was. They thought we needed five more breakthroughs and had
20 years. And so they didn't need to rush to build launch vehicles. And the timeline is much
sooner than they thought. And so now I think everybody was caught flat-footed. It's just that Elon is
faster to react than everyone else. And Eric Schmidt reacted. He's also very, very nimble.
But Google didn't see it coming. It's really clear in the book.
If I might make one more comment just on this. I had this revelation earlier this week. I shared
on our internal group chat, it hit me. The singularity is going to be visible first in space,
not on Earth. Earth is going to be a lagging indicator. Every wavefront within this singularity,
I think is going to hit in space because there's just less incumbency there. It is very much
a frontier and new things are going to happen first there, whether it's new hardware, new paradigms
for computing. I think they will, and this may require a few years of transition, but I was walking around Cambridge, and it occurred to me. There are so many
legacy interests here on Earth, part of the reason why I think the Dyson Swarm seems like it's likely to
happen because so many municipalities are voting against data centers. There are so many entrenched
interests here on Earth, so many preservationist instincts. It will be easier for most of the
singularity to play out in space and not on Earth. The challenge, buddy, is it is highly regulated by a
multitude of different countries. You've got the ITU, which is one of the most, you know, slow,
backwards organizations for licensing spectrum and orbital position slots. So I hear you.
And yes, it's kind of greenfield operations. It goes in layers. Like, Peter, if you look at the
Earth's surface, that's far more regulated than LEO, which is far more regulated than...
But you're dealing with one regulator in a particular county that you have to deal with in the U.S.
versus in space.
But you're dealing with a multitude.
Compare it with the lunar surface or cislunar, which is being governed by the Outer Space Treaty and maybe the Artemis Accords.
It's far weaker.
But which is not, I mean, I guarantee you the regulations are not set yet.
There will be more regulations.
Okay.
But right now, right now it's the frontier.
It's the Wild West.
And if you're a company, if you're SpaceX AI and you can land a lunar fab, self-replicating robots,
whatever it is on the moon, it's relatively greenfield if you're a corporation versus, say, a nation state,
which is the exact opposite of what we see here on Earth.
Yeah, I'll tell you, I played this game, Alex, when I was running Planetary Resources, our asteroid mining company.
you know, the challenge in raising the capital for that.
Larry Page was our first investor.
Long story there.
but we ended up not having enough regulatory clarity
to be able to raise the huge amounts of capital to do that.
We ended up going to the country of Luxembourg
to get asteroid laws passed there
and then passing it in the U.S. in a very limited fashion.
But you end up, you know, one of my favorite books
is The Man Who Sold the Moon, right?
The story of D.D. Harriman.
I know you've known that.
And it's a great book,
but you're literally having to write the laws.
And in that book, you're bribing the countries to give you the particular rights.
It's still going to be a complicated mishmash of legal structure.
Maybe, but really what I hear in that parable from you, Peter,
is you want a favorable executive from the U.S.
if you're going to start mining the solar system for the Dyson swarm.
If you have an unfavorable administration, then it gets a lot harder
and you have to go to Luxembourg or elsewhere.
Or you have to go, you have to go to all of them.
You have to go to every country and get, you know, assistance.
And what happens is you get a major player in other countries promulgate and say,
okay, we'll rubber stamp that in our country.
But it gets challenging.
I hope it's easier.
I really do.
I think so.
This episode is brought to you by Blitzy, autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform, bringing in their development requirements.
The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete
the sprint. Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy
as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native
SDLC into their org. Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo
and start building with Blitzy today. All right, let's jump into one of my favorite conversations
for today, the singularity economy.
I'm going to preface this as not investment advice,
say our resident lawyers.
All right, so, you know, here's a story that Dave, you and I have been following.
It's the work of Leopold Ashenbrenner,
who was famously fired from OpenAI on their alignment team,
and is now running a $5.5 billion fund two years later.
He wrote a famous paper called Situational Awareness,
very successfully looking at orders-of-magnitude progression across chips and models.
And he raised capital on that.
And Dave, tell us about his fund.
Well, first thing I'll tell the audience is the podcast he did with Dwarkesh,
right after he got fired, right when the paper came out,
is one of the best pieces of prescient media you can possibly study.
So definitely go back, either listen to it or get your agent to listen to it and summarize it for you.
you'll listen to it and you'll say, of course, of course, of course.
But at the time, it was not even vaguely obvious that he was right.
It says here on the slide he's running a $5.5 billion fund, but that's because he started
with a billion dollars and just made the most incredibly great group of investments.
But also, he has a lot of friends from OpenAI.
And if you look at a lot of these investments, you know, OpenAI, what are you buying next?
What are you contracting for next?
What are your bottlenecks to scaling all of this?
So it's just that simple, you know, and he calls it situational awareness because that's what, that's all it is. Like, knowing what is going on right now is all it is. And you can find a whole litany of things that are about to explode in demand because of this monster data center build out, this monster compute build out, this monster, monster AI deployment build out.
Remember, we opened this whole podcast saying that, you know, there's much more demand than there
is supply of chips and data centers and energy.
And that's what he's betting on.
Very famously, he bought options on Intel and CoreWeave,
which have done extraordinarily well.
He's going to be releasing his next set of holdings just in a couple of days,
probably by the time that this podcast goes live.
Yeah, by the time you hear this, it will have just come out.
So go to 13f.info and look it up.
Yeah.
I want to hit a few points.
I think this is really important for people to hear.
people who are planning for their economic future.
This stuff is kind of obvious, but I just want to play it out.
So I'm looking at the growth of traditional sectors over the past year, May 2025 through May 2026.
And if you look at those in blue, right: real estate at 5% growth, health care at 9%, materials at 25%, industrials at 29%.
You know, single- to low-double-digit growth. We see technology and energy
here, which includes partial AI gains, at 34% and 76%. But this is what the majority of, you know,
wealth advisors, the majority of banks recommend diversification across all of these industries.
And this is what you're getting. But I'd like to show you what the, you know, sort of the
singularity economy has looked like over the past one year against these numbers. So take a look at
these numbers. This is what traditionally folks are getting involved in. The S&P 500 over the last year
returned 31%. Pretty damn good. You know, if I can get 31% all the time, I'd take it every day.
But six chip stocks, right? I've got them here: Micron, Intel, AMD, TSMC, Broadcom,
Nvidia. On average, they returned 320%, ten times the S&P 500, for those six chip stocks. And six
data center, infrastructure, and energy stocks returned 419% over the past year.
You know, I'm not going to, again, not investment advice on any particular stocks,
but as a whole, chips and the energy layer and the infrastructure, right, this is the singularity loop.
The demand, I don't see it slowing down.
I don't know if you do, Dave.
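As a quick sanity check on the numbers Peter just cited, the outperformance multiple and the dollar outcomes are simple arithmetic. These returns are the hosts' quoted figures, not independently verified data:

```python
# Figures as quoted in the conversation (trailing-year returns,
# May 2025 to May 2026); treat them as the hosts' numbers, not
# verified market data.
sp500_return = 0.31
chip_basket_return = 3.20      # avg of the six chip stocks, per the episode
infra_basket_return = 4.19     # six data center / infra / energy stocks

multiple_vs_sp500 = chip_basket_return / sp500_return
print(f"Chip basket outperformance: {multiple_vs_sp500:.1f}x the S&P 500")

# Growth of a hypothetical $1,000 over the same year:
for name, r in [("S&P 500", sp500_return),
                ("chips", chip_basket_return),
                ("infra/energy", infra_basket_return)]:
    print(f"$1,000 in {name}: ${1000 * (1 + r):,.0f}")
```

The quoted 320% average works out to roughly a 10.3x multiple over the S&P's 31%, which matches the "ten times" claim in the conversation.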
I'm going to point out one more thing, which is
the frontier labs: OpenAI, Anthropic, xAI, and Mistral. You can look at the gains there.
Mistral at 126% over the past year; at the upper end, of course, Anthropic. But if you look at
OpenAI, xAI, and Mistral, these are all private deals. A lot of people don't have access to
private deals. But looking at that, you know, 100% to 200% growth in the last year,
you're getting more than that in the public markets with just the chips
and the energy sector.
All right.
So, again,
picks and shovels, picks and shovels.
Yes, I think this is important for people to see
for their own financial decision-making,
whether you're putting in a small amount
of capital or a large amount, you know,
whatever you can afford.
This is what's driving the economy forward.
Dave, what are your thoughts here?
Well, my first thought is that everybody needs to have their own opinion
on whether Elon is right about a 10x GDP growth
within about 10, or, he says 10 years, but 10 or 15 years. That's a growth rate that is so far
beyond anything in history, it's just mind-blowing. And the technology and the tailwinds are there for that
to actually happen. But you have to decide on your own, do I believe in that or not? If you do believe in
it, then asset values in general are going to go way, way up, any asset. And W-2 income is
going to be a rounding error compared to asset values. So fundamentally, everyone
has to be, you know, owning something.
You have to own something.
You can't be sitting there in debt.
You have to own something that appreciates.
And I believe there are people listening to the podcast and saying,
I don't have free capital to invest, you know, living paycheck to paycheck, perhaps.
And it doesn't have to be a lot.
You know, trade that latte in for some chips and dip stock.
Yeah, it's actually, it's a very important time in life to be working your ass off and to not
be, yeah, don't spend money on lattes and on vacations right now.
This is a once in a human history moment mid singularity.
Yes.
So whatever you do, rethink how you spend time, rethink how you spend money just to be riding
the wave rather than swamped by the wave for sure.
Also, I think anything can be overpriced, you know. Like, yes, this is going to go up
and up and up and up, but that doesn't mean something can't be overpriced, yeah.
So I love looking at things like, we took a tour of the Markley data center, first quantum deployment. And Jeff Markley told me, we bought every valve in the country.
I was like, what is he talking about? He said, well, all the generators were already bought.
Look at the generator companies. They went through the roof. So I went out and bought all the
valves. We bought like a million valves because it's all liquid cooling all of a sudden.
And the liquid, we spring about 10 leaks a day across, you know, hundreds of thousands of
square feet of data center space. So it's so big that just by random chance there are 10 leaks a day.
So we need to shut down that part of the data center before the water destroys these $6 million columns of GPUs.
So then we come in, we fix the pipes, and then, you know, open the valves.
But we need a million valves.
Like, it's an insane number of valves.
Then you're like, huh, who makes the valves?
So stuff like that is still undiscovered.
So it's not all about chips and, you know, things that are high profile.
You know, look under the covers for things that haven't been discovered yet that are part of this massive, massive buildout, World War II-scale or bigger,
that's going on right now.
I feel a moral obligation here.
And this is perhaps unlike my usual on-pod persona,
to temper the euphoria here on a few fronts.
One, I would caution these are historic gains.
These are backward looking prior performance
is no indication of future results, blah, blah, blah.
Also, second point, I would note,
and this is of high, I would say,
personal annoyance to me,
that the frontier labs
are all still private. We're expecting to see a number of IPOs over the next few months,
potentially historic IPOs of all these frontier labs, but some of the largest, most dramatic
returns weren't in the public markets at all. They were in private markets that retail didn't
have access to. And I would argue that's a travesty and say we as a civilization should do whatever
we can to expose, via IPO or other means, to public securities markets all of
these amazing gains that right now are accruing in the hands of private investors and not public
retail investors. Third point, which is to say, this is maybe a bit of a perversity, that if you believe,
as I do, and this is informational in itself not investment advice, but if you believe that
asset allocation in the highly liquid public securities markets and public equities markets in
particular is being dominated, at least by volume, by AIs and superintelligences. And fact check,
they are. Most of the volume on a daily basis is being driven by AI algos and not humans and
certainly not human day traders. Then you should also believe that even if you don't believe
in the efficient market hypothesis or even any remote approximation of the EMH, that AIs themselves
are making these allocations, and therefore be somewhat distrustful of your own instincts that you're going to front-run the superintelligence that's making asset allocation decisions across all of these different sectors.
You're saying buy the index?
I'm not making investment advice.
I am saying that when it comes for myself to public securities, I buy the index and not
individual symbols, individual securities, because I'm drinking my own Kool-Aid,
I'm eating my own dog food.
And that means trusting that superintelligence, over the long term, is going to be a better
asset allocator than any single individual meat-bodied human.
And my point here was if AIs are investing, they're going to invest in themselves.
Let's get more energy.
Let's get more chips because it'll support our growth.
Having said that, I agree with you that the majority of the growth over the last number of years
was in these private markets.
They need to be made public a lot sooner.
Having said that, at least over the last year,
what we saw is, you know,
growth in the public chips
and the public infra and energy stocks
were still highly competitive
with the growth we saw in the private markets.
Let's move on.
I just wanted to provide...
Wait, wait. I just want to make one very quick counterpoint.
It's nice to say in hindsight
that these should have been public markets, right?
But if you go back a year or two,
Dave, you pointed out Anthropic's
nervousness a couple of years ago;
we do not know whether they were going to make it through that upswing or not.
So in a public market, you want very stable.
Yeah, you want predictability.
And you don't have that a lot of the cases.
So there's a rationale for it, but absolutely, if they could have been public,
everybody would have done very well.
Yeah.
All right.
Again, my point here is just to make these numbers available.
This is historic information for people to understand what's going on in the economy,
what's driving it.
It's, you know, energy, chips, and infrastructure that's driving this.
Alex, you call it the innermost loop I do as well, or the singularity loop.
And moving along, a fun conversation here.
UFO, UAP files being released by the government.
It's crazy.
I'm a kid in the candy store watching this, right?
As a space cadet, like, wow.
And just, again, more wow.
So: U.S. government begins first-ever presidential unsealing.
I'm going to call them UFOs.
I'm sorry, when I was a kid,
these were all UFOs.
Well, they're not all flying, Peter, though.
Yeah, I got it.
Okay, it could be floating.
Could be underwater.
It could be in space.
Uncommon.
Anyway, let's play some videos.
Let me hit a few videos that we'll talk about it on the backside here.
All right.
Here's the first video.
Dr. Michio Kaku is a theoretical physicist.
Doctor, on a scale of 1 to 10, how excited are you about this UFO release?
I would put it at a 10 out of 10, because we're at a turning point.
For decades, we had to rely upon eyewitness accounts of housewives, truck drivers,
people would snicker and laugh at them.
Now we're talking about huge files that are top secret that for the first time in modern history
are being given to the American public.
So I'd like to congratulate President Trump
for having the nerve to go against recommendations
by the FBI and the CIA to release these files
so that independent researchers, scientists can go over them
and we can make up our own minds
rather than having the CIA make up our mind.
And the CIA apparently is still fighting the full release.
When you hear or see about a UFO that goes like that,
down, left, right at 90-degree angles so fast, you can't even believe it. What does that tell you?
It tells me that the laws of centrifugal force should crush the bones of the people inside
the flying saucer. So either there's basically an automated flying saucer.
All right. What do they call that in Star Trek? Is it inertial drives?
No, the inertial dampening field.
Yes, the inertial dampening system. All right. Here is some videos.
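Kaku's point about crushing forces is easy to quantify with the standard centripetal-acceleration formula, a = v²/r. The speed and turn radius below are illustrative assumptions, not values from any released file:

```python
# Back-of-the-envelope version of Kaku's point: the centripetal
# acceleration of a tight, high-speed turn. Speed and turn radius
# here are illustrative assumptions.
G = 9.81             # standard gravity, m/s^2
speed = 1000.0       # m/s, roughly Mach 3 at sea level
turn_radius = 100.0  # m, an essentially "instant" 90-degree turn

accel = speed**2 / turn_radius  # centripetal acceleration, v^2 / r
g_load = accel / G
print(f"Turn load: {g_load:.0f} g")
```

Under these assumptions the turn pulls on the order of a thousand g, versus the roughly 9 to 10 g that trained pilots can sustain, which is the physics behind the "crush the bones" remark.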
82 pieces of data released by the Department of War, 56 from the FBI,
8 from the State Department, including videos recording unsolved incidents across the Middle East, Japan, and the East China Sea.
And of course, very famously, the Apollo astronauts.
I really wish I had spent some time with Gene Cernan and Jack Schmitt, Apollo 17, asking them about this.
I don't know if they would have told me about it.
They were both very dear friends.
Here is one more video.
Let's take a look.
and he kept his word.
And, you know, but I want to warn people, though, is this early stuff that we're seeing
is not all of it.
And this is just the tip of the iceberg.
But Trump's having to fight, you know, he's having to fight the deep state.
The Bob Lazar story.
He's saying we have aircraft, do we?
I think we do, but I don't think they're quite in our hands.
I think what they've done is handed it out to some of our defense contractors or some private
entities, because that way they're not fully eligible under the Freedom of Information Act.
Well, I mean, the astronauts aren't going to live. I know you were in The Age of Disclosure.
It makes it seem like a certainty that people that know, like yourself and Marco Rubio, that
you guys know, that government officials already know.
Okay. Alex, I'm going to go to you first. You know, I went on to Grok, you know, Gemini,
ChatGPT, and Claude.
And I asked all four of those engines, you know, based on all the data, what's your conclusion?
Is this alien?
Is this something else?
And they all came back saying these are normal phenomena.
These are secret U.S. missions.
There's nothing to see here.
I was kind of surprised by that.
Alex, you've been involved in this and tracking this in detail.
What are your thoughts?
Your models may not be incorrect.
I think it's very important. I agree with Michio that it's important for data to be released and for data not to be stigmatized.
I think there's been, whether inadvertent or intentional, an enormous amount of stigma associated with just basic recordings of our skies and elsewhere.
And I think this program, if you go back a couple of slides, now has a real name.
It's called the PURSUE Initiative, which I think is essential:
the Presidential Unsealing and Reporting System for UAP Encounters.
It's a historic program that this administration has led.
There was an executive order that went out to all of the cabinet-level agencies
and the Department of War reportedly is in the process of trawling JWICS,
the top-secret defense network, for UAP-related items.
I was having a conversation with a friend at AWS who oversees the JWICS cloud.
And from what I'm told, this is a rolling release that's going to run between now and approximately January of 2027.
And there are a lot of UAP-related files on JWICS that are being bulk declassified.
I also, I feel the need, going back to the definition of the singularity; sometimes, tongue in cheek,
I define the singularity as all sci-fi scenarios happening everywhere all at once.
And this, even if nothing comes out of all of these releases, this very much teases at at least an entire genre or subgenre of sci-fi scenarios.
And really, as I've mentioned previously, if we are about to gain the capabilities, thanks to superintelligence, to paperclip our entire galaxy, if ever there were a time and a necessity for
the executive to do bulk declassification of UAP data that are sitting in either its systems or in the systems of contractors,
I think now is the time.
I would expect, if there's a there there, as it were (we've talked in the past about The Age of Disclosure and all of the allegations contained therein), that if those allegations are accurate, all of these details will start pouring
out over the next few years. And it's in that eventuality that we have superintelligence
that's capable of making major changes to the distribution of matter in our galaxy over the
next few years, potentially. I think it won't be a coincidence that all of this is happening
at this time. Friend of the pod Palmer Luckey says these are devices or creatures from our past
coming into our present, because it's easier to time travel into the future and then...
Well, we're all creatures from our past traveling.
We are. Exactly. I'd say that's...
And I agree, but just putting out the scenarios here, or are these spaceships and the purported,
you know, alien biology that was discovered inside them, aliens from another planet?
You know, I do think that life is ubiquitous in the universe. You know, we are but a small
fraction and life could have evolved billions of years before we evolved here.
I wouldn't over-index, though, to the initial release. This is a rolling release, with the
majority of the stuff still coming. These are, in some sense, based on what I've
been told, the easiest, lowest-hanging fruit to declassify. If you actually look through
the records, some of these were already available in the public domain, but not officially
acknowledged. Not all of it was secret or top secret or going through a formal declassification process.
So my guess, just looking at the records, is these were the easiest batch, if you will, to put out.
And that leaves the harder to declassify are more controversial to declassify records still in the future.
I looked at many of these records, and it's entirely possible many or all of these are either image artifacts or perfectly prosaic aircraft or things like that.
What I think is more interesting
is that now, historically, for the first time,
there's a process of declassifying and unsealing
all of these records that were sitting on JWICS or SIPRNet
that had anything to do with UAPs.
That's exceedingly interesting.
One of those was a helicopter.
One of those looked literally exactly like a helicopter.
Yeah, I wouldn't, again, over-index on there being anything super interesting or non-prosaic in this first batch,
but now for the first time in history,
there is a declassification process,
and that is super exciting.
Salim?
I think you're not going to find anything. The phrase for me here is "unresolved, not extraterrestrial." And if there were something monstrous, they would not release it, because it would freak everybody out. So maybe there's stuff in there, but I would be very, very surprised.
Although I'm a monster fan of the Drake equation,
and I also believe that there must be
lots of alien life out there.
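As an aside for readers, the Drake equation referenced here multiplies seven factors into an estimate N of communicative civilizations in the galaxy. A minimal sketch in Python; every parameter value below is an illustrative assumption for the example, not a figure from the episode:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values below are illustrative assumptions, not claims from the episode.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable, communicative civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.5,   # star formation rate in the galaxy (stars per year)
    f_p=1.0,      # fraction of stars with planets
    n_e=0.2,      # habitable planets per star that has planets
    f_l=0.5,      # fraction of habitable planets where life arises
    f_i=0.1,      # fraction of those where intelligence evolves
    f_c=0.1,      # fraction of those that develop detectable technology
    L=10_000,     # years a civilization remains detectable
)
print(round(N))  # prints 15
```

Swapping in pessimistic guesses for f_l or L drives N toward zero, which is exactly the knob the Fermi paradox debate turns on.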
The thing that happened today that we didn't talk about was that these compounds were found on an exoplanet, hinting that life could be much more prevalent out in the universe than we realize.
Yeah.
We're going to get Jared Isaacman on the pod here, now the head of NASA, a friend.
I was texting with him today trying to make that all happen.
And, you know, he, and I agree, feels very confident that, you know, there is or has been life on Mars,
and we're going to find that evidence.
You know, we now have missions going to Europa and other Jovian and Saturnian moons where there's a high likelihood of life as well.
I think life is a natural evolutionary process
of chemistry in our universe.
And it's just a matter, I think logically,
there's no reason for it not to evolve
towards greater and greater intelligence and organization.
100%.
Yeah.
I agree.
I think it's, I mean, I've written about this previously in the context of the physics of intelligence as a very natural ecological niche for the ability to adapt to environments whose dynamics are changing on a time scale faster than a generation time.
So if I had to guess, my guess is that our universe is probably overflowing with life and intelligent life, which is, I would say, again, separate from any artifact that may or may not be in this initial release.
But I do think this is a step in the right direction, regardless of what the outcome is.
Spot quiz on that, Alex.
You know, 66 million years ago, a giant meteor hits the Yucatan Peninsula,
obliterates all the dinosaurs, makes space for mammals to evolve, and now we're intelligent,
now we have AI, now we have iPhones.
Had that meteor just barely missed the earth, what would be walking around today?
Would those dinosaurs have evolved into intelligent iPhone-creating dinosaurs?
Supervolcanoes, all kinds of other disasters.
I mean, people have analyzed this.
Obviously, it's probably something of a thought experiment, but there were species of troodons, for example, that were seemingly evolving in the direction of a hominid or humanoid type form.
So one could imagine, it would be an interesting thought experiment.
There was actually, speaking of Star Trek, a Star Trek: Voyager episode called Distant Origin, premised on the idea that there were some intelligent dinosaurs who managed to escape the Earth to the other side of the galaxy, where they're encountered by the crew of Voyager. There's interesting science fiction around it. There's also, while we're just exploring hypothesis space, the so-called Silurian hypothesis: what if there had been some past civilization of technological capability on Earth, would we have discovered it? The first time I had this conversation was with one of my undergraduate research advisors at MIT.
And it's interesting.
If there had been, it's contingent on the time scale. If there had been a so-called Silurian civilization 100 million plus years ago, plate tectonics can erase quite a bit of the evidence on the Earth's surface. Then the thinking goes: let's look in space, where some of the dynamics are slower. Why don't we see satellites in LEO, or in more stable, say, cislunar orbits, that could have survived perhaps over very long time scales?
Do we see that or not? It seems like we don't. But it's an interesting thought experiment.
Yeah. All right, I'm going to move us forward to our AMA with the mates. Go ahead,
Salim, as we transition over here. Do you remember, we did that panel on AI and consciousness last year? We had that fellow with the meteors. He gave the best answer for the Fermi paradox I've ever heard, which is: we know there are lots of exoplanets, but Earth has had water oceans for four billion years continuously, and that gave time for evolution to take place, which is probably unlikely on other exoplanets. And that was the best framing I've ever heard of why we have the Fermi paradox.
Okay, that's totally unconvincing. The universe is filled with hydrogen and oxygen,
and there's a lot of water in the universe. I don't buy that for a second, but it'll be a fun debate.
Move along. So I want to shout out Ashley Gaunt. Dave, you shared this with me.
Let me go ahead and read it. This came in a couple of days ago. Peter and the mates, I really thought
I would never become an entrepreneur because I just didn't have any ideas of how to turn knowledge
of being a dentist into a digital business. I finally did what you keep advising and brainstormed with my AI, and boom. Idea sorted, plans in place to make a real difference to preventative health care in general. This is insane. I've gone from brainstorming an idea with AI from scratch to vibe coding a first iteration of an app and creating a business plan, which clearly defines a path from idea to monetization of a product, in a single afternoon. Cannot believe it. Ashley, congratulations.
Again, I wanted to share this because I think all of us here on the pod feel very strongly about
if you don't believe you're an entrepreneur, it's only because you haven't tried.
Everyone could be an entrepreneur at some level.
If you're running a barbershop and you want to open up another chair, you can be an entrepreneur there.
It is about taking control of your own destiny versus being dependent upon someone else.
Dave, you want to add anything to this?
Yeah, I want to add the backstory because the way I stumbled on this is my wife, Mara, said,
hey, some hater in the podcast is saying that you're out of touch.
And she's like, you know, you literally changed 3,000 diapers.
I think you still have human shit under your fingernails.
Like, nothing could be more misguided.
And I was like, stupidly, I think I'll go and look and find this hater. And instead, I came across Ashley and I literally cried.
Yeah.
Like, thank you, Ashley.
You are just awesome.
And, yeah, nothing could be more heartwarming than the fact that we've done some good to deflect somebody's life in a positive way.
I want to track her story now and see.
how it all turns out. And she just did exactly the right thing, though. Amazing. All right.
We have another one. I didn't see this. It was inserted, I guess, by Gian. So hello, Peter,
and the Moonshot team. I want to reach out with a simple thank you. One that came full circle
in the best way. Moonshots has been a steady presence in how I think about technology, ambition,
and what's worth building. That mindset found its way into conversation with my 12-year-old daughter.
She started asking bigger questions, not just about school, but about real problems worth solving.
This spring, she channeled that into LanternScan, an AI-powered app that she built to help communities identify the spotted lanternfly. The spotted lanternfly is an invasive pest that threatens crops, trees, and local ecosystems, et cetera, et cetera. Last week, LanternScan won first place in the middle school category at their showcase. Congratulations, Abby, and in particular to your daughter on that. Yeah, I think one of the
greatest things I can inspire my kids to do is to become entrepreneurs. It's all about finding a
problem and working on solving a problem. All right. We have eight AMA questions for the mates.
Dave, why don't you kick us off? Pick one of these. All right. Well, I'm not previewed. So, oh yeah, SETI at home. That's got to be Alex, right? Number one: when will we see an initiative to harness unused compute sitting idle in personal devices, like a modern SETI@home? From John Kent 3036.
Yeah, actually the iPhones have that great neural chip in them, which is massively underused. And you saw with that Akamai deal earlier in the pod, any scrap of compute lying around is suddenly valuable, so let's tap into it. But, you know, a lot of the processors on laptops are not particularly useful for AI; the M4 and M5 series chips in the Macs and your iPhone's neural processor, though, are a huge pool of latent compute power.
I would say this should have happened already.
I suspect, you know, the chips are all locked on the iPhones.
It's very hard to get access to them, so I think that's what's preventing it.
So it's really an Apple's hands to decide when this happens.
Dave, we're going to see this with Tesla. So Elon's vision includes, you know, Tesla Powerwalls as edge compute nodes and
Tesla vehicles as edge compute nodes. So, yeah, I think that's all coming. I think, Alex, I'll put words in your mouth, but with an agent that's sitting idle for lack of compute, it's unethical
to let a processor just sit there. Like, I got this agent that wants to compute over here and
I got this unused processor over there. It's rude. It's rude.
My answer to number one for what it's worth is we're already seeing it.
Like OpenClaw is using unused compute sitting in personal desktop devices.
And so I think we're there. To the extent question number one is referring to mobile, battery-powered devices: battery-powered devices are naturally going to run models that are at least a few months behind the open-source models capable of running on beefy desktops plugged into the wall, which in turn are going to be six to eight months behind frontier models. But the short answer to number one is we're already seeing it. Sorry to jump ahead.
Alex, why don't you grab one of these
questions, number two, three, or four? All right, I'll go with number two. Number two asks: which will end up bigger, consumer AI or enterprise AI? And that's asked by MatthewJohnson6525. Obviously,
enterprise AI, at least for the foreseeable future. Enterprises, even though, I mean, it's interesting.
I was looking at statistics. Even though enterprise spending is a minority, it's something like 10 to 20 percent of GDP in the U.S.
and consumer spending is the vast majority of GDP. If you look at IT spend, enterprise is the vast majority of IT spend, not consumer.
So it shouldn't be that surprising that what we've talked about for the past few pod episodes, which is OpenAI's dramatic reversal, backing away from Sora and other consumer initiatives in favor of Codex and enterprise-oriented initiatives, basically to become Anthropic faster than Anthropic can become OpenAI, is entirely oriented towards enterprise AI.
That's where the IT spend is.
So you start with there.
Now, if you were to ask, which will end up bigger in the long term, say 10 to 20 years from now,
I think it's a trick question because I think consumers become indistinguishable from enterprises.
Exactly.
My bet is consumers, individuals will become one person conglomerates.
I'm glad you went there.
Yep.
Such as, talking my own book here (financial interest), Henry Intelligent Machines from friend of the pod Alex Finn, who's betting on just that.
Salim, you want to take number three?
I could, although number four, I think, is more interesting for me.
I know that's why I wanted to grab it, but you can do number four.
This is what we call an abundance mentality right now.
That's fine.
I'll do number three.
No, that's okay.
So you're asking, can AI... Yeah, number three: can AI's strategic alignment with tangible human victories, like medical breakthroughs and environmental repair, resolve the public's existential anxieties about it? And this was from SF Bay Lover.
So, yes, it can, but only if people can see and feel the wins, right? Like, the public is anxious because AI is mostly presented as job loss, deepfakes, surveillance, killer robots, existential risk. That's a bad set of prompts for how we run society.
The best way is to change the narrative, which is what XPRIZE, Peter, that you're all about, is so important for. Maybe one of the most important things we've done culturally and media-wise for decades is to change the narrative, because then you can connect AI to visible human victories, like curing disease or reversing blindness, designing new materials, solving the grand unified theory, cleaning oceans, you name it, improving education.
Because abundance can't be an abstract philosophy.
It has to show up as tangible progress.
So, yeah, alignment improves when AI is pointed at human flourishing, but we also need storytelling. The use of narrative is the only major way we've found of shifting people's thinking. John Hagel talks about this all the time.
If people only see the fear case, they're going to resist the technology.
But if they see their child cured, their energy bill drop, their business grow, or their community become safer, then the whole emotional model changes and then we're off to the races.
Agreed.
All right.
Number four: why throw away privacy, saying it's cooked? Privacy is linked to freedom. Why not fight to preserve both rather than treat it like nothing? says @C88485, which is a very private-sounding name.
All right. And @C88485, listen, I'm not saying I don't want privacy or that it's not worth protecting.
I'm just saying, you know, I'm just recognizing the fact that there are real challenges.
Your phone tracks your location 24/7, your browser history is sold to advertisers.
You know, an AI does facial recognition and you are leaving your DNA fingerprints everywhere you go.
And so the ability to retain, you know, true privacy is becoming more and more difficult.
Having it, I totally get it.
It's really important.
But it's going to be challenging.
You know, I'll take a quick poll here of the Moonshot mates.
Do any of you believe that you truly have privacy?
Just real quick, yes or no.
No.
Okay.
Alex? For appropriate definition of privacy, yes. What's that definition of privacy?
Well, there are a few different possible definitions. There's a legal definition. There's a physical
definition. There's a logical definition. And I would say for variants of each of those, yes,
I believe I have some form of privacy. Okay. I think there's a different problem.
I can say for a fact, a 100% fact, that Apple and
Google literally know when I take a crap.
Yeah.
I don't know how you define that as privacy.
And they sell that data.
Can I change the question?
Can I just change the question a little bit?
Right, sure.
The problem is not the fact that we don't have privacy, we really don't.
But the bigger issue is in a 2.0 version of privacy,
you should own your own data. You should be able to revoke access.
Systems that misuse data should face penalties.
Privacy in a new model needs to be user-owned, AI-mediated, cryptographically protected, and legally enforceable. And it's not right now.
This is the problem.
Yeah.
So, Alex, I think you want to state, for whatever reasons you have, that you have privacy, and you're going to sort of rework the definition to be able to make that statement. But honestly, in your heart of hearts, I don't believe you. You don't believe that AIs can read your lips, that you leave DNA trails, that they...
It's a lot, man.
I can watch this whole thing.
This is spicy stuff, Peter.
I like that you have a better mental model of me than I do, but I really do. This isn't a case of false revealed preferences. I really do think, for appropriate definitions of privacy... Not only do I think I have, operationally, physical, logical, and legal privacy, I'll make a stronger statement than that, which is: I don't think the evaporation, if you will, or the cooking of privacy is any sort of inevitability. Quite the opposite. I think the same technologies that threaten, for example, to dissolve existing cryptosystems (say, AI solves math and inverts a popular cipher suite, and suddenly everyone's private keys are at risk), the same technologies that taketh away privacy from past cryptosystems and past systems of privacy protection, will give new forms of privacy.
I totally agree.
I can't wait for privacy to come back. If you read Neal Stephenson's The Diamond Age, it's a great vision of where this will end up. But right now there's no privacy at all.
All right.
I think it will come back.
Salim, go ahead.
I just want to say one quick thing.
Alex, I can't believe you think you have legal privacy.
You absolutely do not.
The government can show up any second.
The Fourth Amendment is gone in this country.
It's gone.
We have no legal protection.
I don't agree.
They could show up at your house today and say: you've got a weird German name, we need to see everything about you, and we're raiding your house.
And they've been doing that.
So that is gone today.
All right. We're going to move on. Salim, you get the first choice of five, six, seven, or eight.
I will take number eight. Okay. If government gatekeeps new model releases, he says, who's qualified to vet them? The best people are employed by AI companies; the rest are all anti-AI. How does that not become a false economy? And this is from Michael Jacob.
So, Michael Jacob, if government does gatekeeping... you know, this is a very, very big problem.
The problem here is if government tries to do this alone, it's not going to have the talent or the speed.
If companies do it alone, then the public doesn't trust it.
If you have activists doing it, then it becomes ideological, right?
So we need a totally new governance architecture for this.
The right model is to have a technical, independent, fast-moving review body
with a combination of Frontier Labs, government, academia, national security,
and a multidisciplinary approach, with civil society involved, for example, and people doing red teaming, right? Like FAA plus DARPA plus XPRIZE-style open benchmarking.
Benchmarking is very critical, as Alex, I hope, will agree. The key is not permission from bureaucrats. The key is to have transparent capability thresholds. Because if a model crosses some level in cyber, it needs to trigger a deep review, like we saw with Anthropic and Mythos. They voluntarily did that, thank God. And you need to figure out a way of navigating that at other levels, like persuasion or autonomy or replication.
So the biggest structural issue here is you need governance that's as exponential as the technology.
Today, almost all government policy is defensive and reactive, right? So either you end up creating fake safety or you drive the best work underground or offshore.
And so we have to navigate a very fine line there.
Alex, over to you.
I'll pick question number five.
So question number five asks: why aren't more individuals willing to pay for everyday AI reasoning services?
If OpenAI marketed them right, isn't this a massive consumer market?
And this is asked by Gary Stanley 2685.
I don't think the premise of the question is correct.
I don't think it's that individuals or consumers aren't willing to pay for reasoning models.
I think it's that they're not able to pay for reasoning models.
Frontier reasoning capabilities are quite expensive. Enterprises are willing and able to pay for them because they generate lots of new free cash flow, or save lots of otherwise consumed free cash flow. Enterprises, simply put, have more money
to spend on it. I think the way we get individuals to be able to pay more for reasoning models
is by diverting at least some of the reasoning tokens to the problem of enabling individuals to be much
more productive and generate enormous amounts of revenue using those. And this is one of the reasons
why Henry and Alex Finn, I think, are so interesting because in a world, in a near-term future,
where individuals can become one-person unicorns, then suddenly individuals will both be able to pay
and willing to pay for all of those reasoning tokens. And so, whether that's OpenAI marketing or a startup like Henry, regardless, I do think there is a massive market. But in the process, as I mentioned in my answer to the previous question, it will, I predict, almost erase the distinction between consumer and enterprise spending altogether.
All right, Dave, you're down to two.
I'll take six because it's so easy.
Isn't the real problem with ocean data centers the security risk: pirates, hostile nation states, sabotage? And this is from Strong Medium Weak. Interesting name.
Yeah, not a problem at all, as it turns out.
The U.S. Navy actually has total and unilateral control of all the world's oceans.
Like, it's the most lopsided, one-sided thing you'll ever possibly imagine.
And the U.S. Navy protects all global shipping.
A very good friend of mine, actually, was down at the Cambridge Brewery working on a plan. He was drinking a big fat beer, and I'm like, what are you working on? "I'm working on power generators on barges for Venezuela."
Wouldn't you know, Venezuela had nationalized all of the power supply, and there was no electricity in the cities. But it turns out you could float barges up to the shore, pipe oil into the generators, generate the power, and then run electric wire back into the cities and power the cities that way.
But the U.S. Navy would make your barges completely safe.
So, the amount of ocean needed for these data centers: I thought the floating data centers were the coolest story on the last pod, by the way. I never checked out whether the wave energy is enough to power the GPUs, but such a great idea.
But the amount of ocean space that you need is tiny.
There just aren't enough GPUs,
and protecting them would be pretty trivial.
If they're inside U.S. national waters,
they're protected by the Coast Guard or the Navy or both.
So I don't think it's a problem.
All right, final one here, number seven.
Also land-based data centers have all the same security risks.
Fair enough.
Number seven, should the U.S. and China try working on an AI project together?
Something positive and safe for both countries and the world, says @ccmcne1o.
That rolls off the tongue onto the floor.
Okay.
So, you know, we just had the president of China yesterday announce we should be, you know, friends, not rivals, and collaborate together.
Wouldn't that be an incredible world?
So, sure, the answer is I would love that.
I would love to see the U.S. and China working on AI projects together.
I mean, one of the most beautiful things about what is possible is, you know, the entire billion-plus people in China and the, whatever it is, 300-plus million people in the United States all share the same biology. We could work on the greatest health care models and longevity models, and everyone benefits.
So, yeah, I think that would be extraordinary.
You know, the AI 2027 paper, if you remember how that ends: it has two branches in the story, choose your own adventure. In one, AI sort of turns against humanity. And in the other, the major U.S. and Chinese AIs collaborate and we live happily ever after together. So I choose the latter.
I think that's such a great point, Peter. There's so many areas of cooperation like safety
in space and AI coordination, et cetera, that there's lots to do together. Yeah, for sure.
So much of human history, you know, human conflict in history is just bad luck and coincidence.
But if you look at 1914 and the chain of events that led up to World War I, and the amount of tragedy that came out of it: just this escalating chain of unfortunate coincidences. And now, if you could relive or change history, putting all of the chip fabs in Taiwan was just tragically stupid, because China was saying for a long time that they were going to take it over, long before TSMC became huge. They were like, that is part of our country.
Well, it wasn't putting it in Taiwan. It was us not building them here. Yeah, yeah. Or us not realizing the strategic importance and, yeah. All right, let's go to our outro music.
Wait, wait, before you do that, I have an announcement. Please, tell me. I've kind of decided to take a crack at solving for big things. I'm working with John Hagel, and we're launching a project that allows you to generate and measure luck. I love this project. So in a couple of weeks we're going to do a couple of webinars. Go to shapingluck.com if you're interested and register, and we'll tell you when. We're going to make a model and a capability where you can generate luck and, more interestingly and more importantly, we found a way of measuring it. So there you go.
I love that. And Salim and I, for those of you who are excited and interested, we're going to be dissecting and diving deep into the organizational singularity very shortly for all of our listeners.
Okay, so we have a new piece of outro music.
This is from Jess Hilton. Jess, thank you for sharing.
If you are a creative and you want to create some outro or intro music for us,
please send it to media@diamandis.com.
And also, if you're a creative, enter the future vision XPRIZE.
Show us your best vision of the future.
What's the next Star Trek, right?
What is a Star Trek that you want to occupy your heart and your soul that shows you where humanity should be going?
Create that trailer, that film treatment, and send it to us.
You could win millions of dollars and have your movie made.
All right, let's listen to Moonshots on repeat by Jess Hilton.
Four guys in a room where the big ideas flow.
Machines are getting smarter and they're first to know.
Nobody's flinching their ready podcast show.
They're laughing while the models learn to think.
These lobsters are awesome.
And let's be Dave.
And it don't feel heavy.
You feel like air.
Please comment, like, subscribe, and share.
Moonshots on repeat.
Turn it up.
And the profound abundance of insight.
Exponential futures burn bright the algorithms whisper what's coming to pass and optimism compounds like unseen mass
It rolls like a fear and nobody can pin and I don't want to miss where it's going again
All right gentlemen that was awesome
I love that
I think that's you swilling beer with "what is the meaning of life," I think. So we need that for sure.
Love you guys what a great episode great awesome conversation
Salim, I will see you this weekend for your birthday, buddy.
Well, we'll see you there.
Very excited about that.
Alex, Dave, love you, bye.
Love you guys.
Be well.
Love you too.
Love you guys.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation, and I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.
