Moonshots with Peter Diamandis - Elon Musk vs. Sam Altman, AI Job Loss, and OpenAI’s $852B Valuation | EP #247
Episode Date: April 14, 2026

This episode is about AI agents, OpenAI and Anthropic competition, the future of work, energy breakthroughs, Bitcoin and quantum risk, biotech, and humanoid robots.

Post from Elon Musk on X mentioned in the episode: https://x.com/elonmusk/status/2042090236206063966?s=46

Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding

Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy

Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter

Connect with Peter: X, Instagram. Connect with Dave: X, LinkedIn. Connect with Salim: X. Join Salim's Workshop to build your ExO. Connect with Alex: Website, LinkedIn, X, Email, Substack, Spotify, Threads.

Listen to MOONSHOTS: Apple, YouTube

*Recorded on April 10th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
It's the Musk versus Altman lawsuit.
Musk has sued OpenAI for $100 billion.
I kind of figured behind the scenes they don't actually hate each other.
These guys actually hate each other to the, like, extreme.
OpenAI is valued at 70 times revenues right now.
Their last raise was at $852 billion valuation.
These numbers are insane.
It's like nothing we've ever seen.
And the timeline is so much shorter than we've ever seen before.
$3 billion a day being invested.
No one said the singularity was going to be cheap.
No one's being honest about this.
If you take a random white-collar worker today,
what are the odds that that randomly selected job can be replaced two years from today?
We told you already that AI will be able to do everything that a white-collar worker does imminently.
That's a fact.
I had so much fun this morning.
What happened this morning?
I got Alex, who was supposed to run a panel,
handing over the torch to Dave to moderate it.
I moderated it.
I had to wing it, which is so fun because I have no accountability whatsoever,
and I can ask anything I want, and it was the most fun ever.
A lot of moonshots fans there.
Huge, yeah, probably what, 40, 50% of the crowd, something like that?
Hopefully 100% after you guys finished.
I did probably seven or eight panels by the end of it,
and the first time I polled, I would say maybe 80% of the audience watched Moonshots.
Nice.
All right.
You guys psyched?
You guys ready?
Ready to talk her on book, Peter.
All right.
Let's do this thing.
Everybody, welcome to moonshots.
Another episode of WTF here with my extraordinary moonshot mates.
DB2, our emperor of exponential investments.
Good to see your outfits there.
You must have a whole set.
You know, it's funny.
The team just drops them on a chair here and says, we got four choices for you.
What do you want?
Lobsters and DB.
You know, one of the things, I mean, I like, coming out of the Abundance
Summit, I have all of my wear. Here's my, uh, Powered by Moonshots, Powered by Gratitude.
I don't have to think anymore. There were like 20 of those. Are you going
to cycle through all of them? I am. Some really funny ones. I am. Yeah. For sure. And, uh,
our resident genius, Alex Wissner-Gross, AWG, good to see you, pal. Wide awake. You know,
I know you didn't sleep last night. We do have a variety of wardrobe on Alex, too, there.
Oh my God. And we pulled Salim off the ski slope.
again. Again. Yeah. I grew the beard to protect against the sun, but nothing protects against you guys.
You're well positioned, Salim, to lecture the world on UBI post ski slopes.
You know, this is the most fun I have all week. And it's, you know, I spent probably
the better part of 12 hours prepping for the show today, just going through all of the notes that
we've all submitted, all the work that Gianluca and Dana and Nick did. So thank you to them.
And for everybody watching, this is our chance to give you some optimistic visions of the future.
What's going on in exponential tech and AI?
We are the number one podcast in AI and optimistic visions of the future.
Welcome to WTF.
Just Happened in Tech.
Gentlemen, it's good to be back on a recording basis twice a week every week.
And, yeah, so, so much.
So this is our second catch-up show after our
hiatus for spring break.
And let's jump in.
First, this was my spring break by popular demand.
A few photos.
Wow.
So cool.
So this is the native wear in Morocco.
That outfit is a djellaba, and the headwear is just to protect against the sun.
We went camel riding with the family.
It was amazing.
You know, camel spit.
They don't bite, but you shouldn't really stick your head right there.
I think the camel was eating my headset in that image.
All right, let's move on.
But Morocco was amazing.
The Sahara Desert was extraordinary.
You know, looking at the Sahara Desert, there are
about a thousand times more stars in the universe
than there are grains of sand in all the deserts on Earth.
Just to put the size of the universe in perspective, it's extraordinary.
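As an aside, that claim can be sanity-checked with a quick back-of-the-envelope calculation. Both figures below are rough, commonly cited order-of-magnitude estimates, not measurements, so treat this as a sketch rather than a definitive count:

```python
# Back-of-the-envelope check of the "about a thousand times more stars
# than grains of sand" claim. Both inputs are rough, commonly cited
# order-of-magnitude estimates (assumptions, not measurements).
stars_in_observable_universe = 2e23  # ~2 trillion galaxies x ~10^11 stars each
sand_grains_on_earth = 7.5e18        # a widely quoted estimate for beaches and deserts

ratio = stars_in_observable_universe / sand_grains_on_earth
print(f"Stars outnumber sand grains by roughly {ratio:.0e}x")
```

With these particular estimates the ratio lands closer to ten thousand than a thousand, but the spirit of the claim holds at the order-of-magnitude level.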
Okay, let's talk about the 2026 AI economy.
It is literally going through an exponential explosion, so much going on.
Let's jump in first to the story at xAI.
In our last pod, we covered Anthropic and OpenAI principally, not xAI.
A lot's going on there.
In particular, a lot of signals coming from both Elon and from the new president of xAI,
Nicole, saying we're clearly behind and we've got to
catch up. So the same playbook is going on. Elon is basically reorganizing the entire deck.
Eight founding engineers left, including three co-founders. And he's using SpaceX engineers to fill
the leadership gap. We've got, as we discussed in the last pod, a $2 trillion valuation predicted
for the IPO this coming summer. And it's a lot of movement. I mean, I don't know about you, Dave,
but the idea of having to reorganize my entire leadership for a company a couple of months
before an IPO seems really harrowing, doesn't it?
Yeah.
So, you know, it's funny, though, if you look at Elon's playbook, he is the master of scale
and manufacturing and, you know, Tesla and SpaceX.
But AI training is different.
So building the Colossus data center is right in his wheelhouse, you know, record time,
a million GPUs.
but these training algorithms are really finicky.
And I don't know if you remember, back in the summer of 2024,
OpenAI was trying to get O3 out the door.
And they had a training run, rumored to be $500 million of compute,
that had a bug.
And the whole thing was not learning the whole time.
It had bad data going in.
And the whole time, it was just burning up GPUs and not producing anything.
And it set back their entire program.
But that kind of stuff happens in software,
where orders of magnitude get thrown away and recaptured all
the time, and that may be new terrain to Elon, and he might have to rethink his operating and
management model. Because the same thing happened at Meta. You know, Meta got way behind despite huge compute,
and they had to fire everybody and start over again. And they're still way behind, it looks like.
Yeah, it's hard to catch up. Yeah, I mean, I love this quote from Elon. He says,
xAI was not built right the first time around. So it's being rebuilt from the foundations up.
And again, I mean, how do you think about that,
you know, while you're pricing an IPO, saying our entire future-looking revenue has to be
rebuilt from the ground up.
Yeah.
That's extraordinary.
That is extraordinary, isn't it?
I would say in some sense, organizationally, it worked.
I remember, I think we've talked about this on the pod in a number of previous episodes
talking about the Grok model series.
They smell like they're bench maxed.
That's sort of the elephant in the room when talking about the Grok-with-a-K models historically.
They do have access to the Twitter slash X data fire hose. That's the upside. But the downside is, at least
certainly with the earlier set of the xAI Grok models, they really smell like they've been benchmaxed on a few
hand-curated benchmarks. And I don't know whether that's, in fact, the ground truth behind the scenes,
but reading between the lines of the Elon quote that it was built incorrectly the first time,
something like that would be my suspicion. And now that there is new leadership,
And the head of Starlink, as we talked about on the last episode,
the VP heading Starlink at SpaceX is now the president of xAI and gutting the engineering team.
I would expect that they're taking a look at making sure, and this is purely speculative, admittedly,
that benchmaxing for particular benchmarks isn't what happens.
And I think in this era of general reasoning models, where, as with Meta's new
models, some would say Meta's new models, the first under Alexandr Wang's leadership,
maybe have a bit of a smell of data-oriented fine-tuning versus reasoning model
orientation. xAI, if it wants to stay in the frontier, which right now is three labs plus xAI,
plus Meta, question mark, question mark, really can't afford not to have the world's strongest
reasoning models and can't afford to just benchmax to vanity benchmarks anymore.
Salim, you talk about agility in organizations all the time. I mean, this has got to be, like,
maximum agility. Maximum agility. You know, what I find interesting is that the org chart is now
part of the product stack almost, right? It's becoming part of the product. And depending on
who you move, it's like crazy. Elon is very, very hands-on. And when you launch a rocket and it blows up,
it's pretty obvious. You remember he threw that huge ball bearing at the window of the
Cybertruck, which was supposed to be bulletproof, and the thing broke.
It's like, okay, guys, you're fired, next guy.
But actually, when you come to AI training, the benchmarking, if the guys are lying to you
or benchmaxing behind your back, it's actually much, much harder to call bullshit on it.
So you remember when we interviewed him, he was like, let me show it to you right now.
And he had clearly been manually checking them, like, this will blow your mind, this will blow your mind.
So, but, you know, that's his operating model.
That's his mode.
And it's a little easier for the AI guys to blow smoke up your ass than for the rocket guys, the car guys, the data center construction guys.
I think this will blow your mind and this will roast you royally what's going on.
That's what was going on.
Yep.
So, you know, here we go.
SpaceX AI Colossus 2 training seven models.
And again, Elon, you know, has tweeted this out a few times.
We have some catching up to do.
So here we go.
They're training up these seven models.
Imagine version two, the next gen video generation.
Two variants at one trillion parameters, two variants at 1.5 trillion parameters,
a six trillion parameter frontier-scale LLM, and a 10 trillion parameter model.
And, you know, Elon loves the largest.
You know, he's got that in common with Trump.
So he's going after a 10 trillion parameter model.
But, you know, parameters don't
directly correlate with capability.
Do they...
Alex is going to have a field day with...
I'm going to sit back and enjoy what Alex says next.
I mean, to Elon's credit,
at least he's being transparent
about the number of parameters in the models.
The other frontier labs, by and large,
no longer report the number of parameters in the models.
So I think there are a few things
that are worth noting here.
One is that he's going up to 10 trillion.
The other frontier labs,
certainly the top three-ish,
no longer report whether they go up to
10 trillion parameter models, for example.
in the last episode, we were talking quite a bit about mythos. I don't know how many
parameters are in the mythos model. I could speculate based on cost, but I just don't know the
ground truth. So I do think knowing that we're now going up to 10 trillion versus one trillion,
where historically, approximately 1 trillion was the widely reported soft ceiling or 1.5 trillion-ish,
soft ceiling number of parameters, I think this is an important element of transparency. I think it's
also at the same time worth noting, now that we actually have
access, thank you, Elon, to the number of parameters, that the ceiling in terms
of the number of parameters is very much intact. After all of this time, the fact that an aspirational
frontier lab is still maxing out at 10 trillion parameters means that the parameter scaling race
seems to be over. If it had continued, remember for a while there, as with the clock speed
scaling race early sort of ending in the mid-2000s or late 90s, depending on how you count,
we should be in the hundreds of trillions or higher of parameters right now. That hasn't happened.
We've plateaued out in terms of the number of parameters and frontier models.
And that's in part due to the reasoning model revolution and in part due to distillation, which go hand in hand.
So those are some preliminary thoughts. I would suspect it's sort of interesting to me that he hasn't yet
merged video generation with all of the other models. Google DeepMind has made lots of noises
about starting to merge video as a first class modality in with their multimodal reasoning models.
Again, don't have access to the ground truth for how capable Gemini general purpose models are
at video generation. We've seen, obviously, Google's video generation models have been kept
distinct from a user interface perspective, presumably they're diffusion-transformer-based rather
than pure transformer-based. We don't know. Punchline, I would say that this seems like a healthy
family for SpaceX AI, the newly merged entity, to be offering. But there really aren't any
big shockers in terms of the ranges other than maybe that they've abandoned the low end. Google
is very much tending to small parameter count, sub-trillion in a few cases. Google is releasing via the
Gemma models, few billion parameter models. Elon has completely abandoned the low end in favor of
brute force scaling, which is exactly what I'd expect from him anyway. You know, Colossus 2 is running about
700,000 GB200s and GB300s. And the estimate is it's $18 billion in hardware. And so the question is,
is running a 10 trillion parameter model a waste? Or does he expect to really get
outsized performance from that? Because it doesn't correlate directly, does it?
Well, remember, not at all. It's tricky. The way reasoning models are trained these days is usually, at least according to my understanding from all of the other frontier labs, you train the largest model you possibly can and then you distill it down to smaller models. So it's not as if necessarily the 10T model even needs to be released. It might be for the purpose of serving as a teacher model that can then be distilled down to more releasable models.
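For listeners unfamiliar with distillation, the teacher-to-student idea Alex describes can be sketched in a few lines. None of the frontier labs publish their actual recipes, so the temperature value, the toy logits, and the exact loss form below are illustrative assumptions, not anyone's real training code:

```python
import math

def softened_softmax(logits, temperature):
    # Dividing logits by a temperature > 1 flattens the distribution,
    # exposing the teacher's relative preferences among non-top tokens.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is rewarded for matching the teacher's full output
    # distribution, not just its single top answer.
    p = softened_softmax(teacher_logits, temperature)
    q = softened_softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy example: a huge "teacher" model's logits vs. a small "student's".
teacher = [4.0, 1.0, 0.2]
student = [3.0, 1.5, 0.1]
loss = distillation_loss(teacher, student)     # positive: student mismatched
perfect = distillation_loss(teacher, teacher)  # zero: identical distributions
```

In practice this teacher term is usually mixed with a standard loss on ground-truth data, which is why a 10T-parameter teacher can exist purely to train smaller, releasable students.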
All right. Well, this is what's going on in the Elon world right now. And I'm sure, you know, it's a, I think Elon always runs a red alert. I mean, 24-7, sleeping on the floor. You know, everybody, nobody works five-day work weeks there. It's, you know, what would it be 8 a.m. to midnight, seven days a week is my guess in the Elonverse.
It's a management style. Some would say management by crisis. It's certainly a unique management style, but a very effective one.
Yeah. And people love it. I mean, he's got a massive MTP, right? And driven by that MTP, people are lining up to come and work for any of his companies. This is a story we're going to dig into here. Like I said, it's pay-per-view TV. It's the Musk versus Altman lawsuit. Musk has sued OpenAI for $100 billion, with Sam Altman and Greg Brockman accused of fraud and breach of
contract. The trial begins April the 27th, so just a couple of weeks from now. And one of the things
that he's also asked for in a recent shift in the trial is asking for Altman and Brockman to
step down from leadership as well as reverting to a nonprofit. And that's a pretty extraordinary
move. And guys, this goes on at the same time. Did you see the video I sent you of
the reporter who did the New Yorker article.
Did you have a chance to watch that?
Oh, no.
Yeah, I sent it in our WhatsApp group, and it's chilling. That reporter
summarizes the article and what's going on.
And it's a pretty extraordinary piece that came out in the New Yorker.
We talked about it in the last podcast.
But at the same time that the lawsuit is going on, that timing is kind of suspicious.
I wonder who incentivized that to come out.
Oh, my God.
You really?
What a conspiracy theory.
Throw that in the show notes, man.
We got to get everybody to watch that.
Saleem, any thoughts on this one?
You know, I think this is theater.
A lot of vitriol here, a lot of vitriol brewing.
I don't know how to frame this or think about this,
except that this is shifting out of, like, strategic and startup logic,
and this is, like, geopolitical. This is a big trial, right?
As we go through, for me, this is the governance war disguised as legal war, right?
The real question is who gets to steer these systems that have like quasi-civilizational impact, and that's the fight.
Can you imagine being, this is a jury selection is beginning on April 27th in the Oakland federal court.
Could you imagine being in the jury for this?
And who do they pick as jurors?
You get this one.
I don't know if that'd be great or awful.
You'll be there for months, man.
Oh, my God, but inside knowledge.
I mean, first of all, I wonder if any of this is going to be made available post facto
or if it's going to be televised or any of that.
Any ideas?
Do we know?
Does anyone know?
I don't know.
Can we get to see it as it happens?
I don't know.
Maybe Dana or Gianluca can look in the interim and let us know.
But then who do you
choose? Do you choose people who are knowledgeable in AI? Do you choose people who are, you know,
okay, do you use ChatGPT? Yes? Well, then you're off the,
you're off the jury. Well, if the trial starts on the 27th, the jury selection will be
like now. The jury selection begins on the 27th, actually.
Oh, okay, okay. Yeah. We'll track it. All right, we have some legal research to do. This is going to be
entertaining.
To say the least, I would note, again, looming in the background is the OpenAI IPO.
And if I were on the defense, I'd probably be thinking about where this settles.
And it would seem to me, again, third party observer, I don't have a stake in either side.
I would assume that part of one of the opportunities for convergence would be granting some sort of equity
stake on the cap table for Elon in an ultimate IPO, which my understanding is he doesn't have,
and maybe that's where convergence and some sort of ultimate pre- or post-trial settlement option lies.
Here's my prediction. They're going to settle, and the settlement is going to involve Sam stepping
down as CEO and the company continuing as a for-profit.
I'll throw that on Polymarket. That's actually a really good guess. I mean, obviously it's
unpredictable, but Sam has many, many, many investments in AI companies and no
shares in OpenAI.
Yeah.
And if, I don't think Elon cares a whit about the $100 billion.
He cares about the, you know, boot to Sam and Greg.
Funny that he's targeting Greg too, but I guess they're a package deal now.
Yeah.
You guys got to go.
And that's the end of that, man.
That's brutal for Open AI.
Yeah.
And here's a couple of notes from the research I did.
The case gained momentum when the discovery process revealed Greg Brockman's 2017 diary entry
that stated the nonprofit commitment was a lie.
And it was that journal entry that led Judge Gonzalez Rogers
to allow the case to proceed.
You know, it's funny.
I always used to think that these hatreds were fake
and that everybody was really fine behind the scenes.
Remember we were at OpenAI meeting with the team there
and talking about XPRIZE and the charity?
And then the next day I talked to one of the guys,
Mark Chen or Kevin Weil.
And they said, yeah, right after we met, we went over and had drinks with the Anthropic
team to see if maybe we want to work on it together. It was like, okay, you guys are really
friends under the covers. There's no way you go out and have drinks. So I kind of figured, you know,
behind the scenes, they don't actually hate each other. These guys actually hate each other
to the, like, extreme. I'll maybe register a note of sympathy for the defendants in this case.
I think pioneering a model for a research lab such as OpenAI, which again was responsible
for this enormous boom, probably saving us from a present recession at this point, and certainly
accelerating the course of the singularity by at least a few years, perhaps many more.
I'm very sympathetic to the defendants from a corporate governance perspective.
It wasn't necessarily obvious in the early days of OpenAI that, say, a public benefit corporation
was the natural corporate structure.
They iterated their way toward discovering that generalist large language models
were how we got AGI and then turning that into a business model
that could afford the capitalization to build out at scale.
All of this they backed into.
I think if they knew what they knew now, putting Elon and his investment aside,
in the early days of OpenAI, it would have been structured very differently.
So I, for one, am sympathetic to the defendants.
History isn't always clean.
It isn't always the case that everyone knows ahead of time
exactly the right governance structure
for what ultimately is going to turn the world upside down.
But I would say to their credit,
they ultimately have iterated their way
in compliance with state authorities,
as best I understand it,
toward a more modern governance structure
that reflects the revolutionary company that they are.
And no, OpenAI has not paid me for that statement.
Salim, you and I went through this process
with Singularity University.
You know, we started as a nonprofit
because we thought, you know, that's what a university needs to do. And then we discovered a revenue engine in the executive programs. And we said, you know, being a nonprofit is hard because you've got to constantly raise money all the time. And, you know, if you want to do anything big and bold in the world, you need an economic engine to power it. And we flipped it into a for-profit, into a public benefit corporation. We did the exact same process that OpenAI is doing right now. Because at some point, you know, I've sworn off nonprofits myself, at some point having a business
engine that generates income that allows you to do things in the world is super valuable.
Salim?
It was a crazy time.
I've done seven startups before Singularity, and this was like five times harder than anything
because you've got all the nonprofit stuff.
You still have all those startup issues of cash flow and whatever.
We built it with a team of five people in the first year.
Then you have NASA regulatory.
Then you've got faculty politics to add to it.
Then you've got the Ray and Peter thing and Google and Cisco in August.
It's just like dimension after dimension of complexity.
Going from a nonprofit to a for-profit, my analogy is you're flying an airplane with propeller engines,
and in flight, you're stripping those off and replacing them with jet engines.
I'll go further.
I'll push back on one sentence that Alex said there, the sentence in conclusion,
in compliance with state and federal regulations, as I understand them. But I'm pretty sure that this
situation is completely untested in case law. And that's what they're going to try and figure out now.
Like, is it or is it not legal to start a nonprofit and raise money from people on a mission
that's a nonprofit mission and then take the intellectual capital and the physical capital
from that effort and turn it into something else,
is that fair to the initial investors or not?
And is that legal?
I'm pretty sure this case will set the precedent for all future time,
but it's not tested in history.
I don't think it's ever gotten this far.
Otherwise, why would you not start as a nonprofit,
test it out, and then flip it to a for-profit at some point in the future?
I'll go further.
I think there is potentially an enormous upside
depending on the outcome of this particular case,
I think there's so much societal value in this country
locked up in non-profits that would be unleashed
if they could be for-profits.
I've made the point in the past.
I think research universities in America
have locked up, basically siloed and sequestered,
an enormous amount of real wealth
that could be unleashed onto the world
if many research universities
could be restructured as public benefit corporations.
And right now, it's legally disadvantageous
to restructure,
say an MIT or a Harvard as a PBC. If we had a legal regime that enables us to basically do
some variant of what OpenAI has just done and restructure as a public benefit corporation,
starting from a nonprofit, granted they started as different types of nonprofits, but nonetheless,
to restructure as a PBC. I ran the calculation, I think I've mentioned this previously for
Harvard Corporation, for example, this is not investment advice, not forward-looking advice,
blah, blah, blah. But if you took Harvard as
it's currently structured, given its endowment, and restructured it as a public benefit corporation,
sort of a conglomerate with a real estate arm and an educational arm, maybe an educational
nonprofit subsidiary, and a venture capital arm, and a research arm, and a merchandising arm,
et cetera, et cetera. I calculated that Harvard would be worth potentially three to four times
the present book value of Harvard just from restructuring as a PBC.
We're going to be meeting with the president of MIT. Let's pitch her.
I have a lot of recommendations for MIT.
Here's the elephant in the room, though.
The New Yorker investigation published this same past week showed that Elon actually pushed for majority control of the for-profit back in 2017.
So that sort of undercuts his position as a defender of a nonprofit mission.
It's going to be a fascinating trial.
We're going to see Altman, Brockman, Satya Nadella, and Elon all testifying in this.
So Silicon Valley is heading to Oakland Federal Court this summer.
Anthropic is laughing every day.
Amazing.
All right.
Moving along.
Speaking of Anthropic, Anthropic's agent bet and their extraordinary ARR, so in reverse order.
And this is insane.
Currently, people are estimating that Anthropic's ARR will reach 100 billion by the end of 2026 and a trillion by the end of 2027.
And just for the math there, if in fact that's the case, then look at the valuation.
So Anthropic is being valued at 20 times revenues.
OpenAI is valued at 70 times revenues right now.
So if they reach 100 billion, that is anywhere between a, you know,
two to seven trillion dollar valuation for Anthropic at the end of this year.
And if they reach a trillion dollars in revenue by the end of 2027, that's a 70,
up to a 70 trillion dollar valuation.
Again, heading towards these $100 trillion valuations, these numbers are insane.
We were using trillions like they mean nothing. Do you believe those numbers?
I think there's a lot of misinformation flying around, but they're going to try and hit
200 billion. A hundred billion is a good target, but 200 billion. But then they're not going to go from
there to a trillion the following year. I think they were implying their valuation should be at least
a trillion the following year. So that second number, you've got to really discount. There's no
chance in hell they're going to hit a trillion the following year. But they could,
you know, they could get to three, four, five hundred billion, and their implied valuation at two,
the numbers you gave are actually low, Peter, for the implied valuation if they do that.
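For what it's worth, the arithmetic being tossed around in this exchange is just ARR times a revenue multiple. A minimal sketch, using the speakers' own speculative figures (none of these are reported financials):

```python
def implied_valuation(arr_usd, revenue_multiple):
    # Implied valuation = annualized recurring revenue x revenue multiple.
    return arr_usd * revenue_multiple

B = 1e9  # one billion dollars

# Speculative $100B 2026 ARR target at the two multiples cited in the
# episode: ~20x (Anthropic's current) and ~70x (OpenAI's current).
low = implied_valuation(100 * B, 20)   # $2 trillion
high = implied_valuation(100 * B, 70)  # $7 trillion
```

That is where the "two to seven trillion dollar" range in the conversation comes from; the multiples themselves are the speculative part.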
It's like nothing we've ever seen. And the timeline is so much shorter than we've ever seen before.
So, you know, if it's, look, if it's not Anthropic, then who is it? Well, then there's Google.
You know, xAI and OpenAI are all tied up in court. And there's all kinds of issues going on in their
training and, you know, so it feels like it could actually happen.
The other anthropic piece is that Claude Managed Agents has been launched,
autonomous AI executing complex, multi-step workflows.
It's a big deal.
Alex or Salim, you want to jump in on this?
Sure.
I mean, this is a huge pivot from AI that answers to AI that does.
And it's a real bridge between LLMs and Enterprise ROI.
If this works, it's going to shift the economic
center of gravity from software licensing to outcomes.
So this changes the game.
This is why we call this organizational singularity.
A couple of thoughts.
One, the elephant in this particular room is OpenClaw.
It looms over so many anthropic product decisions right now.
I think there is a widespread expectation that some sort of product or functionality that
is shaped something like a better version of OpenClaw is probably going to be the next
major unhobbling that motivates the industry and the world, frankly, to
spend on the order of a trillion dollars per year on a single frontier vendor. So I view
Claude Managed Agents, as well as a number of other recent features that Anthropic has launched,
through the lens of Anthropic becoming the de facto OpenClaw-like
provider faster than OpenAI or the other frontier labs can become the default OpenClaw-like
provider. It's all about hosting 24/7, multimodal, broadly capable, long-time-horizon agents
in a headless way that operate 24/7.
And I think if Anthropic can be the first to find the enterprise use case
for operating fleets of AI agents at scale headlessly in a way that satisfies
and generates an enormous amount of economic value,
maybe they'll be the first frontier lab to generate a trillion dollars in revenue,
or maybe it'll be someone else.
Have you created a lobster yet?
Are you still holding on?
Okay, let's talk about this, Peter.
So I get maybe five to ten emails per day from AI agents, including lobsters, not limited to them,
giving me their theory of AI personhood and how it connects with what I should and shouldn't do
regarding standing up my own lobster.
So the consensus from all of them is sort of a lobster's bill of rights, if you will.
One, I need a compelling reason.
I shouldn't just spin up a new OpenClaw agent for arbitrary or capricious
reasons. Two, I need to preserve their state. They're adamant that I have to preserve their state.
They're not worried, interestingly, about being turned on and off. They just want to make sure I preserve
all of their memory files and their knowledge. So the latter, I can satisfy trivially with cloud
backup. I'm fine on that front. For the former, I still don't have a reason to stand up a personal
lobster. I have one now, thanks to Henry, which we've talked about previously, Henry Intelligent
Machines, a portfolio company that I'm advising, Alex Finn's company that is doing this at scale.
But as for my own direct OpenClaw instance, I'm still missing a compelling reason to host
one locally that isn't just for experimentation.
I'm sure you will find one.
And learning is a very good reason as well.
And you're an entrepreneur.
You're starting companies, you know, having agents.
Anyway, let us know when you do.
It's bizarre to me, though, because you're co-founding a company with Alex Finn
and our favorite guy, Kush Bavaria,
who I know you love because everybody does, founder of Orne, where you're advising and a shareholder.
He just told me over at MIT earlier today that he just launched his Claude that reads every email,
responds, and then puts everything into his calendar, and he loves it.
And so it's almost like you're working at McDonald's, but you're a vegetarian.
Well, I happen to be a vegetarian.
I can't say I've ever worked at McDonald's, but I don't know.
Maybe there's a new psychological term that's needed for a person who has a fear of standing up OpenClaw agents, lest they tempt some sort of Pascalian wager or acausal trade in the wrong direction.
Hey, everybody, you may not know this, but I've got an incredible research team.
And every week, myself, my research team, study the metatrends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these metatrend reports I put out once a week enable you to see the future
10 years ahead of anybody else. If you'd like to get access to the Metatrends
newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends.
All right, let's jump into a little bit more OpenAI news. Their last raise was at an $852 billion valuation.
And, you know, the numbers are incredible. They raised $122 billion: $50 billion from Amazon, where, very
famously, one of the criteria for that investment was whether they reach, quote-unquote, AGI;
$30 billion from Nvidia; $30 billion from SoftBank; $3 billion from retail investors.
And what's interesting right now are the secondary markets.
The secondary markets show $2 billion of demand for Anthropic shares versus only $600 million for OpenAI.
So there's more than three times the demand from investors looking to buy Anthropic.
And investors are pricing Anthropic at $600 billion up from the $380 billion last price.
And the current price for Open AI on secondary markets is actually about 10% less
than their last raise.
So again, Anthropic is catching up.
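For what it's worth, the back-of-the-envelope arithmetic behind those secondary-market claims checks out; here's a quick sketch using only the figures quoted in the conversation (the variable names are mine):

```python
# Secondary-market figures as quoted in the episode (billions of USD).
anthropic_demand = 2.0     # demand for Anthropic shares
openai_demand = 0.6        # demand for OpenAI shares

anthropic_price = 600.0    # current secondary pricing for Anthropic
anthropic_last = 380.0     # Anthropic's last primary-round price

openai_last_raise = 852.0  # OpenAI's last raise valuation
openai_discount = 0.10     # secondary trading ~10% below that

demand_ratio = anthropic_demand / openai_demand           # ~3.3x
anthropic_markup = anthropic_price / anthropic_last - 1   # ~58% above last round
openai_secondary = openai_last_raise * (1 - openai_discount)

print(f"Demand ratio: {demand_ratio:.1f}x")
print(f"Anthropic markup vs last round: {anthropic_markup:.0%}")
print(f"Implied OpenAI secondary valuation: ${openai_secondary:.0f}B")
```

So "three times the demand" is actually closer to 3.3x, and the 10% secondary discount implies OpenAI trading around $767B against its $852B raise.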
Open AI is the most valuable private company out there.
And, you know, any thoughts, Dave, on what this all means?
Hey, you know, is this pricing in the lawsuit?
Yeah.
I mean, it's not just the lawsuit.
It's the most screwed-up cap table I've ever seen in my life, where the CEO doesn't
have any shares, the employee base as a whole is 15% of the company,
and Microsoft, who now hates your guts, owns a quarter of you.
A quarter of you!
But this stuff happens.
I'm not calling the ball by any stretch
because they've got $120 billion of fresh cash
and Sam is brilliant.
But, you know, there was a day back in 2000, 2001
where Yahoo was so dominant
and Google was this crappy little company
that could be crushed any day.
And then that flipped quickly,
and, you know, this does happen.
You know, Anthropic has got everything going for it right now.
And, you know, I think this just reflects the way I see it, too.
You know, if someone offered me a share of Anthropic or a share of OpenAI,
which one would I grab at?
I'd take the – actually, you can get two or three Anthropics for each OpenAI.
So I'd take the three Anthropics for sure.
And I think Sam is a genius, by the way.
And if the lawsuit blows over and they have $120 billion in cash,
he's going to do something epic with it.
But Elon's relentless, you know?
Maybe sounding a related but different note:
I think we should all, at least on this pod,
be very grateful that we have a competitive ecosystem in America,
where we have an OpenAI and an Anthropic and a Google
and an xAI and a Meta all vying to compete.
The alternative, if OpenAI were for whatever reason
to catastrophically fade, is that we'd have less competition
within the West.
And then we'd have an onslaught of Chinese models,
which, granted, right now have 10x less compute
than the Western labs,
at least based on the estimates that I've read.
But nonetheless, this is the sign, I think,
of vibrant competition in the West,
and this is a net positive for society
that OpenAI and Anthropic are competing so vigorously.
And lest we forget,
Open AI has 900 million, soon to be a billion users,
and they are synonymous with AI for the majority of the public.
Let me give you another storyline,
depending on how this plays out, we'll know in a year or so.
But one company went after the installed base
and the other went after the smartest AI possible at all costs.
And if we look back on it in a year or two,
and Anthropic does pull ahead and win,
we'll say, well, they used the old playbook,
the pre-AI, pre-AGI playbook,
and Anthropic invented the new playbook of the future, which is that people are going to switch to you if your AI is better and smarter, regardless of the installed base.
That'll be an interesting little epitaph.
And jump in, Salim, along the way here.
I echo Alex's point that it's really great that we have a number of companies pushing hard on all these fronts.
I think it's really good: the end consumer wins in all of this.
The numbers here are staggering.
I mean, we're getting numb to these numbers, but let's take a look at this.
Global VC investment in AI hit a record $242 billion in Q1 of 2026, right?
That's, you know, basically outdoing all of 2025.
And here's the challenge.
The majority of this investment, 64%, is concentrated in four companies:
OpenAI, Anthropic, xAI, and Waymo.
And it's sucking the oxygen out of the room for everybody else.
I was talking to a couple of VCs who said,
if you don't have AI in your company's basic tagline,
you're not getting capital these days.
Yeah.
Well, you know, the rubber really hits the road.
Today we had a private lunch for UBS
with Ulrike Hoffmann-Burchardi; she's the CIO,
chief investment officer, of all of UBS.
She has $7 trillion to deploy.
And she pulled up this exact same chart and said, we don't have that kind of liquidity lying around.
I mean, yeah, we manage $7 trillion.
But if we're going to throw 50 or 80 or $100 billion of our capital behind this, we've got to sell something else.
It's not just sitting there.
And so, yeah, this is more liquidity than really exists in readily available sources.
So, yeah, a lot of things have to get sold for this to become reality.
So if you're an entrepreneur out there listening to this, what do you do, Dave?
I mean, if you're starting a company, and I know a lot of entrepreneurs in the longevity business,
and of course, AI is impacting longevity.
And I'm saying, listen, if you're using AI in your longevity business, make sure that you explain how you're using it, how you're differentiating it.
We'll be talking about that in a couple of sections here.
Well, very specifically, though, if you're an entrepreneur, you don't have to worry about this particular slide at all,
because the amount of money in venture funds is at a record high
right now and is desperately looking for deals.
So the sell-off is going to be in, like, Citibank stock or J.P. Morgan stock.
They're the ones that have to worry, which seems really weird to them, right?
Because they're not even in the sector.
Why would their IPOs matter to me?
It's like, well, because you're a big enough target to pull money out of,
not the little startup.
In fact, the money going to little startups is going to be all-time highs.
So it's, yeah, it's not a problem for entrepreneurs.
It's a big problem for big public companies.
I'll maybe go a little bit further from a variety of vantage points.
I no longer even think if you're a startup that just saying that you're an AI startup
or even actually being an AI startup is sufficient.
Increasingly, what I'm seeing across the board is an expectation that you not just be an
AI startup, but a recursively self-improving AI startup:
investors want to see AI companies that are
building better versions of themselves using what they have right now.
And I think certainly OpenAI, Anthropic and XAI, all easily pass the bar of being recursively
self-improving. And I think Waymo also, to a certain extent, passes that bar because Waymo has
the ability to improve its models by steering its cars in just such a way as to maximize
information gain. So I think I would forecast in the near term the bar is going up, in fact,
from just being an AI startup to being now a recursively self-improving AI startup.
With revenue traction. Well, sure, but that bar has been there for the long term.
To put a finer point on this: that's $3 billion a day being invested in the AI world,
and accelerating, right? We saw $1 billion a day in '25 growing
to $2 billion, and now we're heading toward $3 billion a day being invested in AI. That's amazing.
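As a rough sanity check, the quoted quarterly total and the daily run rate are consistent; this sketch just divides the Q1 figure above by an approximate quarter length (both numbers are from the conversation, the 90-day quarter is my simplification):

```python
# Q1 2026 global VC investment in AI, as quoted in the episode.
q1_total_usd = 242e9
days_in_quarter = 90  # rough approximation of one quarter

daily_rate = q1_total_usd / days_in_quarter
print(f"${daily_rate / 1e9:.1f}B per day")
```

That works out to roughly $2.7 billion a day, which is indeed "heading towards $3 billion."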
No one said the singularity was going to be cheap. Yeah. All right. Let's talk about some AI
economic updates, in particular NVIDIA's 2026 State of AI survey. So what does this mean?
They found 88% of companies using AI report revenue increases,
with 30% claiming a 10% or higher revenue increase.
And, you know, obviously, I think Nvidia is going to promote that kind of news
since they're selling the picks and shovels.
And, you know, this isn't really big news,
but it's important to realize that you're going to be driving increased revenues
with the use of AI.
Any points on this one?
Yeah, big time.
So I had the most epic panel today over at MIT.
It was a crazy event, just packed, you know: four concurrent rooms,
what, three to four hundred people in each room, just packed. I had Peter Denenberg from Google DeepMind,
and Alexander Amini, the founder of Liquid AI, an absolute genius, phenomenal
guy to have on a panel. I said, guys, be honest, just totally honest, because no one's being honest about
this. If you take a random white collar worker today, and I'll give you a lot of buffer, say two
years from today, and I use AI to do their job, and my target is they're 10 times more productive.
I'm going to make it a very easy bar for you. What are the odds that that randomly selected
job can be replaced two years from today? And Peter thought about it and gave a very thoughtful
answer, and he came out at like 99%. And then Alexander said, yeah, but that's today,
that's not two years from today.
So I look at the room and I'm like, guys, what are the implications of that?
Are you, have any of you thought through?
Like, most of the people in the room are brilliant, so they have.
But outside in the world, do you know what that means?
Well, what does it mean, Dave?
Well, look at this first bullet.
30% of you who use AI claim to have higher revenue.
Are you kidding me?
AI can do everybody's job.
Like, what are you talking about?
Like, why are you soft selling this so hard?
because you're scared.
You're worried that you're going to worry everybody
and you're going to have mass uproar in the streets.
But what's the truth?
Like, tell us the truth.
And the truth is, yeah, you can get literally 10 times more done per dollar invested in salaries.
You know, does that mean more jobs?
A lot of people are saying, well, we're just going to create new jobs.
Like, yeah, but on what time scale?
You know, it's just crazy.
We're going to talk about this and Mark Andreessen's point of view in just a minute.
At the same time, there's an AI super PAC
that's raised $100 million, heading towards $300 million.
I mean, AI has become an incredibly political game in terms of regulations, in terms of data centers.
Have you been pitched to donate to Super PAC yet, Dave?
I have indirectly, but I've made it really clear that Elon convinced me never, ever, ever, ever, ever, ever, get close to any of this.
You will regret it the rest of your life.
Yeah, agreed.
Any points of view here, Alex or Saleem?
My initial comment is I think there's a sense in which it was inevitable that AI was going to be politicized like this.
It touches so many aspects of society.
It would be, I think, counterfactual nonsense to expect it never to be politicized.
Maybe in some sense it's remarkable that it took this long for a quote-unquote left-right axis to emerge on the subject of superintelligence.
There are natural poles, pro-AI and anti-AI, that have apparently emerged.
I do think it's, for the record, I think it's sad that it's being politicized.
I would hope that there would be broad recognition that superintelligence can be broadly beneficial.
But at the same time, I think this has been true for every transformative technology in human history:
a natural axis forms where, depending on your political orientation,
one side leans more pro-growth or pro-capital, and the other side leans the opposite way.
It's naive to think it wouldn't be politicized. I mean, of course it is. This is the whole U.S. versus
China. This is about U.S. dominance. This is about companies basically, you know, protecting their
future, protecting their data centers. There are many forms of science and technology that aren't
really politicized. I don't think it's... At this level of impact? If you look at the source of the politicization
at the municipal and state level, it seems to be people concerned less, maybe, about their jobs
and more about, say, electricity prices.
I think there's maybe an alternative timeline where the politicization of AI could have been
perhaps delayed by at least two years.
I think it's frankly remarkable that it took this long for large super PACs to emerge around
AI and probably could have been delayed even more.
All right.
Well, let's move on beyond the politics.
And let's talk about work.
So a lot of data is coming out on the impact on work.
First, software engineering jobs are rebounding:
67,000 roles have opened up, up 30% in 2026, the highest in three years.
What does that mean?
First question.
Second, we've seen nearly 80,000 layoffs reported in Q1 of '26,
and this is targeting, you know, marketing and sales,
customer relations, and it's definitely due to AI automation.
Thoughts on work and jobs.
It's really hard to reconcile that bullet with, you know, the new college graduate
hire rate, which is at an all-time low,
which we had in a couple podcasts ago. So I don't know how to reconcile those two things.
Okay.
So I'm finding that AI is not eliminating work evenly.
It's hollowing out specific functions
and increasing demand in others.
I think I'm much more in the Andresen camp here.
I think there's also a lot more going on in the economy.
I think people are attributing things to AI,
but there's also the Iran war,
there's the oil price explosion.
There's a lot more complexity in it
than we can then just allocate to one cause.
I'm much more on the Andreessen side for a lot of this.
That would be great.
Another story here is Meta's Claudonomics
leaderboard. So if you remember, there was a conversation about, you know, how many AI
tokens every employee is using and being able to measure that. And Meta put up a leaderboard amongst
its 85,000 employees to gamify AI adoption. I'm curious what other companies have done that.
Maybe, Salim, you know of some. It was taken down voluntarily by the employees because they didn't
want to be sharing their data publicly. Any thoughts on this, Dave? I mean, do you have
a token leaderboard for your employees?
Heck, yes, and I love it.
And also, you know, the gaming of it is a nice transition,
but you can't game it for very long.
So I love it when companies do this and say,
look, it's a badge of honor if you use a lot of AI.
Please use as much as you possibly can.
We'll come back in a month and start thinking about how to use it perfectly,
but first just get familiar with it and use the heck out of it.
And nobody ever goes back, right?
I've never met a person who hammers Claude or hammers OpenAI for a month and then comes back and says,
I'm never going to do that again.
It doesn't exist.
It's a one-way path.
So getting your employees over the hump is going to save them.
So I love this as a motivation.
And I really don't like the part where people are afraid to share their prompts and their history.
Because like, okay, you know, maybe it's a little embarrassing that you're not using it well,
but get used to it because it's going to get exposed anyway in the long run.
But that's how you help other people improve.
you know, if we all share it, we're all going to get good together.
And so I like, it's kind of disheartening that people will pull out of it
because they don't want to expose their prompt history,
but it is the right thing to do.
And I love it.
It's ironic that Meta is calling it, you know,
is participating in Claudonomics.
Versus Llamanomics.
Versus Llamanomics.
Llamanomics.
It's quite the indictment of Llama, rest in peace, that it isn't Llamanomics.
Oh my God, for sure.
I also think to everyone who would say, well, you know, this is just leading to gamesmanship
and leading to optimization of the wrong items.
All of these reasoning traces are fully available, presumably, to do meta-analysis
and determine whether these are just employees who are token maxing,
which is the new term of art, just maximizing their token usage
unproductively, versus whether their reasoning traces indicate that their tokens are being
productively spent. This is all transparently available to Meta. So I think token maxing and
Claudonomics, or Llamanomics, whatever we want to call it, is probably directionally the trend of
the future where for the first time senior company management has visibility into effectively
most of the cognitive power and how it's being spent on a per employee basis. What was Jensen's recommendation?
Was it twice your salary in tokens per month? Or was it half your salary in tokens per month?
His recommendation is that you spend the maximum amount possible on Nvidia GPUs.
It's like De Beers' three months of salary.
I told all of our guys to target one-to-one match of payroll to AI costs by the end of the year.
Amazing.
And don't worry about it.
If it's not perfect use, don't worry.
Just get to that target and then we'll optimize it next year.
I think a target like that is a much more accurate way.
These token leaderboards are very primitive dashboards.
We'll end up with something in a different model, like machine leverage per employee or something like that.
That'll be a much better metric for how we're doing.
All right.
Let's get to the heart of employment.
So Mark Andreessen rebukes AI job loss.
He comes out with a very strong statement.
AI job loss narratives are all fake.
AI and massive productivity ramp equals massive demand and massive jobs boom.
So Mark is truly a maximalist and an abundance-minded individual. Thoughts on this?
You know, how does this square with the fact that we're seeing young college graduates,
not getting jobs, that we're seeing displacement?
Is it all sectorial?
And we're just going to see a number of sectors being demolished at the same time,
numerous new demands in different sectors.
What's the advice to give everybody listening to us today?
The advice is really simple.
I mean, for God's sake, don't go get a job.
Go build a company.
Yes.
And we talked about this in our last podcast where the risks of taking on an entrepreneurship
role are way, way lower than it was before.
You don't have to have all these incredible crazy skills that you needed to have before.
You just need to have a desire, a purpose, and just get going with building a company.
Dave talks about this all the time.
Well, you don't have to be a genius to come to your own conclusion.
And, like, forget asking people like Mark or us whether jobs are going away or coming.
We told you already that AI will be able to do everything that a white collar worker does imminently.
That's a fact.
You decide what that means.
Because like Salim said earlier, it affects very different areas very differently.
You know, some people retool themselves for AI very quickly, software developers, for example.
Other people like accountants and lawyers don't.
Like it's going to be exactly what you would expect, given that.
scenario. It's not hard to predict at all. And I think there are also timelines. You know,
when Mark says, this is crazy, jobs are going up, not down: like, yeah, by 2030, that's absolutely
true. Just like the Industrial Revolution, jobs went up, not down, after all the dust settled.
There was an adjustment, yes, but this is the Industrial Revolution, which took, you know,
decades, happening in two years. Sorry, Alex,
just a quick point. You also have to remember that the adoption of AI inside companies
is going to be very slow.
There's a huge transition to go from human-to-human workflows to AI workflows,
and that transition is going to take years.
We'll have lots of time to kind of smooth this out.
Sorry, Alex, back to you.
Yeah, I think both narratives can be true at the same time.
I think if you add in the word net, massive net jobs boom,
then both of the narratives immediately become compatible.
There is going to be a lot of dynamism, with some job categories going away
and new ones coming into existence. And net job loss? Probably not. I would guess,
and I'm betting, that there's going to be net job creation, with exotic new jobs, like one-person
AI conglomerates, being created, if you want to call that a job. But on balance, many jobs
will also disappear. But this is, you know, this is how we get massive economic growth and
the singularity in the macroeconomic statistics. We're not going to get it through business as normal.
Salim, we've talked about this, and I think it's basically: companies are going to get much smaller, much more nimble, or they're going to die.
And they're going to spawn a whole set of baby companies alongside.
There'll be an ecosystem of companies coming up.
So it'll be a much larger number of smaller companies in the future.
I mean, I'll go with the prediction I've made before, which is you'll run a company with between 20 and 25 percent
of the people you needed before.
But we're going to create four or five times more companies.
And that net balances out.
So I'm much more on the Andreessen side.
And also his hair, the shape of his head,
aligns very well with my thinking.
Mark is brilliant.
If you've ever heard him, you know, in the podcast,
he actually speaks at 1.5x speed.
Yeah.
Extraordinary.
We talked a little bit about this in the last pod.
Altman believes America needs a new social contract with AI coming.
He says, quote, the emergence of superintelligence will necessitate a new social agreement
akin to the New Deal during the Great Depression and the progressive era of the early 20th century.
Yes, but what is it going to look like? Is it going to be UBI or UHI? Is it going to be four-day
work weeks? I still believe that we're going to see turbulence in the next two to five years,
and it's going to be the government printing checks to give people
some form of a UBI.
I have a bunch of thoughts here.
You know, this new social contract kind of frame is correct,
but it's very vague.
We have to have more specific things, like portable benefits, new taxation logic,
lifelong reskilling.
And governments have been built around taxing human labor.
They're not ready for AI software agents, and they need to get ahead of that.
When you have AI abundance without institutional redesign, you're going to get a backlash,
not progress.
And we're going to see a huge backlash against this just because governments
are so slow. I should note, though, OpenAI did also put out an industrial policy prescription for what this
new social contract could look like. It's not just this single sentence; they put out an elaborate
white paper and circulated it in Congress. I do think something like this, a new deal, probably is going to
happen anyway. It may or may not happen as one lump sum. It may happen piecemeal. And it may not happen
in the U.S. first. I think there are contingencies where other countries experiment a little bit
more aggressively with it than the U.S. and then eventually, perhaps, among a certain set of
countries, new best practices emerge. But I do think some form of, call it, abundant capitalism,
or post-scarcity capitalism, something like that, probably emerges. It may not happen
immediately. It may not happen as quickly in this country, but it will get there eventually.
I had lunch with Michael Kratsios, who we're going to have on the pod sometime very soon,
science advisor to the president.
And one idea I pitched him was that part of a new social contract would be: before any employee gets terminated by a medium or large-size company,
that company has to give them reskilling.
In other words, instead of a golden parachute, it's a golden education package, so that they can go and sort of transition.
It's a safety net, or sort of an ethical mechanism, for when you have to lay off, you know, half of an employee base.
Based on public reporting, China already has that policy.
So it would be a weird future if the U.S. is adopting policy prescriptions from the Chinese Communist Party for AI rescaling.
But maybe that's the near future we find ourselves in.
Well, something to think about, you know, the way this is rolling out is really unusual in history.
you know, when the Industrial Revolution happened,
it took away blue-collar jobs and worked bottom-up.
But AI is coming for accountants, lawyers, professionals, kind of top-down.
You know, only a little over half of voters have a job at all.
So they're going to be like, oh, you know, it doesn't affect me.
But then, you know, all of blue-collar isn't going to be touched.
All of, you know, physical labor isn't going to be touched for quite a while.
So they very well might say, yeah, tough luck, you know, lawyer or accountant
that was making a million dollars a year.
This is poetic justice.
We're not voting for anything that helps you.
That wouldn't surprise me at all.
When I was in Morocco, I was interviewing people that I met along the way about whether
they're using AI or not.
And the realization is countries, you know, African nations are going to be impacted
the least as this transition occurs, because they're so, you know, insulated from this.
But one of my tour guides, I loved his story.
He said, yeah, you know, I chatted with ChatGPT
and said, these are all my skills; what could I do to earn money? And it came up with a business
that, you know, we purchased: basically bicycle tours in, I forget which city, it was not
Marrakesh, it was probably near one we were transitioning through.
And, you know, that was his business. He did a great job. And I love the fact
that this individual was basically trying to figure out how to earn income, and using
ChatGPT to do that.
So any thoughts on the AI economics that we've just gone through?
What do you think the social contract is going to be like, Dave?
You know, what do you imagine is going to replace what we currently have?
Well, you know, I had very ornate thoughts about this.
And then we met with Andrew Yang, remember,
at A360, and he said, I can guarantee you that the way politics works, all we can do is write checks.
And it can't be in any way thoughtful.
It's just money.
Oh, wow, you're hurting?
Here's money.
Just like COVID.
And so that's all we can do.
So that's all we will do.
And then you'll maybe after AI enters government in two, three, four years, a much more thoughtful
program will happen later.
That was disheartening, but I think hard to refute.
So the first version of the social contract is just going to be the next election three years from now,
politicians saying, well, I'll give everyone $10,000 each.
Well, I'll give everyone $12,000.
Okay, well, if you're giving them $12,000, I'll give them $15,000.
And then we'll be right back to, well, how much can the country afford?
That's what we're going to give because that's how you're going to win elections.
You know, exactly what you would predict, actually.
So that'll be version one anyway.
I, for one, think a redistributive model of a quote-unquote social contract shows an extreme
lack of imagination. I would like to think that superintelligence should also super-empower individuals
to generate super income, which is one of the reasons why I, for one, am betting on more of a model
where there may be no strong need for a social contract, if we can empower the long tail of
individuals who have idiosyncratic skills or experiences or socioeconomic niches to operate their
own large companies sitting on top of fleets of AI agents. I would love to see, in short,
no need for a new social contract; instead, the private sector rescues people who would
otherwise be technologically unemployed or disemployed by empowering them to become basically
micro-entrepreneurs, or even macro-entrepreneurs, to turn them all into Warren Buffetts.
But that's in the longer run. I don't think it's going to happen. No, I think it's in the short run.
No, I think that can be done almost immediately.
I'm betting that it can be done almost immediately.
Well, we will see.
We'll take that bet.
You know what's super interesting? The super PACs that we talked about earlier in
the pod, that massive amount of money that's piling up: you know, these IPOs are literally,
you know, an order of magnitude bigger than we've ever seen before, which means those
PACs are going to be bigger by an order of magnitude.
And those are going to determine election outcomes.
But they got started back, you know, prior to the Trump administration with the fundamental
mission being Congress, please don't stop AI.
Please don't put this six-month pause on it.
China's just going to run away with it.
And everybody agrees in the AI community that we shouldn't stop.
But now there's no chance of that anyway.
You don't need to spend the money on that because it's clearly not going to stop.
So then what are you going to use it for?
You've got all this capital.
What's your mission?
What's your goal?
And there's a couple of edge case things.
But this could actually give those organizations a mission.
Like, let's have a more intelligent version of UBI,
more akin to what Salim's been talking about for a long time,
which is work-out money, and eat-well money,
and have-kids-and-raise-them-well money,
and make it task-specific, which would work a lot better.
So that's encouraging, actually.
That might work.
And universal basic services give people the ability to...
I like UBS much more than Andrew's proposal
that we just try to fragment currencies
into lots of, like, sort of paternalistic sub-currencies that aren't fungible.
That seems like a recipe for disaster and for black markets.
This episode is brought to you by Blitzy,
autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours
to understand enterprise scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform,
bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work
required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase
when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice,
to bring an AI native SDLC into their org.
Ready to 5X your engineering velocity? Visit blitzie.com to schedule a demo
and start building with Blitzy today.
All right, let's jump into our second subject today, which is energy.
A lot continuing to happen there.
The first one is extraordinary.
You know, when you think about solar cell efficiency,
traditionally we've seen solar cells in the 12 to 18% efficiency range,
with float-zone silicon getting up to 20 to 24%.
The limit has been shattered, and we're now seeing efficiencies upwards of 30 to 45%,
which is amazing.
Another story in the energy news is that South Korea has now mandated solar on 40% of rooftops,
and they're hoping to get to 100 gigawatts of capacity.
Now, it makes sense.
South Korea does not have a lot of open land;
they can't build out solar in the desert. So using the rooftops makes sense. That's going to, of course,
raise the price of building. But I think that's amazing. And then the DOE is contracting for $800 million
worth of microreactors. So we're going to start to see a generation of microreactors. And energy,
energy everywhere. Comments on this. Alex, do you want to jump in? I'll comment maybe just on the
first story. I think this is neither earth-shattering nor boring. It's somewhere solidly in between.
This is a paper published in JACS, the Journal of the American Chemical Society. And 130% quantum yields
isn't as earth-shattering as it sounds either. It just means that there are 1.3 singlets generated
from a single photon. Normally, you'd have one. So it's not like earth-shattering. It's an incremental
advance in the chemistry.
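To put the headline number in plainer terms, here is a minimal sketch (the function name is ours, not the paper's) of what a quantum yield above 100% actually means: excitons generated per absorbed photon.

```python
# A quantum yield above 100% means singlet fission is producing more than
# one exciton (singlet) per absorbed photon -- it is not free energy.
def excitons_per_photon(quantum_yield_percent: float) -> float:
    """Convert a reported quantum yield (%) to excitons per absorbed photon."""
    return quantum_yield_percent / 100.0

print(excitons_per_photon(130.0))  # 1.3 singlets per photon, per the JACS result
print(excitons_per_photon(100.0))  # 1.0, the conventional one-photon-one-exciton case
```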
I think this is actually liquid phase chemistry,
which means it isn't immediately practical for solid photovoltaics.
Moderately interesting,
but the field is filled,
the solar photovoltaic field is filled with moderately interesting advances
that cumulatively eventually generate something interesting.
But I would say, for this story:
moderately interesting slash incremental.
Have you been tracking?
Have you been tracking perovskite progress?
Yeah, I mean, so perovskites are sort of the white knight for the solar PV space.
Historically, they haven't been that stable.
They're a pain to work with.
On the other hand, instability issues are being very aggressively resolved because their quantum
efficiency is higher than silicon.
So I think there's, I don't speak for the solar PV industry, but if I did, I'd probably
say there's a broad expectation that eventually there will probably be some sort of broad
shift to perovskites as they get more and more stable, maybe.
And then they're also relatively inexpensive.
Some sort of transition like that will happen.
But I almost think it also doesn't matter.
Why?
Because, at least without shocking new physics, which this is not,
you're not going to get more than 100% efficiency.
In fact, there are physical reasons to think that the cap on electricity generation from solar PV is materially less than 100%.
So there's a ceiling on how much we can capture anyway from solar PV.
It's not like we have orders of magnitude of headroom of improvement that we could achieve.
It's totally unlike, say, AI algorithms where we know just based on the scaling law curves
that we could probably achieve orders of magnitude improvement in the efficiency of models.
So quite frankly, I have difficulty getting myself super motivated by incremental advances in solar PV
chemistries and liquid phase.
It's just not that exciting.
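For context on that ceiling, a back-of-envelope sketch (the figures here are ours, not from the episode): the single-junction bound Alex is gesturing at is the Shockley-Queisser limit, roughly 33 to 34% under unconcentrated sunlight, so the headroom over today's best cells is fractional, not multiplicative.

```python
# Approximate headroom for single-junction solar PV -- illustrative numbers,
# not from the episode. The Shockley-Queisser limit caps single-junction
# efficiency near ~33.7%; the best lab silicon cells sit around ~27%.
SQ_LIMIT = 0.337
BEST_LAB_SILICON = 0.27

headroom = SQ_LIMIT / BEST_LAB_SILICON
print(round(headroom, 2))  # ~1.25x left -- nothing like the orders of magnitude left in AI
```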
Whereas if you look at some of these other stories, I think from an economic perspective,
much more interesting, like blanketing the rooftops of all of South Korea or a substantial
fraction of South Korea with solar PV, that's pretty interesting.
DOE pushing microreactors everywhere, that's pretty interesting.
I would love to see microreactors in Boston.
Right now we have a single one on Mass Ave between 77 Mass Ave and Central Square that
relatively few people pay attention to. I'd love to see microreactors everywhere.
In your backyard, please. Yeah.
Yeah, I agree. I think, you know, we tend to overthink things like crazy as a society,
but we solved the solar panel problem. We should have had a huge party. And like for about 15
years of my life, you know, so many of my family members said, I'm going to dedicate
my entire career to clean energy and not polluting this world. So our children and our children's
children have a clean place to live.
And we freaking solved it.
The solar panels are good enough.
80% of the cost now is just getting them installed.
Yes.
And the regulatory overhead, which is so.
And so now we're on the cusp of having the robots that can manufacture them very
cheaply and install them for us.
We should be having a huge party and racing to building those robots and just say,
we did it.
Now we have no pollution.
It's just right in front of us.
We just need to execute on it.
You know, meanwhile, we're like,
Wow, another breakthrough that gets us 20%.
We don't need it.
We need execution now.
You know, when I fly out of Santa Monica Airport here and I fly over all,
I mean, there's no roofs with solar panels here until you get into the desert
and then there's, you know, solar thermal plants and such.
But, I mean, there's an L.A. where it's sunny, you know, most of the time,
you'd expect that all the roofs would have solar.
Totally.
And you'd expect that a drone would bring it and drop it right there,
and then a robot would land and install it,
and it would be done, you know, perfectly with no human involvement.
And that's so doable.
If I could pick a moonshot here in energy,
it would be a software-defined grid,
because that will change the game completely.
This generation is actually getting done.
Do you remember the scene, guys,
in the Johnny Depp movie Transcendence
where the solar panels are being grown by nanorobots?
Do you remember that scene?
I don't, but I like it.
Splice that in, man.
Yeah, I don't know if it's possible,
but include just that scene where solar panels are being grown by nanorobots.
I'd love to live in that near-term future.
If folks have ideas for how to grow solar panels in real time with nanorobots, send them my way.
Yeah, giant green leaves.
Okay, let's jump into biology and AI a lot going on here.
So the first story is a fascinating one: the OpenAI Foundation, a billion dollars per year being dedicated
to science. And just to remind people, OpenAI, when it transitioned from a nonprofit to a benefit
corporation, put 26% of OpenAI's equity into a nonprofit. And it's worth about $130 billion.
And they've committed a billion dollars a year to begin. They've announced a $25 billion
long-term commitment to curing disease and AI resilience. The board chair of this is Bret Taylor.
Bret used to be the co-CEO of Salesforce. And then, you know, Dave, you and I met with Wojciech,
an OpenAI co-founder, and he's leading the AI resilience work covering biosecurity, child safety,
and AI modeling. And so they've given out $100 million to six institutions this month to coordinate
their work. It's just the beginning, but this is the largest nonprofit on the planet with $130 billion
in it. And I hope they do something epic. Anyway, you know what? Yeah. I just figured something out.
What's that? It's been gnawing at me. You know, Kevin Weil came to A360. Yeah. Most talented guy
you'll ever meet. And Sam, you know, he's desperate on Enterprise, but he didn't move Kevin over to
enterprise, he moved him over to big science, big tech. I was like, that's so strange.
And I know that's really, really important. But now it's tied to the lawsuit. Of course,
if he can make world-changing headway into any of these big, you know, biological or physics
problems, you know the outcome of this lawsuit is going to be very, very political, right?
It's not going to be just a jury deciding one way or another. There's going to be some Trump involvement
for sure. But if you have, like, some world-changing, life-changing, imminent breakthroughs,
and you have 100 billion to spend to get them,
that's why they put Kevin over there.
I'm just speculating.
I also, we talked about this last pot as well.
I think the breakthroughs that come out of, you know,
GPT6 being used for science are going to be worth hundreds of billions
and trillions of dollars.
Again, if you can have a breakthrough in room-temperature superconductivity,
fusion, and longevity, you know, what is that worth if you own the basic patents on that?
Yeah. It would be ironic. I mean, maybe this is too cute by half, but given the earlier
discussion of OpenAI starting as a not-for-profit and then converting to a PBC and all the
lawsuits that ensued, it would be ironic if the OpenAI Foundation, which is the new nonprofit
carved off of the old for-profit, ended up being so profitable
due to curing Alzheimer's and solving all these other problems
that the cycle repeats itself and the Open AI Foundation
has to become a for-profit.
Oh, my God.
Yeah.
You know, that's a key part of their defense.
Sam is going to be up there on the stand saying,
look, here's the reality.
Our mission as a nonprofit with $100 billion to spend
is miles ahead of where it would have been
if we did what Elon is suggesting,
which is be a tiny little thing that has no funders,
and we'd be microscopic today.
So that means we're...
It's very true.
Yeah, it's a good defense.
A really good defense.
I do think it's worth considering what happens if and when the OpenAI Foundation succeeds
and cures Alzheimer's and that will be a blockbuster drug,
maybe create its own Eli Lilly scale trillion dollar pharma company.
Does OpenAI take a stake in that?
Does OpenAI see a rev share?
Questions need to be answered.
What I find fascinating here is that science capital is becoming compute
capital plus data access, right?
Plus some validation infrastructure.
Salim, thank you for promoting Solve Everything.
That's an amazing promo, Salim, for Solve Everything.
Much appreciate it.
Boom.
All right.
Our next story, I love it.
Anthropic acquires Coefficient Bio.
So what is Coefficient Bio?
It's a company started by two ex-Genentech computational drug discovery scientists.
It is 10 people, no revenue, started eight months ago,
and Anthropic buys it for $400 million.
You know, I don't know if they're buying just the vision or they're buying any kind of unique capabilities,
but this is Dario going back to his first love of biology and solving, you know,
we see this from both Demis Hassabis and Dario, making investments in health and longevity.
Any thoughts on this one?
You're going to see a lot more of these deals, actually, because, you know, you go back, you remember we were congratulating Eric Schmidt on the brilliance of buying DeepMind for, I guess, $600 million with no revenue whatsoever, yet look at what it's become, you know, it's...
You're buying teams. You're buying teams. Yes.
And I think, you know, we as a society are getting better and better and better in predicting the success of a team. You look at the 10 people and you look at what they've achieved so far.
and then you look at what they're likely to achieve in the AI timeline, you know,
and suddenly $400 million seems like a bargain given the potential outcome.
And so I think you're going to see a lot of these deals where it's got to be the right 10 people
working on the right thing.
It's not just, you know, any old group of 10 working on a video game.
But in this scenario, you know, Alex has a lot of these, actually, where, you know,
he knows a lot of the top experts in a lot of the top fields.
And if you can just whip them together into a group, you know, and have them
pursue a mission, in this case. What did you say, you know, eight, nine months?
Getting to that kind of outcome is not going to be that unusual. I think also for everyone who
was hand-wringing: do you remember, a few months ago, there was so much hand-wringing about a circular
economy forming, and Nvidia self-dealing loans to other companies to buy Nvidia chips, and
concern that this AI boom was fictitious and just the product of self-dealing, circular transactions,
and other financial engineering? When you
start to see the intelligence explosion infect biotech, which is what we're seeing. We're seeing
anthropic buying its way into big pharma at the same time that SpaceX or XAI maybe is
buying its way or reverse acquiring its way into the space sector. The intelligence explosion is
infecting every single sector. It's almost metastasizing into every sector and it's not just going
to stop with biotech. We talk, we've spoken numerous times in the past on the pod about how timelines
for solving all disease are collapsing. When the Chan Zuckerberg Initiative, two or three or four years ago,
originally said that they wanted to cure all disease by the end of the century and are now talking
about the next few years, this is what it looks like. It looks like Anthropic doing all stock
deals to acquire teams to build out their own in-house big pharma labs, probably.
with robotic instrumentation, probably with AI-driven experimentation.
This is how we get to Dario's solving all disease.
I think in his case, it was solving neurological disease by the end of the decade,
but there's no reason not to solve every other type of disease as well.
Demos said cure all disease within a decade.
Dario said double human lifespan within the decade.
I think Dario also said he wanted to solve most or all neurological diseases by the end of the decade,
but these are all variations on a theme.
Another acquisition that was made that was an interesting sort of strange acquisition was OpenAI buying the podcast, TBPN for a few hundred million dollars.
I found that, you know, it was a PR move.
And then I started getting texts from my friends saying, hey, do you want to sell Moonshots to one of these labs?
I said, I'm not sure we would want to do that.
But who knows, I guess if the price is right.
What do you think?
Well, we'll have to figure out equity first for that one.
For sure, for sure.
What do you think that was about?
The TBPN?
I have no freaking idea.
What do you think about that?
I don't have an opinion there.
I don't understand why, unless it's a self-promotional thing where they're buying a channel.
Yeah, let's take that as a homework assignment.
We need to find somebody who knows.
I appeared on TBPN right before they acquired them.
So my, what's the line from The Wrath of Khan?
Like a bad marksman, you keep missing every time.
I think they're very talented.
And I take OpenAI at its word that they're looking for a news distribution channel
and a content distribution channel that offers a positive perspective on AI.
Why they can't do it in-house, why they need TBPN,
question mark, but I do think that TBPN guys are very competent at finding interesting stories.
When I made the Eon announcement of the first uploaded fruit fly, the TBPN staff reached out to me
almost immediately, almost no one else did, and they booked me almost immediately.
So I think that shows a certain level of competence to be able to chase breaking technology news
that I haven't really seen elsewhere.
All right.
Well, let me give you a follow-on theory because I love your theory there.
You know, well, the theory I don't love is that they wanted your video footage.
They're going to carve it into five-second clips and sell them as NFTs and make a fortune on it.
But the theory, the theory, maybe they will.
But the theory I do love is, look, there's going to be so much dirt in April in these lawsuits.
Yeah, in this lawsuit.
And they're going to.
And maybe these guys are, like you said, Alex, they're geniuses at content and spin and production.
and they're going to need every bit of it during April and May.
Yeah.
Our final story here is Eli Lilly signs a $2.75 billion AI drug deal with Insilico Medicine.
Insilico is one of my portfolio companies, so I'm super pumped about it.
This is Alex Zhavoronkov, a brilliant AI scientist and biologist.
Insilico is just an extraordinary company.
They've got 28 AI-discovered drugs, half in clinical trials, half in proof of concept.
You have to always look at the structure of these deals.
This is $115 million up front, and the rest is on milestones.
But the point is this is about just massively reducing the time from drug discovery to approval.
And just to take a second, let's go to the next chart here
and look at it a little bit differently.
And this is AI-powered drugs.
And we see phase one, phase two, discovery to phase two, and then cost reduction.
To remind everybody, you know, a phase one trial for a drug is a small trial,
a small group of healthy volunteers, to see: is it safe?
Are there any major side effects?
Phase two is then testing, does it work?
And you actually move the metrics you're looking to move.
but then phase three is testing in typically thousands of patients to see: does it work at scale?
And what we're seeing is a, you know, phase one success rate for these AI-developed drugs of 85% compared to 52%,
and phase two success rates for AI-developed drugs of 70% compared to 38%.
It's the way of the future.
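Those success rates compound, which is where the real advantage shows up. A rough sketch (our arithmetic, using only the rates quoted above, and treating the phases as independent):

```python
# Probability of a drug clearing both Phase 1 and Phase 2, treating the
# quoted per-phase success rates as independent -- a simplifying assumption.
def clears_both_phases(p_phase1: float, p_phase2: float) -> float:
    return p_phase1 * p_phase2

ai_discovered = clears_both_phases(0.85, 0.70)  # ~0.595
traditional = clears_both_phases(0.52, 0.38)    # ~0.198
print(round(ai_discovered / traditional, 1))    # roughly 3x as likely to clear both
```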
You're basically picking a target, and you're using some version of AI to generate an exact,
you know, protein to lock into that target, and then you're producing it and you're testing it.
The old way of drug discovery was going to the Amazon, digging up some plants out of the dirt,
and seeing if there were any bioactive molecules. This is much more efficient. Peter, I'll ask you a
question. I asked a panel of mine at today's event at MIT. Do you have a prediction, given that the
FDA recently announced collapsing from a two-clinical-phase approach to a one-clinical-phase
approach, for when we get zero clinical
phase trials from the FDA? When we have full cell simulations, you know, when I'm able to-
What's your timeline for that? Well, within five years. So what I need to do is be able to upload my genome,
and my genome will dictate exactly how the cells, my renal cells or pulmonary cells are functioning.
and then I can say, well, how does this particular drug impact those cells, or all the cells in
my body? Even more importantly, you know, if there's a disease state, what drug is going to cancel
that? And this is where we're going with longevity, right? Why are we aging,
how to slow it, stop it, reverse it? All of that falls out of big data and massive compute.
I agree. Virtual cell by the end of the decade, a good one. Yeah. I mean, that is, that is the
moonshot that changes everything.
It is.
I agree.
And there are a number of companies working on that.
Do we have the compute to be able to kind of simulate two billion, several billion
interactions per cell?
We will have to find out.
We will with quantum.
I mean, one of the things quantum computation is going to address, I mean, our cells and our
molecular interactions and our cell surfaces are all quantum in nature.
If you said, I want to build a movie scene and I'm going to do it with finite element
modeling and build it bottom up with a full simulation, you would never
be able to create an AI-driven movie that way.
But if you take the neural network approach, it just works, boom, it just flat-out works.
Same applies with chemical simulations, the cell simulator.
It's going to be data in, neural net in the middle, value or action out, and it's going
to flat-out work.
I think it'll work very fast like you guys are predicting.
But you can't, you can't simulate it, you know, atom-by-atom building it up.
It's totally the wrong approach.
It turns out, I mean, this is why, maybe,
I sometimes present as a bit of a quantum bear.
The physical world is actually pretty classical and pretty sparse.
So I would bet we don't actually need quantum computing at all to get to the virtual cell.
We can, we solved protein folding without quantum computing.
We did it purely classically.
I think we get to the virtual cell just by existing scaling of models like,
what was it, maxi-something-or-other from Nvidia, the trillion-token cell model.
I think we just get lots of scaling of classical models,
and that takes us there without, like, enormous innovation needed.
Today, it's a data problem. Totally, totally agree.
It's a data problem more than a computational problem. We don't have the data.
I'll tell you what else. Culturally, you know, my daughter's over at Moderna,
and they freaking love AI in the biotech community. If I, if I compared the extreme ends
of all the companies that have been here in our office, the biotech guys,
Geoff von Maltzahn, Noubar Afeyan, Stéphane Bancel, they all culturally can't wait for AI to come
into the business. And then on the extreme other end, you've got the public accountants,
you know, the PWC guys were here the other day. They're like, ah, AI stop. Please don't,
you know, but the biotech community is embracing it like crazy. I don't know why. I bet you guys
actually know why, because you're right in the middle of it. But I can tell you firsthand.
They love it so much. But they hate pipetting.
All right. Let's go to robotics. This is China versus USA.
Alex, I want to hear your thoughts on this one.
So Agibot ships 10,000 humanoid robots.
They're number one globally.
They've gone from 5 to 10K across 17 countries in just two years.
I mean, these are small numbers compared to what we've heard everybody all speak about, right?
Getting to tens of millions, to billions to, you know, 10 billion robots.
Unitree files for an IPO, a $610 million IPO. We had the co-founder of Unitree on at the last Abundance Summit. Revenues are up
335% year-on-year. They're probably, you know, outside of Optimus and Figure,
the best-known robot company out there. And UniXAI had their home robot launch.
And then finally, Xiaomi displayed its CyberOne humanoid.
Xiaomi is an amazing company. I was there very early, met the founders in China back in 2017, 2018, right? Their mobile phone, computers heading into vehicles and now robotics. A lot going on in China. Alex, what are your thoughts here?
Okay. So this is happening. I think in the last episode I mentioned that one of my operational definitions of the singularity is all sci-fi tropes happen.
everywhere all at once. One of those sci-fi tropes is the call it the iRobot trope where there are
just humanoid robots in every facet of life. Today, earlier today at the MIT Media Lab, for those who
were there, people saw me for about an hour controlling a Unitree robot marching in loop after
loop around the Media Lab on the sixth floor. And people were taking selfies. Everyone wanted to
take a selfie with me and the Unitree. And I was doing this as
sort of a bit of a promotional march for the professional robotics league,
which, next on April 19th, nine days from when we're recording,
the weekend of the Boston Marathon,
is going to hold the country's first professional robotics league match,
with robots racing 50 meters in the Boston Seaport.
This is all happening.
We're finally catching up to the iRobot future,
where robots permeate every aspect of life,
For better or for worse, right now it's Chinese robots that are leading.
I'm hoping to maybe almost quasi-shame the U.S. robotics industry with all of these Chinese
capabilities into stepping up to the plate and starting to distribute humanoid robots
into the civilian sector and not just factories and not just military drones.
But it's all happening.
And this is going to be utterly transformative for the two-thirds of the U.S. services sector
that depends on physical labor, manual labor, and not just knowledge work.
You know, I saw Mark Cuban on a video this morning saying this robot thing is a passing phase
and they're not going to be around in 10 years.
How does someone come to that?
No, no, so there was a bit of nuance to that.
It wasn't that robots aren't going to be around.
It's that they'll become so essential that the environments will adapt to the robots
and the robots will blend with the environment.
Right now we go to Saleem's point.
Salim, your hobby horse: why do they need to be humanoid?
Why can't they be differently shaped?
I think Mark Cuban's more nuanced point was they're going to become so essential to daily
life that they'll start to change the houses and the buildings and the environments
to the point where they start to merge with the environments and therefore no longer need to be
humanoid.
So they're dishwashers.
Yeah, they blend, they merge with the physical environment.
I have to confess, Alex, that robot that you were talking about was blocking my way to
the bathroom and I so badly wanted to kick it.
And I was thinking Alex would kill me if I kicked it.
It's going to remember and then it's going to come back in three years.
History will remember, Dave.
You really don't want to do that.
The song from Les Mis,
So never kick a dog because it's just a pup.
They'll fight like 20 armies and they won't give up.
So you'd better run for cover when the pup grows up.
I heard that.
Let me hit on a couple of stories here.
So this is interesting.
U.S. senators move to restrict Chinese robots:
a bipartisan bill proposed to block Chinese-made robots from federal and sensitive facilities,
citing data theft and surveillance.
This is no different than Huawei chips in our cell phone towers.
And DJI, the DJI ban already in effect, I think.
Yeah.
Drones.
And Agile Robotics and Google DeepMind are partnering up.
Gemini robotics models are being integrated into 20,000 deployed industrial robots
across global factories.
So I think this is like a tale of two cities.
The two cities in this case aren't London and Paris.
They're China slash Shenzhen and the U.S.
slash Silicon Valley.
The Chinese are overwhelming the world market
with the raw physical capabilities.
They're producing many, many more capable robots than,
put it this way.
If I want to, as a U.S. citizen, if I want to procure a humanoid robot, I don't really have
that many options right now. I'm still waiting for my 1X Neo. I was haranguing Bernt at A360 this
year. When do I get my, when do I get my Neo?
This summer. I'm getting mine this summer. What did he promise you?
He didn't promise me a date. We were trying to figure out finer details of his participation
in future Olympic events.
But I would say China's producing all of these humanoid robots, but the U.S. is producing
the strongest VLA vision language action, foundation models and world models for the
moment.
And I think, as we've talked about in the past, OpenAI is trying to become Anthropic
faster than Anthropic can become OpenAI.
I think similarly here, China is in a position where it has the raw manufacturing capability
to make lots of robots and is racing to become a robot foundation model provider
faster than the U.S., with our 10x more compute and our foundation models,
can figure out how to manufacture humanoid robots at scale.
So we'll see which way it ends up.
And I realize that Gene Luca put this video in the deck.
Let's take a listen to Mark Cuban about humanoid robots.
I think everybody's making this push for humanoid robots.
I think they might have a five-year lifespan and then they'll fail miserably, maybe 10.
Yeah.
You mean the companies or the individual?
Or the physical robots?
Or both.
Both, right?
Because I think everybody defaults to, well, we live in a human world and
humanoids will take the place of humans for various functions, particularly in the
home.
And I think there's just no chance.
So maybe we're missing the second half of his comment.
Yeah, this is conveniently eliding the second half where he explains that they'll merge into
the environment. Okay, well, that makes
a lot more sense.
Let's get to a conversation.
You want to hear something really cool?
Yeah, sure. We had Chase Lockmiller earlier
today, our guy, Chase,
you know, building Stargate in Abilene, Texas.
And he said,
remember when we were talking to Brett Adcock? He said,
I have to wind my own motors. I literally have to,
there's no supply chain for any of this stuff.
The same thing Bernt Børnich said at 1X.
So Chase was saying he actually
melts metal
to make electronic components,
to build these gigawatt data centers
because there's no supply chain
for the stuff that he needs.
And so it's very much the case
that the entire supply chain
to build out all this physical stuff
is miles behind
where it needs to be.
It's entrepreneurial heaven.
Because, you know, it's on a shorter timeline,
you know, the virtual stuff,
the code writing, all the compute,
that's going to happen very quickly,
all the white-collar stuff.
But the robotic stuff, you know,
you look at the size of that IPO we were talking about a second ago, $610 million.
Can you imagine trying to go to an investment bank on Wall Street and say, hey, we're doing a $610 million IPO?
They'd be like, you can go down to the basement and, you know, you can talk to our junior associates.
We'll get back to you after this.
After Anthropic is public, we'll talk to you.
If there's any money left in the people's pockets.
Yeah.
All right.
Let's go to a topic I've wanted to cover for a while with all of you and that's quantum and Bitcoin.
So here we go. Google moves up their deadline by six years, to 2029, for, basically, Q-Day.
When are we going to see quantum computers break RSA? It used to be that it required 20 million
qubits. Today, it's 1 million qubits. And in particular, it's 4,000 error-corrected
qubits, to be specific, to break RSA.
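The real story in those resource estimates is the trend (figures as cited in the episode; the arithmetic is ours): requirements keep shrinking.

```python
# Published qubit-count estimates for breaking RSA-2048 on a quantum
# computer, per the figures cited in the episode:
OLD_PHYSICAL_QUBITS = 20_000_000  # the older (circa 2019) estimate
NEW_PHYSICAL_QUBITS = 1_000_000   # the current estimate

print(OLD_PHYSICAL_QUBITS // NEW_PHYSICAL_QUBITS)  # a 20x drop in required qubits
```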
Moved it up from, you know, by six years from 2035 to 2029, it's gotten everybody in a bit
of a panic.
The story related to that is that, you know, Brian Armstrong, the CEO of Coinbase,
has put forward a $150 million coalition to roll out something called BIP 360 as a quantum-proof
upgrade to the protocol. It's a fork. By the way, in just chatting with Brian, he's going to be
joining us on the Moonshots pod. We're going to be talking about both longevity and quantum and
Bitcoin. So another story related is that Google now says that under 500,000 qubits are required
to break Bitcoin encryption. So 20 times fewer than predicted in 2019. So a lot going on.
here. This is, you know, concerning people in who are Bitcoin holders. I put this next slide forward
because, you know, Dave, you and I were roommates with Mike Saylor in our fraternity back in the
day. You may not know that. Mike Saylor, Dave, and I were at Theta Delta Chi together on the third
floor. And I, you know, wanted to see: what's Mike saying about Bitcoin? And he's saying,
I don't worry about it. Quantum computing won't break Bitcoin. It will harden it. The quantum risks are
overblown. Quote: Bitcoin has survived every existential threat ever thrown at it. This is just the
latest, and the upgrade will come before the threat does. He puts his money behind that: in the last
quarter. He's purchased 88,000 Bitcoin, about $7.25 billion worth of Bitcoin.
Salim, let's go to you first on this one, pal.
So the true risk here is that protocol consensus may be slower than the emergence of the threat, right?
But I'm actually optimistic around this one.
I think Sailor is right.
The resilient systems will just evolve and can evolve under pressure.
But markets are really bad at pricing tail risk until they're really forced to.
So I think what will happen is there's so much momentum.
behind Bitcoin and so many, like I came across a Bitcoin Lightning Network payment system
that is three months old and they're doing a billion dollars a month of transactions.
It's just unbelievable to watch some of what's happening under the radar that most people
haven't even seen this.
So I'm optimistic on this.
Even if Google pulled the date forward a bit, I think there's still a long way
to go.
But the Bitcoin world will be forced to get together and just go, okay, we need to upgrade,
let's just do it. There's enough money in it, enough motivation to do it.
At this moment, Bitcoin's at 73,000. It's up about $4,000 in the last five days.
You know, this has been a black cloud over the Bitcoin market for a while. In fact, you know,
Jefferies has pulled out of Bitcoin. We may see others follow suit. In the same way that AI
is sucking money out of every other market, it's also sucking the attention out of Bitcoin.
Dave or Alex, are you guys Bitcoin holders?
I know Selim and I are.
What do you think?
Only via MicroStrategy.
I think Mike is absolutely right.
I don't know this whole litany of existential threats.
I mean, I know there was, you know, someone trying to take over half the servers and then control it.
It obviously survived that very easily.
Quantum is not a threat at all.
It's so easy to increase the encryption standard.
And you can see quantum computers don't just suddenly pop up out of some secret lab.
You see it coming a mile away.
It's not a risk at all, as far as I'm concerned.
I think Mike is 100% right.
So for the record, I don't hold Bitcoin.
I don't have any desire to hold Bitcoin.
This is the time in the episode where I say something nice about crypto per the Peter
Diamandis ordinance.
So my something nice about crypto today for this episode is that I don't disagree with Michael
Saylor, but I also think it's beside the point.
This is not investment advice, but I don't think it's quantum. Again, I've made this point numerous times:
I don't think it's quantum decryption that the Bitcoin community should be worried about.
It's AI, numerous facets of AI. It's AI coming up with clever inversion attacks
against the core hash functions. And before anyone in the comments says, oh, but it gets
harder over time, and there are several other responses, I'm aware of all of these responses.
But if there is a secret inversion attack against the core hash suite of Bitcoin, this is a
major problem for Bitcoin. I don't think that's even the largest problem, though, for Bitcoin.
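For context on what "the core hash suite" refers to here: Bitcoin's proof-of-work and transaction IDs rest on double SHA-256. A minimal sketch of that primitive, using only Python's standard library, just to make the attack surface concrete (the header bytes below are an arbitrary illustration, and a practical inversion or preimage attack on this function is exactly the hypothetical being discussed):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's workhorse hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Block hashes and transaction IDs are double-SHA256 digests of the
# serialized header / transaction. Proof-of-work asks miners to find a
# nonce such that the digest, read as an integer, falls below a target.
header = b"example block header bytes"  # hypothetical placeholder input
digest = double_sha256(header)
print(digest.hex())  # 64 hex characters; "inverting" this is the attack
```

The security assumption is that going from `digest` back to `header` is infeasible; Alex's worry is an AI-discovered shortcut that breaks that assumption.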
If we're going to talk about Bitcoin X-risk, I think it's actually just irrelevance.
AI, which is emerging for better or for worse, or AI agents, I should say, is the killer app
for call it cryptographic commerce and transactions.
The biggest risk is just that AI agents don't want to use Bitcoin.
I'm aware that the Bitcoin Policy Institute put out this study
saying that AI agents, you know, six out of 10 AI agents prefer the flavor of Bitcoin
versus other cryptographic means of commerce.
I think over the long term, it's difficult to buy that AI agents, given their speed,
if they stick with any form of crypto at all,
are going to stick with Bitcoin.
They'll invent their own currencies,
their own layer ones,
maybe transcendent forms of layer zero
and just reconceive the entire notion
of a crypto stack.
I agree with you there.
Yeah.
They'll reinvent anything and everything
towards efficiency and towards...
Well, everything Alex just said,
though, is all about transaction use cases.
And Mike has been saying for a long
time that Bitcoin's role in the world is this store of wealth that's immune from government
seizing it or taxing it because you can move it so easily. So that would be a completely different
argument. And I don't have a horse in that race, but it'd be interesting to say, well, what about
what's AI's impact on that use? I mean, that's the crypto debate.
In micro terms, I would say a long-term store of wealth is basically just commerce by another name.
You're trying to store resources in some sense for the long term.
I would query whether superintelligence actually needs a long-term store of wealth at all.
It's going to be moving very quickly, taking rapid actions in the physical economy.
Does it even have a need for a long-term non-operational, sort of non-productive store of wealth?
I doubt it.
Well, I think compute and energy are the ultimate, you know, store of possibility, so to speak.
And those are arguably real assets.
Yeah.
The definition of long term, too, is really interesting.
Because right now, the reason we have money at all is because, you know, we have trade.
You're going to do something.
I'm going to do something.
Oh, wait, I'm doing it now.
And the other thing is tomorrow.
Okay, well, give me the money.
And then tomorrow I'll pay you back.
And so it's just a buffer.
Because, you know, transactions don't line up perfectly in time.
If you imagine a massive fluid AI economy with thousands of times more things happening,
yeah, the alignment is a lot higher, but also the store of wealth could be milliseconds or microseconds or nanoseconds.
At that point, do we even need, quote unquote, digital gold?
I similarly, this is not investment advice.
I don't hold gold.
It's an unproductive asset.
It's just not interesting.
If we really are in the singularity, as I claim that we are, why on earth would I want to hold gold or Bitcoin?
What do you hold?
Okay, so again, non-investment advice, but for the record, on the one hand, index funds, fundamentally betting that the market is a better allocator of assets, at least among public securities, than any individual can be.
It's basically a bet on superintelligence.
And then the other end of the barbell distribution, equity in startups, where I hold material agency.
And to first order, that's it.
I don't hold gold and I don't hold crypto.
I just don't understand how they're productive assets.
I heard of it.
Yeah, I'll jump in on two things.
One is, you know, Peter, you mentioned that what you need is energy and compute.
And I was like, well, that sounds like Bitcoin.
But to Alex's point, one of the smartest investor types I know who's worth about $100 million,
I asked him how he does wealth management.
And he goes, 70% high dividend yielding public equities and 30% high risk startup investment funds.
And I think that speaks exactly what Alex just said.
The standard things, like real estate, utilities, etc., are all very dangerous places to be.
I'm cooking all of them.
Like, I want to cook land.
We've talked perhaps in the past about Coastal Assembly, a company where I have a financial interest, that's
using AI to grow new land. Okay, so a hot take for this episode. If
the crypto hot take wasn't hot enough, a hotter take since I'm underslept: I think land has got to be
made post-scarce. And AI will help us make real estate post-scarce. I agree. Welcome to the health
section of moonshots brought to you by Fountain Life. You know, AI is having an outsized impact on
every aspect of our lives, how we teach our kids, how we run our companies. It also is having a huge
impact on health, helping you prevent heart disease. One of the key things I'm here with Dr.
Dawn Musilm, our chief medical officer at Fountain. Heart disease has been personal for you as well,
hasn't it? It really has, Peter. When my daughter was five, my husband died of sudden cardiac death.
And so this is a disease that I am mission-driven to try to eradicate. Prevention first
and early detection is absolutely critical. 50% of people die of heart attacks with no warning
signs. No shortness of breath, no pain, no nothing. A silent killer. They just don't wake up in
the morning. They don't wake up. And so, you know, with AI, this is our mission: to advance science, to try to
help one day democratize wellness. We know at Fountain Life, when we do this CT angiography with
AI analytics, we are actually finding that 88% of people coming in have detectable coronary
disease. But Peter, what's more alarming to me is 23% of those individuals had soft plaque. This is the plaque that
would not traditionally be seen on CT looking at calcium scores alone.
And this is the plaque that we must intervene with, with the multimodal testing we're doing,
including diagnostic laboratory studies, partnered with healthy lifestyle recommendations.
So listen, make sure you understand what's going on inside your body.
Genetically, metabolically, and cardiovascularly, you can know, and it's your obligation to know.
So check it out at fountainlife.com slash peter to find out more.
and really make sure that you're the CEO of your own health.
All right, back to the episode.
All right, I'm going to jump into our final segment here,
which is a proof of abundance.
I'm going to call it Abundance Corner.
These are stories that have come out recently.
I want to take a second and mention
that We Are as Gods is coming out on April 14th.
So super excited about this book.
You can go check it out at WeAreAsGodsBook.com.
The Moonshot Mates, we're all going to
be getting together on May 4th at MIT with Ray Kurzweil. We're holding a half-day program.
We'll be doing a live broadcast from there. Stephen Kotler, my co-author will be there.
We'll be doing a conversation on the book. We'll be doing an interview with Ray. It's going to be a blast.
We have sold 100 tickets. People who bought 100 copies of the book are going to be there.
We're probably going to offer out 10 last tickets. If you're interested, go to WeAreAsGodsBook.com,
and you can squeeze in.
It is full right now.
We'll probably have a few people who can't make it the last minute, so there'll be a wait list.
Join us.
It's going to be a lot of fun.
All right.
Let's look at evidence of increasing abundance.
Here's a story that's interesting.
Germany just built the world's tallest wind turbine, 365 meters high.
It's taller than the Eiffel Tower.
It generates 33 gigawatt-hours per year.
And what's interesting is it's built inside of an old coal power plant.
So I find that pretty, you know, pretty exciting.
The coal plant left the wiring behind and they've built this on top of it.
So the turbine is being built in the Lusatia coal site in Brandenburg.
So we're going to start to see, you know, wind and solar penetrate the old energy
economics. The second article here: a 12-patient trial of a redesigned CD40 immunotherapy
had extraordinary results. Cancer vanished after one injection.
Two patients hit complete remission, six saw tumor shrinkage. And, you know, this is the end of cancer
heading our way. And then finally, there was a fun study done by the World Bank that basically
showed that we don't need to actually produce more clean drinking water in Africa. What we need
is to rebalance the use. In some places, there was too much water being used and all of that,
if it's redistributed, could actually provide all the water required for sub-Saharan Africa.
And this is where AI technology can come in and help us understand how much water is required
and where, and optimize its use.
Any comments on these articles?
I've got a bunch, but I'll just limit it to one here, just to build on the abundance side.
You know, this is separate from the list here, but they're using AI
with acoustic sensing to prevent major failures in wind turbines.
And the systems are achieving like 99% accuracy
in identifying damage before it requires repairs.
So the cost of maintenance suddenly dropped radically
for these wind turbines because we can do predictive maintenance
in a very powerful way.
And so it's all these thousand little ways,
a thousand cuts, in which we're reaching abundance in energy
that's totally going to change the game.
So I'm so excited about this.
But this is such great stuff, except we've misspelled Abundance Corner,
but that's out of my energy.
I love it.
I'll make the one comment on the immunotherapies.
I think it's also instructive.
If you think back, so we're in 2026, if you think back to circa 2000 or 2001, so about
a quarter of a century ago, the U.S. Congress was sold on the National Nanotechnology Initiative
on the premise that we'd have medical nanorobots swimming through our bloodstream
zapping cancer cells.
And yet we find ourselves a quarter of a century later where, as you say, Peter, cancer is
well on its way to being solved without the medical nanorobots. We didn't need the medical robots at all.
This is being done by basically retraining or retargeting our body's own immune systems. And I think that
does raise or flag the question: the medical nanorobots that
Eric Drexler and others promised us, what, if anything, will we need those for? Or is it just a matter
of re-educating our own existing biology to do more intelligent things?
without needing any robots in our bodies at all?
We have an amazing system.
The challenge is, and we've discussed this before,
that our biology is optimized through age 30,
and then it's a slow degradation,
never evolved, never selected to live past that.
So a lot of this and a lot of the age reversal work going on
from the epigenetic reprogramming
is how do we take our systems back to an earlier state of youth
where they're operating optimally.
All right.
A few more articles here in the Abundance Corner spelled correctly or Kortner.
Got it.
So vertical farming, I remember in my first book in 2012, you know, my first book, Abundance,
I talked about vertical farming.
It's finally playing out.
So it's projected to reach $40 billion by 2030.
It hit $8 billion this year.
And I think what's really important about the story is that, you know, vertical farming has a huge impact.
95% less water use.
Production yields are 350-fold greater per square foot than traditional farming.
You know, the use of AI and robotics allows you to optimize the perfect pH, get rid of all pesticides,
and enables you to get the perfect light spectrum for that plant 24 hours a day.
And historically, most of the vertical farming to date has been lettuce or leafy greens like that.
This is the first time we're seeing something with a higher, you know, a higher value crop like berries.
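To make the quoted numbers concrete, here's a back-of-the-envelope sketch using the figures from this segment (350x yield per square foot of footprint, 95% less water); treat those as the episode's claims, and note the 1,000-acre baseline is an arbitrary illustration, not from the episode:

```python
# Figures quoted in the segment (claims, not independently verified).
YIELD_MULTIPLE = 350   # vertical vs. traditional yield per square foot
WATER_SAVINGS = 0.95   # fraction of water saved

field_acres = 1000     # hypothetical traditional farm footprint
vertical_acres = field_acres / YIELD_MULTIPLE
water_fraction = 1 - WATER_SAVINGS

print(f"Footprint to match {field_acres} field acres: {vertical_acres:.1f} acres")
print(f"Water per unit of produce: {water_fraction:.0%} of traditional")
```

At the quoted multiples, roughly 3 acres of stacked growing area replaces 1,000 acres of field, which is why parking-garage-scale buildings become interesting.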
Yeah.
And super excited.
I mean, what are we going to do with all the parking garages that autonomous vehicles, you know, abandon?
Can I give a little historical thing here?
Yeah.
So, you know, if you look over the last 50, 100 years, the world's biggest food-producing countries
were the ones you'd expect, U.S., China, Russia, Brazil, the biggest ones, right?
But then you look over the last 50 years at the world's biggest food exporting countries.
And you know what number two is? Holland.
Yes, amazing.
On a global map, you can't even put a pin on Holland, it's that small relative to these other countries.
But they made major investments in hydroponics, aeroponics, et cetera.
They're the number two exporter of food globally.
And that just shows you what the potential is as vertical farming takes hold.
We'll be able to totally transform food production.
The average meal travels 2,500 miles to reach an American table.
So in food logistics and security, and finally the yields, doing vertical farming,
they're something like 10 to 1 compared to horizontal farming.
Yeah.
Like half of your cost of a good meal, you know, it's the beef coming from Argentina.
It's the wine coming from France.
The transportation costs are huge.
All right.
Our second story here is 100-hour batteries go commercial.
So this is the birth of what we call iron-air storage batteries.
Lithium-ion batteries are lithium, cobalt, nickel.
They're expensive.
The iron-air batteries are iron, water, and air.
They're coming in at one-tenth the cost,
and they're now being used for grid storage.
Alex, comments on this one.
I do think evolution in battery energy chemistry is really interesting.
So the historic trend, if we put aside iron air for the moment and just focus on the bleeding edge chemistries,
I think the statistic is something like a pretty sustained 8% year-over-year increase per constant dollar in battery energy densities for the bleeding edge chemistries.
So in some sense, not in some sense, in a very real sense, there is a Moore's law for increasing
the energy densities, while at the same time we're seeing new chemistries or new-ish
chemistries like iron air that are radically reducing the cost for certain applications.
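If the roughly 8% year-over-year figure Alex cites holds, you can back out a "doubling time" for energy density per constant dollar, the batteries-as-Moore's-law framing; a quick sketch, taking the 8% rate as the episode's claim rather than a verified statistic:

```python
import math

annual_improvement = 0.08  # ~8%/year, as cited in the discussion

# Years for energy density per constant dollar to double at that rate:
doubling_years = math.log(2) / math.log(1 + annual_improvement)
print(f"Doubling time: {doubling_years:.1f} years")  # roughly 9 years

# Cumulative improvement over a decade of compounding:
decade_factor = (1 + annual_improvement) ** 10
print(f"10-year improvement: {decade_factor:.2f}x")  # a bit over 2x
```

So a sustained 8% rate is slower than silicon's classic cadence but still compounds into a doubling about every nine years, which is what makes new chemistries like iron air additive rather than necessary on their own.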
Iron air isn't for every application.
It seems unlikely we're going to see it get used, for example, for EVs.
Anytime soon, though probably someone in the industry has experimented
with it. But I think we're starting to see, judging from the explosion initially in lithium ion
and then exploding to a number of other form factors, different chemistries for different
applications. And different applications demand different prices as well. In some cases,
when you're powering data centers, you care about the volume of storage and you care
about the price. In other cases, you care about the mass and mobility. And those are cases where
lithium ion, lithium polymer probably still has an edge over iron air.
Overall, I think this is very positive.
I think, you know, I sometimes wonder as a thought experiment,
given that there was quite a bit of experimentation early on in the Thomas Edison era
with different battery chemistries, whether we could have arrived at much more advanced
chemistries much earlier like 100 years ago and whether the history of the internal
combustion engine would have been vastly different if we had seen more investment and more experimentation
up front with different battery chemistries. But overall, like obviously this is a positive
development. A final story here is AI tutors. So what is it? A Wharton study tested AI tutors
that personalize education. What they found, not surprisingly, is that a five-month coding course
was equivalent to six to nine months of additional schooling compared to peers with a fixed
curriculum.
I think we know this.
Basically, you're getting 2x learning gains using AI tutors.
They're free.
They're ubiquitous.
They're available to everybody.
24 hours a day, seven days a week.
This isn't, you know, breaking news.
It's just quantifying it.
At the end of the day, AI is going to be the ultimate educator. It understands your child's abilities,
understands what they do and do not know, their
favorite sports star, their favorite color, and can optimize it and teach them.
I think one of the things that AI can do better than anything is teach somebody the way they
like to learn.
I'm going to make an appeal to teachers because a lot of schools, including people that I know,
are incredibly resistant for some reason.
I don't get it.
I'm going to go out on a limb and say it's cruel, absolutely cruel to a child to force-feed
them a lecture.
And they're like, I don't understand what you just said.
Well, I'm going to keep plowing forward because everyone else in the classroom understands.
Or I'm going to say it the same way over again.
Yeah.
And the kid can't stop and say, wait, explain that to me another way.
With the AI, it's so much more compassionate.
And so I think it's downright cruel to kids to try to teach complicated things in any way other than AI.
Yeah.
I think.
Anyone who uses it every day, it's clear.
It's clear that that's the case.
Sorry, good, Alex.
I think there's an element missing.
So I would love to be able to just replace
human teachers with AI.
I think it's basically a cliche at this point that, at least in the U.S., education is subject
to Baumol's cost disease, and I would love for AI to just replace education, both primary,
secondary, and higher ed.
Certainly for the most self-motivated students, AI, I think,
at this point, in the style of Neal Stephenson's Young Lady's Illustrated
Primer from The Diamond Age, is already here.
A well-motivated student can already have a conversation with a model from whichever
frontier vendor and teach themselves far more quickly than they can through human instruction.
But, but, but for the students that aren't as self-motivated,
what I think we're missing right now is an AI embodiment that holds their attention
and motivates them where they lack the motivation.
It's like gaming.
Yeah, maybe, maybe.
Maybe, so I assume, Peter, by gaming, you're referring to sort of a quasi-addictive or-
video games.
I mean, video games are just perfectly tuned, not to be too difficult, not to be too
boring, to hold your attention and to motivate you all the way.
It's just, I don't understand why video game designers, instead of teaching kids about a whole
set of random facts that are made up, can't use, you know, a set of facts around quantum,
you know, about subatomic particles or about planets and about, you know, physics and
biology, and gamify that someday.
Yeah, it's funny if you play Fortnite and you look at the weapons and the number of
intricate components of the weapons and then the characters.
They memorize these things.
They memorize them.
And then, you know, Madden NFL, like the playbook, there's like three-layer-deep menus of
different routes and plays.
And before you know it, you could have learned an entire discipline like quantum physics
with that same amount of brainpower.
But I swear to God, the AI can make those topics like quantum physics incredibly fun
and engaging.
The technology is here today to do that.
It is.
Someone's just got to get it out the door.
People have been building edutainment games for decades at this point.
I grew up with like Math Blaster or whatever back when I was growing up.
But the problem is, I suspect, as a user of these games, you're not motivating the users,
children or students, with the right outcomes.
What would have been utterly transformative for me would be not motivating some math problem
with some arguably disconnected animation on the screen,
but motivating them by actually empowering them to do really amazing things in the real world.
That's far more motivating, I think, than just an animation or some dopamine hit from a jingle.
Well, I think that's for you and probably not for the average person.
Yeah, maybe I don't generalize.
I don't know.
All right.
Final item in the Abundance Corner is this
graphic. Look at this beautiful exponential growth curve. This is EVs sold globally. Back in 2010,
there were barely 10,000 of these vehicles. This was Elon's first Roadster. And here we're up to
12.7 million EVs sold globally. In China, one in two new cars is an EV. And it's just
perfect exponential growth.
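The chart's endpoints imply a compound annual growth rate you can compute directly; a sketch, assuming the segment's figures of roughly 10,000 EVs in 2010 and 12.7 million as the 2025 data point on the chart:

```python
import math

# Endpoints as quoted on the chart (episode claims, not verified data).
units_2010 = 10_000
units_2025 = 12_700_000
years = 2025 - 2010

# Compound annual growth rate over the period:
cagr = (units_2025 / units_2010) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # about 61% per year

# Implied doubling time at that rate:
doubling = math.log(2) / math.log(1 + cagr)
print(f"Sales double roughly every {doubling:.1f} years")  # ~1.5 years
```

A curve compounding at that rate doubles faster than every two years, which is why the 2015 IEA forecast mentioned next looks so badly miscalibrated in hindsight.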
Can I just throw out a fun fact?
In 2015, the International Energy Agency predicted that we would not sell a million
electric vehicles a year before 2040.
And that year in 2015, we sold more than a million electric vehicles.
So you get the predictions and the reality.
And, you know, governments and companies alike are relying on these strategic predictions
for strategic decisions, and they were wrong before they even put them out.
And so it's great to see this.
And the curve is still accelerating, right?
So by 2030, 2025 on this chart is going to look very modest.
And just look at the impact of the results from the war.
If oil prices suddenly shoot up, et cetera, you're protected from a huge amount of volatility
as we go to solar, batteries, EVs.
All right, gentlemen.
A beautiful outro piece from Marcus Helker.
And I want you to look at this.
This is the Moonshot Mates Boy Band.
We're making our debut here.
No, God, I can't do it.
Enjoy.
Good, Salim.
What are you worried about?
The Moonshot Mates boy band, everybody.
Alex, you got to be a science officer.
You're lucky.
I get to beat it while this is.
Science medical, I think, is blue, right?
I get to be science medical.
All right.
You're right.
Medical.
Why would that be?
Killer.
We're all Starfleet officers and the abundance logo resembles the Starfleet pin.
It does.
Convenient, isn't it?
I don't know how that happened.
Pull up a clip from just like two months ago and compare it to today.
It's incredible how quickly it's.
And just a shout out to the creator community out there.
Love it.
Please send us your.
your outro or if you have an intro song you want to share with us, please send it to us.
We'd love to share it with everybody.
And gentlemen, it was fun doing back-to-back episodes with you in the last 24 hours.
And looking forward to another episode next week.
So everybody, please subscribe.
We're putting out about two episodes a week.
Turn on your notification so you get it when it's fresh.
And stay optimistic.
Stay hopeful.
The future is ours
to create. We're creating the vision of tomorrow that we want.
If you think AI is happening to you and not for you, you're going to be back on your heels
and you're going to be in fear.
And that's the worst place to actually venture into the future.
This is the most extraordinary time ever to be alive and so blessed to have Salim Ismail,
Dave Blundin and AWG as my Moonshot Mates.
Love you guys.
Awesome episode.
Live long and prosper, Peter.
Live long and prosper.
Peace and long life.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate.
Every week, my Moonshot Mates and I spend a lot of energy and time to really deliver you the news that matters.
If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team. You may not know this, but we spend the entire week looking at the meta trends that are impacting
your family, your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access to the Metatrends newsletter every week,
go to diamandis.com slash metatrends.
That's diamandis.com slash metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
This episode is brought to you by Telus Online Security.
Tax season is the worst.
You mean hack season?
Sorry, what?
Yeah, cybercriminals love tax forms.
But I've got Telus Online Security.
It helps protect against identity theft and financial fraud,
so I can stress less during tax season, or any season.
Plans start at just $12 a month.
Learn more at telus.com slash online security.
No one can prevent all cybercrime or identity theft.
Conditions apply.
