Moonshots with Peter Diamandis - Google Invests $40B Into Anthropic, GPT 5.5 Drops, and Google Cloud Dominates | EP #252
Episode Date: April 30, 2026. In this episode, the mates explore the rapid advancements in AI, the global AI race, new models like Kimi K2.6 and GPT 5.5, and the implications for privacy, security, and the future of technology. The mates discuss the future of work, AI regulation, and transformative biomedical breakthroughs. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified. – My companies: Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their Memberships: https://www.fountainlife.com/peter _ Connect with Peter: X Instagram Substack Website Xprize Connect with Dave: Web X LinkedIn Instagram TikTok Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex: Website LinkedIn X Email Substack Spotify Threads Listen to MOONSHOTS: Apple YouTube – *Recorded on April 28th, 2026. *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Google commits to a $40 billion investment in Anthropic.
Dario needs compute.
He signs up with Amazon.
Google's already a shareholder in Anthropic.
They're trying to maximize the economic value per token.
It's all bottlenecked at TSM.
That's the actual bottleneck to all of AI.
Only Elon will talk about it.
Google Cloud is dominating.
They unveiled their eighth generation of TPUs,
in particular, TPU 8T for training and TPU 8I for inference.
I still believe Google's the winner in the long run here.
OpenAI unveiled GPT 5.5.
It very much feels like a release that's intended to strengthen OpenAI's codex.
Math is cooked.
A bunch of other things are cooked as well.
Things are moving so quickly now that on a month-by-month basis,
we're able to see the hardest of these benchmarks creep up 1% per month.
So not long now.
Everybody, welcome to another episode of Moonshots,
your favorite AI exponential tech pod out there in the universe,
here with my incredible moonshot mates,
AWG back with his orchid-filled room, DB2,
in his headquarters of all exponential investments.
And of course, Salim is on the road.
I mean, remember the book, Where's Waldo?
I think we're going to replace that with Where's Salim.
So, Salim, where are you today?
I'm in a car in Guadalajara in Mexico,
transiting to the airport, and this was the only way I could do this, is to do it in the car.
So hopefully the friend's hotspot we're piggybacking off lasts.
I can't believe you brought up Where's Waldo.
Peter, we were the exclusive licensee of Where's Waldo for data mining.
Okay.
We used to go to trade shows, and we'd have an actor dressed up in that Where's Waldo suit,
and we'd be like, hey, our neural nets can find anything in your data.
It's like a Where's Waldo.
And we gave out all the books, and it's amazing.
You still remember that.
So, Salim, you're in Mexico.
The Blitzy team is in Mexico, and they're raving about the podcast, by the way.
So I guess we have a big fan base down there.
We do, it turns out.
Yeah, big time.
I was in a conference for about 1,100 people, and quite a few of them are avid watchers.
And what about the rest?
Did you convert them?
Yeah, we've got to think international whenever we're commenting on these topics, because, you know, everybody, it's a big world.
And everybody out there is watching YouTube.
Suffice to say, everybody should watch Moonshots en español. You know, there's translators now.
I know, I know. I did my Meaning of Life session last night in Spanish with the translator,
and you should have seen the translator at the end of the night. She was so fried.
And of course, tequila shots required. What do you speak, Hindi? Or do you... What? No, I speak, I speak
English. It's my native tongue, because I come from a diplomatic family. I have
pretty bad Hindi. I can get by. But, you know, it's one of these where my grammar is bad,
my vocabulary. I just throw out words and hope it sticks. I can get through about 50% of our
conversation. Well, we're at almost 500,000 subscribers. So next time you're in front of our large
audience, tell them to push us. I do. Over to 500,000. Oh, okay. I'll tell them.
Let's jump in. Another incredible crazy week. Let's kick it off with a
conversation around the AI race and the agentic boom. So check out this slide, right? I mean,
15 major releases in only eight weeks. We're getting a pace of, you know, two major models per
week. I think you've got to be retired and just focusing only on this to keep up. There's no way
otherwise. So in this segment, what I love to do, guys, is really hit on the last three, Kimi K2.6,
GPT 5.5 and DeepSeek 4 before their extraordinary releases, each of them, you know, hitting new
capabilities.
You know, one thing, Dave, we saw the, you know, the acquisition, or the rumored acquisition, of
Cursor by xAI.
And I think what's interesting is that the winners in this crazy model race are going to be
those that are providing the best abstraction layer.
So it doesn't matter what model's underneath. Do you agree with that? Yeah, totally. Actually,
I just had a meeting with a data center company here in Cambridge, and the amount of effort going
into the TPUs and the NVIDIA B-100s is incredible, but at the abstraction layer, there's factors
of five and ten just being thrown away by mismanagement of the context window. And I mean, it's just so
much opportunity in this stack, which makes sense because it's all brand new. But it's just, and also,
there's a lot of vertical integration going on.
The warfare is really, really stepping up.
But I can't believe how Kimi K2.6 is keeping up.
I mean, it is just shocking that the open source world is actually on the radar and keeping up.
And we'll get to that in a minute.
But what's interesting is the speed of these releases.
I'm guessing that these new models are sort of, you know,
it's competitive marketing where the models are probably already cooked.
and they're just waiting for someone else to release
and then releasing right on top of it.
Anthropic is holding back on mythos,
so you know that there's at least one case
where you're exactly proven to be right,
which means there may be others as well.
But it's funny, the dot releases are coming faster and faster and faster.
I mean, what's shocking about this list is it's U.S. versus China, right?
There's no European models, there's no U.K. models, no Japanese, Indian models.
It's just all U.S. and China.
Everyone else is a spectator, it looks like, at this point.
I don't know if you agree with that, but I find that.
Well, the models are definitely self-improving now.
No, you're 100% right, but the models are self-improving now.
And so, you know, the rate is accelerating,
exactly what singularity theory would have predicted.
The rate is accelerating,
but because the models are improving themselves,
it's hard to start from a cold start and catch up.
But I'm surprised that other countries aren't using
the Kimi K2.6 model to bootstrap their own internal research. And maybe they are and it hasn't popped onto the radar yet. But, you know, I'm not finding it too hard to design new neural nets
popped onto the radar yet. But, you know, I'm not finding it too hard to design new neural nets
using existing neural nets. It's a very doable thing. And I'm curious, Alex, that chart down
below on this slide here that's showing all the leapfrogging, it's leapfrogging all the time.
But, you know, is it that they're all just cherry picking? They're all just sort of, you know,
studying for the test on the particular benchmark, and then they're just, you know, releasing whatever
the latest benchmark that they're best at? Or is this truly, yeah? Yeah, I think we're down in the
West to a three-way race at the frontier between OpenAI, Anthropic, and Google. And I think
those three labs have been pretty good about not bench-maxing or over-focusing on just one benchmark.
They're pretty good generalist models. I think we're seeing an honest-to-goodness
arms race or horse race or rat race, depending on which metaphor you prefer. My friends at the
frontier labs often call it a rat race. And as to the Chinese models, it's interesting. You know,
the aphorism, why do you rob banks? Because that's where the money is. To the earlier point about
why no European models, where's Mistral in all of this, for example. It's because the U.S.
and China are where all the compute is. And ultimately, I think OpenAI's Noam Brown,
who of course is quite famous for having led their reasoning approach,
has recently started almost pondering with a bit of ennui whether the weights actually matter as much as they used to
or whether it's really turning into a race for compute.
In some sense, as inference time reasoning becomes more and more important, his argument, not mine,
but I think it's a credible one, the weights themselves start to become
less important in the same sense that, say, individual units within a transformer-style architecture
become less important as the transformer itself starts to scale. The overall weights for an
entire model may become less important as more and more reasoning gets used. And you see, in effect,
a spacetime transformer that's rolled out over time in reasoning token space. So if that argument
holds, and I think it's a pretty interesting one that I hadn't heard elsewhere before,
that would almost suggest that while at the same time we're seeing a race to the bottom
on, say, per-token intelligence densities between American models and Chinese models
(the American models are still about six months ahead,
and this has been pretty consistent for the past couple of years),
it may not matter in the end.
What may matter in the end, at least according to
the scaling laws we have at the moment, is who has more compute at the end of the day to do more
reasoning. So we're going to see that. We're going to see that in a minute. But, you know,
15 models, you know, over the course of two months is insane. You know, some of these are just
improvements on existing models and some of these are completely new pre-trained models. I think
that difference needs to be pointed out. Salim, any thoughts on this insanity?
You know, just the fact that we have that many releases in eight weeks kind of blows my mind.
the cost of cognition, coordination, execution is all collapsing at the same time.
I mean, I think it's not so much any single breakthrough.
The compression density is crazy.
Well, and the capabilities are mind-blowing.
These are not just like, you know, fake little dot releases that are bench-maxing.
If you use them firsthand and what's really helpful is if you look at our podcast or at any
postings on the internet from three months ago, six months ago, nine months ago, 12 months ago,
and look at the predictions of capabilities.
We're so far ahead of what even the upper bound of predictions would be
in terms of the capabilities as the parameter count grows.
And so, you know, if you extrapolate from there, you know, we're just on this, you know,
knee curve of the acceleration and the singularity.
And raw parameter count and more chain of thought reasoning is just going to push us to,
you know, limits that are way beyond human.
So what does the average person care about?
What does the average person care about this, right?
Like one of my boys says, okay, great, a new release with a new numbers over and over and over again.
At the end of the day, stuff is getting better, it's getting cheaper, it's getting faster.
You know, what does the average user, I mean, do you recommend someone sticking with a particular model?
You know, I'm just going to be on OpenAI.
I'm just going to be on Anthropic.
I'm just going to be on Google.
Any thoughts there?
See, I think the question
itself is a red herring. Why? Because OpenAI bet the company on consumers using all these
reasoning tokens, that a consumer-oriented strategy for all of these trillions of dollars of
CAPEX that they're building out would work. And they've had to pivot rather prominently in
the past few months back to enterprise. So I think the question of what does the average user care,
which I construe as what does the average consumer care, I almost think the market is telling us the
average consumer in the short term isn't even part of the equation anymore. This is really,
the question should be, what does the average enterprise care? Because they're the ones.
I'm asking for our listeners, right? A lot of them are entrepreneurs or general consumers.
I mean, at the end of the day, is it okay for someone, I'm just using ChatGPT, I'm just using,
you know, Gemini 3.1 Pro, I'm just using, you know, the latest version of Anthropic's models.
Is it important for people to be driving to the latest model or is it okay?
Because ultimately, everybody's basically leapfrogging everybody else.
And if you're just a mom, a dad, a student, and maybe an entrepreneur just getting going,
this insanity of 15 models in eight weeks bouncing back and forth, I mean, Dave, you're using,
you know, two, three or four models all the time, right?
Oh, many more now.
And that's the biggest change.
The coordinator model can now manage dozens or hundreds of other models successfully.
And six months ago or three months ago, that wasn't true.
So, you know, for the average consumer, the ability for the stuff to install itself.
Like, you can go into the model now.
You have to use the latest ones.
But it doesn't matter a lot whether you're using Claude 4.7 or GPT 5.5.
You know, just use one of the latest ones.
But ask it to install itself.
Ask it to build something on your laptop for you, and it just works now.
you don't have to understand, you know, the Linux command line, you don't have to understand
any of the underlying infrastructure. It's smart enough now to explain itself to you as it goes.
So I think for the average listener, that's a massive unlock. You know, someone who's never built
software before can just think of something and then create it in an hour. And that just wasn't true,
you know, six months ago.
Hey, everybody. You may not know this, but I've got an incredible research team. And every week,
myself, my research team, study the metatrends that are impacting
the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these metatrend reports I put out once a week enable you to see the future 10 years ahead
of anybody else. If you'd like to get access to the Metatrends newsletter every week, go to
diamandis.com slash metatrends. That's diamandis.com slash metatrends.
Yeah, let's jump into our first model story here, which is Moonshot AI launches Kimi K2.6.
I just downloaded it onto my Mac Studios this weekend, on top of Skippy, who's orchestrated by Opus 4.6.
So, Kimi K2.6, it's a trillion parameter, open-weight, open-source model activates 32 billion of the parameters at a time.
It runs 300 parallel agents.
Very importantly, natively, it can process text, image, and video all at the same time.
It costs 30 times less than the most capable closed models.
Interestingly enough, you know, Moonshot AI didn't get its name from us.
The three founders based in Beijing, their favorite album is Dark Side of the Moon.
And so that's where it came from.
The company's backed by about $4.7 billion in capital from Alibaba, Tencent, and IDG.
And this model, right, if you look at the numbers on the bottom on the benchmarks compared to GPT 5.4, Opus 4.6, and Gemini 3.1 Pro,
So it does amazingly well against all those models, and this one was trained.
They report for a total of $4.6 million compared to hundreds of millions or billions on the other closed source models.
Dave, I mean, I find that amazing.
I mean, almost incredible.
There's so much to say about this.
You know, starting with the fact that, yeah, Alex said a minute ago that the Chinese models are running about six months behind the U.S. models,
But if you look at the benchmarks, you know, this is up there or beating Claude Opus 4.6, which was only three months ago.
That came out in February.
And so that's not a six-month lead.
That's a three-month lead.
And the price performance, you know, most people, when they first start, they don't care too much.
It's cheap.
You know, all AIs are pretty cheap.
But then when you realize that you can run 10 or 100 of them concurrently, you're like, well, this is going to start to add up.
So if you run this on Fireworks AI, it's about one-eighth the cost
of running the Claude API or the OpenAI API.
So, you know, one-eighth is a pretty big price cut.
If you download it and run it like you did with Skippy,
then you're running at about one-thirtieth the cost.
So that's a big, big deal.
And then, of course, the caveat is, as Alex has pointed out, many times,
you're not 100% sure if it's not, you know,
spying or doing code injection.
It's probably not, but you can't guarantee that.
So somebody tells me, this is one-thirtieth the price.
Try it.
You're like, I'm a little suss.
Like, why is it one-thirtieth the price?
But I doubt it's code injecting on you, but you can't be sure.
Whereas if you use Anthropic or OpenAI, it's definitely not code injecting on you.
In fact, it's safeguarded all over the place.
So there's your landscape.
Chaotic, as always.
It's only going to get more chaotic.
Alex, how big a deal is Kimi K2.6?
I think it's helpful for certain enterprise use cases where you want to be able to
self-host the model and you don't want to say use AWS Bedrock, which by the way now hosts
GPT 5.5 in addition to the Opus models. I think it's helpful in that respect. It's helpful if you want
to be able to self-host fine-tuned models for yourself. Ditto with DeepSeek v4. But I think in general,
again, it's a few months behind for other use cases for consumers that want to be able to self-host
for whatever reason, privacy or otherwise, probably very helpful for folks who want to self-host
their own Claude, very helpful. So I think there are many use cases where these typically Chinese
open-weight models like Kimi K2.6 and DeepSeek V4 are very helpful. I do think, however,
they're not at the frontier. And to me, the big headline is that disparity between the American frontier
closed weight and the Chinese frontier open weight seems at least for the moment to be in place.
Yeah, Peter, I think your setup is perfect. It's exactly what I do too. I use Opus 4.7 as my
orchestrator because you want that extra notch of intelligence. And then if you have simple tasks
or just subtasks, you can farm them out and save the money using, you know, Kimi K2.6,
and then if the results coming back don't make perfect sense,
then your orchestrator will tell you, hey, this is garbage.
And so you can actually rely on Opus 4.7 to give you the straight truth
on what the underlying models did for you.
So it's exactly the way you set it up, Peter.
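[Editor's note] The orchestrator/worker setup Dave and Peter describe, a frontier model delegating simple subtasks to a cheaper open-weight model and sanity-checking the results, can be sketched roughly like this. Everything here is a placeholder: `call_model` and the model names stand in for whatever API or local runtime you actually use.

```python
# Hypothetical sketch of the orchestrator/worker pattern discussed above.
# call_model() and the model names are stand-ins, not real APIs.

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a hosted API call or a locally hosted open-weight model.
    return f"[{model}] response to: {prompt}"

def orchestrate(task: str, subtasks: list[str]) -> str:
    # Farm simple subtasks out to a cheaper open-weight worker model...
    drafts = [call_model("cheap-open-weight-model", s) for s in subtasks]
    # ...then have the smarter, pricier orchestrator check and merge them.
    review = (f"Task: {task}\n" + "\n".join(drafts) +
              "\nFlag any draft that doesn't make sense and merge the rest.")
    return call_model("frontier-orchestrator-model", review)

result = orchestrate("summarize the week's models", ["Kimi K2.6", "GPT 5.5"])
```

The design point is the one Dave makes: the expensive model never does the bulk work, it only routes and reviews, so most tokens are bought at the cheap model's price.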
Yeah, Dave, a week ago, you said you moved from 4.7 back to 4.6.
Did you move back to 4.7?
I have both running now. 4.7 is kind of wordy
and sounds kind of PhD-ish, which annoys me sometimes,
and 4.6 is friendlier. But then, you know, it's clear that 4.7's a little smarter.
And so sometimes you just need the right answer no matter what.
And so I actually have both running in parallel agent windows now.
Salim, I'm curious in Guadalajara, Mexico City, you know, in parts of South America, I know in parts of Asia.
What are you hearing about the use of U.S. models versus open-weight, open-source Chinese models?
So I get a mixture of both things.
A bunch of people use the hosted models, the big ones, just because it's easy.
There's a subset of people that use the open source models and the Chinese models,
and they don't really care.
I think they should care at some point that's going to come up.
One question I have for Alex and Dave is,
how do you protect against code or prompt injection in these open-source models?
Is there a way of defending against that?
Because if there is, then there's a huge case for this,
because everybody here is looking for the low-cost approach, right?
But for the most part, I'll be blunt.
The conversation is not around which model, or open source versus closed.
Like, what do we do with AI?
Like, literally at a level of lack of sophistication around this that you would expect.
But the option is also there for startups then to leapfrog
lots of people and build aggressively for the coming madness that's upon us.
Yeah, it's almost an impossible question, Salim, because if you sit on the sideline
and you don't use this stuff aggressively, you fall way, way behind.
Yeah.
But if you start using it aggressively, you're generating thousands or millions of lines of
code before you even know it.
And so then the odds go up, right?
So I think what you're trusting right now is that the guardrails that anthropic and
Open AI put on their models, they're very, very cautious when they're pulling in code, open source, or otherwise.
I mean, almost annoyingly cautious.
So you kind of assume they've done a very, very good job of filtering out, you know, nasty code injection.
But the numbers work against you at scale, you know.
So there's no simple answer.
I mean, you can, when I got into it, I was like, hey, I'm just going to look at the code.
I'm not going to just run it.
I'm going to see what it does.
That's a joke, right?
That's just laughable.
It's generating so quickly now.
that there's no chance you could even scroll through it.
So you have to use AI.
It's like a lot of things, actually.
You have to use AI to protect against AI.
There's no other way to get the scale.
So it's tricky.
I know that wasn't much of an answer, but it's tricky.
You know, one thing I'd love to point out here,
we talk about this on occasion,
but I don't think we've ever really spoken about in detail.
The Kimi K2.6 uses something called a mixture of experts, an MoE,
and it's interesting.
And just to take a moment about this, if in fact you have a trillion parameter model and you ask a question,
it's basically accessing all trillion parameters every time to analyze every token.
And what they did here is they actually created, you know, a set of 30 plus experts.
And so that, you know, some percentage of all the parameters are dedicated to one expert system.
So if you ask a coding question, you know, the orchestrator looks at this and says, okay, this is a coding question.
We're going to send it to, you know, experts number three, seven, and 12 and only uses a portion of the parameters, right?
It only uses, you know, instead of all the experts, some subfraction thereof.
And it saves money and saves time.
And, you know, how many different models are using that right now, Alex?
Sparsity, which is the term of art I think that we're talking about here, is endemic to all frontier models at this point. It's also the basis for the human brain. If we look at the brain, most neurons don't at any given point in time have action potentials that are going in and out. So sparsity is a great way to reduce the memory footprint of models. To my knowledge, all of the frontier models use sparsity one way or another. It's also a good way to, another term of art, regularize
the models. So to make sure that particular weights or parameters in the models aren't overfitting
to the training data, one of the age-old techniques is just blasting away individual weights or
parameters in the neurons, making them disappear entirely as a so-called regularization technique.
So sparsity is everywhere at this point and it's only going to, I think, become more important
with time. One of my holy grails,
as I've mentioned on the pod previously, is a million-parameter or smaller diamond or black hole of a model at the end of the scaling race.
And I think sparsity and cranking the knob on increasing sparsification in these models is one possible path to getting us there.
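[Editor's note] The regularization trick Alex mentions, blasting away individual weights, can be illustrated with a toy magnitude-pruning pass. This is a generic sketch of the idea, not any lab's actual technique; real frontier models use far more sophisticated structured sparsity.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value across the whole matrix.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    pruned = w.copy()
    pruned[np.abs(w) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))
pruned = magnitude_prune(weights, 0.75)  # 12 of the 16 entries become zero
```

Zeroed weights cost nothing to store or multiply, which is the memory-footprint saving Alex describes, and forcing them out also acts as the regularizer he names.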
And just to add something to what Peter said, the MoE innovation, the mixture-of-experts innovation that came from DeepSeek, is actually layer by layer.
So most of these neural nets are about 140 layers deep now,
and it'll route the expert layer by layer.
So it'll say, look, within this layer, I'm just doing basic, you know, image classification.
And this layer, I'm doing deeper thinking.
And this layer I'm doing higher level math, you know, as it moves through the neural net,
it'll actually route to, you know, now I think up to 128 different experts layer by layer.
And so it'll find an optimal pathway through the entire neural net.
On top of that, you can also have dedicated experts like here's a surgeon, here's an artist, here's a coder.
above and beyond that.
But MoE is actually within the neural net, layer by layer.
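[Editor's note] The routing the mates walk through, a gate picking a few experts per token so only a fraction of the parameters activate, can be sketched with toy numbers. Eight experts and top-2 routing here stand in for the real-world figures quoted above (32B of 1T parameters active, up to 128 experts per layer); the weights are random, where a real model learns them.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is just a small weight matrix in this toy.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(DIM, NUM_EXPERTS))  # learned in a real model

def moe_layer(x: np.ndarray) -> np.ndarray:
    # The router scores every expert for this token...
    logits = x @ router
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()
    # ...but only the top-k experts actually run: that's the compute saving.
    top = np.argsort(gates)[-TOP_K:]
    out = np.zeros(DIM)
    for i in top:
        out += gates[i] * (experts[i] @ x)
    return out

token = rng.normal(size=DIM)
y = moe_layer(token)
```

In a full model this happens at every one of the ~140 layers, so, as Dave says, each token traces its own pathway of experts through the network. (Real implementations also renormalize the gate over just the selected experts.)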
All right.
Next story is OpenAI unveiled GPT 5.5.
Literally just seven weeks after GPT 5.4.
Greg Brockman calls it a new class of intelligence.
It's natively omnimodal,
able to process text and audio and video and images,
all in a single unified end-to-end architecture.
It has a 37-point increase, 5.5 over 5.4, in long-context reasoning, which means
5.4 and 5.5 both have million-token windows, but 5.5 can actually remember the beginning
of the million tokens and provide, you know, complete context across the entire thing.
Token efficiency, 40% fewer tokens with the same latency, and I love this.
Hallucination is down 60%, 5.5 versus 5.4.
Let's go to our resident genius.
Alex, what do you make of 5.5?
How important is it?
I think it's very important, both intrinsically and also relative to 5.4.
So I want to highlight two key stats here.
The first is the leap from GPT 5.4 Thinking to 5.5 Thinking.
That's probably the biggest leap overall on Terminal Bench 2.0 specifically.
So one way to interpret this: Terminal Bench is a benchmark
that's focused on the ability to agentically operate from a command-line terminal.
Where is that useful?
For Codex and for Claude Code type environments.
So one way to construe this huge leap, which is larger than most or all of the other leaps
that we see in terms of other benchmarks, is that 5.5 is being very seriously focused,
bench maxed, if you like.
Although I really, having used it, don't think it's narrowly overfitting just to making
Codex a better Claude Code competitor,
but it very much feels like a release
that's intended to strengthen OpenAI's Codex.
That's Thought One.
Thought two is my favorite benchmark among all of these
is Frontier Math Tier 4.
Frontier Math Tier 4, which I think we even had a New Year's bet about
that we're going to have to revisit sometime later this year,
is one of the best proxies for the ability for AIs to solve
professional level research problems in math. And what do we see? We see from GPT 5.4 Pro to 5.5
pro approximately a 2% leap in approximately the last two months. What does that tell me?
That tells me that we're seeing now approximately 1% gains per month in research level math
coming from frontier AIs. And we're getting closer to approximately half of all of the
frontier math tier four problems getting solved.
So you can extrapolate this and realize if the present rate just stays the same, which I guarantee
it won't, it's going to accelerate.
But even just at the present pace, we're talking about essentially all Frontier Math Tier 4,
all professional research grade math problems being solved in the next four or five years.
So math is cooked.
I'll say it second time.
Math is cooked. A bunch of other things are cooked as well. But things are moving so quickly now
that on a month by month basis, we're able to see the hardest of these benchmarks creep up 1%
per month. So not long now. It's worth pointing out that the API pricing on 5.5 is twice that of
5.4. So it's $5 per million input tokens versus $2.50 for 5.4, and 30 bucks per million output tokens
versus 15. I like the simplicity of that pricing. Dave, have you been playing with this at all?
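[Editor's note] Taking those quoted rates at face value, the per-call arithmetic is straightforward. A quick sketch, with token counts made up purely for illustration:

```python
# Cost of a single call at the quoted per-million-token rates.
def call_cost(in_tokens: int, out_tokens: int,
              in_rate: float, out_rate: float) -> float:
    return in_tokens / 1e6 * in_rate + out_tokens / 1e6 * out_rate

# A hypothetical call: 100k tokens in, 10k tokens out.
cost_55 = call_cost(100_000, 10_000, in_rate=5.0, out_rate=30.0)  # GPT 5.5
cost_54 = call_cost(100_000, 10_000, in_rate=2.5, out_rate=15.0)  # GPT 5.4
# At these rates, 5.5 is exactly twice 5.4: $0.80 versus $0.40 per call.
```

The doubling holds for any mix of input and output tokens, since both rates doubled together, which is presumably the pricing simplicity being praised.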
Yeah, absolutely. And I think what Alex said earlier in the pod is really, really important and insightful.
Like, Noam Brown is saying, wow, maybe the weights don't matter so much as this chain of thought
process is just way ahead of any expectations on how intelligent it can get.
From a user's point of view, that first benchmark, Terminal Bench, if you ask it to do something
complicated, like configure an entire system for you, download some software, integrate it,
make it all work, you know, connect it to my outlook, connect it to my whatever. It just works.
And that exactly ties to that first benchmark. It just flat out works. And so it just, it feels like
this incredibly capable, brilliant assistant, no matter what you're trying to do because of that
first benchmark. Then the last benchmark, the frontier math, or second of the last, Frontier Math,
you know, Demis Hassabis came out and said, yeah, I think it's a kind of a coin flip.
This was on Alex's innermost loop.
You know, it's kind of a coin flip now on whether just the existing architecture scaled up solves everything.
Yeah.
I think coin flip, he's moved a long way.
He has.
You know, we need new breakthroughs, too.
We're out of breakthroughs, apparently.
I remember 10 plus years ago when I was chatting with Demis, he used to say there were five breakthroughs
remaining between where we were then and AGI as he construed it.
Now we're out of them. It's half a breakthrough or zero breakthroughs at this point.
You know, next week we're going to be on with Ray Kurzweil again.
We're doing that May 4th event for the launch of We Are As Gods?
And I'm curious, we should ask him, you know, what does he think?
What's required to get to true AGI or ASI?
Are we just going to extrapolate what we're doing?
Or do we need breakthroughs?
I think that that requirement's been falling.
It is a bitter lesson indeed.
I mean, it's starting to feel a lot like... Actually, Alex, you know what would be great is to put together a chart of Demis's number of breakthroughs, because at Davos it was down to two.
Now it's down to 50-50 that it's zero.
But you mentioned five.
That was what, maybe a year and a half ago.
So yeah, that'd be a really cool chart.
The five number from him was when I was chatting with him in, this is 10 plus years ago.
Yeah.
Okay.
Well, you know, some sort of exponential decay of breakthroughs, clearly.
Alex, you said it a little bit earlier.
You know, this is ultimately a compute race.
So let's talk about that.
You know, a couple of stories here around Google Cloud, and Google Cloud is dominating.
So what do we see?
We see Google announcing at Google Cloud next 2026, their major conference.
They unveiled their eighth generation of TPUs, in particular, TPU8T for training and TPU8I for inference.
Right, now we have training and inference chips separately, just like Amazon has their
Trainium chips for training and their Inferentia chips for inference.
These new TPUs are three times faster in training performance, 80% better performance
per dollar.
They're designed to run millions of agents in real time.
So Google is really all in on the agentic era.
Sundar Pichai, the CEO, who I had a chance to spend some time with last weekend,
made it crystal clear. He says over 16 billion tokens per minute are being processed and 75% of Google's code is now written by AI. So fascinating. Dave, what do you make of this?
You know what's surprising to me is that the price performance of the TPUs is landing right on top of NVIDIA, not much different at all, which is surprising because it's a completely different architecture. It uses a systolic array design. I mean, it could not be more different from a GPU under the covers.
but for whatever reason, it's all kind of cancelling out and landing identical,
which is fine from Google's point of view,
because now they have their own total, you know, chip fab through data center, through model solution, yeah.
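For context on that architectural point: a systolic array computes matrix multiplies by streaming operands through a grid of multiply-accumulate cells, each doing one small local operation per cycle. A toy simulation of the idea (purely illustrative; this is not TPU code) might look like:

```python
def systolic_matmul(A, B):
    # Toy output-stationary systolic array: each cell (i, j) holds an
    # accumulator; on each "cycle" k, A values stream across rows and
    # B values stream down columns, and every cell does one local MAC.
    n, m, k_dim = len(A), len(B[0]), len(B)
    acc = [[0] * m for _ in range(n)]
    for k in range(k_dim):                      # one wavefront per cycle
        for i in range(n):
            for j in range(m):
                acc[i][j] += A[i][k] * B[k][j]  # local multiply-accumulate
    return acc

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Real systolic arrays get their efficiency from the data movement pattern, not the math: each value is loaded once and reused as it flows through the grid, rather than being re-fetched from memory for every multiply.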
I'm still, I still believe Google's the winner in the long run here across the board.
I don't know if you agree with that or not.
TerraFab is a big thing.
That's true. I'm sorry. Yes. Okay. I'm thinking of the OpenAI and Anthropic ecosystem.
Yeah, TerraFab. On the other hand, Google owns a material percentage of SpaceX. They do. They do.
I don't know if you saw, there was a tweet out recently about Google's investments that were made.
Yeah. And they just had massive returns on their investments on SpaceX, on Anthropic across the board.
Huge returns. Yeah, I was at a board meeting yesterday, a company that I'm the chairman of that has
massive cash flow and a huge cash balance. And they were like, well, I don't know if a public company can really do seed-stage
investments. I was like, have you looked at Google? They have multiple hundred-billion-
dollar gains on their investments, and they don't even do it for the money. They do it for
the knowledge and for well and strategic relationships, right? Larry and Sergey were just
bonding with Elon, and they said, okay, Google is going to invest a billion dollars, and
now it's worth, you know, God knows how many hundreds of times more. Yeah. It's also just
investing in the future. I remember conversations with Larry and Sergey about the nature
of the frontier. And I think, to their credit, they're investing in the frontier, and SpaceX is part of it.
And also compute. Epoch put out this really, I think, eye-opening stat in the past week that Google now
accounts for approximately a quarter of all of the AI compute on the planet. And I'm sure eighth-gen
TPUs will be part of it. I think it's also worth keeping in mind that the TPUs at this point are being
designed by TPUs. I have a number of friends at Google who are responsible for
designing next-gen TPUs, and they're all just using Google AI to do it.
The recursive self-improvement goes all the way down to the silicon at this point.
Our next story in the Google ecosystem, again, also announced at their large cloud next conference,
is Google commits to 960,000 Nvidia Vera Rubin GPUs for their A5X.
So pretty extraordinary. A5X is Google's new bare-metal virtual machine instance,
delivering 10x lower inference costs and 10x higher token throughput.
Just an interesting FYI: Vera Rubin, for whom these chips are named,
was an American astronomer who discovered the first conclusive evidence of dark matter.
I love the fact that Jensen is naming chips and systems after famous individuals.
Now, why I find this fascinating, and this goes back to the conversation a minute ago,
is that this cloud is two times
bigger than Colossus 2 and 2.4 times bigger than Stargate Abilene. So Google is
winning, at least based on what they're building and plan to build. Again, Dave,
thoughts here. Well, you know, part of the, just to touch on one thing you said there, Peter,
part of the acceleration we're seeing in society as a whole is that all the really, really smart people are working on real tech now.
Yeah, hardware. Hardware and space and medicine.
and like real tech.
And, you know, if you go back to the meta era, the Facebook era, the rewards were all in either,
you know, cheesy consumer experiences or banking.
And doing deep tech was kind of like a way to die poor.
So it's creating a whole new era for society, you know, the post-AI era, we all knew it was
going to be very, very different.
But now the rewards are in actual deep tech that benefits humanity in really big fundamental ways.
But I think if you just counted the number of people that you know that have been pulled
into this vortex, you know, it would have been just a few percent working on world-changing deep tech
real stuff just, you know, 15, 20, 30 years ago. Now it's almost everybody that you know
is getting pulled into like, you know, do something big and world-changing and it's actually
working. And so that's a big change for society. So that's helping accelerate things as well.
All right. Our next story is Anthropic is cutting deals for cash and compute. I mean,
huge amount of capital flying back and forth between the frontier labs and the hyperscalers here.
So Google commits to a $40 billion investment in Anthropic.
So last week, Google committed to a ton of money, $10 billion in cash right now at a $350 billion
valuation.
And note, you know, we talked about this last time.
Anthropic on the secondary markets is now at a trillion dollar valuation.
So this $350 billion is coming in at roughly one third
of what others are paying for it.
And they committed to another $30 billion if Anthropic hits certain performance targets as well.
They're going to be providing five gigawatts of TPU compute committed over five years.
That's the equivalent of literally providing power to three to four million people.
I'm finding this pretty extraordinary.
We're going to see in a moment of conversation where Anthropic has cut deals with Amazon in a similar fashion.
Actually, let me go ahead and hit that, and we'll talk about this cash-for-compute conversation that's going on.
So Amazon and Anthropic are trading cash for compute.
So here's the second deal.
Amazon is investing a total of $33 billion.
They've committed to $25 billion on top of the $8 billion they've already invested.
In return for Amazon's cash, Anthropic is committing to spend $100 billion or more on AWS
over the next decade. Anthropic will run Claude on Amazon's custom Trainium chips,
and Amazon will provide five gigawatts of AI compute capacity for Anthropic.
So, I mean, we're seeing Anthropic becoming beholden to both AWS and Google in a significant fashion.
Gentlemen, thoughts on this one.
Well, it's so funny to me.
Obviously, Anthropic needs much, much more compute and is growing. Oh, actually, a very good friend of ours, Peter,
we've mentioned him on the podcast, but he's an investor in Anthropic.
And he was telling me at the board meeting yesterday, he can figure it out from that comment,
that Anthropic under the covers is thinking they might hit between 40, 50,
up to 70 billion in revenue by the end of the year.
We talked about $100 billion by the end of the year a few pods ago.
But still, I mean, they were at $30 billion last month, doubling, tripling.
It's extraordinary.
And the reason they wouldn't hit those numbers is because they can't get enough compute to
keep up with the demand.
And one of the things was they didn't fully release Mythos because they don't have enough compute
to deal with it, right?
So it's a limited release of the capabilities.
Yeah, and OpenAI cut Sora.
I think one of the reasons is probably compute.
So yeah, there's- Which is energy.
Yeah, which is energy.
And I think that it's so funny to me to see all these deals.
So, okay, so Dario needs compute.
He signs up with Amazon.
You know, Google's already a shareholder in Anthropic.
And now OpenAI is going to be running on GCP and also on Bedrock on Amazon.
So you can get it through, you know, get it through Bedrock.
So everybody's partnering with everybody else, but it's all bottlenecked at TSM.
Like, this is all great.
You can all partner with each other up the yin-yang, but whose chips are actually going to get made, you know?
And you don't see TSMC in any of these podcasts, in any of these deals, any of these meetings.
And you saw Jensen actually recently say he doesn't have any long-term agreement
with TSM. They just kind of make it up as they go. So all of this is bottlenecked. And only
Elon is talking about, look, the fundamental constraint to all of this is the TerraFab. And I already
locked up 16 billion, could be 45 billion of Samsung's capacity. The only three companies in the
world capable of making any of this are Samsung, Intel, and TSM. And that's the actual
bottleneck to all of AI. Only Elon will talk about it.
Alex, is it compute or is it energy at the end of the day right now?
I think they're indistinguishable at this point.
I think permitting for on-site energy is a major limiting factor.
I think it's probably on balance more of a limiting factor at this point,
maybe not a year from now, than TSMC.
But it is a limiting factor.
Having powered land, having data centers that you can put all of these chips into.
Infamously, Microsoft, even in the past few months,
spoke about having lots of GPUs that they'd love to rack-mount in a data center,
but lacking the powered land and lacking the data centers to plug them into.
I think at this moment, energy, at least in the U.S.,
but I agree with Dave that in the medium to long term,
semiconductor fabrication supply chains,
doubly so if there's any geopolitical conflict, are likelier to be a stranglehold
once we solve our energy story.
So let's talk about the not investment advice segment here.
You know, where do you invest your capital?
You know, if compute and energy are the bottleneck, I mean, I'm seeing the energy stocks beginning to fly, right?
A friend of mine just had this IPO of X-energy and it popped like 30% in the first day.
We're seeing Bloom Energy and other energy stocks beginning to, you know, creep up over time.
So, you know, I don't know if you're going to invest in chips.
Do you invest?
We saw Intel pop up and AMD.
I mean, all of these guys, you know, that entire ecosystem of chips and energy,
ultimately, if they're really the constraining part of the innermost loop here,
I think the most, you know, most demand is there.
Any thoughts, Dave?
Oh, so many thoughts.
You could go for an hour on just this topic.
But invest like crazy in anybody who has access to chips and can find a power supply,
that's, you know, pretty straightforward.
There are power supplies everywhere.
All these legacy manufacturing operations, aluminum melting and all that uses a huge amount of electricity.
And swapping it over to data center is a massive increase in the value of that energy supply.
But you have to have a line on the chips.
Then at the kernel level, you know, because the chips are so constrained and the demand is through the roof,
at the kernel level, anyone who's writing software at the kernel level that empowers, you know, AMD chips or, you know, legacy GPUs to participate,
or just makes the inference more efficient on Nvidia chips, those companies are worth a fortune.
So anyone who's building kernel-level software is a brilliant investment.
And then in the vertical use cases, Anthropic rolled out something called skills,
which you should absolutely play with. It's just a way to use the context window more efficiently
by designing skills that the AI can then pull in.
So rather than have to reinvent everything every time,
just build a skill, and then you can call on the skill very efficiently.
So companies are now discovering they can refactor their entire business
or their entire, whatever they do, around 100 or 1,000 different defined skills.
But those skills then become the defendable intellectual property within that vertical domain.
So, you know, any vertical domain where you're racing to build out the entire skill database for that use case,
that's also an unstoppable investment theme right now.
I could go forever.
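To make the skills idea concrete, here's a minimal, hypothetical sketch (the skill names and registry here are invented for illustration; this is not Anthropic's actual skills API): store reusable instructions as named entries and pull only the relevant one into the prompt, so the context window isn't spent re-explaining everything each time.

```python
# Hypothetical skill registry; in practice these would be files the
# model can load on demand rather than a dict.
SKILLS = {
    "invoice-review": "Check totals, tax lines, and PO numbers before approving.",
    "refund-policy": "Refunds are allowed within 30 days with a receipt.",
}

def build_prompt(task: str, skill_name: str) -> str:
    # Only the selected skill's text spends context tokens; the rest
    # of the skill database stays out of the window entirely.
    skill = SKILLS[skill_name]
    return f"Skill instructions:\n{skill}\n\nTask:\n{task}"

prompt = build_prompt("Review invoice #1042", "invoice-review")
print(prompt.splitlines()[0])  # Skill instructions:
```

The design point Dave is gesturing at: the durable asset is the skill database itself, since a competitor would have to rebuild all of those vertical-domain instructions from scratch.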
The other thing that's interesting is that both Google and Amazon are getting their shares in Anthropic at one-third the going rate.
I find that extraordinary, you know, $350 billion valuation versus the trillion-dollar valuation.
Well, it shows you how important the compute is.
I mean, again, you're going to be sold out forever if you can get the compute.
And these hyperscalers are kind of hedging their bets, right?
They're not picking a winner.
they're buying every horse in the race, you know, because this, you know, AGI, ASI race is just
way too important to lose. So they're just investing left, right, and center.
I would also just parse these as the market doing what the market does.
Some of the participants, some of the frontier labs like Anthropic, have an insatiable hunger
for the compute, and they have the revenue generation to generate the demand and sustain the
demand.
And so if you're Anthropic, you're going to go to every possible source and
scale of compute that you can find, whether it's Amazon or whether it's Google or whether it's
other sources, you're just going to go and seek as a hungry customer for compute whatever the
market will provide. I don't think necessarily the story needs to be any more complicated than that.
It turns out the world demands a lot of compute to solve some of these really interesting
problems in code generation and otherwise. And what we're going to see over time is all of that
demand is going to translate into supply. It's going to translate in the short term into what
looks superficially like a bit of a circular economy between, call it, the top 10 or 12 companies
after we see the IPOs of SpaceX and OpenAI and Anthropic. But that's going to diffuse
throughout the economy over the next few years, would be my prediction. People are hungry for
compute. Salim was hungry for bandwidth. Salim, welcome back. I see you're here in this station.
I'm trying to do it from the airport now in a stationary spot, so let's see.
Dude, I'm just going to call you Waldo from now on.
All right, let's move on.
A couple of fun stories.
I'm going to add this segment every time for the podcast, which is what did Claude just kill?
So this is the stock chart for eBay, and this comes out from Anthropic Research.
It says: new Anthropic research, Project Deal.
We created a marketplace for employees in our San Francisco office.
With one big twist: we tasked Claude with buying, selling, and negotiating on our colleagues' behalf,
basically doing what eBay does, and we see a drop in the stock price.
You know, I think this is, you know, eBay's not really dropped anywhere beyond this,
but I think this is going to be more and more common.
Any thoughts, Dave?
Well, I think a lot of this is just, you know, immediate knee-jerk fear reaction,
but then things kind of settle out and you realize, wait, Anthropic is going to build these
kinds of marketplaces because they can, but it's not going to hurt eBay. I think what you're
going to see more and more is AI is growing so quickly that it's going to largely grow around the legacy
economy. So around the banks, around the insurance. It's just going to be its own world.
And it's going to be feeding on itself and building just, you know, colossally large constructs
that some people are not even aware of. And it'll all happen very, very quickly. So I think eBay will
be fine.
Any thoughts here, Salim?
I have a slightly different take.
There are so many places for this, because lots of problems in companies exist because coordination is hard.
And AI makes coordination easy.
And that's going to threaten big chunks of this space: marketplaces, customer support, listing optimization, dispute handling.
There are huge categories of these that will become agentic workflows.
And I think the bigger question about what did the AI just kill is: what workflow category did it just
encompass and automate?
You want to hear something cool, like, related to this?
The Data Center CEO that I met with this morning, you know, we were talking about data centers
going into space because power is basically free.
You know, solar is basically free in space.
He said, data centers, there's power all over the planet that's not tapped, that doesn't
disrupt society at all.
That's not why data centers are going to space.
Data centers are going to space because there's no regulatory authority preventing it.
You try to do anything on the earth.
I mean, that's not true.
You still have to, if you're going to be flying all these data centers and communicating,
you need licensing, you know, domestically and the ITU for bandwidth.
I mean, there are going to be regulatory hurdles that, you know, Elon and Google need to get through,
especially if you're launching 500,000 satellites.
I mean, when you're putting up a debris field like that, there's going to be pushback.
There is going to be pushback.
Yeah, it's interesting.
Can you compare that to the process of doing anything on land?
Sorry, go ahead, Alex.
I'll square the circle here and say, I think in the short term, for sun-synchronous orbit,
yeah, that requires FCC and other approvals. In the long term,
if we start to, say, launch AI data centers from the moon,
and we're building them on the moon, that will probably require fewer approvals,
at least under the current regulatory regime.
I'll take it back.
That's 20 years away to actually get manufacturing on the moon.
I'm talking...
20 years away, Peter?
You know, to get...
Listen, if you look at it deeply,
I mean, I know 20 years away is infinity.
I get that.
But we're talking about, I mean, just to be clear,
the stuff I'm concerned about is the next five years, right?
If you're launching...
We talked to Elon about this.
You know, 500,000, you know, V3...
You know, satellites in a constellation.
there are going to be debris issues.
You know, Elon pushed it off by saying, oh, you know, we'll have superintelligence to figure
that out.
I just, you know, we have this thing where everything looks amazing from far away.
But the reality is by the time it comes closer, there are real issues.
And so it's not going to be just the, you know, the promised land of going to space.
We're going to have challenges going there still.
Yeah, it's interesting how the timelines line up, too.
Because between here and there, there's all kinds of constraints, but between here and there we'll have solved all math and we'll have discovered all kinds of new physics.
And so...
I'm the space cadet.
I'm the super space enthusiast here.
And I can hope for nothing more than that vision to happen.
But it's always, you know, easier on the promised land.
Peter, I'm gobsmacked to hear that you think it's going to be 20 years before we have fabs on the moon.
My goodness.
Fabs on the moon manufacturing and pumping into Earth orbit with mass drivers.
Yeah.
You think that's 20 years away?
Well, okay, maybe 15, but it's not the next five years.
Do I hear 10?
That's hard for me to, it's hard for me.
I guess Optimus robots will improve that.
Demand will improve that.
But, you know, the concern is if you have, I don't know, not an explosion,
but a collision of spacecraft in orbit, generating debris.
We still don't have any mechanism for removing debris from orbit.
And so it's going to be a challenge.
I'll make two dramatic predictions here.
Briefly, your concern is Kessler syndrome?
Kessler syndrome, yes.
It's going to sabotage moon-based fabs?
No, it's going to sabotage the next five years of 500,000 satellites in Earth orbit.
I mean, right now we have 10,000 satellites from Starlink,
which is the most ever pumped into orbit.
And we're talking about 50 times that.
And we're talking about not just the U.S., you know, Amazon's going to do their best, right?
Jeff is not going to stand still while Elon's doing this.
And then you've got Chinese constellations.
So do you double or triple that number of satellites in orbit?
I mean, listen, I can't wait and it's going to have challenges.
Salim, you were going to say.
Yeah, I'll give a couple of thoughts here, just
finger in the air. I think humanoid robots are five to seven years away minimum at mass
scale and widespread adoption, okay? Minimum. And I think that's okay. I agree. I agree. I understand.
And I think a fab lab on the moon and consistently doing fabrication and all that stuff is 15 years
away minimum. So I'll say that. Not that it's not coming. It's just a question of it's a when, not an if.
Oh, my goodness. This is lunacy, utter lunacy here.
Well, we are the Moonshots podcast.
Yes. Can we get back just maybe to Project Deal and Anthropic?
I think we're missing an important point.
Everyone hand-wrings over the latest Anthropic project purportedly sabotaging or killing some SaaS company, but
Anthropic doesn't want to be triggering SaaS-apocalypses left and right.
There's relatively little economic motivation there.
I think if you look for the through line through all of these Anthropic projects or research projects, other than the alignment ones,
all of their projects, corporate strategies, and unhobblings can be explained by a very simple principle.
They're trying to maximize the economic value per token.
That's all that they're trying to do.
I love that.
Claude Code, it turns out, through code gen, is actually quite economically valuable per token.
Turns out, per token, it's more valuable to generate useful working code than, say, to generate video or cat images or whatever other consumer plays OpenAI and some other frontier model providers were chasing.
They've dropped that now.
Everyone's focusing on code gen because on a per token basis, it's so economically valuable.
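A quick back-of-envelope version of that per-token framing, with invented numbers (these are for illustration only, not figures from the episode):

```python
# Hypothetical workloads: revenue attributed to each, and the tokens
# spent generating it. The per-token value gap is the whole point.
workloads = {
    "code generation": {"revenue_usd": 120.0, "tokens": 1_000_000},
    "video frames": {"revenue_usd": 15.0, "tokens": 5_000_000},
}

for name, w in workloads.items():
    per_million = w["revenue_usd"] / (w["tokens"] / 1_000_000)
    print(f"{name}: ${per_million:.2f} per million tokens")

# code generation: $120.00 per million tokens
# video frames: $3.00 per million tokens
```

Under these made-up numbers, a token spent on working code earns 40x what a token spent on video earns, which is the kind of gap Alex is describing when he says labs optimize economic value per token.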
So I would look at projects like Project Deal, running a marketplace, running a business, as Anthropic looking for new ways to increase the per-token
economic value of their output. It's as simple as that. I think it's brilliant, Alex.
Yeah. That's absolutely brilliant. Thank you. To move us along here, we're coming into the
battle season. It's Elon versus Sam and OpenAI. This just got posted today. So today is a very important day
in the AI world. The trial between Elon and Sam and OpenAI begins in the
Oakland Federal Court. The jury selection is happening right now. So,
I just put this up to keep us posted.
We'll be learning a lot.
Of course, you know, discovery is unveiling a lot of text, a lot of emails that I bet,
both Elon and Sam and a lot of other people would rather not have aired in the public.
Any thoughts here, gents?
I think it's sort of sad that it's come to this.
It's going to make, one just remembers the Bill Gates versus Steve Jobs docudramas that were made from the critical Apple versus
Microsoft era. This has, I think, a similar feel to it. It's sort of sad that this ended up
in court versus settling earlier on. But I do think history will probably view
this as sort of an iconic struggle that will get the full Aaron Sorkin, if not similar,
movie treatment. This will be the full Hollywood-type Titanic battle.
Yeah.
Salim, any thoughts here?
How is this playing in Guadalajara?
No recognition or awareness at all, and that's probably a good thing.
This is kind of a soap opera. I'm with Alex on this one.
It's just heavy drama.
We wish it hadn't come to this.
It would have been great to get these guys to settle this thing.
But their positions are hard and baked in, and so here we are.
How does this, how do you unravel the movement from OpenAI
to a for-profit company? I mean, do you back it up to a nonprofit? Then what about all the
capital invested in OpenAI? Does that disappear if they lose the case here? I'd like to do it more.
I mean, I've said, I think on the pod in the past, that if the model of changing nonprofits,
large nonprofits, to public benefit corporations can be scaled, I'd love to do this to a number
of major American research universities. That's not my question. My question is, you know,
what happens to all the capital invested, you know, literally hundreds of billions, $122
billion in the last couple of months. There's so much pressure for this court case not to be
won by Elon. Well, I mean, if you're following the detailed tick-tock of the way that this trial is
being structured, it's being structured in two phases. The first phase is more of deciding
whether the claims that Elon et al. have made are in fact the case, and the second is the equivalent
of, like, an award-type phase deciding what awards, if any, to make, conditioned on the first
phase. But I think there are a number of details in this court case that are notable. One is,
so jury selection. There's been public reporting that already selected members of the jury
are aware of entanglements that Elon's had with the present administration and may view him
negatively as a result. The fact that jury members are being selected,
reportedly, with some political influence seeping in, I think that's very interesting.
I also think it's interesting that the district judge in this case has, again, reportedly
decided that she's going to take the jury outcome as an advisory opinion, but that if there is an award,
she's going to decide ultimately from the bench on the final award.
So there are a lot of nuances here.
Wow. Dave, any thoughts, opinions here?
Yeah, do we ever figure out if we get to see it live play-by-play?
No, it's not being broadcast, but I'm sure there are going to be sort of court reporters giving us a lot of details here.
You can wait for the full Hollywood treatment in a couple years.
By the way, there will be a Hollywood treatment of this, as has happened with every other major saga.
Of course, it may be an AI-generated feature film, but nonetheless.
It will be.
For sure.
Well, I'm surprised how many texts, like personal texts, have already come out.
Yeah.
You know, the emails get discovered right away, and all your email gets thrown out there for the world to read, which is crazy.
But it happens.
But texts traditionally have not been thrown out, but yet we're seeing them all.
So I don't know exactly how that's happening.
But, you know, for Elon to win, he doesn't have to win the case.
He just has to slow down Open AI.
I mean, in the middle of the singularity, if you lose three months, you know.
You're lost.
You're cooked.
All right.
Another fun topic.
A few stories here.
It's about AI surveillance and privacy.
So let's check this out.
OpenAI's Chronicle uses agents to build memories from screenshots.
Sam Altman described this one as telepathy-like.
So Chronicle runs on OpenAI's Codex, where background agents are taking periodic snapshots of
everything on your screen.
The screenshots are sent to OpenAI's servers for processing.
Agents use optical character recognition and visual analysis to extract the context of what you're doing every minute on your screen.
Structured memory files are created and stored locally.
And, you know, we talked about this before, AI monitoring everything.
Ultimately, it's sort of the camel's nose under the tent of being able to replace any worker.
You know, we have significant privacy concerns that come up on this.
And no one's raising that.
I don't know if you guys remember when I was researching this.
So Microsoft had launched something recently called Recall.
It was a product that they put out there.
And then they retracted it because all the cybersecurity people said this is a privacy nightmare.
It's litigation bait.
And they pulled it back.
But when OpenAI announced this product, no one pushed back.
Can I first of all point out what a beautiful double entendre
from Microsoft's crack product marketing department, naming a feature Recall and then recalling it?
Nice.
I think what we're seeing here is one big architectural kludge.
And I think it's going to be kludgy, both from Microsoft's perhaps ill-architected Recall as well as
OpenAI's Chronicle. This wants to be built into the operating system and the hardware. It doesn't
want to be an add-on. I don't think, I'll just speak for myself. I don't want an agent taking constant
screenshots of my desktop, sending it to a server, and then parsing it, sending back results.
This should all be built, Apple style. I would hope that Apple will get its act together in the next few
months and build this into the window manager and the compositor and the operating system. The operating system is
rendering the screen, so why can't the operating system understand what it's rendering?
I mean, this is ambient AI, is the term of art here, where AI is monitoring everything all the
time and enabling you, right? This, in one sense, is what I did this past weekend with my
OpenCLAB with Skippy, where I gave it access to everything, right? Every single Granola note gets put into
memory, every WhatsApp message, every email, every calendar, everything. And it just makes it so much more
useful. And I think something like Chronicle as well would just enable it to be, like Sam said,
telepathy. Well, that's the quandary. I mean, a lot of people who get in trouble with AI, you know,
or they get stuck, it's something they're doing on screen that the AI doesn't have visibility into.
But if you unlock that, the AI can be incredibly helpful, but it's also seeing literally every mouse move.
But when we talk about why our moms are still not using AI, this is a big unlock, a big part. The
voice interface and this are the two big unlocks because it can then say, oh, I see what you're
doing wrong. In fact, let me just do it for you and save you the trouble. And that, you know,
all these configuration screens on any Apple device and the menus are ridiculous now. Like, you know,
the number of layers of configuration you can do. I think there's some crazy stat, like
70 to 80% of all iPhone users never change any defaults. Yeah. It's just too confusing to do
anything. So this is a huge
unlock for all of that. But like you said,
it's hugely intrusive.
Right now, you know, I take screenshots and I
send it to Claude or whomever
and say, hey, can you please help me figure this
out? But this is
going to have
sort of an expert over
your shoulder, always there to support you
if you need it. Well, and most people
when they first start playing with AI, like
Alex's standard first query, you know,
to test a new model is build me
a first person shooter. It's a better
prompting that. Sorry, I'm bastardized.
But people want to
do something visual and graphical to learn
how it all works. And then when it
doesn't work, they want to show the AI
hey, this doesn't
look right to me, fix it. So they
screenshot it, just like Alex, or just like Peter
you just said. They screenshot it,
but here, this is just a much more convenient
way to get video, not just a
screenshot, back into the AI's brain
and say, look, this doesn't look right, fix
it for me. And so you have a much more
fun dialogue with the AI. But you have
to accept that privacy is, you know, being compromised there. I'll take a very different
position here, Peter, on that, which is, I think any loss of privacy here is just due to this being
an architectural atrocity. This wants to be built into an operating system like macOS. It wants to
take advantage of the secure enclave. It wants to have secure hardware that's cryptographically
guaranteeing that as it captures pixels that come out of the compositor and the window manager
and the renderer that all of those are securely handled and kept local.
The reason that this is one big privacy dumpster is because it's not being baked into the hardware
and a local operating system.
But that can be fixed.
It will, and it will be fixed.
And I want that.
You know, I've often said I'm going to give up everything, every piece of detail because I want
my AI systems to be that much more powerful.
Salim, you're back with us.
Talk to me about what do you think about this?
I agree with Alex on this. Two other things, though.
One is that this is going to cause massive privacy issues for workers
worried about Big Brother watching over them.
Already today, there's a crazy statistic that 44% of Gen Z workers are sabotaging AI's
efforts to automate their own work.
They're putting in the wrong data, throwing off the AI training.
It's really crazy what's happening right now in workplaces.
So I think this will just exacerbate it and bring this whole conversation to the front.
Talk about a losing battle.
You're far, far better off getting on the wagon than trying to fight it.
That's such poisonous behavior.
Protect your job.
All right, here's our next story.
Basically, world ID verification, integration into Zoom.
And here it is.
So the backstory, I think that's important here.
So in 2024, an engineering firm called Arup, A-R-U-P, lost $25 million after an employee in Hong Kong
authorized a series of wire transfers
during what appeared to be a routine video call with the company's CFO and several colleagues.
The problem is that everyone on the call except the victim turned out to be an AI-generated deep fake.
We've seen similar attacks in multinational firms in Singapore.
And the impact of this is huge, right?
So what we saw from 2019 to 2023 was $130 million in losses due to deepfakes; in 2024 it was $400 million;
in 2025, last year, it was a billion; and it's projected to reach $40 billion by 2027. And so step in
our friend Sam with his device called the Orb, which takes a photo of the back of your retina, and you
verify on Zoom that you're an actual human. It uses World ID and real-time face authentication
from a selfie as well as video, and it says, yea, verily, this
person is a human. So you get a verified-human badge on your, on your Zoom link.
Did you just say, yea, verily? That's fantastic. We're right back to Shakespeare here. That's awesome.
Yea, verily, you're a human.
I love it. You still have to go, you still have to go and actually scan your eyeball in one of these orbs.
Has anybody done it? Have you guys done this yet?
No, no. Apparently it's bouncing all over Africa. People are scanning away, but I haven't done it.
but I love it because, you know, I was on, I don't know if I told you, Peter, but I was on stage here at a company-wide meeting, and we took a little five-minute break in the middle, and our controller came up to me and said, Dave, I'm so sorry, I only got half of those wire transfers to China out. I'll get the other out right away.
Seriously?
What are you talking about?
And so I got back on stage, and I'm like, I wonder what she was talking about.
And so then the whole second half of the company meeting in the back of my mind, I'm like, wait a minute.
So I got off.
When I got off, she said, okay, I got $300,000 out.
and I'm like, what are you doing?
And she's like, well, you told me it was an emergency
and we got to get the money to China right away.
Why would we be wiring money to China?
I don't understand.
So anyway, only about 75,000 got across the border.
We never got that back.
For the rest, the FBI got into it right away.
But I'm like, man, digital transfers like this,
you know, everything should be logged anyway.
I really feel like the digital fraud world is going to get solved.
and this is a big part of it.
But everything should be logged all the time.
It shouldn't be that hard to deal with digital stuff.
I'm much more worried about chemical, biological,
stuff than I am about digital stuff
because I think we're going to get it fixed
and this is part of it.
Alex, any thoughts here?
This is Minority Report.
This is the sci-fi future that we're catching up with.
Apple, with its face ID, was focused on the face,
not on the retina, but if you remember,
the Tom Cruise, Steven Spielberg Minority Report Division, this is it. I think it's been interesting
to watch as world evolved from World Coin, and it's been interesting to watch as the company
bounced back and forth between more crypto-focused and the economics of it versus the
identification of human as a human side of it. But it seems, you know, from a distance, like the
the human identity verification side is ultimately the bigger seller than the crypto side.
And to the extent that's the case as resident crypto bear, I'm very supportive.
We will have that debate, Salim.
Don't worry about it.
There's a wild irony here that the more AI scales, the more valuable verified human identity
becomes.
This is kind of interesting.
Yeah.
So here's this next story that's related.
So Grok creates a realistic AI Frenchwoman with a reflective ID.
I'm going to play this little video here and take a look at it very carefully as she holds
her driver's license up to the camera.
Look how beautiful and real this looks.
Thank you.
All the world.
I just want to receive my new card of identity.
Look, she's official.
It's super well-faited.
I'm so much content.
So this was posted and it went viral by this gentleman, Dr. David Lutske.
He says, this AI Frenchwoman was created by Grok, complete with a perfectly reflective ID. A few more months, and video ID verification may no longer be reliable.
So, I mean, how many times have you taken a picture of your license or your passport and uploaded it?
It is going to become more and more difficult.
We're going to have white hat, black hat, competitions up the wazoo here.
Alex?
Well, I would maybe just comment the IDs themselves.
should be verifiable with a centralized database.
That's how you can maintain a single source of truth.
And whether people are flashing IDs or not may be less of an issue.
Centralized database, not a blockchain.
But good one, Peter, good one.
I'm just poking you, buddy.
I'm just poking you.
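Alex's centralized-database point can be sketched in a few lines. This is a toy illustration only; the registry, its fields, and the ID numbers below are all hypothetical.

```python
# Sketch of the idea: rather than trusting the pixels of an ID flashed on
# camera, verify the document number against a single authoritative registry.
# Registry contents and field names here are made up for illustration.
ID_REGISTRY = {
    "FR-2026-0042": {"name": "Claire Dubois", "expires": "2031-05-01"},
}

def verify_id(id_number: str, claimed_name: str) -> bool:
    """Return True only if the ID exists in the registry and the name matches."""
    record = ID_REGISTRY.get(id_number)
    return record is not None and record["name"] == claimed_name

print(verify_id("FR-2026-0042", "Claire Dubois"))  # True: record matches
print(verify_id("FR-2026-9999", "Claire Dubois"))  # False: a fabricated ID has no record
```

A deepfaked ID can look perfect on video, but it can't conjure a row in the authoritative database.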
I also think, you know, there are so many other technologies that we have to bring to bear.
We can do hardware-level cryptography, for example, chain of custody for video.
It's not that as a civilization we lack the technologies to ensure that any video or images
actually originated from the real world without tampering.
It's just that we lack the demand for it right now.
And I would predict that if ever the situation of deep faking gets so bad that it's causing
real problems at a societal level, that'll just unlock all of these technological solutions,
including hardware-level crypto for cameras, cryptography, not cryptocurrencies,
that the market will speak for itself and we'll get all those tech.
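A toy sketch of the chain-of-custody idea Alex describes: hash-chain the frames and sign the chain tip with a device key. In real provenance designs (e.g. C2PA-style systems) the key lives in secure hardware and asymmetric signatures are used; the symmetric HMAC and hard-coded key below are simplifications for illustration.

```python
import hashlib
import hmac

# Hypothetical: in a real camera this key would live in a secure enclave.
DEVICE_KEY = b"secret-key-burned-into-camera-hardware"

def sign_frames(frames):
    """Hash-chain each frame to its predecessor, then sign the chain tip."""
    chain = hashlib.sha256(b"genesis").digest()
    for frame in frames:
        chain = hashlib.sha256(chain + frame).digest()
    return hmac.new(DEVICE_KEY, chain, hashlib.sha256).hexdigest()

def verify_frames(frames, signature):
    """Any edited, dropped, or reordered frame changes the chain and fails."""
    return hmac.compare_digest(sign_frames(frames), signature)

frames = [b"frame-0", b"frame-1", b"frame-2"]
sig = sign_frames(frames)
print(verify_frames(frames, sig))                                    # True
print(verify_frames([b"frame-0", b"DEEPFAKE", b"frame-2"], sig))     # False
```

The point of the hash chain is that tampering anywhere in the video invalidates everything downstream, so provenance is all-or-nothing.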
I'm still waiting for the laws to come out that require all, you know, Grok and every other
video generation to really identify it as AI generated.
It's not law yet.
I covered this in my newsletter; there is a bipartisan bill right now
working its way through the House that will cover elements of deepfake fingerprinting like
that.
Yeah, yeah.
All right, let's move ourselves along here.
We're going to talk about the economic impact of AI.
There's a lot going on.
Token maxing, word of the year.
So this is from a report from 404 Media.co,
startup CEOs who are token maxing are bragging
that they are spending more money on AI compute
than it would cost to hire human workers.
Astronomical AI bills are now in a certain corner of the tech world,
supposed to be the marker of growth and success.
Look how much I'm spending on my tokens, everybody.
You should invest in me so I can spend more on tokens.
Dave.
No, this is a warped story.
This is a great thing.
The way you get left behind is by not trying.
The worst thing you can do right now is not get in the race, not play with AI, not try.
And token maxing is fine.
Like, you know, a CEO that's proud of the fact that they're consuming a ton of tokens,
you can come and optimize it later in the year.
But get every one of your people on their AI platform like now, like yesterday,
and go ahead and start burning the tokens,
and then you'll have no trouble making it more efficient later
if you get in the game now.
So I think it's great when a startup CEO says,
I burned three million of venture money on compute.
Fine, you're learning a ton along the way.
And, you know, nobody incinerates money for very long.
you know, they're not that irrational.
So this is just sort of the backlash story.
What was the Jensen factor?
It was like half your salary in tokens?
I'm saying, yeah, I'm saying full send.
Like I'm telling everybody by end of year, so you have, you know, you have nine months.
50-50 is a good target, half payroll, half token use.
And then again, you're not going to have any trouble optimizing it.
You know, the token use is effectively about a 10x force multiplier.
So if you're at one-to-one, it's like I've got one human and ten
AI equivalents in my bucket of endeavor.
So I'm actually underinvested in tokens at that point relative to human salary.
So one-to-one is a better target, I think.
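Dave's arithmetic can be written down directly. A minimal sketch; the 10x force multiplier is his stated assumption, not a measured constant:

```python
def ai_leverage(payroll_dollars: float, token_dollars: float, multiplier: float = 10.0) -> float:
    """Effective AI-to-human output ratio, assuming Dave's claim that a dollar
    of tokens buys roughly 10x the output of a dollar of salary."""
    return (token_dollars * multiplier) / payroll_dollars

# At the 50-50 target (half payroll, half tokens), each human-dollar of output
# is matched by roughly ten AI-dollars of equivalent output:
print(ai_leverage(1_000_000, 1_000_000))  # 10.0
print(ai_leverage(1_000_000, 500_000))    # 5.0
```

Under this assumption, even a one-to-one spend ratio leaves you "underinvested" in tokens, which is the point Dave is making.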
Salim, are you doing that?
Well, I think the bigger, the more healthy question is, what's the ratio of tokens to
reducing iterations and maximizing efficiency rather than just a raw spend?
I think for now, raw spend is fine, but that's kind of a vanity metric,
right? You're better off looking at to what extent you can compress
iteration cycles? That'll be where it'll end up.
It's what Alex, you said earlier. It's dollars per token.
Exactly, economic value per token. I think it's a great frame.
If you ask a great, great salesperson, how many miles did you fly this year?
That's a terrible metric of sales productivity, but if the answer is zero, it tells you it's
a bad salesperson. Like, I think it's great when a salesperson said, oh, I had a million-mile
year last year and they're proud of it. Like, that's great. It's not the right metric, but it tells you
they're like, they're proud of what they do. And token maxing is a lot like that, I think.
Alex, close us out on this one. Yeah, I've seen a variety of asset allocations in recent months
between humans and AIs. I think tokens to humans is one interesting way of framing that. A pessimist
will look at this and say, this is replacement theory. This is humans being replaced by AIs, how awful.
An optimist will look at this and say how incredible we're empowering fewer people to do more and achieving higher per capita productivity within an organization.
What I don't hear very many people asking is, where does this end?
So right now I see asset allocations, humans to AIs, or at least human labor versus AI,
ranging from one-to-one to one-to-two; at some of the frontier labs
it's an even more asymmetric ratio.
Question in my mind is, is there any stationary endpoint?
Is there a fixed point as this evolves?
I tend to think it's going to trend towards one to infinity effectively, that as we start
to phase humans out of the service labor force, we're going to see all tokens and no humans.
It has to.
no other way around it. Capitalism will demand it. Totally agree. Until the humans merge with the
tokens, at least. Tokenpreneurs. All right, Salim, this was your story. So the UAE launches
agentic AI government models. This is from Sheikh Mohammed, the prime minister of UAE, the ruler of Dubai.
He says the UAE is launching a new government model. Within two years, fifty percent of
government sectors, all sectors, all service
operations will be run on agentic AI.
The UAE will be the first government globally to operate at this scale of autonomy.
Salim, brief us on this one.
Yeah, so I did a talk for his highness three, four years ago,
and talked through where this is going.
And, you know, Minister Al Olama, the Minister of AI,
is a good friend of both of us, Peter,
yours and mine.
And they are going full speed on this.
I've got to give them massive credit.
This is the benefits of the authority you can wield when you have a benevolent dictatorship.
You can just get it done.
And when you have that, you have to make sure that whoever is in charge is doing the right things for the country.
And the ethos here is 100% alignment.
And they are going at a massive speed on this.
Just to give you an example, I was given a golden visa, right?
And I was asked to be the test case.
And the thing was, could you get a golden visa authorized and issued within five hours?
And they were freaking out going, you know, Singapore takes five days.
And his highness said, okay, do it in five hours.
And they were freaking out, but they got it done.
And so there's an ability to cut through legacy thinking in a very powerful way.
And this is such a massive and telling advantage.
We're actually working with a few of their folks in the prime minister's office.
on this. And so we're very, very excited about where this goes.
There's another quote from Sheikh Mohammed. He says, quote, AI is no longer a tool. It analyzes,
decides, executes, and improves in real time. It will become our executive partner in enhancing
services, accelerating decisions, and raising efficiency. So, I mean, you can do this in an absolute
monarchy. You can move this fast. I mean, what's shocking about this story is the speed at which it's
moving, right? There's no parliamentary approval, no public debate or consultation.
And the question is, you know, can Western democracies even keep up? I mean, you're going to see
this in probably Saudi, maybe in Singapore, other Middle Eastern nations. Can we see anything like
this in the U.S.? Actually, yes, you can, and I think we will. You know, I tell the story of how
it used to take six months to get approval for a wind turbine in, I think it was, Colorado, one of the
Western states. And then they finally just got together and mapped all the power lines and water
mains and flight paths on a GIS, plotted on Google Maps and made it available. And now it takes
like 30 seconds, right, to get approval because it knows where everything is. It doesn't need to take
six months. And I think there's the economic impetus of this. This is the basis where I think
AI can make the biggest and most incredible difference because in prescriptive workflows,
you can absolutely completely automate,
and almost all of government,
certainly implementation or policy enforcement,
is prescriptive workflows.
We know exactly the steps to renew your driver's license.
We know exactly what needs to take place.
So there's no reason why that can't be handled automatically
with AI in the very near future.
Step one, you know, give a person a super frustrating experience.
Step two, make them wait in line longer than they need to.
Yes.
Anyway.
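Dave's wind-turbine example is a prescriptive workflow reduced to a spatial lookup: once the constraints are mapped, approval is a constant-time query. A toy sketch, with made-up coordinates, buffer distance, and a crude planar distance approximation:

```python
# Hypothetical siting check: approve a site if it sits far enough from every
# mapped constraint (power lines, water mains, flight paths). All numbers
# below are illustrative, not real regulatory thresholds.
def far_enough(site, hazards, min_km=1.0):
    def dist_km(a, b):
        # Crude planar approximation (~111 km per degree); fine for a sketch.
        return (((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5) * 111

    return all(dist_km(site, h) >= min_km for h in hazards)

power_lines_and_flight_paths = [(39.70, -104.90), (39.80, -105.10)]
print(far_enough((39.75, -105.00), power_lines_and_flight_paths))  # True
print(far_enough((39.70, -104.90), power_lines_and_flight_paths))  # False
```

The six-month review and the 30-second one answer the same question; the difference is whether the constraints were digitized up front.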
Dave, do you want to jump on on this story?
Yeah, I don't think the U.S. has ever copied a good idea back from another country since the American Revolution.
You know, we stole the British legal system, but since then, I don't think there's been anything, anything, but this is the opportunity.
Well, I mean, look, you're exactly right.
You know, a monarchy can move very, very quickly.
The rate at which things need to be regulated and new services need to be rolled out is way, way, way, way faster than any government in history has ever run before.
So only AI is going to be able to do it.
So if we get a great system together in the UAE, we're inevitably going to want to copy it back to the U.S.
I think Peter asked the right question, though, is the U.S. ever going to, like the way Congress works,
are we ever going to take a good idea and bring it back in?
Yeah, I'd bet against that.
But, you know, it's the right thing to do.
Weirdly, on this one, I'm more optimistic than you guys.
All right.
Let's move on.
We're going to have some fun here in the biomedical space.
So there's a new wave of biomedical innovation that's coming.
And, you know, I want this segment here to give people hope.
We talk about longevity, escape velocity on this pod.
We talk about the health span revolution.
Well, it's happening.
You know, I was with Demis last Saturday at the Breakthrough Awards talking to him.
And he's absolutely convinced that we're going to cure cancer and solve all disease
inside of the next, you know, five to ten years, hopefully on the five-year side.
So the first story here comes out of OpenAI: OpenAI releases ChatGPT for clinicians.
So they just gave all U.S. clinicians, that's physicians, nurses, physician assistants,
a free AI co-pilot. And this co-pilot outperforms all human doctors. So they have a
HealthBench benchmark they use. It scored 59 versus 43.7 for human clinicians,
which is pretty extraordinary. They validated this on 700,000 model responses, and they got a 99.6%
accuracy using their physicians evaluating the AI versus human responses. And pretty extraordinary,
something that will up-level, I think, medicine nationwide. And, you know, from my standpoint,
I've been saying this for a while, I think it's going to become malpractice to diagnose a patient
without AI in the loop.
There is so much going on that no human doctor can possibly, you know, understand it.
You know, at Fountain Life, we upload 200 gigabits of data about you.
And across your genome, full imaging, full, you know, microbiome, metabolome, you know,
140 blood biomarkers.
Humans can't analyze all that, but AIs can.
So, gents, any thoughts on this?
Alex, do you want to weigh in?
Yeah, I'll chime in and say the professions
are cooked. This was a widely expected release. This wasn't a surprise. Those of you watching early
releases, leaks out of OpenAI saw this coming months in advance. You can even know from those leaks
what the next one to drop is, what the next profession. It's law. There's also one coming for
management consulting and financial work. OpenAI, thanks to GDPval, has in some sense mapped out all of the
knowledge work verticals and is in a good position, thanks to their own internal and now
external benchmarking, to know the relative strengths of their model as appropriately fine-tuned
or post-trained for different verticals. So I would expect to see many, many more of these
ChatGPT-for-X for different verticals. In the case of clinicians, thanks to OpenEvidence
and work by Epic and the likes of UpToDate and other clinical AIs, this is already a somewhat
crowded market that OpenAI is coming into. If I were OpenAI, I would release this sort of product
more as a reference design and a way to ensure that capabilities that are built into the underlying
models and then post-trained via a variety of e-vals are broadly available and that OpenAI maintains its
status as a favored foundation model for clinical and biological work. Maybe they'll try to monetize
this as best they can. Right now, it's available for free. But I tend to think
it's worth more to OpenAI as a distribution channel for medical knowledge and one that
they can build on. Open AI has released a variety of statistics over the past year for how many
people are self-diagnosing or otherwise trying to treat themselves using ChatGPT.
And I think offering a standard regulatory compliant channel for that is a very clever way to then
do an upsell pitch to biomedical enterprises and
life sciences in general, which is probably where the real money is.
It's also a data aggregation strategy, right?
I mean, OpenAI is going to be getting a huge amount of data,
far more verified than I feel this way or I think I might have this,
you know, bringing in a million plus clinicians into the loop.
The other thing that's worth saying here is that, you know,
at least current estimates are that we're going to have
a shortage of 86,000 physicians in the next 10 years.
But it's going to be interesting, right?
I have two nieces that have gone through medical school, my sister, myself, you know, lots of friends.
And you're spending literally between college, medical school, and postgraduate training and whatever field you're going into,
you're spending well over a decade and, you know, half a million, close to a million dollars to get this degree.
And will you even need it?
Is a medical doctor going to need to be in the loop or is it a nurse plus an AI?
that's going to be giving us all our medical advice, our diagnostics and our therapeutics with a
optimist robot giving your surgery. There's a lot of change coming here.
Yeah, a huge amount of change. And also, it'll be a great case study.
And like, we're not about replacing doctors here. We're about detecting thousands of things that
were not previously detected and cutting them off early and extending longevity and making life better.
And, you know, it's not a given to me at all that the number of doctors goes down.
It's just that the number of things we want to do goes up 100 or a thousand X.
Are you going to spend that much money to go through medical school and get this little profession
when the AI is doing the diagnosing at the end of the day?
Of course this is about replacing doctors.
I mean, let's call a spade a spade.
Of course, when fully developed, this and comparable solutions are about automating away
medical practice.
How could they not be?
And also, by the way, nursing and also by the way, the HMOs.
and drug design, OpenAI and other Frontier Labs are all pursuing drug design and drug delivery.
Of course, it's about the full picture of if you're going to solve medicine or you're just going to leave millions of human doctors practicing as sort of meat puppets for the AI.
No, this is going to be the end-to-end solution.
We're just seeing the beginning of it.
I agree.
A couple comments here.
One is, you know, in an ideal world, the doctors getting a cognitive exoskeleton with all of this, right?
you get this amazing capability to expand your own intuitive thinking.
But Alex is completely right.
But on the other hand, you're going to get a huge backlash here.
This is a very regulated industry.
Remember a few years ago, Texas passed a law banning telemedicine.
Just outright banning it because, you know, for every spot on my hand,
I must have to go to a physical doctor.
I can never do that over video.
So the immune system response is going to be very, very fierce.
I expect to see this battle play out heavily over the next few years
because there are vested interests up the yin-yang,
and healthcare has the third worst immune system ever,
behind religion and academia.
I'm not so sure that the immune response,
if you look at what happened with the broad transition
to electronic medical records like Epic-based systems, for example,
every clinician that you speak with will complain about Epic,
they'll complain about EMRs,
how much EMRs distract from direct interaction with the patient,
all of that, and yet every major medical system
is either completed or is in the late stages of, at least in this country, their EMR transition.
If they can't resist EMRs, how are they going to resist strong AI that
outperforms humans?
Wait, hold on, hold on, hold on.
EMRs are kind of an add-on helpful aid because it saves you in documenting the process, etc.
Get rid of paperwork.
The clinicians, the clinicians hate the EMRs.
They hate the interface.
They hate the process.
Of course, but they're going to hate this.
10 times more because it's a direct replacement for the cognitive ability that they've trained for 10 years to do.
So just my prediction is huge regulatory and immune system backlash on this one.
Yeah, and my prediction, AI labs have been using healthcare as the reason why they can't slow down,
as well as the fight with China, right?
If we slow this down, we're going to lose lives, has been sort of the heralding call.
Totally, totally agreed. Everything Alex said earlier about this needing
to be a wholesale replacement of the medical system is absolutely correct.
But the path here is littered with stones and speed bumps.
For the record, and this is sort of an interesting...
Go ahead, finish up, Alex. You're good.
This is an interesting micro-debate for the record.
My intuition, and I interact with a lot of clinicians, is the exact opposite.
The clinicians hate the EMRs, but they love the AI that helps them do a better job of what they want to do.
And there may be an extent to which AI interfaces like this end up being framed as the solution to all of their EMR woes.
Until it takes their job.
Well, of course.
That's the way this works.
Let's move this along here.
Our second story here is AI to reduce wasted donor hearts.
And I love, you know, I just want to show a number of stories here how AI is going to be interfacing and changing the medical practice.
So I don't know if you guys are an organ donor.
I am.
Anybody else?
Yeah.
So currently there's 4,000 patients who need a cardiac transplant today.
There's 103,000 who need some type of a transplant, kidney, liver, lung.
And when an organ donor is on the table, end of life,
and the physician has to analyze the organs and decide whether they're viable for transplant,
you know, you've got like 15 minutes, typically at 2 o'clock in the morning,
to make that decision.
And so in the heart world,
only a third of the hearts
are ever actually chosen for transplantation.
So here comes something called Top Heart.
Yeah, just a third make it out the door.
So here comes something called Top Heart
from NYU and Stanford.
And Top Heart is able to look at 20 different variables, right?
Typically, the physician is looking at how old is this person,
do they have a drug history if they know,
and looking at coronary artery disease to say,
should we ship this off?
Their goal, by looking at 20 different variables, is, you know, to give that surgeon at 2 a.m.
a second opinion, and they believe they can get an additional 500 hearts into the organ
replacement ecosystem.
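For illustration only, a 20-variable second opinion like TOP Heart's could look roughly like a multi-variable logistic score. The actual variables and weights are not in the transcript; every name, coefficient, and threshold below is hypothetical.

```python
import math

# Hypothetical donor-heart viability model. The real TOP Heart variables and
# weights are not public here; these four (of a notional 20) are illustrative.
WEIGHTS = {
    "donor_age": -0.04,         # older donors score lower
    "ejection_fraction": 0.08,  # stronger hearts score higher
    "ischemic_time_hr": -0.30,  # time without blood flow hurts viability
    "coronary_disease": -1.2,   # known disease is a strong negative
}
BIAS = -1.5

def viability_score(donor: dict) -> float:
    """Logistic score in (0, 1): a probability-like second opinion for the surgeon."""
    z = BIAS + sum(WEIGHTS[k] * donor[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

young_donor = {"donor_age": 34, "ejection_fraction": 60,
               "ischemic_time_hr": 2.5, "coronary_disease": 0}
print(round(viability_score(young_donor), 2))  # the 2 a.m. second opinion
```

The value of a model like this isn't that it outsmarts the surgeon; it's that it weighs twenty variables consistently at 2 a.m. when a human is weighing three.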
You know, this is on top of the fact that there's an entire, you know, sort of synthetic biology world going on right now to provide an abundance of organs from bioprinting and xenotransplantation, you know, pig organs with
the antigens being replaced by human antigens.
This is the work of George Church at eGenesis and Martine Rothblatt at United Therapeutics.
So, you know, this is an abundant story of going from a limited number of organs to an abundant number of organs.
Alex, you tracking this as well?
I'm tracking the space broadly.
There are other advances as well, like trying to create a national market for organ donation versus a bunch of state markets.
that would be greatly enhanced with improvements in vitrification and cryopreservation.
I think it's good that there is a vibrant and growing distribution channel for donor hearts.
I think that's great, but I also think it's very painful that the need for one human to die,
or at least that one human dies and donates a heart to another.
human, that's such a zero-sum type situation. It's painful to think about. And I, while it's,
you know, it's great on margin to have more efficient ways of distributing donated organs,
I really, really would like us to get as soon as possible to a situation where donor organs
are completely unnecessary. Yeah. And we will. I think eGenesis, and Dean Kamen's company
doing advanced organ generation,
they go from your skin cell to a pluripotent stem cell
to regrowing your heart, liver, lung, or kidney.
A lot of this is gonna be up and operating
by the end of this decade, hopefully sooner.
Can't come soon enough.
Yeah, and of course, as we have autonomous cars
causing fewer accidents, you know,
the supply of donor organs is going to keep shrinking,
though motorcycle accidents are probably still the
number one reason we get organs donated.
Let's move on to our next story, and this goes in line with the fact that we are beginning,
we're at the beginning of the slaying of cancer, right?
So this is a great story.
Pancreatic cancer: mRNA vaccines show lasting results in trials.
So I don't know if people have been tracking this, but we now have these cancer vaccines.
And this is using mRNA. We used it as a COVID vaccine.
This is actually the ability to create an mRNA that activates
your immune system against the cancer that you have.
your immune system against the cancer that you have.
So there are more than 120 of these trials going on
against lung, breast, prostate, melanoma, pancreatic,
and brain cancer.
In this particular case, a five-year survival rate
for pancreatic cancer has just gone through the roof.
Historically, it's 13%.
If you have pancreatic cancer, it's a death sentence.
Only 13% of people are able to survive that.
So in this report, eight out of 16 patients
generated a strong immune response to the vaccine,
and of those responders, 87.5% were still alive after six years.
So how does this work?
You have a surgery to remove as much of the tumor as you can.
You sample the tumor.
It's sequenced.
And then that sequence is identifying 20 unique mutations in your cancer.
That is then built into a personalized mRNA that activates your immune system like killer missiles.
activates your killer T cells to go after and attack your cancer.
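The personalization step described above, sequence the tumor, find the mutations unique to it, encode the top ~20 as targets, can be sketched at a toy level. This is a simplification under stated assumptions: real pipelines compare whole exomes and also score each candidate's immunogenicity (e.g. MHC binding), which is omitted here, and the sequences are made up.

```python
# Toy sketch of neoantigen target selection for a personalized mRNA vaccine:
# diff the tumor sequence against the patient's normal sequence and keep the
# first N mutations as vaccine targets. Illustrative only.
def find_mutations(normal: str, tumor: str):
    """Return (position, normal_base, tumor_base) for every mismatch."""
    return [(i, n, t) for i, (n, t) in enumerate(zip(normal, tumor)) if n != t]

def pick_targets(normal: str, tumor: str, n_targets: int = 20):
    """Take up to n_targets tumor-unique mutations (the transcript says ~20)."""
    return find_mutations(normal, tumor)[:n_targets]

normal = "ATGGCCTAA"
tumor  = "ATGGTCTGA"
print(pick_targets(normal, tumor))  # [(4, 'C', 'T'), (7, 'A', 'G')]
```

Each selected mutation becomes part of the custom mRNA payload, which is what makes the vaccine personal: two patients with "pancreatic cancer" get two different injections.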
So this is a breakthrough in how we deal with cancer.
And the fact that you're durable after six years is pretty extraordinary.
I remember this incredible quote from Raymond McCauley, our biotech guy at Singularity, who
said mRNA vaccines are the first battle in the last war against disease.
Amazing to me.
And I think this is showing that.
My daughter works on this.
My daughter works on this over at Moderna, actually,
mRNA vaccines.
And yeah, this is the holy grail, right?
I mean, being able to go from your cancer to,
here's the injection that's going to save your life.
It's extraordinary.
Well, and if it works for one, it ought to work for all.
That's the amazing thing.
It's a universal solution.
Like,
you know,
when Alex talks about all of math is cooked,
this is the difference between,
in the old days, I solved one math problem. Now I have an AI. It solves all math. This is the
equivalent in biology where if it works, it should work everywhere. Yeah, Alex, you're right. I mean,
mRNA was a, you know, an Operation Warp Speed thing. I'm just saying afterwards a lot of people
came down on mRNA vaccines. But there's a lot of politicized griping over
mRNA vaccines in general, and there's going to be political griping over almost anything at any scale.
I do think back a quarter of a century to Eric Drexler and Engines of Creation and the National Nanotechnology Initiative, when the U.S. Congress was sold a story that with billions of dollars of congressional and national investment we would get medical nanorobots that would swim through our bloodstreams and kill cancer cells.
Well, we're getting it, though. We're just not getting it with diamondoid nanorobots. We're getting it with these lipid nanoparticles and Moderna-
and Pfizer-style mRNA vaccines.
I think it's interesting to almost as a retrospective
to say we actually got the nanorobots.
They're just fat.
They're not silicon.
They're not diamondoid.
They're fat.
Yeah, we're using our own machinery
to do the battle for us.
That's the other angle.
I mean, do you have a prediction, Peter,
given that immunotherapies, in some sense,
like really, really coarsely,
we've known about some form of immunotherapy
for 100-plus years. And people who were infected with a virus or a bacterial infection
100 years ago in some cases showed tumors shrinking. We've known at some level that some form of
immunotherapy would work. And we're only now figuring out how to fully weaponize it and operationalize
it. Where do you think this goes? You think like in 10 years we're all wearing Apple smart
watches that are looking for evidence of tumor DNA or RNA in our bloodstream and then
send our daily mRNA update to a programmable implant or something?
I think that is basically it.
Either they're implantables or you'll be sampled on a regular basis.
I mean, the goal, of course, is find it at the very beginning, especially if there are solutions.
There's one more point about this that I think is really powerful.
This is personalized medicine is actually becoming operational.
And that's a huge inflection point.
We've been waiting for a long time.
Here's another example.
Again, just to give people hope and to see that,
the data, you know, longevity mindset is about seeing this over and over and over again, saying,
yeah, the world is changing. You know, the things that used to kill us are being either solved or
delayed. So a single-shot CAR-T infusion shows a strong response in melanoma. And it's not just a strong response; it's 100% cancer-free after a single shot. This was an unexpected result. Within two months of treatment, all 20 patients in this trial had minimal residual disease, MRD-negative, right? No disease identified after they were assayed again, meaning that all patients, with a median follow-up of 15.3 months, had no recurrence of their melanoma. So it's game-changing.
So how does this work?
You draw blood.
You identify you have melanoma.
The doctor finds it.
We should all be scanning ourselves all the time. We do this at Fountain using visual scans. At a minimum, if you have a family history of skin cancer, please have yourself checked on a regular basis. So the doctor draws blood,
extracts your T cells from the patient, genetically engineers the T cells, right? A gene is inserted giving those T cells a new receptor called a CAR, a chimeric antigen receptor, that is specifically programmed to recognize the protein from your melanoma. Your T cells are then re-injected back into your body, hundreds of millions of them, and they go identify the melanoma, and they slay it. For the first time ever with this type of therapy, we're using the term cure on this particular type of cancer. I mean, it's extraordinary.
So just another example, what's coming?
This is both amazing, but can you also see the clumsiness of it, requiring blood extraction and then CAR-T cell creation in vitro? Why can't we do this in vivo? Why can't we do this in individual cells, even? We're seeing the beginnings. This is almost like the horse-and-buggy era of immunotherapies, but surely we should be able to do this in a fully autonomous, intracellular environment.
Why isn't it done yet?
Take the win, Alex.
Take the win.
Oh my God.
Yeah, I want my FSD.
Yes.
And you shall have it.
All right, here's one more story, and this is a fun one.
So, you know, MRSA, people have probably heard about this.
It's methicillin-resistant Staphylococcus aureus.
It's a killer infection, right?
This has been typically in hospitals.
It's now getting out to the community.
So 2.8 million people have MRSA infection every year.
It kills 35,000 people in the U.S. alone.
The problem is all the first-line antibiotics for MRSA have failed.
Methicillin, penicillin, amoxicillin, and now even vancomycin, which has been the, you know, antibiotic of last resort, is no longer working.
So this particular drug, candesartan, is now being used. It's an FDA-approved medication for blood pressure, and it works to basically stop and inhibit an MRSA infection.
And so this is an example of taking an existing drug, and it's now fully usable by the scientific and medical community because it's been approved. We know its safety profile.
So I love this.
Do you remember, Salim, on stage at the Abundance Summit, we had David Fajgenbaum?
Yeah.
So this is similar to his story. Let me just tell the story and congratulate him.
I'm a donor to his foundation.
So here's the story here.
So in 2010, he's a 25-year-old medical student.
He comes down with a rare disease called Castleman's disease.
And they throw everything they can at him.
And he's literally read his last rites.
He has four near-death experiences.
And then as a medical student, he starts experimenting on himself.
And he discovers that his disease is caused by a hyperactivation of the mTOR pathway.
And he says, well, if it's the mTOR pathway,
I can probably downregulate it using rapamycin.
And he does that, and he finds out that it works.
So he's been in remission for 12 years.
And he comes up with the idea,
are there other diseases out there
for which an existing approved drug
can be used to cure the disease?
And here are the numbers.
There are 18,000 recognized diseases out there,
but only 4,000 FDA-approved drugs.
And so he's now using AI to match the existing drugs
and repurposing them against new diseases.
And it's working.
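The matching exercise Peter describes, 18,000 recognized diseases against only 4,000 approved drugs, can be pictured as a scoring problem. Here is a deliberately tiny, hypothetical Python sketch: the drug-target and disease-pathway data are made up for illustration (only the rapamycin/mTOR link echoes the story above), and real systems like Every Cure's use far richer evidence than simple set overlap.

```python
# Toy sketch of drug repurposing as a matching problem: score each
# approved drug against each disease by the overlap between the drug's
# known target pathways and the pathways implicated in the disease.
# All data below is illustrative, not a real pharmacological database.

drug_targets = {
    "rapamycin": {"mTOR"},
    "candesartan": {"AT1R"},
    "metformin": {"AMPK", "mTOR"},
}

disease_pathways = {
    "iMCD-like disease": {"mTOR", "IL-6"},  # hyperactivated mTOR, per the story
    "hypertension": {"AT1R", "RAAS"},
}

def jaccard(a, b):
    """Overlap score between two pathway sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_drugs(disease):
    """Rank every approved drug against one disease, best match first."""
    pathways = disease_pathways[disease]
    scores = {d: jaccard(t, pathways) for d, t in drug_targets.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

best_drug, best_score = rank_drugs("iMCD-like disease")[0]
print(best_drug, round(best_score, 2))
```

With this toy data, rapamycin ranks first for the mTOR-driven disease, mirroring the story above; scaling the same idea to 4,000 drugs and 18,000 diseases is exactly the kind of brute-force matching AI is good at.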
I think that's such a great example of citizen science also, right?
Take a personal problem and then just start hacking away through it.
I think we're going to see hundreds and thousands of these examples.
This is where people should think and understand why are we so excited about technologies
because this is now possible.
Yes.
And this was not possible 10 years ago, five years ago. And now it's just going to become more rampant. And any problem can now be solved by kind of just focusing on it, attacking it with AI, and going after it. It's incredible.
Solve everything, right? Yeah, solve everything. And also, I would say, historically, before this era, off-target indications were a dirty word. Drugs that have lots of off-target side effects were highly undesirable. But now, if we have amazing AI models of individual cells and the body, suddenly off-target side effects become a secret weapon, and we can repurpose drugs, we can combine repurposed drugs.
I'm very bullish on the space.
I advise and have a portfolio company, Send Jam Therapeutics, that is focused increasingly on AI for repurposing medications, for anti-inflammatories, for other purposes.
I think this space has enormous potential thanks to AI.
Yeah, amazing.
For folks who are interested,
you go to everycure.org.
You can see what David's doing.
It's a nonprofit and support his work.
He's brilliant.
All right, let's get into some fun conversations here.
The robots are indeed coming.
A few stories to report here today.
The first is the ping pong champion of the world
is now an AI-driven robot.
Let's take a look at this little bit of a match here,
and we can discuss it.
The background music is killing me.
Sorry about that.
Anyway, the robot's using nine cameras and three vision systems. It won three out of five games. Let me pause this here. It won three out of five games.
I'm surprised.
It won all five games.
And of course, it will.
Doesn't have a lot of top spin, actually.
It's just very nimble.
Note that this is the worst it's ever going to be.
That's kind of incredible.
The speed of response is amazing.
Yeah.
This is, this robot's called Ace.
And it's, you know, I'm not sure if I would see this in the same lineage as Deep Blue
or AlphaGo, but it's the beginning.
It's totally not.
This is a much lower dimensional game than any of those board games.
It frankly is astounding to me that it took this long to reach human performance in table tennis
because it's such a simple game.
You only have a handful of degrees of freedom in the ball.
You have the position.
You have its linear momentum.
You have its angular momentum.
And I think that's about it.
The rest is just modeling the trajectory and maybe doing a little bit of Monte Carlo search over tactics that your opponent might take. This should have been solved years ago. I don't know why this took so long.
Let's answer that question, actually, because that's really well said. And this is very similar to many, many robotic operations in your home, in a factory, you know, and whatever the barrier was, I think it's probably related to the vision system. You know, it's not a high-margin problem, right? It's not really worth investing a billion dollars to solve it. But now,
because the vision systems and the feedback systems are dirt cheap and easy.
I bet it was solved by one or two people in like a few weeks.
Yeah.
And that means all these other home robots can now be built by one or two people in a few weeks.
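Alex's point about the ball's low-dimensional state (position, linear momentum, angular momentum, then just trajectory modeling) can be sketched in a few lines. This is an illustrative toy, not the Ace robot's actual controller: it is 2D only, uses a crude Magnus-style spin term, and all constants are made up for illustration.

```python
import math

G = 9.81         # gravity, m/s^2
MAGNUS_K = 0.02  # crude spin-lift coefficient (illustrative, not measured)
DT = 0.001       # Euler integration step, s

def predict_landing_x(x, y, vx, vy, spin, t_max=2.0):
    """Step a 2D ball state (position + velocity + spin) forward in time
    until it falls back to table height (y = 0); return the x position."""
    t = 0.0
    while t < t_max:
        # Magnus-like lift: acceleration proportional to spin, rotated
        # 90 degrees from the velocity vector (a very rough approximation).
        ax = -MAGNUS_K * spin * vy
        ay = -G + MAGNUS_K * spin * vx
        vx += ax * DT
        vy += ay * DT
        x += vx * DT
        y += vy * DT
        t += DT
        if y <= 0 and vy < 0:
            return x
    return x  # never landed within t_max (shouldn't happen for these inputs)

# Ball leaves the paddle 0.3 m above the table at 5 m/s, 10 degrees upward,
# with a little topspin; predict where it comes down.
x_land = predict_landing_x(
    0.0, 0.3,
    5 * math.cos(math.radians(10)),
    5 * math.sin(math.radians(10)),
    spin=10.0,
)
print(round(x_land, 2))
```

A real system would add air drag, bounce models, and the Monte Carlo search over opponent tactics that Alex mentions, but the state space really is this small.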
We should dig a little deeper.
Similarly, there's a tennis playing robot also, which I'm excited to play with, which would be really cool.
But it's all the same thing.
It's all the same category.
Salim, total no-brainer.
If you've ever used a ball machine, you then go pick up all the balls for like 20 minutes.
I know.
A robot that does that, literally, MIT class 2.70 could have done it.
What's the barrier?
And I'm sure the barrier is related just to the feedback control and the vision, which you can
now just use with a transformer.
Well, also, people that are in robot labs don't play tennis, so they don't have an incentive to go do that. They haven't been socialized to it.
Actually, I don't want to go too far down this rabbit hole, but there's a massive correlation
between successful founding entrepreneurs
and the MIT tennis team,
it's basically 100%.
It's crazy, including Warren.
Anyway.
All right.
Here's our next story.
The Tesla Cybercab is now in production.
Take a quick look at this video here.
So Dave, you and I saw this,
and we saw the production line.
We were in Austin in December. Of course, no controls: no steering wheel, no pedals, an operating cost of 20 cents per mile.
And Elon's announced he's going to sell it for $30,000.
I think an incredible investment, if you can afford it,
is you buy 10 of these,
and you put them out in your community
and it earns money for you while you sleep.
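The "buy 10 and let them earn money while you sleep" pitch is easy to put numbers on. Only the $30,000 price and 20-cents-per-mile operating cost come from the episode; the fare and daily utilization below are made-up assumptions, so treat the payback figure as a sketch, not a forecast.

```python
# Back-of-the-envelope robotaxi economics. Two numbers come from the
# episode ($30,000 purchase price, $0.20/mile operating cost); the fare
# and utilization are assumed values, for illustration only.

PRICE = 30_000           # purchase price, $ (from the episode)
OP_COST_PER_MILE = 0.20  # operating cost, $/mile (from the episode)
FARE_PER_MILE = 1.00     # assumed average fare, $/mile
MILES_PER_DAY = 200      # assumed paid miles per day

margin_per_mile = FARE_PER_MILE - OP_COST_PER_MILE  # profit per paid mile
daily_profit = margin_per_mile * MILES_PER_DAY      # profit per day
payback_days = PRICE / daily_profit                 # days to recoup the car

print(f"payback in about {payback_days:.0f} days")
```

Under these assumptions, a single cab pays itself back in roughly six months; halve the utilization and the payback roughly doubles, which is why the fleet-of-ten idea hinges on keeping the cars busy.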
When is this expected?
If you're just listening to this podcast
and you're not watching the video,
go find this video clip.
Yeah.
You know, in the podcast.
You've got to see the interior of this to believe it.
It's like you're walking into a car, but it's just a love seat. And now, interesting, right? It's only a two-seater.
Yeah, which is, you know, the average load for an Uber. It's like 1.2 people per Uber.
Right, so two seats makes total sense.
Where is this?
Well, look, if you have four people, just push the button twice and two of them come.
When is this expected, by the way? I need this to get my kid to school so I don't have to do that.
So, I mean, production off the line, we saw them. It officially started this past week, April 24th.
And, you know, the challenge is, can they really build at the rate that they want?
They want two million of these per year as their goal.
Yeah.
Are there regulatory hurdles, or has that been passed now with Waymo?
No, it's the same as Waymo.
Okay.
Yeah, it's town by town.
It's state by state, town by town.
But if you're covered, yeah, you just get in and go.
I mean, the difference is, a Waymo, because of the LiDAR and all the camera systems and just the base vehicle, I think it probably tops out over $100,000, probably $150,000. I'm not sure, if they get into higher production, whether it's going to come down.
But at 30K, this is insane.
Yeah, no, there are so many parts. If you look at the parts just laid out, you know, because there was that great exploded car in the showroom, ICE versus electric. And compare it, yeah, compare it to a consumer gas-powered car in just raw part count. It's got to be an 80 to 90% reduction in components versus a gas consumer vehicle.
I'll give you the statistic I always have in my head. The number of parts in the drivetrain of a combustion engine car is about 2,000. A Tesla has 17 moving parts in the drivetrain.
It's just the future of transportation is so good.
It's just better technology.
Yeah, and it doesn't need a huge battery range either.
It can just go and hang out and recharge itself whenever it wants.
Another one will come.
And guess what?
On the transportation technology line, here's the next story, from Joby Aviation. This is JoeBen, who started Velocity11 with Rob Nail, if you remember Rob, Salim. So Joby just did its first air taxi flight in New York, from JFK into Manhattan. Let's take a listen to this news report out of New York.
And hello, I live in New York.
Hello. I know. Well, this is going to help you out, buddy. Check this out.
Getting to New York airports is a nightmare.
Electric air taxi demonstration took off from Kennedy Airport and made the short trip to the West 30th Street heliport.
At 11 a.m. this morning, if you were to drive that 16 miles, it would take more than an hour.
In this cutting edge plane, roughly seven minutes.
Joby Aviation's goal is to make this type of travel the gold standard pointing out several pluses,
including zero emissions and how quiet it is, 100 times quieter than a traditional helicopter. This so-called air taxi would shuttle people from JFK to the West 30th Street
heliport as well as the one at West 34th Street and the downtown Skyport. The aircraft seats up to
four passengers. There's one pilot, room for luggage, and it will fly between one and three
thousand feet right now. Joby does have the green light from the FAA for this phase. And if things
pan out, the company hopes to have its fleet up in the air and running within the next year. But
For now, for the next week, you will see this aircraft that kind of looks like a large drone buzzing over our area.
It's a time machine, gentlemen.
Can you imagine it?
I'm standing at the heliport with my bags ready.
I love it.
I can't believe they got the noise down.
eVTOL is a lovely name, though.
I just call these things flying cars, for lack of a better term. We need a better term than flying cars, a better term than eVTOLs. It took too long. We were supposed to have these by 2015, in Back to the Future Part II. Here we are in 2026. Why did it take so long?
We need it. We need Mr. Fusion to get there.
You think Mr. Fusion is the reason we didn't get our flying cars?
Absolutely. That's what the movie showed.
Oh, man. Well, in our robotics segments here, we had two back-to-back, Alex. Why did this take so long?
So let's stay on that. Why did everything take so long, I guess.
Hello, regulatory.
You think it's regulatory? Do you think regulations are why we didn't get it?
The technology's been there for quite a while.
It takes a long time to get a permit. Ask Peter how long it took him for ZeroG.
That's the least of it.
You had approval to do something that NASA had been doing for 20 years.
Anyway, yes, the FAA is not happy till you're not happy.
That's the rule.
Well, I think we've got to answer that question because, you know, a lot of the AMA questions are around what the jobs of the future are going to be if white collar gets obliterated. But I think a lot of the answer lies in these last couple of segments. You know, robotic stuff is going to be abundant imminently, but it doesn't just naturally happen. And so if we can answer Alex's two questions on what the bottlenecks are, those are jobs. Whatever those are, those are your jobs. Those are AI models. If there's a bottleneck there, the AI will solve it. This episode is brought to you by Blitzy, autonomous software development with
infinite code context. Blitzy uses thousands of specialized AI agents that think for hours
to understand enterprise scale code bases with millions of lines of code.
Engineers start every development sprint with the Blitzy platform,
bringing in their development requirements.
The Blitzy platform provides a plan,
then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously,
while providing a guide for the final 20% of human development work
required to complete the sprint.
Enterprises are achieving a 5X engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5X your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today.
All right, let's jump into the AMA with the mates. So, guys, thank you again for all the comments that you give us on YouTube. We read them all. I have Skippy read them all as well and summarize them. We pick out eight questions that we can answer every week. So please keep them coming. And let's go to those questions.
All right.
So gentlemen, pick your favorite question off list number one.
Salim, do you want to go first?
I'll go with number four, as I know that world a little bit,
which is what's the future of large consulting firms like Accenture or Cap Gemini?
This is from @SteveBottle1501.
This goes full-on into the transformational effort that's going into enterprises here. Traditional consulting is in very big trouble if it remains a pyramid of junior labor producing analysis decks. AI totally destroys that model.
But consulting firms, you know, in the land of the blind, the one-eyed man is king, in a volatile world,
your clients are slower than you are and they need help.
The model will have to change.
The future of consulting won't be like a people pyramid.
It's an intelligence platform plus domain expertise plus change management.
And we've been holding the change management side for a while.
The winners are going to be bringing agentic workflows and benchmarks and governance
and implementation capacity to their clients.
The losers are just going to keep selling headcount.
From an ExO perspective, consulting goes from experts-for-rent to a transformation operating system.
And the companies that help their clients do that will win.
Yeah.
You know, it's, we've talked about this before that in the old scarcity model,
you put a wall around all of your experts inside and you meter them out by the hour,
right? And that is going to get collapsed. Alex, why don't you go next?
I'll take question number two, which asks: everyone can be an entrepreneur with AI as a tool; however, what action do you take when you genuinely don't have a creative idea for a direction? For most, the answer is none. And this is from The 3 Billionth Random User.
I don't agree with the premise.
I think, and one of the reasons why one of my funds, O210 Capital, backed a firm, started by Friend of the Pod,
Alex Finn, called Henry Intelligent Machines or him, is to solve the problem of creative ideation for starting new ventures.
I think just as AI can take over as an operator of a business or a fleet of businesses,
AI can also automate the process of creative ideation for those businesses.
And I think in that world, in the Hymn world, if you will, the role of the human,
sort of a one-person owner or magnate overseeing a conglomerate of maybe hundreds or thousands of AI-run micro-businesses,
the role of that human entrepreneur then becomes one of a, you know,
a taste maker. You have opinions. Everyone has opinions as a consumer of goods and services,
but those opinions can shape the taste over fleets of AIs that are providing the creative ideation
for businesses that they bring to you. They say, hey, I want to start this microbusiness for you.
You like it? Yes, no. And then the human can have an opinion. The AI is performing the ideation and the generation part; the human provides sort of the discipline and the taste and the discrimination for which ideas pass the filter and which ones don't.
And that's the solution.
That's how we square this circle of humans not actually in extremists needing to generate
all the creative ideas themselves.
Yeah, agreed. You know, idea generation has never been the limiting factor. You just have to get around different people or just notice the problems around you. Historically it's been execution; that's been the issue. You know, go check out Polsia. I think it's Polsia, which is "AI slop" backwards. If you sign up for that, it will scan all of your background and it will generate ideas for you. And in fact, it will generate a website for a business based upon what your passions and interests are. Anyway, fascinating stuff. And I'll say, maybe instead of Polsia, I'll talk my book here since I have a financial interest in this one: check out meethenry.ai.
Okay.
Fantastic.
Dave, number one or three?
I'll take three and leave you with the hard one.
Thanks.
Happy to help.
If you eliminate entry-level jobs but keep experienced jobs, what happens when the experienced people retire? Isn't that like eliminating babies from humanity? asks Todd Marshall 416.
I don't think it's quite that dire, Todd. Eliminating babies from humanity is about the worst thing that could possibly happen. If you eliminate entry-level jobs, well,
look, this was going to happen anyway. If you think about, you know, actually we have a weekend
place up in Vermont, and there's the Simon Pierce glass blowing factory is up there. And if you want
to blow glass, you have to apprentice with a senior dude for like a year, like a decade. And then they let you make glass. It's like a page out of 200-year-old history. That mode of operation is going to go away in all forms of white-collar work no matter what. So the rate of
change of the world and the singularity is so fast that the entry-level career path was kind of a
dead end anyway. So now Meta announced a 10% layoff, which is really going to be more like
30% according to the insiders, I know. And they're definitely not hiring new entry-level people
in the middle of doing the layoff because AI can do all the coding. That was not the career
path you wanted in the first place. So we're going to have to find a new way forward, but I think
AI is going to be the ultimate teacher. We're going to save a ton of time on, like Peter was saying
earlier in the podcast, the four years of medical school, followed by four years of fellowship
and internship, eight years of your life after you're already done with undergrad. It's just way
too much time. So it's all going to move to AI-based nimble training. And then, you know,
this massively expanding economy creates huge amounts of new opportunities.
every day, but it's opportunity that didn't exist the prior day. So the entry-level job
wasn't really likely to lead you on that path anyway. So it's all got to get refactored.
It's nothing like people stopping having babies.
I think it's so well put, Dave. Really well put.
All right. Question number one, which I'm left with, is from @GianlucaPachiani808, who asks,
you guys say AI will create jobs, but for whom? It looks like AI is creating jobs for AI.
not for people.
So Gianluca, the fact of the matter is, in the long run, yes, AI will be able to do any job.
I think that is the case.
But people still like working with people.
People still like hanging out with people.
And I think it's ultimately going to be the fact that two things are occurring.
Number one, as every technology destroys a layer of jobs, right, new jobs are created on top.
of that. You know, internet killed travel agents, but it spawned millions of social media managers,
app developers, YouTubers, and everything else. So they're going to be new layers of jobs coming out.
And yes, those may well be displaced by AI again. At the end of the day, the question is,
what are you passionate about and how do you use AI to help deliver that? There's going to be a human
interface layer for a lot of things because people like hanging out and interfacing with people,
you know, us meat puppets. So it's going to be navigated. It's going to be important. And I'll just
remind you one other thing. The idea of a job is a recent creation. And most people don't love the jobs
that they have. They have the jobs they have right now because they frankly, you know, need to put
food on the table and get insurance for their families. So if you could do anything,
what would it be? Would it be to work? I mean, in a future of universal high income,
you know, where everything is demonetized at such a point where you don't have to work,
then you start doing the things that you love. So that's my take on it. All right, let's move on to
our second set of questions. Alex, why don't you go first?
Well, let's go with question number five. Wasn't all of this originally predicted by Ray Kurzweil
to be happening sometime around 2040? Are we genuinely that far ahead of schedule? And this is
from Brett Avalin. I'm not sure, Brett, what all of this you're referring to may mean,
but I do think broadly we're well ahead of where Friend of the Pod Ray thought we'd be.
I think we achieved, as I've mentioned on numerous occasions, I think we achieved AGI,
which isn't Ray's concept, but was popularized by Nick Bostrom and co-conceived by Ben Goertzel and some others. I think we achieved that by no later than summer of 2020. And Ray, and he may say I'm misconstruing his timelines, was predicting
his version of AGI by 2029.
So call that a nine-year gap.
Ray, and I've discussed this with him on the pod, is predicting the singularity, his version
of the singularity, by 2045.
My version of the singularity isn't a point in time.
It's now.
Certainly not in 2045. It's now, and it's an interval, and we're right in the middle of it. So are we genuinely far ahead
of Ray's schedule? I think we are. I think Ray would probably at this point and has arguably said that
we are in some ways ahead of his schedule. And I think the benchmarks reflect that. And I think
the 2045 timeline that he provided where the superintelligence would be collectively smarter than all
of humanity, I think we're going to hit that so far ahead of 2045.
We'll ask him next week.
We'll be with him in six days.
From the horse's mouth.
Yes.
All right.
Dave, why don't you go next?
I'll take the hardest one on this one.
Number eight: P(doom), the probability of the destruction of all humanity. Estimates: Musk and Hinton say 10 to 20 percent. Amodei says 25 percent. Altman says non-zero. He actually said more like 10 percent when I interviewed him.
How can any of these CEOs think it's acceptable to have a one-fifth chance of human extinction?
They all agree with you that it's completely unacceptable.
And they all say stopping research and letting China run forward isn't going to solve the problem.
And so they each individually trust themselves.
You can debate whether that's good or bad, but they do.
and that's why they want to not lose the race individually,
and that's why they're pushing forward at full speed.
I think, you know, Musk and, I think along the way, Amodei
have both suggested a six-month pause, but it wouldn't work.
At the same time, they say it, they say it'll never work.
It won't happen in the real world,
so I'm just going to keep moving as fast as I can.
But they 100% agree with you.
This is completely unacceptable, ridiculous,
and the lack of government involvement across the world
is utterly insane.
So that doesn't solve it in any way.
It's just that is what's actually happening
and that's what's going to continue to happen.
And I'm continually shocked, as is Alex,
I know, with our inability to get any kind of government reaction
to the, what's now the obvious.
We were telling them a year ago when maybe it wasn't 100% obvious,
but now it's 100% obvious, yet still so slow.
So anyway, there's your answer.
I'd be curious.
Dave, do you think that they believe their own estimates here, or is this a case of revealed preference
where they think maybe it's more socially acceptable to estimate a higher number,
but actually through their actions, they're revealing a preference that suggests their internal estimate is much lower?
I think it's lower. I don't know if much lower. I think they all have the same view, you know, that chemical, biological, radiological terrorism is the number one risk. And so I think it's probably lower, but I don't think it's like 0.001% low.
Interesting.
Selim, you have two to choose from.
I will take number seven, okay?
Which is when white-collar jobs are erased,
where does the consumer demand come from
to buy from all these new entrepreneurial ventures?
This is from T. Tilly,
you know, this is a tough one, right?
This is the central political economy question of AI.
If productivity explodes but income does not flow to the people, demand collapses and the system becomes unstable.
Capitalism needs customers, right?
So we need new distribution mechanisms.
We need lower costs.
We need new ownership models.
We need AI dividends.
We need equity participation.
We need sovereign funds.
All of this points, and this is similar to the previous question, to where, you know, on the optimistic side, AI makes goods and services cheap while giving individuals more leverage to create income. That's the good side. The pessimistic side is that you have extreme concentration and then you have a massive collapse of the economy.
The path we take is a governance and an institutional design choice, not a law of nature.
So governance and our institutions need to fricking wake up and smell the roses here.
We have to rethink this whole thing, the social contract, which is what we're basically talking about; it's essentially being wiped out.
We can be optimistic about it, but the pessimistic case has a very big downside here.
All right, the final question in our AMA today comes from @JamesWilliamsCUQ. How can a new CS engineer get experience to become a lead AI engineer if you can't get a job in the first place?
James, first of all, you know, as we've said many times, getting a job is the old model: the old model of doing well in high school, getting into a good college, getting a diploma, getting hired as a junior person, and working your way up the chain. That has been vaporized, or at least is being fully vaporized, right now.
The option right now is build yourself outside the job, build in public, right?
Basically, go and find something that you're passionate about.
It's based on your massive transformative purpose, something you care about.
We're going to be launching an XPRIZE in this area very shortly.
and use the tools available today to build and ship.
And, you know, your GitHub is now your resume.
Companies, if you want to get a job versus starting a company yourself, are increasingly hiring based upon what you've done. I remember Elon said, I don't care if you have a college degree; I care about what you've done. You know, that is your degree now. That is your resume. Show me that you're brilliant by what you build, not by what you happened to learn in some college or graduate degree or entry-level job.
So build in public, the barrier to entry has never been lower for you to build something extraordinary
that shows your capabilities.
And once you do that, you're probably unlikely to be going after a job.
You're probably going to want to partner with a couple of friends and build a product, a company, a service yourself.
So that's my answer.
I'm sticking to it.
Great advice.
We're just advising a couple of universities around this, Peter.
And one of them is an engineering university.
And what is an engineering degree?
And it's pretty clear that the engineering degree of the future will be go build some stuff.
And at the end, what did you build?
And you get a degree granted on what you learned, but what did you build?
Yeah, I love that.
And if you haven't done anything to start yet, other than listening to the podcast, add Alex's Innermost Loop to your daily regimen first thing in the morning. And that alone will inspire you to shift gears and get into this.
Oh, thank you, Dave. That's very sweet.
For those who want to read the Innermost Loop, just go to Alexwg.org, and I provide links to Substack and X and Spotify, et cetera. But I appreciate the promo, Dave. It's very kind.
All right. Our outro music today, which is beautiful, is from Hitham Said. It's "AItopia."
All right. Gentlemen, get ready for some beautiful video and audio.
Living with no needs in sight, a home made to order with fancy little lights. No need to worry, it's paid for, don't scare, the mortgage is dead, no debt will you bear. Energy is endless, we harness the sun, combined with safe atoms, forever in fun. No need to make widgets, our farming is done, no punching a clock, it's all just begun. Robots will take on the pain, for us to live happily on Earth once again. Passing the time in thoughts and desires, enjoying the peace, no chaotic fire.
All right.
Thank you to my brilliant moot-shot
mates, AWG.
I wish you a beautiful week.
Dave and Saleem.
I can't wait to see you guys next Monday. We're all together
again. We're going to be physically
at MIT for
the book launch of We Are as Gods. We're going to be recording a podcast episode there.
Can't wait to do it face to face.
May the fourth be with us. Say it, Alex: may the fourth be with us.
Yes, yes, for sure. As a Star Trek fan, I'm not allowed to say that.
By the way, check out what's right above me. I've got a surveillance camera literally right over my head. No, it's just an ordinary camera, a surveillance camera.
I couldn't resist.
And by the way, everybody, if you're listening.
It's not easy standing in the Guadala airport, holding a laptop at eye level.
You did pretty damn good.
You did pretty damn well going mobile.
I got my exercises for the day.
Resistance training.
I saw you moving around where you're trying to avoid like policemen or something.
No, I just have to shift positions down here and shift hands, and once in a while I lean on something.
There's nowhere to sit here that's easy, and I don't want to risk losing a connection that I fought so hard to get.
Oh, my God.
Okay.
Hey, if you've got an outro song or an intro song, please send it to us at media@diamandis.com.
We'd love to hear it, see it, and potentially play it.
And thank you for subscribing to this.
And thank you to all of the fans out there.
I know all four of us run into you on the street at the airports, at events.
Amazing.
It's really great.
If you see us, do come up and say hi.
Yeah, for sure.
Although not too many.
All right, take care.
Bye, all.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate.
Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends.
I have a research team.
You may not know this, but we spend the entire week looking at the Metatrends that are impacting your family, your company,
your industry, your nation, and I put this into a two-minute read every week. If you'd like to get
access to the Metatrends newsletter every week, go to Diamandis.com slash Metatrends. That's
Diamandis.com slash Metatrends. Thank you again for joining us today. It's a blast for us to
put this together every week. Okay, when I sell my business, I want the best tax and investment advice.
I want to help my kids, and I want to give back to the community. Ooh, then it's the vacation
of a lifetime.
I wonder if my out-of-office has a forever setting.
An IG Private Wealth Advisor creates the clarity you need with plans that harmonize your business,
your family, and your dreams.
Get financial advice that puts you at the center.
Find your advisor at IGPrivatewealth.com.
