Moonshots with Peter Diamandis - Demis Hassabis on AGI, Robots Scale Production, and Elon’s $1T Mars-Shot Comp | EP #253
Episode Date: May 7, 2026

This episode features a dynamic panel discussion on exponential technologies, AI advancements, robotics, and the future of humanity. Experts explore the implications of AI, robotics, biotech, and societal shifts, offering insights into what the next decade holds for innovation and civilization.

Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends

Peter H. Diamandis, MD, is the founder of XPRIZE, Singularity University, ZeroG, and A360. Steven Kotler is a New York Times bestselling author and founder of the Flow Research Collective and Flow Institute, known for his work on flow and human performance. Salim Ismail is the founder of OpenExO. Dave Blundin is the founder & GP of Link Ventures. Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified.

Apply to Dave's and my new fund: https://qr.diamandis.com/linkventureslanding
Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy
Your body is incredibly good at hiding disease. Schedule a call with Fountain Life to add healthy decades to your life, and to learn more about their memberships: https://www.fountainlife.com/peter

*Recorded on May 4th, 2026
*The views expressed by me and all guests are personal opinions and do not constitute financial, medical, or legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Demis Hassabis, the CEO of DeepMind, says AGI may not need a major breakthrough.
I've argued in the past that we achieved AGI in the summer of 2020.
We know, I would argue, what AGI is, and we know how...
Please define it for me.
Figure AI, they've increased production from one robot per day to one robot per hour.
Their target is 100,000 robots between now and
2030. 1X Technologies' production goal is 10,000 robots this year, 100,000 in 2027. The prediction
that comes both from Elon and from Brett Adcock: up to 10 billion humanoid robots by 2040.
It's probably right. What I think is perhaps even more interesting is what does a post-humanoid
robot form factor look like? There's the kinds of conversations and imaginations that one can
have in this, you know, exponential singularity that we're living in.
Everybody, welcome to a special episode of Moonshots here at MIT.
This is a conversation around all things happening to uplift humanity.
You know, I want to read something I wrote this morning as an introduction to today's episode
because it's important.
So welcome to Moonshots where we distill the singularity.
No politics, no bullshit, just the technology that changes the world, the breakthroughs,
helping us uplift humanity, the news that matters most,
changing our world, impacting our companies and our families.
Our mission to help you understand what it is,
why it matters with a dose of optimism.
So any fans of moonshots here in the audience today?
Nice. Love it.
It is an honor and a pleasure.
One of the things last night, as I was going to sleep,
I just had this sense of absolute gratitude
for the people in my life
who I love and have a chance to work with.
And I like to bring them on stage.
First up, the man who is my guru on all things exponential investing.
Let's give it up for DB2.
Dave Blundin.
Back to MIT.
Thank you. Thank you.
My partner at Link Ventures and an extraordinary friend.
Dave and I go back, I don't know how many decades.
Don't say it.
We were roommates at MIT.
Right over there.
In Theta Delta Chi, up on the third and fourth floor.
Basically, it was Dave, myself, Mike Saylor from Strategy.
And it's been a friendship that's lasted.
Thank you, pal.
I love the lobster on your shirt, by the way.
Yeah.
Of course.
Yeah, the MIT gear.
We actually buy it at the coop,
but we don't steal the MIT logo.
And then we put the lobsters on afterwards.
Okay, nice.
All right.
Yeah.
All right.
Let's turn the conversation next to another MIT alum.
another member of the Link Exponential Ventures team, our resident genius.
Alex Wissner-Gross. Give it up for AWG, everybody.
Thank you.
Yes, he is real.
Maybe. Maybe.
Maybe.
I mean, the hologram technology has gotten really, really good.
It seemed real.
Could be an Android.
That's true.
Let me just check.
Very plain.
Yes.
Is that how you check for me?
Is that the test?
The test?
My fourth and final moonshot mate is someone who is the godfather to my two boys, a dear friend,
co-conspirator, a co-author, and a co-founder of Singularity University.
Give it up, everybody, for Salim Ismail.
Hug, hugsies, hugsees.
Oh, I'm not doing that.
Yay, good to see you, buddy.
Likewise.
Looking careful.
Looking good.
And we have a special guest with us on
the stage today. He is a three-time Pulitzer Prize-nominated author. He's the author of 17 books.
He's my co-author of Abundance, Bold, and The Future Is Faster Than You Think. And now, We Are as Gods.
Let's give it up for Steven Kotler.
Nice. Nice. So, you know, it is so good to be back on campus. It's been too long. I mean,
And one of the things I love about being a, you know, a partner with you at Link, Dave, is that I get to get back here on a reasonable basis.
And Alex, you and I met, and I remember we met and we had this like intense high bandwidth connection instantly.
And then went to dinner and continued it, talking about everything from, you know, AI and ASI and aliens.
And it just was like awesome.
And it never stopped.
It never stopped, and I'm so grateful for that.
And I have to say, you know, I think probably Salim and Steven, not being MIT alumni, we feel sorry for you.
I went to Waterloo in Canada, and we call MIT the Waterloo of the South.
Was that framing?
MIT is what again?
Yeah, exactly.
Center for bad spellers.
Yes.
My favorite joke was, you know, at the local supermarket.
You know, a guy's in the line that says 10 items or less,
and the guy has 20 items in his cart,
and the woman at the cash register says,
so, do you go to Harvard and can't count, or MIT and can't read?
Yeah.
Anyway, okay.
I'll ask for better jokes next time.
So let's jump in.
We have a lot on the docket today and a lot happening.
For those of you watching this podcast online and not live here, this is a special episode we announced a few months ago.
It's for everybody in the room here who's purchased over 100 copies of We Are as Gods,
helping us get it to national bestseller status so far on a number of different lists.
Yeah, thank you.
And hopefully soon a New York Times bestseller, so thank you all for that.
Let's jump into the news.
So first news item: Elon's insane
Mars-shot comp. I mean, talk about over-the-top exponential. You know, there's never been
anything like this. So the SpaceX board votes on a comp package that is worth somewhere
in the neighborhood of half a trillion dollars when it pays out. So here it is.
You get 200 million super-voting shares, 10-to-1, when he hits a Mars colony of a
million people or more. I mean, this is not a Mars landing,
not, you know, a photo op on Mars, not a mouse on Mars.
It's like a million-person, you know, setup and a $7.5 trillion market valuation.
It's like an all or nothing.
And here's the number at 200 million shares at 7.5 trillion.
That pays out a half a trillion dollars.
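A quick back-of-the-envelope check on those figures. The 200 million shares, the $7.5 trillion valuation milestone, and the roughly half-trillion-dollar payout are the numbers quoted in the conversation; the implied per-share price and share count below are my own illustrative inferences, not disclosed terms:

```python
# Back-of-the-envelope check on the comp figures quoted above.
# Inputs are the numbers from the conversation; derived values are illustrative.
award_shares = 200_000_000     # super-voting shares in the award
target_valuation = 7.5e12      # $7.5T SpaceX market-cap milestone
payout = 0.5e12                # ~half a trillion dollars, as quoted

implied_price = payout / award_shares                    # dollars per share
implied_total_shares = target_valuation / implied_price  # implied shares outstanding

print(f"implied share price: ${implied_price:,.0f}")
print(f"implied total shares: {implied_total_shares:.2e}")
```

In other words, for the quoted numbers to be consistent, the award would have to represent a mid-single-digit percentage of the company at the milestone valuation.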
Easy here.
Have you seen any comp packages like that?
Well, no, no one has, because it never existed before.
And the super voting situation is new in the world, too.
You know, Mike Saylor pioneered that when Microstrategy went public.
No, Larry and Sergey did in Google.
Well, Saylor was way before Larry and Sergey, though.
So, for those of us who are dumb:
What is super voting?
Well, it's on the slide.
I know it's on the slide, but what the hell does it mean?
It means that your vote counts 10 times more than everybody else's vote.
Actually, Mike Saylor copied it from Sumner Redstone.
I think Saylor went public in 91 or 92 with super voting stock.
And Goldman Sachs said, that's insane.
no one will ever buy into that.
We're out of the deal.
And he said, you know what?
I'll live without you.
And so now Saylor still has control of MicroStrategy all these years later.
Then Sergey and Larry copied it, and it became all the rage.
So then Facebook went public, or meta now, public with super voting stocks.
So it just locks in those founders.
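To make the "locks in those founders" point concrete, here is a toy sketch of how 10-to-1 dual-class voting works. The numbers are a hypothetical cap table of my own, not any specific company's:

```python
# Toy dual-class voting sketch (hypothetical numbers, not a real cap table).
# Class B founder shares carry 10 votes each; Class A public shares carry 1.
founder_shares = 200e6      # Class B (super-voting, 10 votes per share)
public_shares = 2_800e6     # Class A (1 vote per share)

founder_votes = founder_shares * 10
total_votes = founder_votes + public_shares
total_shares = founder_shares + public_shares

economic_stake = founder_shares / total_shares   # fraction of the company owned
voting_power = founder_votes / total_votes       # fraction of votes controlled

print(f"economic stake: {economic_stake:.1%}")   # modest ownership...
print(f"voting power:   {voting_power:.1%}")     # ...much larger control
```

The asymmetry is the whole point: a founder can sell most of the economics to raise capital while keeping effective control of board and strategy.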
And it's been interesting to observe from an investor point of view because, you know,
the governance you would say, well, look, wouldn't a board be a better governing body,
more stable, more reliable over time?
But then those founders are the ones actually going to Mars, going to the moon, sending up satellites.
Like all the kind of really world-changing, crazy-sounding stuff comes from that structure.
It has to be a founder-led, you know, entrepreneurial tech-forward company.
No one else; I've rarely seen a replacement CEO drive that level of capability.
And you would know better than anyone; you wrote the book.
Like, a lot of things that are
going to happen in the next few years sound crazy. If you walk out on the streets of Cambridge,
Cambridge is unusual actually, go streets of Omaha, and say, we'll have a million people on Mars,
and they'd be like, you're nuts. And so then if that was your board, they would block you from trying.
So you need that ability to just act for these crazy-sounding ideas. But in exponential time,
where we are right now, everything sounds crazy, yet it's going to happen.
Yeah. Yeah. The second part of his pay package is 60.4 million restricted
shares when he brings online 100 terawatts of space compute, not gigawatts, right?
He's been talking about 100 gigawatts of space compute.
And now he's looking at 100 terawatts.
Alex, what do you make of that one?
I think inevitable, it's going to happen.
Other than the obvious clips about the Dyson swarm, blah, blah, blah,
I do think there's the seed here of the future of corporate governance in an age of what we
might historically have called moonshots.
Right now, we have traditional notions of C corporations
that exist to maximize profit for shareholders or return for shareholders. We have various
notions of B corporations or public benefit corporations that exist in addition to optimize public
benefit. I think I can see here the nucleus of almost a third type of corporation that Elon
and his board are pioneering, which is corporations that exist to achieve moonshots and that compensate
accordingly. I would love to see every S&P 500 company have similarly
ambitious outcomes and comp plans corresponding to these for their CEOs.
Yeah.
And yes, we get the Dyson swarm one.
I got a Saleem question on this.
That's an exponential organization comp package.
It seems like an exponential organization comp package.
It seems like it is, but he doesn't run exponential companies at all.
He runs top-down monarchies.
Command of control.
Yeah, command of control.
So like.
Yes and no.
So, you know, in a classic EXO in this style,
the founder holds the MTP and he holds the vision.
People buy into that vision, and then he holds them to task.
So if you say you're going to build a rocket engine,
and you're going to design that rocket engine to get you to Mars,
people have to step up and go, I'm going to do that and make that happen.
He delegates very well, Elon.
He may get involved with engineering decisions and go,
well, how are you going to achieve this, etc.
But once they've agreed, they get it done.
So the inspiration comes from the top, flows directly down through the org structure with no mitigation.
And NVIDIA is the same.
There's like five layers between CEO and the top individual contributors in the company.
So the holder of purpose, and I think Alex put it well, these are all exponential organizations now,
where the pay package is a moonshot.
And if you achieve that, if I'm an investor and he achieves that, I'm laughing all the way to the bank.
I'm thrilled to bits, right?
Who gets upset about that?
Unless in retrospect, you're an idiot.
Because you just signed up for that no matter what the pay package was,
if that was the ratio of overall capital creation and value creation to that.
And I think the bigger picture here is that these are all trying to achieve the impossible.
And if somebody's trying to achieve the impossible,
you give them all the respect and support because it literally is insane what they're trying to do.
And by the way, I've had two conversations with CEOs for kind of moonshot companies that are like,
okay, I have to set this kind of a goal.
And it's only possible in a founder-led company.
It's rarely going to see it in a company with a board of directors and a CEO who's the third or fourth CEO in it.
And by the way, if you're going to set a moonshot goal with the CEO, the board's got to be completely supportive of it.
Right?
I've seen one exception.
What's that?
I got approached and asked to do a workshop with the C-suite and senior management of Gucci at an offsite.
And the CEO said, I will turn this into an EXO.
And he did.
They went completely nuts.
And they actually 10x the company over the next three, four years.
But did they, but this is not a 10x.
This is a thousand X.
A million X, right?
Let's at least, like the aspiration is there in certain people.
The ability to achieve it, et cetera, is there also.
This is a whole other
level. Nobody's ever seen this. We look at it and go, the guy's nuts. Yeah.
We know it's nuts anyway, but still. So can I ask a geeky under the hood exponential
organization question? Sure. A lot of the conversations that I have when I'm working
with people running companies these days is about dashboards: in an EXO,
what information actually needs to go into a dashboard that people aren't
looking at. So if you were trying to build a dashboard for this, for the moonshot, right,
what's in that dashboard that most people don't think about?
Okay, so there's a, there's a very, this is a great question because it becomes very difficult
to navigate this. If you're a true EXO and you have an MTP, I'll give you the example of TED, right?
When TED launched TEDx, okay? I'll do a little thought experiment which we put in that first book.
And Peter, by the way, you contributed massively to that original book, so I just want to honor that.
If he had just stood up and said, we're going to do 20,000 TEDx events in
five years, he would have lost the team.
You're barking mad, I'm quitting.
I can't sign up for that.
What he did do is say, we're going to have an MTP, ideas worth spreading,
we're going to let the community decide.
We're going to have a set of clear rules, and let's see where this goes.
A classic linear approach would have been, we're going to do five TEDx events this
quarter and 10 the next quarter and 15 the next quarter and 20.
If you added that up, you'd end up with about 2,500 TEDx events over that period of time.
Instead, by setting an MTP in a real-time operating plan where you're tracking metrics in real-time,
so that helps steer the ship, that's the rudder, he ended up with 20,000 TEDx events.
And nobody in the world would have guessed that that would have been the outcome.
You go from a single event to a global media brand at near zero cost.
That's the amazing part of what's possible if you can orient that way.
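Salim's linear-versus-exponential contrast is easy to sketch numerically. The quarterly ramp below is a hypothetical assumption of my own (adding five more events each quarter over five years), not TED's actual plan; whatever the exact slope, a linear projection lands far below the 20,000 events the MTP-driven approach produced:

```python
# Hypothetical linear plan: 5 TEDx events in Q1, 10 in Q2, 15 in Q3, ...
# over five years (20 quarters). Purely illustrative numbers.
quarters = 20
quarterly = [5 * (q + 1) for q in range(quarters)]  # 5, 10, 15, ..., 100

linear_total = sum(quarterly)       # arithmetic series: 5 * (20 * 21 / 2)
actual_events = 20_000              # the community-driven outcome quoted above

print(linear_total)                  # total events under the linear plan
print(actual_events / linear_total)  # how many times bigger the real outcome was
```

Under this assumed ramp the linear plan yields roughly a thousand events, an order of magnitude or more below what the open, community-driven model actually delivered.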
So your dashboard, probably with OKRs as the management structure and the performance structure:
the best models we've seen are an MTP with a one-year operating plan that's instrumented in real time.
You can't put the full five-year vision in it because people will just freak out.
They can't cognitively get there...
Keeping Elon in the news, it's been going on this past week.
It's Elon versus Sam.
It's the trial of the decade.
So Elon has accused OpenAI of breach of charitable trust and unjust enrichment.
He's seeking damages of $150 billion, plus reversion
of OpenAI to its full nonprofit status,
plus the removal of Sam Altman and Greg Brockman.
Judge Rogers blocked Musk's team from making the trial
about AI extinction, which keeps coming into the conversation.
I don't know if you guys have been watching it.
I've got a couple of points I want to make,
and then we can open it up for discussion.
The first is a bombshell in the last couple of days
where Greg Brockman's diary was fully disclosed,
so he kept a diary.
which seems kind of insane.
Who does that?
I no longer...
Who keeps diaries anymore?
So rare.
So it's, you know, people have called this the smoking gun.
In his diary, he goes on to say,
we truly want the B-Corp.
The true answer is that we want Elon out.
If three months later we're doing a B-Corp,
then it was a lie.
Can't see us turning this into a for-profit
without a nasty fight.
So he's basically, you know,
communicating that they're navigating
around Elon towards this for-profit plan.
The flip side of this is the damaging facts
that the trial brings out on the other side.
There was a 2017 equity email.
Elon's own team was negotiating to get equity in the for-profit
on his behalf, and Musk admitted that he didn't read
the fine print of transitioning to a for-profit.
He also admitted on the stand that xAI had
distilled its large language models from OpenAI's own models. And then Altman apparently did offer
Musk equity in the new company, but Musk turned it down, calling it a bribe. All right. I hate that we have
this going on right now, but it's front and center. Alex, do you want to kick it off? Yeah, a few thoughts.
One, aside from the point that I've mentioned previously on the pod, which is that this is going to make for an
absolutely amazing made-for-television movie someday, by Aaron Sorkin, or maybe go straight to cinema.
I think there is an interesting teleplay playing out before us about the future of corporate governance.
And I think this connects naturally with the previous story about Elon's own comp packages,
which is that in an era when it's possible to start a not-for-profit with a moonshot goal,
OpenAI was originally crafted with the goal of essentially front-running Google DeepMind to
AGI and counterbalancing Google DeepMind, which Elon and others perceived as potentially creating a singleton, a single superintelligence.
They were that far advanced.
That was his concern.
I had discussions with him around the time also this was happening.
So Open AI was originally intended in part to be a counterbalance to create a competitive ecosystem.
The problem is technology is advancing so quickly now that you can start a not-for-profit with a seemingly idealistic long-term goal,
but the dog catches the car these days.
If you set a moonshot goal for a nonprofit,
what happens when it actually succeeds
and the outcome is economically transformative?
So the teleplay that plays out is everyone actually,
after starting this not-for-profit,
actually realized we're going to catch that car.
We're the dog that catches the car,
and this is probably better structured as a public benefit corporation.
The other thing is nonprofits are not designed
to attract sufficient capital
and to maximize the return on capital,
obviously. And capital is still underlying, you know, part of the innermost loop.
Doubly so. And this is the same parable that also played out with Anthropic. When Anthropic
underwent its great schism and a number of folks, senior leaders from Open AI, left Open AI.
They were initially just forming Anthropic to be a pure alignment lab. And then they discovered,
well, actually, if we want enough funding for alignment, we need revenue. Well, if we need
revenue, we need a model and we need capabilities. If we need a model and capabilities, we need to
raise capital. Oh wait, we need a for-profit public benefit corporation. So I think that the
history, this sort of sad lesson we're learning over and over again, now being litigated
in Elon v. Sam and earlier resulting in Anthropic's formation, is that it's actually
possible to realize moonshots now. And as a result: don't do it as a not-for-profit. Do it as a
public benefit corporation from the start, and figure out the for-profit arrangement from the start
rather than fighting it all out in federal court after the fact.
Nice.
Do you have a...
That's hard.
That's easier said than done.
By the way, you know, our resident genius here always has the best takes.
I disagree, by the way.
Okay.
That's really hard.
Try setting up a B-Corp.
It's easy.
We set one up.
We flipped Singularity University from a nonprofit to a B-Corp.
It wasn't easy.
Friends don't let friends start nonprofits.
I know.
I will never start another nonprofit again.
Having a business model that enables you to earn capital,
so you're not begging for money.
I mean, the ratio of donation to investment is like 100 to 1.
Hey, everybody, you may not know this, but I've got an incredible research team. And every
week, myself, my research team, study the meta trends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week enable you to see the future 10 years ahead
of anybody else. If you'd like to get access to the Metatrends newsletter every
week, go to diamandis.com slash Metatrends. That's diamandis.com slash Metatrends.
Salim, you disagree. What do you disagree about? So two points. One, about the trial, I have no
comments. I think it's a lose, lose, lose situation. Everybody loses. It would have been way better
if they could have settled this in a separate way, but nobody wants to settle in that environment.
Regarding for-profit, non-profit, we're entering a post-capitalist society over time. At some point,
money doesn't make sense.
In which case, your moonshot can be a nonprofit,
and in fact, it should be a nonprofit.
The chase for money will corrupt the target in many, many cases.
The ones that are succeeding, Elon, for example,
really doesn't give a damn about money.
It's just the cleanest structure that gets you to what you need to achieve.
If a nonprofit would get you there, you'd do it as a nonprofit.
He'd also have less hassles about pay packages and all the rest of it.
And maybe eventually...
The assumption is, if you have a true MTP, then you don't really care about the money,
and it's not a motivating factor.
You're really interested in achieving the goal, in which case, whatever the cleanest structure that achieves that,
will get you there.
In this case, for now, capitalism, for-profit fine, you can raise money on the public markets.
You need that energy to get you going, but I think we're going to enter a point at some point,
I'll say two, three years where it won't make sense anymore.
Dave, this is the polymarket from this morning.
The polymarket gives Elon a 33% chance of winning this.
What do you make of that?
Actually, the headline on this is that Elon has a very slim chance,
but if you look just like a couple weeks ago, it's at 50-50.
Yeah, you can see it.
There's the time horizon.
Yeah, it's going to be like all these.
It's going to bounce around a lot.
But I said on the podcast a week ago, too,
that Elon doesn't need to win to win.
He doesn't need to win a settlement to win.
He just needs to slow down Open AI.
and their recruiting and their morale and their momentum
and all these documents coming out paint a really ugly kind of picture.
And so-
How do you reverse $122 billion investment that was just made?
Yeah, you'd have to distribute it or something,
but he doesn't care about that.
He's not in this for the money at all.
When you look at this, this is the trial of the century,
it's the trial of the millennium.
Like the outcome of this trial could dictate
a big part of the future of all of human endeavor.
And both of these guys know that,
AI is self-improving.
And it's on the cusp right now of exploding
into this superhuman exponential capability.
And the control of all of it,
these are supervoting structures.
So the control of all of that lands in just a few hands.
Now, if you said, well, these are just business guys.
Well, Sam Altman actually had a plan to run for office himself,
for governor of California.
These are not apolitical people.
You saw Elon.
Mark Zuckerberg started putting assets into a trust
so that he could get ready to run for office,
something he gave up on after he was in front of Congress. But these are not apolitical people
who are just sort of working on a startup. They're aware that this is the future of all of humanity
that's at play here. It's a great point. Steven, you're not playing in this field as closely as
all of us are. What do you think of this insanity? So I have a rule. If I'm not actively working
on a problem, then I don't like talking about it, because then I'm just gossiping. And as far
as I can tell, this is just gossip.
It is just gossip.
And I also, like, but I got to push back on your statement.
Like, you know my feeling.
I think most, like, I think comments like you just made about AI are absurd.
Like, are you working with a very different machine than I'm working with?
Because the machine that I work with, and I work with AI as a scientist, and I work with AI as a writer, and I do it on a daily basis.
You should use a paid model.
Dude, shit.
Nobody likes him much.
He doesn't have many friends in the room.
Yeah, so, like, I can't, like, I, I, really, I think AI is fake.
I, like, I hear you guys' comments, and, you know, Alex, I'll give you an example.
Alex, the other day on your podcast, you made the comment, and you think I don't listen,
that you went out and asked a bunch of CEOs, 10 CEOs, how long until an AI is smarter than your 10 best employees?
And their answer was, oh, it's there now.
And I listened to that and went, well, then their 10 best employees must be fucking morons.
Because my experience working with AI on a daily basis is this machine doesn't know shit.
I spend more time trying to teach it how to write than watching it write.
I spend more time fixing the AI.
And as a scientist, I think it's absurd as
well. I'm doing neuroscience with it, and I'm watching the AI, which is supposed to know every
fucking thing there is about neuroscience, miss huge gaps in huge fields. This is where it's a
convergent thinking engine. It doesn't do divergence. And this, I gather, is where you don't
agree with me. So, like, I think this is, I mean, it's the future of humanity. I'll take the
high road on this one. Okay. By the way, that's my opinion, and nobody much likes me.
I would say, taking the high road, I would say if that's your experience, you have almost an obligation to humanity to create benchmarks to encode this knowledge that you have that you think isn't being accurately or fully or effectively reflected right now in AIs.
If you were to create a Kotler bench that encodes all of your wisdom, all of your writing knowledge.
But it's not my wisdom. It's just basic writing skills. Let me take a quick poll here in the room.
How many people are in the camp of, oh, my God, this AI is not impressing me.
It's got challenges and so forth, versus AWG's camp.
Raise your hand.
If you can.
And how many people here are in the camp like, oh, my God, this is amazing technology and it's extraordinary.
Oh, I'm not telling you it's like extraordinary.
Just for those watching us, it was about 96% in favor of AWG.
Oh, wait, I forgot.
We had a live audience.
Sorry.
And 4% yes.
Okay.
I'm going to move this on.
No, no, this is an important conversation.
Okay, all right, go ahead.
One of the most important conversations we'll ever have, right?
I sit kind of in the middle of this because I think the technology is unbelievable.
We're going to do unbelievable things with it.
It's just going to take a lot longer to get to where we want to than we want, right?
And this is, I think, what Stephen's talking about to some extent.
The other part of it is, you've heard my beef about this, which is: we have no idea what intelligence
is. We have different facets of intelligence. There's about a dozen different facets of emotional
intelligence, the Eastern concept of presence or awareness. Some of us have musical intelligence or
spatial intelligence if you're an athlete, or linguistic intelligence. If you're a business leader,
you're using emotional intelligence a great deal when you're making business judgment calls,
and that's just not in the game for an AI right now, which is very analytical and numerically
driven. So there is definitely a point where I fall apart on this, when people go, AI will become
smarter than human beings. What the hell do you mean by smarter? And there are 14 definitions of
AGI at last count. Which one are we talking about? Right. And so this is the huge, we have such a
huge gap. It's amazing that we can raise hundreds of billions of dollars for a concept that we can't
define, measure, or test. But other than that, that's great. Now, no question that it's making massive differences
and our ability to navigate cognitive tasks will absolutely get there.
But I would be kind of in the middle here because Stephen has a point,
the technology is not where it could be,
but I can happily see how we can get there pretty quickly.
So I'm kind of in the middle.
I just need to make one point before Steve makes a tragic prediction error.
Oh, I don't make predictions.
I leave the future stuff to you guys.
I'm just a reporter.
I just want to make the point that if you look at the benchmarks that Alex was
referring to for AI to self-improve, very hard math problems,
like Humanity's Last Exam, are the trigger of self-improvement.
It's not a great writer, and it's definitely not funny.
But if you said to me, well, it can't possibly
build a better AI algorithm until it's funny,
that intuitively might make sense.
That's not true at all.
I spent six years as an AI researcher.
I can tell you, all you need to do is tweak the algorithms,
try different parameters, try different transfer
functions, and do it iteratively at very high rates, and the algorithm just magically improves.
It's more like evolution than writing a book.
It's like looking at, you know, the furry little mammals after the asteroid struck
66 million years ago and saying they're never going to be able to write code.
But let me, let me, here. So here's the counter to this, and this I think is the difference between
coming at it as a writer or an artist or a creative versus coming at it as a coder:
it hasn't gotten better.
As a writer.
As a writer, it has not improved.
In fact, I've seen it get worse.
And I'm not just talking about one model.
I'm talking about most of the models.
Compared to what? Compared to GPT-2 four years ago?
Compared to GPT-3, I think. Like, honestly, well, first of all, ChatGPT is so much worse now as a writer than it was a year ago.
When we started this book versus now, it's got worse.
I don't actually write books with it, so I actually don't even know.
But I do use all the top-tier models to design new neural network tests and experiments.
So exactly the self-improvement problem.
And every dot release is dramatically, shockingly better.
I can literally go to it and say...
I'm not saying you're wrong.
I'm looking at the same data.
You are.
I'm seeing the same improvements that people are saying, right?
I see the claims.
But my daily experience, and I'm both as a scientist.
And now, as a scientist, it's amazing what it can pull together.
But what's more amazing to me is the really obvious stuff it misses.
Like, as a neuroscientist, one of the reasons, one of the ways I built all my organizations is I saw gaps in neuroscience that needed to be solved with people from multiple disciplines, right?
So I pulled in embodied cognition, and I pulled in all of that.
And the AI still can't do any of that.
Let me give you another thought experiment, just to speak forward.
Because people underreact, and it's more dangerous to underreact than to overreact.
But if I go to somebody and I say, look, here's Claude 4.7 and here's
Steven Kotler. Let me give them both this very difficult writing task: write something
really entertaining in the next five minutes. And you just kill it, you crush it.
You go, great, I'm better than Claude 4.7. Now you say, Claude 4.7, I want you to write it a
billion times, and then I'm going to put a process on top of that to select the very best
article. You would normally, as a human, say, that's cheating, that's bullshit, that's cheating.
But when you're talking about AI self-improvement, it's perfectly fair.
The bitter lesson. Perfectly fair. Okay, that's fair.
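Dave's "write it a billion times and put a process on top" is best-of-N sampling. Here is a minimal sketch in Python; `generate` and `score` are placeholders standing in for a real model call and a real selector (a reward model, rubric-grading LLM, or human votes), not any actual API:

```python
import random

def generate(prompt, seed):
    # Placeholder for a model call; returns a stub draft whose
    # quality we simulate with a seeded random score.
    random.seed(seed)
    return {"text": f"draft #{seed} for: {prompt}", "quality": random.random()}

def score(draft):
    # Placeholder selector ("the process on top"); in practice this
    # might be a reward model or preference judgments.
    return draft["quality"]

def best_of_n(prompt, n=1000):
    # Sample n candidates and keep the one the selector ranks highest.
    drafts = (generate(prompt, seed) for seed in range(n))
    return max(drafts, key=score)

best = best_of_n("a really entertaining article", n=1000)
print(best["text"])
```

The point of the exchange survives the simplification: the per-draft generator never improves, yet the selected output gets better as n grows, which is why "that's cheating" for a human is perfectly fair for a machine.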
Stephen, I would just, again, encourage you as a scientist, as you say, to be rigorous in how
you measure progress or lack thereof. If you're going to assert that there's been a regression
in terms of creative writing capabilities, for example, create your own benchmark and then
show the world. Show the world that there's been a regression.
And I will promise you, if you construct it well, you will get the frontier labs interested in including your benchmark in every eval suite.
As an optimization function.
That's right.
And you will get your better creative writing.
Salim, do you want to have a final point here?
Two points.
One is, my very first prompt in ChatGPT was: rewrite the Bible, Genesis chapter one, as a rap song.
And it completely blew my mind.
It was something I'd never done.
I remember that.
You sent that around. I remember that.
But I will also say that I know you very well, Stephen, and you're such a profound writer,
I could completely imagine that no AI would satisfy anywhere near what you can do individually.
Now, if you take Dave's approach, yeah, we'll get there, right?
An AI representing me right now in this chair would be way better than me,
just because it would remember everything I've ever said, every anecdote I've ever used,
every metaphor I've ever used, and bring up the right thing at the right time in a way that I just can't
with my stupid one-liter brain and napkin-sized neocortex.
But let me ask you a question.
How is it going to do that?
Because if I put, for example, this whole book into ChatGPT or Grok, or take your pick,
it's got a working memory that's about 2,000 words long.
It can't even remember.
No, it's a million tokens right now.
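The disagreement here is just arithmetic. A back-of-envelope sketch, assuming a rough 1.3 tokens per English word (an approximation; real tokenizers vary by text) and a typical 90,000-word book (an illustrative figure):

```python
TOKENS_PER_WORD = 1.3  # rough heuristic; actual tokenizers vary

def fits_in_context(word_count, context_tokens):
    # Estimate the token count from words and compare to the window.
    return word_count * TOKENS_PER_WORD <= context_tokens

book_words = 90_000  # a typical nonfiction book, for illustration
print(fits_in_context(book_words, 2_000))      # prints False
print(fits_in_context(book_words, 1_000_000))  # prints True
```

A ~2,000-token working memory cannot hold the book, but a million-token window holds it several times over, which is the gap between the two claims in this exchange.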
Yeah, I hear what you're saying, and I'm telling you, my experience of working with it as a writer is not anywhere close.
I think, again, you're such a profoundly amazing writer.
This is not a story about Stephen being an exception.
We clearly have a point of view, and I love...
Let's go on.
But I'm going to move us on.
All right.
This is why they don't let me have a boot shot.
Sorry, often.
And by the way, it may be gossip talking about it,
but it's gossip about the people who are critically at the center
of this ecosystem that is going to impact humanity for the next millennium.
And it's important to understand it.
Okay, but let's move into robotics.
This is a story about Figure AI, run by Brett Adcock,
which is scaling its manufacturing.
Dave, you and I had a chance to go and visit and record a great pod with Brett.
By the way, Brett's agreed to come and be one of our opening speakers at the Abundance Summit
in March of 2027 and bring the Figure robot with him. Let's take a look at the Figure robot in this video.
So, in all honesty, who's first thinking: is it a deepfake or is it real? Right? I mean,
we need to flip that in our minds. It's a real video. They've increased production
from one robot per day to one robot per hour. And their target is 100,000 robots between now
and 2030. So I'm just showing you Figure because it's one of the best funded; they've raised at,
you know, a 30-plus billion dollar valuation. My fund, BOLD Capital—
Are we an investor in it at Link? Not yet.
No, no.
We didn't get in.
Here's another—but we're going to be. I mean, one of the decisions you've made, Dave,
and we've talked about, is the fact that the window for the hottest AI companies
is going to be closing and the window for robotics will be opening.
Well, so we're really gearing up for robotics, actually, because software has a limited life left in it.
Because AI writes software so, so, so well.
Robotics and data centers will go on for a decade or so, maybe more.
Here's 1X Technologies.
Again, Dave and I had a chance to go meet with Bernt Børnich, and this is their robot called NEO.
They've just opened up a new manufacturing facility in Hawthorne,
California: 58,000 square feet.
Production goal is 10,000 robots this year.
Bernt has promised me a robot by this fall.
We're gonna keep them to it.
So, Bernt, if you're watching, first of all,
friend of the pod, thank you so much.
I'd love my NEO, whatever version of NEO, Gamma,
whatever's coming out.
100,000 in 2027.
Shipments expected later this year.
Let's check out their video.
Now, that guy's job is very limited.
A lot of humans in that video.
Yep, doing very simple things.
That was the eye opener.
We went to the Gigafactory, and Elon was saying,
look, the robots are going to make the robots.
I'm like, I can't imagine this thing making tiny little parts.
But I didn't get it until we got to the Gigafactory.
Everything is already automated.
The robots assemble the automation equipment.
It arrives in like Amazon boxes.
You take it out of the box.
You put it on.
You turn a screw.
Now you've got a new manufacturing line. So the bottleneck is the humanoid actions. Everything else is already automated.
Peter, I think this is our answer, by the way. You want to know what, like, you want job security
over the next 25 years? Be a robot repairman. Yeah. Because they can only do that. We literally just
invested in a company, literally a company figuring out how to repair robots. But there's an entire
ecosystem of businesses around this, like insurance, right? Now, the third story in this,
this triplet
is Elon predicts
1 million optimist production by 2030
Tell me how this isn't just
a play to get investment money
Well it probably is
To get people to buy his robots
Before cheaper Chinese robots show up
Every time he opens his mouth and makes a statement
About robotics
I think he's just trying to sell it before China gets here
The guy has been able to scale Tesla
To be the dominant auto manufacturer
On the planet
And bring the price down
at a rate like nobody else.
And these robots,
which, you know, he said publicly,
and I agree with him, that Tesla,
you know, a decade or two from now is going to be known
as the production company of Optimus robots,
not cars.
And Mark Zuckerberg announced that his company was going to become,
wait, what was it called?
Meta.
It was about the, what was that place called?
The Metaverse.
Has anybody been there yet?
Have you met my sarcastic friend here?
Yes, that's right.
I'm just pointing out reality.
Yeah.
Well, that's the way entrepreneurship fundamentally works: you declare a target, you evangelize the target, and then you attract the talent and the capital necessary to hit the target.
If you choose your targets wisely, they're achievable targets.
If you choose them poorly, you're a bad entrepreneur.
That's just that simple.
But if you don't declare the target, you know, Boston is famous for this actually.
Being very conservative and not even announcing something to the world until it's proven eight ways to Sunday.
It doesn't work.
It is working less well over time because exponential change is on this accelerating rate and so much more is possible.
But nobody cares that Elon stands on a stage and lies?
Look at Neuralink.
He's directionally correct.
Neuralink is vaporware.
We're going to, we're going to be able to link humans to the internet?
No.
There are foundational neurobiological problems with Elon's plan.
And how do we know?
Because everybody's quit his damn company and gone out and founded different neurotech companies, because his way is
so absurd, and nobody cares. I'm sorry, I don't think you get to lie in public in the
technology space. "I'm starting 17 companies. I am an entrepreneur." One of the best things about capitalism
is you can have the market fight it out. So yes, there are multiple BCI companies.
I have a unique perspective here. I can see the steam coming out of Alex's ears.
Alex has been really nice to see so far. I think this falls under the category of don't feed the
trolls.
You know, touché.
Can I give you my beef on this?
Please.
Okay.
You've heard me rant about the humanoid side of it before, but I actually have a rationale for this.
Robots are really good at repetitive tasks and navigating constrained environments and not making mistakes
when they're doing, like, the guy putting a screw in the thing.
When you have a repetitive task, you don't need a humanoid robot.
A humanoid is designed for very variable, very
adaptable environments, which are not that repetitive, and that's completely counter to where you'd use a robot.
Look at the numbers right there on the chart.
Wheeled robots, drones, etc., etc.
It goes to the form factor, and you've heard my standard rant: at least give it another pair of arms.
Every time anybody ever holds a garbage bag open, you need a third arm.
And why not give it that, for God's sake?
So I'll end right there.
So I think there's a spectrum here of how repetitive and robot-like the task is
versus how adaptable and human it is.
If it's a very adaptable human task,
you don't need a robot.
And if you have a repetitive thing,
then make it look like the thing it needs to be,
like a wheeled robot or a dishwasher,
which is really good at washing dishes at scale.
So I'm gonna add one piece to the mix
and ask you, Alex, to comment here,
which is the prediction that comes both from Elon
and from Brett Adcock, of a target of up to 10 billion
humanoid robots by 2040.
Do you think that's totally sane?
Well, with humanoid robots, the one thing that Salim didn't point out, that I sort of disagree with, is we are going to need them for taking care of aging humans.
Because, like, one thing: while longevity escape velocity to me seems absurd, are we going to live a lot longer?
Hell yes.
Don't ever call me again.
I know.
I know.
I know.
But you need humanoid robots for human form factors, right?
Like, the early ones were designed because we built nuclear reactors
where we had to send in disaster robots, right? The form of a nuclear reactor
was built for humans. So with the early disaster robots, we needed them to go in there and do stuff
because it was a human environment. We're bringing robots into our lives to take care of ourselves.
That's a human form factor. So when you're dealing with human health as a driver
of economic development, we put money into that.
That's a very valid point.
Yeah, I hear you.
This is the counter argument.
Alex, can you call him a troll on this one?
I can tie this up in the following way.
So just for some background information, right: there are on the order of 1.6 billion cars on the road today, right?
And 100 million cars manufactured per year.
And if you look at the iPhone production rate: in year one, the iPhone had 1.3 million devices,
and now we're up to 250 million iPhones produced per year. What do you think about these
numbers that Elon's projecting, a million by 2030 and billions in the 2040s? It's probably right.
I've done the same extrapolations. Others have. You extrapolate the humanoids curve here,
and you find that in the early 2030s, the number of humanoids is predicted to cross the number of wheeled
robots. I would assume there's relatively little alpha left at this point in betting against
humanoids passing humans by the end of the 2030s.
What I think is perhaps even more interesting is what does a post-humanoid robot form factor look
like? Salim, I know you like to add lots of arms. That's your...
Wheels, anything. That's your favorite body shape. But I think there are many problems. I think—
He's Indian. They have these gods and goddesses with all these arms.
Thank you.
That's all that.
That totally makes sense.
You see why.
It's been predicted for a long time.
Remember, Shiva is a destroyer.
Let's just point that out.
But I would point out there are more exotic body shapes that I would expect to see by 2040 that will make human, humanoid shapes look positively prosaic.
I do expect by 2040 we're going to get our nanorobots, for example, our Drexlerian nanorobots.
And I would expect many, many trillions of those to be in our solar system.
And I think we'll look back from 2040 at this prediction of 10 billion plus, and we'll laugh at
ourselves: this will be the equivalent of the 1950s predicting atomic vacuum cleaners to help housewives.
Well, of course we're going to have 10 billion plus humanoids. But what about
the nanorobots? Yeah. Yeah. Welcome to the health section of moonshots brought to you by
Fountain Life. You know, my mission is to help you use the latest technologies, including
AI, to not just do your work at home, teach your kids, but to help you live a long and healthy
life. I'm here today with an extraordinary physician, the chief medical officer of Fountain Life,
Dr. Dawn Mussallem. Dawn, let's talk about cancer. You know, I know from the member database that
we have at Fountain Life: our members come in thinking they're healthy, and it turns out 3.3% of them
have a cancer in their body they don't know about.
That's right.
You know, the majority of cancers that we screen for
aren't necessarily the ones that are taking lives when found at a late stage.
We know that when cancer is found early, the chances for cure are much higher.
We know it's much easier to treat a cancer when found early versus when found late.
What we're finding in our members is over 3.3% were found to have these cancers
that otherwise wouldn't have been found or detected.
Yeah, you know, it's interesting.
You don't feel the cancer until stage three or stage four.
And if you don't know what's going inside your body, it's like driving your car with your eyes closed.
And you can know.
And so when members come through Fountain Life, how do you detect cancers?
So we're doing full body MRI.
And we also do early cancer detection screening.
This is very, very important.
And these are not typical tools used in the conventional care setting when it comes to prevention.
This is a hard thing, because currently these are not studies that insurance would yet be covering,
but the goal is to collect these numbers, do the research, and work hard to democratize wellness.
Yeah. So at the end of the day, you can know what's going on inside your body. It's your obligation to know.
So check out Fountain Life. You can go to FountainLife.com/Peter to get access to the latest technology to help you detect cancer at the very beginning, at stage one, when it is curable, before it gets to stage three or stage four, when it's in your body to hurt you.
Okay. All right. Let's move on to China, which, by the way, is
a centerpiece of robotics.
And so this is a ruling of a Chinese court
that AI is not legally allowed to displace employees.
So here's the story.
This is in Hangzhou.
There was a case of a tech worker named Zhao
who reportedly was earning 25,000 yuan per month.
He was offered a demotion to 15,000 yuan per month after AI
automated parts of his role, and the court sided with him after he refused to accept it.
The court's logic was simple: adopting AI was a business choice, not an unavoidable external shock
that made the employment contract impossible. So Chinese courts have reportedly ruled that companies
cannot simply say AI can do this more cheaply, therefore we're going to fire you.
So the question ultimately, the framing of this, and we've had this conversation, is: if you're an
employee and you can employ AI to do your job better than you, or do your job while
you're working out, or do ten times your job overnight, does the economic value of
that AI's abilities accrue to the company, or does it accrue to the employee? Two
very different outcomes for society. Or to the state. Or to the state.
And by the way, there's also, you know, this whole idea of UBI: does the government pay UBI?
We've seen Sam Altman recently talk about the fact that everyone in society should have a piece of the
AI ecosystem.
We can get into that.
Who wants to take this up?
I'll just comment.
There's a bit of obvious historic irony here.
China has come full circle from communism to anarcho-capitalism back to communism again.
And it's maybe just a bit ironic that China is
leading the world in terms of, call it socialism with Chinese characteristics or communism
with Chinese characteristics for how to handle AI disruption and AI disemployment.
I think this falls under the category of: the most ironic outcome tends to be the one that we live in,
where China is setting global standards for how to deal with AI disruption to employment. It's a very strange future that
we live in. Salim, you must have an opinion. So, three points. One is, this is clearly an
artifact of the massive breakdown in the social contract, right? We have no idea. And the second
point would be that all of our labor laws are based on human labor. And as we move to AI agents,
it's going to be about task management, AI coordination costs dropping radically, and execution
costs dropping radically. And we're looking at that. The third thing I would say is, what this
indicates is the broader transformation at play.
You often see, when you have these broad transformations,
these nonsensical things that occur,
and it's actually an indication of the broader transformation at play.
I remember Paul Safo pointing out that the coal museum
in Kentucky is using solar panels.
And there's like this surreal irony around that
that actually indicates there's a broad transformation in place.
So when we see these, it's actually a signpost
that we're in for a radical transformation.
And this is something we've been talking about a lot.
How are we going to navigate the social contract?
The very first two questions from the audience earlier were exactly about that.
How do we navigate this?
And this is the problem we're going to have to deal with over the next 10, 20 years.
It's really interesting to think how different China is from the United States
and how many other ways to govern there could be that we haven't explored.
The variety is just mind-boggling, because China's 80% optimistic about AI.
The U.S., in particular, is 25%, I think, optimistic, and majority pessimistic.
I just love the technology.
I just think it's very limited compared to what you guys think.
But in China, they're also totally comfortable with being spied on constantly by the government.
Cameras everywhere.
And over here we just don't admit it.
We don't admit it, but we also don't like it.
And whether or not we admit it, it's happening, but we fundamentally, at our core, don't like it.
Can I ask you a question about this, in terms of China versus
the US? If it's China versus the US, and there's really this AI war cooking that everybody seems to think there is, didn't they just say, hey, we're not going to participate in this? Because, I mean, didn't they just tie their legs together in the middle? I mean, like... No, no, they're adopting AI and robotics like you wouldn't believe. Oh, no, I get that. But they're doing it... Like, this is going to hamstring the speed of development.
Well, the problem is they have a diminishing population, right? They're below the replacement
level by a lot, and keeping people employed, keeping them having meaning and not revolting,
is an important part of the equation here.
And in China, people are dropping out of the workforce at an insane rate.
It's a real crisis.
And so if China said, you know what, no one could get fired because of AI, it wouldn't
change the adoption rate much at all because they're growing with this declining
workforce anyway.
And so they could easily pass that law without it having a huge impact.
And it would actually very, very much smooth out the societal unrest.
There's also huge.
You've heard me talk about this before.
I really disagree with the whole framing of AI, China versus the U.S.
I think it's a nonsensical thing.
I mean, I remember spending a few months traveling around China when I was younger,
and my conclusion at the end of it was that the Chinese people are so entrepreneurial
natively that you need communism to put a lid on it.
So you have these cultural artifacts where you actually need it to navigate and manage the
population in an effective way rather than it being some huge ideological thing.
Alex, right? Yeah.
You don't know how to respond to that.
I think we need to...
Alex, speechless for the first time ever.
I'm a little bit.
No more about grandmothers, okay.
Just up level just a little bit.
Yeah, I don't buy the premise that China, through internal policies, is somehow
hamstringing themselves in a race for AI supremacy against the US.
Parenthetically, the race to AI supremacy, I think, is perfectly real.
It's not fictitious.
It is a real arms race.
And it's a real arms race not just for AI itself, but for what comes after AI.
When we have superintelligence, that's going to unlock so many, I think, transformative
scientific innovations and discoveries and inventions.
And that's really what the prize is.
Yeah, we haven't seen anything yet.
That's right.
If there's new physics, for example — I have a financial interest in a company, a physical-superintelligence company,
that's working on superintelligence for unlocking new physics and new applied physics.
That alone is a prize worth an international arms race over.
And as to the claim, or the assertion, that somehow China,
by introducing policies that are notionally worker-friendly in terms of AI substitution for labor,
is hamstringing themselves: I think it's preposterous.
There are a variety of other industrialized countries, including Germany,
that have policies of favoring small and medium-sized businesses when it comes to automation.
I think China, if anything, this is a way from an internal policy perspective for China to ensure a certain level of human employment
while also creating state incentives top-down for creating new types of jobs that are AI adjacent.
I don't think they're tying themselves up.
Also, they manage the PR of AI through this.
They have high enthusiasm about AI.
They want it to be high.
It's no different in some sense from our government constructing policies
requiring that hyperscalers pay for their own electricity supply.
No different, except it's at the consumer level rather than at the enterprise hyperscaler level.
Yeah, I guess the problem is that in the US we sell ad views.
Media sells ad views. And the way you sell ad views is by creating crisis and worry and it's all over your book.
Peter, you know this.
In China, they don't have to deal with it.
In China, they don't have to deal with that.
So they want AI enthusiasm.
They want robotics enthusiasm.
Guaranteeing employment is a good way to get people on board with the mission.
But they need the robots like you wouldn't believe because of their aging population.
All right.
I'm going to move us forward.
This story came out this week.
Demis Hassabis, the CEO of DeepMind, says AGI may not need a major breakthrough.
He says: I think there's a 50-50 chance that we still need a breakthrough,
maybe in world models, but my bet is still strongly on foundation models because of how successful
they've been. Alex, your takeaway? I chatted with Demis 10-plus years ago at this point,
and at the time, he thought it was five plus major breakthroughs that would be needed to achieve
AGI. Now we're down to zero or one. And I don't disagree with Demis. I've argued in the past
that we achieved AGI in the summer of 2020, no later than, with the publication of "Language Models are Few-Shot Learners," which was the GPT-3 paper.
We know, I would argue, what AGI is, and we know how...
Please define it for me.
That's an insane statement.
Like, you, I, no.
Welcome to Moonshots.
It is the narrowest technology in the world.
You ask it to be general about anything.
It's dumb.
Stephen, the shoe's on the other foot now, isn't it?
Are you working with a totally different machine than everybody else?
Like, do you have secret powers that you're getting a different AI?
I think we should let him just self-combust on that assertion now.
And I think what you meant was the seeds of AGI were born at GPT3.
I think it's just incremental improvements after the fundamental discovery, which I think...
Just like DNA and RNA and, you know, Darwinian evolution were the seeds
that led eventually to humanity.
It was born then and progressed.
I would make maybe even a slightly stronger statement,
which is the notion that general intelligence
is capable of carrying out a variety of human level
or near-human level tasks based on the fundamental discovery,
again, discovered no later than summer of 2020,
that you can achieve general intelligence
by taking general human knowledge and compressing it.
That was the fundamental discovery underneath large language models.
We realized as a civilization, certainly no later than 2020,
that you could carry out a diverse and general range of tasks
with models that had compressed human knowledge.
That's all that we need.
Everything else, the discovery of refinements of transformers and everything.
It's just incremental improvements to that fundamental discovery.
Dave, any thoughts on this?
I think the psychology of Demis and Dario,
who are research scientists at their heart,
who've been working on AI their whole careers,
versus Sam and Elon, who are entrepreneurs at heart,
that moved over to AI.
Very, very different psychology.
A lot of the career researchers are looking...
And by the way, they're good friends.
They're good friends.
And the research guys, Dario and Demis, are really good-natured.
Not Elon and Sam, though?
Well, no, they're the worst enemies.
But Dario and Demis got into the field
specifically because they're worried about the future of humanity
being a paradise and not being dystopian.
And so they're very good-natured people at their core.
I think a lot of people who work in AI research, though, are looking for relevance now.
Because the transformer architecture that's conquering everything is so simple
compared to everything they thought of and worked on for all these years.
And so there's a bias towards saying, God, I hope there's more to it.
But when the entrepreneurs, when Sam and Elon look at it, they're like, I don't care.
This can manufacture anything, can cure all disease, can take care of the elderly; it can do everything we need to build a human paradise, as is. I don't
care if it does the last remaining human thing. What's even the point of trying to knock off
the last human doing something unique? Why are you even going after that? This morning,
Yanni and I toured Lila Sciences here in Cambridge, right? And Lila's run by Geoffrey von Maltzahn.
It's extraordinary. They're building what they call a scientific superintelligence. And the way
that they're doing this is they've built these science factories and we toured these factories.
And what you see there is robots basically running science experiments doing the pipetting,
moving things around, picking stuff up. And their AI will propose a scientific theory and then
design the experiments and then run the experiments, gather the data, update the theory. And
imagine running this, you know, a hundred times faster, a thousand times faster than any graduate
student. And they're mining nature for data to create these models, and you can just see it's about
to take off. And going back to what Alex said, this is the moment in time we're at. You know, if you
think the economy is going hot and heavy right now, you ain't seen nothing yet. When you solve
physics and chemistry and biology and materials science, you know, hundreds of trillions are going to be
unlocked.
This is the part that gets everybody.
What do you mean solve?
What a crazy statement is that?
What you've solved physics?
Haven't you read our book?
Go to solveeverything.org.
I know, but it's, I still think that term is problematic.
Look, the part that gets everybody that watches this podcast, everybody in this room,
and all of us, like, giddy excited about this is the opportunity to use this technology to
solve problems that we never solved before, that we never even came across before.
So that's the part that's fantastic about it.
We have definitional issues, fine, but if we can solve cancer, which we'll solve pretty soon,
et cetera, how can you not get excited about it?
And that's the counterpoint.
I mean, the number of people who have come up to me and perhaps to you guys as well
saying thank you for giving us an optimistic view of the future to be hopeful for.
Oh, I've got that one I got this morning.
Yeah, if you could read that one in a moment.
But I just want to say thank you for the feedback, guys.
We really care about it, right?
I have, I jokingly say, you need to be unemployed or retired to keep up with all this stuff, working 80 hours a week.
And I know all of us spend a huge amount of time.
I get a daily feed every morning from...
Or be non-human.
Or be non-human.
From Alex. Skippy is just feeding me stuff.
And I'll spend 30, 40 hours a week just consuming, trying to understand what's going on,
to make sense of it. So, Dave, you want to read that?
This was just posted in the show notes by Ashley Gaunt, who's a dentist. She said,
Peter and the mates, I really thought I would never become an entrepreneur because I didn't have any ideas of how to turn my knowledge as a dentist into a digital business.
I finally did what you kept advising and brainstormed with my AI, and boom, boom, all caps.
Idea sorted, plans in place to make a real difference to preventative health care in general.
I have no idea what the business plan is; she doesn't say.
This is insane.
I have gone from brainstorming an idea with AI from scratch to vibe coding a first iteration of an app and creating a business plan,
which clearly defines a path from idea to monetization of a product in a single afternoon.
I didn't even realize I had an idea until I started my discussion with my AI,
and it just found my passion by asking a few questions, and now we're prototyping.
Cannot believe this.
What's her name again?
Ashley Gaunt.
Ashley, congratulations.
Thank you for sharing that.
And everybody watching again, we've had all these debates.
Can anyone be an entrepreneur?
And the answer is if you've got passion,
if you're willing to bring your purpose mindset
and curiosity mindset and dive in, I think the answer is yes.
I think the answer is yes.
I think so too.
I think there are a lot of people who post,
you have a lot of haters out there, but they'll post.
You guys are crazy.
Not everybody can be an entrepreneur.
But I think Ashley's story is going to be the more common
story. The AI can help you brainstorm and figure out your role. And also, I think the big AI labs are going to start being very active in promoting things that'll help. They know where the gaps are and the training data and the market and the, you know, the apps. So they'll push out a lot of idea flow right through their AIs.
All right. Also this week, my dear friend Ben Lamm, the CEO of Colossal, along with George Church at Harvard Medical School, announced their sixth species: the blue buck.
So the blue buck's been extinct for 200 years.
It was hunted to extinction.
And right now they're in genome editing and lab testing.
They expect to be able to bring back the blue buck in 2028.
This is species number six out of a confidential pipeline of 15, which is amazing, by the way.
Their current species include the woolly mammoth, the thylacine, the dodo bird, the moa, the dire wolf, which is back, and the blue buck.
So, Stephen, we talked about this a lot in the book, We Are as Gods.
Yeah, we did.
And one of the things that was a fun part of the book that you wrote was all of the future jobs that didn't exist and are coming into existence here.
What are your thoughts on the story?
I mean, I love this story.
The only thing I wonder with the de-extinction technology, and I worry about this, is that it makes people think
the environmental crisis is less than what it is, right?
Species die-off rates are like 12 to 1,300 times greater than baseline, greater than ever before.
Just because they brought five back doesn't change that, right?
That's one comment, just a random one.
The one that is interesting to me is, you know, when Ben was starting this company,
the idea was, when are we going to have a Jurassic Park, right?
And everybody, you can't really bring back dinosaurs from DNA preserved in amber, unfortunately.
So the idea of a Jurassic Park is far away.
But if this is number six, what I wonder is how long until we're actually going to be
able to take our kids someplace where, like, look, these are 100 animals that didn't exist
20 years ago.
And I think that's coming a lot faster.
And so that's interesting to me.
It is.
It is.
You can scale this.
And right now, what are the limiting factors?
It's getting access to the DNA.
And by the way, when you're bringing back an extinct species,
it's not actually the extinct species.
It is an approximation of the extinct species.
You're grabbing snippets of DNA across time,
and you're reconstructing it.
Okay, let's...
They're hybrid creatures that have never existed before on this planet.
I would just like to point out, as point three:
what could go wrong?
I mean, the unintended consequences here are at least, you know, worth asking about.
I think there's a broader program here that we're just seeing the very beginning of.
If you look back, there were strains, and I've spoken about this on the pod in the past, of Russian cosmism; the Russian cosmist philosophers, like Tsiolkovsky,
spoke about humanity's common task of taking every human who's ever lived and finding ways,
using technology, to bring them back to life.
And I think starting with extinct species
is just a special case of what technology
I do think will enable us to do,
which is to reach back into our past light cone,
take every species that's ever lived,
bring it back in whatever form,
even if it's as a hybrid or as a computer simulation,
and also reach back in time to our past light cone
and take every human who's ever lived,
not just at the species level,
at the individual level,
and computationally or using other advanced techniques, reconstruct them.
I think that's a far more exotic and interesting future than just Jurassic Park petting zoo for dinosaurs.
Imagine bringing back every human who's ever lived.
Yeah, amazing.
And those are the kinds of conversations and imaginations that one can have in this exponential singularity that we're living in.
This episode is brought to you by Blitzy, autonomous software development with infinite
code context. Blitzy uses thousands of specialized AI agents that think for hours to understand
enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with
the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan,
then generates and pre-compiles code for each task. Blitzy delivers 80% or more of the development
work autonomously while providing a guide for the final 20% of human development work required to
complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy
as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native
SDLC into their org. Ready to 5X your engineering velocity? Visit Blitzy.com to schedule a demo and start
building with Blitzy today.
Gentlemen, found this article fascinating.
GLP-1s generated more revenue than OpenAI and Anthropic in 2025.
So, Ozempic, a single peptide, and Mounjaro, a dual-acting peptide, outperformed OpenAI
and Anthropic by 2.4x.
Yeah, can I call bullshit on this one?
Okay, I mean, talk about apples and oranges.
What do these two have to compare?
I mean, they're completely orthogonal things.
And why are we bothering trying to figure this out?
Both are great.
So I'll field that one.
I think the argument goes that the GLP-1s are ultimately health span drugs, human health span drugs.
And so what we're seeing here in some sense on a single chart is a race between whether there's more revenue in enabling human actors, human labor, to be economically productive
in the economy through increased health span on the one side,
or whether capital is more efficiently and effectively invested in AI labor on the other side.
And I think the outcome of these two respective curves determines whether the future light cone of our economy
is ultimately dominated by AI, something that resembles AI or something that resembles biological humans.
Biological software and digital software.
Sure.
All right. I mean, these drugs are biological
patches on top of our existing operating system.
I thought you put them on the same graph just so we could save time.
That's what I thought when I saw this.
Because it doesn't occur to me as that relevant.
Well, the question is, what are people paying for?
I think the broader, there's a bigger picture here, which I'll just speak to for a second,
which is a huge demonetization that's taking place.
There's a related story to this, which is that the number of food-shipping truckloads
is dropping pretty radically in the US. The current thesis and conclusion is that
because of all the GLP-1s, people are consuming a lot less food, which is actually really great in one way,
but it's going to cause some serious issues in the logistics world.
I think the story within the story is that both products are completely and totally sold out.
They're selling as fast as they can make the product.
It's just a lot easier to ramp up drug production than GPU production.
Right. I mean, these companies, the GLP-1s right now, it was estimated that they could be as much as 500
billion, but they're limited by physical supply.
Yep.
And the same on the AI.
Any senior at MIT will tell you, I cannot live without using AI every day.
I could never code again.
I could never research again.
I mean, I wake up in the middle of night and talk to Skippy about some question I
dreamt up and just it's there in the morning.
I mean, honestly, we talk about three and four day work weeks.
I'm doing a nine and ten day work week right now.
Yeah, exactly.
It's so funny, that disconnect by age, too.
But I mean, they know what they're talking about.
They've been using it since it was invented.
You cannot be productive without it.
So the idea that they would ever not be able to use it
because there isn't any supply,
that would be earth-shattering to them.
So they'll buy it for the rest of their lives at whatever the price is.
So if you add a few more graphs like this,
you're essentially graphing the singularity economy.
If you're working out and you have Ozempic, you're making progress, you're inspired.
You feel like you're getting somewhere.
Yes, yes.
When you're working with AI, the quality of happiness while you're working is astronomically
higher than if you're just coding all night.
And coding all night is fine, too.
I did it for many years.
But when the AI is there and you're brainstorming, your productivity is so high, and the energy
level it creates.
You get dopamine from the immediate feedback.
But this is also the problem with AI.
Because when you get too much dopamine in your system, we've all been around people
who do cocaine.
We don't think they're geniuses.
I haven't been around that many people.
They think they're geniuses.
You think.
You think.
They think they're geniuses, right?
You don't.
And this is one of the problems with mania,
and this is directly in flow work.
Flow produces a tremendous amount of dopamine,
and dopamine produces a tremendous amount of ego inflation
and a bunch of other things.
So people create fast feedback loops with the AI,
and it feels really good.
You're really high because you're hacking your biology with the AI.
It's very flowy. It works really, really well. But it doesn't mean that the quality of your thinking matches
the quality of what you're feeling.
It feels to me like you're having all the fun of a party while being productive in creating
something useful for society. I don't see anything but good.
I'm not saying you're wrong. I agree with you. I like it. What I'm saying is, with
that much dopamine, we have one bit of swag at my organization. It's a t-shirt that says,
never trust the dopamine. And people make really bad
decisions based upon that much dopamine in your system.
I'll keep us going.
So on the bottom of this chart here, this is Eli Lilly.
They're about to announce and release something called retatrutide.
It's a triple-agonist peptide.
If, you know, Mounjaro is GPT-5.5, retatrutide is AGI.
And the numbers are extraordinary.
This is clinical data just released.
I just think you should know about this.
So on this particular drug, weight loss was 37 pounds versus six pounds in the placebo over the course of 40 weeks.
Cholesterol was down 27 percent.
Triglycerides down 41 percent.
Liver fat down 80 percent.
Hemoglobin A1C dropped from 7.9, which is diabetic, down to 6.0 in 40 weeks.
This is being called a longevity drug.
It's an extraordinary drug that is expected to be released and FDA approved by mid-2027.
A lot of people are getting this from research labs already.
This is just some of the detailed evidence heading us towards longevity of escape velocity.
So I've been using retatrutide for the last few months.
Wow.
And what's incredible is the liver function, because of decades of damage from alcohol.
It's helping reverse all of that, which is fantastic.
So I've never felt better.
I had this conversation with Demis the weekend before.
We're about to see new generations of drugs coming out.
And the speed of going from design to testing to availability is going to be accelerating.
The FDA is on board for this, you know, basically moving us from instead of phase three into an expanded phase two trial.
And we just saw on this chart how much people are willing to pay, right?
And it's capped not by the market.
It's capped by manufacturing.
And just think about it when a drug comes out that says, oh, this is going to give you 10 years of extra life, right?
Or these are going to cure cancer, cardiovascular disease, inflammation.
These things are coming.
I agree.
And I also think, when we speak about longevity escape velocity, which I do think is likely, for the record,
perhaps sometime by the early 2030s, if not earlier.
I do think the GLP1 drug class and everything it evolves into over time, if I had to guess
what is the likeliest way that LEV plays out, it is probably the GLP1 class.
And I think the economy is reflecting that.
If you look at companies that are worth almost a trillion or worth more than a trillion,
either right now or expected to be publicly traded in the near-term future, there are only a handful.
You've got a few AI companies:
you have SpaceX, OpenAI, Anthropic, and then you have Eli Lilly, which has been publicly traded for a long time,
but is either already at a trillion or about to cross a trillion on its present trajectory.
And I think that's the market speaking very clearly that longevity drugs and AI are the obvious industries of the future.
Yeah, agreed.
I want to end this pod with a moonshot-mate prediction conversation, which is: what will AI feel like by mid-2028? To
give you a sense of what we see as the future here. Because right now, honestly, when you think
about AI, it's something that you can talk to on your phone. You type at, it's an app, it's an intelligence
layer. But I don't feel like the majority of the world actually is experiencing AI. And I think
we're about to undergo a transition where AI is going to feel very different in the next two and a half
years and I'd like to have a conversation about what you predict that to be like.
I'll start with the very first one and then we'll pass it around the room here.
By the way, I wrote a Substack at Peter Diamandis on this topic; go read my views there.
But the very first thing is I think we're going to give AI permission to know everything in our lives.
Listen to all your phone calls, watch all of the cameras in your home, read your
emails, read everything. I've just done that with Skippy. I gave Skippy access to all my
Granola recordings, my WhatsApp, my iMessages, my email, my calendar, everything. And in so doing,
the response is amazing. I can have interactions that this AI knows me better than anybody else.
So let's begin with that. Alex, you want to kick it off? I almost think we're asking the wrong
question here. I would like to reframe it.
Because I think the question behind the question here is,
what will AI feel like to consumers by mid-2028?
Yes, it's fair.
And I think the past six months of industry history
have taught us, consumers are not actually great customers
for AI, at least not state-of-the-art AI.
OpenAI, infamously at this point,
tried to turn consumers into power users for reasoning models
and failed and has had to pivot over to enterprise.
They had to basically shut down their consumer video division. They've shut down a number of other divisions in favor of
co-generating models that are recursively self-improving and targeted at enterprise use cases
because enterprises have the money and the desire to use advanced reasoning capabilities.
So the adjacent question I would ask is, what will AI feel like to an enterprise
rather than to a consumer in mid-2028?
Okay. Let's make that. Let's make that an adjacent conversation. But for me, as
As a consumer, I'm going to focus on that part of the equation.
I feel like if AI understands me, it's connected to all of my wearables, insidables, and so
forth.
It has my back as a physician.
It knows I had a hard conversation with my spouse and I'm exhausted.
I didn't sleep well the night before.
It has all this data.
I walk into the room.
The lights come down.
There's a glass of wine waiting for me because
the robot put it out, my favorite music is playing, my favorite comedian is on.
Basically, the term I use is ambient AI and automagical AI, that the world magically adapts itself
for your desires.
I can tell you what I think you want to hear.
What's that?
Which is you want to hear that by mid-2028, you're going to have an AI exocortex.
That's a quasi-upload of you, your digital twin in the cloud that knows everything about you in your life and is fully optimizing your
meat-body existence knowing what it knows about you.
I think that's what you want to hear.
I don't think that's actually how things are going to play out though.
Okay, please.
I think what AI will feel like in 2028 from a consumer perspective,
yeah, sure, we'll have better augmented reality,
we'll have better robots in the streets.
All of that, I think, is already essentially priced into the market.
What's not being priced in right now, the non-obvious insights are new scientific discoveries
that consumers aren't the ones who are driving.
Okay.
You're taking it in a parallel direction. So let's put that in the column of industrial, right? When Lila Sciences and
Colossal are creating breakthroughs at a rate, or, you know, frankly, Isomorphic Labs.
The question as constructed is basically what products is Apple going to launch in
2028? Because these all map onto Apple product categories. You get your smart home, you get your
robots, you get your wearables of all sorts. Maybe if we're lucky, we get ingestible robots. We get
all of these things. But
I don't think that's the essence of what AI is going to feel like to a consumer, and I don't think that's the core or the frontier.
Salim, what do you think?
So I agree with you on the feel-good cloud AI.
I think I go more towards what Alex is talking about.
The institutional use is going to be absolutely profound, and that's where we'll see the biggest difference.
Every single person on Earth will essentially be operating like a small company with an incredibly powerful team around them,
supporting them, which is basically mostly AIs.
And I think at the enterprise level,
we'll be running enterprises on AI and native operating systems.
And I think more importantly, we need to rewrite
the operating system for civilization.
Because everything we've done as a civilization,
every business in the world for 10,000 years
is trying to scale scarcity.
And now we're moving into an era of abundance.
What's the business model around that?
And I think we need a complete rewrite on that.
because all our institutions, nation states, are all geared around scarcity.
So we need a complete rewrite of the civilizational operating system,
and I think the beginnings of that are starting now.
Dave, what do you think 2028 is going to feel like?
And I'm saying feel in particular, right?
Because right now, a lot of people say, listen, I'm living my life.
I see the AI numbers.
I hear you on the podcast, but it isn't feeling different for me as a consumer.
Yeah, we launched our holodeck today.
We got to go check it out over at Link Studio.
So we commissioned construction of one of the rooms, put monitors on all walls, and you go in,
close the door behind you, nobody tells you what to do, and the AI just starts talking to you.
And in the holodeck, you can create a virtual world, you can create songs, you can create movies,
you can just vibe code whatever you want. It's purely interactive, and, you know, big speakers,
pulsing music, everything around you.
Yeah, go check it out.
I think any consumer who experiences that is going to say, no, not yet.
It's hard to do the real Star Trek thing.
But any consumer who goes into this holodeck and experiences it is going to say, I need that, I want that.
How do I get that at home?
And so I think the problem in 2028 is there's going to be a massive shortfall of compute
relative to the desire to have that experience.
So what Alex said is dead right.
the enterprise use cases have just discovered this.
And they're sucking up all the data centers in the world now.
And there's no way that's going to get solved by 2028.
I'm hoping 2030, 2031, after the TerraFab has a chance.
So what's going to be the story in 2028?
Right now we've lived this beautiful moment in time
where the very best foundation models are made available to anyone on the planet
who wants to go to a website and try it.
And then the cost of using the best models in the world is, it's very affordable.
It's significant, yeah.
Yeah.
So you've got to take advantage of this moment in time because it's going to get really bottlenecked by 2028.
We're about to see a brand new set of eyewear, right?
We've talked about this at length.
Open AI's got their Johnny I device, whatever that might be.
Apple's got new generations of wearables coming.
Meta has generations of wearables coming.
And this is going to be ambient AI,
where your AI sees what you see, is there listening all the time,
uploading everything into your version of Skippy,
your open claw, whatever it might be.
And I put forward, this is going to be a different feeling,
a different consumer experience, right?
Which is if you want to learn something,
and you're walking down the streets of Manhattan,
and you've got AR glasses that are giving you a tour of, this is
what it was like in 1905, here's an education layer.
Or if you want an entertainment layer, you know, may the fourth be with you.
I'm a Star Trek person, but here we are.
It's May 4th.
You've got these little droids popping up and shooting at you.
You know, if you want, you know, whatever you want, this AI is optimizing your audiovisual
experience in line with what you've asked.
So I think that's going to feel different.
I think we're going to see the first wearable devices enabling that.
You guys agree, Alex?
I'm generally, so I'm a big fan of Vernor Vinge, who wrote extensively about what the near-future
AR, VR, metaverse, if you like, would look like.
I do think we get that.
I do think it's just a matter of time and battery energy densities and other progress.
So by 2028, do I think we get Vernor's smart glasses with compelling long life augmented reality
or mixed reality?
Yeah, sure.
I also think that after all that.
Oh boy.
But I also don't think it matters an enormous amount.
Like, yes, it'll be great and it'll be popular.
By the way, I want to just say something, right?
In the last podcast, I was trying to make our conversation relatable to the general audience.
And my mission here about what AI is going to feel like is to support their understanding of where things are going.
Yes, we also speak to the CEOs and the heads of the frontier labs here, but I want to give
people an understanding of what their life is going to be like.
I'll give a really easy projection.
Go anywhere you want autonomously for 20 cents a mile.
I'm going to do a quick speed round here on what is something that AI will feel like in
2028 that maybe is unexpected.
Dave?
We already spend more money on video games than all other media combined.
When you overlay personalization, your own voice, and it remembers your personality, your
past conversations, it's so immersive and in a good way.
It could be good or bad.
It's intentional.
It's designed.
But if it's designed well, it's so engaging and immersive.
I suspect that an entire generation will have 70, 80% of their conversations with AIs
and not with other people.
Interesting. That's interesting.
Salim. Something I'm really excited about is just infinite, perfect, and long-lasting memory,
because our memories are so flawed as human beings.
The fact that it can record everything, recall who I met, etc. I've met many of you at events,
and God help me, I can't remember anything. So the ability to do that, I think, will be a huge addition to me as a person.
Yeah. Stephen, the one that's interesting to me is I always say that
You tend to hire one employee twice.
You hire them when they're normal and then when they're scared.
People are very, very different when they're fearful.
And I think the AI coaching, the AI psychology, all that stuff,
I think the human, the coach in the room, the coach in the world is actually what's really interesting to me.
Because I'm interested to see it; I've even seen the Defense Department's work using AI therapy.
Right.
I got to play with a lot of those therapists along the way.
And they're remarkably good. Most people are now using AI to coach their relationships, etc., etc.,
but your AI coach that's actually in your ear is really interesting, because humans, we run on four knobs, right?
Valence, arousal, approach, avoid. That's pretty much humans. And you can coach them up pretty quickly in real time.
So that's what's really interesting to me is I think we end up with better humans.
Fascinating.
Alex, on the consumer side, would you ponder a vision?
I think we are merging with the machines.
I do think augmented reality and wearables are part of the solution.
I think ingestibles are going to be part of the solution.
I'd be sorely disappointed if by the end of this decade we don't have many people swallowing
computers.
We see the beginnings of it right now with pill-bot-type form factors.
I did a projection, something I think friend of the pod Ray would be proud of.
If you extrapolate the typical size of a computer needed to achieve a gigaflop of compute, over history, over the past 15 to 20 years. Historically, if you look back at Apple new product announcements, Apple has waited for new devices to pass a gigaflop before it releases them. True with the original Apple Watch, true with the original iMac, true with the original iPhone.
If you extrapolate, then, the typical size of a computer that at time of launch passed a gigaflop, you extrapolate that forward, you find that not by mid-2028, but by the mid-2040s, around the time Ray says we're going to hit his version of a technological singularity, the size of a computer hits approximately the size of a eukaryotic human cell.
So what am I most excited about? Well, I have a laundry list, but I think if you extrapolate the progress of the size of the computers running the AIs, we're going to have cell-sized nanomachines running those AIs, and I think that'll be incredibly exciting.
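[Editor's note: the extrapolation Alex describes is essentially a compound-shrinkage projection. Here is a minimal sketch of the mechanics, with purely illustrative assumptions, not his actual data: we suppose a ~1-gigaflop computer occupied roughly 100 cm³ in 2015 and that this volume halves every two years.]

```python
import math

# Illustrative assumptions (not Alex's data): a ~1-gigaflop computer
# occupied ~100 cm^3 (1e-4 m^3) in 2015, and that volume halves every
# HALVING_YEARS years.
REF_YEAR = 2015
REF_VOLUME_M3 = 1e-4
HALVING_YEARS = 2.0

def gigaflop_volume_m3(year: float) -> float:
    """Projected volume of a 1-gigaflop computer in a given year."""
    return REF_VOLUME_M3 * 0.5 ** ((year - REF_YEAR) / HALVING_YEARS)

def year_reaching(target_volume_m3: float) -> float:
    """Year at which the projected volume shrinks to a target size."""
    return REF_YEAR + HALVING_YEARS * math.log2(REF_VOLUME_M3 / target_volume_m3)

# A eukaryotic human cell is on the order of (10 um)^3, i.e. ~1e-15 m^3.
CELL_VOLUME_M3 = 1e-15

if __name__ == "__main__":
    print(f"Cell-sized 1-GFLOP computer around {year_reaching(CELL_VOLUME_M3):.0f}")
```

With these toy parameters the crossing lands near 2088, well after the mid-2040s figure quoted above; a halving time of about 0.8 years would put it in the mid-2040s. The landing year is extremely sensitive to the assumed halving rate, which is why this reads as an extrapolation rather than a forecast.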
All right, everybody, and that's a wrap.
We'll go to our outro music next.
Ladies and gentlemen, let's give it up
for the incredible
Dave Blundin. Thank you.
Salim Ismail. Thank you.
Alex
Wissner-Gross.
And Steven Kotler.
If you made it to the end of this episode, which you obviously
did, I consider you a moonshot mate.
Every week, my moonshot mates and I
spend a lot of energy and time
to really deliver you the news that matters.
If you're a subscriber, thank you.
If you're not a subscriber yet,
please consider subscribing so you get the news as it comes out.
I also want to invite you to join me
on my weekly newsletter called Metatrends.
I have a research team.
You may not know this,
but we spend the entire week
looking at the Metatrends
that are impacting your family,
your company, your industry, your nation.
And I put this into a two-minute read every week.
If you'd like to get access
to the Metatrends newsletter every week,
go to Diamandis.com slash Metatrends.
That's Diamandis.com slash Metatrends.
Thank you again for joining us today.
It's a blast for us to put this together every week.
Okay.
When I sell my business, I want the best tax and investment advice.
I want to help my kids, and I want to give back to the community.
Ooh, then it's the vacation of a lifetime.
I wonder if my out of office has a forever setting.
An IG Private Wealth Advisor creates the clarity you need with plans that harmonize your business, your family, and your dreams.
Get financial advice that puts you at the center. Find your advisor at IGprivatewealth.com.
