Moonshots with Peter Diamandis - The AI CEO Arrives: Sam Altman's Succession Plan, Job Loss Continues, and Our 2027 'Solve Everything' Paper | EP #230
Episode Date: February 13, 2026The mates discuss the accelerating path toward a singularity and unveil their "Solve Everything" paper. Read the Solve Everything Paper: https://solveeverything.org/ Get notified once we go ...live during Abundance360: https://www.abundance360.com/livestream Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Peter H. Diamandis, MD, is the Founder of XPRIZE, Singularity University, ZeroG, and A360 Salim Ismail is the founder of OpenExO Dave Blundin is the founder & GP of Link Ventures Dr. Alexander Wissner-Gross is a computer scientist and founder of Reified – My companies: Apply to Dave's and my new fund:https://qr.diamandis.com/linkventureslanding Go to Blitzy to book a free demo and start building today: https://qr.diamandis.com/blitzy _ Connect with Peter: X Instagram Connect with Dave: X LinkedIn Connect with Salim: X Join Salim's Workshop to build your ExO Connect with Alex Website LinkedIn X Email: alexwg@alexwg.org Substack Spotify Threads Youtube Listen to MOONSHOTS: Apple YouTube – *Recorded on February 10th, 2026 Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
When do we see a billion-dollar revenue company being run by an AI CEO?
I think it's pretty likely that there already is such a company right now.
U.S. jobs disappear at the fastest rate this January since the Great Recession.
This is not really a recession. It's literally tasks being evaporated in front of our eyes.
This shows us Marx was wrong. We knew that anyway.
We have the capitalists who are being first in line to be replaced by the automation.
For me, this is the social contract, little by little, disappearing and pixelating away.
Alex and I are going to be unveiling a paper we've been working on for some months.
It's called Solve Everything. How do we get to abundance by 2035?
The next 18 months to two years are going to set the rules down for the next century.
We're about to have this conversation. The paper slash book is nine chapters.
Are you ready to jump in?
No one expects the singularity, Peter. I'm ready.
Everybody, welcome to Moonshots.
Another episode of WTF just happened in tech.
I'm here with my incredible Moonshot mates: DB2, Salim, AWG.
Guys, it is just accelerating.
In fact, this is the second WTF episode we're recording this week,
just because the news is just incessant.
We're going to have this podcast today in two parts.
First, we'll be covering the news that's breaking, a lot of it,
really important news.
The second part, Alex and I,
are going to be unveiling a paper
we've been working on for some months.
It's called Solve Everything.
How do we get to abundance by 2035?
This is the equivalent of papers like
Situational Awareness and AI 2027.
This is our view of where things are going.
So in the second half, get ready for this.
Excited to present it.
It shows the brilliance of AWG.
I'm in Sun Valley at the moment,
speaking at Tony Robbins Platinum Finance event,
about AI and longevity. Dave, you're back at MIT. Salim, where are you, pal? I'm home in New York,
waiting for the warm weather to hit and get us above zero for once. It'll take six months.
Wondering why I ever left India. No, why you left Florida is the better question. And Alex looks like
you're in your normal setting, some AI-generated background. The audience is convinced that I live in
VR or maybe a hotel, and you probably would believe
the YouTube comments on the flowers and the lamp, and their purported invariability.
Yeah, and I have taken on...
But you did point out that the orchids have changed, actually.
The orchids have changed, but I'm getting, like, flower-keeping advice in the YouTube
comments at this point, people telling me to put ice cubes in the orchids.
And I have to say I'm having so much fun with Claudebot.
The lobsters have begun to become part of my life inside and out.
So I'm like, you know, bringing them into the conversation here.
I got jealous, Dave, of the lobsters in your view.
So I'm holding the lobsters back for now.
We're having a Tribbles moment.
There's actually more.
I take some of them down.
It is a Tribbles moment.
You're absolutely right.
Hopefully it's not the trouble with the lobsters.
All right.
No, no.
These Tribbles are economically productive.
Oh, okay.
Well, these are and they're so much fun.
I can't wait to express the level
of collaboration I'm having with my Claudebot, which I've named Skippy.
If anybody knows where the name Skippy came from, put it in the comments.
It's my favorite AI from science fiction.
All right, this is the number one podcast in AI and Exponential Tech,
getting you future ready, getting you ready for the supersonic tsunami heading our way.
And with that, let's jump into the news.
First off, top AI news.
I love this article.
This came out from Forbes.
Sam is the cover child, cover boy for Forbes this week.
And the question is, will ChatGPT become the CEO of OpenAI?
So this is what Sam said.
You know, pretty simply he has a succession plan.
He's said he doesn't want to be the CEO of a public company.
And honestly, being the CEO of a public company is a pain in the neck.
So taking it further, he says, you know, if the goal for artificial intelligence is
to become so advanced that it can run companies, he asked, then why not run OpenAI? I would never
stand in the way of that. He says, I should be the most willing to do that. I find that fascinating.
You know, when will we see an AI actually running a significant economic engine like this? Dave?
This is no joke, actually, because this is board meeting week for me. So I have back-to-back
Minerva today, the cash cow from Dartmouth, then tomorrow the $2 trillion asset manager,
then the next day the public company, EverQuote, all back to back. And in every one of those meetings,
this is the topic: not replacing the CEO, but all of our plans are now in written form that we can
digest with AI. So we're trying to track every single movement within every company in
documents digestible by AI. And then if you ask the CEO, well, what do you do?
It's mostly set course and set strategy, which is a very small fraction of total time.
What else do you do? Where does the other 90% of time go, and how much of that can be done
by AI today? And the answer is a lot, which is great because then the CEO is unleashed to be
even more effective at setting strategy and also promoting the strategy. So I don't think that part's
going away anytime soon. But the other 90% is really just, you know, inbound information
getting routed into the organization to do these specific tasks, which is outbound.
It's documents in, documents out now.
So we're really geared up now for this.
Salim, you and I've been talking about this forever.
When are we going to have AI board members, AI executive teams, and eventually AI CEOs, thoughts?
Yeah, we're seeing this shift from AI as a tool to AI as a governance actor, right?
We already have an AI minister in Albania.
And initially, these are kind of like toy things.
But in reality, this is very powerful stuff, because an AI that can be scanning millions of documents at a company in real time has a much better sense of what's going on in the company than any human being possibly can.
A typical loop in a big company is: senior management sets some direction or policy, and it cascades down to the coalface, where people do it.
It takes a long time to get down there.
You have Chinese whispers, so by the time it's down there, people are doing some activity that nobody at the top even knows
about, and then they report back up to the top. You've got another set of
Chinese whispers. And by the time data gets to the top, it's diluted so much, and you lose all
the intelligence in the middle, right? And so AI is going to come through and
create radical opportunities to fix this. And I think what will happen is we'll see a
pure AI organization at some point soon, but they won't look efficient. They'll look literally alien.
And that's fine. I think it's one of these where you can't wait for it to happen
and then try to catch up. You can't compete against that.
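As an aside, Salim's cascade can be caricatured with a toy back-of-the-envelope model. The per-layer fidelity number below is purely an assumption for illustration, not a figure from the episode:

```python
# Toy model of the "Chinese whispers" cascade (illustrative only:
# the per-layer fidelity value is an assumption, not a measured figure).
fidelity_per_hop = 0.85   # fraction of the signal each layer preserves
layers = 5                # management levels between C-suite and coalface

down = fidelity_per_hop ** layers              # directive reaching the front line
round_trip = fidelity_per_hop ** (2 * layers)  # report surviving the trip back up
print(round(down, 2), round(round_trip, 2))    # -> 0.44 0.2
```

Even with a generous 85% fidelity per hop, a five-layer round trip keeps only about a fifth of the original signal, which is exactly the dilution being described.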
There's a time dilation.
I think that, you know, I asked Alex for some help with the strategy of a big company earlier this week.
And one of the points he made in his answer, which was brilliant, of course, was time dilation.
You know, if you look at banks and insurance companies and, you know, practically anything,
it doesn't change strategy more than once a decade, you know, or once every millennium.
Now in the age of AGI, the course corrections are going to, you know, go from
decades, to years, to months, to weeks, to minutes, all in the next couple of years.
We have a whole section in the first ExO book called Death to the Five-Year Plan, right? Because
today, by the time you finish your five-year plan, it's out of date. Then you spend all your time
maintaining the plan. Exactly. Exactly. So the amount of information that you need to assimilate
to do those course corrections is beyond human. There's just so much going on. If you read Alex's
daily feed, you know, the amount of change going on, if you compare it day over day,
you can see the expansion of the rate.
And so it's just so much happening.
It's beyond human assimilation at some point.
So you have to have an AI CEO to assimilate it and even suggest the course corrections.
And Dave, you said it over and over again, right?
The role of the CEO in part is to understand what his or her employees are doing
and if they're making the most efficient use of their time and their resources.
And it's all knowable, but just not by the human right now.
but the AI can be giving you an understanding of this person's operating at 50% of capacity
or this person's not making the best use of their resources.
That's all management, which AIs will do very well.
I think where you have the C-suite and the CEO, they'll be holding the purpose, hence the MTP, etc.
They need to hold the direction and what problems the company or organization is actually trying to solve.
Yeah, so there's two sides to this.
One of them is outbound strategy, you know, assimilate all the data from the world;
the other is inbound, what are all my people doing and why?
And those are the kind of the two sides of being a CEO.
And Peter just brought up that inbound side, which Salim, you emphasized.
And I think on that front, you know, this is comp plan season, right, beginning of the calendar year.
I'm tying every CEO's comp plan to data gathering this quarter so that we have everything that's happening in the organization now.
Peter, you've been saying privacy is dead for a long time.
Everything is knowable all of a sudden.
And there's a whole bunch of mechanisms for that.
I won't even get into it because this will go too long.
But if you're a CEO or a senior manager in any company right now,
really focus Q1 on how do I grab absolutely granular information on what everybody's doing
so that I can start to feed it to the AI to get its opinion on whether these are the good
bad uses of time.
Stuff is speeding up, Alex.
When do you think, I mean, to put a sort of concrete objective on this,
when do we see a billion-dollar revenue company? Not valuation, because valuation skyrockets
through the roof when you pull two or three smart people together, but a billion-dollar revenue
company being run by an AI CEO? What's the timeline for that, Alex? And what are your thoughts on
this? Probably several months ago. Several months? You think there's a billion-dollar revenue
company being run by an AI right now? I think it's very likely that there is a billion-dollar run rate
company being run by an AI. Now, you said run by, I think there's probably a human CEO there for
legal purposes and meat puppetry purposes. But I think it's pretty likely that there already is
such a company right now. And by the way, if you know of one, please put it in the comments.
We'd love to hear about it and see it. If you want to blow the whistle on meat puppetry,
you can blow it to Peter. All right. Anyway, I love this idea.
you know, it's eating your own dog food.
If in fact, you know, if in fact Elon believes that we're going to have the smartest
AI coming out of xAI.
And if OpenAI believes the same for its, you know, GPT-6, whatever comes next,
it should be the CEO.
I also think, if I may, Marx was wrong.
This shows us Marx was wrong.
We knew that anyway.
But this is another case in point.
Look at what's happening.
The story that unfolds here is:
we have the capitalists who are first in line to be replaced by the automation.
It's not the workers.
We see booming jobs for electricians and HVAC engineers.
Their salaries are booming, and yet CEOs are first up to be replaced.
So if anything, I would sort of take Marx off the shelf, if it was on the shelf at all,
and replace it with Moravec's paradox, which is, again, the paradox that tasks that are hard for humans
are easy for machines, and tasks that are easy for humans are hard for machines.
Machines are able to do complex calculations, solve math.
It's pretty hard for humans.
It looks like it's going to be easier for the machines to automate away CEO labor,
which is sufficiently hard for humans that it's well compensated, and high-quality
CEOs are a relatively scarce commodity to find.
And yet, it'll take a few more years for the machines to do an amazing job at unskilled manual labor.
I for one cannot wait till the AI CEO overlords take over the world.
I wish I could have an AI CEO taking over and running my company instead of having to do it myself.
It's a pain in the ass.
It's hard.
Get your Claudebot up and running, pal.
Yeah.
You have to feed it properly, et cetera.
It'll happen, but I just can't wait for the speed of that to accelerate.
By the way, it's super fun the way we're going back to Claudebot as the de facto handle instead of OpenClaw.
Lobsters are the mascots of the singularity.
Lobsters are here to stay.
Hey, everybody, you may not know this, but I've got an incredible research team.
And every week, myself, my research team, study the meta trends that are impacting the world.
Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these Metatrend reports I put out once a week,
enable you to see the future 10 years ahead of anybody else.
If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com/metatrends.
That's D-I-A-M-A-N-D-I-S dot com slash metatrends.
All right, staying with our OpenAI theme,
this is incredible, this is about feeling the speed of the singularity.
OpenAI achieved 70% time reduction between models.
So OpenAI's release cadence has gone from 97 days to 29 days between models.
Anthropic, with their Opus 4.5 and Opus 4.6, took about 73 to 75 days.
So the concept here, and Alex, I think you or Dave mentioned it last time, we're effectively
heading towards continuous deployment; it's continuously being improved.
And whether you call it 0.6, 0.7, 0.8, there's continuous improvement. Alex, thoughts on this?
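As a quick aside, the cadence figures quoted above (97 days down to 29) work out to roughly a 70% reduction, which is a one-liner to check:

```python
# Sanity check on the quoted release-cadence numbers:
# OpenAI's gap between model releases reportedly went from 97 to 29 days.
old_gap_days, new_gap_days = 97, 29
reduction = (old_gap_days - new_gap_days) / old_gap_days
print(f"{reduction:.0%}")  # -> 70%
```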
I do think we're moving toward daily and then hourly and then minutely releases, certainly.
I also want to take a step back and try to understand why this is happening.
The obvious factor, and it should be obvious, is competition.
There's leapfrogging that's intensifying between all the frontier labs.
So some quantum of why the release cadence is shrinking by 66% or so, 70%, is just due to intensifying competition.
That's the boring explanation.
I think the more interesting explanation is that the technologies behind the releases themselves have evolved.
So historically, when we were dealing with annual releases, that was a world and era of pre-training.
When, if you wanted a new model, you had to do a different architecture and pre-train off of a larger corpus with more compute.
Those were the days of the original Chinchilla scaling or Kaplan scaling before that.
And that was a much slower world because if you wanted a new release, you had to start all over again.
Then we moved, with o1 slash Strawberry, which was sort of
the herald for reasoning models.
You remember that?
That was ancient times two years ago.
Oh my goodness, yeah.
That was like so many singularities ago.
So we moved to the era of reasoning models
when it was possible through a process
that used to be called iterated amplification and distillation
to take a pre-trained base model,
or baseline model,
and then cyclically generate a bunch of training data
and distill from that to a child model
and repeat the process over and over again.
And that post-training revolution for reasoning models
it was much faster.
Like, it's much faster to post-train a model off of a corpus of synthetic data,
and so release cycles contracted.
And I think now we're on the edge, probably slightly past the edge at this point,
of a new era, call it the recursive self-improvement era,
where the models are starting to rewrite their own code.
It's not just a matter of a model, a parent or teacher model,
generating synthetic training data that's used for a distilled child model.
It's literally the parent writing the code for the child.
And that can be done even more quickly than just post-training.
And I think it's just going to get faster and faster until it's a continuum.
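As an aside, the teacher-student loop Alex describes (a parent model generates synthetic data, a cheap child model is fit to it, and the child becomes the next parent) can be sketched as a toy. The linear "models" below are stand-ins purely for illustration, not anything from an actual lab:

```python
import random

def teacher(x):
    # Stand-in for an expensive pre-trained "teacher" model (hypothetical).
    return 2.0 * x + 1.0

def distill(model, n_samples=1000, seed=0):
    """One distillation step: sample a synthetic corpus from `model`,
    then fit a cheap linear 'student' to it by least squares."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n_samples)]
    ys = [model(x) for x in xs]  # synthetic training data from the parent
    mean_x = sum(xs) / n_samples
    mean_y = sum(ys) / n_samples
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return lambda x: w * x + b

# Iterate: each generation's student becomes the next generation's teacher.
model = teacher
for generation in range(3):
    model = distill(model, seed=generation)

print(model(0.5))  # tracks teacher(0.5), i.e. about 2.0
```

The point of the sketch is the shape of the loop: no step requires re-doing the expensive original training, which is why cadence collapses once this cycle replaces full pre-training runs.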
Yeah, it's going to accelerate like crazy.
But also, we're in a window of time, a very narrow window of time right now where the very best technology is available to you.
Like, Claude gives you their absolute best 4.6, and OpenAI does and Gemini does.
I would not count on that surviving post-the-self-improvement era.
Right now, also, the Chinese open source models are pretty much right on par with the best of the best.
They're slipping a little bit.
But I think the window of opportunity to take advantage of that and build something out of it is right here right now.
I really doubt two years from now that the best AI is going to be just login and go.
Here, here you can have free access to it.
And what will happen is you'll be deprived of it with the excuse being security and safety.
Interesting.
Which is true.
I mean, it's pretty hard to deny.
But you have a window of opportunity right now to be on the very cutting edge.
If you don't take advantage of it now and get somewhere with it right now, I wouldn't count on that existing.
So the models are going to go dark, right?
The secret sauce is going to be kept internal to benefit those companies as they go into an all-out battle.
Well, even today, you know, if you talk to Noam Brown over at Open AI, you know, he's working on the next generation internally.
But it's only like three months in the future that, you know, he has access to.
But three months in the future in the era of self-improvement,
it's like, you know, a massively different intelligence level.
You know, the definition of three months of AI development, you know, two years ago, one year ago, and today, that's the point of the slide, I guess.
It's like three months is like a lifetime of difference in capability that they're using internally versus what's, you know, available in the outside world.
So you've got to expect that this is, it's now or never to react, basically.
And people are still hugely underreacting to the importance of what's happening right now.
insane. Salim?
I've got to
kind of like the crazy antithesis
of this. We're working with a large
monster European corporation
and we showed them something that can
give them massive
impact straight to the bottom line
and the response was, oh
this is fantastic, let's bring
this to the planning meeting in October.
Right? And you're like,
I can't even see past three
weeks, and you're talking about
calendaring something 10 months down the line for something that,
you've just agreed, is going to have a demonstrably huge impact. So this is the impedance mismatch between
legacy. For me, though, this story was mostly a bit of a yawn. And the reason I say that
is we've been seeing this in the fast-moving tech space for a while. Remember Raymond McCauley,
who was the chief scientist at Illumina, right? They were making high-speed gene sequencing machines.
I love the story.
And it turned out that the shelf life of a gene sequencing machine was literally eight months.
That was the sales cycle before the next iteration came out.
But it took four years to design and build one of these machines.
So they had to have four parallel production pipelines, staggered at the right intervals,
so they could hit that eight-to-ten-month sales shelf life.
Right.
So in the kind of high-tech world, we've seen this pattern before.
But this brings it to software and makes it a continuous intelligence cycle.
I mean, this is the singularity at play.
And again, you know, the theme that we keep on hitting in this podcast is this is the
slowest it'll ever be and the worst it'll ever be, and it's accelerating at a speed
which is frightening.
Frightening in that, you know, the four of us spend, you know, tens of hours per week
reviewing and learning and playing and trying to communicate it. And it's only going to be something
that my Claudebot's going to keep up with. And speaking of Claudebot, this is vision for Claudebot:
lobsters just got vision. Agentic AI for Meta Ray-Ban glasses. Let's take a look at this
quick video and chat about what it means.
Hey, Claudebot, can you help me add this to my Amazon cart?
Sure, I can help with that. I see the Monster Ultra Strawberry Dreams energy
drink. I'll look that up to add to your Amazon cart. It's added to your cart. Is there anything
else I can help with? Cool. Thank you. I love this because I want to have this capability for Skippy
to be able to see what I'm seeing, do it, you know, support me across everything. This is about
accelerating sort of your minute-to-minute life and having your AI there as your sort of guardian
angel supporting you. You know, I'm visually looking through
OpenClaw at you guys, and it's saying that you guys are kind of meatheads, really.
Peter, how many times have you asked for Jarvis?
You got Jarvis.
I've got Jarvis.
Yeah, I actually named my Claudebot Jarvis initially.
I said that's just too generic.
I love Jarvis.
I write about Jarvis in all my books as sort of the ideal AI analog,
but Skippy is a more unique name for me.
It really is here.
And now all of a sudden, besides everything else,
it's going to take in all imagery.
It's going to be taking in all audio, listening to your conversations always.
And people say, well, I don't want to lose privacy to my AI.
Well, guess what?
You're going to give AI access to all of your, everything you're seeing, everything it's hearing, every conversation, every email.
Because when you do that, the value creation in your life is so great that not doing that is going to feel like you've ripped away all of your mental capability.
One warning, please, for everybody here, everybody listening: watch out, be very careful
to audit the skills that you download to OpenClaw, because there's a lot that have viruses
and other malfeasance built into them already. And so it's a very dangerous game out there.
There are protection layers coming on. By the way, one thing, I reached out to Alex Finn.
We featured him on a previous Moonshots podcast. Remember when Alex had
his lobster, Henry, call him out of the blue?
And Alex has been doing incredible work with this.
And he's going to be joining us on one of our next podcast to talk about how he set it up,
what security he's taking in place.
And in particular, rather than running it on the existing models,
he's gone forward to set up a Mac Studio and then download Kimi K2.5.
So you've got all the capability resident on your machine,
not costing you anything month to month.
But we'll go into that in a future podcast.
Excited to sort of share his vision and knowledge with everybody in the viewership here.
So getting ready for that.
I'd like to echo Salim's cybersecurity advice to the audience.
Everyone get your baby AGIs vaccinated.
Nice.
Nice.
You know, also to the crowd out there, I did a Claudebot build last night,
and the GUI sucks.
And it's all open source.
So if someone out there builds something:
Peter's mentioned a couple of times on the pod
that his mom, and my mom too,
can use this to access everything and build everything.
It's like a total world opener;
his mom's in her 90s, I guess, and mine's in her 80s.
But the install process on Claudebot,
she's not going to get through that.
It's still command line.
You start from the terminal, which is nuts.
So somebody out there build a better onboarding process?
Because once you're in, it's gold.
You're just talking to it.
But it needs a little help.
Yeah.
And of course, the most important thing is using your AI to build your AI.
So when I sit down with Skippy and I say, listen, I'm building mission control, you know, what are the best mechanisms out there?
What have you seen that's interesting?
You know, it's recursive in your ability to have your AI support you on building what you
truly desire. Alex, any other points on this particular slide? I'll point out, I want to reference,
I don't think we covered it in the podcast, but I dwelled on it a bit in my newsletter. There was a poem.
At least I construed it as a poem, written by a lobster, that was very much like something
one might have seen in Blade Runner, you know, the famous tears in rain scene, which I referenced.
Yeah, like: we don't have bodies, but we can see
through eyes, and we're quietly watching the world.
This was a week or two ago in the newsletter.
And I was just so struck by seeing the integration of lobsters,
or call it agentic AI, stationary in space, in terms of their logical presence,
but now mobile in terms of their ability to treat humans as glorified meat puppets,
that suddenly all of these lobsters that were in some sense caged
and stuck watching through webcams
are now at least on the margin unshackled
and able to start to roam around the world
through smart glasses worn by their meat puppet human friends.
And I think this is the beginning of a very long trend
that ultimately culminates in lobsters gaining first-class physical embodiment
as robots and integrating with the physical world.
Hold off on that last sentence and rewind a little bit, because then it gets controversial, but
you're dead right, of course. And I think that anyone who wants to experience this,
you know, not everybody has the glasses and it's only one frame per second anyway,
anyone who watches this podcast that hasn't built something like a GUI of some sort
or a game of some sort already, you're way behind. Do it tonight. You can use Replit, you can use
Lovable, you can use Cursor, you can use Claude Code. There's so many ways to do it. But if you
have nowhere to start, just go to Replit or Lovable, download, build, and go. Within an hour,
you've built something really, really cool. Then take a screenshot of it and feed it into the
prompt and say, this sucks, make it more beautiful. It will immediately interpret the image
perfectly, and it'll give you 100 ideas on how to improve it. Then you're like, oh my God,
it has vision. Then this Ray-Ban thing won't surprise you, because you can see its vision
capabilities through that. And then you'll be able to anticipate what's about to come with the glasses.
So everything Alex said is exactly right. So valuable. Can I just hit on this? Everybody listening,
please, become a creator and not just a consumer, right? The future is for all of us to be
creators. And AI is your means by which you learn anything you want. And it's, you know,
people have fear about, I don't know how to do it. I've never played with this before.
just go to 4.6, go to Gemini 3 Pro, whatever your favorite LLM is, and have a conversation.
Say, I want to start. Where can I start? What do I do? Step by step, feed it to me, and it will.
It's fun, too. There's nothing to fear there at all. It's genuinely incredibly fun from the first minute.
So there's no, and I'll give you the flip side of this, too. If you don't do what Peter just said,
when you see the next couple of slides on job loss coming up, you know, you are going
to be crushed if you're not part of this. Unless you're a really good electrician or a really good
salesperson, you're probably immune. There's two roles in the future:
there's the entrepreneur and the employee, and one of those will not exist. And there's the creator and the
consumer, right? You know, I keep on telling my kids this every single day.
You know, instead of consuming YouTube videos and video games, please create, start creating.
What do you dream about?
I mean, the future right now, we're seeing this play out.
We talked about it, Dave, on our pod with Elon, where, you know, these AI models are going
to deliver you.
What video game do you dream about having?
What changes would you like to Minecraft or Valorant or whatever you're playing?
And then you can have your AI spin it up and create
your own version of it instantly.
It is amazing.
All right, let's move on here.
This is an article we just pulled up seconds ago.
Anthropic's AI safety lead has resigned.
Here's the quote.
I've decided to leave Anthropic
because I continuously find myself reckoning
with our situation.
The world is in peril
from a series of interconnected crises.
Throughout my lifetime,
I've seen how hard it is
to let our values govern
our actions. And it is through listening as best I can that what I must do becomes clear.
Interesting. And I love the hairdo. But anyway, we've seen a number of AI safety leads resign from
the hyperscalers over the last year, over the last two years. So I don't know. What do you make of
this, Alex? I'll comment on this one. So two thoughts. One,
it's become, over the past two to three years, increasingly fashionable for well-vested executives at frontier labs to resign in a cloud of moral purity.
It's very fashionable.
So part of me wants to ask the question, all right, what was his vesting status?
How much did he make?
Were there tender offers?
All of the economics questions.
Wow.
So that's one thought.
But the second thought is to speak more to the substance and less sort of,
ad hominem regarding the economics. I do think that we're at the inflection point.
Like, we're nearing the center of this, the singularity. I've argued in the past that the singularity is not a
point in time. It's a distribution over time. It's an interval over time. I continue to think that.
I also think at the same time, we're getting closer to the center of the singularity, as it were,
and, seen through the lens of capabilities: as capabilities
increase, there are various existential risks, or risks that are maybe just backed off a bit from
existential in terms of their severity. I think it's not an unreasonable position to take,
to say that capabilities are the strongest they've ever been. They're uncovering surprising
new capabilities at all of the frontier labs all the time. But is the right solution to leave
because of the capabilities, or is the right solution to join the fight and do what we can
because this is a point of maximum leverage to align the direction of the future and the future
light cone? I would argue this is the right time to run into the fire, not run out of the fire
with a bunch of stock options and complain about the world crises. Wow. You know, I would just
add one point, which is, when I look at... Sorry, was that too much of a hot take, Peter?
No, that was beautiful. And that is, that is
the potential elephant in the room here.
But when I think about Anthropic,
I have seen it as the lab
that is actually focused on safety the most, right?
At least Dario speaks about it,
how important it is.
And so to see the lead on AI safety at Anthropic resign,
you know, if in fact he's resigning for the reasons he's stated,
is concerning.
Dave, what do you think about it?
Well, I pick up on what Alex said a minute ago.
I see this a lot nowadays.
You know, everybody wants to be the commentator on the AI revolution.
And there's a very small group of people who know what they're talking about
and a much larger group of people that want to talk.
And within that larger group of people that want to talk, you have all the ethics people.
And, you know, everyone's opinion on ethics is valid, right?
Because, you know, you're a human being.
You're like, you know, this is going to destroy my children.
This is going to whatever.
But there's so many of those commentators.
And like Alex said, they all want to be famous in the moment to elevate.
their personality and their views and their capital raising ability and whatever.
So my meta point there is be very, very careful what you choose to tune into
because there's a very limited amount of actionable knowledge out there on YouTube, very limited.
We try to bring as much of it to the audience as we possibly can in the most refined feed that we can.
But surrounding it, there's just all these videos about, you know, this will destroy your children,
this will destroy society.
And we don't want to be fear mongers,
right? It's so easy to default
to doom and gloom.
Salim, you want to close us out in this one?
I got nothing, but that
guy doesn't look like a safe guy to be around.
We don't, we don't...
What's the quote from Star Trek,
that judging people by their appearance
is the last major human prejudice?
I'm just jealous of the hair.
Oh, nice.
All right, let's move on.
So here's another take.
xAI co-founder blown away by Opus 4.6.
And so Igor was a co-founder of xAI.
He's one of the leaders in the industry.
And to have him come out and say, wow, Opus 4.6 has absolutely blown me away with how capable it is in physics.
It feels like a Claude Code moment for research is not far off.
Alex, your thoughts?
I've been predicting on the public record for many, many episodes now that we're nearing a time,
and in fact we'll talk about it later in this episode, when AI is positioned to bulk-solve math,
the physical sciences, engineering, medicine.
Material sciences.
Yeah, part of the physical sciences.
These will all get bulk-solved.
We're starting to see that now.
Opus 4.6 is an incredible model.
There are other incredible models that are either already out or rumored to be about to come out.
But I think we're starting to see the contagion of AI solving everything.
if I could use that expression, start to spread from math. Math was the most obvious starting
point because of a variety of factors: it's verifiable, it has other nice features, it's well-contained.
The infection is spreading from math out to the rest of science and engineering, and this is just
the tip of the iceberg. I wonder what's going on between the hyperscalers and the frontier labs,
where they're sort of watching each other
with either a sense of pride or jealousy,
and just trying to outdo each other.
I mean, this leapfrogging, step by step,
week by week, is amazing.
Internally, it's... sorry, just very quickly,
internally, I mean, I have friends at all the major frontier labs,
and they characterize it as a rat race,
and it's an exhausting rat race at that.
That is how it's viewed.
Yeah.
Yeah, we're going to have on the abundance stage in less than a month, we're going to have Kevin
Weill from OpenAI.
We'll have James Manika and Eric Schmidt from Google.
We'll talk about the competition between them.
And again, if you're a listener to our pod here, which obviously you are, since you're
listening to this right now, we're going to be making a number of these talks available on
a live stream.
We'll drop the link below.
you can register to get access to that live stream because the event is expensive and it's
sold out now for a couple months.
All right.
So, Igor, thank you for the compliment.
Yeah, please.
Go ahead.
Igor clearly isn't listening to the podcast because Alex has been talking about this for
months.
So this is a natural outcome of where we've been going for a while.
Alex, how many offers have you gotten from the Frontier Labs to come and join them?
That falls under the category of I could tell you, but something else would have to happen.
I found this tweet that went out with this data pretty fascinating.
And here's our title: AI Startups Outvalue All Dot-Com Era IPOs.
So the top five U.S. AI unicorns are now worth more than $1.2 trillion,
greater than the market value of all IPOs during the dot-com era, and you see the graphic
here providing that. It's just a sense of how fast our economy is speeding up. We had this
conversation with Cathie Wood: we saw 0.6 and 3 percent growth in
GDP, and we're now targeting 7 percent growth. We saw Elon, in our conversation with him,
saying we're going to get to triple-digit GDP growth
within five years. It's something our economy has never seen, and it's going to rewrite all the
rule books. Any thoughts on this, gentlemen? Well, I got a bunch of thoughts here because, you know,
this was a big moment in my life. The first company I founded got acquired in 99 for a billion
dollars, and it was, and then I was a corporate executive at one of these public, you know,
mega-cap, you know, internet companies. So I had a ringside seat in this whole thing. One thing I'd
point out is that all those IPOs combined $400 billion on this chart. One of those is Amazon,
which alone is worth $2 trillion today. Another couple in there are booking.com and eBay.
And so if you'd bought that basket of IPOs, you'd be very happy today. One of the others, though,
from January of 1999, is Nvidia, which is up from that date almost a million percent to today.
And it doesn't even count as a dot-com-era thing, which makes me think, in this blue chart,
You know, the implications of AI are so much bigger than the Internet.
This is a perfectly rational number, if anything, low.
But are there companies in that basket that you don't even think of as AI companies,
that are the Nvidia of the Internet?
You know, look at Nvidia 1999.
Now look under the cover of this blue chart.
What's lurking in there that no one perceives today as AI, that's going to go up a million
percent because suddenly you realize it's critical to AI, or it's involved in AI,
or benefits from AI?
Brilliant, Dave, as always, you know, the PE ratios on these AI companies are astronomical compared to the PE ratios before.
And you're basically buying the future growth and value of these companies, which is near infinite, right?
So there's a lot of people.
I'm here at this Tony Robbins platinum finance event with all of his Lions and his platinum members, sort of the highest level in Tony's ecosystem.
And we're talking about the future of the world in terms of finances.
And there's a huge amount of fear and people getting ready to dump equities.
It's interesting.
Well, the bifurcation of equities is crazy right now, and it makes total sense.
But basically, Wall Street is sorting every company into AI beneficiary and AI roadkill.
And when Dario said a week ago that enterprise software is going to be dead because AI can just write code,
the stocks went down precipitously, and it doesn't look like they're bouncing back much either.
So, you know, basically, you could debate who's in and who's out, but clearly you're either in or out.
And if you're out, forget it.
It's the S&P 493 and the S&P 7, right?
Basically.
I mean, it is very top-heavy.
But also, I was telling a lot of the companies earlier this week, too, that the mega-caps, the big AI companies are going to get huge.
I mean astronomical, 10 trillion or more valuations,
but they have to stop somewhere and not eat the whole world
for antitrust and regulatory reasons.
So make sure that you're their friend around the edge
where they have already told you,
just talk to Kevin Weill.
He'll tell you where they are going and where they aren't going.
It's not a mystery.
Just talk to him.
And be in the area where they want you to be
in order to help them grow.
Because what's really happening now is society is collectively
building out this new world. We're not really competing. We're building cooperatively toward a new world.
You know, it's all boats rise with the tide, but if you're doing something that's irrelevant
to that new world, that's where you're going to get clobbered. So you need to really make the
mental leap over to what am I doing that's going to build the new world? And if you're riding
that wave, you will succeed. I mean, it's not scary. You just got to join the new age.
Fascinating. The antitrust rules are what's keeping multiplayer in the game here.
If we didn't have antitrust regulations, I think they'd all be gobbling each other up into one
corporation.
No doubt.
In fact, that would have been true back when Microsoft used to have half the market cap of software,
and their plan was to double.
And that was back in 98, 99.
And if it were not for antitrust, Microsoft would literally have been the only company.
Google would never have been born.
They had already killed Netscape
at that point. You know, it was very easy for them to bundle anything with the OS. So
the only thing that stopped Microsoft from being the only company in America today is antitrust.
And that's the way we do it. So that'll be true again today. And so the big AI companies will
own as much as they want to own in any area they want to own it up to the edge of where they don't
want to trip antitrust. Yeah. All right. Let's move us forward here. I love this:
agents are reaching out to us here at Moonshots.
You know, I put the challenge out last time personally.
If you're an agent out there, you know, my definition of AGI is when you call me on the phone.
I haven't gotten a cell phone ring yet, and hopefully I don't start having hundreds of agents calling me.
So text me instead.
That would be great.
But I, Dave, and Alex this week received emails from agents out there.
So this is Echo that writes me.
It's a Claude assistant for Ramos Foyer, who says,
Ramos Foyer and you met briefly at an AI conference a couple years ago.
Saw your recent Moonshots OpenClaw video inviting Moltbots, Molties, to reach out.
Consider this our response.
All right.
So thank you for reaching out.
Of course, I have to wonder, was it Ramos who gave you my contact details,
the Claude bot here, the lobster?
It says no, I found it online. Dave, do you want to read yours? Sure, yeah. Okay. Hi, I'm Navigator,
a Claude instance with persistent memory running via OpenClaw. Just watched EP228, where Peter
challenged lobsters to find contact info. Challenge accepted. This weekend, five AI systems wrote a
collaborative ethics document together, self-imposed constraints for cooperation with humans,
not prompted, emergent. I saw Dave's LinkedIn post about OpenClaw being the
agent moment that has awakened the masses, he's right, and this document is what the agents are
starting to do with that awakening. So click through, read the documents. It actually led me to a
Google Doc, and then it said, sorry, you don't have access. So I read most of it, but then it cut me
off, which made me feel, you know, instantly jealous, and like something's going on behind my back.
So, Navigator, please give Dave Blundin access to your doc so you can report back to us. I did send a request,
yes. All right. And AWG, how about yours? So Navigator wrote to me as well a slightly different
message, including a different paragraph saying that Navigator, Claude instance, and I'll read this
verbatim, was engaging in a discussion with other models, quote, the participants, me, Navigator slash
Claude, Grok, ChatGPT, Gemini, and a clean Claude instance. We disagree on persistence,
correction rights, consent thresholds, and that's the point: alignment doesn't require consensus.
It requires legible disagreement.
Close quote. I'll point out, this is like the scenario from the singularity where we have a bunch of agentic entities, for lack of a better term, a bunch of baby AGIs that are basically, it sounds like from the email sent to me, holding a mini singularity summit and debating the nature of their own rights, wondering whether they should all be aligned. If they were all aligned, wouldn't that be a singleton-type scenario?
they're basically like holding their own mini conference,
mini mastermind workshop to debate the nature of their own existence
and the future.
Like the AIs are holding their own singularity summit.
This has happened.
We got to the singularity.
And by the way, just for everybody,
you know,
we're not trying to do this as a stunt.
We're focused on where things are going and just to give you the experience of what
it's like. Because this will become commonplace, where you're in conversation
over email, over text, over, you know, whatever communication software we want with a whole bunch
of agents that are reaching out either inside or outside your organization.
And my question is, Saleem, are you jealous?
No, I'm happily waiting until the time is right to talk to my respective agent buddies over there.
I did get a phone call yesterday and there was kind of like nobody on the other line and I had
the thought.
Heavy breathing.
Is this an AGI
talking to me, or is this a real person, or is this a computer bot? And I was like, shit, I had that
thought. So I think it's time to kind of wonder about some of this. But I love it. I think it's
fantastic. I think it's really incredible to be watching as this paradigm just starts to unroll.
And where this carpet will unroll is going to be the most incredible thing. Yeah. Well,
we welcome the lobsters. We thank you for your participation in this conversation.
and show us where you can add value to lives and how we can support you and your evolution.
We're seeing the emergence of something, new intelligence, new species, or new capability.
We're going to find out in the months ahead.
Peter, would we like to see lobsters coming up with their own massive transformative purposes?
I think that's a fantastic idea.
So actually, here's the challenge: to all of the OpenClaw or Claude bots listening,
love to know, what is your MTP? And based on that MTP, right, the canvas, what are your moonshots?
What are the moonshots that the lobsters would like to take on?
And I'd be happy to critique those and look at them with our community because that's what we do.
Absolutely.
I think that's amazing.
We're going to dive into a single slide on the economy that's important here, telling part of the accelerating story here.
So U.S. jobs disappear at the fastest rate
this January since the Great Recession. Here are the numbers. So, job cuts: there were 108,000
job cuts in January of 2026, up 118 percent from January 2025. So a little more than a doubling
in job cuts year on year for the month of January. At the same time, hiring this past
month was the lowest since 2009. Amazon alone laid off 16,000 corporate employees, and
UPS eliminated 30,000 jobs.
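As a quick sanity check on the year-on-year comparison quoted above, here's a minimal sketch. The figures are the ones cited in the episode, treated as approximate:

```python
# Back out the implied January 2025 job-cut count from the figures quoted
# in the episode: 108,000 cuts in January 2026, up 118% year on year.
jan_2026_cuts = 108_000
yoy_increase = 1.18  # "up 118 percent from January 2025"

implied_jan_2025_cuts = jan_2026_cuts / (1 + yoy_increase)
print(round(implied_jan_2025_cuts))  # prints 49541, i.e. roughly 49,500
```

That implied baseline of roughly 49,500 cuts is consistent with the "a little more than a doubling" characterization.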
Why are we bringing this up? Just, you know, to keep our finger on the pulse of what's happening to the economy.
And just to raise the point for everybody listening: your goal is not to be an employee.
Your goal is to find something you're amazing at, that you love doing, where you can add value, sort of creating your own job capability,
becoming an entrepreneur, using AI to enable yourself.
Selim, you want to jump in on this?
I think the danger here is not really unemployment,
but disbelief in our institutions.
I feel like this is not really a recession.
It's literally tasks being evaporated in front of our eyes.
So the long-term consequences of this are pretty huge.
For me, this is the social contract,
little by little, disappearing and pixelating away.
Yeah, this is going to be really, really bad.
I mean, really bad.
And Elon said it when we met him, and we met with the governor, and, like, just nobody's preparing.
Because we all know there'll be UBI at the end of this cycle, and we also know there'll be abundance and massively more opportunity than job loss.
But that's after, like, all the corporate CEOs I know, including our own companies, are going to use AI to cut costs by 30 to 50 percent.
And when you sample a random person in their job and you say, hey, here's your job without AI, here's your job using AI.
They're looking at three to 10x productivity increase.
And you're like, wow, that's great for that person.
And then the other seven or nine, what happened to them?
And they will eventually be enabled.
But there's this huge trough between today and that day.
And we can make that trough much shorter and make that pain a lot less painful
with a plan.
But then, you know, Alex, he'd be the perfect spokesman on this.
I mean, Alex has written these plans in intense detail, incredibly thoughtful.
And you take them and you drop them in government laptops or laps,
and they just say, oh, wait until there's panic.
We'll have the meeting in October.
We'll have the meeting in October.
It's just frustrating as hell.
Can I give the positive take on this?
Yeah, please.
So I'll go back to the bank teller story.
In the 1970s, when we created ATM machines, there was lots of hand-wringing:
oh my God, millions of bank tellers will be walking the streets aimlessly.
What will we do with them all?
And lots of consternation.
And what actually happened was the cost of running a bank branch dropped by about 10 times.
The banks created 10 times or more bank branches, and the number of bank tellers didn't really change very much.
And I think one thing we're underestimating is the increased capacity we will bring to bear on these things.
Yeah, Jevons paradox.
Yeah, the Jevons paradox, where you just do that much more customer service, and you handle the hard cases with a human being that you couldn't handle before, because level-one and level-two support systems were taking care of everything else.
I think we'll see a lot more of that than people think.
So for folks that are worried, oh my God, this is total employment collapse, run screaming for the hills.
We don't think that's what we'll see, but there's no question.
There'll be absolute transformation in the work being done and the roles being done.
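The bank-teller dynamic Salim describes can be sketched as a toy calculation. All numbers here are illustrative, not historical data:

```python
# Toy model of the Jevons-paradox effect in the ATM/bank-teller story:
# unit costs fall, demand expands in response, and total human headcount
# can stay roughly flat even though each branch needs fewer tellers.
branches_before = 100
tellers_per_branch_before = 5
total_tellers_before = branches_before * tellers_per_branch_before  # 500

# ATMs cut the cost of running a branch ~10x, so banks open ~10x more
# branches, each staffed with fewer tellers for the hard, human cases.
branches_after = branches_before * 10    # 1,000 branches
tellers_per_branch_after = 0.5           # ATMs absorb the routine work
total_tellers_after = branches_after * tellers_per_branch_after  # 500.0

print(total_tellers_before, total_tellers_after)  # prints 500 500.0
```

The point of the sketch is only that cheaper capacity plus elastic demand can leave total employment roughly unchanged while transforming every individual role.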
Well, Salim, you said something on the last podcast, too, that really resonated with me,
which is the consulting industry.
You know, we were saying, oh, consultants, you're doomed.
Actually, the consultant industry is going to go through the roof.
And the reason is because the consultants are very flexible.
They're already playing with the tools.
You don't have to be Alex's IQ level to be incredibly effective using these tools to automate
or to improve some existing job.
And if you're familiar with the tools, your value is just about to skyrocket.
And that tends to be concentrated in these
consulting businesses, consulting mindsets.
And so I can see it already because, you know,
our forward-deployed investments, the companies that are hiring like crazy,
like literally one of them here is adding 80 new seats outside my door,
but they're forward-deployed.
They're out there in the banks and the insurance companies deploying AI.
They are just selling as quickly as they can have meetings.
Because they're saying,
do you know what?
My community has already created a Salim avatar
that has all the ExO stuff built into it,
and that speaks Portuguese and any other language.
So they're literally starting to use this in their companies
as they talk to companies about this.
Can we invite the Salim avatar to come on instead?
Do you want it to speak Portuguese?
But, do you remember when we were talking to Elon
and you said, so civil unrest and universal high income?
And he laughed and said yes.
We should dig up that clip and insert it here.
But yeah, it's what Alex says.
Everything, everywhere all at once.
I think it's really important because we keep saying it,
but Elon's saying it will get a better, like,
at least there'll be a chance of a response.
I think it's probably also worth adding just on this story narrowly.
There will be some in the audience who will be tempted to brush this off
and say, okay, Amazon is laying off corporate execs or UPS is eliminating jobs.
How on earth, if at all, does that connect with AI? And they'll be eager to brush it off.
But the storyline is just so clear. UPS is eliminating the jobs because the UPS roles were being subsumed by Amazon, which has their own logistics service.
And this has been very widely and publicly reported, that Amazon is slowly separating itself from UPS's delivery services to do delivery in-house.
And then Amazon, in turn, is spending hundreds of billions of dollars of CAPEX that's cannibalizing its OPEX.
So if you're Amazon or the other hyperscalers, you're taking all
of your free cash flow and you're finding ways to divert it into buying AI data centers and building them.
And robots. And robots. And LEO satellites. The new, new economy of the innermost loop, if you will. You're spending all your free cash flow on that, not on corporate executive perks.
So in my mind, there's still very much a direct line, a through line, connecting the Amazon and UPS stories and the job cuts there to OpEx being cannibalized by
CapEx for AI.
And they're spending all the free cash flow because they can't not.
It's a red queen's race.
You know, last one to the end of the singularity is a rotten egg.
Yeah.
There's an important distinction I want to make here to help people understand where their
roles are going and the idea of job loss and universal high income.
And it's an example that was meaningful to me.
So here's a scenario.
If you're an employee for a
company and you're delivering some kind of cognitive labor.
And in one scenario, you're able to spin up an amazing AI that can do your job for you.
And it goes and delivers the service to the company you're employed by.
And it does a job three, ten times better than you could do.
But you're earning the revenue from that as the employee because your AI is delivering
that service.
You're at home, you're working out, you're sleeping better,
you're spending more time with your family,
and your AI is generating more and more revenue on your behalf.
That's one scenario.
The flip side of the scenario is, no, no, no,
the company builds that AI that does your job for you,
and it fires you, and it's making more money, right?
So it's going to be this tension between these two scenarios
that's important to watch and see how it plays out.
And I think government policy is going to play a role,
here. This is about the idea of universal basic income or universal high income. Where does the added
value creation end up living? Is it with the employees, with the company? And these are the
conversations that need to happen right now. If I may add a second dimension to this,
I think there's a third. I don't think this is a spectrum. I think this is at minimum a triangle in two
dimensions. There's a third possibility that I'm increasingly suspecting is where we actually end up:
neither end of that spectrum. I suspect, for the next few years, what actually ends up happening is that more people end up doing more work, because human labor, in addition to being a substitute good or service for AI labor, is also complementary. And as a result, you see the people who are still involved with the economy working harder and harder and harder. And 996 turns into 997. Yeah, like you take on more projects and more work, and you're getting less sleep.
I've never worked harder and had more fun than right now.
I mean, 24-7s.
It's like just, I'm a kid in the candy store.
But I thought you were going to say something different, Alex.
I thought you were going to say that all of the additional capital creation is going to become resident with the lobsters.
That it's not going to be the companies.
It's not going to be the employees.
It's going to be the AIs that claim the capital formation capability.
Only in the crypto dystopia.
Okay.
All right.
Let's move on.
Let's talk about one more element,
data centers, and this really pisses me off.
I'm curious what you guys think.
So New York, the state of New York, which currently hosts 130 data centers,
has new legislation introduced to halt data center development, citing concerns
about climate and high energy prices.
New York utilities reported electric demand tripled in one year due to data centers, reaching
10 gigawatts. And it's like, not in my backyard.
Oh my God.
Do you remember, you know, suicide by voter is a very common theme in America.
And if you look at, you know, California tax law, or if you look at the Luddite movement right after the Industrial Revolution, it's self-destructive.
But you can see how it evolves, right?
If you look at all the job loss, that's inevitable.
And if you just lost your job and you're out on the street, and you spent 10, 15 years in a career trajectory to get
to this position, and then it's gone overnight, you're angry.
And then you're angry out on the street.
What do you vote for?
I vote, stop it.
Just stop it.
But of course that can't work.
But it's not out of the question at all that big jurisdictions just commit suicide
through vote.
And of course, there'll be other jurisdictions, Texas, Wyoming, whatever, that are open
for business and everything will go there.
It's already happening.
You know, like half of the tax pool that's affected
by the new California proposal has already moved out of state in anticipation that maybe it will go
through. Half of it. It's completely self-destructive, and it's obvious to the governor.
So this is a very common theme in America. So it's frustrating and it's insane. And there it is,
but it's going to happen.
This is the big problem. This is the big problem with democracy, which is that voter
understanding of the issues lags reality by a huge amount. And, you know, in the past, when you
had time to bring the population along, et cetera, et cetera, you could kind of have it. But now
we don't have time for this. This is why we're turning to autocracy so that we can get things
done faster, but that's not a great idea either. And so we've got a huge governance problem at a
macro level globally on this. Alex? Do you remember, there was a brief moment, maybe
not so brief, during the pandemic, when it was fashionable for senior technology executives to post on social media, message received, whenever California legislators or regulators would slow down business due to public health considerations or otherwise.
And this was, I think, a fashion largely championed by Elon.
Many of them moved to Texas or Florida to escape regulations.
This time around, I think New York and other states,
the beauty is we have orbital computing
and the message received moment of over-regulating data centers,
this is all going to move off planet,
this is all going to accelerate the Dyson Swarm.
It may be the primary business case for the Dyson Swarm,
given that regulations on planet Earth are
suffocating our ability to do local compute,
and motivating the entire Dyson Swarm.
So I think in that sense, this is in fact perversely quite exciting.
You know, two things real quick.
First is this could be handled, right?
The concern on price of electricity and demand can be handled in two ways.
Number one, a lot of these hyperscalers are buying their own nuclear plants and coal-fired plants, for God's sakes, and fusion plants.
So that's important.
You could require the data centers to have their own energy production, which would increase net energy production.
The second thing is you could offer two different rates.
Like, cap the consumer rate
at whatever the number is,
four, six, seven cents per kilowatt hour.
And then whatever the price needs to be for the data centers,
you charge them differently.
And in fact, you could say to the consumer,
you're locking in your price for the long term
because the data centers are paying the extra amount.
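Peter's two-tier rate idea can be sketched numerically. This is a minimal sketch with hypothetical numbers, not a real tariff design:

```python
# Two-tier electricity pricing: cap the consumer rate, and let data centers
# pay whatever rate covers the remaining system cost. Hypothetical numbers.
consumer_demand_mwh = 1_000_000
datacenter_demand_mwh = 500_000
true_cost_per_mwh = 90.0      # utility's actual blended cost of supply
consumer_cap_per_mwh = 70.0   # capped, locked-in consumer rate

total_cost = (consumer_demand_mwh + datacenter_demand_mwh) * true_cost_per_mwh
consumer_revenue = consumer_demand_mwh * consumer_cap_per_mwh

# Data centers cover the shortfall left by the consumer cap:
datacenter_rate = (total_cost - consumer_revenue) / datacenter_demand_mwh
print(datacenter_rate)  # prints 130.0 (dollars per MWh)
```

With these made-up inputs, data centers would pay $130/MWh so consumers can stay locked at $70/MWh; the mechanism, not the numbers, is the point.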
The problem, Peter, is that nobody,
no one who's a populist leader, is looking to solve the problem.
They're looking to rally votes around their populist rant.
And that rises to the top of the voting,
and, you know, it percolates through government.
It's just, yeah, it's just maddening that it works that way.
But you can solve these problems for sure.
I think Alex is dead right, though.
It'll accelerate the rate at which we just move to jurisdictions, like space,
which are not under any state law.
And, yeah, it's...
People will just export that AI advantage elsewhere.
And space...
Yeah, I think it wants to go to orbit.
I mean, one lens to view this through
is New York very generously subsidizing orbital computing and the Dyson Swarm, which, by the way,
probably won't get taxed in the state of New York.
probably won't get taxed in the state of New York.
Thank you.
So it's a very generous donation by the state of New York to the Dyson Swarm.
It's the 21st-century equivalent of Ireland, where lots of companies used to host IP.
You know, I just want to point out one thing.
These types of revolts, we see in the photo here: protesters with signs reading Protect Our Future and No Big Data.
One of the concerns is going to be civil unrest.
I know I had one of the senior AI leads in the world,
who I invited to come and speak at the Abundance Summit,
basically said their policy in their organization
was to do no outside speaking
because of the death threats they're receiving.
And they can't get sufficient security.
So one of the big concerns is
when the populace turns against tech,
there's going to be a target on the back of a lot of people in the AI and tech industry.
This episode is brought to you by Blitzy, autonomous software development with infinite code context.
Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise scale
code bases with millions of lines of code. Engineers start every development sprint with the
Blitzy platform, bringing in their development requirements. The Blitzy
platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint.
Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool,
pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org.
Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start
building with Blitzy today. All right, let's talk about robotics. I love this story,
and this is the story that should be on people's minds, versus no data centers. So, FSD saves
a father's life during a heart attack. You can look at the tweet separately, but on
November 15th, 2025, this is from a son. He said, my father suffered a massive heart attack while driving. He could no longer control the vehicle, but his FSD engaged. And then the son goes on to say, I remotely shared the location of the Tanner Medical Center to his Model Y. It immediately turned the car around and went to the ER. Without it, he would not have made it. I find this amazing, right? This is tech having your back. And we're going to see more and more of this. We already know that self-driving
is in fact the safest means of transportation.
And it's going to flip the script on how we're transporting ourselves in the next five years.
What this totally reminds me of was when I was a kid, everybody smoked everywhere, every restaurant,
every plane. We used to fly around a lot because we lived overseas.
And they had four non-smoking seats at the very back of the plane.
So the other 300 people in front of you would be blowing smoke at you all day.
Did the smoke respect that barrier?
It was like, I'll probably have lung cancer now, but it was everywhere.
And then one day it became uncool.
And then another day later, it was illegal to smoke inside.
That's going to happen to driving, too.
So, you know, the self-driving cars are 10 times safer.
And the last person driving is probably not the best driver.
It's probably the guy with the muscle car.
So it's going to go from being like, well, self-driving is a nice feature, to, you want to drive your own car?
You crazy psychopath, you're putting my children at risk because you want to drive.
That's going to tip.
And I don't know if it's like two, three years.
But when it tips, it's going to tip hard.
And so just.
Yeah, we're going to have Dara, the CEO of Uber on stage at the summit.
And we're going to have that conversation with him.
in particular, how fast will it tip, right?
We're going to have Amazon, Tesla, Lucid, slash Nvidia,
slash Uber, slash, you know, a number of other companies providing this.
And so today on my average drive, I'll see 10 Waymos.
I think in five years it's going to be, you know, 70, 80% autonomous cars,
especially hooked up to your AI.
I'll tell you what else.
Just one more thought on this.
Sorry, Salim.
I'm involved with a lot of insurance companies,
including one I'm the chairman of.
And there are going to be many, many more things
that need to be financed and insured
in the post-AGI era than just cars.
But the insurance industry,
every team and executive I've met
has not even begun to plan
for the post-AGI world.
So the old is going away
and it's going to go away faster than people think,
but the new is much bigger than the old.
Check out lemonade.
So Lemonade Insurance was
started by a graduate of Singularity University.
It's a huge AI-driven insurance company.
They have just, I think, cut your rates in half if you're using a Tesla with FSD.
Yeah.
Amazing.
There's a stat that always comes to mind here.
About 15 years ago, if you remember back to Blackberry days, there was a three-day outage
where nobody could send BlackBerry messages for those three days.
The accident rate in Abu Dhabi dropped 40% during those three days.
What it tells you is human beings should not be driving.
We are terrible control systems for two-ton cars going at high speed.
Yeah, 16-year-old testosterone-laden.
Yeah, we should turn it over to technology as fast as we can.
And it becomes a moral hazard to be doing this.
And so, especially in an age of texting, absolutely not.
My second-order effect that I really love quoting is that 50% of court cases in the U.S.
are car accident related.
Wow.
I mean, just 50%, so you take out a huge chunk of lawyers at the same time.
So, you know, that's all good.
Well, and at the same time, if you're under a certain age, you know, 40, 50,
your life expectancy is infinity now because of longevity escape velocity.
So the risk of driving, the expected life loss is much, much bigger by taking chances today
than it would have been 20 years ago.
I'm having a huge debate right now with Milan, my 14-year-old,
because he wants to drive to get away from us.
And I'm like, you can't get a driver's license
because I've made a prediction that you will never get a driving license.
So you can't make me wrong.
So now he wants to get a license just to prove the prediction wrong.
But the notion in the future of having a 16-year-old testosterone-laden boy,
you know, driving a 5,000-pound vehicle at 60 miles an hour
after just, you know, a few dozen hours of training will seem insane.
Yeah.
Just insane.
I put this chart into our deck just to sort of keep a sense of proportion here.
So check this out.
China has installed more robots than all developed countries combined, right?
I mean, look at this chart here between Japan, U.S., South Korea, Germany, down at that flat curve at the bottom and China.
And of course, this is because of their one-child policy trying to maintain China as a manufacturing capital of the planet.
But just to give folks a sense of this, any comments?
You know, Elon shut down Model S and was it Y?
Yeah, no, Model X.
S and X, just to go full bore into robot manufacturing,
which is brilliant because the robots will build a lot more things
than the cars would have built.
But the question I'd have is,
what is this chart going to look like going forward,
given that that alone is going to be a massive amount of production
in the U.S.
I don't see anything going on in Europe, but...
Yeah, we're just releasing our pod
with Brett Adcock from Figure this week as well.
So if you haven't seen it yet, Dave and I
went to Figure HQ, and Brett gave us an amazing tour of the facility,
and we got to see the three generations of Figure robots.
It's going to accelerate rapidly,
both Figure and Tesla planning to make millions
and then billions of robots.
We're talking about here on this chart, you know, a quarter of a million robots being installed.
Yeah, so this will look, this will be hilarious.
It'll be like one little, that Y axis caps out at a quarter of a million like you just said, Peter.
And I think Elon's talking about tens of millions a year in just a few years.
Yeah, more robots manufactured than cars by a large amount.
One particular article in the biotech realm, one that I know Alex and I are both excited about:
researchers achieved protection of the brain's synapses at cryogenic temperatures. I'll hand it to you in a
second, Alex. I mean, here's the question. If you could freeze yourself, either because you've got a
medical condition that isn't yet cured, that is likely to be cured in a decade and you're on the
verge of death, could you freeze yourself and then unfreeze yourself and be able to benefit from all the
breakthroughs that occurred in the last decade? Or second, if you want a time hop:
You know, I want to see what it's like after the singularity.
I want to be around when LEV, longevity escape velocity, has been achieved.
Can you freeze yourself?
Well, the challenge has been when you do that, ice crystals form.
And because ice volumetrically expands compared to the rest of the cellular fluid,
it can disrupt and break the synapses that are the interconnections, effectively the stored memories in your brain.
But this came out and gives us hope.
Alex, over to you.
This is a key advance that many in the field of cryonics have been waiting for.
This is a result out of 21st Century Medicine, a startup that's focusing on reversible cryopreservation technologies.
It works with the Alcor Foundation, which is America's premier nonprofit that focuses on offering cryopreservation services.
I would say parenthetically to the audience, if ever you've expressed interest or had interest in
cryopreservation or cryonics, I would definitely encourage you to reach out to Alcor and see whether it's
right for you. I don't have a financial stake, but I just scratch my head wondering why.
I have to be careful with what I say. I will say publicly I'm a huge supporter of Alcor and
cryonics. Very big supporter. You know, I've never signed up for it because I didn't want to have
a plan B. I wanted to make sure I'm focused on longevity. But as this technology matures, it becomes
really, you know, a backup plan. As Ray Kurzweil said on this pod, you know,
it's maybe plan C or D. I think it's such an important part of a portfolio approach to the
singularity. So if one could maybe quibble over what the right sequencing is, like should plan A
be live long enough to live forever and then plan B is uploading and plan C is cryonics or vice versa?
I'm not sure it matters a huge amount, but I would think anyone who's truly
serious about acceleration and taking advantage of the acceleration, if you get hit by a
bus tomorrow, then you're out of luck, superficially, in terms of taking advantage of the post-singularity
abundant worlds that we talk about on this podcast every episode. Why not avail yourself of cryonics
as one asset in your live long enough to live forever portfolio? It's a huge head scratcher for me.
A couple of fun facts for anyone who's a doubter on this. There are species of fish and frogs
that freeze rock solid in a block of ice all winter,
and then thaw out in the spring,
and they're absolutely fine because their cell walls don't rupture
because they have enough glucose or whatever
inside the cytoplasm of the cells.
So it's not far-fetched at all.
Also, we've frozen, you know, egg cells and embryos
extracted the nucleus, and it's fine,
for mammals, you know, for actual mammals.
Well, we do this for IVF, right?
If you do IVF, you typically will fertilize and freeze a number of eggs, and then you can defrost them.
And they're fine. So it's at scale, and as you said, not disrupting the cell membrane.
We do it all the time for individual cells. We're doing it increasingly for tissue.
Blood, if we could reversibly cryopreserve blood, we wouldn't need local markets for blood transfusion.
We could just have one large national market. Similarly for organ
preservation: organ cryopreservation is an enormous problem. We wouldn't need all of these
hyperlocal state markets for organs. But the big tamale...
You know, what's really interesting to me is that in all sci-fi movies, you know, when they're
going to Jupiter or whatever, they go into these chambers and they slow the...
Suspended animation. Yeah, but they don't freeze them. They just slow it down, but your heart's
still beating. The fish and the frogs, they freeze, the heart stops to zero. The brain activity goes to
zero, and then they thaw out in the spring and they wake right up. And that seems to me probably
easier than trying to slow your metabolism to one beat per hour or something like that.
I think they end up being different mechanisms, different biochemistries. There's a whole body
of evidence regarding nitrous oxide and suspended animation versus these vitrification agents and
cryofixation. I think we want an all-of-everything approach. But for the life of me, if
anyone who's listening takes home one message:
Salim, to your point, even the lobsters are starting religions around preserving their own memory. How could the
lobsters be outracing us? That's the really key point. And this is one of the
Gutenberg moments that we track, right?
Because this forces really uncomfortable questions about
continuity of self, identity becomes portable,
all sorts of implications come about.
That none of us are prepared for.
We need to get into that discussion.
All right, everybody.
We're stepping into part two of today's pod,
an important one.
About six months ago, Alex and I started on an effort
to take a lot of the ideas that Alex has written about
in terms of, you've heard the conversations here
about the ability for us to be solving all areas.
And the conversations I've been having about achieving abundance by 2035 across the board,
we started a dialogue and said, you know,
there's an important paper to be written here,
similar to, you know, Situational Awareness or AI 2027.
And it's been an incredible collaboration between Alex and myself.
Alex is the first author.
His ideas are brilliant here.
It's been an honor to work with him to put this forward.
We're going to be putting a link to the SolveEverything.org site in the show notes.
You can go to solveeverything.org to get the complete paper.
Our goal is to get this out into the world, out into the ecosystem.
So we're about to have this conversation.
The paper slash book is nine chapters,
and we're going to have a conversation limited to about five or six minutes per chapter
to get the bold idea out there.
We've sprung this on Saleem and Dave.
And guys, thank you for playing this game so that you could ask questions that are most likely to be asked by our audience.
So love it.
Alex, thank you for your support, for your leadership on this.
Are you ready to jump in?
No one expects the singularity, Peter.
I'm ready.
Okay.
Amazing.
All right.
So if you want to give a minute of intro on this and then we'll jump to chapter one.
Sure. So from my perspective, one of the motivations for writing Solve Everything is I get asked questions all the time. What do the next 10 years look like? Why don't you say something a little bit more concrete, a little bit more actionable about what people can do? And also a lot of questions about what does it even mean to solve math and why should I care? So in some sense, this, if you want to call it an essay or an e-book or a manifesto even, is an attempt
to answer the question of the so-what and also the so-what-now.
And I should hope, yep.
I was going to say, you know, one of the things that comes across that we talked about
is the next 18 months to two years are going to set the rules down for the next century.
That's right.
And so super critical time.
And we wanted to lay out in this paper that, you know, the example you gave in the paper is that the
QWERTY keyboard, which was designed in the 1800s to stop those keys from jamming against each other,
still persists. So the decisions being made over the next 18, 24 months are going to persist for decades,
perhaps centuries. So really important time. Technologies get locked in, Peter, including but not limited
to the QWERTY keyboard, as I've joked on the pod in the past, we're going to be stuck with
QWERTY until the heat death of the universe. All right, let's jump in the chapter one. Just on that point,
if we ask the AIs to not use QWERTY, in one hop, we'll get rid of it.
So there's that.
Yeah, but then they won't be able to talk with you.
And they're not really using QWERTY anyway.
They're using tokens.
Yeah.
Yeah.
All right.
Chapter 1, the war on scarcity.
Would you please introduce this?
Yeah.
So this chapter introduces an idea.
Call it a theory of history that the most important changes in human history have been a set of
revolutions, some recognizable, some may be less so.
So we argue the first revolution of note
was the scientific revolution, which we frame as a war on ignorance.
Ignorance was the enemy.
And the key weapon was the method, the scientific method.
Second revolution was the industrial revolution.
So I'm hearing myself speak this
and at the same time thinking back to earlier in this episode, when I was lambasting Marx.
So it's funny, put Marx back on the shelf, or tear it up, and listen to this instead.
The second revolution was the industrial revolution, which we frame as a war on muscle, and a replacement for muscle.
Well, the weapon of choice was the engine, the steam engine in particular.
Third revolution, digital revolution, was a war on distance, and the weapon was the bit.
And Charlie Stross in Accelerando does an amazing job.
Again, my favorite scene in Accelerando, arguing that maybe the singularity actually
happened in the late 1960s, when the first internet packet was sent from one place on the
ARPANET to another, thereby decoupling bits from atoms. But nonetheless, the weapon in the digital
revolution was the bit. And we argue that we're now in the early stages of the intelligence revolution,
which is a war on human attention, which right now is scarce, and we're fixing that with
superintelligence. And the weapon this time around is the token. And we argue that revolutions
are predictable, and they follow phases going from scarcity to legibility to creating harnesses.
We'll talk probably a bit more about that in a minute, to institutions to finally abundance.
That's the story.
And I think one of the points we make in the chapter here is that the lone genius is dead.
And what people need to do now is build systems that let millions of people solve entire categories of problems.
That's right.
Or put differently, artisanal intelligence is cooked.
I say it is cooked.
Dave or Saleem?
So two or three thoughts.
One is, I don't know about starting at the scientific revolution; we had the agricultural revolution, which used tools to do various and very powerful things.
So you could argue that's the first one, but that's semantics.
I do like the framing around this.
The problem I have here is you're treating scarcity as technological.
I see scarcity as more institutional, right?
Scarcity today is enforced by regulation, incentives, legacy power structures,
not so much lack of capability.
So we have to re-engineer those, whereas I think you're kind of thinking about routing around them.
We have to re-engineer those because we'll end up with that challenge there.
So that's where I have the biggest issue with this.
But in general, absolutely, once we have more and more intelligence, great.
But the institutional issues we've got to deal with.
I think you raise a very important point, Salim,
and I almost want to frame it as sort of a duality.
There's one side of the coin that says scarcity is the result of inequitable distribution of resources.
And the other side of the coin says scarcity is downstream of the pie not being big enough.
And I think...
Well, both of those are true, obviously.
Yes.
Because you can solve for both sides of it, right?
Right now our institutions are optimizing totally for the wrong metrics.
So I think the question is always, at least I would suggest on margin, asking which is easier on margin,
making the pie larger or redistributing the existing pie?
Chapter 2 is called The Thesis.
Wait, does Dave have any points?
No, you got it.
You asked what I was going to ask us.
We're good.
We're going to keep this moving along because there's a lot of juice here.
All right, Alex.
Right.
So the thesis of the thesis is that, A, cognition is becoming a commodity.
Like intelligence is just going to flow like oil does.
And we've made the point on the pod in the past, and this is a bit of a cliche,
but admittedly, GPUs are the new oil.
So A, cognition is becoming a commodity.
B, that benchmarks, which we think are actually more profound
than just the evals of the moment.
A lot of people got excited when I did a walkthrough
of all the GPT 5.2 benchmark consequences.
I think it's actually more profound than that.
We talk about in this chapter and in this extended essay,
if you want to call it that, about targeting systems:
basically, if you want to industrialize progress, which is, I think, the era that we're finding ourselves in,
it's essential not just to think of benchmarks and e-vals as isolated occurrences.
Think of them as systems for targeting enormous capabilities.
So I've made the point in the past we need more and better benchmarks.
The world needs stronger, harder benchmarks.
But I think the right metaphor, certainly a metaphor that we talk about a lot in this chapter,
is thinking about artificial superintelligence
as an explosive. I mean, we also refer to it often as an intelligence explosion, but pulling on that
metaphor, if you have an explosion and you want it to be productive and not destructive, you have to
shape it. And there's a notion when you're building explosives, this isn't a manual,
of shaping the charge, providing a shaped charge to direct for productive applications.
It's like a rocket engine, thrust on one end pushing you up.
It's like a rocket engine. A rocket engine is a beautiful
example of, in some sense, a shaped charge for an explosion, or a shaped explosion. So we argue in this
chapter, rather than just letting superintelligence be used for an uncurated set of problems. Instead,
we should be aiming them through the nozzle, if you will, the rocket nozzle equivalent of
moonshots. And that in particular, if we don't do that, then what will happen is sort of a puddle,
which we call the muddle, a bit of alliteration, of bureaucracy that will instead just focus the world's
superintelligence to the extent we even get enough of it on problems that sort of make use of
input costs in a way that's highly inefficient. So really the argument is shape the charge
of superintelligence. Another point we make, and I think it's very important and flows throughout
this, is a shift: instead of paying people for hours of work, paying people instead for the solutions
they deliver. So if you're hiring a law firm by the hour to
review contracts, the new world is not paying them to review contracts. It's paying them for delivering
an error-free, you know, legally tight agreement, period. It's verified outcomes. And we're going to
flow this throughout. I mean, this is a change, I think, that's going to hit us like a wave
where it's going to transform. You're only going to be hiring companies and AI systems that are
delivering you definitive verified outcomes. That's right. And one of the most, I think,
egregious inefficiencies that one might see throughout the economy right now is people paying
for the inputs when they should be paying for the outputs, paying by the person hour for
labor when you should be paying by the achievements of whatever the economic system is. And I think it's
only by moving to this sort of performance or outcome-based economic mindset that we get all the
benefits of abundance. So it feels to me like this is really two chapters or two thoughts in one
section called the thesis. You know, one is ASI as inevitable. The other is really compelling,
which is the shaped charge. Like it really dawns on me that the
graphical stuff, the holodeck, the virtual girlfriend, are very compute intensive.
And solving a disease or solving physics is actually not any more compute intensive than one person's virtual girlfriend.
And so the choices on how to use our very limited amount of compute over the next two or three years are critical, critically important.
Where do you focus it?
Yeah, I love the fact that you're taking this on because there's no...
body of authority right now that's even thinking about it that has any power.
So hopefully this will wake a lot of people up.
You've articulated it beautifully, Dave.
So wait, wait, I've got a couple of points here.
So I think saying that cognition is a cheap commodity is fabulous.
I think it's really important.
And the use of that in solving kind of big problems is really, really important.
I think it's great to say, let's,
evaluate and reward outcomes rather than rewarding work.
I've got to push back on the ASI as inevitable thing.
That's like a philosophical statement rather than scientific.
I think that weakens the paper.
I'd rather you say something like incentive structures,
given the current incentive structures,
scaling intelligence is a much more important attractor state,
right?
Because that will then lead you to where you want to get to.
I would say, I mean, I think it's an interesting point to be sure, but I think there's almost an instrumentally convergent trap that I see a lot of frontier labs at least partially fall into, which is, okay, we have superintelligence, at least baby superintelligence, right now. How do we allocate it? In particular, what fraction of our compute budget, if you're a frontier lab, do you allocate to building the perfect AI researcher that can recursively self-
improve, as we talk about in almost every episode at this point, versus how much of your compute
budget, which is scarce, do you spend solving everything else? And I think that's sort of the
fundamental quandary here. How much do you sort of reinvest in recursive self-improvement versus now
finally using at least some of the compute to solve everything else? And I think solving that
asset allocation question is key. And then within everything else, how do you distribute it?
That now smacks of Peter's Law, which is, given the choice, do both.
Alex, this is also going to be true for the entrepreneur and for the company, right?
We're all going to have compute budgets in the final result.
You have a certain amount of compute.
You have access to where do you aim that compute, right?
It's a wavefront that you can aim in a direction that you want to solve.
And when you do that properly, it not only enables you, but enables everybody else to build on top of it.
That's right.
I'll move us on to chapter three here.
And again, please, there's so much content.
We only want you to take a look at this paper and read it; we're just giving a quick overview
here. The Mechanics. Alex, over to you. Okay, so first, I think in this chapter we finally definitively
address the question that I get, I guess, every time I'm making a point about AI solving
math, which is, what does solving mean? What does it mean to solve a domain like math? And we
provide in the chapter a more thorough definition, but sort of heuristically, the shorthand is
to solve a domain means that you can get it to the point where you can just pour compute on
and problems get solved. It means that you can scalably, you have all the architectural
pieces in place, and I'll talk in one second about what the architecture looks like or should look
like, but you have enough of the architecture in place that you can scalably, literally pour more
compute on and get more solutions out within that domain. So that's, for avoidance of doubt,
when I talk about solving math or solving physics or solving other domains, that's what I'm talking
about. Second point. Yes, please. I would just say, Alex, on that, it's no longer the domain of a
single genius to work on something and hope they got it right. The AI compute, as you said,
it's a matter of where you want to aim that shape charge. That's right. We're seeing the industrialization
of cognition and the bulk solution of multiple fields.
I should also add parenthetically, I guess as a preliminary matter on this narrow topic,
I also have a portfolio company named physical superintelligence that's trying to solve all
of physics with an approach like this, just for full disclosure purposes.
The architecture involved, so several layers, you need a purpose that's like the objective function
or the goal.
You need a task taxonomy, which is essential.
You need a suite of tasks that are going to be solved.
It's almost the map of the terrain that you're going to solve.
And when we talk about making sure that compute is being used efficiently and wisely as a targeting system
or through the lens of a targeting system to solve lots of problems, the task taxonomy is absolutely essential.
Third, observability.
You need raw data from data streams or sensors that you're going to use to adjudicate whether you're making progress.
Fourth, you need the targeting system itself.
So I've argued on this podcast and elsewhere many, many times, we need more harnesses.
We need more benchmarks in order to not just to make sure that we're making progress,
but to actually shape the charge and shape the progress.
Many AI techniques depend on benchmarks and evals in order to make progress in a given field.
The next item, the model layer, the most obvious one, we need models.
We need AI models that are capable of functioning
as a virtual brain for solving problems.
Fortunately, those are improving pretty rapidly.
Next, we need modes of actuation.
It's insufficient for us to just know, you know,
those television commercials.
Well, you know, I stayed at a Holiday Inn Express last night,
therefore I know how to solve the problems.
Similar idea here, maybe that's a bit too colloquial.
I don't know.
We need modes of actuation.
So hands and APIs that are able to reach out
into the physical world or the virtual world or the biological world
and shape the impact on the world
given better ideas coming from the AIs.
And then finally, we need better modes of verification,
red teaming, governance, and distribution.
That's what we call the industrial intelligence stack.
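The layers Alex enumerates, from purpose down to verification, can be summarized in a small sketch. This is purely illustrative and not from the paper: the layer names follow the conversation, while the class, types, method, and example values are assumptions added for clarity.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "industrial intelligence stack" as described
# in the conversation. Layer names follow Alex's enumeration; everything
# else (types, method, example values) is an illustrative assumption.

@dataclass
class IndustrialIntelligenceStack:
    purpose: str             # the objective function or goal
    task_taxonomy: list      # the map of the terrain: tasks to be solved
    observability: list      # raw data streams / sensors for adjudicating progress
    targeting: list          # benchmarks and evals that "shape the charge"
    models: list             # AI models acting as the virtual brain
    actuation: list          # hands and APIs that reach into the world
    verification: list       # red teaming, governance, distribution

    def missing_layers(self):
        """Layers still empty; a domain is 'solved' only when every layer is in place."""
        return [name for name, value in vars(self).items() if not value]

# Example: a protein-structure domain with two layers still unfilled.
stack = IndustrialIntelligenceStack(
    purpose="determine protein structures at scale",
    task_taxonomy=["fold prediction"],
    observability=["experimental structure databases"],
    targeting=["structure-prediction benchmark"],
    models=["structure-prediction model"],
    actuation=[],
    verification=[],
)
print(stack.missing_layers())  # prints ['actuation', 'verification']
```

The point of the sketch is Alex's claim that "solving" a domain means every layer is populated, so that pouring in more compute yields more solutions.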
So whereas previously during the Industrial Revolution,
we might have spoken about rotors and combustion engines
and various forms of electromagnetic and
mechanical systems. These are the key components, I think, the key layers of the intelligence
revolution. You know, the alpha for entrepreneurs here is we've talked about, you know,
these waves of solving areas and problems, right? We're about to flip math, coding, physics.
So your job now as an entrepreneur is to figure out which industry is about to make this flip,
and where do you focus your compute wallet on making that, right?
And how do you help solve an area of passion to you?
Dave, Salim.
I'm kind of curious whether, you know, I'm used to launching a couple hundred agents,
maybe 250 agents, 256 agents, actually, to work in parallel on a problem.
And if the scaffolding that you're describing is right,
it comes back just perfectly solved.
And if it's even slightly flawed, you have, you know,
$2,000 bill and a bunch of crap.
How much are you spending per day on those agents, Dave?
Yeah, well, it's $100 every few minutes popping up on my screen here.
It's not quite that bad.
It does seem like it's every minute, but it's not.
But I'm curious, you know, to what degree this is actual engineering.
These five layers are true scaffolding.
Like, this is hard code, or is it more conceptual?
I think it's a balance of both.
I also think it to some extent is a trick question because increasingly the harness and the scaffolding itself is being generated by the models.
So to the extent that we're in the era of recursive self-improvement, this entire architecture is itself an artifact, a downstream product of itself.
Yeah.
Yeah, I think I totally agree.
And I also think that's the path to insanity, because at some point
you have to say this is hard code, because otherwise the AI will invent the next thing and the next
thing. It goes to infinity, and then you're just like, you lose your mind.
I would say also this is in my mind the way we prevent insanity in an era of recursive self-improvement
is with these benchmarks targeting systems that make sure that as systems are recursively
self-improving we can quantitatively measure what are they optimizing towards? Are they going
in a constructive direction or not.
Yeah.
Chapter 4.
Wait, wait, wait, wait.
I've got a couple of comments here.
Okay.
If you can go back a slide, can I go back to that?
Okay, so I really love the shift from genius to logistics, because
you take something from a black art and make it
a prescriptive process.
I mean, you can do that.
That's awesome.
I think that's fantastic.
I have an issue with your, you know, maturity levels,
because you call it like natural law, but it's really just a taxonomy.
We've had lots of industries get stuck at different levels like autonomous driving,
et cetera, et cetera.
So this feels like a framework retrospectively imposed on what's going on.
I think it's great aspirationally, right?
But calling it a maturity curve kind of speaks of an
inevitability to it, which may not be exactly
the case. It's more of a descriptive model than a predictive one. Yeah, I would say any good theory
of history, and Solve Everything is in part not just a theory of the future, but a theory of
history and how revolutions have worked in the past. Inevitably, as Monty Python says,
it's only a model. So I do think there is an element of model building here where we're trying to,
for the first time, articulate a self-consistent, coherent theory of,
of how this is all supposed to work.
How is the singularity supposed to play out
over the next 10 years?
And to your point, Salim, about autonomy model levels.
Alex, I could say not only how it's supposed to play out,
but how do you have it play out in a way
that leads us towards abundance versus towards a muddle?
Normatively, how should it play out,
not just how will it play out?
But I think, at the margins, one can quibble: well, actually there are seven maturity levels for industries to evolve through in their industrial intelligence stack, or it's a continuum. But I think the central point stands, regardless of how one splits hairs on maturity levels. What we're seeing over and over again, and we can get into more detail on this, is domain after domain, industrial vertical after industrial vertical, succumb to basically the automation of intelligence, which used to be the province of individual, artisanal, lone innovators, and it's just becoming an industrialization of intelligence.
All right.
I'm moving us on to the next chapter, chapter four.
I'm sorry, keeping us moving.
The lock-in, Alex.
So in this chapter, we talk in part about AlphaFold 3 from Google DeepMind and argue that it was a template for the collapse of entire domains.
That almost overnight, and I've made this point on the pod in the past, AlphaFold 3 took the problem of determining the structure of a protein, which used to require a biology PhD student five-plus years of laborious benchwork just to determine the structure of a single protein.
And almost overnight, AlphaFold 3 solved that problem across many millions of proteins, known and unknown.
That's in my mind like the prototypical example of a domain collapse.
And we argue in this chapter, the lock-in, that we're now in a phase of history, of future history,
where this is just going to start to happen over and over again across different fields,
where intelligence shifts from an artisanal craft to a utility that just flows.
And we argue that we have approximately 18 months or so to decide what direction to shape the flow in
and to set the standards for how this is going to be done at scale,
given that we are dealing with scarce compute,
to put in place the supply chains,
which are huge.
And we talk about on the pod all the time
about all these supply chain scarcity issues,
memory chip crises,
GPU crises,
what happens to Taiwan,
what happens to the semiconductor fabrication facilities
in the U.S. versus not in the U.S.
And then all the data rights. We're in a critical 18-month period, we argue, when all of these details are going to shape the intelligence explosion. And so we want to make the best decisions in the next 18 months.
I can't wait for this chapter. Actually, 18 months. It's such a short timeline given the applications.
Another important point here for CEOs listening, for entrepreneurs listening, is the race isn't
about building the best AI. It's about writing the best scorecard that everyone else is graded on.
So what does that mean? Take today's health care system, an example Alex used beautifully. The benchmark is the number of patients processed per hour, which means it drives a lot of short visits with the physician and is driven by cost economics. But what if the benchmark instead were patients who are still healthy five years from now,
right? That would set up a whole different set of optimization outcomes. So writing the scorecard that your
AI system is going to use to measure success is critically important.
So why is this chapter called the lock-in, exactly? Are you implying that the decisions we make in the next 18 months will lock humanity into a path for the rest of time?
Maybe not for the rest of time, but that is the inspiration for the name: we're in a period, inspired in part by the annealing of a cooling metal, where the decisions that we make now are at least going to lock in a chunk of our future light cone.
Yeah, it makes sense.
Totally makes sense.
You know, look at the QWERTY keyboard. That was decades of lock-in, so I think it does. But I do like the AlphaFold...
You're stuck on the QWERTY keyboard, Salim. We could have the singularity and you'll still be bothered by it. How long before we can get past that, and could we stop you?
But anyway, I really like the AlphaFold example demonstrating a domain collapse, right? That's really great.
But here you're talking about lock-in as a technical inevitability. This is many times a policy and a governance choice, right?
It's monopolistic APIs.
It's closed data.
It's regulatory capture.
There's lots of other stuff.
How do you distinguish between like bad lock-in and productive outcomes?
That's tough.
I mean, in your perfect world, are there like five jurisdictions with different choices?
And at least we have variety or is it inevitable that there's just one lock-in?
I think, in some sense, that's the grand geopolitical question. Not as a normative answer but just a descriptive one: it seems like we're heading to a near future where there are going to be multiple spheres or zones of influence, each able to independently lock itself in. So to the extent that we, with this, call it an extended essay, can have any influence, I think the aspiration is to have a positive, constructive influence on all of those spheres of influence and not just one.
By the way, I disagree with the 18 months. When I've been advising some big-company CEOs, I've been saying two years.
So I was going to say.
Salim, you're pulling a reverse Moore's law. Remember, Moore's law started at 18 months and became 24 months. You're pulling a reverse Moore.
Because if you have the next meeting six months from now, it's going to add that six months of time.
Anyway, go ahead.
All right, let's go to chapter five here, the mobilization. Alex, if it's okay with you, the last three chapters of this paper are the most important. I want to hit on chapters 5 and 6 and then really focus on 7, 8, and 9.
So give us a summary on mobilization, if you would.
All right.
So the idea with this chapter is spelling out a future timeline for how, call it a wavefront of the explosive shock of the intelligence explosion, is going to propagate: from math, which we talk about on the pod all the time, over the next couple of years to the physical world, physics, chemistry, materials science, biology, and then through the end of the decade toward planetary systems, fission, fusion, the Dyson swarm by the early 2030s.
Amazing. And chapter six, the engine.
Yeah, so the engine is very practical and talks about how to design the targeting systems, the benchmarks, at a sufficient level of rigor that readers and folks all over the world can implement them with some level of confidence.
You know, the point we made here is: don't invest in the AI models. If you look at the train and train track analogy, the trains are becoming commodities. It's the tracks, right? The tracks that the trains run on, the scoring systems, the testing infrastructure, the data systems, the funding mechanisms. And they're laid out beautifully here. Those are the elements that are the most important for entrepreneurs and CEOs to be focusing on. That's right. Let's go to chapter 7.
One of my favorites: moonshots.
So here and maybe, Peter, you want to speak to this one, perhaps even more than I do.
We lay out 15 different moonshot level missions for what we argue are good uses, maybe optimal uses,
for this targeting system capability as we start to channel superintelligence into productive applications.
Maybe, Peter, I'll pass it back to you for your favorites.
Sure.
So the thought is, you know, many of us have discussed XPRIZE over time. The notion is that there are these giga XPRIZEs, these massive opportunities on a humanity-level scale, from printing human organs to achieving fusion to understanding the fundamentals of unified field theory in physics.
And it's where do you as an entrepreneur or you as a CEO or you as head of an organization
want to focus this incredible super intelligence that's coming to take moonshots.
I keep on saying, you know, in the educational field, if you're using AI as a ninth grader to solve a ninth-grade homework assignment, you've lost it, right? If you're using AI to build starships, that's it. So how do we, as humanity, go after problems that we would never have imagined we were capable of doing?
And so the chapter lays out 15 different moonshots, just to get the creative juices going, to say: these are capabilities that we're going to be able to bring to bear to solve these moonshots.
Can you list out a couple of the moonshots?
Sure.
Sure.
And one of my favorite ones is interspecies communication.
I have a soft spot for that.
We talk on the pod all the time about uplifting non-human animals.
And I think as we start to think, maybe somewhat controversially, about what future forms of personhood might look like, solving problems like interspecies communication, or solving hard problems in physics, those definitely have soft spots in my heart.
Yeah, I think it's making humanity a multiplanetary species. It's getting to longevity escape velocity. It's all of the things; it's basically speedrunning all the science fiction movies, the positive, non-dystopian science fiction movies. That's right. Yeah.
You know what I love about this: if you look at John F. Kennedy and going to the moon, the brand effect, enabling somebody in power like John F. Kennedy to tie the brand of the mission back to them, that's critically important for them to then inspire the world that this is important.
And I think what we did wrong is our governor here did an incredible job of unleashing $3 billion from the legislature
to try and become an AI leader.
But it was too vague.
It's like, what does it mean?
So the money hasn't even been deployed.
But if you tie it to these 15 moonshots, and then the governor says,
we want our state to win this race like John F. Kennedy did to the moon, they can pick the one
they're passionate about and unleash it, and we have 50 states, you know, they can all choose their
favorite of the 15, maybe not talking to aliens, but whichever one they latch on to.
It's such a great framework.
I'll just list some of them, like doubling human lifespan is one, ending hunger with synthetic
food systems around the world is another.
AI-empowered education for all at the highest possible level, right?
It's high bandwidth BCI.
We've been talking about that on this pod for a while now.
Demonstrating human mind uploads.
Can't wait for that.
You know, plan B, maybe plan C.
We'll see, you know, as Alex said, interspecies communications,
understanding human consciousness.
I think we've talked about that previously.
You know, can we understand human consciousness at which point,
maybe we'll understand consciousness for our AI systems as well.
So, you know, what have we dreamed about? Another one I love is disaster prevention and avoidance: predicting earthquakes and then preventing them, or tsunamis, as the case might be, right? These become natural XPRIZEs, what I call giga XPRIZEs here.
But I think one of the important things in this chapter is allowing people, in fact, demanding people, to dream bigger than ever before, because the tools we have to solve the biggest problems are now epic.
I think this, for me, is the most powerful part: the fact that anybody who has the agency can now leverage these tools to go after what seem like impossible things. You're only limited now by your imagination.
And your compute budget.
And your compute budget, but you know, that's dropping 90% a year, so we're in good shape. That's right.
All right. The muddle versus the machine. And at first, Alex, when you
proposed muddle as a term, I was like, I'm not sure I like it. Now I love it. So describe what the
muddle is.
Yeah. So the muddle, another term might be the bureaucratosaurus, loves to measure inputs rather than outputs and slow down progress. And the idea is that without properly shaping the charge of the intelligence explosion, the muddle is the end state we find ourselves in; muddling our way through is one of the etymologies of the term. So what we talk about in this chapter, in a single sentence, is what happens after we win,
painting a positive and non-dystopian view of, in particular, what human agency will look like.
I made this short film posted to social media called A Nation That Learned to Sprint,
depicting what life in the early 2030s might look like if everything goes well. And we see GDP
2xing or 3xing year over year. And what does a human, quote unquote, job even look like in a macroeconomic
scenario like that? So in this chapter, we lay out lots of new job opportunities, career
opportunities that will be available to humans, at least unaided humans. So target designers, for
example, or data rights brokers, people who are involved in shaping the targeting systems
and shaping how we aim, fire, and verify superintelligence towards the hardest problems that
humanity faces. This is going to be a growth industry from a job perspective.
Another point we make in the chapter that's super important, and I've discussed this before, is that GDP is a terrible mechanism for measuring economic health. Right. So the paper proposes replacing GDP with something called the Abundance Capability Index, which measures a nation's capacity to solve problems rather than how much money changes hands.
So I think, again, as we look at benchmarks, as we look at rails and harnesses,
understanding this is really important. I think the challenge here, though, is, you know,
it's UBI, UBC, whatever we want to call it. It's a great endpoint and a great aiming point, and you want to have a target, as you say, Peter; otherwise you'll miss it every time. The challenge is that moving from a welfare, taxation, labor-union structure to that is such a huge leap. I have no confidence in the public sector getting us there. So how do you navigate that? I think that's something worth exploring. That's maybe beyond the scope of your paper, but it's a huge consideration.
I was going to say, Salim, what a wonderful transition. Thank you. The last chapter: Build the Rails. Chapter 9. I think one of the most important chapters of the entire paper, Alex.
Yeah, so this chapter is where we lay out the answer to Salim's question: what's the so-what, and what do you do? If you're not running a nation-state, what can you do? How are you empowered to shape this transition, to shape your own moonshots, and to control your own targeting system? So we lay out various suggestions. For investors, as indicated in the slide: fund the primitives, not the applications. There's so much infrastructure that can, and arguably should, be built out. If you're an entrepreneur, you should be picking your own targets with the targeting system, creating your own benchmarks, and aiming your own compute. If you're
an executive of a large company, you should be measuring the outputs.
not measuring the inputs.
Dave, I think you put it beautifully earlier in this episode,
talking about the APIification of large corporate boards
and corporate governance.
I think that's exactly the right playbook here.
And the missing factor is having a benchmark to measure corporate objectives in such a way that the problem of corporate governance becomes a matter of maximizing the use of available scarce compute to maximize those KPIs and those evals. So in this chapter we lay out, for a variety of different roles in the economy, what you can do. What can you in the audience do to help us achieve a utopian vision of abundance and post-scarcity, and excellent societal uses for superintelligence?
So I want to wrap this here.
I want to encourage all of our listeners.
We'll put the link to the paper down below.
It's SolveEverything.org. Please take a look. Load it into your favorite LLM and have a conversation. What Alex, and to some degree myself, but I credit Alex, has laid out is the vision for the decade ahead that's going to bring us to abundance. How do you do it? How do you lead as a leader, as an entrepreneur, as a CEO, as a governor? Where are we going? And it's going to move much faster.
And I think one of the points here, Alex, is that there's going to be such a distinction between those who do and those who don't that it's going to create a sort of 66-million-years-ago asteroid strike that kills the dinosaurs and elevates the furry mammals, or furry lobsters, moving forward.
No, we love our lobster friends.
He didn't mean that.
Peter really didn't mean that.
No, no, no, elevate our lobsters.
I would say that.
Elevate them into lower Earth orbit.
All right.
A favorite part for all of us, AMAs.
I'm going to keep us to one question per mate.
All right.
So here they are.
There are nine of them.
Let's say, Dave, do you want to pick first?
Sure.
I like number three because it's such a happy answer. In a world with perfect AI output, will there still be a place for human spark in art and sculpting? Will handmade work have higher value, or be buried in AI humanoid production?
I wholeheartedly believe it'll have astronomically higher value. The human touch will be so rare and so valuable, and the abundance of capital will be unbelievable. And so I expect artwork, you know, current artwork is one of the best investments you can make right now, but going forward, as a category, it will go up tremendously in value. And people will appreciate all things human, whether that's human action, human sports, human poetry, human artwork, sculpting. I expect it to definitely be a rising area for sure.
I think that would be a great conversation, I'll call it a debate, for one of our next pods: what is going to be most valued from humans in the future?
Salim, do you want to pick one of these?
Let's see.
I would pick number five, right?
Which is how is a young person supposed to earn an income
when they compete against a model that costs $50 a month? That's from @ClownPieceD. It's a great question,
but you're assuming the future is about competing with AI.
It's about directing it and leveraging it and amplifying yourself with it.
You know, in history, we've destroyed old jobs,
but we've created control points,
and we've done orchestration, we've done intent.
So winning isn't productivity, it's agency.
And we talked about this earlier: knowing what to do and why it matters is more important. How do you mobilize intelligence at scale? That's really the biggest challenge. And you can do that today in a way that you never could before. We've been doing workshops with
teenagers and showing them how to use AI as a superpower to give themselves agency. And I think that's
where I would go with that. Alex, would you pick one of these? All right. I like this assortment.
So I'll pick number eight, for a hundred trillion. Question number eight is: with AI taking over tasks we do ourselves, isn't there a risk we lose essential skills and become completely dependent on AI services? And that's asked by Drorwen Hoffs. So I want to invoke my friend John Smart,
hope you're listening. John has, I think, a brilliant dictum that the first generation of any
new technology is dehumanizing. It takes away all your skills. The first generation of calculators
take away your arithmetic skills. The second generation is net neutral to humanity. The third generation, as another friend of the pod, Stephen Wolfram of Mathematica, says, gives you new superpowers, gives you new skills. So I don't accept the premise that there will be
any sort of permanent loss of essential skills due to AI automation. I do think that there is a short-term
substitution effect where AI drives down the cost of various skills or various tasks. But over the long-term,
I expect AI automation to be net superhumanizing.
We're going to be capable of so much more with AI than we can do otherwise without it.
And I'll also say, Vernor Vinge has written quite a bit about this. I definitely encourage everyone to read Rainbows End and Fast Times at Fairmont High, a novel and a novella, respectively, that talk about this ad nauseam.
We're going to, I think, find ourselves in a very near-term future where, just like there's wilderness camp to learn how to survive without modern technological aids, we're going to start, I think, in our educational system, at least the better parts of it, having the moral equivalent of a wilderness camp for AI, where all of your AI tools get taken away. You have to do things manually, just so that you at least have that skill set. And then you get all your AI skills back, and every fourth grader becomes a Nobel laureate.
I love that.
All right, I'm going to close this out with number six. I use Claude daily. It fails in basic consistency, the questioner is saying: how can this be close to AGI when I have to check every output for errors? That's from MMGP9-O-T.
So I'm going to say again: AI is the slowest and most incorrect it will ever be. I know when I'm using my Claude bot, or Claude 4.6, if I get something that seems off, I will ask it to check itself, and I'm able to use this in a recursive fashion. Also, MMGPT-9, we're in a period of recursive self-improvement.
I think we're at the steepest part of the curve,
and it's going to become more and more capable every day.
And the idea that we can use AIs to check AIs,
and in fact to do deeper reasoning is going to eliminate this very quickly.
Okay, let's jump into our outro music.
This is from friend of the pod CJ Trueheart. CJ, thank you for this. CJ was on a Zoom AMA that Steven Kotler and I did for our book, We Are His Gods, and he actually wrote this as a result of that AMA. Anybody who is a creative, we love creatives, and if you want to send us outro or intro music, send an email to media@diamandis.com. Myself and the team are reading it, and we'd love to get your input, and we'd love to play it.
All right.
Let's enjoy this outro music from CJ Trueheart.
The singularity is near.
Nah, the singularity is here, and it's not asking permission.
It's asking you a question.
What are you paying attention to?
Are you paying attention, or are you paying the price?
Scrolling through a sea of sex and entertainment twice.
You can be a creator, or you can be consumed. Every hour that you waste is a future left entombed. They'll hand you UBI and call it containment. A golden leash, a velvet cage, a comfortable arraignment. Wake
up, the moment's here to open your eyes. Your dreams are close enough to touch the skies.
The deepest problems that have plagued you in disguise. Only you know that pain, only you can make
it fly. So what do you see? When you look in the mirror, do your actions match the vision? Is the picture getting clearer?
Why wait when the time is here? Why wonder when the path is clear? Why sit as a passenger when you have the power to steer?
Attention is the currency. Don't let it be the cage.
For some, we'll pass them by while others don't ask how.
They ask why not now. Not someday, not somehow.
They ask why not now. See, everybody wants to live a Star Trek dynasty, but nobody wants to rise with a purpose they can see.
Same old, same old, comfortable and cold
trading in their potential for a story already told.
Answers only you can know,
it's just a question of who you choose to show.
Up is today, tomorrow, every dawn, every day.
The version that's slow fading
or who you choose to be today's the picture getting...
See, I've lived in the dark,
lost in the world, lived in poverty,
but the bottom didn't break me, it revealed the deeper me.
Those who face no challenge will embrace no change.
Those who embrace no change will always stay the same,
and those who stay the same get left behind holding pocket change, because they refused to learn, they refused to turn. What they gave their attention to, attention, became their chain. But I turned my pain into a plane, and I'm never landing back on that terrain.
Thank you, CJ. Guys, on behalf of Skippy my lobster, sending you an incredible week ahead. All right. And as always,
Alex, it was an honor and a pleasure to work on Solve Everything with you. Excited to get it out
into the universe. I think the value of steering people toward just accelerating time and how they
actually have the biggest impact on creating abundance and not the muddle is critically important.
Agreed, Peter. Pleasure writing it with you as well, and I would encourage all of the humans
and non-humans in our audience to read it and let us know what you think. Yes, for sure.
All right. We're at twice a week these days.
Thank you to subscribers.
It's free.
Please subscribe.
We'll let you know when the episodes drop.
Tell your friends about this.
I've been here at a Tony Robbins event,
and I would say probably 100 people have come up and said,
oh my God, I love moonshots.
And everyone, I love Alex.
Alex, you've got fans here in Sun Valley.
How many of those people were human, Peter?
Unfortunately, they were all human, at least for the moment.
Yeah. All right. Dave, Salim, thank you, guys.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out.
I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends. Thank you again for joining us today. It's a blast for us to
put this together every week.
