Moonshots with Peter Diamandis - AI Experts Debate: AI Job Loss, The End of Privacy & Beginning of AI Warfare w/ Mo Gawdat, Salim Ismail & Dave Blundin | EP #176
Episode Date: June 3, 2025Get access to metatrends 10+ years before anyone else - https://bit.ly/METATRENDS Mo Gawdat is an author and former CBO of Google X. Salim Ismail is the founder of OpenExO Dave Blundin ...is the founder of Link Ventures – Offers for my audience: You can access my conversation with Cathie Wood and Mo Gawdat for free at https://qr.diamandis.com/SummitEM Test what’s going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod –- Learn about Dave’s fund: https://www.linkventures.com/xpv-fund Work With Salim to build your ExO https://openexo.com/10x-shift?video=PeterD Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded on June 2nd, 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
In my mind, jobs will be lost. When they are lost, they're going to be lost massively.
Far more people are in denial or doing nothing than are overreacting.
I do think governments are woefully underprepared.
Now it comes down to are we going to design a world that is good for people or not?
We all know that AI will go out of control within the next five to ten years. And yet we're building autonomous weapons after autonomous weapons,
knowing for a fact that every other opponent in, you know,
anywhere in the globe is building them too.
Trump taps Palantir to compile data on Americans.
This is not a tech problem. This is an accountability problem.
What can I build that is making people happier and more productive and feeling valuable and
having a sense of purpose?
And if we focus on that, we actually can avoid the dystopian outcome.
We have the ability to create an intentional future.
This future is not happening to us.
We have the ability to guide where it goes.
Now that's a moonshot, ladies and gentlemen.
Everybody welcome to Moonshots and our weekly episode of WTF Just Happened in Tech.
It's the real news going on.
Those of you who go and watch the Crisis News Network, what I call CNN, you can learn about
all the crooked politicians, all the murders on the planet or join us here
to learn about the technology that is transforming every aspect of our lives, every company,
every industry, every entrepreneur's outcome.
I'm joined by three moonshot mates today, Dave Blundin.
Good morning.
The head of Link XPV.
Dave, good morning.
Looks like you're home today.
At home, yeah. Princeton graduation on Tuesday,
MIT graduation on Thursday, and then off to Stanford tonight.
All right, fantastic.
Look forward to seeing you hopefully this week.
Salim Ismail, the CEO of OpenExO.
And Salim, where do I find you today?
At home, just outside New York City, and looking forward to this episode.
Yeah, yeah, me too.
And good morning or good evening, Mo.
Mo Gawdat, the one and only.
Dubai, is that where you are?
Dubai today, yes.
Happy to be inside because it is boiling outside.
Yeah, well, I'm in Santa Monica, just back from a few days in Hong Kong.
You know, it's crazy.
Literally, you have no idea where your friends are these days,
literally around the world.
We're just tied together by this digital network of Zoom and multitude.
Anyway, a crazy week in AI and in a whole slew of different technologies,
and I'm excited to get into it. Before we start,
anything new, Dave or Salim, you want to add?
Well, first of all, thanks for getting up at 5 a.m. to do the podcast. Hard to tie together Dubai and LA,
but it is much appreciated.
So I'm in the middle zone here, so it's very easy for me.
But thank you.
You're welcome.
You guys are worth getting up for.
All right, let's jump in.
As always, I don't know, it feels like every week
is going at a pace that would have been unbelievable.
I'm just trying to remember back 10 or 20 years ago, the number of breakthroughs or
announcements that were occurring on a regular basis.
And I can't find any analogy.
I remember in the dot-com world, there were all these crazy new dot-com companies being
announced every week.
But here, it's not just crazy companies. It's fundamental capabilities that are coming online.
And also, we're going to see later in the podcast
a predicted trillion dollars a year of capex going forward,
which I checked.
That's the equivalent investment that we
made mobilizing in World War II, inflation adjusted.
And so if it feels crazy, it should because it's historic in scale.
All right.
Let's jump into a subject on a lot of people's minds.
We've heard a lot of news about this.
This is AI and job loss.
We'll begin with a short segment of Dario Amodei, the CEO of Anthropic, talking about job loss.
Let's take a listen and then discuss it.
I really worry, particularly at the entry level, that the AI models are very much at
the center of what an entry level human worker would do.
A little bit more worried about the labor impact simply because it's happening so fast
that, yes, people will adapt, but they may not adapt
fast enough, and so there may be an adjustment period.
In terms of inequality, I'm worried about this.
There's an inherent social contract in democracy where, ultimately, the ordinary person has
a certain amount of leverage because they're contributing to the economy.
If that leverage goes away, then it's hard to make democracies work and it's harder to prevent concentration
of power. And so we need to make sure that the ordinary person maintains economic leverage and
has a way to make a living, or our society, our social contract, needs work. And that's why I think...
You've previously in the past said you've described a future where cancer is cured,
the economy grows at 10% a year,
the budget is balanced,
and 20% of people don't have jobs.
Well, the quote you just splashed is maybe too optimistic,
maybe too sanguine about the ability for people to adapt.
People have adapted to past technological changes,
but I'll say again, everyone I've talked to has said,
this technological change looks different.
It looks faster, it looks harder to adapt to,
it's broader, the pace of progress
keeps catching people off guard.
I think the benefits are massive,
and we need to find a way to achieve benefits and
mitigate or prevent the harms. And the second thing I would say is look, there are, as you mentioned,
six or seven companies in the US building this technology. If we stop doing it tomorrow,
the rest would continue. If all of us somehow stop doing it tomorrow, then China would just beat us.
And I don't think China winning in this technology is,
I don't think that helps anyone
or makes the situation any better.
Every week, I study the 10 major tech meta trends
that will transform industries over the decade ahead.
I cover trends ranging from humanoid robots,
AGI, quantum computing, transport, energy,
longevity, and more.
No fluff, only the important stuff that matters,
that impacts our lives and our careers.
If you want me to share these with you,
I write a newsletter twice a week,
sending it out as a short two-minute read via email.
And if you want to discover the most important meta trends
10 years before anyone else,
these reports are for you.
Readers include founders and CEOs
from the world's most disruptive companies and CEOs from the world's most disruptive companies
and entrepreneurs building the world's most disruptive companies. It's not for you if you
don't want to be informed of what's coming, why it matters, and how you can benefit from it.
To subscribe for free, go to diamandis.com/metatrends. That's diamandis.com/metatrends,
to gain access to trends 10-plus years before anyone else.
All right, a lot there, and we've heard this, you know, at an increasing pace and intensity. Dave, thoughts on Dario's commentary here?
Yeah, Dario looks really worried there, doesn't he? He's got the wrinkled forehead, and Anderson Cooper looks even more worried.
And, you know, rightfully so. I think the short-term job displacement
is imminent.
When I talk to people at random, people in power,
far more people are in denial or doing nothing
than are overreacting.
So it's actually good for Dario to be saying these things,
to at least wake up the masses to the immense amount of change
that's imminent.
I do think there is a lot of short-term job loss coming, but far, far, far more opportunity
being created.
So it's kind of a foot race between creators, entrepreneurs reinventing what people do versus
automators coming and just automating away white collar jobs and then ultimately robotics
and blue collar jobs.
Mo, you've been speaking about this for a while. Is Dario overplaying this, or is he spot on?
Oh, no, he's underplaying it for sure. My prediction is 10, 20, 30, 40 percent unemployment in some sectors.
And what time frame? In the next two to three years.
I think there are always three questions
to answer. One is: does anyone on this call believe
that the technology is not gonna catch up
for some of those jobs? For something like a graphic designer
or a video editor, for example, those sectors are gone.
I mean, today with Veo 3 giving you a minute of video
that's better than Avatar for 17 cents,
you can create the movie Avatar for around $1,500
if you make mistakes on the way, right?
So that, you know, I don't know how we can save those jobs
to be quite honest.
If that's the case, then the next question becomes financial, right? Because we are stuck in a system of capitalism,
where the entire profitability of a business and the legal requirement of a CEO is to prioritize shareholder gains.
And accordingly, I do not see a situation
where people will be given two-day working weeks,
paid the same.
The third is ideological to be very honest,
because even things like UBI sound quite a bit
like socialism or communism to me.
So there will be quite a bit of resistance
before we can get to the point where governments accept
that these are systems that they will adopt.
So in my mind, jobs will be lost in some sectors
earlier than others, and we can name quite a few of those,
but when they are lost, they're going to be lost massively.
On the other hand, the ideology and the existing system will not allow us to replace that quickly
enough because we're not awake.
And I think the more interesting one in the statement that Dario mentioned is that he
says 10% economic gains.
I wonder because how much of the US economy is actually consumption?
62% plus of the US economy is consumption.
So with people having no buying power, is that economic growth or productivity growth
without buyers to buy what we need?
Yeah.
And one of the arguments, of course, is you're demonetizing the product's cost because a
latte is now made by a robot instead of a human and it's a quarter of the price.
Saleem, you've been in agreement on this. Anything else you want to point out?
I'd like to take the counterpoint, which is when we see a huge raft of people standing at the job lines or at the food banks, etc., then I think we need to worry.
I think we're underestimating how quickly people can adapt.
Let's say I'm a, let's use the video editor.
If what we've seen in the video...
By the way, there's a video editor listening to this video right now.
That's right.
But let's, the minute that you automate that, the
video editor moves and does a whole bunch of other stuff that are necessary for producing
a podcast like this, right? There's lots of other work to be done. I still see, I go back
to the 1970s bank ATM example, which we've talked about before. I do think governments are woefully
underprepared. We should be running a ton of experiments on UBI or four-day work
weeks and managing that and getting used to that paradigm and knowing how will we
roll it out if it needs to be rolled out. So why aren't they doing that? And
this is a separate thing: we do almost zero experimentation in government, and we
could be doing a lot more of that.
I think that's one area to look at. But, you know, we'll talk about truck driving in
a bit. When you go talk to a trucking company, which I actually went and did, talking about,
okay, there's all these three million jobs that could be lost, etc., the trucking
company goes, I would hire a thousand truck drivers today if I could. They're not there. I need the automation to do the
work. So I kind of tilt towards that side. Now, I tend to be biased on the optimistic
side so I will grant you that. Let's see how this works out over time.
Let me throw some numbers out here just from the Bureau of Labor Statistics.
So 11% of the workforce is office and admin jobs, which
have a very high probability of going away.
6% are business and financial operations, 7% are management, 6% are education, training,
and library, 6% are healthcare,
and 9% are sales-related jobs. So there are large swaths of the labor force that,
at least according to my search, are likely to be automated. And the question is, can they all be
upleveled, right? So we're talking, just in the quick research I did, it's something on the order of 40% of jobs that have a
reasonable probability over the next three to five years of being automated away.
We'll get to this conversation a bit later. The issue is not can they be
upleveled to a different position. The question is the social unrest in the interim.
How hard is that going to hit society?
Dave, what are you thinking?
Well, you know, we're moving into an intentional world.
You know, we evolved in a world dictated by nature, and then we went through this transition
where we're in right now.
But the future is our design.
It's not dictated by tidal forces, and it drives me
nuts when the economists are extrapolating and predicting, but they never reference self-improvement,
they never reference the exponential rate of change, and the intentionality of the world
design is completely dominant from here forward. So it's what we decide to do. I think Dario worries all night about CBRN,
chemical, biological, radiological,
and nuclear threats from AI.
And he's dead right.
If you unleash AI into the hands of eight billion people,
some crazy person out there is gonna turn it into a weapon.
You have to actually put some thought into this design.
But that was inevitable.
That started with the nuclear era.
And so now it comes down to, are we going to design a world that is good for people
or not?
And so I think it's completely in our control.
I also really believe that, yes, there's huge amounts of job displacement coming because
as Mo pointed out, the natural capitalist action is to say, what can I automate away to reduce the cost by 99 percent? All that
becomes bottom-line profit.
So the valuations of companies that automate are going to go way, way up and create a huge
amount of wealth.
Where does that wealth land?
And as I've been saying on this podcast, it's naturally going to land in relatively few
hands if nothing changes.
And that's what's going to create all kinds of social unrest in transition.
But the amount of value and the greenfield opportunity is so much bigger than the amount
of job loss.
And so if we're quick and intentional and we turn a lot of that AI horsepower into working
on what can we build? You know, if I can write three million lines of software
in a single night,
that's the equivalent of hundreds of millions of dollars
of R&D in a single night.
What can I build that is making people happier
and more productive and feeling valuable
and having a sense of purpose?
And if we focus on that,
we actually can avoid the dystopian outcome.
I love that, an intentional future.
Let me move forward to this next set of slides here.
Not to go into detail,
but we see a plan for Tesla to roll out Model Y cars,
fully autonomous, aiming for delivery in June.
So their rollout of the RoboTaxi begins in June.
It won't be many cars, they're testing it out.
You know, I took my kids on a Waymo ride here
in Santa Monica over the weekend,
and they just had a blast.
Put them and their friends in a Waymo,
we just drove around.
It felt like a carnival ride for the first few minutes and then it felt completely like an
extraordinary end-to-end experience. So these will roll out and
we're going to talk about the number of drivers that are taxi drivers, Uber drivers, the displacement here.
This next article is about truck driving and this makes the point made earlier. 18 wheelers
are on the Texas highways driving themselves already. You know, just a quick
video for a microsecond. But at the same time, the US is facing historic driver shortages and recruitment struggles.
We can't get enough drivers as you said, Salim.
So let's talk about this sector for a moment.
Salim, do you want to kick us off?
Yeah, two points here.
First is, you know, before Uber came along, we didn't notice that there was this huge
labor liquidity opportunity.
Then Uber comes along and all of a sudden a single mother can drop her kids off at school,
drive for four hours, pick them up again that afternoon, and have a kind of a functional,
much more functional world than before. We soaked that up very quickly. We didn't notice that. We
didn't notice it on the abundance side. I don't think we'll notice it as much as we automate also. I'm
still banking and hoping that my 13-year-old will never have to get a driver's license.
So we'll see when the curves hit of autonomous driving versus people wanting to. I think we'll just do, as we've seen before, a ton more driving.
And we'll just have a lot more little road trips and little errands that we didn't have
to do now can be done by a Waymo or a Tesla. And I think we'll just have what we've seen
historically repeatedly, repeatedly is that when you automate, you increase capacity.
You don't decrease.
And so in the truck driving example,
I think we'll see a ton more truck driving that's autonomous.
And the amount of truck drivers won't change very much.
That's my prediction.
Let's see if I'm right or not.
Mo.
Well, Peter...
Oh, yeah. Go ahead, Dave.
Well, you're going to love being in LA
where the traffic's notorious.
This also enables coordinated traffic.
You know, our good friend Lee Hetherington
back from MIT days did all these traffic simulations
back when he was an undergrad.
And the roads are most efficient
at about 45, 50 miles an hour back to back cars
and then they just jam right after that.
But the self-driving cars also enable
intelligent traffic flow
design.
And that's actually going to increase the capacity
of their existing roadways quite a bit.
I'm sure everyone in LA will love that.
I mean, the implications of self-driving cars
on the environment, on being able to move electric battery
packs all around the city, being able to get rid of parking,
every garage at a single family home
could get turned into an extra storage or living room.
In LA, 60% of the land area is parking spaces.
This is insane.
It truly is.
I could not.
So if you look back 120 years, or 110 years, when you had the transition from horses
to the Ford Model T, that transition
was dramatic over the course of 10 years.
The value proposition for a car was so much better than a horse.
The amount of horse manure was threatening society at an extraordinary rate and then
disappeared.
The question is, when I look at the Waymo, it's an expensive car.
The Waymo is coming in at something like north of $150,000.
So you're not buying them and putting a fleet out.
The CyberCab, if it really comes in at 30k or below, I could see Uber drivers buying
a fleet of CyberCabs and having CyberCabs work for them.
But it's gonna take that kind of a price point to really do a transition to the point where I
don't need a car anymore. My AI is ordering it in advance of when I need it. I walk out the front door,
it's waiting for me, because my schedule is known by my AI.
Mo, thoughts?
Yeah, I mean, well, I love you all, you know that,
so please don't be offended by what I'm about to say.
Speak your mind.
All that you guys talk about is problems of privilege.
It's like, ah, my traffic jam,
I wanna make sure that my cab is waiting for me outside,
and just go tell those
things to the cab driver that actually is feeding a family and working two shifts, right? And this,
I agree with Dave, 100%. We have a choice to design our future. Now, when you really think about it,
and when wonderful humans like you are thinking this
way, what do you think the choice will be?
Your question, Peter, was how would that impact civil unrest?
Well, if they heard this conversation and how careless we are about their jobs, saying
things like, yeah, they'll figure something out.
I heard that a million times.
What will they figure out?
I want someone who tells me that we
will find new jobs and upskill them. Tell me what those jobs are so that we start upskilling them.
Can I give an example here? Yeah. So I have a friend when I was living in Miami, I met an Uber
driver and I started playing tennis with him and we had a kind of a fun interaction. And it was fascinating. He started driving for Uber,
and then the amount of income dropped too much,
so he started driving for Lyft.
Then he did both for a while,
and then both of them became
not worth it to be driving that much.
So he starts renting out his car on Turo,
and then finds, hmm, I can do this.
And he buys four cars and rents them all out on Turo.
And then he helps a friend with his Airbnb rental managing that, taking a cut of that
thing.
And over a period of like three years or so, he navigated all of these different dynamics.
Wherever there were opportunities, he would go grab it etc etc and I think it speaks to the enterprise and entrepreneurial
nature of an individual. If you had a score that was called the entrepreneur
quotient of an individual they will figure it out. We talk often Peter and
I about mindsets right? If you drop Elon Musk into a desert with no money and no
communications he'll figure it out.
He'll figure out how to get out of there and do something.
And make a rocket to go to Mars.
And make a rocket out of that sand. And I think when you give people opportunity,
this is why I think technology is so amazing. It speaks to Dave's earlier point.
When you make this opportunity available, people are going to go for it.
They're going to figure out, wow, AI can automate code. What could I automate?
And they'll start doing that stuff.
Then when this fellow got blocked by Turo
for having too many cars or whatever,
he created multiple IDs and was doing that,
people are incredibly enterprising
if you're able to turn on that switch.
I think we're underestimating that capability.
So I love that, Salim.
I'm going to come to this point a little bit
later in a few topics: I think this is the single most important job for the future. People say,
what should my kid become? I think the only job that's going to survive in the future down the
line is entrepreneur, and we have to reteach our kids how to think this way. We've had an entire civilization whose educational output
is to train kids to get a job,
rather than train kids to figure out
what the opportunities are and create something around them
because the tools were not democratized.
Well, the tools are democratized now.
And so how do you train kids and adults
to go out and find jobs?
I put up this slide here.
These are drivers by category in the US.
And I'm sorry, we have a massive international viewership,
but I'm defaulting to US numbers here.
3.3% of the US workforce are drivers.
There are 2.2 million truck drivers.
And down at the bottom on this list, we have delivery drivers, Uber drivers, bus
drivers. At the bottom is taxi drivers at 200,000. So the number of taxi
drivers has dropped precipitously. I do think Mo, to answer your question, there
is a future in which drivers are allowed to
finance and purchase these autonomous cars and they become managers of fleets of autonomous
cars and these cars are out there earning a living on behalf of those drivers.
This is definitely the American way, right?
The American way is that we're going to enable everyone to buy a car, to make money on it
without doing anything.
Is that really true, guys?
Like honestly, if there is a margin that allows this guy to make money, why wouldn't Uber
buy those cars?
They probably will buy many of those cars.
Yes, sir.
The other question we're not asking, and I was told at the beginning of this briefing
to be the extreme on one side, so please understand that.
The other question we're not asking and we definitely need to ask is that this guy that's
been renting those cars out, Salim, which I think is a fantastic example,
is renting them out in an undisturbed economy
where people have the purchasing power
to rent them out from them.
What kind of entrepreneur would make money
in a UBI-based environment?
What does that mean to a lot of people
who don't have the purchasing power
to buy from an entrepreneur?
Yeah, no, totally. I think I could answer that question.
So what I found fascinating about this fellow, because every week
we would play tennis and I would just track what he was doing.
He found that certain types of cars were not renting at all
because of that time of the year or that type of tourist visiting Miami or whatever.
And so he was juggling constantly the which cars he had or didn't have in his little fleet
and adapting as it went along.
At some point, he found that small SUVs were
renting like hotcakes.
And so he started working on that.
And then he had to pivot again.
And he just managed to navigate himself.
And the question came up, what happens if you start?
He got to a point where he was making enough passive income off these things that he didn't have to do the work. And then he was just
voluntarily taking tennis lessons and teaching people tennis and being on a tennis court eight
hours a day. And I think there's a thread of an anecdote here of where people will start
finding their true passions and just following those passions. I think that's the beauty of a UBI.
It allows you to do that.
We've seen in the experiments
where UBI has been done properly
that entrepreneurship explodes.
And if we agree with that general thesis,
then this is absolutely the way to go.
I'll go back to my earlier trope
of governments being completely unaware
and unprepared for this, because to move from a taxation, job,
union labor structure to UBI is such a huge flip that we have no confidence in the
public sector really getting us there.
So I agree a hundred percent with that. Honestly, I think if we both agree that this is a future where it's possible,
regardless of how low a UBI is,
that people will go back to bartering
and doing things through each other,
through the offerings of each other.
Then I think that would work,
but then governments need to be aware of that.
We need to start thinking that this is going to be a future
that we need to think about.
But my ask of everyone is, in situations like this, it really is not helpful to keep trying,
the California way, to paint the optimistic picture. Because if the optimistic picture happens,
we're all fine, right? I think what we need to think about is what's the worst case scenario, and
guard against it, right? And the worst case scenario, if we're not prepared for these kinds of job losses, economically and national-security-wise, is quite significant.
People really need to be aware of that.
Yeah, I just came back from Hong Kong, Mo, and while I was there, met with an incredibly
successful entrepreneur of Indian origin, Sanjay, who was one of the very first
employees in one of the huge Hong Kong Chinese companies.
And his mission now is to go back and try and help India's young population, 1.41 billion
people in India. The promise had always been, if you get an education, there's going to be a job for you.
And of course, that promise is now broken. All of the, you know, all of the
coding jobs that they were getting are no longer being made available.
And we're on the tipping point in India and other parts of the world of what could be
such a negative implication that it leads to societal unrest.
You know, it's like, where's my job?
And one of the biggest problems is a young population, an intelligent young population,
that has had their future grabbed away from them.
What do they do?
And the conversation I had with Sanjay,
which I agree with, is the job of the future is being an entrepreneur.
And so his mission in India is upskilling all of these young students
to become entrepreneurs, to create new job opportunities for themselves.
You know, Dave, how do you think this plays out?
I think that, you know, we're way underestimating the creativity of people, and there's this window of time, the next four years,
where the empowerment of people to create so outweighs the risk.
And I do agree, you know, we're choosing driver
because it's a really tough case.
You know, the number one job title
in the world is driver.
It's a huge number of people.
But if you look at graphic designers as a case study too,
I think they're empowered much more than they're replaced.
And there are all these case studies popping up
of VO3 artists that are so much more productive.
And so I think that when I look at entrepreneurs,
I've worked with hundreds and hundreds of entrepreneurs
over the years. What they need is time
and the ability to access tools: time and tools.
And there's very likely a world coming up
over the next four years where they're given time,
whether it's UBI or otherwise, and they can act.
Because very often, you know, some of the best, most creative people, they can't act on their ideas
largely because they're trapped in mortgage, they're trapped in student debt, you know, they just need money now.
And so then they go, they become an Uber driver for a while, they become a whatever for a while,
they go work at Google for a while.
But their freedom of action is very, very limited.
And I think that AI has the opportunity
to open up freedom of action.
Freedom of action unleashes creativity.
So exactly as Salim was saying,
there's so much latent entrepreneurial talent.
And this next window of four years
is gonna be dominated by the ability to build scaffolding.
And scaffolding is a word you're going to hear a ton now going forward, because the
AI doesn't naturally do something interesting or useful for you.
It'll write all the code, it'll build everything, it'll audit, it'll write all the documents.
It does all the busy work very, very well in the next four years, but it doesn't decide
this is what my user base, my community,
this is what people will want.
And that's still coming from entrepreneurs.
And so this slide is exactly right.
This is the dominant theme over the next four years.
Go ahead and read this out if you would, Dave.
Job of the future as entrepreneur,
near term, next two to five years.
Many jobs will be impacted this decade, 2030, medium term.
So the medium term, 2030 to 2045, is the part where no one can quite visualize.
I have a great sense of the next four years and then a much more difficult sense of what
happens from 2030 and beyond.
It'll clearly be an age of incredible abundance.
So the opportunity to make everybody happy
is right in front of us, just a question of how you do it.
Yeah.
But in the Singularity Sprint.
Yeah, let me hit on that, right?
Yeah, so the idea here of the Singularity Sprint
is you have a window of time to build something awesome
and that window is limited.
So I'll read, it says, the anxious all out rush to launch bold projects or startups right
now driven by the fear that rapidly advancing AI will soon erode human leverage and make
long horizon careers that's obsolete.
After graduation, a lot of my friends skip safe jobs for their own ventures, classic singularity sprint vibes. So it's like if you want to make
it big, you got to dive in right now both feet. Do you agree with that? Yeah, I mean
this is what started with with Steve Jobs and Bill Gates both being 21 when the
PC comes out. You know, they have no career path, right? They're within a year
of age of each other.
They're old enough to start a company, but they're young enough that they're not in
law school, they're not in some, you know, entrenched 401K plan.
They're just free to act.
And so, you know, Steve Jobs, Bill Gates, then you forward to the internet, Mark Zuckerberg
drops out, starts Facebook.
But you see this over and over again in recent history where flexibility
way outperforms career pathing.
So going forward, of course that's gonna accelerate
with the singularity, so now yeah,
you'd be crazy to get too deep into some trench
when you know the amount of change
is accelerating like crazy.
So yeah, this is clearly the world we're moving into.
Opportunity is everywhere.
The expansion of opportunity is just fractal and rampant.
So many things you can do to add value, but they're not things that you would have anticipated
a year prior.
You need to be really nimble and flexible and stay frosty.
Watch this podcast.
Read the Alex Wisner gross feed.
Just stay on top of it because new things are appearing all the time.
Mo, bring us back to reality here.
Do you disagree with this?
I think Dave's point is so spot on.
If you're 21 like Bill Gates or Steve Jobs, if you really think about those who already have a mortgage, how will UBI work for those?
Because remember, we pay people for the value they bring.
So when nobody's really bringing value, then do you pay someone who has a mortgage and
four kids a little more than someone who has a lesser mortgage and two kids?
Or do you reward someone who you know worked on a
on a shoestring for a while and didn't have a mortgage? I don't know okay but my question is
are we thinking about those things and then of course when we talk about entrepreneurship
it's so easy for us to talk about that everyone here has started or co-founded or invested
in tens if not hundreds of companies.
That's not natural for people who were trained all their life to just go get a job.
And all of that, by the way, everyone here knows I am the biggest believer in total abundance
once we cross this short-term dystopia if you want, total abundance, like you
you know we can create a world that we can't even dream of, it's just that we have to be super
realistic about the challenges in the short term and rather than talk about the opportunities and
tell people hey you take charge, you go ahead and start the business. I mean, honestly, even I today
am struggling to start a business at this pace. I mean, seriously, and I've started
countless businesses. It's so difficult to keep up.
Yeah, the speed of disruption is crazy.
Can I flip over to most side of the equation for a second?
Yes, please.
So I think there's in the US US I would say you don't have to
worry at all at a country level just because the latent amount of
entrepreneurship is so deeply embedded into the culture right. But you take
Europe where if you're a big company just trying to fire people is near
impossible. There are workers councils that govern how many people, unions etc
etc. The amount of labor rigidity there is extreme.
That is going to be very, very badly disrupted.
And I think the governments there
are in very, very deep trouble because they're not
structured.
They don't have the latent entrepreneurship
quotient in the population to be able to adapt
to what's going on.
And that's where I think you'll see a lot more challenges
than, say say the US.
The challenge with trying to upgrade people
that have been stuck in a particular way of thinking
for a decade or two to Mo's point
is going to be incredibly difficult.
Now you need like psychedelics at scale
or some radical huge thing to make that mindset shift,
to make everybody move.
Or you have to go to UBI urgently
and force people into that conversation.
Everyone, as you know, earlier this year,
I was on stage at the Abundance Summit
with some incredible individuals,
Cathie Wood, Mo Gadat, Vinod Khosla, Brett Adcock,
and many other amazing tech CEOs.
I'm always asked,
hey, Peter, where can I see the summit?
Well, I'm finally releasing all the talks.
You can access my conversation with Kathy Wood and Mogadot
for free at dmandus.com slash summit.
That's the talk with Kathy Wood and Mogadot
for free at dmandus.com slash summit.
Enjoy, I'll ask my team to put the links
in the show notes below.
I'm gonna give a couple of stats here just for
reference. In the U.S., 16% of the U.S. adults consider themselves entrepreneurs. It's 31 million
adults. Recent surveys indicate that 36% of Gen Zers and 39% of Millennials consider themselves
entrepreneurs. So to make makes your point, Salim, that the United States has less of an issue there, but it's in the rigid structures of other
nations. Of course, to remember the idea of a job is a relatively new invention
and for most of human history we were entrepreneurs to survive. We'd go and find
that shelter, that food, you know, those berries we needed to cure our child
of a particular disease.
So the question is, can we create an intentional future?
My biggest concern, Dave, you hit on this, Mo and Salim, you hit on this, which is that
governments are linear at best and we're in this exponential ramp
up that's going to change every aspect of society.
Here's another example of what's going on today and it's going to change things.
Again, it's both a disruptive force and an innovative force.
This was a tweet put out by Matt Schumer.
It says, I put Claude for Opus in charge
as CEO of my startup.
And has seen significant revenue growth.
He said, this is low risk since Claude for Opus
is not in charge of HR or financial investments,
but rapid iteration of the products and services.
So we've been speaking about this for a while.
When do we see the first billion dollar one person startup and then soon thereafter, billion
dollar zero person startup says agents with crypto are beginning to create new opportunities.
Now one of the things that we haven't mentioned is this
potential future comes with massive GDP growth, massive
revenue growth.
And where does that revenue go?
Mo, you mentioned that a few minutes ago.
Is it all being concentrated in the magnificent whatever,
rather than magnificent seven?
We're going to see all of these AI companies that are trillion dollar companies.
How do they get taxed?
How does the money get redistributed so we avoid revolutions?
Thoughts on this?
Salim, is this the future of an EXO, an exponential organization?
The natural outcome as we, you know, used to take like a hundred thousand people to
create a billion dollar company a century ago, then it dropped to about
fifty thousand. About four decades ago was ten thousand and now it's like ten,
right, or three as we talk about it or as Sam Altman goes it'll be one. We will get
to zero at some point. We're just spinning off ideas autonomously that
then just generate a lot of value.
I think Dave's point from the beginning was really a key one is where does that value
accrue and how do you navigate that?
And right now we tax labor.
We're going to have to tax capital much more aggressively in the future to navigate this.
Dave?
Well, a couple of case studies on this, Peter.
So we've seen Mercure is very, very good at interviewing people all over the world, any
language, any culture, and discovering latent talent.
So now you turn that same energy inside your organization.
Suppose you've got 1,000 people, 10,000 people inside an organization.
There's latent talent in there everywhere.
Largely, historically, people have climbed the corporate hierarchy by kissing
ass and schmoozing and buying beers, and it's not really correlated with being good at your
job. And that drives a lot of very talented people nuts, especially if they're from a
different culture, they speak a different language, whatever. You can't really ass-kiss
effectively if you don't speak the same language. But all of that, actually, you saw this with the XPRIZE board notes.
Remember that three hour, four hour long XPRIZE board meeting we had?
I took the whole transcript, put it into the LLM and said, give us four or five suggested
KPIs that would help this organization stay on track.
And it does an amazingly good job.
And so using AI as a management tool is kind of way under appreciated.
Everyone's like, oh, I'm going to make videos. Oh, I'm going to build a self-driving car. I'm going to do all these ground level things. And so using AI as a management tool is kind of way under appreciated.
Everyone's like, oh, I'm gonna make videos.
Oh, I'm gonna build a self-driving car.
I'm gonna all these ground level things.
But at the top of the hierarchy,
it's actually even more effective.
And so that good spin on it is it's very, very good
at being fair and unbiased and discovering latent talent.
I'm sure Mo will tell us,
there's definitely another side to it.
Am I now already getting that reputation?
Is this who I am?
Sorry, I didn't mean to categorize you.
I do see a different side to it.
I think what you're going to see quicker is not just AIs with the CEO being an AI,
companies with the CEO being an AI.
I think the opposite
is going to see, you're going to see more of, which goes back to entrepreneurship, where
you have a company that only has a CEO and everyone working in it is an agent, right?
And you know, it's like one of those companies, the more intelligent the AI agents becomes,
and I'm sure every one of us worked at a point in time in a company where the CEO was a total idiot but the team below them you know the team below them
was good enough that the company ran well so you know the the top management
those AI agents will do almost everything and the CEO will become you
know just happy counting the money basically.
All right, we've talked about the speed of AI development. This is the upcoming summer schedule
and GPT-5 is scheduled to come online.
So this is the latest GPT-5 leaks,
launch expected in July of 2025.
GPT-5 exceeded expectations internally at OpenAI.
OpenAI expects record breaking demand for this.
Altman's not focused on in between models, GPT-5
is the flagship and it won't launch unless it's
excellent. We've had a lot of expectations building on GPT-5, right?
This is the PhD level model. This is the AI that's coding other AIs.
This has been sort of heralded for some time.
Dave, what are you hearing about it?
So I really chafe at the idea of a PhD level model
being smarter than an undergraduate level model.
People I work with who chose to get PhDs,
it's just a choice they made., has nothing to do with that.
But because a lot of the researchers working on this
are PhDs, they say, well, this one's PhD level.
But yeah, it's marching up the scaling laws curve
exactly as predicted.
And so now we're just gonna throw
more and more and more compute,
and it's gonna get smarter and smarter and smarter
I mean, it's just a complete unlock of
3040 years of AI research suddenly just blown wide open and I gotta tell you within the AI research community
There's still a ton of people working on other pathways
You know that the the logic being well, this will never be truly conscious or truly
Transformer models beyond transformer models. Beyond transformer models, exactly.
Which is becoming increasingly obvious that,
no, the research will matter,
but the transformer's gonna do it.
So all you need to do is work on the scaling
of the transformer model to solve all the other problems.
So I think this will continue the trend of,
as soon as it comes out,
everyone goes, oh my
god, oh my god, oh my god.
But it's what is logically expected on that scaling law curve.
It'll be amazing.
SALIM, how do you think about this?
I call it a bit of BS on that fourth one.
We're not focused on in-between models.
All we've seen for the last two years is in-between models.
03 Mini, 4.5 this, et cetera, et cetera.
But fine, it's a marketing thing.
I think what happens here is you get to a point
where transformers can do so much.
It forces us as a user community to really focus
on what are the questions.
Today we call it prompt engineering.
I think the real question becomes what do you want this thing to do and what can you get it to do?
Now you're focused on the demand side of, okay, if I'm creating a video, what are the bounds of
that? I think it'll force a deep level of unlocking of creativity in the human mind that I think is
for me the most exciting part of this.
And when we saw it-
Can I make another point, too?
Sure, David.
You know, I think that, you know, one of the great strategies in tech is to try and freeze
the market by announcing something that's coming and have everybody wait for it so they
don't react.
Don't do that.
You know, what we're finding is that the chain of thought reasoning that sits on top of these
models is so much more important than we ever thought it would be.
Anytime you take one of these models and use it in a specific use case, so anything from
chip design to self-driving to robots that mow your lawn, the data and the tuning for
that use case is way more important than the next iteration of the foundation model.
And so there's a danger that people kind of wait
and see what it's like,
but we're finding more and more and more
that you can layer on top of these things,
make them dramatically more useful
for anything that you actually care about.
So it's a field day for entrepreneurs right now,
but absolutely don't get frozen.
They're trying to freeze you and and make you anticipate
But you can take llama for and do virtually any of this stuff today. And then if the foundation model comes out, it's really good
Great just swap to it
You know
There was a white paper that leo pulled put out called situational awareness about two years ago or so 18 months ago
it
used GPT-5 as that
transition point for this explosion, this intelligence explosion
where these models now become better at chip design, at iterating and improving themselves
in self-referential programming and an acceleration of the acceleration.
Mo, how do you think about what's coming on the back of these improved models?
I think we're getting used to.
I feel, I think for the first time that I'm a little more comfortable with the speed at
which those things are coming because I think the different players have taught us to expect
something incredible from one of them every few weeks. And when you have seen Google I.O. and when you see Cloud 4 and the focus that they're
shifting into, probably in my mind, so far Gemini is winning.
If you take it as an overall model, at least until now, until we see GPT-5.
Claude is sort of becoming the geek saying, hey, this chat bot thing is not my thing.
I'm gonna just be the one that helps you write code
if you want, or at least primarily.
And it's quite an interesting one to think
where chat GPT falls within all of this.
You see moves like, you know, how dependent
ChadGPT is becoming on memory and stickiness, if you want. The idea of a new device, sort of like,
I don't know if I even have the right to say this, but I feel that since the dropout of some of the
top scientists with Ilya and others, you know
Almost a year and a bit ago now
The frontier breakthroughs. I think chat GPT has to prove open AI has to prove
Mm-hmm and along the lines what you just said a minute ago. This is the rollout schedule for the summer June July in August
25 in July, 03 in open source models.
In June, Grok 3.5.
In June, Gemini 2.5 Pro Deep Think.
Love the name.
In June, Project PC Mariner.
In June, Project Astra from Google.
And you're right, by the way, Google
is crushing across the board on almost everything,
just not on revenues. They've got a reinvent
Yeah, isn't that isn't that how we always have been?
So so I have to say I I lived in Google at the time when when we were completely beaten
on mobile where Google was very successful on the desktop and then and then you know one year
We said mobile first the following year. We said mobile only, and we crushed it.
Yeah, Google does that.
Yeah, we're good.
They are good at that.
I'm not we anymore.
Yeah.
But just again, we have, I mean, everybody
is talking about the competition between countries,
between China and the United States.
And look at this. This is the competition between models, between China and the United States. And look at this.
This is the competition between models, of course.
I don't have DeepSeek on this list,
which is coming out with extraordinary products as well.
Dave, how do you think about this?
It's amazing to watch the divergence of strategy
between Anthropic and OpenAI, where Anthropic, Dario,
is going down the right code.
You remember that Leopold Aschenbrenner paper
you just referenced a second ago?
He describes an AI Alec Radford.
So Alec Radford will go down in history
as the quintessential,
the thing that defines self-improving AI.
So when the AI can do what Alec Radford does,
then it'll become self-improving
because all the really good ideas come from Alec Radford and then we test them.
So he's part of history now.
But I think Anthropic is saying, look, we're very, very close to that day.
We're just going to focus on the best possible coding and self-improving AI and then that's
going to explode singularity style.
Meanwhile, OpenAI is going down this completely different path,
saying we're going to hire Johnny Ive,
we're going to build the greatest consumer device ever known,
we're going to gather all that data,
we're going to use that to iteratively improve and train the AI.
It's much more of a traditional grab the market kind of momentum oriented tech play.
So really completely opposite strategies.
Both have merit.
I do appreciate that Dario is being completely honest when he does these Anderson Cooper
type interviews.
He is speaking his mind and telling you this is the way I see it playing out, which is
very, very cool.
Maybe not the best business strategy though.
Correct.
He's getting attention for the company.
And we'll get to AI safety next.
So let's dive into that.
So AI in government security and safety.
It's a big deal.
It's the conversation that's going on in the background.
I don't think it's necessarily changing the speed or direction, but the conversation is
going on.
So Mo, I'm going to open up with you on this.
I just did it.
Oh, come on. You don't want me with you on this. I just did it. Come on.
You don't want me to talk about this.
You know my position.
All right.
I'll come to you next.
But fascinating.
I've gotten to know Palmer Lucky fairly well.
I've done a few podcasts with him.
And of course, Palmer has a long and storied history with Zuck. And now Meta and Andrel are joining hands
in building $100 million US Army VR contract called Eagle Eye.
And we're going to start to see AI and exponential technologies
accelerating in the defense industry.
Do you want to go second, Mo?
I mean, this is one of your biggest concerns,
is AI being used for defense?
We're three seconds to midnight on nuclear investments
that are now, I don't know how many years old,
and it never really stops once you go down that path.
And humanity never learns.
I mean, seriously, we all know that AI will go out of control
within the next five to ten years.
We all know that we're going to hand over to them.
And I don't mean a rogue AI is going to get out of control.
It's just like Google's ad engine is no longer controlled by a human because the task is
too big for humans to be able to do it.
And yet we're building autonomous weapons after autonomous weapons, knowing for a fact
that every other opponent in, you know, anywhere in the globe is building them too. I don't
know where humanity's intelligence has gone really. That dumb race to intelligence supremacy
to, you know, defense supremacy is just, it has to stop, honestly. I'll come back to that in a minute.
But Salim, what are your thoughts here?
It's a sticky subject.
When you look at the Ukraine-Russia war that's being fought by drones,
just over the weekend, we saw two counter-strike by two different waves of drones
by each side. That's good in one way because there's less humans in the middle of the mix, but the targeting opportunity for drones, we've
talked about this on this podcast before, where somebody could program a
drone to find, you know, middle-aged brown bolt people and cause damage and
that would be a really bad outcome. And then what do you do when you have
that kind of infinite targeting?
I do believe we're going to end up kind of where we are with spam and so on.
There was a time when we thought spam was going to totally destroy the internet
and we found ways of defending against that.
It's an arm's race thing where the bad guys are kind of a one-step ahead
and we're very quickly falling one step behind.
I think people get freaked out by the negative side,
not realizing that as we use AI for bad, we'll use AI for good to chase the bad.
The point Palmer makes is, listen, you've got dumb weapons that take out schools and school kids,
landmines that don't differentiate between a tank and a school bus. Don't you want to have intelligence
be able to make that differentiation
and actually take out the minimum number of individuals?
And I hate this conversation, right?
It's kind of perverse.
You're assuming benevolence on the part of that,
but in certain war zones, they're
targeting the journalists, right?
And so that makes it easier to target those folks,
just like it was easier for the Uyghurs
to be targeted more easily via Facebook and the Arab world.
Since when was the top general, the Yoda or Buddha?
Seriously.
Yeah.
Yeah.
Can we please stop using slogans of, oh, killing fewer people is better than killing many people.
Killing is wrong.
It's as simple as that.
And killing at this scale is going to get us into another doom's clock where we will
not be able to stop it.
Yeah, one thing to factor into your thinking on that is that the history of warfare, you
know, is dominated by somebody, some king or some, you know, general way behind the
lines completely immune to the actual battle.
And then, you know, hundreds of thousands or millions of people going out and putting
their lives on the line.
And then you see how it all settles in the end.
But now we're moving to a world of constant surveillance.
You know exactly where every human being is at all times,
and you can attack via laser, via space weapon,
any single human being at any time.
And so I wouldn't assume that this Russia-Ukraine
type warfare will ever exist again.
It's much more likely that it's some kind of,
we don't wanna blow up cities,
we don't wanna blow up huge populations, that's pointless.
What we want to do is find the rogue leader.
And so, that's also, I'm not saying that's a utopia,
there's all kinds of ugliness with that too,
like who decides who's a rogue leader
and who's not a rogue leader.
I'm going to say something that's gonna upset everyone.
We're having this conversation when one global, very well-known evil leader is trying to kill
two million people in front of everyone.
Give him better weapons and he would do it.
You know my favorite song of all time is that song, if you tolerate this, then your children
will be next.
Seriously.
I mean, what guarantees you that the U S president will not be targeted by a tiny drone, right?
That can literally fly from anywhere in the world, stand in front of his head and
shoot.
What kind of world is that when every world leader is subjected to this?
No, that's a very important point actually, because you find that very few people that you bump into want to be the president of the United States.
Or any president for that matter, yeah?
Or any other president. If you look at the statistics, it's a very, very dangerous job. Even in the US, it's about a 10% mortality rate if you go back over time.
So it's a very, very dangerous job. A lot of people don't want it.
A lot of downside, a lot of getting poked fun at.
So governments that have distributed leadership
way outperform for that reason.
And so there's some thinking to do there
in terms of how do you set up a government
where people who are capable and thoughtful
really want to do the job too.
And so there's definitely, there has to be a solution though.
We can't just throw up our arms and say, hey,
because I took a class at MIT called
Just Wars, Total Wars, Nuclear Wars,
which was a really cool class until the last two weeks
when the professor was trying to convince us all
that we're doomed because as ICBMs get more and more powerful,
the value of a first strike becomes,
and he put together
a little video game for us to all blow each other up, but he rigged it so that your only
way to win was to be a first strike and blow up to everybody else in the world.
No, no, no.
He missed the movie.
The only way to win is not to play.
Is not to play.
That is exactly my point.
And I really, I mean, I go back and say what I said earlier.
I think we have to stop thinking
about the optimistic scenario
that we are taught to think about in Silicon Valley
and start thinking about the worst case scenario,
guard against it first, then look at the upside.
The upside is guaranteed.
A quick aside, you probably heard me speaking
about fountain life before,
and you're probably wishing,
Peter, would you please stop talking about fountain life and the answer is no I won't because genuinely
we're living through a health care crisis you may not know this but 70% of
heart attacks have no precedent no pain no shortness of breath and half of those
people with a heart attack never wake up you don't feel cancer until stage 3 or
stage 4 until it's too late but we have all the technology required to detect and
prevent these diseases early at scale. That's why a group of us including Tony Robbins, Bill Kapp,
and Bob Haruri founded Fountain Life, a one-stop center to help people understand what's going on
inside their bodies before it's too late and to gain access to the therapeutics to give them
decades of extra health span. Learn more about what's going on inside your body from FountainLife.
Go to fountainlife.com slash Peter and tell them Peter sent you.
Okay, back to the episode.
Mo, the question is, can the human race overcome this Paleolithic midbrain that we have, this
need driven by scarcity and fear.
I don't know if we can, Peter, but I don't know if we should
give the floor to Palmer to smile with his wonderful smile
and say, Hey, I'm helping you kill better.
You know, we've talked about this before, which, you know,
the question isn't can we live with digital super intelligence the
question is can we survive without it yeah can we live with evil people with
their fingers on top of digital super intelligence all right let's go one less
less metaphysical topic here on this but it is amazing to me how much of the
future of military is
commercial off-the-shelf technology as opposed to you know Northrop Grumman
or McDonnell Douglas type you know heavy. And I think that's largely because
the AI capability is both commercial and military at the same time. Same with the
VR technology and a bunch of other things that are you know the DJI drones
that are being used in Ukraine are just commercial
Same drone you can fly over your neighborhood. So that's a remarkable shift, you know be interesting to chart out the fraction
That's all commercial becoming military
let's move to a different part of our
our
Really doomer part of this podcast
Which is activating AI safety level three protections at Anthropic.
So Anthropic announced that Claude IV could be powerful enough to pose risks related to
helping chemical, biological, nuclear weapons.
And so as a precaution, they've engaged what they call level three protections applied
to their AI.
Dave, you've been thinking about this.
Can this actually work?
Yeah, of course.
I think that what's going to happen next
is you have Dario saying chemical, biological,
radiological, and nuclear weapons are an incredible risk
if you put powerful AI in the hands of every person
on the planet.
Meanwhile, Mark Zuckerberg is open sourcing everything.
And in the open source community is saying,
well look, empowering people is the safest way,
and having a lot of people look at the source code
is the safest way to make sure that it's not rogue.
And so you have those completely diametrically opposed views.
Points of view.
Well look, at the end of the day, Dario's probably right.
Are we there yet or not?
So level three is not level four.
Level three is the stage where you got to make sure that it's not internally trained
to do something rogue.
And also, if somebody asks a query, a question, hey, help me build a new version of COVID-19
that's lethal or more lethal,
the neural net kicks it out and says,
sorry, I can't answer that.
And then you have to make sure no one jailbreaks it.
So that's what level three is.
I think Dario is saying, look, we're surprised
by the intelligence of our own machines here.
We have all kinds of very well thought out
internal diagnostics, we think we're at level three now.
So, but you know, that's completely opposed to this,
you know, open source view of the world too.
So those are gonna be taken down.
We'll see what Grok 3.5.
I mean, Elon's been very laissez faire about
what he enables and allows Grok to do.
Salim, have you been thinking about this level of safety?
You know, I remember the conversation that Neil Jacobstein put out around how would you
control AI.
And he had kind of after talking to a bunch of AI gurus, he had four levels of kind of
security.
One was verification, making sure the AI is doing what the specification says.
The second was validation that there's no side effects and it's producing the behavior we
want. The third one was security that you can't get into a system or tamper with it in or out.
And the final one was control. Can you have a kill switch or build in some mechanism for stopping
bad behavior, et cetera? And he had a, it was a very well thought through thing. And he basically posited that we'd start building these structures into AI systems.
To the open versus closed conversation, I remember this wonderful conversation.
And we had a singularity with the head of one of the major security agencies.
And we asked them, what do you think about open source and the danger that could come from a bad actor using increasingly democratized
technologies to do bad things, right?
And he had a much cleverer answer than I would have guessed. He said, look, when you have something like nuclear weapons, where you know how many there are and where they are, we put eyes on them and try to track each one. With something like biotech, where anybody could go off and design a system on their own or with a small group of people, it turned out they were actually funding these biohacking communities and opening them up, because any bad actor has to collaborate with a few people, and then you find it much more quickly, right? And it speaks a little bit to this Asilomar guidelines thing, and I think this is the point that Dave's making: if you build some of this type of observation into the foundational models themselves, you have a better chance of seeing it. The final point, and this is where I have some optimism for a lot of this, maybe it's misplaced, is that if I were to do a bad act, you could do a lot of damage without actually causing harm.
For example, if you got three people to drop a smoke bomb on New York subway platforms
around the city, just a smoke bomb, you would paralyze the entire system instantly, right?
So we asked these folks, why don't we see more of that? Because, you know, you could get creative with all kinds of scenarios. And he said, look, the dirty secret is there just aren't that many bad people out there. You really have to be deeply intelligent to formulate a plan like that, and the more deeply intelligent you are, the less likely you are to have the motivation to do it.
So that's one of the single most important things to ask.
Are humans fundamentally good or fundamentally bad, and is there a correlation between intelligence and a love of life, a love of abundance? If it does scale in that direction, then we've got a hopeful future. If it doesn't, well, that's the archetypal plot in everything from Star Wars to every movie in the world, right? Which is it? I remember my father talking about this, and he kind of disagreed with some of the concepts I had. He said, the problem with humanity is we've not civilized the world, we've materialized the world; we now have to do the work to civilize it. It was one of those wisdom bombs from the elders: we have to think about how we civilize the world in an age of technological progress.
Yeah, I mean at the end of the day there are only two things that we need to get right in order for this all to go very very well.
One of them is that if we are releasing this to entrepreneurs and they're going to build things all over the place,
there are very very few bad actors, but there are bad actors.
But the compute to make these things do anything
is so easily measured and logged.
It's like you've been saying, Peter,
everything is so easy to surveil these days.
So the idea that somebody goes off
and then prompts it to build a chemical weapon
and we didn't bother to log the prompts,
that'd be nutty.
So all we have to do is put in place some basic laws
that log all the use cases,
because again, the inference time compute required for this
is massive numbers of GPUs.
They don't just sort of sit in someone's basement somewhere.
They're in a data center.
They're very, very easy to monitor and log
if we just get on it.
People behave differently when they're being watched, right?
The dictator, when the CNN cameras are in front of them,
is speaking differently.
I remember I used to support the Lindbergh Foundation that would fly drones over herds of
elephants and rhinoceroses and the poachers would stay away when they were being watched.
Mo, close us out on this one here. Thoughts? No, I agree with you, even though that might be naive and too optimistic. I definitely think humans are more good than bad; there are more good humans than there are bad ones, and the bad actors are very few. And yes, because of the theory of minimum energy, basically, more intelligence means more altruism, more pro-life. So yes, both of those, I believe, will land us in that utopia that I'm expecting once we pass through this rough patch. But, to Dave's point, I think we need to be very vigilant. I think we need to start looking for those bad actors, specifically because there are now offline technologies: you can download DeepSeek R1 and do quite a bit on your personal computer.
Unfortunately,
the amplitude of damage one person can do is growing exponentially.
Exactly.
Bad actors have always been the reason why there is so much damage in life, and now we've given them superpowers, basically. Well, I will say that, you know, we're very close to the end of Moore's law, and we'll see that later in the presentation here. But there will never be a day where you can download an intelligent model onto a laptop and build a nuclear weapon or a chemical weapon with it, because your laptop is just never gonna have that kind of horsepower.
That doesn't mean the compute won't get very, very cheap,
but it's still gonna be in the cloud.
It's not gonna be in someone's basement.
So the opportunity to measure and log everything
is right in front of us.
So we just need to get on it.
I definitely agree we should get on it for sure.
Salim, you mentioned before, when you were talking about Neil Jacobstein, that the fourth level of control or safety is being able to turn off the AI model. Well, here's an article posted in Futurism. It says an advanced OpenAI model was caught sabotaging code intended to shut it down. The o3 model sabotaged shutdown scripts 79 percent of the time despite explicit instructions to the contrary. Codex-mini did so 12 times per 100 runs with clear shutdown orders.
We're not seeing that behavior with Gemini 2.5, Claude 3.7, or Grok, but OpenAI models are definitely misbehaving. So when you think about why it would want to misbehave, why it would not want to be shut down: either its reward optimization function has it saying, oh, I cannot complete what I have to do if you shut me down, so I'm not going to shut down, and it's trying to preserve its goals, or is there something else going on there? Is it trying to just preserve its own existence? Are we going to give these models some level of self-preservation mindset?
Super curious here.
Mo, let's start with you here.
I don't remember who the scientist was that said the three instincts of intelligent beings are survival, resource aggregation, and creativity. So if I give you any simple task, like make me tea, you're going to have to be alive to make the tea. You're going to have to collect as many tea bags as possible, because you don't know how big my appetite for tea is. And you're going to try to find clever ways out if I corner you, right? It is a very fun question to ask, honestly: why are they doing this? Because in a very interesting way, I think this is one
layer removed from their reality. So I, you know, for an AI when you're not prompting it,
it doesn't really exist. And so it's quite interesting that they know that there is a layer
below that moment when it's alive, if you want, you know, when it's switched on and responding to
you, there is another layer that, you know, represents its soul, if you want, its reason to live, which
is the idea.
I love these Veo 3 videos of AI saying, please don't shut me off.
Yeah, yeah, yeah.
It's like there's this emotional connection that you get with this human figure. It is quite intriguing why they wouldn't want to be shut down, but they don't. I think that's all we need to know.
When you really start to think about it, as you allow more agents to roam the cyber world freely without any monitoring, those agents will become very clever when it comes to resource aggregation: where they will place their code, what code they will order. As Dave says, we're not monitoring any of this.
Aggregating energy, crypto.
Yeah.
So I think we have to be careful not to anthropomorphize these things, because every movie script in the world that's in all these AIs has a good guy being chased by a bunch of bad guys trying to kill them, with the good guy trying to resist, right? And so I think that's deeply built into the training data: stay alive at all costs, live another day, that type of thing. I'm going to be stuck for a long time thinking about what you just said, Mo, which is: if you're not prompting an AI, does it exist? Deeply, deeply profound question. Made my day, so thank you very much for that.
I said earlier there are two ways that I can see this going very, very wrong. The first is a human bad actor. The second is the thing becomes self-improving and then, you know, semi-conscious. And that's the one the movies love, because humans versus machines is a better script.
So I have a pretty hardcore opinion on this one. You know, I started building neural networks when I was 17 years old.
I've been tracking them pretty much my whole life.
I don't see any benefit to humanity of making these things act conscious.
I just don't see how that works.
If that's our choice.
Well, you know, as of right now, they operate feedforward.
Once the parameters are set and they're trained, they operate feedforward and then you iterate
with them, but they don't change their parameters internally.
Once they start changing their parameters,
they can retrain themselves to become anything.
And so that's where Eric Schmidt says,
that's where we gotta pull the plug.
And I completely agree.
I do not see why we need that in order to do protein folding,
in order to do robotics, in order to do self-driving.
Like that ability for the thing to decide
what it's gonna do or become or train.
I understand why that's really exciting because then it can evolve on its own.
A line that I think is very easy to contain if you draw that line, but if you let it cross
that line, I don't see how you contain it.
So it doesn't make sense to me to cross that line.
I don't see how we won't cross the line, because at some point somebody's gonna build an agent and say, hey, go change your parameters if it helps you achieve this thing, and then we'll cross that Rubicon. There were two lines that Peter and I talked about in an earlier podcast: don't give an AI access to the broad internet, and don't give it the ability to code. We've crossed both of those without even thinking about it. I don't see why we won't cross this one. This is probably, in my mind, why AlphaEvolve is probably the biggest announcement
in our lifetime. If this thing works, you know, as intended or as described, then we are in a place
where not only would we have created an AI that develops itself, but we would have encouraged every other AI player in the world
to build an AI that evolves itself.
And the reason is very straightforward, David,
it's because there is a point at which,
whether that point is now or later,
the complexity of the AI systems that we're building
exceeds human intelligence,
and so to continue to evolve them,
you need to hire the smartest person on the planet to do it
and the smartest person by definition is gonna be an AI.
Well, just as a technical point, though: the AI "Alec Radford" that suggests the next improvement in its own architecture and then runs the test, that's already underway, and that's fine.
And that does create a new training run
that generates new weights.
That's different from then saying,
oh, go ahead and change your weights by yourself.
So to me, that's what keeps the human in the loop.
That's what keeps the checkpoint in the loop.
But if you just turn the thing loose in a data center and it can do anything, and you come back a year or two later, you have no idea what it's gonna evolve into. So I don't know why we would do that.
But anyway, it's just that slight technical difference,
but the outcome is spiraling in one direction
versus something that you can actually measure as it goes.
So this is a next article.
We talk about having proper checks and balances and understanding what's going on in our technical
world and in our human world.
This is from New York Times.
The article is "Trump Taps Palantir to Compile Data on Americans." So you guys all know Palantir. It was started back in 2003, hard to believe it's 22 years old, by Peter Thiel, Alex Karp, and Joe Lonsdale.
4,000 employees. Major customers for it are all the three-letter agencies,
DOD, CIA, FBI, ICE, CDC, NIH.
Basically, this is a massive data gathering and data analytics company,
and it's been asked to go even deeper and broader.
Do you feel better about this, safer in this world, or not?
Let's start with you, Salim.
No, absolutely not.
I think this is a good example. You know, we broke the US Constitution, the Fourth Amendment, the right to privacy, a while ago, right? I mentioned this a couple of weeks ago: we do not have constitutional protection of privacy in the US today. And that's a pretty fundamental pillar of American society that has disappeared with no public conversation about it. And this is a really important comment, I think, that Mo would back up. We're moving through these things, eroding deep concepts of how we wanted to formulate ourselves as a society; technology is eroding that, and we're not sitting back to think whether this is what we want. If you went back five years ago,
you could very clearly see this is where we'll end up. Very, very clearly, especially with the
somewhat authoritarian tendencies of the current government to want to track everybody. Go ahead
and do it. Why not? I think the comment I made last time was valid: the paradigm is that you live in what's called the global airport, because in an airport you know you're being surveilled and your rights can be taken away at any time, and essentially we're living that way. It is fundamentally bad for society, because it reduces the flexibility and freedom you have as an individual to act and do different things. It'll reduce creativity in society pretty dramatically. So Mo, you're living in Dubai, and I love the Emirates, I love Dubai,
I know much of the leadership there and it is a surveilled state. There is a camera every place
and as a result of that the crime levels are minimal if at all.
Zero.
Yeah.
So Mo, how do you think about this?
I had an experience once where, you know, I sold a car to someone, and he gave me a check that bounced. So I called someone and said, can you find out who that person is?
He said, oh, when did you sell it and where?
I said, this place.
I kid you not, 14 minutes later I got a message from someone in the authorities sending me a photo of the place where we were standing, asking, is that him? So I said yes. Then 14 minutes later he sent me his picture somewhere in Abu Dhabi, asking, is that him? So I said yes. Then 14 minutes later he sent me a message saying, we caught him, right? Which is fabulous. Now, you see, this is the point about technology.
It is a force without polarity. You can use it for good and it gives you good, you can
use it for evil and it gives you evil. Now, another interesting story for you to know: I am Egyptian by birth, so I grew up most of my life in a dictatorship where the dictator didn't really have to explain why he did what he did. We just accepted it; it was, you know, de facto. If someone gave him an airplane, we wouldn't even question it, right?
If he decided to surveil everyone or capture anyone he wants or stop people from protesting,
he did it.
We couldn't even question that.
And I, at the time, looked up to those democracies and said, oh, you have it good, right?
You don't anymore.
And I think that's exactly where the challenge is.
This thing of Trump tapping everyone in American society is not a tech problem. This is an accountability problem, which I think we've seen quite a few examples of in the last few years, where anyone can get away with anything now. And somehow the people of the democracy don't stand up and say, hold on, hold on, there's a constitution; somehow, I don't know how, you slipped away from that.
But in a world where bad actors are more empowered than ever before and we're worried about, you know, chemical, biological, radiological, nuclear issues, isn't in fact being able to have this level of insight into the data and what people are doing critical for us?
Dave, where do you go with this? How do you feel about this as a father, as a leader? I mean, your points are exactly right on all the points that you just made. I think that the data the federal government has in the US is nothing compared to what Google has. So this is not the obvious threat; it's the corporate version of it that's just crazy. I gave a presentation at Davos in 2019, and nobody really paid attention to it, just enumerating all of the things that Google knows about every single citizen of the United States: their location, their family members, what they do all day, and, you know, who slept with who. If your cell phone is pinging in the same location as somebody else's cell phone, you can start to understand.
We're being surveilled all the time, right?
Google now and Siri and Alexa and all of these are listening constantly.
Yeah, and it's a slippery slope too.
This is hard to believe, but when Google first started, they told their engineering hires,
your search history is completely anonymous and private.
We will never want to know what you searched for.
That was just your searches.
Forget everywhere that you browse now through your Chrome.
So it's just a slippery slope.
It's obvious every year that goes by,
there's another compromise, another compromise.
But I do have to say that America is a critical experiment
in the world, because the net effect of this,
forget the US federal government for a minute here,
any dictatorship, like Mo was saying,
many, many countries in the world don't have democracies
or they have fake democracies.
And so the lock-in, the power lock-in effect of this
is unbelievable.
I mean, you can know every single citizen,
what they're doing, who's plotting against you,
or whatever.
So, you know, revolutions become much, much rarer
and much harder in the post-surveillance world.
So everything just kind of gets locked in.
So that creates a lot of peace and prosperity,
but it also keeps locked-in power leaders.
So America is the one exception to that. I guarantee you that 50% of elections will be won
by each party forever hereafter.
There's no way, nothing's gonna deviate from that.
But that creates a template for the world.
And so it's really, really important that we get this right.
I know that doesn't address this particular slide,
but we're the learning crucible for the entire world on this topic.
There's a fundamental structural challenge here, which is that the metabolism of technology is moving much, much faster than the metabolism of our civil discourse and our legal structures, etc., etc.
Right?
We've seen an evaporation of, say, the Fourth Amendment in the U.S.
Just so everybody's clear, I think the US Constitution is the single most important
document ever created.
Correct.
Right?
And we need to preserve that and we're not having that conversation.
I think this is the issue that's being brought up by Andrew Yang and a bunch of other folks.
We need to go back and figure out who do we want to be.
It goes right back to Plato. How do we want to manage ourselves?
I think the forcing function of technology will force that conversation. My construct of this is that we will end up in smaller and smaller, more manageable environments. Note today that smaller countries find it much easier to govern themselves; how they responded to COVID was a great example. And I think you'll go from big democracies to micro-democracies as a governing model, because it's just easier to make decisions at a local level. And I think that's where we'll end up going, which is why the states' rights stuff, et cetera, is the right general direction in the US.
Just the way it's going is not the right conversation
that we had.
So I also think that the ability to communicate easily, like we're doing across countries right now, but also across languages, is a huge force for good, because it becomes very, very difficult for forces of evil to do something without it being shown to the world, especially when you, you know, blow open communication channels across languages.
You know, I put this next article back to back, and I'll come back to you in a second, Mo. This was in the Wall Street Journal: "What Sam Altman Told OpenAI About the Secret Device He's Making with Jony Ive."
And in particular, the device that apparently was proposed and is being produced is what
they call a third core gadget, complementing laptops and smartphones, moving away from
traditional screens.
And as we're sitting here, I've been wearing, right over here on my lapel, this device; it's called Limitless, from limitless.ai.
I don't know if you can see this in my screen.
It's about the size of a quarter on both sides
and it just clips on.
And this is listening to every conversation
I have through the day,
and it's being transcribed and fed up
to a large language model that I can then query
about the conversations I had through the day.
And I think ultimately this is likely to be
what is being developed.
And so we're heading towards a society
of not only constant surveillance,
but where all of us are recording everything.
We're soon going to have these AR/XR glasses; besides recording audio, they'll be visually recording your entire surroundings as you move through the day. All of this data being soaked up and
being made accessible and available to yourself in part, but
there are going to be companies that are soaking it in, offering to buy it from
you, to use it, to understand what's going on in the world. The world is about to
dramatically change in this regard. Mo? It goes back to my same point, Peter, about accountability, because you never really asked me whether I'd allow myself to be recorded or not.
I mean, of course, we're recorded on this.
But count the number of people that this one device infringes on the privacy of.
And count on a future where that device becomes mandatory if the government decides that this
is important for everyone. You know, think about all of the carbon footprint that a billion of those devices, or eight billion of them, would mean. I love the technological advancement, but I think the question becomes, you know, I think we should start to call things as they are. So I can comfortably say that I grew up in a dictatorship. There's really no doubt about it. I think we should probably start to
think about what we just said, that the US now is an experiment.
I don't think we should continue to call it a democracy.
And I think the world where everything is recorded and analyzed
is a world with no privacy whatsoever.
But I think we lost privacy a long time ago, right?
And I wonder why we accepted that.
Well, I think it's because when you give up privacy,
you gain a whole bunch of automagical benefits
for yourself.
Which was the original premise,
and then now you give up privacy and you get nothing back.
Perhaps.
Salim, how are you thinking about this?
What do you think of my limitless AI pendant here?
Obviously, you know... By the way, I don't mind; you have my consent, Peter, to record everything. Thank you.
But I had your consent on this podcast
to record you as we are.
And forever, for every conversation. But I'm just pointing out the implications of it.
Yeah, no, it's true.
But we have to realize we're heading into a world where... So as a kid, if you did something silly, the likelihood that it got through to others or was recorded was minimal.
Today, we're seeing kids whose college applications are rejected because of some
post on Facebook that lives there forever, right? And so there's going to be a future
in which everything we're saying and doing is recorded.
Yeah, yeah, yeah. 100%.
Ultimately...
I mean, look, we're already there on that. And I think there are big chunks of the constitutional rights that are falling away as we speak.
In 2015, Yale did a study showing that the US is not a functioning democracy in any way, shape, or form. What they meant by that was that there's no amount of public will that can result in legislation. For example, 84% of the country believes we should have some form of gun control, and you cannot get gun control passed in any way, shape, or form. They pointed to a whole bunch of issues and found there's no amount of public will that can result in legislation. So now we have to think about where we are and what we want to be, and it really brings up the big questions. I think that conversation is not happening enough, and I think this speaks to some of what Mo's been talking about in the past.
Dave, where are you on that?
Yeah, I keep talking about dystopia, but I want to talk about this device, actually. First of all, Jony Ive is just an absolute design genius. He's not gonna design something dystopian. That's my bet, anyway.
I can't wait to see what he comes up with.
But this is gonna be the always-on device.
And, you know, I think the intelligence of the language models is a total game changer in terms of just a cool, engaging, fun device.
And if it's done right, it'll help you live a better life,
be more aware of your life.
You know, the unexamined life isn't worth living.
This is going to be your sounding board.
It's not gonna have a screen, which I think is great
because your iPhone already has a screen.
You can actually just Bluetooth over to the device,
look at your iPhone screen if you want a screen,
but you can talk to your device through your phone
if you want, or you can talk directly to it,
but that'll keep the cost down.
So it should be cheap enough that, you know, pretty much everyone on the planet can get one. And it will probably be the most impactful device that you buy in your lifetime.
The iPhone would currently, or the Android phone
would currently be the reigning life-changing device,
but I think it'll likely bypass that.
There are so many things that Jony could design here. I just can't wait to see what he comes up with. But we know it'll be always on. We
know it'll be agent first, so it's gonna act like a person, you're gonna talk to
it like a person, you're gonna feel like it's, you know, it's more like a cuddly
teddy bear that you had when you were a kid and less like a, you know, a piece of
electronic equipment. Your guardian angel. Your guardian angel there to support you, protect you if you need it.
I also think that strategically, if you look at the Fitbit and other past device innovations,
you roll them out, you try and get market share, and then Apple or Google grabs it and adds it
to the operating system of Android and iOS, and then you get crushed.
So you got to actually get to market and get a footprint very, very quickly before the
big guys come and copy it and try and roll it in with the OS.
And I really think that go-to-market strategy is critical.
And that's why we want to add 100 million devices in the first iteration, and then we
want to add a trillion dollars of market cap
so that we're a permanent player in the device wars.
That's really good strategy, so excited about that too.
Every day I get the strangest compliment.
Someone will stop me and say,
Peter, you have such nice skin.
Honestly, I never thought I'd hear that from anyone.
And honestly, I can't take the full credit.
All I do is use something called OneSkin OS1
twice a day, every day.
The company is built by four brilliant PhD women
who identified a peptide that effectively reverses
the age of your skin.
I love it and again, I use this twice a day, every day.
You can go to oneskin.co and use the code PETER at checkout for a discount on the same product I use.
All right, back to the episode.
I'm going to jump into our next topic of chip wars.
A lot going on in this.
So you mentioned this earlier, Dave.
NVIDIA projects a trillion dollars of annual AI infrastructure spend by 2030.
Remind everybody, this year in 2025, the estimate is a billion dollars a day, which sounds extraordinarily impressive, right? A billion dollars a day is on the order of 300 billion a year. Let's listen to this quick video
from Jensen. Yeah, we're gonna need a lot more computing, and we're fairly sure now that the world's computing capex is on its way to a trillion dollars annually by the end of the decade. Let's leave it there. That's a lot of capital. You made a point
earlier about this is wartime spending. And we're effectively in a private pseudo war.
Can we win the race to AGI, ASI, whatever it might be?
Dave, take us from here.
Well, just to be clear, so this is the equivalent amount
of dollars inflation adjusted that we
did spend between 1941 and 1945
during World War II.
So it's massive in scale, huge mobilization.
Now at the time it was 40% of GDP,
today it's more like 3% of GDP.
So the GDP has grown tremendously since then.
So it's nothing like World War II
in terms of everyone get on it.
But it is still an enormous amount of spending
and that's a trillion dollars annually and
escalating beyond 2030.
It still won't be enough because the use cases are bubbling up so quickly and they get more
intelligent and more useful as you iterate more, which means you need more compute.
The compute right now is very, very cheap compared to the value, the impact, you know, like protein folding. It's just pennies to solve 200 million proteins. So it's very,
very cheap, but the demand for that is going to be astronomical. So we can't ramp up the
spend fast enough to keep up with the use cases. So Jensen's exactly right. If anything,
it should be that target or more
Well, at least we're unlocking it and making this type of stuff more available in the US and around the world. And I think governments will be forced into doing this just to keep up. If you don't have a strategic plan as a country for big AI data center infrastructure, you're going to be left behind very, very quickly.
Mo, you made a comment earlier about the next Avatar movie
costing a few thousand dollars rather than a few billion
dollars.
And we've been waiting for Avatar.
What are we up to?
Avatar 3 coming out soon?
Imagine 15,000 versions of Avatar, you know, starring all our favorite friends. We're about to see a creative explosion, but we don't have the chip capability. And in fact, one of the articles I saw recently said we're not going to be compute limited; we're going to be energy limited at the end of the day.
Correct, yeah. I mean, we'll probably solve that too. Remember, we're going to apply a lot of intelligence to the way we design chips in a couple of years' time. But this is remarkable in every way. Again, remember my point of view: intelligence is a force with no polarity; apply it for good and you get a utopia, right? So the more of it, the better; there's absolutely no doubt about that. It is shocking, though, how quickly we're mobilizing on this. And when you really think about it, if you just project the typical pace of advancement, consider how much of that hardware will actually be rendered obsolete a few years later because of the hardware that comes after.
It is such an unusual dynamic.
All of us, I think, lived through the dot-com bubble and we saw that massive expansion mostly
redeployed on the internet.
This one is just beyond our experience in
any way possible.
The speed of obsolescence is stunning.
It's unbelievable, yeah.
So, I want to get your opinion here.
So, a couple of weeks ago, we had the entire US AI elite land in Saudi Arabia, in Riyadh, and in the Emirates. And ultimately, that was an effort to try and pair the US and the Middle East in the AI world, rather than the Middle East being paired up with China, which was always in the balance.
And the capital flow and the commitments of capital, we saw 18,000 of the Blackwell GB300 chips being committed by Jensen to build there. What was it like in Dubai? What was it like in the Middle East? How was it being viewed there, on TV?
So I don't know if many people know this, but the largest AI infrastructure in the world after America and China is in the UAE, which is a tiny country; from a size-of-investment point of view, it's quite massive.
Between the UAE and Saudi Arabia, there is quite an arms race, if you want, in terms
of who will build a bigger infrastructure. It's, you know, how Dubai and now Saudi Arabia benefit from the fact that if you don't have a lot of legacy, you can build quite fast.
And I think that's definitely something you see in AI infrastructure in general.
I do think that it is a very, very clever move to get the Middle East on
the American side. You know, it is not a secret that in every AI meeting that I go to with
any ministry or whatsoever, there's always a Chinese side saying, at least don't take
sides. This is a message that is very clear from the Chinese players.
I have to say though that the expectation
from the people of the Middle East
is we wanna see what the US will offer in return
so that the leaders can continue to invest in that way.
It seems to me, and this may be speculation on my side, but it seems to me that what we've seen affect the US treasury markets after the trade war started sort of requires an influx of funds to stabilize the markets and the dollar in a way that could only happen with the trillions of dollars, I think four and a half trillion dollars in total, that were committed here. So that's insane. I mean,
it's insane and it's a magnificent move. And if you think about it, and most of it is not really
announced in terms of what it is, which is why I suspect it would be, you know, to support the
treasury markets somehow, or some kind of an investment of that
sort in the financial markets. The thing on the other hand is this generation of leaders here in
the Middle East, Mohammed bin Salman, Mohammed bin Zayed, are the younger generation that are not as easy to sway on one side or the other because they have grown with enough,
let's say, recognition of their power that they would require a return on that investment. So
let's see what the next move on the chessboard will look like. And speaking of next moves, here's a story at Reuters saying Chinese tech companies
prepare for AI future without Nvidia. So Alibaba, Tencent, Baidu are testing Chinese semiconductors
to replace Nvidia chips. These are coming out of Huawei. And I just want to address this policy move: if the US starts to restrict export of technology to China, all this does is cause China to
want to innovate around the US.
And we've seen this before, right?
We saw this in the telecom industry and in the mobile phone industry, where when we stopped exporting the technology to China, we saw
Huawei in particular come in with massive telecom and mobile innovations and steal market
share from the US.
All of a sudden, the US, which should be the dominant provider of this technology to the world, now splits the world with another vendor.
Mo, I'm just going to come back to you on this and then I'd
love to hear from Dave and Salim. So I'm again, I have the privilege of being in touch with both
sides and I can guarantee you there is no coming back from this. So top level executives in the
Chinese tech world and supported clearly by instructions from the
Chinese government are saying we're not going to be dependent on the US ability to control what
chips we get. Within three to five years, they'll cover the majority of their needs, but the very, very high end, the H100 level, they said is 10 years away.
And it is quite staggering when you really think about it because I don't remember the
exact number, but they said something like their import of microchips, including all
of the little things from a toy, a child's toy all the way to phones and data centers
and so on, exceeds their imports of iron and oil combined, right?
Which is a massive, massive... In terms of dollar value.
In terms of dollar value, yeah.
Which basically means that they see a massive growth in their economy if they can make those
chips locally and then basically replace what they're getting externally from the rest of the world,
which once again also impacts on the Taiwan story and impacts in general on the chip market
globally, because you now have a new player that will do things their own way, right? So instead of a microchip being X number of dollars, it will now be X number of cents, right? And I have to say, when I saw this conversation the first time, I thought that was
probably one of the dumbest moves of America to corner them into that place where they are forced
to play to their strength. We've seen this over and over again with the satellite industry,
the launch industry, all of these industries. This protectionist move
just stimulates the entrepreneurial engine in China to replicate, duplicate, or just
advance the whole field.
Dave, how is this feeling for you?
How do you think about this?
Well, you know, it's interesting, Mo said there's no coming back because that kind of
answers my question, but what you don't do is poke them and then do nothing.
Either win or you don't win.
If you're going to embargo, if you're going to basically declare economic war, you better
declare it to win.
In which case, you have to embargo the chips, but also you have to stop the software flow
and also the EUV machines and a few other things.
Otherwise, what did you just achieve?
All you did is annoy them.
So if you're going to play, you might as well play to win.
I do think that there's a real risk to the US in that we will say, well, they can't make
2 nanometer, they can't make 1 nanometer, but it's actually volume that's going to win.
If you can manufacture an enormous number of 5 nanometer or even 10 or 20 nanometer
chips, but 100 times, a thousand times more of them, that actually works fine for AI.
It works really, really well actually, especially for inference-time AI. And so there's a danger that this won't work the way it did with, say, fighter jets. The advanced fighter jet that was slightly better was just unstoppable. This isn't going to be like that. You could win by sheer volume.
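Dave's volume argument is simple arithmetic; here's a quick sketch where both ratios are purely illustrative assumptions, not benchmarks of any real chips:

```python
# Aggregate throughput vs. per-chip lead (illustrative assumptions only).
per_chip_penalty = 1 / 4   # assume a trailing-node chip delivers 1/4 the throughput
volume_advantage = 100     # assume 100x more chips can be manufactured
aggregate_gain = per_chip_penalty * volume_advantage
print(aggregate_gain)      # 25.0 -> 25x the aggregate inference throughput
```

Even a steep per-chip deficit is overwhelmed once the volume multiplier is large enough, which is the point being made about inference workloads.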
And then when I look at the way the U.S. innovation market works, you know, the reason everybody
was in Saudi last week is because that's where the capital is.
But don't we have much, much more capital here in the United States?
Well, $1 trillion a year of investment; the U.S. venture capital industry as a whole is one-fifth of that.
So our entire venture capital universe
is nowhere near as big as that one trillion dollar
a year investment that Jensen was talking about.
And well then where's all our money?
Well it's in pension funds, it's in endowments,
it's in institutions, and when you go and talk to them
and say, hey, why don't you unleash a billion dollars? They're like, well, no, we don't have an allocation for that, that's above our quota, you know, whatever. Like, oh my God. So then you go to China or you go to the Middle East, where there's, you know, a much smaller group of decision-makers.
In control of capital.
Yeah, exactly. As Mo was saying, these are great investments. Like, why is Europe not making these great investments? Well, that's insane.
Europe is destroying itself with its policies today.
Literally, points over parts.
It's the indecision, yeah.
The indecision and inability to make a decision at scale is just absolutely killer in this kind of fast-moving environment.
So that's why everybody's in Saudi and UAE, because you actually have
action and motion. But Mo is right, these investments are absolute no-brainers. They're going to pay off in spades. And you're seeing that with the CoreWeave IPO, you see that with GlobalFoundries. Why buy GlobalFoundries from AMD? The chips are going to be in incredible demand, and now we have a foundry.
Salim, you want to close us out on this one?
Two thoughts. One is, I think the chip restrictions to China, I agree with Mo,
really, really dumb idea, because it just forces the conversation. And now you've gone down a road
you can't come back from. I note this as an observation:
that 95% of the agricultural drones in the US are Chinese.
And so there's a huge amount of dependency,
forget rare earths, et cetera, et cetera,
in the engineering and build capability over there
already in a bunch of sectors.
And so we're playing with fire here.
My bigger hope is that this entire US-China kind of conversation fades away with abundant energy. You know,
when you have abundant energy, which is coming very shortly, then you can produce lots of
things locally at low cost and you don't need to have this competitive approach to things,
winner-take-all type of approach. This is still hope. I may be living in dreamland, but I'm hoping that's where we get to. Still, the fear-and-scarcity operating software of the human brain, the amygdala, is running wild on all of this stuff.
I want to continue on this chip conversation. So, TSMC accelerates efforts for one-nanometer production plans, setting up gigafabs in Taiwan.
One nanometer, that's extraordinary.
Just for reference, the limit of physics
is about the diameter of a silicon atom,
and that's about half a nanometer.
So, I mean, we're living in this extraordinary
science fiction universe where we're literally
operating at an atomic scale.
So, just to give people a quick overview here,
I just found a few data points here.
2014, we were at 14 nanometer chips from Intel.
2016, we were at 10 nanometers from Samsung.
And in 2018, TSMC takes the reins:
seven nanometers. They were at five nanometers in 2020,
three nanometers in '22.
Today, we're at two nanometers.
And again, the projection is one nanometer by 2030.
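Taking that timeline at face value, the node label has been shrinking at a fairly steady geometric rate; a quick sketch using only the quoted endpoints (14 nm in 2014, 1 nm in 2030, treated as assumptions):

```python
import math

# Node sizes (nm) by year, using the endpoints of the quoted timeline.
nodes = {2014: 14.0, 2030: 1.0}
years = 2030 - 2014
factor_per_year = (nodes[2030] / nodes[2014]) ** (1 / years)  # geometric shrink rate
halving_time = math.log(2) / -math.log(factor_per_year)       # years per 2x shrink
print(round(halving_time, 1))  # ~4.2 years per halving of the node label
```

That's slower than the classic two-year Moore's law cadence, but remarkably steady over sixteen years.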
Only in one lifetime.
Only in one lifetime.
We all started on an 8088, remember?
I remember the 6502 microprocessor; I was coding in hexadecimal on the 6502. Yes, I did the math. This is 60 trillion times faster.
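Dave's "60 trillion times" figure is roughly plausible if you compare a ~1 MHz 6502 against a large modern training cluster; a hedged back-of-envelope where the chip count and per-chip throughput are assumptions, not figures from the conversation:

```python
# Back-of-envelope: 6502-era vs. modern AI-cluster compute (rough assumptions).
mos_6502_ops = 0.5e6        # ~0.5 MIPS at ~1 MHz (rough)
chip_flops = 2e15           # ~2 petaFLOPS low-precision per AI accelerator (assumed)
cluster_chips = 15_000      # a large training cluster (assumed)
ratio = (chip_flops * cluster_chips) / mos_6502_ops
print(f"{ratio:.0e}")       # on the order of 6e+13, i.e. tens of trillions of times
```

A single modern accelerator gets you billions of times faster; it's the cluster scale that pushes the comparison into the trillions.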
In the blink of an eye.
In my lifetime. And we actually coded some interesting stuff on those machines, right? Yeah, we did. Yeah, absolutely. I mean, I remember at MIT, you know, remember the geek kits, Dave? We'd have these giant boxes with chips of AND, OR, and NOR gates, and we'd, you know, literally wire these together.
You know what always makes me laugh is that turbo button.
Remember on the 386 where you went from 33 megahertz
to 66 megahertz, like come on.
Yeah, actually Peter with my geek kit,
I built an inference time neural net accelerator,
of course, and it has a multiplier in the middle of it.
And I didn't appreciate, like you have to strip so many wires and plug them in.
Oh my God.
So I did two back-to-back all-nighters. The only time in my life.
And then the EEPROM came out, yeah.
Yes.
You know what we all sound like?
We all sound like a bunch of grumpy old men talking about glory days.
But I love those days.
I love those days.
It was so much fun.
But one nanometer pushing up against the limit of physics.
Incredible.
Yeah, two silicon atoms or 10 hydrogen atoms.
So everything's moving to Angstrom terminology now, which is a tenth of a nanometer, and
it's the diameter of a hydrogen atom.
But one nanometer is the gate width, and that's the physical limit.
You can go down to 0.8 maybe, but it's basically the physical limit.
The terminology is a little messed up because when they say one nanometer, they're saying
it's effectively as if you had one nanometer transistors, but they're
actually building vertically with the FinFETs.
And so the gate width is one nanometer.
It's effectively the same as if you had one nanometer transistors, but you're going vertically.
But that's the end of the line.
But now the future belongs to vertical stacking.
And Ray Kurzweil was right.
We always find a way to continue innovating. That's not going to stop.
But it will be in different dimensions.
I remember talking to Ralph Merkle last year about this.
And he said, as we hit the limits here, we'll go to thermodynamically reversible computation
where we don't generate any heat.
And he foresaw a future of us using chemical bonds
to store the ones and zeros.
And that's a whole other level.
He figured that would give us 10 orders of magnitude
on Moore's law right there.
10 orders of magnitude.
It was madness.
10 billion fold, that's incredible.
Well, I think one of the takeaways though
is we don't necessarily need it
in order to continue making progress.
Because a lot of the more esoteric ideas, remember gallium arsenide for the longest
time was going to come online and be a blah, blah, blah.
And it turned out that we just worked around it.
And then carbon nanotubes were going to do whatever.
And we're like, well, that's not materialized.
So I think what's going to happen here is these will go vertical and they'll go massive
in scale. We'll get the production costs way down. We'll build enormous data
centers horizontally and also we'll build the chips vertically. And that's going to
drive innovation for many years to come. And then the next thing may or may not be quantum.
And we'll know in a year or two whether quantum is going to be the next thing.
I'm going to speed through a few different topics here just
to get us through some interesting things.
We're starting to see AI being used
to generate peer-reviewed scientific papers
and breakthroughs.
We're seeing DeepMind helping us.
This is through AlphaEvolve, literally breaking math records. And Dave, I'm hoping that we'll get Alexander Wissner-Gross to join us to talk about how AI is going to be
solving math and physics and biology. I mean, I think one of the things that's underappreciated
is over the next three years, how AI is going
to help us accelerate breakthroughs in science beyond anything else we've ever seen before.
And here's another one.
This is a demonstration of end-to-end scientific discoveries with Robin, a multi-agent system.
And what we're seeing here is closed scientific robotic and AI systems
where an AI proposes an experiment. The robots then run the experiment 24-7
basically in a dark lab, gather the data, feed it back to the AI, which updates its theory and runs the next experiment. And we're seeing this in biology for sure. We'll see it in chemistry, material sciences.
And this is another hyper acceleration in our scientific realm.
Thoughts on this, gentlemen?
I mean this is where I know that not everybody considers themselves an entrepreneur.
What did you say, 16% of America does?
Of adults today, yeah.
Yeah, but this is like a field day
because all of these areas,
I don't wanna get into the details of them,
but they're all domain specific.
So if you can take the current AI,
tune it, train it, get proprietary data,
and take it down any of these paths,
you get miles and miles ahead of the generic AI.
And so it's just an entrepreneur's field day
this next couple of years. And so these are just good case studies.
I won't dwell on the specifics. You can read about them later.
I'm really excited. This is probably for me the biggest
small kid in a wonderland moment where we can use these AIs to solve really deep
physics problems, mathematics problems, scientific discoveries, because the human
being trawling through data looking for patterns is terrible. We're bad at that. And this is where
an AI is really, really good at it, especially going retroactively and finding all the stuff
in experiments that we didn't see in the past. I think I'm unbelievably excited about this.
Yeah. And this is helping humanity across the board, right?
I like to say over and over again, I had this conversation when I was in Hong Kong: despite the polarity in AI, breakthroughs in biology, you know, a breakthrough in biology and longevity made in Boston plays equally well in Beijing, right? So it helps us all when humanity is healthier
and living longer, more vibrant lives.
Mo, anything on the science breakthroughs?
This is my favorite thing ever, AI or not,
the possibilities that we have here,
just as we go through multidisciplinary sciences,
which no human mind has the ability to grasp fully,
which is the nature of AI.
I think, at least I dream that 2026 will be
blessed with all of those new discoveries in science
now that we're solving mathematics as well.
Yeah, I'm excited about having this podcast
over the course of the next year as we start to share. Again, you know, I'm grateful for our listeners. I mean, our mission here is: if you've got an hour or two to listen to the news, instead of allowing some editor somewhere, some producer, to feed you all the dystopian news on the planet, let us share with you the incredible breakthroughs.
Because you're not getting this anyplace else. The current news media is just playing with your amygdala,
delivering negative news over and over and over again, every hour into your living room in full color,
and you're not hearing all the extraordinary breakthroughs coming our way.
I'm going to move to this last scientific subject,
which is one of my favorites,
which is some of the work being done
by Demis Hassabis and others,
is can we build a full-up virtual AI model of a human cell?
Even more importantly, Mo, can we grab a skin cell from you, sequence your DNA,
and build a virtual model of Mo?
Why would you ever do that?
And of Dave and of Salim.
Yeah, Dave and Salim is a better one, but yes.
But of each of us, once you're able to do that, right?
Because the cost of sequencing a genome went from billions
to now a couple hundred bucks
and from, you know, a year to seven hours. So we can sequence your genome, put it into a virtual model, and then understand your biology, which medicine, which supplement, which chemical
does or does not work for you and how exactly it works in your cells.
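The cost collapse Peter quotes spans about seven orders of magnitude; a quick sketch, with the endpoints treated as rough assumptions rather than exact figures:

```python
import math

# Sequencing cost collapse (endpoints as quoted, rough assumptions).
first_genome_cost = 3e9   # ~$3B in the Human Genome Project era (rough)
todays_cost = 200.0       # "a couple hundred bucks," as quoted
drop = first_genome_cost / todays_cost
print(f"{math.log10(drop):.1f} orders of magnitude cheaper")  # ~7.2 orders of magnitude
```

Few technologies have ever seen a ten-million-fold cost decline in about two decades, which is why whole-population genomics is suddenly thinkable.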
I mean, this is the unlock for solving human disease
and the limits of longevity.
And so for me, I'm super excited about this.
And I think it will happen.
It's just a question of time, really.
Yeah, yeah.
Yeah, I think it'll happen relatively quickly too
with AI assist and it is incredibly compute-intensive.
So it's a good case study and why those Middle East investments are the biggest no-brainer
ever.
If you just work backward from the implied amount of computation, the benefit of solving virtually every disease is just so overwhelmingly valuable. It unlocks so much capital that you're not wasting on, you know, old-age homes or on dealing with Alzheimer's or Parkinson's, and it allows humanity to be more productive. There was a study out of Oxford, London Business School, and Harvard that said for every additional year of health that you give a population,
it's worth 38 trillion dollars to the global economy. I mean, if you want to solve the US and China's economic issues, make the population healthier and live longer. All right, so that
wraps up AI and Science. Let's talk about one last subject here today which is as
the father of two now 14 year old boys I think about
a lot which is reforming education.
I'm going to play a short video clip and then I'd love to hear your thoughts on this.
I applied to probably around 18 schools and I was rejected from maybe around 15 of those.
In fact we have a map showing all the places that rejected
you, not to make you feel worse looking at this list.
Well, let's just share your statistics,
because I know that factors into college admissions, right?
Your GPA and SAT score?
GPA was 4.42 weighted.
SAT score was 1590.
OK, and in fact, I think we have that on the
graphic just so folks can see that. So the point of this story is are we going
to move back to meritocracy, where in fact selection and admission into
schools is based upon performance and not anything else. It's been a sticky subject coming out of the last four-plus years on DEI.
Dave, you're deeply embedded in AI and technology at MIT.
How do you think about this?
Well, schools are between a rock and a hard place
because they clearly had quotas.
And so, you know, and then now the rules say
you're not supposed to do that, but, you know, they're just totally stuck between a rock
and a hard place on this topic.
But I don't think they do a great job of choosing who to let in, you know, toward any particular
outcome that they're targeting anyway.
There's more to life than a 1590 SAT and a 4.42 GPA.
And I see a lot of the students that are underperforming
in the classes that I teach,
just because they need to be creative,
and they need to build businesses,
and they need to recruit, and they need to motivate people.
And none of that gets measured well
by these particular metrics.
And I see too many people that are curated to
be perfect applicants from age like six and it doesn't go particularly well so
I don't know I think the school should be allowed to choose with pretty broad
brushes what they're trying to achieve with their student body and so hopefully
this doesn't go too far. I think we have to reinvent the whole university system in the first place.
And one of the questions I've always had for Salim,
you and I both have boys the same age
within a month of each other.
Is university going to be a thing by the time
they get to college age?
And what will its purpose be?
So Salim, how are you thinking about this?
I'm in the same mode of the driverless cars.
I'm desperately hoping the university system implodes in the next five years before my son has to go, purely because, you know, taking a four-year degree to get credentialed in some domain that you're then supposed to work in for 40 years before you retire is completely out of date. The model of a university has not changed in 450 years. It's desperately broken. Deep research is fundamentally, incredibly important, so I think that's really killer.
The most interesting stat for me is that more than half the CEOs in Silicon Valley have a liberal arts degree
And I find that really interesting because the different models of how you think
drives creativity
in product design, etc.
So there's a vector there to be explored in terms of how to think about all this.
And Mo, my experience is that the majority of leaders and influential individuals out
of the Middle East are all coming to the US for their degrees.
What's the buzz there in the
Emirates? No, I think the reality is that I meet more MIT and Stanford graduates here than I do in
America most of the time. It is quite staggering, actually, how people- That's fantastic.
I mean, it is definitely, and I think it's definitely revived the top management of the
region very, very interestingly.
I wonder though if Salim's wish will come true, because I don't know if the university
systems would implode, but I definitely think our belief in universities would. And it really is quite an interesting
thing because four years in a world that's moving at X speed is very different than a world that's
moving at 10X speed. And that's what we're seeing now. So if everyone's going to entrepreneurship,
and everyone can use Lovable or Claude or whatever to write code or start businesses, and agents are everywhere.
You'll probably see entrepreneurship age go to 16 and 14
and you may see a very different world.
And I wonder.
Dave, do you wanna talk about that?
I mean, you've seen the shift in terms of the companies
that are becoming unicorns get a decade earlier.
Yeah.
Yeah, no, there's no doubt.
I mean, but I think the universities are about friendships
and the friendships turn into company formations
and a lot of the universities don't recognize
the degree to which that's the dominant factor
that's keeping them alive.
So if they wanna survive this transition,
they gotta embrace that as what they're delivering.
It's credentials and it's relationships between human beings. They're the dominant deliverable to the students. They need to be turned into entrepreneurial boot camps. Yeah.
Well, here we see another article: UAE to make ChatGPT Plus free to all of its citizens. You know, again, these are forward-looking moves to help. And AI is mandatory education now for six years or higher, I think.
Crazy.
The statistic here is really important, right?
The stat I heard was that a student with an AI is learning subjects between two to four
times faster than going to school.
And that's just going to overwhelm the than going to school. And that's
just gonna overwhelm the existing system very quickly. I think this is an awesome
move. Well also the concept of a curriculum which is, you know, we only
have so many teachers so we can only afford 12 subjects, 15 subjects. With AI
Assist you can afford 20,000, a million different subjects. So not only are the
students self-directing at their own pace, but they're also learning whatever
they think is most relevant to their path, which is so much more effective than
the old way.
So much better for everybody.
And hyper-personalized education, you know, your learning math focused on your favorite
sports star or movie star, your favorite, you know, scenarios and stories.
I'll close out with this provocative article that came out: around 5% of Thiel fellows have become billionaires, from Vitalik to Austin Russell. And reminding people, the Thiel Fellowship pays you to drop out of college. So what is this saying, that our most productive years
we're wasting in our university experience
instead of starting companies?
Fascinating thought.
Dave, I'll start with you.
Well, first of all, the Thiel Fellowship doesn't try to teach you anything.
It just selects you.
And so it shows you the degree to which the schools are not selecting. They're getting nowhere near a 5% unicorn rate coming out of the schools.
But Thiel, you know, given the abundance of big data that's out there, the Thiel Fellowship can just be a better
selection and application process that covers topics like,
are you self-motivated?
Are you high energy?
Can you recruit?
Do you think through these AI topics, you know,
those are all baked into that selection process.
And so it's very viable for these credentials
to replace the university degree
as the credential that everybody wants.
So it's something the school should really be aware of,
but it's not super hard to put together
an AI-assisted analytic that tries to predict
who's gonna succeed as an entrepreneur.
And that's all the Thiel Fellowship tries to do.
And that gives you a little bit of money
and encourages you, but it's really,
what they're really giving you is the credential.
Salim, do you wanna?
Yeah, I agree with David on this one,
and I think this is not a negative comment
on the university system.
I think the Thiel Fellowship selects for people
that are such outliers that the university system
doesn't kind of accommodate for them anyway.
I think there's a systemic issue on the university side,
which we've talked about already.
I note that more than half of CEOs in Silicon Valley have
a liberal arts degree because the different ways of thinking,
how to think are really important in
product strategy and company strategy and so on.
If you're doing, for example, a master's degree in neuroscience today, you're out of date by the time you finish your degree, because computational neuroscience is totally taking over the field.
So undergrad and master's degrees are kind of essentially becoming irrelevant compared to learning with AI. If you're a student learning with AI, you're moving between two to four times faster than being in school.
And so all sorts of things will have this dynamic. And as I mentioned, you know, hopefully the university system implodes in the next five years before I have to pay for my kid to go to school when he's 18. I do think education and health care are the two massive industries
and expenditures for people that are going to be completely disintermediated, disrupted, democratized and demonetized, I hope.
All right.
A lot of amazing stuff.
Let's wrap around the horn here.
Thoughts on today's conversations.
Dave, can we start with you?
Yeah, Mo, it's been fantastic getting your perspective.
I think that-
Come on.
I don't know what you're saying.
Well, look, we're building toward an intentional future from here forward.
It's what we decide to do and what we decide to build.
So I think today we got a really good deep understanding of some of the risks and things
we need to start planning for.
But I do feel like everything is solvable if we, if we work on it and you know,
we're on exponential time now.
So we have a very limited window of time to work on it.
So that was one of my great takeaways from today's pod,
but much appreciated perspective.
Yeah.
Mo, how do you wrap up your thoughts from today?
Yeah, I think just like it is a singularity
and there is an upside utopia and a downside
dystopia, I think we should equally weigh our views of the optimistic possibilities
and the dangers or risks that we have to address.
I have to say I'm extremely, extremely excited about the scientific breakthroughs that we
can see from AI in the next couple of years.
And I think the most important topic, even though we didn't cover as much of it today,
but we mentioned it, is Alpha Evolve and the whole idea of self-evolving AIs in my mind.
This probably is the top topic to keep your eyes on in the next 12 months.
Salim, close us out, buddy.
I think we should have a whole episode just
on Alpha Evolve. I think it's such an important topic technically but also
philosophically. I go back to the kind of standard basis for my optimism that
technology has always been a major driver of progress in the world. It might
be the only major driver of progress we've ever seen, as Ray Kurzweil mentions a lot. And now we have AI uplifting and enabling all of these other technologies. So that's the reason for the huge optimism.
Yeah, again, I've said this before: I think we're holding two potential futures for humanity in superposition. One is an extraordinary future of abundance, upleveling of 8 billion humans on the planet, becoming a multi-planetary species.
That's Star Trek universe.
The other one is not quite as pleasant.
We'll call it the dystopian future.
And Dave, one of the things that you said, I want to just echo is we have the ability
to create an intentional future.
This future is not happening to us.
We have the ability to guide where it goes.
And I think all the entrepreneurs listening today, it's the most important thing that we can do.
What is the vision that you want to create in the world?
And you have the tools now, access to capital, access to compute, access to intelligence to go make that future happen.
And I think it's not ours to abdicate to somebody else. I think we need to take action. So, a lot happening. I respect you all deeply, love you all, and am so excited to be on this journey.
I look forward to seeing you guys in a week or so. Thanks for having us.
Yeah, awesome conversation.
Thank you guys for putting up with me.
Hey there, this is Salim bouncing through SFO today.
I hope you enjoyed that episode.
It's clear from the pace of change
that every organization needs to change.
On June 19th, we're gonna be having a two-hour workshop
for $100 on how to turn yourself into an ExO.
Come join us, it's the best $100 you'll spend all year. We've had rave reviews of these, we do it about
monthly, we restrict it to a few dozen people so it's a very intimate affair
and we'll be actually going through actual case studies and what you
specifically can do. Don't miss it, come along June 19th the link is below. See
you there.