Moonshots with Peter Diamandis - Ex-Google CEO: What Artificial Superintelligence Will Actually Look Like w/ Eric Schmidt & Dave Blundin | EP #183
Episode Date: July 17, 2025. Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends Eric Schmidt is the former CEO of Google. Dave Blundin is the founder of Link Ventures. – Offers for my audience: Test what's going on inside your body at https://qr.diamandis.com/fountainlifepodcast Reverse the age of my skin using the same cream at https://qr.diamandis.com/oneskinpod – Connect with Eric: X: https://x.com/ericschmidt His latest book: https://a.co/d/fCxDy8P Learn about Dave's fund: https://www.linkventures.com/xpv-fund Connect with Peter: X Instagram Listen to MOONSHOTS: Apple YouTube – *Recorded on June 5th, 2025 *Views are my own thoughts; not Financial, Medical, or Legal Advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
When do you see what you define as digital super intelligence?
Within 10 years.
The AI's ability to generate its own scaffolding is imminent.
Pretty much sure that that will be a 2025 thing.
We certainly don't know what super intelligence will deliver, but we know it's coming.
And what do people need to know about that?
You're going to have your own polymath.
So you're going to have the sum of Einstein and Leonardo
da Vinci in the equivalent of your pocket.
Agents are going to happen.
This math thing is going to happen.
The software thing is going to happen.
Everything I've talked about is in the positive domain.
But there's a negative domain as well.
It's likely, in my opinion, that you're going to see...
Now that's the moonshot, ladies and gentlemen.
Everybody, welcome to Moonshots. I'm here live with my moonshot mate, Dave Blundin. We're here in our Santa Monica studios
and we have a special guest today, Eric Schmidt, the author of Genesis.
We'll talk about China, we're going to talk about digital super intelligence, and we'll talk about what people should be thinking about over the next 10 years.
And we're talking about the guy who has more access to more actionable information than
probably anyone else you could think of.
So it should be pretty exciting.
Incredibly brilliant.
All right.
Stand by for a conversation with Eric Schmidt, past CEO of Google, an
extraordinary investor and a thinker in this field of AI.
Let's do it.
Eric, welcome back to Moonshots.
It's great to be here with you guys.
Thank you.
It's been a long road since I first met you at Google.
I remember our first conversations were fantastic.
It's been a crazy month in the world of AI, but I think every month from here is going
to be a crazy month.
And so I'd love to hit on a number of subjects and get your take on them.
I want to start with probably the most important point that you've made recently that got a
lot of traction, a lot of attention, which is that AI is underhyped, when the rest of the world is either confused, lost, or thinks it's not impacting us. We'll get into it in more detail, but quickly, what's the most important point to make there?
AI is a learning machine. And in network effect businesses, when the learning machine learns
faster, everything accelerates. It accelerates to its natural limit. The natural limit is electricity.
Hmm. Not chips.
Electricity.
Really? Okay.
So that gets me to the next point here, which is a discussion on AI and energy. We saw Meta recently announce that they signed a 20-year nuclear contract with Constellation Energy. We've seen Google, Microsoft, Amazon, everybody buying up nuclear capacity right now. It's got to be strange that private companies are taking into their own hands what was a utility function before.
Well, just to be cynical, I'm so glad those companies plan to be around for the 20 years that it's going to take to get the nuclear power plants built. In my recent testimony, I talked about how the current expected need for the AI revolution in the United States is 92 gigawatts of additional power. For reference, one gigawatt is one big nuclear power station, and essentially none are being started now. And there have been, what, two built in the last 30 years.
There is excitement that there's an SMR, small modular reactor, coming in at 300 megawatts,
but it won't start till 2030.
As important as nuclear is, both fission and fusion, it's not going to arrive in time to
get us to what we need as a globe to deal with our many problems and the many opportunities
that are before us.
Do you think, so if you look at the sort of three year timeline toward AGI, do you think
if you started a fusion reactor project today that won't come online for five, six, seven
years, is there a probability that the AGI comes up with some other breakthrough fusion
or otherwise that makes it irrelevant before it even gets online?
A very good question.
We don't know what artificial general intelligence
will deliver, and we certainly don't know
what super intelligence will deliver,
but we know it's coming.
So first we need to plan for it,
and there's lots of issues,
as well as opportunities for that.
But the fact of the matter is that the computing power we need now is gonna come
from traditional energy
suppliers in places like the United States and the Arab world and Canada and the Western world.
And it's important to note that China has lots of electricity. So if they get the chips,
it's going to be one heck of a race. Yeah.
They've been scaling it at two or three times US use. It's been flat for how long in terms of energy production?
From my perspective, infinite. In fact, electricity demand declined for a while, as have overall energy needs, because of conservation and the like. But the data center story is the story of the energy people. And you sit there and you go, how could these data centers use so much power?
Well, and especially when you think of how little power our brains use.
Well, these are our best approximation in digital form
of how our brains work.
But when they start working together,
they become super brains.
The promise of a super brain with a one gigawatt,
for example, data center is so palpable.
People are going crazy. And by the way, the economics of these things are unproven. How much revenue do you have to have to support $50 billion in capital? Well, if you depreciate it over three years or four years, you need $10 or $15 billion of capital spend per year just to handle the infrastructure. Those are huge businesses requiring huge revenue, which in most places is not there yet.
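A back-of-envelope sketch of that arithmetic (the figures are the ones from the conversation; straight-line depreciation is assumed):

```python
# Back-of-envelope: annual spend implied by a $50B data-center build-out.
# Straight-line depreciation over the horizons Schmidt cites; illustrative only.
capex = 50e9  # $50B of capital

for years in (3, 4):
    annual = capex / years
    print(f"Depreciated over {years} years: ${annual / 1e9:.1f}B per year, "
          f"before power, staff, or any profit margin")
# -> ~$16.7B/year over 3 years, ~$12.5B/year over 4,
#    in line with the $10-15B-per-year range quoted.
```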
I'm curious, there's so much capital being invested and deployed right now in SMRs,
in nuclear, bringing Three Mile Island back online, and fusion companies. Why isn't there an equal amount of capital going into making the entire
chipset and compute just a thousand times more energy efficient?
There is a similar amount of capital going in. There are many, many startups that are working on
non-traditional ways of doing chips. The transformer architecture, which is what
is powering things today, has new variants. Every week or so I get a pitch from a new startup
that's going to build inference-time, test-time computing chips, which are simpler and optimized for inference.
It looks like the hardware will arrive
just as the software needs expand.
And by the way, that's always been true.
We old timers had a phrase,
"Grove giveth and Gates taketh away."
So Intel would improve the chipsets, right?
Way back when.
Yeah.
And the software people would immediately use it all.
Yeah.
And suck it all up.
Higher level code.
I have no reason to believe that that law, the Grove and Gates law, has changed.
If you look at the gains in like the Blackwell chip
or the MI350 chip from AMD,
these chips are massive supercomputers.
And yet, according to the people building them, we need hundreds of thousands of these chips just to make a data center work. That shows you the scale of what these kinds of thinking algorithms require.
Now you sit there and you go, what could these people possibly be doing with all these chips?
I'll give you an example.
We went from language to language, which is what ChatGPT can be understood as, to reasoning and thinking. If you want to look at an OpenAI example, look at OpenAI o3, which does forward and back reinforcement learning and planning. Now, the cost of doing the forward and back is many orders of magnitude beyond just answering your question for your PhD thesis or your college paper. That planning, the back and forth, is computationally very, very expensive.
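To get a feel for why, here's a toy cost model of search-style planning versus a single forward pass (the branching factor, depth, and token counts are illustrative assumptions, not figures from the episode):

```python
# Toy cost model: tree-style planning vs. a one-shot answer.
# All numbers are illustrative assumptions.
tokens_per_step = 500   # tokens generated per reasoning step
branching = 8           # candidate continuations explored at each step
depth = 6               # planning steps before committing to an answer

one_shot = tokens_per_step
planning = sum(tokens_per_step * branching ** d for d in range(1, depth + 1))

print(f"One-shot answer: {one_shot:,} tokens")
print(f"Search (b={branching}, d={depth}): {planning:,} tokens, "
      f"~{planning // one_shot:,}x the compute")
# The multiplier grows exponentially with depth, which is the
# "many orders of magnitude" being described.
```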
So with the best energy and the best technology today,
we are able to show evidence of planning. Many people believe that if you combine planning and very deep memories, you can
build human level intelligence. Now of course it will be very expensive to
start with, but humans are very very industrious and furthermore the great
future companies will have AI scientists, that is, non-human scientists, and AI programmers, as opposed to human programmers,
who will accelerate their impact.
So if you think about it, going back to you: you're the author of the abundance thesis, as best I can tell, Peter.
You've talked about this for 20 years.
You saw it first.
It sure looks like, if we get enough electricity, we can generate the power, in the sense of intellectual power, to generate abundance along the lines that you predicted two decades ago.
Every week I study the 10 major tech meta trends that will transform industries over
the decade ahead.
I cover trends ranging from humanoid robots, AGI, quantum computing, transport, energy,
longevity and more.
No fluff, only the important stuff that matters, that impacts our
lives and our careers. If you want me to share these with you, I write a newsletter twice a week
sending it out as a short two-minute read via email. And if you want to discover the most
important meta trends 10 years before anyone else, these reports are for you. Readers include
founders and CEOs from the world's most disruptive companies and entrepreneurs
building the world's most disruptive companies.
It's not for you if you don't want to be informed of what's coming, why it matters,
and how you can benefit from it.
To subscribe for free, go to diamandis.com/metatrends. That's diamandis.com/metatrends, to gain access to trends 10-plus years before anyone else.
Let me throw some numbers at you just to reinforce what you said. We have a couple of companies in the lab that are doing voice customer service and voice sales with the new models, just as of the last month. And the value of these conversations is ten to a thousand dollars, and the cost of the compute, maybe two or three concurrent GPUs is optimal, is like 10, 20 cents. And so they would buy massively more compute to improve the quality of the conversation. There aren't even close to enough. We count about 10 million concurrent phone calls that should move to AI in the next year or so.
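A quick sanity check on those unit economics, using only the rough figures Dave cites (the per-call numbers are his, not mine):

```python
# Unit economics of an AI voice call, using the rough figures from the episode.
value_per_call = (10, 1_000)   # dollars of value per conversation
compute_cost = 0.20            # dollars of GPU time per call (~2-3 concurrent GPUs)

low, high = (v / compute_cost for v in value_per_call)
print(f"Value-to-compute ratio: {low:,.0f}x to {high:,.0f}x")
# -> 50x to 5,000x, which is why they'd happily buy far more
#    compute per call to raise conversation quality.
```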
And my view of that is that's a good tactical solution and a great business. Let's look at
other examples of tactical solutions that are great businesses. And I obviously have a conflict of
interest talking about Google because I love it so much. So with that in mind, look at Google's strength in GCP now, Google's cloud product, where they have a complete, fully served enterprise offering for essentially automating your company with AI.
And the remarkable thing, and this to me is shocking, is that in an enterprise you can write the tasks that you want, and then, using something called the Model Context Protocol, you can connect your databases to it, and the large language model can produce the code for your enterprise.
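As a minimal illustration of the pattern being described (a sketch only; the tool registry and function names here are hypothetical, not the actual Model Context Protocol SDK):

```python
# Illustrative sketch: expose a database as a "tool" an LLM can call,
# then hand the model a business task. Names and schema are hypothetical.
import sqlite3

def query_orders(sql: str) -> list[tuple]:
    """Hypothetical tool: run read-only SQL against the orders database."""
    with sqlite3.connect("orders.db") as conn:
        return conn.execute(sql).fetchall()

TOOLS = {
    "query_orders": "Run read-only SQL against the orders database.",
}

task = "Find customers with orders more than 30 days late and draft reminders."
# In a real deployment, the tool descriptions and the task are sent to the
# model, which decides which tools to call and writes the connecting code,
# replacing the interstitial middleware layer Schmidt says is now in trouble.
prompt = f"Task: {task}\nAvailable tools: {TOOLS}"
print(prompt)
```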
Now, there are a hundred thousand enterprise software companies, middleware companies, that grew up in the last 30 years that I've been working on this, that are all now in trouble, because that interstitial connection is no longer needed. And of course, they'll have to change as well. The good news for them is that enterprises make these changes very slowly.
If you built a brand new enterprise architecture for ERP and MRP, you would be highly tempted to not use any of the ERP or MRP suppliers, but instead use open source libraries, use BigQuery or the equivalent from Amazon, which is Redshift, and essentially build that architecture yourself. It gives you infinite flexibility, and the computer system writes most of the code. Now, programmers don't go away at the moment. It's pretty clear
that junior programmers go away, the sort of journeymen, if you will, of the stereotype,
because these systems aren't good enough yet to automatically write all the code.
They need very senior computer scientists, computer engineers who are watching it.
That will eventually go away.
One of the things to say about productivity, and I call this the San Francisco consensus
because it's largely the view of people who operate in San Francisco, goes something like
this.
We're just about to the point where we can do two things
that are shocking.
The first is we can replace most programming tasks
by computers, and we can replace most mathematical tasks by computers.
Now you sit there and you go, why?
Well, if you think about programming and math,
they have limited language sets compared to human language.
So they're simpler computationally and they're scale-free. You can just do it and do it and do
it with more electricity. You don't need data. You don't need real-world input. You don't need
telemetry. You don't need sensors. So it's likely, in my opinion, that you're going to see world-class mathematicians emerge in the next year that are AI-based, and world-class programmers within the next one or two years. When those things are deployed at scale, remember, math and programming are the basis of kind
of everything, right? It's an accelerant for physics, chemistry, biology, material science. So going back to things like
climate change, can you imagine if we, and this goes back to your original argument, Peter,
imagine if we can accelerate the discoveries of the new materials that allow us to deal with
a carbonized world. Yeah.
Right. It's very exciting.
Love to drill in.
Okay. I just want to hit this because it's important.
The potential for there to be, I don't want to use the word PhD-level, other than thinking in terms of research, PhD-level AIs that can basically attack any problem and solve it, and solve math, if you would, and physics. This idea of an AI intelligence explosion, Leopold put that at like '26, '27, heading towards digital super intelligence in the next few years. Do you buy that timeframe?
So again, I consider that to be the San
Francisco consensus.
I think the dates are probably off by one and a half or two times,
which is pretty close.
So a reasonable prediction is that we're going to have specialized
savants in every field within five years.
That's pretty much in the bag as far as I'm concerned.
Sure.
And here's why: you have this number of humans, and then you add a million AI scientists to do something, and your slope, your rate of improvement, goes like this.
We should get there.
The real question is once you have all these savants, do they unify?
Do they ultimately become superhuman? The term we're using is superintelligence, which implies intelligence beyond the sum of what humans can do. The race to superintelligence is incredibly important, because imagine what a superintelligence could do that we ourselves cannot imagine, right? It's so much smarter than we are. And it has huge proliferation issues, competitive issues,
China versus the US issues, electricity issues, so forth.
We don't even have the language for the deterrence aspects
and the proliferation issues of these powerful models.
Or the imagination.
Totally agree.
In fact, it's one of the great flaws actually
in the original conception.
Remember Singularity University and Ray Kurzweil's books and everything? We kind of drew this curve of rat level intelligence,
then cat, then monkey, and then it hits human, and then it goes super intelligent. But it's
now really obvious when you talk to one of these multilingual models that's explaining
physics to you that it's already hugely super intelligent within its savant category. And so,
Demis Hassabis keeps redefining AGI day as when it can discover relativity the same way Einstein did, with the data that was available up until that date. That's when we have AGI.
But long before that.
Yeah. So, I think it's worth getting the timeline right.
Yeah.
So, the following things are baked in. You're going to have an agentic revolution where agents are
connected to solve business processes, government processes, and so forth. They will be adopted
most quickly in companies that have a lot of money and a lot of time latency issues at stake.
They will be adopted most slowly in places like government, which do not have an incentive for innovation, and which are fundamentally jobs programs and redistribution-of-income kinds of programs.
So call it what you will. The important thing is that there will be a tip of the spear
in places like financial services, certain kind of biomedical things, startups and so forth, and that's the place to watch. So all of that is going to happen.
The agents are gonna happen.
This math thing is gonna happen.
The software thing is gonna happen.
We can debate the rate
at which the biological revolution will occur,
but everyone agrees that it's coming soon, that we're very close
that we're very close
to these major biological understandings.
In physics, you're limited by data,
but you can generate it synthetically. There are groups, which I'm funding, that are generating physics models, essentially models that can approximate algorithms that are otherwise incomputable. So in other words, you have essentially a foundation model that can answer the question well enough for the purposes of doing physics without having
to spend a million years doing the computation of quantum chromodynamics
and things like that.
All of that's gonna happen.
The next questions have to do with, what is the point at which this becomes a national emergency?
And it goes something like this.
Everything I've talked about is in the positive domain,
but there's a negative domain as well. The ability for biological attacks, obviously cyber attacks.
Imagine a cyber attack that we as humans cannot conceive of, which means there's no defense for
it because no one ever thought about it, right? These are real issues. A biological attack,
you take a virus. I won't obviously go into the
details, you take a virus that's bad and you make it undetectable by some changes in its structure,
which again, I won't go into the details. We released a whole report at the national level
on this issue. So at some point, the government, and it doesn't appear to understand this now,
is going to have to say, this is very big, because it affects national security, national economic strength, and so
forth. Now China clearly understands this, and China is putting an enormous amount of money into
this. We have slowed them down by virtue of our chips controls, but they've found clever ways
around this. There are also proliferation issues.
Many of the chips that they're not supposed to have, they seem to be able to get.
And more importantly, as I mentioned, the algorithms are changing.
And instead of having these expensive foundation models by themselves, you have continuous updating, which is called test-time training. That continuous updating appears to be capable of being done with less powerful chips. So there
are so many questions that I think we don't know. We don't know the role of open source,
because remember open source means open weights, which means everyone can use it. A fair reading
of this is that every country that's not in the West will end up using open source because they'll
perceive it as cheaper, which transfers leadership in open source from America to China. That's a big deal, right, if that occurs. How much longer do
the chip bans, if you will, hold, and how long before China can answer? What are the effects
of the current government's policies of getting rid of foreigners and foreign investment?
What happens with the Arab data centers, assuming they
work? And I'm generally supportive of them. If those things are then misused to help train
Chinese models, the list just goes on and on. We just don't know.
Okay. Can I ask you probably one of the toughest questions? I don't know if you saw Mark Andreessen.
He went and talked to the Biden administration, the past administration, and said, how are we going
to deal with exactly what you just talked about,
chemical and biological and radiological and nuclear risks
from big foundation models being operated by foreign countries?
The Biden answer was,
we're going to keep it into the three or four big companies like Google,
and we'll just regulate them.
Mark was like, that is a surefire way to lose the race with China, because all innovation comes from a startup that you didn't anticipate; you know, that's just American history, and you're cutting off the entrepreneur from participating in this.
So as of right now, with the open source models, the entrepreneurs are in great shape. But if you think about the models getting crazy smart a year from now, how are we going to have the balance between startups actually being able to work with the best
technology, but proliferation not percolating to every country in the world? Again, a set of unknown
questions and anybody who knows the answer to these things is not telling the full truth. The
doctrine in the Biden administration was called 10 to the
26 flops. It was a consensus point above which the models were considered powerful enough to cause some damage. So the theory was that if you stayed below 10 to the 26, you didn't need to be regulated, but if you were above that, you did. And the proposal in the Biden administration was to regulate both the open source and the closed source.
Okay? That's the summary.
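For context on how a run gets measured against that line, training compute is commonly estimated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens; the model sizes below are hypothetical:

```python
# Rule-of-thumb training compute: FLOPs ~= 6 * parameters * training tokens.
# Checks hypothetical runs against the 1e26-FLOP threshold discussed above.
THRESHOLD = 1e26

runs = {
    "70B params on 15T tokens": (70e9, 15e12),
    "1T params on 30T tokens":  (1e12, 30e12),
}

for name, (params, tokens) in runs.items():
    flops = 6 * params * tokens
    side = "above" if flops > THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} 1e26)")
# -> ~6.3e24 (below) and ~1.8e26 (above): only the largest
#    frontier runs would cross into the regulated tier.
```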
That, of course, has been ended by the Trump administration.
They have not yet produced their own thinking in this area.
They're very concerned about China and it getting ahead.
So they'll come out with something.
From my perspective, the core questions are the following.
Will the Chinese, even with chip restrictions, be able to use architectural changes that allow them to build models as powerful as ours?
And let's assume they're government funded.
That's the first question.
The next question is how will you raise $50 billion for your data center if your product
is open source? In the American model, part of the reason these models are closed is that the
business people and the lawyers correctly are saying, I've got to sell this thing because I've
got to pay for my capital. These are not free goods. And the US government correctly is not
giving $50 billion to these companies. So we don't know that. To me, the key thing to watch is DeepSeek. A week or so ago, Gemini 2.5 Pro got to the top of the leaderboards in intelligence. Great achievement for my friends at Gemini. A week later, DeepSeek comes in slightly better than Gemini, and DeepSeek is trained on the existing hardware that's in China, which includes stuff that's been pilfered and some of the Huawei Ascend chips and a few others. What happens?
Now the US people say, well, you know, the DeepSeek people cheated, and they cheated by using a technique called distillation, where you take a large model, you ask it 10,000 questions, you get its answers, and then you use that as your training material. So the US companies will have to figure out a way to make sure that the proprietary information they've spent so much money on does not get leaked into these open source things. I just don't know. With respect to nuclear, biological, chemical, and so forth issues, the US companies are doing a really good job of looking for that.
There's a great concern, for example, that nuclear information would leak into these
models as they're training without us knowing it.
And by the way, that's a violation of law. Oh, really? And the whole nuclear information thing, there's no free speech in that world, for good reasons, and there's no fair use and copyright and all that kind of stuff. It's illegal to do it. And so they're doing a really,
really good job of making sure that that does not happen. They also put in very significant
tests for biological information and certain kinds of cyber attacks. What happens there? Is there
incentive to continue, especially if it's not required by law? The government has just gotten
rid of the safety institutes that were in place in Biden and are replacing it by a new term,
which is largely a safety assessment program, which is
a fine answer. I think collectively, we in the industry just want the government at the secret
and top secret level to have people who are really studying what China and others are doing.
You can be sure that China really has very smart people studying what we're doing.
We at the secret and top secret level should have the same thing.
Have you read the AI 2027 paper?
I have.
And so for those listening who haven't read it, it's a future vision of the US and China racing towards AI. And at some point, the story splits into two branches: we're going to slow down and work on alignment, or we're going full out, and, spoiler alert, in the race to infinity, humanity vanishes.
So the right outcome will ultimately be some form of deterrence and mutually assured destruction.
I wrote a paper with two other authors, Dan Hendrycks and Alexandr Wang, where we named it Mutual Assured AI Malfunction.
And the idea goes something like this. You're the United States, I'm China, and you're ahead of me. At some point you, Peter, cross a line, and I, China, go, this is unacceptable. At some point it becomes a massive amount of compute, something you're doing that affects my sovereignty. It's not just words and yelling and occasionally shooting down a jet. It's a real threat to the identity of my country, my economy, what have you.
Under this scenario, I would be highly tempted
to do a cyber attack to slow you down, okay?
In mutually assured malfunction, if you will,
we have to engineer it so that you have the ability
to then do the same thing to me.
And that causes both of us to be careful
not to trigger the other.
That's what mutual assured destruction is.
That's our best formulation right now.
We also recommend in our work, and I think it's very strong, that the government require that we know where all the chips are.
And remember the chips can tell you where they are because they're computers.
And it would be easy to add a little crypto thing, which would say, yeah, here I am, and this is what I'm doing. So knowing where the chips are, knowing where the training runs are, and knowing where these fault lines are is very important.
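A minimal sketch of that "little crypto thing", assuming a key provisioned at manufacture (the fields and scheme here are illustrative assumptions, not any real attestation protocol):

```python
# Illustrative sketch of a signed "here I am" heartbeat from an AI accelerator.
# The key provisioning and report fields are assumptions, not a real spec.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetically burned into the chip

def heartbeat(chip_id: str, location: str, workload: str) -> dict:
    payload = {
        "chip_id": chip_id,
        "location": location,    # e.g., a data-center identifier
        "workload": workload,    # e.g., "training run 42"
        "timestamp": int(time.time()),
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return payload

# A verifier holding the same key can confirm the report wasn't forged,
# which is the "knowing where the chips are" part of the recommendation.
print(heartbeat("GPU-0001", "Memphis-DC-1", "training"))
```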
Now, there are a whole bunch of assumptions in the scenario that I described. The first is that there is enough electricity. The second is that there is enough power. The third is that the Chinese have enough electricity, which they do, and enough computing resources, which they may or may not have.
Or may have in the future.
I mean, may have in the future. And also, I'm asserting that everyone arrives at this eventual state of superintelligence at roughly the same time. Again, these are debatable points.
But the most interesting scenario is we're saying it's 1938, the letter has come
from Einstein to the president, and we're having a conversation. And we're saying, well, how does
this end? Okay. So if you were so brilliant in '38, what you would have said is this ultimately ends
with us having a bomb, the other guys having a bomb, and then we're gonna have one heck of a negotiation
to try to make sure that we don't end up
destroying each other.
And I think the same conversation needs to get started now,
well before the Chernobyl events,
well before the buildups.
Can we just take that one more step
and don't answer if you don't want to,
but if it was 1947, 1948,
so before the Cold War really took off.
And you say, well, that's similar to where we are
with China right now.
We have a competitive lead,
but it may or may not be fragile.
What would you do differently in 1947, 1948, or what would Kissinger have done differently in 1947, 1948, and 1949 than what we did?
You know, I wrote two books with Dr. Kissinger
and I miss him very much.
He was my closest friend.
And Henry was very much a realist, in the sense that when you look at his history, in roughly '37, '38, he and his family, who were Jewish, were forced to emigrate from Germany because of the Nazis. And he watched the entire world that he'd grown up with as a boy be destroyed by the Nazis
and by Hitler. And then he saw the conflagration that occurred as a result. And I can tell you
that whether you like him or not, he spent the rest of his life trying to prevent that from happening again.
So we are today safe because people like Henry saw the world fall apart. So I think from my perspective, we should be very careful in our language and our strategy to not start that
process. Henry's view on China was different from that of other China scholars. His view was that
we shouldn't poke the bear, that we shouldn't talk about Taiwan too much, and we let China
deal with her own problems, which were very significant. But he was worried that we or
China in a small way would start World War III in the same way that World War I was started.
You remember that World War I started with essentially a small
geopolitical event, which was quickly escalated for political reasons on all sides. And then the
rest was a horrific war, the war to end all wars at the time. So we have to be very, very careful
when we have these conversations, not to isolate each other. Henry started a number of what are called track two dialogues, and I'm part of one of them, to try to make sure we're talking to each other. And so somebody who's a
hardcore person would say, well, you know, we're Americans and we're better and so forth. Well,
I can tell you, having spent lots of time on this, the Chinese are very smart, very careful, capable, very much peers. And if you're confused about that, again, look at the arrival of DeepSeek.
A year ago, I said they were two years behind.
I was clearly wrong.
With enough money and enough power, they're in the game.
Let me actually drill in just a little bit more on that, because I think one of the reasons DeepSeek caught up so quickly is that it turned out that inference time generates a lot of IQ, and I don't think anyone saw that coming. And inference time is a lot easier to catch up on. And also, if you take one of our big open source models and distill it, and then make it a specialist like you were saying a minute ago, and then you put a ton of inference-time compute behind it, it's a massive advantage and also a massive leak of capability within CBRN, for example, that nobody anticipated. And CBRN, remember, is chemical, biological, radiological, and nuclear.
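A minimal sketch of the distillation pattern being described here (the teacher query and fine-tuning step are stand-ins; any teacher/student pair would do):

```python
# Minimal sketch of distillation: harvest a teacher model's answers, then
# fine-tune a smaller student on them. Names and APIs are stand-ins.
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    answer: str

def ask_teacher(prompt: str) -> str:
    # Stand-in for querying a large proprietary model's API.
    return f"<teacher answer to: {prompt}>"

# "You take a large model and you ask it 10,000 questions..."
prompts = [f"question {i}" for i in range(10_000)]
dataset = [Example(p, ask_teacher(p)) for p in prompts]

# "...then you use that as your training material."
def fine_tune_student(data: list[Example]) -> None:
    ...  # a standard supervised fine-tuning loop over the harvested pairs

fine_tune_student(dataset)
```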
Let me rephrase what you said.
If the structure of the world in five to 10 years is 10 models, and I'll make some numbers up: five in the United States, three in China, two elsewhere, and those models are multi-gigawatt data centers, they will all be nationalized in some way.
In China, they will be owned by the government.
The stakes are too high.
In my military work, one day I visited a place where we keep our plutonium. We keep our plutonium in a base that's inside of another base, with even more machine guns and even more specialization, because the plutonium is so interesting and obviously very dangerous. And I believe there are only one or two such facilities that we have in America.
So in that scenario, these data centers will
have the equivalent of guards and machine guns
because they're so important.
Now, is that a stable geopolitical system?
Absolutely.
You know where they are.
President of one country can call the other,
they can have a conversation, you know, they can agree on what they agree on and so forth.
But let's say it is not true. Let's say that the technology improves, again, unknown,
to the point where the kind of technologies that I'm describing are implementable on the equivalent of a small server.
Then you have a humongous data center proliferation problem.
And that's where the open source issue is so important, because those servers, which will proliferate throughout the world, will all be running open source.
We have no control regime for that.
Now I'm in favor of open source.
As you mentioned earlier with Mark Andreessen,
that open competition and so forth tends to allow people to run ahead. In defense of the proprietary companies, collectively they believe, as best I can tell, that the open source models
can't scale fast enough, because they need this heavyweight training. I'll give an example: Grok is trained on a single cluster. It was built in 20 days or so in Memphis, Tennessee, with 200,000 Nvidia GPUs. A GPU is about $50,000, so you can say it's about a $10 billion supercomputer in one building. It does one thing.
Right.
If that is the future, then we're okay,
because we'll be able to know where they are.
If in fact, the arrival of intelligence
is ultimately a distributed problem,
then we're gonna have lots of problems with terrorism,
bad actors, North Korea, poorly funded countries.
Which is my greatest concern, right?
China and the US are rational actors.
The terrorist who has access to this.
And I don't want to go all negative on this podcast.
It's an important thing to wake people up to the deep thinking you've done on this.
My concern is the terrorist who gains access.
And are we spending enough time and energy and are
we training enough models to watch them?
So first, the companies are doing this. There's a body of work happening now which can be understood as follows. You have a superintelligent model. Can you build a model that's not as smart as the student it's studying? There is a professor that's watching the student, but the student is smarter than the professor. Is it possible to watch what it does? It appears that we can.
It appears that there's a way, even if you have this rogue, incredible thing,
we can watch it and understand what it's doing and thereby control it.
Another example of where we don't know: it's very clear that these savant models will proceed.
There's no question about that.
The question is, how do we get the Einsteins? So there are two possibilities. One is to discover completely new schools of thought, which is the most exciting thing in the next few years.
And in our book Genesis, Henry and I and Craig talk about the importance of polymaths in
history.
In fact, the first chapter is on polymaths.
What happens when we have millions and millions of polymaths?
Very, very interesting. Okay. Now it looks like the great discoveries, the greatest
scientists and people in our history had the following property. They were experts in
something and they looked at a different problem and they saw a pattern in one area of thinking that they
could apply to a completely unrelated field. And they were able to do so and make a huge breakthrough.
The models today are not able to do that. So one thing to watch for is algorithmically,
when can they do that? This is generally known as the non-stationarity problem, because the reward functions in these models are fairly straightforward, you know, beat the human, answer the question, and so forth. But when the rules keep changing, is it possible to say the old rule can be applied to a new rule to discover something new? Again, the research is underway.
We won't know for years.
Peter and I were over at OpenAI yesterday,
actually, and we were talking to many people, but Noam Brown in particular.
I said the word of the year is scaffolding. He said, yeah, maybe the word of the month is scaffolding.
I was like, well, okay, why don't I step in there? He said, look, right now, if you try to get the AI to discover relativity or just some greenfield opportunity, it won't do it.
If you set up a framework, kind of like a lattice, like a trellis, the vine will grow on the trellis
beautifully, but you have to lay out those pathways and breadcrumbs. He was saying the AI's ability
to generate its own scaffolding is imminent. That doesn't make it completely self-improving. It's not Pandora's box,
but it's also much deeper down the path of creating an entire breakthrough in physics, or creating an entire feature-length movie, or these prompts that require 20 hours of consecutive inference-time compute. He was pretty much sure that that will be a 2025 thing, at least from their point of view.
So, recursive self-improvement is the general term for the computer continuing to learn. We've already crossed that, in the sense that these systems are now running and learning things, and they're learning from the way they themselves think, within limited functions.
When does the system have the ability to generate its own objective and its own questions? It does not have that today.
Yeah.
That's another sign. Another sign would be that the system decides to exfiltrate itself and takes steps to get itself away from the command and control system. That has not happened yet.
Gemini hasn't called you yet and said, hi, Eric.
But there are theoreticians who believe that the systems will ultimately choose that as a reward function, because they're programmed to continue to learn. Another one is access to weapons, right, and lying to get it. So each of these is a tripwire that we're watching. And again,
each of these could be the beginning of a mini Chernobyl event that would become part of
consciousness. I think at the moment, the US government is not focused
on these issues, they're focused on other things,
you can have opportunity, growth, so far it's all good,
but somebody's gonna get focused on this
and somebody's gonna pay attention to it
and it will ultimately be a problem.
A quick aside, you've probably heard me speaking
about fountain life before and you're probably wishing,
Peter, would you please stop talking about fountain life?
And the answer is no, I won't, because genuinely we're living through a healthcare crisis.
You may not know this, but 70% of heart attacks have no precedent, no pain, no shortness of
You don't feel cancer until stage 3 or stage 4, until it's too late.
But we have all the technology required to detect and prevent these diseases early at
scale.
That's why a group of us, including Tony Robbins, Bill Kapp, and Bob Hariri, founded
Fountain Life, a one-stop center to help people understand what's going on inside their bodies
before it's too late, and to gain access to the therapeutics to give them decades of
extra health span.
Learn more about what's going on inside your body from Fountain Life. Go to fountainlife.com
and tell them Peter sent you. Okay, back to the episode.
Can I clean up one kind of common misconception there? Because
I think it's a really important one. In the movie version of AI, you described, hey,
maybe there are 10 big AIs and five are in the US, three are in China and two are, one's not in Brussels probably, one is maybe
in Dubai.
Or Israel.
Israel, okay, there you go.
Somewhere like that.
Yeah.
In the movie version of this, if it goes rogue, the SWAT team comes in, they blow it up and
it's solved.
But in the actual real world, when you're using one of these huge data centers to create a superintelligent AI, the training process is 10^26, 10^28, or more flops. But then the final brain can be ported and run on four GPUs, eight GPUs, so a box about this size. And it's just as intelligent. And that's one of the beautiful things about it.
This is called stealing the weights.
Stealing the weights.
Exactly.
And the new, new thing is that weight file: if you have an innovation in inference-time speed, you can say, oh, same weights, no difference, distill it or just quantize it or whatever, but I made it a hundred times faster. Now it's actually far more intelligent than what you exported from the data center.
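Back-of-envelope arithmetic on why a trained frontier-scale brain fits in "a box about this size" (the parameter count and GPU memory are illustrative assumptions):

```python
# Back-of-envelope: weight-file size vs. GPU memory at different precisions.
# The 405B parameter count and 80 GB per GPU are illustrative assumptions.
import math

params = 405e9        # hypothetical 405B-parameter model
gpu_memory_gb = 80    # a modern 80 GB accelerator

for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    weights_gb = params * bytes_per_param / 1e9
    gpus = math.ceil(weights_gb / gpu_memory_gb)
    print(f"{name}: ~{weights_gb:,.0f} GB of weights -> ~{gpus} GPUs (weights only)")
# fp16 ~810 GB (~11 GPUs); int4 ~200 GB (~3 GPUs), before activations and
# KV cache: quantization is how the exported brain shrinks into a small box.
```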
But all of these are examples of the proliferation problem.
And I'm not convinced that we will hold these things in the 10 places.
And here's why.
Let's assume you have the ten, which is possible.
They will have subsets of models that are smaller, but nearly as intelligent.
And so the tree of knowledge, of systems that have knowledge, is not going to be ten and then zero.
It's going to be ten, a hundred, a thousand, a
million, a billion at different levels of complexity.
So the system that's on your future phone may be, you know, three orders of magnitude, four orders of magnitude smaller than the one at the very tippy top, but it will be very, very powerful.
You know, to exactly what you're talking about, there's some great research going on at MIT. It'll probably move to Stanford, just to be fair, it always does. But it's great research going on at MIT.
If you have one of these huge models and it's been trained on movies, it's been trained on
Swahili, a lot of the parameters aren't useful for this savant use case.
But the general knowledge and intuition is.
So what's the optimal balance between narrowing the
training data and narrowing the parameter set to be a
specialist without losing general, you know, learning?
So the people who are opposed to that view, and again,
we don't know, would say the following.
If you take a general purpose model and you
specialize it through fine tuning, it also becomes
more brittle.
Mm-hmm.
Their view is that what you do is you just make bigger and bigger and bigger models because
they're in the big model camp.
Right.
And that's why they need gigawatts of data centers and so forth.
And their argument is that that flexibility of intelligence that they are seeing will
continue. Dario wrote a piece, basically about machines, called Machines of Loving Grace. And he argued that there are three scaling laws at play.
The first one is what you know of, which is foundation model growth.
We're still on that.
The second one is a test time training law.
And the third one is a reinforcement
learning training law.
Training laws are where, if you just put in more hardware and more data, the models just get smarter in a predictable way. In his view, we're just at the beginning of the second and third ones.
That's why I'm sure our audience would be frustrated.
Why do we not know?
We don't know, right?
It's too new, it's too powerful.
And at the moment, all of these businesses are incredibly highly valued. They're growing incredibly quickly. The uses of them, I mentioned earlier, going back to Google,
the ability to refactor your entire workflow in a business is a very big deal.
That's a lot of money to be made there for all the companies involved.
We will see.
Eric, shifting the topic: one of the concerns people have in the near term, and people have been ringing alarm bells about, is jobs.
I'm wondering where you come out on this
and flipping that forward to education.
How do we educate our kids today in high school and college?
And what's your advice?
So on the first thing: Dario has gone on TV shows now speaking to significant white-collar job loss, and we're seeing obviously a multitude of different drivers and robots coming in.
How do you think about the job market over the next five years?
Let's posit that in 30 or 40 years, there'll be a very different employment and robotic-human interaction.
Or the definition of whether we need to work at all?
The definition of work, the definition of identity.
Let's just posit that.
Uh, and let's also posit that it will take 20 or 30 years for those things to
work through the economy of our world.
Now, in California and other cities in America, you can get on a Waymo taxi. With Waymo, it's 2025. The original work was done in the late nineties. The original challenge at Stanford was done, I believe, in 2004.
The DARPA Grand Challenge, it was 2004.
2004.
With Sebastian Thrun.
That's right. Won.
So more than 20 years from a visible demonstration to our ability to use it in daily life. Why?
It's hard, it's deep tech, it's regulated and all of that.
And I think that's going to be true, especially with robots that are interacting with humans. They're going to get regulated. You're not going to be wandering around and have the robots decide to slap you. Society is just not going to allow that sort of thing.
So in the shorter term, five or 10 years, I'm
going to argue that this is positive for jobs
in the following way.
Okay.
Um, if you look at the history of automation
and economic growth, automation starts with the
lowest status and most dangerous jobs and then
works up the chain.
So if you think about assembly lines and cars
and furnaces and all these sort of very, very dangerous jobs that our forefathers did, they don't do them anymore.
They're done by robotic solutions, and typically not a humanoid robot, but an arm.
So the world dominated by arms that are intelligent and so forth will automate those functions.
What happens to the people? Well, it turns out that the person who
was working with the welder who's now operating the arm has a higher wage and the company has
higher profits because it's producing more widgets. So the company makes more money and the person
makes more money, right? In that sense. Now you sit there and say, well, that's not true because
humans don't want to be retrained. Ah, but in the vision that we're talking about, every single person will have a computer assistant that's very intelligent, that helps them perform. And you take a person of normal intelligence or knowledge and you add, you know, a sort of accelerant, and they can get a higher
paying job. So you sit there and you go, well, why are there more jobs?
There should be less jobs.
That's not how economics works.
Economics expands because opportunities expand, profits expand, wealth expands, and so forth.
So there's plenty of dislocation, but in aggregate, are there more
people employed or fewer?
The answer is more people with higher paying jobs.
Is that true in India as well?
It will be.
And you picked India because India has a positive demographic outlook, although their birth rate is now down to 2.0.
That's good.
The rest of the world is choosing not to have children.
If you look at Korea, it's now down to 0.7 children per two parents.
China is down to one child per two parents.
It's evaporating.
Now, what happens in those situations? They completely automate everything, because it's the only way to increase national productivity. So the most likely scenario, at least in the next decade, is that it's a national emergency to use more AI in the workplace, to give people better paying jobs and create more productivity in the United States, because our birth rate has been falling.
And what happens is,
people have talked about this for 20 years.
If you have this conversation and you ignore demographics,
which is negative for humans,
and economic growth, which occurs naturally
because of capital investment,
then you miss the whole story.
Now, there are plenty of people who lose their
jobs, but there's an awful lot of people who have
new jobs.
And the typical simple example would be all those people who work in Amazon distribution centers and Amazon trucks; those jobs didn't exist until Amazon was created.
Right.
The number one job shortage right now in America is truck drivers. Why? Truck driving is a lonely, hard, low-paying, low-status job. Good people don't want it. They want a better paying job.
Going back to education, it's really a crime that our industry has not invented the following product. The product that I want us to build is a product that teaches every single human who wants to be taught, in their language, in a gamified way, the stuff they need to know to be a great citizen of their country, right? That can all be done on phones now. It can all be learned, and you can learn how to do it.
And why do we not have that product?
Right?
The investment in the humans of the world is the best return always. Knowledge and capability is always the right
answer.
I'm trying to get your opinion on this because you're so influential. So I've got about a thousand people in the companies where I'm the controlling shareholder, and I've been trying to tell them exactly what you just articulated. A lot of these people have been in the company
for 10, 15 years, they're incredibly capable and loyal,
but they've learned a specific white collar skill.
They worked really hard to learn the skill
and the AI is coming within no more than three years
and maybe two years.
And the opportunity to retrain
and have continuity is right now.
But if they delay, which everyone seems to be just,
let's wait and see.
And what I'm trying to tell them is if you wait and see,
you're really screwing over that employee.
So we are in wild agreement that this is going to happen.
And the winners are the ones who act now.
What's interesting is, when you look at innovation history, the biggest companies, who you would think would be the slowest, have economic resources that the little companies typically don't. They tend to eventually get there, right? So watch what the big companies do. Their CFOs and the people who measure things carefully, who are very, very intelligent, say, I'm done with that thousand-person engineering team that doesn't do very much. I want 50 people working in this other way, and we'll do something else with the other people.
And when you say big companies, we're thinking Google, Meta, we're not thinking, you know,
Big Bank hasn't done anything.
No, I'm thinking about big banks. When I talk to CEOs and I know a lot of them in traditional
industries, what I counsel them is you already have people in the
company who know what to do.
You just don't know who they are.
So call for a review of the best ideas to apply AI in our business.
And inevitably the first ones are boring, improve customer service,
improve call centers and so forth.
But then somebody says, you know, we could increase revenue if we built this product.
I'll give you another example.
There's this whole industry of people who work on user interfaces of one kind or another. I think user interfaces are largely going to go away, because if you think about it,
the agents speak English typically, or other languages, you can talk to them.
You can say what you want.
The UI can be generated.
So I can say, generate me a set of buttons
that allows me to solve this problem
and it's generated for you.
Why do I have to be stuck in what is called the WIMP interface, Windows, Icons, Menus, and Pulldowns, that was invented at Xerox PARC, right, 50 years ago? Why am I still stuck in that paradigm? I just want it to work.
Kids in high school and college now, any different recommendations for where they go?
When you spend any time in a high school, or I was at a conference yesterday where we had a drone challenge, and you watch the 15 year olds, they're going to be fine. They're just going to be fine.
It all makes sense to them and we're in their way.
But they're more than digital natives.
They get it.
They understand the speed.
It's natural to them.
They're also frankly, faster and smarter than we are.
Right.
That's just how life works.
I'm sorry to say.
So we have wisdom, they have intelligence, they win, right?
So in their case, I used to think the right answer was to go into biology. I now actually think
going into the application of intelligence to whatever you're interested in is the best thing
you can do as a young person. Purpose driven. Any form of solution that you find interesting.
Most kids get into it for gaming reasons or
something, and they learn how to program very young.
So they're quite familiar with this.
I work at a particular university with undergraduates
and they're already doing different algorithms for reinforcement learning as sophomores.
This shows you how fast this is happening at their
level.
They're going to be just fine.
They're responding to the economic signals, but
they're also responding to their purpose.
Right?
So an example would be you care about climate,
which I certainly do.
If you're a young person, why don't you figure out a way to simplify the climate science, to use simple foundation models to answer these core questions? Why don't you figure out a way to use these powerful models to come up with new materials, right, that allow us again to address the carbon challenge? And why don't you work on energy systems, to have better and more efficient energy sources that are less carbon-intensive? You see my point.
Yeah, I know, I've noticed, because I have kids of exactly that era, and there's a very clear step
function change largely attributable, I think to
Google and Apple, that they have the assumption that
things will work.
And if you go just a couple of years older during the
WIMP era, like you described it, which I'll
attribute more to Microsoft, the assumption is
nothing will ever work.
Like if I try to use this thing, it's going to crash.
What's also interesting is that in my career, I used to give these speeches about the internet, which I enjoyed, where I said, you know, the great thing about the internet is that it has an off button, and you can turn it off and actually have dinner with your family. And then you can turn it on after dinner.
This is no longer possible.
So the distinction between the real world and the digital world has become confusing, but none of us are offline for any significant period of time.
And indeed the reward system in the world has now caused us to not even be able to
fly in peace, right?
Drive in peace, take a train in peace.
Starlink is everywhere.
Right.
And that ubiquitous connectivity has some negative impact in terms of psychological stress, loss of emotional and physical health, and so forth. But the benefit of that productivity is without question.
Every day I get the strangest compliment.
Someone will stop me and say, Peter, you have such nice skin.
Honestly, I never thought I'd hear that from anyone.
And honestly, I can't take the full credit.
All I do is use something called OneSkin OS-01 twice a day, every day.
The company is built by four brilliant PhD women
who identified a peptide
that effectively reverses the age of your skin.
I love it.
And again, I use this twice a day, every day.
You can go to oneskin.co and use the code PETER at checkout for a discount on the same product I use. That's oneskin.co, code PETER at checkout. Alright, back to the episode.
Google I/O was amazing. I mean, just hats off to the entire team there. Veo 3 was shocking.
And we're sitting here eight miles from Hollywood.
And I'm just wondering your thoughts
on the impact this will have.
You know, are we going to see the one-person feature film, like we're potentially seeing one-person unicorns in the future with agentic AI? Are we going to see an individual be able to compete with a Hollywood studio, and should Hollywood be worried about its assets?
Well, they should always be worried because of intellectual property issues and so forth.
I think blockbusters are likely to still be put together by people with an awful lot of help from AI.
I don't think that goes away.
If you look at what we can do with generating long-form video, it's very expensive to do, although that will come down, and also there's an occasional extra leg or extra clock or whatever.
It's not perfect yet, and that requires human editing. So even in the scenario where a lot of the video is created by a computer, there are going to be humans producing it and directing it. My best example in Hollywood: I was at a studio where they were showing me this. They happened to have an actor, a young man, who was recreating William Shatner's movements, and they had licensed the likeness from William Shatner, who's now older, and they put his head on this person's body, and it was seamless. Well, that's pretty impressive. That's more revenue for everyone. The unknown actor becomes a bit more famous.
Mr. Shatner gets more revenue.
The whole movie genre works.
That's a good thing.
Another example is that nowadays they use green screens rather than sets.
And furthermore in the alien department, when you have, you know, scary movies,
instead of having the makeup person, they just add the makeup digitally.
So who wins?
The costs are lower.
The movies are made quicker.
In theory, the movies are better because you have more choices.
So everybody wins.
Who loses?
Well, there was somebody who built that set and that set isn't needed anymore.
That's a carpenter and a very talented person who
now has to go get a job in the carpentry business.
So again, I think people get confused. If I look at the digital transformation of entertainment, subject to intellectual property being held, which is always a question, it's going to be just fine.
Right.
There's still going to be blockbusters.
The cost will go down, not up, or the relative income will shift, because in Hollywood they essentially have their own accounting, and they allocate all the revenue to the key producing people.
The allocation will shift to the people who are the most creative.
That's a normal process.
Remember, we said earlier that automation gets rid of the poorest, lowest-quality jobs, the most dangerous jobs. The jobs that are sort of straightforward are probably automated, but the really creative jobs remain. Another example: the scriptwriters. You're still going to have scriptwriters, but they're going to have an awful lot of help from AI to write even better scripts. That's not bad.
Okay. I saw a study recently out of Stanford
that documented AI being much more persuasive than the best humans. Yes. That set off some alarms.
It also set off some interesting thoughts on the future of advertising.
Any particular thoughts about that? So we know the following. We know that if the system knows you well enough, it can learn to convince you of anything.
So what that means in an unregulated environment is that the systems will know you better and better.
They'll get better at pitching you.
And if you're not savvy, if you're not smart, you could be easily manipulated.
We also know that the computer is better than humans trying to do the same thing.
So none of this surprises me.
The real question, and I'll pose this as a question, is: in the presence of unregulated
misinformation engines, of which there will be many, advertisers, politicians,
just criminal people, people trying to evade responsibility.
There's all sorts of people who have free speech.
When they have free speech,
which includes the ability to use misinformation
to their advantage, what happens to democracy?
We've all grown up in democracies
where there's a sort of consensus around trust, and an elite that more or less administers the trust vectors and so forth. There's a set of shared values. Do those shared values go away? In our book, Genesis, we talk about this as a deeper problem:
What does it mean to be human when you're interacting mostly with these digital things?
Especially if the digital things have their own scenarios. My favorite example is that you have a son or a grandson
or a child or a grandchild, and you give them a bear,
and the bear has a personality, and the child grows up,
but the bear grows up too.
So who regulates what the bear talks to the kid?
Most people haven't actually experienced the super-empathetic voice that can take any inflection you want. When they see that, which will probably be in the next two months, it's going to completely open their eyes to what this is.
Well, remember that voice casting was solved a few years ago, and you can cast anyone else's voice onto your own. And that has all sorts of problems.
Have you seen an avatar yet of somebody that you love
that's passed away or Henry Kissinger or anything?
We actually created one with the permission of his family.
Did you start crying instantly?
It's very emotional.
It's very emotional because it brings back,
I mean, it's a real human, it's a real memory,
a real voice, and I think we're going to see more of that.
Now, one obvious thing that will happen is, at some point in the future, when we naturally die, our digital essence will live in the cloud. It will know what we knew at the time, and you can ask it a question.
So can you imagine asking Einstein, going back to Einstein, what did you really think about,
you know, this other guy?
Yeah.
You know, did you actually like him or were you just being polite with him with letters?
Yeah.
Right.
And in all those famous contests that we study as students, can you imagine being able to ask the people involved, with today's retrospective, what did you really think?
I know that the education example you gave earlier is so much more compelling when you're talking to Isaac Newton or Albert Einstein instead of just a textbook.
You talk about...
You know, coming back to Veo 3 and the movies: one of the first companies we incubated out of MIT, Course Advisor, we sold to Don Graham and the Washington Post, and then I was working for him for a year after that. And the conception was: here's the internet, here's the newspaper, let's move the newspaper onto the internet, we'll call it washingtonpost.com. And if you look at where it ended up today, with Meta, TikTok, YouTube, it didn't end up anything like "the newspaper moves to the internet." So now here's Veo 3, here are movies.
You can definitely make a long-form movie much more cheaply, but I just had this thought: a director will try to make a tearjerker by leading me down a two-hour-long path, but an AI can get you to that same emotional state in about five minutes if it's personalized to you.
Well, one of the things that's happened because of the addictive nature of the internet is we've lost sort of the deep state of reading. So I was walking around and I saw a Borders, sorry, a Barnes & Noble bookstore. It was big.
Oh my God. My old home.
And I went in and I felt good; it's a very fond memory. But the fact of the matter is that
people's attention spans are shorter.
They consume things quicker.
One of the interesting things about sports is that the sports-highlights business is a huge business, licensed clips around highlights, because it's more efficient than watching the whole game.
So I suspect that if you're with your buddies and you want to be drinking and so forth, you put the game on, that's fine. But if you're a busy person, busy with whatever you're busy with, and you want to know what happened with your favorite team, the highlights are good enough.
Yeah, you have four panes of it going at the same time too.
And so this is again a change and it's a more fundamental change to attention.
I work with a lot of 20-somethings in research.
And one of the questions I had is how do they do research in the presence
of all of these stimulations?
And I can answer the question definitively.
They turn off their phone.
Yeah.
You can't think deeply as a researcher with this thing buzzing. And remember that part of the industry's goal was to fully monetize your attention.
Yeah.
Right. Aside from sleeping, and we're working on having you sleep less, I guess, from stress, we have essentially tried to monetize all of your waking hours with something: some form of ads, some form of entertainment, some form of subscription. That is completely antithetical
to the way humans traditionally work with respect to long,
thoughtful examination of principles, the time that it takes to be a good human being. These
are in conflict right now. There are various attempts at this. You know, my favorites are these digital apps that make you relax. Okay, but the correct thing to do to relax is to turn off your phone, right? And then relax in the traditional way of, you know, 70,000 years of human existence.
Yeah.
I had an incredible experience.
Yeah.
I'm doing the flight from MIT to Stanford all the time.
And, you know, like you said, attention spans are getting shorter and
shorter and shorter, the TikTok extreme, you know, the clips are so short.
This particular flight was my first time brainstorming
with Gemini for six hours straight.
And I completely lost track of time.
And I'm trying to figure out a circuit design, a chip design for inference-time compute. And it's so good at brainstorming with me and bringing back data that, as long as the wifi on the plane was working, time just went by. It was my first experience with technology that went the other direction.
But notice that you also were not responding to texts and annoyances. You weren't reading
ads. You were deep inside of a system for which you paid a subscription. So if you look
at the deep research stuff, one of the questions I have when you do a deep research analysis,
I was looking at factory automation for something. Where is the boundary of factory automation versus human automation?
It's some area I don't understand very well.
It's a very, very deep technical set of problems.
I didn't understand it.
It took 12 minutes or so to generate this paper.
12 minutes of these supercomputers is an enormous amount of time.
What is it doing?
And the answer, of course, the product is fantastic.
Yeah.
You know, to Peter's question earlier too, I keep the Google IPO
prospectus in my bathroom up in Vermont.
And it's from 2004.
I've read it probably 500 times, but I don't know if you remember.
Is your bathroom ready?
It's getting a little ratty at this stage.
You're the only person besides me who read the same form. I read it 500 times because I had to.
Because you had to. It was legally required.
Well, I still read it.
Because of the misconceptions; it's such a great learning experience. But even before the IPO, if you think back, there was this big debate: will it be ad revenue? Will it be subscription revenue? Will it be paid inclusion? Will the ads be visible? All this confusion about how you were going to make money with this thing.
Now the internet moved to almost entirely ad revenue.
But if you look at the AI models, you've got your $20, now $200, subscription, and people are signing up like crazy. So it's ultra-convincing. Is that going to become a form of ad revenue, where it convinces you to buy something, or is it going to be subscription revenue, where people pay a lot more and there's no advertising at all?
No, but you have this with Netflix.
There was this whole discussion about how you would fund movies through ads, and the answer is you don't. You have a subscription. And the Netflix people looked at having free movies, without a subscription and advertising-supported, and the math didn't work.
I think both will be tried.
I think the fact of the matter is deep research, at least at the moment, is going to be chosen for well-to-do or professional tasks. You are capable of spending that $200 a month; a lot of people cannot afford it. And that free service, remember, is the stepping stone for that young person, man or woman,
who just needs that access. My favorite story there is that when I was at Google and I went to Kenya,
and Kenya is a great country, and I was with this computer science professor and he said,
I love Google. And I said, well, I love Google too. And he goes, well, I really love Google.
I said, I really love Google too. And I said, why do you really love Google? He said, because we don't have textbooks. And I thought,
the top computer science program in a nation does not have textbooks.
Yeah. Well, let me jump in on a couple of things here. Eric, in the next few years, what moats will actually exist for startups as AI comes in and disrupts?
Do you have a list?
Yes, I'll give you a simple answer.
What do you look for in the companies
that you're investing in?
So first, the deep tech hardware stuff: there are going to be patents, filings, inventions, the hard stuff. Those things are much slower than the software industry in terms of growth, and they're just as important, you know, power systems, all those robotic systems we've been waiting for for a long time. It's just slower for all sorts of reasons; hardware is hard. In software, it's pretty clear to me it's going to be really simple.
Software is typically a network-effect business where the fastest mover wins. The fastest mover is the fastest learner in an AI system.
So what I look for is a company where they have a loop.
Ideally they have a couple of learning loops.
So I'll give you a simple learning loop.
As you get more people, more people click, and you learn from their clicks. They express their preferences.
So let's say I invent a whole new consumer thing. I don't have an idea for one right now, but imagine I did.
And furthermore, I said that I don't know anything about how consumers behave, but I'm
going to launch this thing.
The moment people start using it, I'm going to learn from them. And I'll have instantaneous learning to get smarter about what they want.
So I start from nothing.
If my learning slope is steep, I'm essentially unstoppable, because by the time my competitor figures out what I've done, my learning advantage is too great. Now, how close behind can my competitor be and still lose? The answer is a few months, because the slopes are exponential.
And so it's likely to me that there will be another 10 fantastic Google-scale, Meta-scale companies, and they'll all be founded on this principle of learning loops. And when I say learning loops, I mean in the core product, solving the current problem as fast as you can. If you cannot define the learning loop, you're going to be beaten by a company that can define it.
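To make the compounding concrete, here's a minimal sketch of that learning-loop arithmetic; the monthly learning rates and the head start are illustrative assumptions, not numbers from the conversation:

```python
# Toy model of the learning-loop argument above: two products improve as
# they learn from usage, and the earlier, slightly faster learner compounds
# away from its competitor. All rates and timings are illustrative assumptions.

def quality(months_active: int, monthly_gain: float) -> float:
    """Product quality after compounding monthly learning gains."""
    return (1.0 + monthly_gain) ** months_active

months = 24       # comparison horizon (assumed)
head_start = 3    # the follower launches a few months late (assumed)

leader = quality(months, 0.25)                 # learns 25% per month (assumed)
follower = quality(months - head_start, 0.20)  # learns 20% per month (assumed)

print(f"leader: {leader:.0f}x  follower: {follower:.0f}x  gap: {leader / follower:.1f}x")
```

With these assumed rates, a three-month head start and a slightly steeper slope leave the follower severalfold behind after two years, which is the sense in which being a few months behind is fatal.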
And you said 10 Meta-, Google-size companies. Do you think there'll also be a thousand? Like, if you look at the enterprise software business, Oracle on down, PeopleSoft, whatever, thousands of those? Or will they all consolidate into those 10 domain-dominant learning-loop companies?
I think I'm largely speaking about consumer scale, because
that's where the real growth is. The problem with learning loops is if your customer is not ready for you, you can only learn at a certain rate.
So it's probably the case that the government is not interested in learning, and therefore there's no growth in a learning loop serving the government. I'm sorry to say that needs to get fixed. Educational systems are largely regulated and run by the unions; they're not interested in innovation, so they're not going to be doing any learning. I'm sorry to say that has to get fixed too.
So the ones where there's a very fast feedback signal are the ones to watch.
Another example: it's pretty obvious that you can build a whole new stock-trading company where, if you get the algorithms right, you learn faster than everyone else, and scale matters. So in the presence of scale and fast learning loops, that's the moat.
Now, I don't know that there are many other moats beyond those.
Do you think brand would be a moat?
A brand matters, but less so.
What's interesting is people seem to be perfectly willing now to move from one thing to the other, at least in the digital world. And there's a whole new set of brands that have emerged, that everyone is using, the next generation of brands that I haven't even heard of.
Within those learning loops, do you think domain-specific synthetic data is a big advantage?
Well, the answer is: whatever causes faster learning. There are applications
where you have enough training data from humans. There are applications where you have to generate the training data
from what the humans are doing.
Right.
So you could imagine a situation where you had a learning loop, where
there's no humans involved, where it's monitoring something, some sensors.
But because you learn faster on those sensors, you get so smart, you can't
be replaced by another sensor management company.
That's the way to think about it.
So what about the capital for the learning loop?
Because, do you know Daniela Rus, who runs CSAIL?
So Daniela and I are really good friends. We've been talking to our governor, Maura Healey, who's one of the best governors in the world.
I agree.
So there's a problem in our academic systems
where the big companies have all the hardware
because they have all the money.
And the universities do not have the money for even reasonably sized data centers.
I was with one university where, after lots of meetings, they agreed to spend $50 million on a data center, which gets them less than a thousand GPUs.
Right.
For the entire campus and all of its research.
And that doesn't even include the terabytes of storage and so forth.
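As a rough check on those numbers, here's a back-of-envelope sketch; the all-in per-GPU cost is an assumed figure, not one quoted in the conversation:

```python
# Back-of-envelope check on "$50 million buys less than a thousand GPUs".
# The all-in cost per GPU (hardware, networking, power, cooling, facility)
# is an assumption for illustration; real costs vary widely.
budget = 50_000_000
all_in_cost_per_gpu = 55_000          # assumed figure, not quoted here

gpus = budget // all_in_cost_per_gpu
print(f"{gpus} GPUs")                 # -> 909 GPUs, i.e. under a thousand
```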
So I and others are working on this as a philanthropic matter. The government is going to have to come in with more money for universities for this kind of stuff. That is among the best investments.
When I was young, I was on a National Science Foundation scholarship, and by the way, I made $15,000 a year. The return to the nation on that $15,000 has been very good, based on the taxes that I pay and the jobs that we have created.
So, core question. So glad you said that.
So creating an ecosystem for the next generation to have access to the systems is important. It's not obvious to me that they need billions of dollars. It's pretty obvious to me that they need a million dollars, two million dollars.
Yeah.
That's the goal.
Yeah.
I want to take us in the direction of wrapping up on superintelligence and the book. We didn't finish the timeline on superintelligence, and I think it's important to give people a sense of how quickly the self-referential learning can take off and how rapidly we can get to something, you know, a thousand times, a million times, a billion times more capable than a human. On the flip side of that, Eric, when I look at my greatest concerns, when we get through this five-to-seven-year period of, as you say, rogue actors and destabilization and such, one of the biggest concerns I have is the diminishment of human purpose. You wrote in the book, and I've listened to it, haven't read it physically, my kids say, you don't read anymore, dad, you listen to books, but you said the real risk is not Terminator, it's drift. You argue that AI wouldn't destroy humanity violently, but might slowly erode human values, autonomy, and judgment if left unregulated and misunderstood.
So it's really a WALL-E-like future versus a Star Trek, boldly-go-out-there future.
In the book, and in my own personal view, it's very important that human agency be protected. Human agency means the ability to get up in the day and do what you want, subject to the law, right?
And it's perfectly possible that these digital devices
can create a form of a virtual prison
where you don't feel that you as a human
can do what you want, right?
That is to be avoided.
I'm not worried about that case.
I'm more worried about the case that if you want to do something, it's just so
much easier to ask your robot or your AI to do it for you.
The human spirit wants to overcome a challenge. I mean, the unchallenged life is no good.
So critical.
But there will always be new challenges.
When I was a boy, one of the things that I did is I would repair my father's car.
I don't do that anymore.
When I was a boy, I used to mow the lawn.
I don't do that anymore.
So there are plenty of examples of things that we used to do that we don't need to do anymore,
but there'll be plenty of things.
Just remember, the complexity of the world that I'm describing is not simple. Just managing the world around you is going to be a full-time and purposeful job, partly because there will be so many people pushing misinformation and fighting for your attention, and there's obviously lots of competition and so forth. There's lots of things to worry about. Plus you have all of the people trying to get your money, create opportunities, deceive you, what have you.
So I think human purpose will remain because humans need purpose.
That's the point.
And there's lots of literature showing that people who have what we would consider to be low-paying, worthless jobs enjoy going to work. So the challenge is not to get rid of their jobs; it's to make their jobs more productive using AI tools. They're still going to go to work. And to be very clear,
this notion that we're all going to be sitting around doing poetry is not happening, right? In the future, there will be lawyers; they'll use tools to have even more complex lawsuits against each other. There will be evil people who will use these tools to create even more evil problems. There will be good people who will be trying to deter the evil people. The tools change, but the structure of humanity, the way we work together, is not going to change.
Peter and I were on Mike Saylor's yacht a couple of months ago, and I was complaining
that the curriculum is completely broken in all
these schools. But what I meant was we should be teaching AI. And he said, yeah, they should be
teaching aesthetics. And I looked at him like, what the hell are you talking about? He said,
no, in the age of AI, which is imminent, look at everything around you. Whether it's good or bad,
enjoyable, not enjoyable, it's all about designing aesthetics. When the AI is such a force multiplier
that you can create virtually anything, what are you creating and why? And that becomes the
challenge.
If you look at Wittgenstein and the sort of theories of all of this stuff, it is all fundamental. We're having the conversation that America has, about tasks and outcomes. It's our culture. But there are other aspects of human life: meaning, thinking, reasoning.
We're not going to stop doing that.
So imagine if your purpose in life in the future is to figure out what's going on and to be successful. Just figuring that out is sufficient, because once you've figured it out, it's taken care of for you.
That's beautiful.
Right.
That provides purpose.
It's pretty clear that robots will take over an awful lot of mechanical or manual work. And for people who like that, well, I liked to repair the car; I don't do it anymore. I miss it, but I have other things to do with my time.
Take me forward. When do you see what you define as digital superintelligence?
Within 10 years.
Within 10 years. And what do people need to know about that? What do people need to understand and sort of prepare themselves for, either as a parent or as an employee or as a CEO?
One way to think about it is that when digital superintelligence finally
arrives and is generally available and generally safe, you're going
to have your own polymath.
So you're going to have the sum of Einstein and Leonardo da Vinci
in the equivalent of your pocket.
I think thinking about how you would use that gift is interesting.
And of course, evil people become more evil, but the vast majority of people are good.
They're well-meaning, right?
So going back to your abundance argument: there are people who've studied the notion of productivity increases, and they believe that we'll see up to 30% year-over-year economic growth through abundance and so forth. That's a very wealthy world. That's a world of much less disease, many more choices, much more fun, if you will, right? Just taking all those poor people and lifting them out of the daily struggle they have, that is a great human goal. Let's focus on that. That's the goal we should have.
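For a sense of scale, here's illustrative arithmetic on that 30% figure; nothing below comes from the conversation beyond the rate itself:

```python
# Illustrative arithmetic only: what sustained 30% year-over-year growth
# compounds to, and the implied doubling time of the economy.
import math

rate = 0.30
ten_year_multiple = (1 + rate) ** 10              # ~13.8x the starting economy
doubling_time = math.log(2) / math.log(1 + rate)  # ~2.6 years to double

print(f"10-year multiple: {ten_year_multiple:.1f}x, "
      f"doubles every {doubling_time:.1f} years")
```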
Does GDP still have meaning in that world?
If you include services, it does. One of the things about manufacturing, and everyone's focused on trade deficits without understanding this, is that the vast majority of modern economies are service economies, not manufacturing economies. If you look at the percentage of farming, it went from roughly 98% to roughly 2 or 3% in America over 100 years. If you look at manufacturing, the heydays were in the 30s and 40s and 50s; those percentages are now down below 10%. It's not because we don't buy stuff, it's because the stuff is automated and you need fewer people. There's plenty of people working in other jobs. So again, look at the totality of the society.
Is it healthy?
If you look at China, it's easy to complain about them. They now have deflation. They have a term for it, it's called lying flat, where people stay at home and don't participate in the workforce, which is counter to their traditional culture. If you look at reproduction rates, these countries are essentially having no children. That's not a good thing.
Those are problems that we're gonna face.
Those are the new problems of the age.
I love that.
Eric, so grateful for your time.
Thank you, thank you both. I love your show.
Yeah, thank you, buddy. Thank you guys.
If you could have had a 10-year head start on the dot-com boom back in the 2000s, would you
guys. If you could have had a 10-year head start on the dot-com boom back in the 2000s, would you
have taken it? Every week I track the major tech meta trends. These are massive game-changing shifts
that will play out over the decade ahead. From humanoid robotics to AGI, quantum computing,
energy breakthroughs, and longevity, I cut through the noise and deliver only what matters
to our lives and our careers.
I send out a Metatrend newsletter twice a week
as a quick two minute read over email.
It's entirely free.
These insights are read by founders, CEOs and investors
behind some of the world's most disruptive companies.
Why?
Because acting early is everything. This is for you if you want to see the future before it arrives and profit from it. Sign up at diamandis.com slash metatrends and be ahead of the next tech bubble. That's diamandis.com slash metatrends.