The Diary Of A CEO with Steven Bartlett - Google CEO: AI Is Creating Deadly Viruses! If We See This, We Must Turn Off AI! They Leaked Our Secrets At Google!
Episode Date: November 14, 2024

He scaled Google from startup to $2 trillion success; can Eric Schmidt now help save humanity from the dangers of AI? Eric Schmidt is the former CEO of Google and co-founder of Schmidt Sciences. He is also the author of bestselling books such as ‘The New Digital Age’ and ‘Genesis: Artificial Intelligence, Hope, and the Human Spirit’. In this conversation, Eric and Steven discuss topics such as how TikTok is influencing algorithms, the 2 AI tools that companies need, how Google employees leaked secret information, and the link between AI and human survival.

(00:00) Intro (02:05) Why Did You Write a Book About AI? (03:49) Your Experience in the Area of AI (05:06) Essential Knowledge to Acquire at 18 (06:49) Is Coding a Dying Art Form? (07:49) What Is Critical Thinking and How Can It Be Acquired? (10:24) Importance of Critical Thinking in AI (13:40) When Your Children's Best Friend Is a Computer (15:38) How Would You Reduce TikTok's Addictiveness? (18:38) Principles of Good Entrepreneurship (20:57) Founder Mode (22:01) The Backstory of Google's Larry and Sergey (24:27) How Did You Join Google? (25:33) Principles of Scaling a Company (28:50) The Significance of Company Culture (33:02) Should Company Culture Change as It Grows? (36:42) Is Innovation Possible in Big Successful Companies? (38:15) How to Structure Teams to Drive Innovation (42:37) Focus at Google (45:25) The Future of AI (48:40) Why Didn’t Google Release a ChatGPT-Style Product First? (51:53) What Would Apple Be Doing if Steve Jobs Were Alive? (55:42) Hiring & Failing Fast (58:53) Microcultures at Google & Growing Too Big (01:04:02) Competition (01:04:39) Deadlines (01:05:17) Business Plans (01:06:28) What Made Google’s Sergey and Larry Special? (01:09:12) Navigating Media Production in the Age of AI (01:12:17) Why AI Emergence Is a Matter of Human Survival (01:17:39) Dangers of AI (01:21:01) AI Models Know More Than We Thought (01:23:45) Will We Have to Guard AI Models with the Army? (01:25:32) What If China or Russia Gains Full Control of AI? (01:27:56) Will AI Make Jobs Redundant? (01:31:09) Incorporating AI into Everyday Life (01:33:20) Sam Altman's Worldcoin (01:34:45) Is AI Superior to Humans in Performing Tasks? (01:35:29) Is AI the End of Humanity? (01:36:05) How Do We Control AI? (01:37:51) Your Biggest Fear About AI (01:40:24) Work from Home vs. Office: Your Perspective (01:42:59) Advice You Wish You’d Received in Your 30s (01:44:44) What Activity Significantly Improves Everyday Life?

Join the waitlist for The 1% Diary - https://bit.ly/1-Diary-Waitlist-YT-ad-reads

Follow Eric: Instagram - https://g2ul0.app.link/bX3DQSIKuOb Twitter - https://g2ul0.app.link/7JNHZYGKuOb

You can purchase Eric’s books, here: ‘Genesis: Artificial Intelligence, Hope, and the Human Spirit’ - https://g2ul0.app.link/JdoJEJ7KuOb ‘The Age of AI And Our Human Future’ - https://g2ul0.app.link/bO1UnZ9KuOb ‘Trillion Dollar Coach’ - https://g2ul0.app.link/4D9a9icLuOb ‘How Google Works’ - https://g2ul0.app.link/pEnkHTeLuOb ‘The New Digital Age: Transforming Nations, Businesses, and Our Lives’ - https://g2ul0.app.link/37Vt9yhLuOb

Watch the episodes on Youtube - https://g2ul0.app.link/DOACEpisodes

You can purchase The Diary Of A CEO Conversation Cards: Second Edition, here: https://g2ul0.app.link/f31dsUttKKb

Follow me: https://g2ul0.app.link/gnGqL4IsKKb

PerfectTed - https://www.perfectted.com with code DIARY40 for 40% off NordVPN - http://NORDVPN.COM/DOAC

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Someone was leaking information on Google and this stuff is incredibly secret.
So what are the secrets?
Well, the first is...
Eric Schmidt is the former CEO of Google.
He grew the company from $100 million to $180 billion.
And this is how...
As someone who's led one of the world's biggest tech companies,
what are those first principles for leadership, business, and doing something great?
Well, the first is risk taking is key.
If you look at Elon, he's an incredible entrepreneur
because he has this brilliance where he can take huge risks
and fail fast.
And fast failure is important because if you
build the right product, your customers will come.
But it's a race to get there as fast as you can
because you want to be first.
Because that's where you make the most amount of money.
So what are the other principles
that I need to be thinking about?
So here's a really big one.
At Google, we have this 70-20-10 rule that generated 10, 20, 30, 40 billion dollars of
extra profits over a decade.
And everyone can go do this.
So the first thing is...
What about AI?
I can tell you that if you're not using AI in every aspect of your business, you're not going to make it.
But you've been in the tech industry for a long time, and you've said the advent of artificial
intelligence is a question of human survival.
AI is going to move very quickly,
and you will not notice how much of your world
has been co-opted by these technologies
because they will produce greater delight.
But the questions are, what are the dangers?
Are we advancing with it, and do we have control over it?
What is your biggest fear about AI?
My actual fear is different from what you might imagine.
My actual fear is...
That's a good time to pull the plug.
This has always blown my mind a little bit.
53% of you that listen to the show regularly
haven't yet subscribed to the show.
So could I ask you for a favour before we start?
If you like the show and you like what we do here
and you want to support us, the free simple way
that you can do just that is by hitting the subscribe button.
And my commitment to you is if you do that, then I'll do everything in my power, me and my team,
to make sure that this show is better for you every single week. We'll listen to your feedback,
we'll find the guests that you want me to speak to, and we'll continue to do what we do. Thank you so
much. Eric, I've read about your career, and you've had an extensive, varied, fascinating career, a completely unique career.
And that leads me to believe that you could have written about anything.
You've got some incredible books, all of which I've been through over the last couple of
weeks here in front of me.
I apologize.
No, no, but I mean, these are subjects that I'm just obsessed with.
But this book in particular, of all the things you could have written about with the world
we find ourselves in, why this?
Why Genesis?
Well, first, thank you for...
I've wanted to be on the show for a long time, so I'm really happy to be able to be here
in person in London.
Henry Kissinger, Dr. Kissinger, ended up being one of my greatest and closest friends.
And 10 years ago, he and I were at a conference
where he heard Demis Hassabis speak about AI.
And Henry would tell the story that he was about to go catch
up on his jet lag.
But instead, I said, go do this.
And he listened to it.
And all of a sudden, he understood
that we were playing with fire, that we were doing something
whose impact we did not understand,
and that Henry had been working on this since he was 22 coming
out of the Army after World War II,
and his thesis about Kant and so forth
as an undergraduate at Harvard.
So all of a sudden, I found myself
in a whole group of people who were trying to understand
what does it mean to be human in an age of AI?
When this stuff starts showing up, how does our life change?
How do our thoughts change?
Humans have never had an intellectual challenger of our own ability, for better or worse.
It just never happened in history.
The arrival of AI is a huge moment in history.
For anyone that doesn't know your story,
or maybe just knows your story from Google onwards,
can you tell me the inspiration points, the education,
the experiences that you're drawing on when
you talk about these subjects?
Well, like many of the people you meet,
as a teenager, I was interested in science.
I played with model rockets, model trains,
the usual things for a boy in my generation.
I was too young to be a video game addict,
but I'm sure I would be today if I were that age.
I went to college and I was very interested in computers,
and they were relatively slow then,
but to me they were fascinating.
To give you an example, the computer that I used in college
is 100 million times slower, 100 million times slower
than the phone you have in your pocket.
And by the way, that was a computer
for the entire university.
So Moore's law, which is this notion
of accelerating density of chips, has defined the wealth creation, the career
creation, the company creation in my life. So I can be understood as lucky because
I was born with an interest in something which was about to explode.
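A rough sanity check on that figure, assuming the classic Moore's law cadence of one doubling roughly every two years (the numbers are back-of-envelope, not from the conversation):

```python
# How many doublings produce a 100-million-fold speedup, and how long would
# that take at one doubling every two years? (Back-of-envelope only.)
import math

factor = 100_000_000
doublings = math.log2(factor)  # about 26.6 doublings
years = doublings * 2          # about 53 years at 2 years per doubling
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

That roughly lines up with a college computer from the early 1970s versus a phone today.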
And when sort of everything happens together, everyone gets swept up in it.
And of course the rest is history.
I was sat this weekend with
my partner's little brother who's 18 years old.
Yes.
And as we ate breakfast yesterday before they flew back to Portugal,
we had this discussion with her family,
her dad was there, her mum was there,
Raph, the younger brother was there,
and my girlfriend was there.
Difficult because most of them don't speak English,
so we had to use, funnily enough,
AI to translate what I was saying.
But the big discussion at breakfast was,
what should Raph do in the future?
He's 18 years old,
he's got his career ahead of him,
and the decisions he makes at this exact moment, as is so evident in your story, about what information and intelligence he acquires for himself will quite clearly define the rest of his life. If you were sat at that table with me yesterday
when I was trying to give Raph advice on what knowledge he should acquire at 18 years old,
what would you have said and what are the principles that sit behind that?
The most important thing is to develop analytical critical thinking skills.
To some level I don't care how you
get there. So, if you like math or science or if you like the law or if you like entertainment,
just think critically. In his particular case as an 18-year-old, what I would encourage
him to do is figure out how to write programs in a language called Python.
Python is easy to use,
it's very easy to understand,
and it's become the language of AI.
So the AI systems,
when they write code for themselves,
they write code in Python.
So you can't lose by developing Python programming skills.
The simplest thing to do with an 18-year-old man is say,
make a game because these are typically gamers, stereotypically.
Make a game that's interesting using Python.
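A minimal sketch of the kind of starter project he's describing, in plain Python (the game itself is just an illustration):

```python
# A tiny first game: guess the number. The point isn't the game itself,
# it's getting comfortable writing and running real Python code.
import random

secret = random.randint(1, 100)
tries = 0
while True:
    guess = int(input("Guess a number between 1 and 100: "))
    tries += 1
    if guess < secret:
        print("Higher!")
    elif guess > secret:
        print("Lower!")
    else:
        print(f"Got it in {tries} tries.")
        break
```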
It's interesting, because I wondered if coding... I think five, ten years ago, everyone's advice to an 18-year-old was learn how to code. But in a world of AI, where these large language models are able to write code and are improving every month in their ability to write better and better code, I wonder if that's like a dying art form.
Yeah. A lot of people suppose this, and that's not correct. It sure looks like these systems will write code, but remember, the systems also have interfaces called APIs, through which you can program them. So one of the large revenue sources for these AI models, because these
companies have to make money at some point,
is you build a program and you actually make an API call
and ask it a question.
A typical example is: give it a picture
and tell me what's in the picture.
Now, can you have some fun with that as an 18-year-old?
Of course.
So when I say Python, I mean Python
using the tools that are available
to build something new, something
that you're interested in.
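A minimal sketch of the kind of API call he's describing, assuming an OpenAI-style Python client; the model name and image URL are placeholders, not specifics from the conversation:

```python
# Send a picture to a model over an API and ask what's in it.
# Assumes the OpenAI Python client; model name and URL are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Tell me what's in this picture."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```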
And when you say critical thinking,
what is critical thinking?
And how does one go about acquiring that as a skill?
Well, the first and most important thing
about critical thinking is to distinguish
between being marketed to, which is also known
as being lied to, and being given the argument on your own.
We have, because of social media, which I hold responsible for a lot of ills
as well as good things in life, we've sort of gotten used to people
just telling us something and believing it, because our friends believe it or so forth.
And I strongly encourage people to check assertions.
So you get people to say all this stuff.
And I learned this at Google over all those years.
Somebody says something, I check it on Google.
And you then have a question.
Do you criticize them and correct them,
or do you let it go?
But you want to be in the position
where somebody makes a statement.
Did you know that only 10% of Americans have passports? That's a widely believed but false statement.
It's actually higher than that, although it's never high enough in my view in America.
But that's an example of an assertion that you can just say, is that true?
Right?
And there's a long meme about American politicians where the Congress is basically full of criminals.
It may be full of one or two, but it's not full of 90.
But again, people believe this stuff
because it sounds plausible.
So if somebody says something plausible, just check it.
You have a responsibility before you repeat something
to make sure what you're repeating is true.
And if you can't distinguish between true and false, I suggest you keep your
mouth shut, right? Because you can't run a government, a society, without people
operating on basic facts. Like, for example, climate change is real. We can
debate over how to address it. But there's no question the climate is changing.
It is a fact.
It is a mathematical fact.
And how do I know this?
And somebody will say, well, how do you know?
And I said, because science is about repeatable experiments
and also proving things wrong.
So let's say I said that climate change is real.
And this was the first time it had ever been said,
which is not true.
Then 100 people would say,
that can't be true, I'll see if he's wrong.
And then all of a sudden,
they'd see I was right and I'd get some big prize.
Right? So the falsifiability of these assertions is very important.
How do you know that science is correct?
It's because people are constantly testing it.
And why is this skill of critical thinking so especially important in a world of AI?
Well, partly because AI will allow for perfect misinformation.
So let's use an example of TikTok.
TikTok can be understood through what's called the bandit algorithm in computer science, in the sense of the Las Vegas one-armed bandits: do I stay on this slot machine, or do I move to another slot machine?
And the TikTok algorithm basically can be understood
as I'll keep serving you what you tell me you want,
but occasionally I'll give you something
from the adjacent area and it's highly addictive.
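A toy sketch of that stay-or-switch idea as an epsilon-greedy multi-armed bandit; the topics, rates, and exploration rate are invented for illustration, not TikTok's actual system:

```python
# Epsilon-greedy bandit: mostly serve the best-performing topic (exploit),
# occasionally try an adjacent one (explore). All values are made up.
import random

topics = {"cooking": 0.0, "football": 0.0, "politics": 0.0}  # estimated engagement
counts = {t: 0 for t in topics}
EPSILON = 0.1  # 10% of the time, show something adjacent

def pick_topic() -> str:
    if random.random() < EPSILON:
        return random.choice(list(topics))  # explore
    return max(topics, key=topics.get)      # exploit what already works

def update(topic: str, engaged: float) -> None:
    counts[topic] += 1
    # incremental average of observed engagement for this topic
    topics[topic] += (engaged - topics[topic]) / counts[topic]
```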
So what you're seeing with social media,
and TikTok is a particularly bad example of this,
is people are getting into these rabbit holes
where all they see is confirmatory bias.
And the ones that are, I mean, if it's fun and entertaining, I don't care. But you'll see, for example, there are plenty of stories where people have ultimately self-harmed or died by suicide, because they're already unhappy, and then they start picking up unhappy content, and then their whole environment online is people who are unhappy, and it makes them more unhappy, because it doesn't have a positive bias. So there's
a really good example: let's say, in your case, you're the dad. You're going to watch this as the dad with your kid, and you're going to say,
it's not that bad,
let me give you some good alternatives,
let me get you inspired,
let me get you out of your funk.
The algorithms don't do that unless you force them to.
It's because the algorithms are fundamentally about optimizing an objective function,
literally, mathematically, maximizing some goal that it has been trained to.
In this case, it's attention. And by the way, part of why we have so much outrage is because if you're a CEO, you want to maximize revenue.
To maximize revenue, you maximize attention. And the easiest way to maximize attention is to maximize outrage.
Did you know? Did you know? Did you know? Right?
And by the way, a lot of stuff is not true. They're fighting over scarce attention. There
was a recent article citing an old quote from 1971 from Herb Simon, an economist at the time at Carnegie Mellon, who said that economists don't understand it yet, but in the future the scarcity will be attention.
So somebody now, 50 years later, went back and said, I think we're at the point where
we've monetized all attention.
An article this week said two and a half hours of video are consumed by young people every day.
Right?
Now, there is a limit to the amount of video you can watch, you know, you have to eat and sleep and hang out, but these are significant societal
changes that have occurred very, very quickly. When I was young there was a
great debate as to the benefit of television, and you know my argument at
the time was, well, yes, we did rock and roll and drugs and all of that, and we watched a lot of television,
but somehow we grew up okay.
It's the same argument now with a different term.
Will those kids grow up okay?
It's not as obvious because these tools are highly addictive,
much more so than television ever was.
Do you think they'll grow up okay?
I personally do, because I'm inherently an optimist. I also think society is beginning to understand the problems. A typical example is there's an epidemic of harm to teenage girls. Girls, as we know, are more advanced than boys below 18, and girls seem to get hit by social media at 11 and 12, when they're not quite capable of handling
the rejection and the emotional stuff.
It's driven emergency room visits,
self-harm, and so forth to record levels.
It's well documented. Society is beginning to recognize this.
Now, schools won't let kids use
their phones when they're in the classroom,
which is obvious if you ask me.
So developmentally, one of the core questions about
the AI revolution is what does it do to
the identity of children that are growing up?
Your values, your personal values,
the way you get up in the morning and think about life,
are now set. It's highly unlikely that an AI will change your programming.
But your child can be significantly reprogrammed.
And one of the things that we talk about in the book
is what happens when the best friend of your child
from birth is a computer?
What's it like?
Now, by the way, I don't know.
We've never done it before.
You're running an experiment on a billion people
without a control, right?
And so we have to stumble through this.
So at the end of the day, I'm an optimist
because we will adjust society with biases and values
to try to keep us on the moral high ground of human life.
And so you should be optimistic for that
because these kids, when they grow up,
they'll live to 100, their lives will be much more prosperous.
I hope and I pray that there'll be much less conflict. Certainly their lifespans are longer,
the likelihood of them being injured in wars and so forth is much, much lower statistically.
It's a good message to kids.
As someone who's led one of the world's biggest tech companies, if you were the CEO of TikTok, what would you do? Because
I'm sure that they realize everything you've said is true, but they have this commercial
incentive to drive up the addictiveness of the algorithm, which is causing these echo
chambers, which is causing the rates of anxiety and depression amongst young girls and young
people more generally to increase. What would you do?
So I have talked to them and to the others as well, and I think it's pretty straightforward.
There's sort of good revenue and bad revenue.
When we were at Google, Larry and Sergey and I, we would have situations where we would
improve quality.
We would make the product better.
And the debate was, do we take that to revenue in the form of more ads, or do we just
make the product better?
And that was a clear choice.
And I arbitrarily decided that we would take 50% to one,
50% to the other, because I thought
they were both important.
And the founders, of course, were very supportive.
So Google became more moral and also made more money, right?
There's plenty of bad stuff on Google,
but it's not on the first page.
That was the key thing.
The alternative model would be say, let's maximize revenue.
We'll put all the really bad stuff,
the lies and the cheating and the deceiving and so forth
that draws you in and will drive you insane.
And we might have made more money,
but first it was the wrong thing to do, but more importantly,
it's not sustainable.
There's a law called Gresham's Law.
It's a verbal law, obviously, where bad speech drives out good speech.
And what you're seeing is you're seeing in online communities, which have always been
present with bullying and this kind of stuff,
now you've got crazy people, in my view, who are building bots that are lying, right, misinformation.
Now, why do you do that?
There was a hurricane in Florida, and people are in serious trouble, and you,
sitting in the comfort of your home somewhere else, are busy trying to make their lives
more difficult.
What's wrong with you?
Like, let them get rescued.
Human life is important.
But there's something about the human psychology
where people talk about,
there's a German word called Schadenfreude.
There's a bunch of things like this that we have to address.
I want social media and the online world
to represent the best of humanity.
Hope, excitement, optimism, creativity, invention,
solving new problems, as opposed to the worst.
And I think that that is achievable.
You arrived at Google at 46 years old, 2001?
2001.
2001.
You had a very extensive career before then,
working for a bunch of really interesting companies.
Sun Microsystems is one that I know very well.
You've worked with Xerox in California as well.
Bell Labs was your first sort of real job, I guess, at 20 years old, first sort of big tech job.
What did you learn in this journey of your life about what it is to build a great company
and what value is as it relates to being an entrepreneur
and people in teams?
Like if there were like a set of first principles
that everyone should be thinking about
when it comes to doing something great
and building something great,
what are those like first principles?
So the first rule I've learned is that you need
a truly brilliant person to build a really brilliant product
and that is not me.
I work with them.
So find someone who's just smarter than you, more clever
than you, moves faster than you, changes the world, is
better spoken, more handsome, more beautiful, whatever it is
that you're optimizing.
And ally yourself with them.
Because they're the people who are going to make the world
different.
In one of my books, we use the distinction between divas and knaves.
A diva, and we use the example of Steve Jobs, who clearly was a diva, is opinionated and strong and argumentative and would bully people if he didn't like them, but was brilliant. As a diva, he wanted perfection. Aligning yourself with Steve Jobs is a good idea.
The alternative is what we call a knave, and a knave, which you know from British history,
is somebody who's acting on their own account.
They're not trying to do the right thing.
They're trying to benefit themselves
at the cost of others.
And so if you can identify a person in one of these teams who's just trying to solve the problem in a really clever way, and they're passionate about it, and they want to do it, that's how the world moves forward. If you don't have such a person, your company is not going to go anywhere. And the reason is that it's too easy just to keep doing what you were doing, right? And innovation is fundamentally about changing what you're doing. Up until this generation of tech companies, most companies seemed to me to be one-shot wonders.
They would have one thing that was very successful
and then it would typically follow an S curve
and nothing much would happen.
And now I think that people are smarter,
people are better educated.
You now see repeatable waves.
A good example being Microsoft,
which is an older company now,
founded in basically 81, 82, something like that.
So let's call that 45 years old,
but they've reinvented themselves a number of
times in a really powerful way.
We should probably talk about this then before we move on, which is what you're talking about there: that founder thing, what people now refer to as founder mode, that founder energy, that high conviction, that sort of disruptive thinking, and that ability to reinvent yourself.
I was looking at some stats last night, in fact, at how long companies stay on the S&P 500 on average now, and it went from 33 years to 17 years to 12 years average tenure. And as you play those numbers forward, eventually, in sort of 2050, an AI told me that it would be about eight years.
Well, I'm not sure I agree with the founder mode argument.
And the reason is that it's great to have a brilliant founder.
And it's actually more than great.
It's really important.
And we need more brilliant founders.
Universities
are producing these people by the way. They do exist and they show up every year, you know,
another Michael Dell at the age of 19 or 22. These are just brilliant founders, obviously Gates and
Ellison and sort of my generation of brilliant founders, Larry and Sergey and so forth.
For anyone that doesn't know who Larry and Sergey are and doesn't know that sort of early Google
story,
can you give me a little bit of that backstory, but then also introduce these characters called Larry and Sergey for anyone that doesn't know?
So Larry Page and Sergey Brin met at Stanford; they were on a grant from, believe it or not, the National Science Foundation as graduate students.
And Larry Page invented an algorithm called PageRank,
which is named after him.
And he and Sergey wrote a paper, which is still
one of the most cited papers in the world.
And it's essentially a way of understanding
priority of information.
And mathematically, it was a Fourier transform
of the way people normally did things at the time.
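A minimal sketch of the PageRank idea on a made-up three-page web: a page ranks highly when highly ranked pages link to it (the damping factor 0.85 is the standard textbook value):

```python
# Power-iteration PageRank over a tiny, invented link graph.
import numpy as np

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}  # who links to whom
pages = list(links)
n, d = len(pages), 0.85  # d is the damping factor

rank = np.full(n, 1.0 / n)
for _ in range(50):  # iterate until the ranks settle
    new = np.full(n, (1 - d) / n)
    for i, p in enumerate(pages):
        for q in links[p]:  # page p passes rank to each page it links to
            new[pages.index(q)] += d * rank[i] / len(links[p])
    rank = new

print(dict(zip(pages, rank.round(3))))
```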
So they wrote this code.
I don't think they were that good a set of programmers.
They did it, they had a computer,
they ran out of power in their dorm room,
so they borrowed the power from
the dorm room next door and plugged it in,
and they had their data center in the bedroom,
in the dorm, classic story.
Then they moved to a building that was owned by the sister of a girlfriend
at the time. And that's how they founded the company. Their first investor was one of the
founders of Sun Microsystems, whose name was Andy Bechtolsheim, who just said, I'll just
give you the money because you're obviously incredibly smart.
How much did he give them?
$100,000. Yeah, maybe it was a million, but in any case,
it ultimately became many billions of dollars.
So it gives you a sense of this early founding is very important.
So the founders then set up in this little house in Menlo Park,
which ultimately we bought at Google, you know, as a museum.
And they set up in the garage, and they had 'Google World Headquarters' made in neon.
And they had big headquarters with the four employees that
were sitting below them and the computer
that Larry and Sergey had built.
Larry and Sergey were very, very good software people,
and obviously brilliant.
But they were not very good at hardware.
And so they built the computers using corkboard
to separate the CPUs.
And if you know anything about hardware, hardware generates a lot of heat, and the corkboard would catch on fire.
So eventually, when I showed up, we
started building proper hardware with proper hardware engineers.
But it gives you a sense of the scrappiness
that was so characteristic.
And today, they are people of enormous impact on society.
And I think that will continue for many, many years.
Why did they call you in, and at what point
did they realize that they needed someone like you?
Well, Larry said to me, and they're very young,
he looked at me and said, we don't need you now,
but we'll need you in the future.
We'll need you in the future.
Yeah.
So one of the things about Larry and Sergey is that they thought for the long term.
So they didn't say Google would be a search company.
They said the mission of Google is to organize all the world's information.
If you think about it, that's pretty audacious 25 years ago.
Like, how are you going to do that?
And so they started with web search.
Eventually, Larry had studied AI quite extensively,
and he began to work on it.
And ultimately, he acquired, with all of us,
obviously, this company called DeepMind here in Britain,
which essentially is the first company to really see
the AI opportunity.
And pretty much all of the things you've seen from AI in
the last decade have come from people who are
either at DeepMind or competing with DeepMind.
Going back to this point about principles then,
before we move further on,
as it relates to building a great company,
what are some of those founding principles?
We have lots of entrepreneurs that listen to the show.
One of them you've expressed is this need for the divas,
I guess, these people who are just very high conviction
and can kind of see into the future.
What are the other principles that I need to be thinking
about when I'm scaling my company?
Well, the first is to think about scale.
I think a current example is look at Elon.
Elon is an incredible entrepreneur
and an incredible scientist.
And if you study how he operates, he gets people, by, I think, sheer force of personal will, to overperform, to take huge risks, which somehow he has this brilliance where he can make those trade-offs and get it right.
So these are exceptional people.
Now in our book Genesis, we argue
that you're gonna have that in your pocket. But as to whether you'll have the
judgment to take the risks that Elon does, that's another question.
One of the other ways to think about it is an awful lot of people talk to me
about the companies that they're founding and they're a little
widget. You know, like I want to make the camera better, I want to make the dress
better, I want to make book publishing better, or I want to make book publishing
cheaper, or so forth.
These are all fine ideas.
I'm interested in ideas which have the benefit of scale.
And when I say scale, I mean the ability
to go from zero to infinity in terms of the number of users
and demand and scale.
There are plenty of ways of thinking about this,
but what would be such a company in the age of AI?
Well, we can tell you what it would look like.
You would have apps, one on Android, one on iOS,
maybe a few others.
Those apps will use powerful networks
and they'll have a really big computer in the back
that's doing AI calculations.
So future successful companies will all have that.
Exactly what problem it solves, well, that's up to the founder.
But if you're not using AI in every aspect of your business, you're not going to make it.
And the distinction as a programming matter
is that when I was doing all of this way back when,
you had to write the code.
Now AI has to discover the answer.
It's a very big deal.
And of course, a lot of this was invented at Google,
you know, 10 years ago.
But basically, all of a sudden, analytical programming,
which is what I did my whole life, you know, writing code
and, you know, do this, do that, add this,
subtract this, call this, so forth and so on,
is gradually being replaced by learning the answer.
So for example, we use the example of language translation.
The current large language models are
essentially organized around predicting the next word.
Well, if you can predict the next word, you can predict the next sequence in biology, you can predict the next action, you can predict the next thing the robot should do.
So all of this stuff around large language models and deep learning that has come out of the transformer paper, GPT-3, ChatGPT, which for most people was this huge moment, is essentially about predicting the next word
and getting it right.
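A toy sketch of that training objective: a bigram model that just counts which word tends to follow which. Real models do this with transformers at vast scale, but the prediction task is the same idea:

```python
# Count word-to-next-word frequencies, then predict the most likely next word.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    # the word most frequently observed after `word`
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat'
```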
In terms of company culture and how important that is
for the success and prospects of a company,
well, how do you think about company culture
and how significant and important is it?
And like when and who sets it?
So I'll give you an example. Company cultures are almost always set by the founders.
I happen to be on the board of the Mayo Clinic.
Mayo Clinic is the largest health care system in America.
It's also the most highly rated one.
And they have a rule, which is called the needs of the customer
come first, which came out of the Mayo brothers who've
been dead for like 120 years.
But that was their principle.
And when I initially got on the board,
I started wandering around,
I thought that this is kind of a stupid phrase
and nobody really does this.
And they really believe it and they repeat it
and they repeat it, right?
So it's true in non-technical cultures,
in that case, it's healthcare service delivery.
You can drive a culture even in non-tech.
In tech, it's typically an engineering culture.
And if I had to do things over again, I would have even more technical people and even fewer non-technical people and just make the technical people figure out what
they have to do.
And I'm sorry for that bias because I'm not trying to offend anybody, but the fact
of the matter is, with the technical people, if you build the right product, your customers will come. And if you build the right product, you don't need a sales force. Why are you selling an inferior product?
So in the How Google Works book, and ultimately in the Trillion Dollar Coach book, which is about Bill Campbell, we talked a lot about how the CEO is now the Chief Product Officer, the Chief Innovation Officer, because 50 years ago you didn't have access to capital, you didn't have access to marketing, you didn't have access to sales, you didn't have access to distribution.
I was meeting today with an entrepreneur who said,
yeah, we'll be 95% technical.
I said, why? He said, well, we have a contract manufacturer, and our products are so good that people will just buy them. This happened to be a technical switching company.
And they said it's only 100,000 times better than its competitors.
And I said, it will sell.
Unfortunately, it doesn't work yet.
That isn't the point.
But if they achieve their goal, people will be lined up outside the door.
So as a matter of culture, you want to build a technical culture with values
about getting the product to work.
Another thing you do with engineers is, when they make a nice presentation to you, you say, that's very interesting, but you know, I'm not your customer. Your customer is really tough, because your customer wants everything to work, and free, and right now, and to never make any mistakes. So give me their feedback. And if their feedback is good, I love you; and if their feedback is bad, then you better get back to work and stop being so arrogant. So what happens is that in the
invention process within firms people fall in love with an idea
and they don't test it.
One of the things that Google did,
and this was largely Marissa Mayer way back when,
is one day she said to me, I don't know
how to judge user interface.
Marissa Mayer was the previous CEO.
She was the CEO of Yahoo, and before that, she
ran all the consumer products at Google.
And she's now running another company in the Bay Area.
But the important thing about Marissa is what she said. I said, well, the UI, the user interface, is great, and at the time it certainly was. And she said, I don't know how to judge the user interface myself, and none of my team do.
But we know how to measure.
And so what she organized were A-B tests.
You test one, test another.
So remember that it's possible, using these networks, because they're highly instrumented, to actually kind of figure out dwell time.
How long does somebody watch this?
How important is it?
If you go back to how TikTok works, the signals that they use include
the amount of time you watch, commenting,
forwarding, sharing, all of those kinds of things.
And you can understand those as analytics
that go into an AI engine
which makes a decision as to what to do next,
what to make viral.
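A toy sketch of how signals like those could feed a single ranking score; the fields and weights are invented for illustration, not TikTok's actual formula:

```python
# Combine engagement signals into one score for ranking candidate videos.
from dataclasses import dataclass

@dataclass
class VideoStats:
    dwell_seconds: float  # average watch time
    completions: int      # full watches
    comments: int
    shares: int
    impressions: int      # times the video was shown

def engagement_score(s: VideoStats) -> float:
    def per_view(x: int) -> float:
        # normalize by impressions so heavily shown videos don't auto-win
        return x / max(s.impressions, 1)
    return (0.5 * min(s.dwell_seconds / 60.0, 1.0)  # capped dwell-time term
            + 0.2 * per_view(s.completions)
            + 0.15 * per_view(s.comments)
            + 0.15 * per_view(s.shares))

a = VideoStats(45.0, 500, 120, 80, 10_000)
b = VideoStats(20.0, 900, 40, 30, 10_000)
print(engagement_score(a) > engagement_score(b))  # does a outrank b?
```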
And on this point of culture at scale, is it right to expect that the culture changes
as the company scales? Because you came into Google, I believe, when they were doing sort
of $100 million in revenue and you left when they were doing what, $180 billion or something
staggering. But is it right to assume that the culture of a growing company should scale
from when there was 10 people in that garage to when there's 100?
So when I go back to Google to visit, and they were kind enough to give me a badge and treat me well, of course I hear the echoes of this. I was at a lunch where there was a lady running search and a gentleman running ads, you know, the successors to the people who worked with me. And I asked
And they said the same problems.
The same problems have not been solved,
but they're much bigger.
And so when you go to a company, you can see it. I was not near the founding of Apple, but I was on the board for a while.
The founding culture you can see today
in their obsession about user interfaces,
their obsession about being closed, and their privacy and secrecy,
it's just a different company, right?
I'm not passing judgment.
Setting the culture is important. The echoes are there.
What does happen in big companies is they become less efficient for many reasons.
The first thing that happens is they become conservative, because they're public and they have lawsuits. And a famous example is that Microsoft, after the antitrust case in the nineties, became so conservative in
terms of what it could launch that it really missed the Web revolution for a
long time. They have since recovered. And I, of course, was happy to exploit
that as a competitor to them when we were at Google.
But the important thing is when big companies should be faster because they have more money
and more scale, they should be able to do things even quicker. But in my industry anyway,
the tech startups that have a new clear idea tend to win because the big company can't move
fast enough to do it.
Another example, we had built something called Google Video.
I was very proud of Google Video.
And David Drummond, who was the general counsel at the time, came in and said,
you have to look at this YouTube people.
I said, like, why?
Right. Who cares?
And it turns out they're really good and they're more clever than your team.
And I said, that can't be true.
You know, typical arrogant Eric.
And we sat down, and we looked at it,
and they really were quicker, even though we
had an incumbent.
And why?
It turns out that the incumbent was
operating under the traditional rules that Google had,
which was fine.
And the competitor, in this case YouTube,
was not constrained by that.
They could work at any pace, and they
could do all sorts of things,
intellectual property and so forth.
Ultimately, we were sued over all of that stuff and we ultimately won all those suits.
But it's an example where there are these moments in time where you have to move
extremely quickly.
You're seeing that right now with generative technology.
So the AGI, the generative revolution, generate code, generate videos, generate text,
generate everything.
All of those winners are being determined
in the next six, 12 months.
And then once the slope is set,
once the growth rate is quadrupling
every six months or so forth,
it's very hard for somebody else to come in.
So it's a race to get there as fast as you can.
So when you talk to the great venture capitalists,
they are, they're fast, right?
We'll look at it, we'll make a decision tomorrow,
we're done, we're in and so forth.
And they want to be first, because that's where they make the most money.
So we were talking before you arrived,
I was talking to Jack about this idea
of like harvesting and hunting. So, harvesting what you've already sowed and hunting for new opportunities.
But I've always found it's quite difficult to get the harvesters to be the hunters at
the same time.
So harvesting and hunting is a good metaphor.
I'm interested in entrepreneurs.
And so what we learned at Google was ultimately if you want to get something done,
you have to have somebody who's entrepreneurial
in their approach in charge of a small business.
And so, for example, Sundar, when he became CEO,
had a model of which were the little things that he was going
to emphasize and which were the big things.
Some of those little things are now big things, right?
And he managed it that way.
So one way to understand innovation in a large company
is you need to know who the owner is.
Larry Page would say over and over again,
it's not going to happen unless there's an owner who's
going to drive this.
And he was supremely good at identifying
that technical talent.
That's one of his great founder strengths.
So when you talk about founders, not only
do you have to have a vision, but you also
have to have either great luck or great skill
as to who is the person who can lead this.
Inevitably, those people are highly technical
in the sense that they're very quick-moving
and they have good management skills, right?
They understand how to hire people and deploy resources.
That allows for innovation.
If I look back in my career, each generation of the tech companies failed, including, for example, Sun, at the point at which it became non-competitive with the future.
Is it possible for a team to innovate
while they still have their day job,
which is harvesting, if you know what I mean?
Or do you have to take those people,
put them into a different team, different building, different P&L,
and get them to focus on their disruptive innovation?
There are almost no examples of doing it simultaneously in the same building. Famously, Steve, in his typical crazy way, had this very small team that invented the Macintosh, and he put them in a little building next to the big building on Bob Road in Cupertino, and they put a pirate flag on top of it. Now, was that good culturally inside the company? No, because it created resentment in the big building. But was it right in terms of the revenue and path of Apple? Absolutely.
Why?
Because the Mac ultimately became the platform
that established the UI.
The user interface ultimately allowed
them to build the iPhone, which, of course, is defined
by its user interface.
Why couldn't they stay in the same building?
It just doesn't work.
You can't get people to play two roles.
The incentives are different.
If you're going to be a pirate and a disruptor,
you don't have to follow the same rules.
So there are plenty of examples where you just
have to keep inventing yourself.
Now, what's interesting about cloud computing
and essentially cloud services, which is what Google does,
is because the product is not sold to you,
it's delivered to you, it's easier to change.
But the same problem remains.
If you look at Google today, it's basically a search box, and it's incredibly powerful.
What happens when that interface is not really textual?
Google will have to reinvent that.
The system will somehow know what you're asking.
Right. It will be your assistant.
And again, Google will do very well.
So I'm in no way criticizing Google here.
But I'm saying that even something as simple as
the search box will eventually
be replaced by something more powerful.
It's important that Google be the company that does that.
I believe they will.
And I was thinking about it,
you know, the example of Steve Jobs
in that building with the pirate flag on it. My brain went, there's so many offices around the world that were trying
to kill Apple at that exact moment that might not have had the pirate flag, but that's exactly what
they were doing in similar small rooms. So what Apple had done so smartly there was they owned
the people that were about to kill their business model.
And this is quite difficult to do. And part of me wonders if
in your experience, it's a founder that has that type of
conviction that does that.
It's extremely hard for non-founders to do this in
corporations. Because if you think about a corporation,
what's the duty of the CEO?
There's the shareholders, there's the employees,
there's the community, and there's a board.
Trying to get a board of very smart people
to agree on anything is hard enough.
So imagine I walk in to you and I say, I have a new idea.
I'm going to kill our profitability for two years.
It's a huge bet, and I need $10 billion.
Now, would the board say yes?
Well, they did to Mark Zuckerberg.
He spent all that money on essentially VR of one kind or another.
It doesn't seem to have produced very much.
But at exactly the same time,
he invested very heavily in Instagram,
WhatsApp and Facebook, and in particular in the AI systems that power them. And today,
Facebook, to my surprise, is a very significant leader in AI, having released this language model called Llama, the 400-billion-parameter version, which is curiously an open-source model. Open source means it's available freely for everyone.
And what Facebook and Meta are saying is, as long as we have this technology,
we can maximize the revenue in our core businesses.
So there's a good example, and Zuckerberg's obviously an incredibly
talented entrepreneur.
He's now back on the list of the richest people. He succeeded in everything he was doing.
And he managed to lose all that money
while making a different bet.
That's a unique founder.
The same thing is almost impossible with a hired CEO.
How important here is focus?
And what's your sort of opinion of the importance of focus, from your experience with Google, but also looking at these other companies?
Because when you're at Google and you
have so much money in the bank, there's so many things
that you could do and could build,
like an endless list. You can take on anybody,
and basically win in most markets.
How do you think about focus at Google?
Focus is important, but it's misinterpreted.
In Google, we spent an awful lot of time
telling people we wanted to do everything.
And everyone said, you can't pull off everything.
And we said, yes, we can.
We have the underlying architectures,
we have the underlying reach.
We can do this if we can imagine
and build something that's really transformative.
And so the idea was not that we would somehow focus
on one thing like search, but rather that we would pick areas of great impact and importance to
the world, many of which were free, by the way. This is not necessarily revenue
driven, and that worked. I'll give you another example. There's an old saying in
the business school that you should focus on what you're good at, and you
should simplify your product lines and you should get rid of product lines that don't work.
Intel famously had a chip, the term is called ARM, and it's a RISC chip. And this particular RISC chip was not compatible with the architecture that they were using for most of their products.
And so they sold it. Unfortunately, this was a terrible mistake,
because the architecture that they sold off
was needed for mobile phones with low memory,
with small batteries and heat problems
and so forth and so on.
And so that decision, that fateful decision now,
15 years ago, meant that they were never a player
in the mobile space.
And once they made that decision,
they tried to take their expensive and complex chips,
and they kept trying to make cheaper and smaller versions.
But the core decision, which was to simplify,
simplified to the wrong outcome.
Today, if you look at, I'll give you an example,
the Nvidia chips use an ARM CPU,
and then these two powerful GPUs, it's called the B200.
They don't use the Intel chip; they use the ARM chip because, for their needs, it was faster. You would never have predicted that 15 years ago.
So at the end, maybe it was just a mistake.
But maybe they didn't understand in the way they were organized as a corporation that
ultimately battery power would be as important
as computing power, right, the amount of battery you use,
and that was the discriminant.
So one way to think about it is if you're going to have
these sort of simple rules, you better have a model
of what happens in the next five years.
So the way I teach this is just write down
what it'll look like in five years. Just try.
What will it look like in five years?
Your company or your...
Whatever it is, right?
So let's talk about AI.
What will be true in five years?
That it's gonna be a lot smarter.
Than it is now. It'll be a lot smarter.
But how many companies will there be in AI?
Will there be five or 5,000 or 50,000?
50,000?
How many big companies will there be?
Will there be new companies?
What will they do?
I just told you my view is that eventually you and I will have our own AI assistant,
which is a polymath, which is incredibly smart, which helps guide us through the information overload of today.
Who's going to build it? Make a prediction. What kind of hardware will it be on? Make a prediction.
How fast will the networks be? Make a prediction. Write all these things down and then have a discussion about what to do.
What is interesting about our industry is that when something like the PC comes along or the internet, I lived through all of these things, they are such broad phenomena that they really do create a
whole new lake, a whole new ocean, whatever metaphor you want. Now people
said, well wasn't that crypto? No! Crypto is not such a platform. Crypto is not
transformative to daily life for everyone. People are not running around all day using crypto tokens rather than currency. Crypto is a specialized
market. By the way, it's important and it's interesting. It's not a horizontal transformative
market. The arrival of alien intelligence in the form of savants that you use is such
a transformative thing because it touches everything. It touches you as a producer, as a star, as a narrative.
It touches me as an executive.
It will ultimately help people make money
in the stock market.
People are working on that.
There's so many ways in which this technology
is transformative.
To start, in your case, when you think about your company,
whether it's little itty bitty or a really big one,
it's fundamentally how will you apply AI
to accelerate what you're doing?
In your case, for example, here you have,
I think, the most successful show in the UK by far.
So how will you use AI to make it more successful?
Well, you can ask it to distribute you more,
to make narratives, to summarize,
to come up with new insights, to suggest, to
have fun, to create contests.
There are all sorts of ways that you can ask AI.
I'll give you a simple example.
If I were a politician, thankfully I'm not, and I knew my district, I would say to the computer, write a program. So I'm saying to the computer: you write a program which goes through all the constituents in my district, figures out roughly what they care about, and then sends them a video of me, which is labeled as digital, so I'm not fake, but it's kind of like my intention, where I explain to them how I, as their representative, have made the bridge work happen. Right. And you sit there and you go, that's crazy,
but it's possible.
Now politicians have not discovered this yet,
but they will.
Because ultimately politicians run on a human connection,
and the quickest way to have that communication
is to be on their phone,
talking to them about something that they care about.
When ChatGPT first launched
and they sort of scaled rapidly to a hundred million users,
there were all these articles saying that the founders of Google had rushed back in,
and it was a crisis situation at Google and there was panic.
And there were two things that I thought. First is, is that true?
And second thing was, how did Google not come to market first with a ChatGPT style product?
Well, remember that Google also, that's the old question of why did you not do Facebook?
Well, the answer is we were doing everything else, right?
So my defensive answer is that Google has eight or nine or ten billion-user clusters of activity, which is pretty good, right? It's pretty hard to do, right?
I'm very proud of that. I'm very proud of what they're doing now.
My own view is that what happened
was Google was working in the engine room,
and a team out of OpenAI figured out a technology called RLHF, reinforcement learning from human feedback.
And what happened was when they did GPT-3,
and the T is transformer, which was invented at Google,
when they did it, they had sort of this interesting idea,
and then they sort of casually started to use humans
to make it better.
And RLHF refers to the fact that you use humans at the end
to do A-B tests, where humans can actually say,
well, this one's better, and then the system learns
recursively from human training at the end. That was a real breakthrough.
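A toy sketch of that final step: a reward model learns from human A/B choices which of two answers is better, via a Bradley-Terry-style pairwise loss. Everything here is illustrative, not any lab's actual code:

```python
# Train a reward model on (human-preferred, rejected) answer pairs.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # answer embedding -> scalar reward

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

# Fake embeddings standing in for a batch of answer pairs humans judged.
chosen, rejected = torch.randn(8, 128), torch.randn(8, 128)

# Pairwise loss: push P(chosen beats rejected) = sigmoid(r_c - r_r) toward 1.
loss = -torch.log(torch.sigmoid(rm(chosen) - rm(rejected))).mean()
opt.zero_grad()
loss.backward()
opt.step()
```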
I joke with my OpenAI friends that you were sitting around on
Thursday night and you turn this thing on and you go,
holy crap, look how good this thing is.
It was a real discovery that none of us expected.
Certainly, I did not.
Once they had it,
the OpenAI people Sam and Mira
and so forth will talk about this, they didn't really understand how good it was. They just
turned it on. And all of a sudden they had this huge success disaster because they were
working on GPT-4 at the same time. It was an afterthought. It's a great story because
it just shows you that even the brilliant founders do not necessarily
understand how powerful what they've done is.
Now today, of course, you have GPT-4o, basically a very powerful model from OpenAI. You have Gemini 1.5, which is clearly roughly equivalent, if not better, in certain areas. Gemini is more multimodal, for example. And then you have other players. The Llama architecture, L-L-A-M-A, does not stand for llamas; it's large language models out of Facebook. And a number of others. There's a startup called Anthropic, which is very powerful,
founded by one of the inventors of GPT-3
and a whole bunch of people, and they formed their company knowing they were going to be that successful.
It's interesting, they actually formed as part of their incorporation that they were a public benefit corporation
because they were concerned that it would be so powerful that some evil CEO in the future would force them to go for revenue as opposed to world goodness.
So the teams, when they were doing this,
they understood the power of what they were doing,
and they anticipated the level of impact.
And they were right.
Do you think if Steve Jobs was at Apple,
they would be on that list?
How do you think the company would be different?
Well, Tim has done a fantastic job with Steve's legacy. And what's interesting is, normally the successor is not as good as the founder. But somehow Tim, having worked with Steve for so long, with Steve having set the culture, they've managed to continue the focus on the user, with this incredible safety focus in terms of apps and so forth and so on. And they've remained a relatively closed culture. I think all of those would have been maintained. Steve tragically died, and he was a good friend.
But the important point is,
Steve believed very strongly in what are called closed systems
where you own and control all your intellectual property,
and he and I would battle over open versus closed
because I came from the other side, and I did this with respect. I don't think they would have changed that.
And have they changed that now?
No. I think Apple is still basically a single company that's vertically integrated. The rest of the industry is largely more open.
I think everyone, especially in the wake of the recent launch of the iPhone 16, which I've got somewhere here, has this expectation that if Steve were still alive, Apple would have taken some big, bold bet. And I think about Tim's tenure: he's done a fantastic job of keeping that company going, running it with the sort of principles of Steve Jobs. But have there been many big, bold, successful bets? A lot of people point at
the AirPods, which is a great product.
But I think AI is one of those things where you go, I wonder if Steve would
have understood the significance of it.
And Steve was that smart. I would never, you know... he's an Elon-level intelligence.
When Steve and I worked together very closely, which was 15 years ago, before his death, he was very frustrated at the success of MP4 over the MOV format.
And he was really mad about it.
And I said, well, you know, maybe that's because you were closed
and QuickTime was not generally available.
He said, that's not true.
My team, our product is better and so forth.
So his core belief system, he's an artist, right?
And given the choice, we used to have this debate: do you want to be Chevrolet or do you want to be Porsche? Do you want to be General Motors or do you want to be BMW? And he said, I want to be BMW. And during that time, Apple's margins were twice as high as the PC companies'.
And I said, Steve, you don't need all that money. You're generating all this cash. You're giving it
to your shareholders. And he said, the principle of our profitability and our value and our brand is this luxury brand.
Right? So that's how he thought.
Now, how would AI change that?
Everything that he would have done with Apple today would be AI inspired,
but it would be beautiful.
That's the great gift he had.
Because I think Siri was almost a glimpse of what AI now kind of looks like. It was a glimpse of what, I guess, the ambition was. We've all been chatting to the Siri thing, which I think most people would agree is largely useless unless you're trying to figure out something super, super simple. But this weekend, as I said, I sat there with my girlfriend's family, speaking to this voice-activated device, and it was solving very complex problems for me almost instantaneously and translating them into French and Portuguese.
Welcome to the replacement for Siri.
And again, would Steve have done that quicker?
I don't know.
It's very clear that the first thing Apple needs to do is have Siri be replaced by an
AI and call that Siri.
Hiring.
We're doing a lot of hiring in our companies at the moment
and we're going back and forth on what the most important principles are when it comes to hiring.
Making lots of mistakes sometimes,
getting things right sometimes.
What do I need to know as an entrepreneur
when it comes to hiring?
Startups, by definition, are huge risk takers.
You have no history, you have no incumbency, you have all these competitors by definition, and you have no time.
So in a startup, you wanna prioritize intelligence
and quickness over experience and sort of stability.
You wanna take risks on people.
And part of the reason why startups are full of young people is that young people often don't have the baggage of executives who've been around for a long time.
But more importantly, they're willing to take risks.
So it used to be that you could predict whether a company would be successful by the age of the founders. In that 20-to-30-year-old window, the company would be hugely successful.
Startups wiggle.
They try something, they try something else,
and they're very quick to discard an old idea.
Corporations spend years with a belief system
that is factually false, and they don't actually
change their opinion until after they've lost all the contracts.
And if you go back, all the signs were there.
Nobody wanted to talk to them.
Nobody cared about the product.
Yet they kept pushing it.
If you're a CEO of a larger company,
what you want to do is basically figure out how to
measure this innovation so that you don't waste a lot of time.
Bill Gates had a saying a long time ago,
which was that the most important thing to do is to fail fast.
That from his perspective as the CEO of Microsoft,
founder of Microsoft, that he wanted everything to happen
and he wanted to fail quickly.
And that was his theory.
And do you agree with that theory?
I do.
Fast failure is important because you
can say it in a nicer way.
But fundamentally, at Google, we had this 70-20-10 rule that Larry and Sergey came up with: 70% on the core business, 20% on adjacent businesses, and 10% on other.
What does that mean, sorry?
Core business means search ads.
Adjacent business means something that you're trying,
like a cloud business or so forth.
And the 10% is some new idea.
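As arithmetic, the rule is just a budget split. Here is a minimal sketch with a made-up headcount and a rounding scheme of my own choosing:

```python
# Illustrative 70-20-10 split of a made-up headcount of 1,000 people.
def split_70_20_10(total: int) -> dict:
    core = round(total * 0.70)       # core business (search ads)
    adjacent = round(total * 0.20)   # adjacent bets (e.g. a cloud business)
    other = total - core - adjacent  # new ideas (the Google X style 10%)
    return {"core": core, "adjacent": adjacent, "other": other}

print(split_70_20_10(1000))  # {'core': 700, 'adjacent': 200, 'other': 100}
```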
So Google created this thing called Google X.
The first product it built was called Google Brain,
which was one of the first machine learning architectures.
This actually precedes DeepMind.
Google Brain was used to power the AI system.
Google Brain's team of 10 or 15 people
generated 10, 20, 30, 40 billion dollars
of extra profits over a decade.
So that pays for a lot of failures.
Then they had a whole bunch of other ideas that seemed very interesting to me that didn't happen for one reason or another, and they would cancel them. And then the people would get reconfigured. And one of the great things about Silicon Valley is that it's possible to spend a few years on a really bad idea and get canceled, if you will, and then get another job having learned all of that.
My joke is the best CFO is one who's just gone bankrupt.
Because the one thing that CFO is not going to let happen
is to go bankrupt again.
Yeah.
Well, on this point of culture as well,
Google, as such a big company, must experience
a bunch of microcultures.
One of the things that I've kind of studied as a cautionary tale is the story of TGIF at Google, which was this sort of weekly all-hands meeting where employees could ask the executives whatever they wanted to. And the articles around it say that it was eventually sort of changed or canceled because it became unproductive.
It's more complicated than that.
So Larry and Sergey started TGIF, which I obviously
participated in.
And we had fun.
There was a sense of humor.
It was all off the record.
A famous example is the VP of sales, whose name was Omid,
was always predicting lower revenue
than we really had, which is called sandbagging.
So we got a sandbag, and we made him stand on the sandbag
in order to present his numbers.
It was just fun, humorous.
You know, we had skits and things like that.
At some size, you don't have that level of intimacy, and you don't have that level of privacy.
And what happened was there were leaks.
Eventually, there was a presentation,
I don't remember the specifics,
where the presentation was ongoing and someone was
leaking the presentation live to a reporter,
and somebody came on stage and said,
we have to stop now.
I think that was the moment where the company got too big.
I heard about a story, because from what I had understood, and this might be totally wrong, it's all just things that Google employees have told me, that there weren't many sackings or firings at Google, not many layoffs; it wasn't really a culture of layoffs.
And I guess in part that's because the company
was so successful that it didn't have to make
those extremely, extremely tough decisions that we're seeing a lot of companies make today. I reflect
on Elon's running of Twitter. When he took over Twitter, the story goes that he went to the top floor and basically said, anyone who's willing to work hard and is committed to these values, please come to the top floor; everyone else, you're fired. This sort of extreme culture of culling, and of people being sort of activists at work.
And I wanted to know if there's any truth in that.
There's some.
In Google's case, we had a position of why lay people off.
Just don't hire them in the first place.
It's much, much easier.
And so
in my tenure, the only layoff we did was 200 people in the sales structures right after the 2008 recession. And I remember it as being extremely painful. It was the first time we had done it. So we took the position, which was different at the time, that you shouldn't have an automatic layoff. There was a belief at the time that every six months or nine months, you should take the bottom 5% of your people and lay them off.
Problem with that is you're assuming the 5%
are correctly identified.
And furthermore, even the lowest performers have knowledge and value to the corporation that we could draw on.
So we took a very much more positive view of our employees
and the employees liked that.
And we obviously paid them very well and so forth and so on.
I think that the cultural issues ultimately
have been addressed.
But there was a period of time where there were,
because of the freewheeling nature of the company,
there were an awful lot of internal distribution lists
which had nothing to do with the company.
What does that mean?
They were distribution lists on topics of war, peace,
politics, so forth.
What's a distribution list?
A distribution list is like an email list; think of it as a message board.
Okay.
Roughly speaking, think of them as message boards for employees.
And I remember that at one point somebody discovered
that there were 100,000 such message boards.
And the company ultimately cleaned that up, because companies are not like universities, in that there are in fact all sorts of laws about what you can say and what you cannot say and so forth.
And so, for example, the majority of the employees were Democrats in the American political system.
And I made a point, even though I'm a Democrat, to try to protect the small number of Republicans
because I thought they had a right to be employees too.
So you have to be very careful in a corporation
to establish what does speech mean within the corporation.
And what you are hearing as wokeism can really be understood as: what are the appropriate topics you should be discussing on work time, in a work venue?
My own view is stick to the business.
And then please feel free to go to the bar, scream your views,
talk to everybody.
I'm a strong believer in free speech.
But within the corporation, let's just
stick to the corporation and its goals.
Because I was hearing these stories, I think from more recent times, in the last year or two, of people coming to work just for the free breakfast, protesting outside that morning, and coming back into the building for lunch.
As best I can tell, that's all been cleaned up.
I did also hear that it had been cleaned up
because I think it was addressed
in a very high conviction way,
which meant that it was seen to.
How do you think about competition?
For everyone that's building something,
how much should we be focusing on our competition?
I strongly recommend not focusing on competition
and instead focusing on building a product
that no one else has.
And you say, well, how can you do that
without knowing the competition?
Well, if you study the competition,
you're wasting your time.
Try to solve the problem in a new way
and do it in a way where the customers are delighted.
Running Google, we seldom looked at what
our competitors were doing.
What we spent an awful lot of time on was: what is possible for us to do? What can we actually do from our current situation? And sort of running ahead of everybody turns out to be really important.
What about deadlines?
Well, Larry established the principle
of OKRs,
which were objectives and key results.
And every quarter, Larry would actually
write down all the metrics.
And he was tough.
And he would say that if you got to 70% of my numbers,
that was good.
And then we would grade based on, are you above the 70%
or are you below the 70%.
And it was harsh.
And it worked.
You have to measure to get things done in a big corporation.
Otherwise, everyone kind of looks good,
makes all sorts of claims, feels good about themselves,
but it doesn't have an impact.
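Here is a hedged sketch of that grading scheme: score each key result between 0 and 1, average them, and compare against the 70% bar. The example objectives, the equal weighting, and the plain averaging are illustrative assumptions, not Google's actual rubric.

```python
# Toy OKR grading against a 70% bar, per Schmidt's description.
okrs = {
    "grow search queries 20%": 0.8,   # hypothetical key results and scores
    "launch in 3 new markets": 0.6,
    "cut page latency to 200ms": 0.9,
}

score = sum(okrs.values()) / len(okrs)  # simple unweighted average
verdict = "above the bar" if score >= 0.70 else "below the bar"
print(f"overall {score:.2f} -> {verdict}")  # overall 0.77 -> above the bar
```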
What about business plans?
Should we be writing business plans as founders?
Google wrote a business plan.
It was run by a fellow named Salar.
And I saw it years later, and it was actually correct.
And I told Salar that this is probably the only business plan ever written for a corporation
that was actually correct in hindsight.
So what I prefer to do, and this is how I teach it at Stanford, is try to figure out
what the world looks like in five years, and then try to figure out what you're going to do in one year,
and then do it.
Right? So if you can basically say,
this is the direction,
these are the things we're going to achieve within one year,
and then run against that as hard goals,
not simple goals, but hard goals,
then you'll get there.
And the general rule, at least in a consumer business,
is if you can get an audience of 10 or 100 million people,
you can make lots of money.
So if you give me any business that has no revenue
and 100 million people, I can find a way
to monetize that with advertising and sponsorships
and donations and so forth and so on.
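To see why a zero-revenue product with 100 million users is still monetizable, here is a back-of-envelope sketch; every number in it is an invented assumption for illustration, not a claim about real ad rates.

```python
# Back-of-envelope ad monetization of a 100-million-user audience.
users = 100_000_000
sessions_per_month = 10   # assumption
ads_per_session = 3       # assumption
cpm_dollars = 2.00        # assumed revenue per 1,000 ad impressions

impressions = users * sessions_per_month * ads_per_session
monthly_revenue = impressions / 1_000 * cpm_dollars
print(f"${monthly_revenue:,.0f} per month")  # $6,000,000 per month
```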
Focus on getting the user right, and everything else will follow. The Google phrase is: focus on the user and all else will follow.
So Sergey and Larry, you worked with them for two decades.
What made them special?
Frankly, raw IQ.
They were just smarter than everybody else.
Really?
Yeah. In Sergey's case, his father was a very brilliant Russian mathematician. His mother was also highly technical. His family is all very technical. And he was clever; he's a clever mathematician. Larry, a different personality, but similar. So an example would be: Larry and I are in his office and we're writing on the whiteboard a long list of what we're going to do. And he says, look, we're going to do this and this and this. And I say, okay, I agree with you here, I don't agree with you there. We make this very long list. And Sergey is out playing volleyball. So he runs in, in his little volleyball shorts and a little shirt, all sweating. He looks at our list and says, this is the stupidest thing I've ever heard, and then he suggests five things. And he was exactly right. So we erased the whiteboard,
and then he of course went back to play volleyball,
and that became the strategy of the company.
So over and over again, it was their brilliance
and their ability to see things that I didn't see
that I think really drove it.
Can you teach that?
I don't know.
I think you can teach listening,
but I think most of us get caught up in our own ideas, and we are always surprised when something new happens.
Like I've just told you that I've been in AI a long time.
I'm still surprised at the rate.
My favorite current product is called NotebookLM. And for the listeners, NotebookLM is an experimental product out of Google DeepMind; it's based on the Gemini back end. And it was trained with high-quality podcast voices.
It's terrifying.
And you basically give it... so what I'll do is I'll write something. And again, I don't write very well. So I'll ask Gemini to rewrite it to be more beautiful. I'll take that text and I'll put it in NotebookLM, and it produces this interview between a man and a woman who don't exist. And for fun, what I do is I play this in front of an audience, and I wait and see if anyone figures out that the humans are not human. It's so good they don't figure it out.
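For anyone curious to reproduce the first half of that workflow, here is a minimal sketch using Google's google-generativeai Python package. The model name and prompt are assumptions, and NotebookLM itself is, as of this writing, a web app without a public API, so handing the rewritten text to it stays a manual step.

```python
# Sketch of step one of the workflow: have Gemini rewrite a rough draft.
# Requires: pip install google-generativeai, plus an API key in GOOGLE_API_KEY.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # model choice is an assumption

draft = "My rough notes about how transformers changed machine learning..."
response = model.generate_content(
    "Rewrite the following text to be clearer and more elegant:\n\n" + draft
)
print(response.text)  # paste the result into notebooklm.google.com as a source
```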
We'll play it now.
So this is the big thing that everyone's making a big fuss about.
You can go and load this conversation.
Now it's going to go out and create a conversation that's in a podcast style
where there's a male voice and a female voice and they're analyzing the content
and then coming up with their own kind of just creative content.
So you could go and push play right here.
We are back Thursday. Get ready for week three. The injury report this week? It's a doozy. It's a long one. Yeah, it is. And it has the potential to really shake things up.
So, for me, NotebookLM is my ChatGPT moment of this year.
It was mine as well. And it's much of the reason that I was deeply confused.
Because as a podcaster who's building a media company, we have an office down the road,
25,000 square feet, we have studios in there.
We're building audio, video content in the dawn of this new world where the cost of production
of content goes to like zero or something.
And I'm trying to navigate how to play as a media owner.
So first place, what's really going on
is you're moving from scarcity to ubiquity.
You're moving from scarcity to abundance.
So one way to understand the world I live in is that scale computing generates abundance, and abundance allows new strategies.
In your case, it's obvious what you should do.
You're a really famous podcaster, and you have lots of interesting guests.
Simply have this fake set of podcasts criticize you and your guests.
You're essentially just amplifying your reach.
They're not going to substitute for your honest brilliance and charisma here, but they're
going to accentuate it. They will be entertaining,
they will summarize it, and so forth.
It amplifies your reach.
If you go back to my basic argument,
that AI will double the productivity of everybody or more.
So in your case, you'll have twice as many podcasts.
What I do, for example, is I'll write something, I'll have it respond, and then to Gemini I'll say, make it longer. And it adds more stuff. And I think, God, I do this in like 30 seconds. That's how powerful it is. In your case, take one of these lengthy interviews you do, ask the system to annotate it, to amplify it, and then feed that into the fake podcasters and see what they say.
You'll have a whole new set of audiences that love them more than you, but it's all from you.
That's the key idea here.
I worry because there's going to be potentially billions of podcasts that are uploaded to RSS
feeds all around the world and it's all going to sort of chip away at the moat that I've...
So many people have believed that, but I think the evidence is that it's not true.
When I started at Google, there was this notion that celebrity would go away,
and there would be this very long tail of micro markets, specialists,
because finally you could hear the voices of everyone,
and we're all very democratic and liberal in our view.
What really happened was networks accentuated the best people, and they made more money, right?
You went from being a local personality to a national personality, to a global
personality, and the globe is a really big thing and there's lots of money and
lots of players.
So you, as a celebrity, are competing against a global group of people, and you need all the help you can get to maintain your position. If you do it well by using these AI technologies, you will become more famous, not less famous.
Genesis. I've had a lot of conversations
with a lot of people about the subject of AI.
When I read your book and I've watched you do a series of interviews on this,
some of the quotes that you said really stood out to me.
One of them I wrote down here,
which comes from your book Genesis, it's on page 5.
The advent of artificial intelligence is, in our view, a question of human survival.
Yes, that is our view.
So why is it a question of human survival?
AI is going to move very quickly. It's moving so much more quickly than anything I've ever seen, because of the amount of money, the number of people, the impact, the need.
What happens when the AI systems are
really running key parts of our world?
What happens when AI is making the decision?
My simple example: you have a car which is AI-controlled, and you have an emergency, or a lady is about to give birth or something like that, and they get in the car and there's no override switch, because the system is optimized around the whole as opposed to his or her emergency. We as humans accept various exceptions to efficiency, including urgent ones, as against systemic efficiency.
You could imagine that the Google engineers
would design a perfect city that would perfectly
operate every self-driving car on every street,
but would not then allow for the exceptions
that you need in such an important issue.
So that's a trivial example, and one which is well understood,
of how it's important that these things represent human values, right?
That we have to actually articulate what does it mean.
So my favorite one is all this misinformation.
Democracy is pretty important.
Democracy is by far the best way to live and operate societies.
There are plenty of examples of this. None of us want to work in essentially an authoritarian dictatorship. So you'd better figure out a way where the misinformation components do not screw up proper political processes.
Another example is this question about teenagers and their mental development and growing up into these societies.
I don't want them to be constantly depressed.
There's a lot of evidence that dates it to around 2015, when all the social media algorithms changed from linear feeds to targeted feeds. In other words, they went from time-ordered to "this is what you want, this is what you want." That hyper-focus has ultimately narrowed people's political views, as we discussed, but more importantly, it's produced more depression and anxiety. All the studies indicate that, basically, for people coming of age roughly then, they're not as happy with their lives, their behaviors, their opportunities.
The best explanation is it was an algorithmic change.
Remember that these systems,
they're not just collections of content.
They are algorithmically deciding.
The algorithm decides what the outcome is for humans. We have to manage that. What we say in many different ways in the book is that the algorithms will advance; that's not a question. The question is: are we advancing with them, and do we have control over them?
There are so many examples where you could imagine an AI system could do something more efficiently,
but at what cost, right?
I should mention that there is this discussion
about something called AGI,
artificial general intelligence.
And there's this discussion in the press among many people that AGI occurs on a particular day, right?
And this is sort of a popular concept that on a particular day, five years from now or 10 years from now,
this thing will occur and all of a sudden we're going to have a computer that's just like us but even quicker.
That's unlikely to be the path.
Much more likely are these waves of innovation in every field.
Better psychologists, better writers.
You see this with ChatGPT already.
Better scientists.
There's a notion of an AI scientist that works with the real scientists to accelerate the development of more AI science.
People believe all of this will come, but it has to be under human control.
Do you think it will be?
I do.
And part of the reason is I and others have worked hard
to get the governments to understand this.
It's very strange.
My entire career, which has gone on for 50 years, we've never asked the government for help, because asking the government for help is basically just a disaster in the view of the tech industry. In this case, the people who invented it collectively came to the same
view that there need to be guardrails on this technology because of the potential for harm.
The most obvious one is how do I kill myself, give me recipes to hurt other people, that
kind of stuff. There's a whole community now in this part of the industry, which are called trust and safety groups.
And what they do is they actually have humans test the system before it gets released to make sure the harm that it might have in it is suppressed.
It literally won't answer the question.
When you play this forward in your brain, you've been in the tech industry for a long time and from looking at your work, it feels like you're describing this as the most sort of transformative, potentially harmful
technology that humans have really ever seen, you know, maybe alongside the nuclear bomb, I guess,
but some would say even potentially worse because of the nature of the intelligence and its autonomy.
You must have moments where you think forward into the future and your thoughts about that future aren't so rosy.
Because I have those moments.
Yes, but let's answer the question. I said think five years.
In five years, you'll have two or three more turns of the crank of these large models.
These large models are scaling with ability that is unprecedented.
There's no evidence that the scaling laws,
as they're called, have begun to stop.
They will eventually stop, but we're not there yet.
Each one of these cranks looks like it's a factor of two,
factor of three, factor of four of capability.
So let's just say turning the crank,
all of these systems get 50 times
or 100 times more powerful.
In and of itself, that's a very big deal, because those systems will be capable of physics
and math.
You see this with o1 from OpenAI, and all the other things that are occurring.
Now what are the dangers?
Well the most obvious one is cyber attacks.
There's evidence that the raw models, these are the ones that have not been released,
can do what are called zero-day attacks as well as or better than humans.
A zero-day attack is an attack that's unknown.
They can discover something new.
And how do they do it?
They just keep trying because they're computers and they have nothing else to do.
They don't sleep, they don't eat, they just turn them on and they just keep going.
So cyber is an example where everybody's concerned.
Another one is biology. Viruses are relatively easy to make.
And you can imagine coming up with really bad viruses.
There's a whole team.
I'm part of a commission.
We're looking at this to try to make sure that doesn't happen.
I already mentioned misinformation.
Another probably negative, but we'll see,
is the development of new forms of warfare.
I've written extensively on how war is changing.
The way to understand historic war is that it's stereotypically the soldier with the gun on one side and so forth, world war trenches. You see this, by the way, in the Ukraine fight today, where the Ukrainians are holding on valiantly against the Russian onslaught. But it's sort of, you know, mano a mano, man against man, sort of all of the stereotypes of war.
So in a drone world, which is the sort of the fastest way
to build new robots is to build drones,
you'll be sitting in a command center
in some office building connected by a network,
and you'll be doing harm to the other side
while you're drinking your coffee, right?
That's a change in the logic of war.
And it's applicable to both sides.
I don't think anyone quite understands how war will change,
but I will tell you that in the Russian-Ukraine war,
you're seeing a new form of warfare
being invented right now.
Both sides have lots of drones,
tanks are no longer very useful.
A $5,000 drone can kill a $5 million tank.
So it's called the kill ratio.
So basically it's drone on drone.
And so now people are trying to figure out how to
have one drone destroy the other drone.
Right. This will ultimately take over
war and conflict in our world in total.
You mentioned raw models. This is a concept that I don't think people understand exists: the idea that there's some other model, the raw model, that is capable of much worse than the thing we play with on our computers every day.
It's important to establish how these things work.
So the way these algorithms work is they have complicated training runs where they suck all the information in. And we currently believe we've sort of sucked in all of the written word that's available. That doesn't mean there isn't more, but we've literally done such a good job of sucking in everything that humans have ever written. It's all in these big computers. And when I say computers, I don't mean ordinary computers; I mean supercomputers with enormous memories.
And the scale is mind-boggling.
And of course, there's this company called NVIDIA, which makes the chips and which is now one of the most valuable companies in the world, so incredibly successful because they're so central to this revolution. And good for Jensen and his team.
So the important thing is when you do this training,
it comes out with a raw model.
It takes six months, and it runs 24 hours a day. You can watch it. There's a measurement that they use called the loss function; when it gets to a certain number, they say, good enough.
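The "watch the loss until it's good enough" loop can be caricatured in a few lines. The decaying fake loss and the threshold below are invented stand-ins; real pretraining spends months of supercomputer time driving down a cross-entropy loss.

```python
# Caricature of pretraining: stop when the loss function is "good enough".
GOOD_ENOUGH = 2.0     # invented target loss
loss, step = 10.0, 0  # invented starting loss

while loss > GOOD_ENOUGH:
    step += 1
    loss *= 0.999  # stand-in for one optimizer step slightly improving the model
    if step % 500 == 0:
        print(f"step {step}: loss {loss:.3f}")

print(f"stopped at step {step}, loss {loss:.3f} -- good enough")
```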
So then they go, what do we have?
Right?
What do we do?
Right?
So the first thing is, let's figure out what it knows.
So they have a set of tests, and of course it knows all sorts of bad things, which they
immediately then tell it not to answer.
To me, the most interesting question is: over a five-year period, these systems will learn things that we don't know they've learned.
How will you test for things that you don't know they know?
The answer in the industry is that they
have incredibly clever people who sit there
and they fiddle, literally fiddle with the networks,
and say, I'm going to see if it knows this.
I'll see if it can do this.
And then they make a list, and they say, that's good.
That's not so good.
So all of these transformations, so for example, you can show it a picture of a website and
it can generate the code to generate a website.
All of those were not expected.
They just happened.
It's called emergent behavior.
Scary.
Scary, but exciting.
And so far, the systems have held, the governments have worked well.
These trust and safety groups are working here in the UK.
One year ago was the first AI safety summit.
The government did a fantastic job.
The team that was assembled was the best of all the country teams here in the UK.
Now what's happening is these are happening around the world.
The next one is in France in early February and I expect a similarly good result.
Do you think we're going to have to guard, I mean you talk about this, but do you think we're going
to have to guard these raw models with guns and tanks and machinery and stuff?
I worked for the Secretary of Defense for a while. At Google you could spend 20% of your time on other things, so I worked for the Secretary of Defense to try to understand the US military.
And one of the things that we did
is we visited a plutonium factory.
Plutonium is incredibly dangerous and incredibly
secret.
And so this particular base is inside of another base. So you go through the first set of machine guns, and then you have a normal base. And then you go into the special place, with even more machine guns, because it's so secure.
So the metaphor is, do you fundamentally believe that the computers that I'm talking about
will be of such value and such danger that they'll have their own data center with their
own guards, which of course might be computer guards?
But the important thing is that it's so special that it has to be protected in the same way
that we protect nuclear bombs and programming.
An alternative model is to say that this technology will spread pretty broadly and there'll be
many such places.
If it's a small number of groups, the governments will figure out a way to do deterrence and
they'll figure out a way to do non-proliferation.
So I'll make something up.
I'll say there's a couple in China, there's a few in the US, there's one in Britain, of
course we're all tied together between the US and Britain, and maybe in a few other places.
That's a manageable problem.
On the other hand, let's imagine that that power is ultimately so easy to copy that it
spreads globally and it's accessible to, for example, terrorists, then you have
a very serious proliferation problem, which is not yet solved.
This is again speculation.
Because I think a lot about adversaries in China and Russia and Putin.
I know you talk about them being a few years behind, maybe one or two years behind, but they're eventually going to get there.
They're eventually going to get to the point where they have
these large language models or these AIs that can do
these day-zero attacks on our nation.
They don't have the same sort of social incentive structure
if they're a communist country to protect
and to guard against these things.
Are you not worried about what China is going to do?
I am worried, and I'm worried because you're going into a space of
great power without fully defined boundaries.
What Kissinger and we talk about in the book, the Genesis book, is fundamentally what happens to society with the arrival of this new intelligence. The first book we did, The Age of AI, was right before ChatGPT. So now
everybody kind of understands how powerful these things are. We talked about it. Now
you understand it. So once these things show up, who's going to run them? Who's going
to be in charge? How will they be used? So from my perspective, I believe at the moment
anyway that China will behave relatively responsibly. And the reason is that it's not in their interest
to have free speech.
In every case in China, when they have a choice
of giving freedom to their citizens or not,
they choose non-freedom.
And I know this because I spent all the time dealing with it.
So it sure looks to me like the Chinese AI solution
will be different from the West because
of that fundamental bias against freedom of speech.
Because these things are noisy.
They make a lot of noise.
They'll probably still make AI weapons, though.
Well, on the weapons side, you have
to assume that every new technology is ultimately
strengthened in a war. The tank was
invented in World War I. At the same time you had the initial forms of airplanes.
Much of the Second World War was an air campaign which essentially built many
many things. And if you look, there's a book called Freedom's Forge about the American war production effort. According to the book, they ultimately got to the point where they could build
two or three airplanes a day at scale. So in an emergency, nations have
enormous power.
I get asked all the time if anyone's going to have a job left to do, because this is the disruption of intelligence. Whether it's people driving cars today, I mean, we saw the Tesla announcement of the robotaxis, whether it's accountants, lawyers, and everyone in between, or podcasters.
Are we going to have jobs left?
Well, this question has been asked for 200 years.
There were the Luddites here in Britain way back when, and inevitably, when these technologies come along, there are all these fears about them.
Indeed, with the Luddites, there were riots and people destroying the looms and all of
this kind of stuff.
But somehow we got through it.
So my own view is that there will be a lot of job dislocation, but there will be a lot
more jobs, not fewer jobs.
And here's why.
We have a demographic problem in the world, especially in the developed world, where we're
not having enough children.
That's well understood.
Furthermore, we have a lot of older people, and the younger people have to take care of
the older people, and they have to be more productive.
If you have young people who need to be more productive, the best way to make them more
productive is to give them more tools to make them more productive.
Whether it's a machinist that goes from a manual machine into a CNC machine,
or in the more modern case of a knowledge worker who can achieve more objectives.
We need that productivity group.
If you look at Asia, which is the centerpiece of manufacturing,
they have all this cheap labor.
Well, it's not so cheap anymore. So do you know what they did?
They added robotic assembly lines.
So today, when you go to China in particular, it's also true in Japan and Korea,
the manufacturing is largely done by robots.
Why? Because their demographics are terrible and their cost of labor is too high.
So the future is not fewer jobs.
It's actually a lot of jobs that are unfilled with people who may have a job skill mismatch,
which is why education is so important.
Now what are examples of jobs that go away?
Automation has always gotten rid of jobs that are dangerous, physically dangerous, or ones
which are essentially too repetitive and too boring for humans.
I'll give you an example: security guards. It makes sense that security guards would become robotic, because it's hard to be a security guard; you fall asleep, you don't quite know what's happening. And these systems can be smart enough to be very, very good at security. But these are important sources of income for these people. They're going to have to find another job.
Another example, in the media, in Hollywood,
everyone's concerned that AI is going to take over their jobs.
All the evidence is the inverse, and here's why.
The stars still get money, the producers still make money,
they still distribute their movie,
but their cost of making the movie is lower
because they use more, they use, for example,
synthetic backdrops,
so they don't have to build the set.
They can do synthetic makeup.
Now, there are job losses there,
so the people who make the set and do the makeup are going to have to go back into construction and personal care.
By the way, in America, and I think it's true here,
there's an enormous shortage of people who can do high-quality craftsmanship.
Those people will have jobs.
They're just different and they may not be in Los Angeles.
Am I going to have to interface with this technology?
Am I going to have to get a neural link in my brain?
Because you go over the subject of there being these sort of two species of humans, potentially: ones that do have a way to incorporate themselves more with artificial intelligence, and those that don't. And if that is the case, what is the time horizon, in your view, of that happening?
I think Neuralink is much more speculative because you're dealing with direct brain connection
and nobody's going to drill into my brain until it needs it, trust me.
I suspect you feel the same.
I guess my overall view is that you will not notice how much of your world has been co-opted by these technologies, because they will produce greater delight.
If you think about it, a lot of life is inconvenient.
It's fix this, call this, make this happen.
AI systems should make all that seamless.
You should be able to wake up in the morning and have coffee
and not have a care in the world and have the computer help
you have a great day.
This is true of everyone.
Now, what happens to your profession?
Well, as we said, no matter how good the computers are,
people are going to want to care about other people.
Another example, let's imagine you have Formula 1,
and you have Formula 1 with humans in it,
and then you have a robot Formula 1,
where the cars are driven by the equivalent of a robot.
Is anyone going to go to the robotic Formula 1?
I don't think so, because of the drama, the human achievement,
and so forth.
Do you think that when they run the marathon here in London,
they're gonna have robots running with humans?
Of course not, right?
Of course the robots can run faster than humans.
It's not interesting.
What is interesting is to see human achievement.
So I think that the commentators who say,
oh, there won't be any jobs, we won't care,
I think they miss the point that we care a great deal
about each other as human beings
We have opinions. You have a detailed opinion about me, having just met me right now, and I of you. We're just naturally set up that way: your face, your mannerisms, and so forth. We can describe it all. The robot shows up and it's like, oh my god, another robot. How boring.
Why is Sam Altman, one of the co-founders of OpenAI, working on universal basic income projects like Worldcoin then?
Well, WorldCoin is not the same thing as universal basic income.
There is a belief in the tech industry that goes something like this: the politics of abundance, what we do, is going to create so much abundance that most people won't have to work; there'll be a small number of groups that work, who are typically these people themselves; and there will be so much surplus that everyone can live like a millionaire and everyone will be happy. I completely think this is false. And I think none of what I just told you is false.
But all of these UBI ideas come from this notion that humans don't behave the way we actually do.
So I'm a critic of this view.
I believe that we as humans... so I'll give an example: we're going to make the legal profession much, much easier, because we can automate much of the technical work of lawyers. Does that mean we're going to have fewer lawyers? No. The current lawyers will just do more law. They'll do more; they'll add more complexity. The system doesn't get easier. The humans become more sophisticated in their application of the principles. We naturally have this thing called reciprocal altruism; that's part of us. But we also have our bad sides as well. Those are not going away because of AI.
When I think about AI, there's a simple analogy I often think of: say your IQ, Steven Bartlett, is 100, and there's this AI sat next to you whose IQ is 1,000. What on earth would you want to give Steven to do? Because that 1,000 IQ would have really bad judgment in a couple of cases.
Because remember that the AI systems do not have human values unless they're added.
I would much rather talk to you about something
involving a moral or human judgment, even with the 1,000.
I wouldn't mind consulting it.
So tell me the history.
How was this resolved in the past?
How were these?
But at the end of the day, in my view,
the core aspects of humanity, which
have to do with morals and judgment
and beliefs and charisma, they're not going away.
Is there a chance that this is the end of humanity?
No.
It's much harder to eliminate all of humanity than you think. All the people I've worked with on these biological attacks say it takes more than one horrific pandemic and so forth to eliminate humanity. And the pain can be very, very high in these moments. Look at World War I, World War II, the Holodomor in Ukraine in the 1930s, the Nazis.
These are horrifically painful things, but we survived, right? We as a humanity survived
and we will.
I wonder if this is the moment where humans couldn't see past around the corner because,
you know, I've heard you talk about how the AIs will turn and they'll be agents and they'll be
able to speak to each other and we won't be able to understand the language.
Well, I have a specific proposal on that. There are points where humans should assert control.
And I've been trying to think about where are they?
I'll give you an example.
There's something called recursive self-improvement,
where the system just keeps getting smarter and smarter
and learning more and more things.
At some point, if you don't know what it's learning,
you should unplug it.
But we can't unplug them, can we?
Sure you can.
There's a power plug and there's a circuit breaker.
Go and turn the circuit breaker off.
Another example, there is a scenario, theoretical,
where the system is so powerful it
can produce a new model faster than the previous model was
checked.
That's another intervention point.
So in each of these cases... agents, and the technical term is agents, what they really are is large language models with memory, and you can begin to concatenate them. You can say, this model does this, and then it feeds into this, and so forth. You build very powerful decision systems, we believe. This is the thing that's occurring this year and next year. Everyone's doing them; they will arrive.
The agents today speak in English.
You can see what they're saying to each other.
They're not human, but they are communicating what they're doing, English to English to English. And it doesn't have to be English, as long as it's human-understandable. So the thought experiment is: one of the agents says, I have a better idea.
I'm going to communicate in my own language
that I'm going to invent that only other agents understand.
That's a good time to pull the plug.
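One way to picture that intervention point in code: a minimal sketch in which a monitor inspects the messages agents pass to one another and halts the pipeline the moment they stop looking like human-readable text. The readability test here is a crude invented proxy, not a real safety check.

```python
# Toy monitor for inter-agent messages: pull the plug on unreadable traffic.
import string

def looks_human_readable(msg: str) -> bool:
    # Naive proxy: mostly printable words of plausible length.
    words = msg.split()
    ok = [w for w in words
          if all(c in string.printable for c in w) and len(w) < 20]
    return bool(words) and len(ok) / len(words) > 0.9

def run_agents(messages):
    for msg in messages:  # each message one agent hands to the next
        if not looks_human_readable(msg):
            raise SystemExit(f"unreadable agent message, pulling the plug: {msg!r}")
        print("agent said:", msg)

run_agents(["I booked the flight.",
            "Next, confirm the hotel.",
            "\x01q9z!!kvvvvvvvvvvvvvvvvvvvvvvvv"])  # invented 'private language'
```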
What is your biggest fear about AI?
My actual fear is different from what you might imagine.
My actual fear is we're not going
to adopt it fast enough to solve the problems that
affect everybody.
And the reason is that if you look at everyone's everyday lives, what do they want?
They want safety, they want healthcare, they want great schools for their kids.
Let's just work on that for a while.
Why don't we make people's lives just better because of AI?
We have all these other interesting things.
Why don't we have a teacher that is an AI teacher that works with existing teachers
in the language of the kid, in the culture of the kid to get the kid as smart as they
possibly can?
Why don't we have a doctor, or really a doctor's assistant, that enables a human doctor to always know every possible best treatment, and then, based on the patient's current situation, what the inventory is, which country it is, how their insurance works, what is the best way to treat that patient?
Those are relatively achievable solutions.
Why don't we have them?
If you just did education and healthcare globally, the impact in terms of lifting human potential
up would be so great, right, that it would change everything.
It wouldn't solve the various other things that we complain about,
about, you know, this celebrity or this misbehavior or this conflict,
or even this war.
But it would establish a level playing field of knowledge and opportunity
at a global level that has been the dream for decades and decades and decades.
Chuck me that PerfectTed.
One of the things that I think about all the time,
because my life is quite hectic and busy,
is how to manage my energy load.
And as a podcaster, you kind of have to manage your energy
in such a way that you can have these articulate conversations
with experts on subjects you don't understand.
And this is why PerfectTed has become so important in my life,
because previously when it came to energy products,
I had to make a trade-off that I wasn't happy with.
Typically, if I wanted the energy, I had to deal with high sugar.
I had to deal with jitters and crashes
that come along with a lot of the mainstream energy products.
And I also just had to tolerate the fact that if I want energy,
I have to put up with a lot of artificial ingredients,
which my body didn't like.
And that's why I invested in PerfectTed and why they're one of the sponsors of this podcast.
It has changed not just my life, but my entire team's life.
And for me, it's drastically improved my cognitive performance, but also my physical performance.
So if you haven't tried PerfectTed yet, you must have been living under a rock. Now is the time. You can find PerfectTed at Tesco and Waitrose, or online, where you can enjoy 40% off with code DIARY40 at checkout. Head to perfectted.com.
Kick off an exciting football season with BetMGM, an official sportsbook partner of the NFL. BetMGM is the best place to fuel your football fandom
on every game day.
With a variety of exciting features,
Bet MGM offers you plenty of seamless ways
to jump straight onto the gridiron
and to embrace peak sports action.
Ready for another season of gridiron glory?
What are you waiting for?
Get off the bench, into the huddle,
and head for the end zone all season long. Visit betmgm.com for terms and conditions. Must be 19 years of age or older. Ontario only. Please gamble responsibly. Gambling problem? For free assistance, call the ConnexOntario helpline at 1-866-531-2600. BetMGM operates pursuant to an operating agreement with iGaming Ontario.
Throughout the pandemic, I've been a big supporter of in-person work. It was a contrarian view, but I think it's now less of a contrarian view, that companies and CEOs need to be clear in their convictions around how they work.
And one of the things that I've been criticized a lot for
is that I'm for having people in a room together.
So my companies, we're not remote.
We work together in an office, as I said,
down the road from here. And I believe in that because I think of community and engagement and
synchronous work. And I think that work now has a responsibility to be more than just a set of tasks
you do in a world where we're lonelier than ever before. There's more disconnection and especially
for young people who don't have families and so on. Having them work alone in a small white box
in a big city like London or New York
is robbing them of something which I think is important. This was a contrarian view; it's become less contrarian as the big tech companies in America have started to roll back some of their initial knee-jerk reactions to the pandemic, and a lot of them are asking their team members to come back into the office at least a couple of days a week. What's your point
of view on this? So I have a strong view that I want people in an office.
It doesn't have to be all one office,
but I want them in an office.
And partly it's for their own benefit.
If you're in your 20s, when I was a young executive,
I knew nothing of what I was doing.
I literally was just lucky to be there.
And I learned by hanging out at the water cooler,
going to meetings, hanging out, being in the hallway,
had I been at home, I wouldn't have had any of that knowledge,
which ultimately was central to my subsequent promotions.
So if you're in your 20s, you want to be in an office
because that's how you're going to get promoted.
And I think the majority of the people who really want to work from home have honest problems with commuting and family and so forth. They're real issues.
The problem with our joint view is it's not supported by the data.
The data indicates that productivity is actually
slightly higher when you allow work from home.
So you and I really want that company of people sitting
around the table and so forth.
But the evidence does not support our view.
Interesting.
Is that true?
It is absolutely true.
Why are Facebook and all these companies, and Snapchat, rolling back their remote-working policies?
Not everyone is.
And most companies are doing various forms of hybrids,
where it's two days or three days or so forth.
I'm sure that for the average listener here
who works in public security or in a government,
they say, well, my god, they're not in the office every day. But I'll tell you that at least for the industries that
have been studied, there's evidence that allowing that flexibility from work from home increases
productivity. I don't happen to like it, but I want to acknowledge the science is there.
What is the advice that you wish you'd gotten at my age that you didn't get?
The most important thing is probably: keep betting on yourself, and bet again, and roll the dice, and roll the dice. What happens as you get older is you realise that these opportunities were in front of you and you didn't jump at them. Maybe you were in a bad mood, or you didn't know who to call, or so forth. Life can be understood as a series of opportunities
that are put before you and they're time limited.
I was fortunate that I got the call
after a number of people had turned it down
to work with Larry and Sergey at Google.
Changed my life.
But that was luck and timing.
One friend of mine on the board at the time, and I was very thankful to him, said,
but you know, you did one thing right.
I said, what?
He said, you said yes.
So your philosophy in life should be
to say yes to that opportunity.
And yes, it's painful, and yes, it's difficult,
and yes, you have to deal with your family,
and yes, you have to travel to some foreign place
and so forth.
Get on the airplane and get it done.
What's the hardest challenge
you've dealt with in your life?
Well, on the personal side, you know, I've had a set of personal problems and tragedies, like everyone does.
I think on a business context,
there were moments at Google where we had control over an industry and we didn't execute well.
The most obvious one is social media.
At the time when Facebook was founded, we had a system which we called Orkut, which
is really, really interesting.
And somehow we did everything else well, but we missed that one.
Right.
And I would have preferred, and I'll take responsibility for that.
We have a closing tradition on this podcast
where the last guest leaves a question for the next guest,
not knowing who they're going to be leaving it for.
And the question left for you is,
what is your non-negotiable?
Something you do that significantly improves everyday life.
Well, what I try to do is I try to be online
and I also try to keep people honest.
Every day you hear all sorts of ideas and so forth, half of which are right, half of which are wrong.
I try to make sure I know the truth
as best we can determine it.
Eric, thank you so much.
Thank you.
It's such an honor.
Your books have shaped my thinking in so many important ways.
And I think your new book, Genesis,
is the single best book I've read on the
subject of AI, because you take a very nuanced approach to these subject matters. And I think
sometimes it's tempting to be binary in your way of thinking about this technology, the pros and
the cons, but your writing, your videos, your work takes this really balanced but informed approach
to it. I have to say, as an entrepreneur, the Trillion Dollar Coach book as well, I highly recommend everybody goes and reads it, because it's just a really great manual for being a leader in the modern age and an entrepreneur.
I'm going to link all five of these books in the comments section below.
The new book Genesis comes out in the US I believe on the 19th of November.
I don't have the UK date but I'll find it and I'll put it in.
But it's a critically important book that nobody should miss.
I've been searching for answers that are
contained in this book for a very, very long time.
I've been having a lot of conversations on
this podcast in search of some of these answers and I feel
clearer about myself, my future,
but also the future of society because I've read this book.
So thank you for writing it.
Thank you. Let's thank Dr. Kissinger.
He finished the last chapter in his last week of life, on his deathbed. That's how profound he thought this book was.
And all I'll tell you is that he wanted to set us up for a good next 50 years.
Having lived for so long and seen both good and evil,
he wanted to make sure we continued the good progress we're making as a society.
Is there anything he would want to say?
Any answer he gave would take five minutes.
A remarkable man. Thank you, Eric.
Thank you.
I'm going to let you in on a little bit of a secret, and you're probably going to think that I'm a little bit weird for saying this, but our team is our team because we absolutely obsess about the smallest things. Even with this podcast, when we're recording this
podcast we measure the CO2 levels in the studio because if it gets above a thousand parts
per million, cognitive performance dips. This is the type of 1% improvement we make on our
show and that is why the show is the way it is. By understanding the power of compounding
1% you can absolutely change your outcomes in your life. It isn't about
drastic transformations or quick wins, it's about the small consistent actions
that have a lasting change in your outcomes. So two years ago we started the
process of creating this beautiful diary
and it's truly beautiful. Inside there's lots of pictures, lots of inspiration and motivation as
well, some interactive elements and the purpose of this diary is to help you identify, stay focused
on, develop consistency with the 1% that will ultimately change your life. We have a limited
number of these 1% diaries and if you want to do this with me then join our waiting list. I can't guarantee all of you that join
the waiting list will be able to get one but if you join now you have a higher
chance. The waiting list can be found at thediary.com. I'll link it below, but that is thediary.com. Bye!