The Megyn Kelly Show - The Benefits and Dangers of Artificial Intelligence, with Nick Bostrom and Andrew Ng | Ep. 151
Episode Date: August 26, 2021
Megyn Kelly is joined by Nick Bostrom, director of the Future of Humanity Institute and author of "Superintelligence," and Andrew Ng, founder of DeepLearning.AI and co-founder of Coursera, to talk about what artificial intelligence (AI) is and what's coming in the future, what "superintelligence" is and why it's dangerous, what happens to how humans work when AI runs the world, how AI can help with healthcare in the future, the America vs. China arms race in artificial intelligence, transhumanism and cryogenics, self-driving cars, 3-D printing, and more.
Follow The Megyn Kelly Show on all social platforms:
Twitter: http://Twitter.com/MegynKellyShow
Instagram: http://Instagram.com/MegynKellyShow
Facebook: http://Facebook.com/MegynKellyShow
Find out more information at: https://www.devilmaycaremedia.com/megynkellyshow
Transcript
Welcome to The Megyn Kelly Show, your home for open, honest, and provocative conversations.
Hey everyone, I'm Megyn Kelly. Welcome to The Megyn Kelly Show.
Oh, we have a fascinating show for you today. Fascinating.
It's about artificial intelligence.
I've been asking my team to line up a show on this and we have the two greatest guys,
the most brilliant, just greatest guys to talk about it with. Don't you want to know where this is going? Right? Like, okay, there's Amazon Alexa and then there's something called
super intelligent computers that are going to take over the world and possibly eliminate humanity.
Opposite extremes. It can be wonderful and it can be life changing in a great
way. And it could also potentially be life extinguishing if it gets into the wrong hands
and so on. So we've got all these angles covered. You're going to love, love, love this show.
We're going to kick it off with a guy named Nick Bostrom. He's a professor at Oxford. He's the
director of something called the Future of Humanity Institute. He's done so many
things. He's been a teacher at Yale. He did his postdoctoral fellowship at Oxford. He's the
founding director, as I say, of this Future of Humanity Institute. That's at Oxford as well.
Researches the far future of human civilization. A professor of philosophy at Oxford. He has been
included in Foreign Policy's Top 100 Global
Thinkers list repeatedly. He was listed by Prospect Magazine in their list of world's top
thinkers. You get it? Sensing a theme. And he's probably best known for his incredibly best-selling book, Superintelligence: Paths, Dangers, Strategies. It's been recommended by everyone from Elon Musk, who's a huge fan
of our guest, Nick Bostrom, to Bill Gates. And he is one of the leading thinkers on super intelligence, what it is, where it's going, and how it needs to be handled. That's sort of
where the machines become smarter than the humans. Now we're going to talk to him. Then we're going to be joined by a guy named Andrew Ng. He's also incredibly brilliant. So excited to talk to these guys. He's the founder of DeepLearning.AI,
co-founder of Coursera. Coursera is huge. This is the world's leading massive open online courses
platform. He's also an adjunct professor at Stanford. He was the founding lead of the Google Brain team. He coined the term Google Brain. He was the chief scientist at Baidu, which is China's Google. There's no Google in China. This is China's version. I mean, this guy's led a 1,300-person AI group for China's Google. All right. So he's basically been in charge of everything.
He's a globally recognized leader in AI. And I would describe him as more of a happy warrior when it comes to AI. Very optimistic about it and what it can do, and he'll talk about how it could change your life for the better. And I think you're going to be delighted with the show. And I predict you'll be sharing it with everyone. OK, so we're going to start with our guests in one minute. Real quickly, here's this.
There is so much that I want to go over
with you. Just treat me like I am AI 101 because I know almost nothing about this field, but am
dying to know more. And just having read what I've read now of your work and having listened to your TED Talks and so on, I'm terrified.
I'm terrified. So let's start here. What is super intelligence?
I just use it as a term for any form of general artificial intelligence that greatly surpasses humans in all cognitive abilities.
And so in other words, when the machines get smarter than we are?
Yeah.
Okay.
And how likely is it to come into existence?
I think it is highly likely that it will eventually come into existence.
I think almost a certainty if we avoid destroying ourselves through some
other means before. But if science and technology continue to advance on a wide front, then I think
eventually we'll figure out how to produce high-level machine intelligence and super intelligence.
Is it in the works right now?
Well, I mean, in some sense, it has been in the works for a long time in that
people have been trying to understand better how the brain works, how to use statistical methods
to better extrapolate from past data, how to build faster computers. All these are potential
ingredients. And of course, the field of artificial intelligence has really burgeoned in the last eight years or so with the deep learning revolution.
And so there's quite a lot of excitement now about what is becoming possible with machine learning.
But predicting how far we are from being able to match and then maybe surpass human level intelligence is
really hard.
And I think we just have to acknowledge that there's enormous uncertainty on the timeline
of these kind of things.
Mm-hmm.
Now, we're going to be joined after you by another guest whose belief, the way he phrases it, is that there are two types of AI. There's ANI, artificial narrow intelligence, and AGI, artificial general intelligence. And he says artificial narrow
intelligence is basically like the stuff we've seen already where you're typing on your computer
and it recognizes the word you're typing and completes it, you know, or you're, I don't know,
maybe Amazon Alexa or the self-driving car,
like those things that are improving our day-to-day living. But general intelligence
is what you're talking about, super intelligence, which is, that's a whole different realm. And
that's the thing, as I understand it, that you're sounding the alarm on.
Yeah. Or at least trying to draw attention to it as something that could be very important.
I think it has an equally large upside if we get this transition to the machine intelligence era right.
But I do think also there are significant risks associated with this.
But yeah, I think it is useful to make this distinction between kind of specialized AI systems that can only do one thing.
Maybe sometimes at the superhuman level.
So for a long time, we've had chess computers that can beat any human.
But contrasting that to something that matches humans, say, in our general learning abilities
and reasoning abilities that make it possible for a human to learn any of thousands of different
occupations or to solve novel problems that you've never seen before and to use common sense.
So why would we be seeking superintelligence? Because we're going to get into the risks of it
and the possibility that machines not only get smarter than humans, but actually take over the
world and possibly eliminate humans, why
would we be even going down that route?
Why wouldn't we have just seen that future and said, why would we create another being
on Earth that's smarter than we are that could take over this planet?
For the most part, we are not seeking super intelligence, but greater intelligence.
Like, you have some AIs today and it'd be nice if they made fewer errors
and were a little bit more capable.
Then of course, if we succeed in that,
we would want them to be better still.
So it's not so much that there are a lot of people
who are specifically trying to create super intelligence,
but there are huge drivers for making progress
in having better forms of machine intelligence.
I mean, in general, it's not as if
human civilization has some kind of great master plan either, right? I mean, we are not sort of
having 100-year plans for which technologies we're going to promote and which less. So
for the most part, things just happen. And there are these local reasons why people do things.
And that, I think, is also true for the field of AI.
I know that you've said a possible scenario is we create a machine,
a computer that has general intelligence below human level, but is superior mathematically.
And in this scenario, human beings understanding the risks of creating
a super intelligent machine would take safety measures. They would pre-program it, for example,
so that it would always work from principles that are under human control. We would try to box it in
with limitations. We would try to be careful. How do you see the possibility of that machine
that we've tried to take these precautions with, nonetheless on its own, becoming a super intelligent being, for lack of a better word?
Well, so I think we will keep trying to make machines smarter.
And if we succeed in this, at some point, they will become smarter than us.
I think at that point, once you have maybe even weak super intelligence,
development is likely to be very fast for various reasons. For a start, at this point,
the technology would be extremely economically valuable. So massive investments would flow in
to running these AIs on even larger data centers or applying even more human ingenuity to improve
them still further. At some point also, you might get this feedback loop when the AI itself is able
to contribute to its own further improvement. So you might get a kind of intelligence explosion
where you go from something maybe just slightly human level to something radically super intelligent within a relatively brief span of time.
And then the question becomes, would we be able to steer what such a super intelligent system would decide to do?
Like it would be very powerful for basically the same reasons that we humans are very powerful on this planet today
compared to other animals. The gorillas are much stronger than us and cheetahs are much faster and
yet the fate of the gorillas depends a lot more on what we humans decide to do than on what the
gorillas do. So if you have something that radically outstrips us in terms of its general intelligence, its ability to strategize, to develop new technologies, then it might well be that the future will be shaped by its preferences and its decisions.
And it might be non-trivial for us to make sure that those are aligned with our human values, especially if we need to get it right on the first try.
Right. You've been saying that for a while, saying if we're going to do this, we have to
make absolutely sure that they are aligned with our human values and there are all sorts of dangers
in doing it anyway. I mean, who's going to determine what the values are and what if not
everybody's on the same page, and what if we do it, but it gets into the wrong hands and, you know, people misuse it and so on. But let me just stick with the gorilla thing. That's interesting. So, I mean,
cause I've heard you use the example of the tiger too. The reason the tiger gets in the cage and can be controlled by us is because we have superior intelligence to it. So it may be
more powerful, more lethal, but we're smarter. And so we can trick the tiger into the cage and keep it there.
And the same is true of the gorilla.
And so in the scenario where we have a super intelligent machine, we're the gorilla.
Well, that would be one type of scenario or one type of risk that could arise from future advances in AI, that the AI itself somehow takes over or runs amok or is poorly aligned.
I think there are also scenarios in which we maybe manage to tie it to our purposes,
but then we do with it as we have done with practically every other general purpose technology
in human history, that we've also used it for a lot of bad ends to oppress each other, to wage war against each other.
And so that's another way in which advances in AI could turn out
to be harmful if they become a means of kind of amplifying human conflict
or if they empower more people to develop other dangerous technologies,
like maybe you could use AI to more rapidly invent new biological warfare agents or something like that, that might proliferate.
So I think there are several distinct classes of dangers that one would have to be aware of as we move into this future.
Well, I know, I mean, you think that if you're the creator of it, you can control
it, right? You can program it such that it won't get smarter than you. And it won't, how could you,
I look at the computer on my desk, how could it ever control me? How's that? It doesn't seem
possible that, because you're not talking about robots running around, you know, threatening us
with knives and guns. You're talking about this thing, this thing sitting on the desk,
getting smart enough that somehow it's controlling humans. And you think about that
in the abstract and you think, how could that ever, that doesn't make any sense. How could this
thing sitting on my desktop ever control me? Yeah. I mean, presumably not the thing that
actually sits on a desktop now, but I mean, you're right. It's easy enough to not develop
super intelligence for any one individual or group. But I think it's likely that we as a civilization will nevertheless do it. And I think actually probably we should be doing it. I see it kind of as this portal, in a sense, that all plausible paths to a really great future will eventually go through. Now, it might be that it would be wise for us to go a
little bit more slowly as we approach this gate, so we don't kind of slam into the wall on the
side. We certainly should be very careful with this transition, but I think it's kind of unrealistic
to think that everybody, all the different countries, all the different labs would decide
to refrain from pushing forward with this when it
has such enormous potential for positive applications in the economy, for medicine,
for security, for arts and entertainment, for practically any area at all where human
intelligence is useful, which is pretty much any area. So I think our focus should be not so much
should we do it or not, but how can we
position ourselves in the best possible ways? Do the research in advance, say on how to align,
to find scalable methods for AI alignment, try as much as we can to build cooperative
institutions and norms and practices around the deployment of AI
and then proceed cautiously.
But can you walk us through that scenario for people who don't, I mean, this is a big
concept for folks who don't work in your field.
How could it ever be that the machines would take over?
I mean, I know you've spoken about, look, it could happen.
They could be controlling us.
They could control all the other computers and things.
Humanity could cease to exist. And we need to be cognizant of this possibility. But how?
Well, I mean, so if we look at, say, humans, who have caused a lot of mischief over the course of history, it's for the most part not because they used their own personal bodily strength to, like, wield a sword and go around chopping people's heads off.
They use maybe their pen or their voice to issue commands
to persuade others and then thereby to exert great influence.
So those modes of action would be available
even just to a laptop sitting on a desk.
If it could print text on a screen, I think that's already enough
for a sufficiently great intelligence to be very powerful.
But of course, there is no reason to think it would have to stop
with these indirect methods.
You could maybe persuade humans to be your arms and legs,
to do your work in some lab, to develop different robotic
systems that you could use or hack into, or maybe develop some kind of nanotechnology that would
then give you more direct access to the world. I think there are many ways with a sufficient level of intelligence to kind of think above and around and through humans and achieve your ends.
It's also likely that if we develop this, we would want to give them access to a lot of stuff because that would make it more useful, right?
If you could have an AI that drives your car, that's more useful than an AI that just sits and tells you how to drive the car.
If it could run your factories, if it could pilot your airplanes, maybe we will have a lot of robots
by the time this transition happens so that there would be an even more ready-made infrastructure
for it to tap into. Let's talk about the factories because I've heard you say that
this super intelligent computer or these computers could, quote, create nanofactories covertly distributed at undetectable concentrations in every square meter of the globe that would produce a worldwide flood of human killing devices on command and that AI would then achieve world domination.
What?
That doesn't sound good.
No.
And I think there is a kind of, I mean, it's kind of almost by definition impossible for
us to know exactly what the best strategy would be that would come into view if you
were a malicious super intelligence, because kind of by definition, it can think much more deeply in the strategic space than we can. But I think what that particular
scenario is meant to illustrate is the idea that one of the things that a super intelligence
certainly could do would be to invent new technologies that we can already see are
physically possible that we haven't yet, however,
been able to actually manufacture and build
because they involve a lot of detail work
to kind of figure out the specifics.
But if research were done on a digital timescale
rather than on a kind of slow biological human timescale,
then these futuristic technologies
might become available quite quickly
after you have super intelligence.
And then using those futuristic technologies would possibly be one way to leverage its power
and get the kind of advantage. Not the only one, but I think that's one possible path. Whether it
would be specifically by developing nanorobots or whether there's some other technologies that we haven't yet thought of, I think we'll just have to be agnostic about.
It makes me think of the movie War Games to go back to the 1980s when I grew up.
And in War Games, they created this computer that could help with nuclear war and planning out the war games that the United States was going to be
engaged in with presumably Russia. And they couldn't stop it. The computer sort of got a
mind of its own. It decided it was going to launch the missiles anyway, even when they had figured out how to, like, turn off the computer. But when they were trying to deal with it first, one guy says, why don't you just unplug the damn thing? Right. And that wasn't going to work. And then even when they found a way of
telling it to stand down, it wouldn't stand down. It had a mind of its own and it kept going.
Just to bring it to sort of an example that a lot of people may have seen. Is that basically what
we're talking about? That once you've said before, it may not be possible to put the genie back in
the bottle. Once we create
the thing, it's not going to be so simple to just unplug it or tell it not to do the thing that we
find awful.
Uh, yeah. I mean, if the apes that we evolved from were still around and they thought, well, maybe these humans were a bad idea, it would kind of be hard for them to unwind what had happened.
And similarly, if you have some super intelligence,
once in existence, it might have strategic incentives
to avoid us shutting it down.
And so if it's very skillful at achieving its goals,
then in particular, it would be skillful at achieving this goal
of preventing its own shutdown.
Or maybe it will make surreptitious copies in other computer systems so that it doesn't matter if the original is terminated or spawn other sub-agents that can execute on its preferences.
So I think we shouldn't rely on that as the sole method of ensuring a safe future, that we build systems, we don't bother to align them, just to see what they do, and then planning, well, if things go wrong, we just unplug it.
In this country, we've had a couple hundred years of feeling pretty well protected from the world, thanks to these oceans that surround our country on the east and the west coasts.
And obviously with nuclear weapons, that's less the case.
But we've reached a detente with other nuclear powers that we understand it would be mutually assured destruction.
And we don't launch those for the most part.
This threat doesn't recognize oceans, boundaries.
This makes everyone accessible.
If, as you've posited, the supercomputer,
the super intelligence can create drones
that come right to your doorstep and drop a bomb
or create an office robot
that may be cleaning the carpets at night,
but then assassinate the CEO when they turn around.
Like there's all sorts of nefarious ways
in which it could be unleashed on people worlds away.
Yeah, I think with respect to superintelligence,
like, yeah, I think we're all kind of eggs
in the same basket,
at least with respect to this class of dangers that arise from the AI itself doing something that is not in line with its creator's intention. So yeah, I think we have a common cause there to try to figure out how to align these systems. And I mean, I'm reasonably hopeful about that.
When I was writing this book of mine,
I think it came out in 2014,
this was an almost entirely neglected field.
It looked like we were moving towards
developing the most important technology ever.
And hardly anybody was thinking about
what would happen if we were succeeding
in this goal of AI, which has all along been to not just do specific tasks, but to make machines generally smart like humans. But it was
like that was such a radical goal that imagination exhausted itself in just conceiving of this
possibility of matching humans, that it couldn't take the obvious next step, that if we reach that,
we will have super intelligence and then thinking about the consequences. So yeah, drawing attention to that
was a big part of the reason for writing this book.
But since then, there has now sprung up
a kind of technical subfield of people doing serious research
and actually trying to figure out how to align
arbitrarily capable AI systems
by harnessing their ability to learn, to make them better able to learn and understand what our intentions are when we ask them to do something or train them on specific tasks.
That's crazy that that was only seven years ago, that this wasn't
even being discussed that seriously by those academics and so on who are now taking such a hard look at it.
Meanwhile, this is probably going to be an industry that employs many of our children, grandchildren and so on.
Yeah, I mean, I think, I mean, certainly if there are advances in AI, it's going to have a big economic impact.
I mean, it might be that if you get super intelligence, then the effects on employment will be...
I mean, at some point, if you have sufficiently generally capable AI,
basically all jobs become automatable. So I think in a good scenario,
I mean, in some sense, the goal is full unemployment, right? So the idea is to try to develop technologies so powerful that we don't have to
do stuff we don't like to do.
And so
if you define work as the kind of thing
people have to pay you to do, then
yeah,
almost all of that could
theoretically be done by a sufficiently
capable AI system.
It sounds totally unfulfilling. It sounds awful.
Well, I think it would be a situation where we would have to rethink a lot of our assumptions about what it means to be human,
kind of from the ground up.
I actually believe that there would be some extremely wonderful possibilities that would be unlocked by this.
But it would require a pretty ground-up rethink.
We would, for example, have to find our dignity
and meaning in life, not in what we do
for a living or being a breadwinner, but in other areas, in relationships,
in hobbies, in things we do for our own sake rather than as a means to some other end.
But yeah, I mean, I think it would be a kind of high quality problem to have for us. I think first we need to make sure we don't kind of crash into something on the way there.
Well, and before we get to sort of the benefits, let's talk about the possibility of a terrorist getting a hold of this technology if we create it or it creates itself from something we've created, or even
an actor like China, which is very advanced in the AI field. And our defense secretary has made
clear that this is an area in which we're not ahead. At best, we're equal with China. It's
not like our military is so much more powerful than theirs. It is. But I'm just saying in this
department, which is a potential security threat,
they're on par with us and they're working it
and they aim to be the world leader in AI.
And we don't trust China for good reason.
So we do need to be worried about what they're going to create.
Not to mention, as I say,
somebody more nefarious like a terrorist actor.
So what is the likelihood of that?
Well, I think at present, the West is ahead in AI,
certainly in this kind of basic research
of trying to develop general artificial intelligence.
But it's not a huge lead.
It's not like it's 20 years ahead or something like that.
The field is very open.
Researchers publish their findings and so other teams can catch up within six months
or a year or so.
I'm not so worried really about terrorists using AI for particular things.
I would be more worried about terrorists using, say, biological weapons, which at the moment
would be a lot more destructive
and are also becoming much easier to use or obtain
through advances in synthetic biology.
But it is possible that AI will become one dimension
of a great power competition as it becomes an increasingly important both economic factor and also factor in national security. Because I know you've said that the first superintelligence to be creative will have a decisive first mover advantage, that there will be a lot of power in being the first one to come up with it.
And so, I mean, how worried should we be that somebody not all that friendly to the United States will be the person who has it?
Yeah, well, I mean, it's possible that it would have this decisive first mover advantage.
I'm not at all sure about that.
You could also imagine scenarios in which the transition happens a bit more gradually.
If it's not like an overnight or over a week thing where you get from human to radical super intelligence, multiple labs or countries being more or less going through this transition in tandem.
And you might then have a multipolar outcome.
But yeah, I do think its potential for exacerbating conflicts of different kinds, or empowering, say, despots to make themselves more immune from overthrow
by intelligence applications, surveillance applications,
and so forth, that is certainly one concern.
I think it will be important to try to manage that,
both to kind of avoid conflicts on that,
but also because I think it might make the first danger harder to avoid, the danger coming from the AI itself. Like, if you're thinking about this, suppose you are some researcher. You've got to the point where you have something almost human level. You think, with a bit more work, we can make it super intelligent.
Ideally, at this point,
you would really want to go slow, right?
And really check everything, double check it,
make sure it's all right,
increment it step by step,
not just turning on the gas full throttle.
And maybe over several years,
like trying to do this while having a lot of people helping you
and looking over what
you're doing to make sure it's right.
But if you are in some kind of arms race, then that might be very hard to do.
It might basically mean that if you go slow, it just means you lose the race and become
irrelevant.
So you feel forced to rush ahead as quickly as you can.
And then you throw caution to the wind.
And then this risk from the AI itself creating destruction will increase.
So the two problems are connected.
Yeah, it's kind of like a Dr. Frankenstein situation.
It's a Frankenstein situation, right?
Where the entity you create becomes super dangerous
and turns on you, even though we're so full of hubris,
I think most humans would believe
that they could
continue controlling a machine.
Again, it's hard, I think, to conceptualize that the machine I control now will someday
have the capability of controlling me.
But I know you've said the potential for this superintelligence right now is lying dormant, but it's akin to the power of the atom and how it lay dormant through much of our human history until 1945, at which point it was very much not dormant.
And we saw its power in really raw and disturbing ways.
Yeah, I think in general, we are kind of reaching into this giant urn of invention.
This is almost like the picture of human history.
We reached in, we pulled out one ball after another, one idea, one technology.
And I think we've kind of been lucky so far in that, for the most part, the net effect
of all this technological progress has been hugely positive. But if there is a black ball in this urn, some technology that is just such that it invariably destroys the civilization that invents it, it looks like we're just going to keep reaching into this urn until we get the black ball, if it's in there, right?
And while we have developed a great ability to pull balls out of there, we don't have an ability to put them back in. We can't un-invent our inventions. So it looks like our strategy, such as it is, is basically just to hope that there is no black ball in the urn. And I think not just AI but some other technologies as well could be potential black balls. I alluded to synthetic biology before, which is one area where we might discover means that would make it a lot easier to create kind of highly enhanced pathogens. In some sense, we were lucky with nuclear weapons. They are enormously destructive, but at least
they are hard to make.
You need highly enriched uranium or plutonium
to be able to make a nuclear bomb.
And that requires large
facilities, huge amounts of energy.
Really only states can do this.
But suppose it had turned
out that there were an easier way to do this,
to unleash the energy of the atom.
Before we actually did the relevant nuclear physics, how could we have known how it would turn out? But if there had been some easy way, like baking sand in the microwave oven between two nickel plates or something like that, right, then that might have been the end of human civilization once we discovered how to do that, because then anybody would be able to destroy a city.
And in a sufficiently large population,
there's always going to be a few individuals
who would choose to do that, right?
Whether because they're mad or they have some grudge
or they have some extortion scheme or some ideology.
So we can't really afford a kind of democratization
of the ability to cause mass destruction.
But if we discover some easily implementable recipe for this, then it looks like we are
in a pretty dire situation.
Up next, how important is it going to be to have ethicists involved in creating these
super intelligent beings, right?
It's not just the people who can make the machines function. It's those who can lay in some sort of an ethical code.
And is that even possible? And then we're going to get into the future of humanity and how it
can be helped by technology. This is something that Nick has studied for a long time. Could
there be something like an anti-aging pill coming our way? And how far away is that?
And also cryogenics, is he going to freeze himself and why? And should you? Stay tuned.
I know you've said you could use this technology, or some bad actor could, or the super intelligent computer could, for ethnic cleansing. I mean, imagine if Hitler had control over this type of technology where he could target, you know, some particular
groups this way, it would be efficient. It could be a killing machine. And we need, this is why we
need ethical people creating the technology, if it gets created at all. And that leads me to
something I heard recently. And I know you've been saying all along, which is not only are we going
to need, if we're going down this route and we're going to create super
intelligence or something less than super intelligent computers for the good that they can
do, one of the most important roles we're going to have in this process will be ethicists,
philosophers. It's not all about kids now who are in robotics or kids who are somehow trying to study AI. We're going to need people
who consider and can even program for the ethics of a well-meaning life, of a well-meaning existence.
Yes. I mean, not necessarily people whose job title is ethicist at some university, but yeah, certainly ethics and other sources of wisdom about what we want and what we should be wanting, I think, will be important. It's not just a purely technical problem. It's a kind of all-of-society problem, probably, how to figure out how to create the happy world with this new technology. I mean, it will have economic implications, right? We alluded to the security implications before.
And then more cultural dimensions are like,
what do we want the role of humans to be
versus our technology and automation in the future?
I think it's like, yeah,
it needs to draw on all the different aspects
of human wisdom, such as it is.
It's not much to boast about, but we'll have to do our best at least.
Yeah, I think it will require a much wider purview than just a narrow technical focus.
Although the technical focus also is really important for AI alignment.
I've read that many leading researchers in this field say it's extremely likely this will happen by as early as 2075 to 2090.
That's in the lifetime of our children right now.
Do you agree with that?
Could it happen that soon?
Yeah, I think that's a real possibility.
That's hard to get your arms around, right? Like, my own children, alive right now, could be dealing with computers in the corners of the world that are trying to erase humanity.
Well, I mean, or save humanity, right? That's the other possibility. Or help.
Well, I'm not worried about that one. That one sounds good.
I mean, that would be what people are aiming for almost universally,
like AI researchers.
They are usually quite well-meaning and idealistic people.
Some are just curious about what they're doing and think it's fun.
Some have a kind of vague general sense of wanting to do something good
for the world.
So I think the intention is positive.
And it's just the outcome that there is more uncertainty around.
Right.
Because the dark side is this could be more dangerous than any pandemic, than nukes, than
catastrophic climate change.
This could be more than an asteroid.
This could be the thing we're not really paying that much attention to, to your example
about what was going on in 2014, uh, that really is an existential threat to humanity on the earth.
Um, and on that front, one of the things I wondered in reading about you, Nick, is whether you worry about anything other than this. I mean, if this were my world that I were immersed in full-time thinking about it, I don't know that I'd worry about anything else.
Would I worry about the crime rate?
Would I worry about the erosion of free speech we're seeing?
Government power growing too large?
Some of it relates, but I just...
Do you walk through the day worried about nothing other than this?
No, I mean, I think it's useful maybe to have some division of labor here.
And also, I think it might, for somebody like me, be worth trying to focus more efforts on relatively neglected areas, where one extra person like me might make a bigger difference than on global warming, which so many thousands of people have been worrying about. And I think it's a smaller concern, but it's not the only one. I think certain advances in biotechnology are quite
concerning as well. That
might just make it too easy to create really dangerous stuff.
I mean, to make it concrete, so we have DNA synthesis machines that can print out DNA strings
if you have a digital blueprint. And we also have in the public domain the DNA sequence of a lot of
really dangerous viruses and ideas how to make them even more dangerous.
So as this synthesis technology becomes good enough
to print out whole viral genomes,
then, I mean, you just connect the dots.
That would give anybody with access
to these DNA synthesis technologies
the ability to create things far worse than COVID.
And at the moment, like anybody can buy one of these DNA synthesis machines.
And if that continues to be the model as they improve in capacity, then we soon get to an
unsustainable situation.
So there needs to be some kind of global regulatory framework, I think, imposed on the DNA synthesis
market where DNA synthesis is
provided as a service. Like if you're a researcher, if you need a particular string, there's maybe
five or six companies in the world; you'd send a request and get your product back in a vial.
You don't need to have the machine yourself in a lab. Or if you do, there would have to be some sort of controls. That's just one example, but it's a real cauldron.
People are inventing so much cool new stuff in biotech all the time that new ways of creating
mostly good stuff will come into view, but there could also be some bad stuff in there.
And it's a kind of wild west at the moment. The ethos is very much open science. Like,
yeah, let's encourage biohackers. Let's just make it available to all
because that's nice.
You know, everybody has equal access to it.
But with some technologies,
like with, say, nuclear weapons,
we don't think that's the right way.
And I think biotechnology will similarly have to change.
Yeah.
Yes.
Yeah.
I mean, I think I can speak on behalf of my audience and say that we're glad that people like you, and I know Stephen Hawking has joined you, and many others have sort of
sketched out some priorities for making this field safer and imposing some guidelines so that we
don't just jump into this willy-nilly. I'm glad you're out there. It reminds me, my husband used to run an internet security firm, and he used to speak of the white hat hackers
and the black hat hackers. And, uh, you know, the Russians have black hat hackers who will try to
get into your bank accounts and your private information and so on. And he was in a firm that, he would say, employed the white hat hackers who understood how to do all that stuff,
but would try to stay one step ahead of the bad guys to protect you. And it seems like this is a field where everyone needs to have the
same devious skills, but we have to make sure we have more people employing them for good than for
bad. Um, let me switch topics with you for one second, because I do know that you, is it transhumanism that you've described yourself as being into? I think your proselytizing about the medical field and what could happen for humans during our lifetime to make life better, to improve life when it comes to these scientific advances, made me feel hopeful. You know, a potential anti-aging pill, the potential to bring one back, thanks to cryogenics, after one dies.
Can you just talk about that a bit?
Yeah.
So, I mean, I think I'm sometimes mistaken for some kind of technophobe,
because I spend a lot of time describing some specific concerns or dangers.
But, I mean, broadly speaking,
I'm very excited about the potential to improve the human condition through advances in technology,
and including by enhancing the human organism itself in different ways. I mean, I'd love to
see some kind of anti-aging pill that really works, or something that could make us smarter,
or improve our quality of life in other ways.
And so sometimes that's referred to as transhumanism. I don't tend to use the word very much because a lot of people use it for a lot of different ideas, which are kind of kooky. Yeah. And it attracts some kind of miscellaneous crowd that I wouldn't necessarily agree with on all issues. So, um, but yeah, I think there's a lot of room for improvement.
Do you think academics are with you on that desire? Because I've heard you say something like there are a lot of people in the field who are like, why, what about overpopulation? And why would we want to extend our time here on earth significantly, right? Like, the boredom of living longer is reason enough not to do that. So do you think we have an academic field that is devoted to developing things like that?
Well, I think this has been shifting slowly but steadily over the last 20 years or so. There certainly used to be an extremely strong double standard. If we take, say, aging: doing something about aging was a big no-go, because people couldn't see why you would possibly want to do that. But then at the same time there were billions in funding to try to fix cancer, to fix heart disease, to fix diabetes, like
to fix wrinkles, like all these things, right?
But then like the sum total of that is like aging, like aging is what makes you more vulnerable
to cancer, to heart disease, to diabetes, to wrinkles.
And so for exactly the same reasons that you might not like these diseases and symptoms,
for the same reason, you might not like the underlying thing that is
creating a huge fraction of all of that. So it's not as if there was some weird special reason you would have to have for favoring life extension or anti-aging. It's just exactly the same reasons at play as in all these other cases. But for some reason there was this mental block that a lot of people had. I think actually it was because the anti-aging thing seemed unrealistic and fanciful, so it was not evaluated by the same standards that, like, a pill that is likely better for treating some cancer would be evaluated by. It was more evaluated by reference to some traditions, or like some kind of spiritual wisdom, where you're supposed to accept the fact that you're going to die, and that's a sign of wisdom, and you should come to terms with it rather than fight against it.
So the issue was kind of placed in that mental bucket, rather than in the bucket of: this is something we can actually work on.
Yeah, but you raise a good point.
If that's your mindset, then why get chemotherapy? Why not eat as many trans fats as you want, right? Like we all take steps to prolong our lives, even when we get bad diagnoses, even though we may be people of faith and understand how this is going to end eventually.
Yeah, prolonging the life of your body seems good. I mean, it's not to say that in all circumstances extending life further is good. I mean, if you kind of have a life
that's not worth living,
you're in some respirator
and you just kind of kept alive
under horrible conditions like that.
I'm not sure it's a great boon
if that can be like two years of that
rather than one year.
But if you have a good quality of life,
if you're healthy, you enjoy life,
maybe you can contribute to society in some way, then that seems like something that is definitely worth preserving.
Or just if we can
improve our senior years, you know, I mean, we've all watched our parents or grandparents deteriorate
mentally. Some would argue we're watching it right now with our president. OK, that was an
aside. But we've all watched that and thought to ourselves, oh, you know, I don't want that for
them. And if you could create a situation where we could live to our 80s or 90s, forget beyond, beyond would be delightful.
But even that old, but sharp with mental acuity, why wouldn't we want that?
It'd be great.
But can I ask you?
Yeah, yeah, go ahead.
Yeah, I think if and when it actually is possible, like if there is actually a pill on the market at some point that does this,
then I think a lot of the people
who were previously expressing skepticism
would kind of quickly come around.
How far away from that are we?
Well, I mean, from curing aging altogether,
I mean, I think it seems quite hard to do that.
I mean, maybe superintelligence would expedite that.
Like, I think actually a lot of these things
would happen soon after superintelligence.
Other than that, I mean,
we've been kind of on the verge of curing cancer now
for the last 50 years, right?
So we've made small incremental progress.
So the superintelligence is going to make us live forever
and then kill us all off. Something to look forward to. One or the other. Yeah. One of those.
Right. They may have buyer's regret. Can we talk about cryogenics for a minute? Because this is
something, you know, I think most of us grew up thinking, as I'm told now wrongly, Walt Disney
had himself frozen so that he could be brought back to life. But cryogenics is a real thing.
And I know you're in
favor of it. Can you talk about it for a minute?
Well, so it's the idea that we know that at sufficiently cold temperatures, basically all physiological processes stop. So I mean, you know, things last longer if you put them in your freezer, right? But if you put them in even colder temperatures, like in liquid nitrogen, then they can last for hundreds of years with basically no change. So the idea is, if somebody dies today, if you freeze them, then you preserve whatever is there. And of course, if you thaw them up, they are still dead because their tissue,
well, A, that was the original thing that killed
them, and B, the actual freezing process
creates additional damage. But if
you think that technology will continue to improve,
then maybe at some point in the future,
the technology will exist to reverse
whatever originally
caused their death and to
cure frostbite, the kind of damage
that happens during freezing.
So if you just preserve somebody long enough there, then there's some hope that the technology will one day exist to bring them back to life.
Unless you're sure that that technology will never be developed, it seems like the conservative thing to do would be to put them in liquid nitrogen.
And if it doesn't work, well, they'll be dead anyway.
So the downside is just minimal.
Yeah.
Well, would you want to come back?
I mean, one of the fears I would have is, what if they wake me up in the year 4000?
And it's terrifying.
It's like a caveman walking into 2021.
And nothing is familiar.
No one he loves is around anymore.
It seems like a nightmare in some ways.
Yeah.
I mean, it's obviously very hard to know what the future would be like that you would be brought back into.
And so I think, yeah, probably mostly comes down to whether somebody is like more of an optimistic person or a pessimistic person.
I guess that's true.
And what is the likelihood that the people in the future who may have these prolonged lifespans are going to want to come back and get people like you and me?
Well, forget me, but you.
I mean, you, they should want.
But seriously, given overpopulation concerns and the limited size of the earth,
why would they want to bring people back?
Well, I mean, it would be very cheap for them.
Given the resources in the future,
this would be like a drop in the bucket.
So if they have sentimental reasons
or ethical reasons for doing this,
I mean, I think certainly if there were somebody
who lived a thousand years ago and I could, you know, with a snap of a finger, you know, bring them back, I think that would be a very
nice thing to do.
What if it were this historical
figure who would be a source of wisdom?
Or even if it were just some bumpkin, right?
I mean,
if you could save a life, even if it were a life that started a long time
ago and had been sort of in suspension.
That would be nice.
Do you think, do you have reason to believe that in that scenario, the brain would still
have the information, you know, upon revival that it had on the way out?
That, you know, the brain retains information?
I think so.
I mean, it depends obviously somewhat on how you die.
So if you die in a fire or you're lost at sea, right, then it's kind of gone.
But in a reasonably good scenario, I think it's likely that the information is basically
preserved.
It's quite hard to destroy information.
So think if you have a book, right?
Okay, and you tear up all the pages into small pieces.
So now you can't read the book anymore.
But the information is still there.
Like in theory, you can put the pieces together again
if you have enough patience.
And I think it's similar with the brain,
like the freezing damage will kind of,
there are ice crystals forming.
Actually, they are trying to use various antifreeze agents
and they are not literally freezing it, they are vitrifying it,
setting aside these technical details.
I think there is like some kind of shoving around of different pieces
that happen in this process, but the information is still there,
most likely, and with sufficiently advanced technology,
it should be able to
put it together. But my guess is if it happens at all, it will happen after superintelligence
as a consequence of superintelligence. Wow. The last sort of question I have for you is a practical one, which is given that we may be
looking at total unemployment, if there is super intelligence, total unemployment, what do you see
as an important area for kids today, for young people today, kids in college to be looking at,
or even younger. You know, I have an almost 12-, a 10-, and an eight-year-old. It used to be definitely
robotics. And we would talk about that a lot in schools today. But, like, where do we steer
our kids if they want to stay on the cutting edge of technology and future jobs? And, you know,
I know that one of the fields that artificial intelligence may take over, and is actually doing a pretty good job in right now, is radiology. They can read x-rays in certain settings. So I don't know, maybe you don't want
your kid to be a radiologist, but what is your thinking about sort of the wave of the future and
the likely good industries to be in?
Yeah, I mean, so for kids, I think it depends a lot on the kid. Like, you want to build on their unique strengths, and not everybody has to be a computer programmer, right? That's a small part of the economy. I think that, I mean, already today we are at an unprecedented time of wealth and prosperity relative to any other time in all of human history. And so in addition to having a focus on, you know, finding a profitable career for your kid, I think also equipping them to actually enjoy life and to find something meaningful in their life would be worthwhile. Because if not now, then when? I mean, I guess after the singularity, maybe.
So I think that would be important.
In terms of areas, I think computer stuff will continue to be important,
but so will many, many other areas in the economy.
I think maybe inculcating certain habits, like a habit
of continual learning, like flexibility, to be able and willing and feel empowered to think: there's a field you don't know anything about, some skill you don't yet have, well, I could try to learn it. There's so much information online, or training courses. It's easier than ever
to get access to new areas. So giving them that sense of personal agency, that I can take responsibility for what I want to do and I can figure out how to learn it. Or if I can't figure that out,
I can figure out who to ask. I think that will be useful across a very wide range of different
scenarios. They're going to have to stay nimble in the world that's coming their way. That's for sure. Thank you so much for your expertise. It's
been an absolute pleasure. Oh, no, I enjoyed it. All right. Up next, Andrew Ng. This guy was
super high up, both at Google and at China's Google. It's called Baidu. He was their chief
scientist and has led huge teams when it comes to developing the AI
of both groups. What does he think about all of this? And is your computer already spying on you?
Is your government spying on you? This is a guy who's been at some very well-known,
big leading corporations when it comes to artificial intelligence and data
amassing. What does he have to say? You're going to love this guy. He's next. Before we get to him, however, I want to bring you a feature we have
here on the MK show called From the Archives. This is where we bring you a bit of audio from
our growing library of content now nearing 150 episodes. Hard to believe. Today, we're going to
go back to our 69th episode and one of our most popular with Tulsi Gabbard, a veteran, former
congresswoman who
shared with us some stories of her time in the military, in Washington, D.C., and in the media
ringer. Here's just a bit on that, on the way she was covered during her 2020 presidential run
against now President Biden and VP Kamala Harris. Take a listen. The ones who were writing about you,
of course, this is the mainstream press's left wing, were writing bad things. And the ones who control the airwaves weren't giving you any
airtime. That's exactly right. And that's where the evidence of this kind of facade of a democracy
comes to the forefront, because you really have these corporate media interests who most care about ratings and entertainment and how they can create conflict, um, you know, on a debate stage, or push a narrative that they think will get more eyeballs to their screens. Uh, and I put social media in this category as well, combined with a party that was
pre-selecting who they wanted voters to hear from. And so that's where you saw a lot of,
hey, you know, they're changing the standards for the debates as they go along. You know,
just as, you know, hey, okay, we're ticking up a little bit in the polls
where we think we're going to qualify for another debate. Oh, sorry, rules changed, you know, the day before, or right when, you know, those new polls were coming out. And just other things, you know, the DNC saying, hey, you know, all presidential candidates,
if you want to be featured in any of our publicity that we're
putting out, then you got to fork up, I think it was something like $175,000 to the DNC just to be
included in their social media videos or whatever. And I'm just like, no, I'm not going to do that.
I got people across the country who are giving five bucks, 10 bucks contributing to my campaign
because they believe in the kind of leadership that I'll bring and the message and the truth
that I'm sharing with voters. And they're certainly not giving me a whole bunch of money to go and then pass it on to, uh, the DNC. And so ultimately that's where we saw, time and time again, even small, well, it's not that small, but things that went unnoticed. For example, CNN had a bunch of town halls where they featured different candidates.
They only gave me one. Most of the other candidates had more than one.
And someone called me one day and said, hey, I'm going through my, um, CNN, it's not DVR, but if
you go to CNN's, I guess, digital library, they had, uh, you could replay the town halls of all
the different candidates. They're like, you're not on here. Like, it's just not, it doesn't exist.
There's no option to find your town hall, but I can find every single other Democrat who ran
for president on here. And so there were things like that, and more blatant things, that made it very clear that if the media makes a
decision not to allow voters to hear from you, then A, voters really don't have the ability to
make an informed decision in a true democracy. And then B, the
reality is that if you want to talk about issues, if you want to get information to people so they
can make this informed decision, then clearly running for office is not the way to do it.
Gabbard has just struck a new deal with Rumble, the video social network YouTube competitor. So
I think we're about to hear a lot more from her in the weeks to come
and good. And we, in the meantime, will keep bringing you more of our best episodes
from the archives. Up next, Andrew Ng. You'll love him.
Thank you for being here. I'm excited for this conversation. We just wrapped up with Nick Bostrom, who wasn't totally anti-AI, right?
He's pro-AI, but has some concerns about, I think, what you call artificial general intelligence, AGI, the long-term game where you develop a machine that develops super intelligence. So let's just start there. What's your take on the likelihood
that we will develop super intelligent machines
in this century?
Nick Bostrom is an interesting character.
AI, as the new electricity, is transforming tons of industries,
revolutionizing the way we do things
in the United States and around the world.
As for artificial general intelligence,
I think we'll get there, but whether it'll take 50 or 500 or 2000 years to make computers as intelligent as
you or me or other people, I think that's a really long-term open research project.
It's an exciting question. Okay, I like 2000. 2000 makes me feel better than by the end of this century, when my kids are still, God willing, alive. You know, I think that one of the problems is that the whole field of AI is confusing in this way.
There's one type of AI called AGI, artificial general intelligence, machines that could maybe someday do anything a human could do, and another called artificial narrow intelligence, which is AI that does one thing really, really well and is really valuable.
Turns out over the last 10, 20 years,
we've had tons of progress
in artificial narrow intelligence,
those AI that do one thing really, really well.
So people say accurately,
there's tons of progress in AI.
I agree with that.
But just because there's tons of progress in AI
doesn't mean from where I'm sitting,
I'm candidly not seeing that much progress
toward artificial general intelligence.
So I think that's led to some of the unnecessary hype and fearmongering around AI. That makes me feel better. I'm feeling better already. Now, you know a thing or two about artificial narrow intelligence, just so the audience understands. You've led teams at Google and, is it pronounced Baidu? Forgive me for not knowing. Oh yes. I started the Google Brain team, and I also ran AI for Baidu, which is a large web search engine company in China.
Because China doesn't use Google.
So this is China's Google.
China's leading web search engine is Baidu.
And then I'm also really proud of the work that I did leading the Google Brain team,
which is a team that helped a lot of Google embrace modern AI.
So if you use Google, you're probably using technology that my
former team wrote. Actually, almost certainly. That's amazing. So now what are some of the
fun things that you and your team have introduced into my life that I don't even know I should be
thanking you for? Don't thank me. Thank the many millions, well, thousands of people around the
world building these technologies. I think that all of us use AI dozens of times a day, maybe even more,
perhaps without even knowing it. Thanks to modern AI, when you do an internet search,
you get much more relevant results. Or every time you check your email, there's a spam filter in
there, kind of saving us from massive amounts of spam. That's AI. Every time you use a credit card,
it's probably an AI trying to figure out if it is you or if someone stole the credit
card and we should not let that transaction through. So all of us probably use AI algorithms
many, many dozens of times a day, maybe without even knowing it.
And what about the self-driving car? Because that makes the news every so often. And
it's interesting to me. It's scary to me because you also hear some reports of crashes and you
understand that, okay, the technology is not exactly where they want it to be yet.
But what do you see when it comes to self-driving cars?
I think that many people, including me, collectively underestimated how difficult it will be to get to true, fully autonomous self-driving cars that could drive the way that a person can.
I think we will get there,
but it's been a longer road than any of us estimated. When I drive these cars, I'm happy
for the driver assistance technology. I personally don't really fully trust them yet. So I keep an
eye on the road when I'm driving and one of these technologies is supposedly doing something.
Well, yeah. So here's a dumb question. I understand why somebody, if we perfect the
technology, somebody like my mom, who's 80 and really not all that well physically, mentally, she's great, but physically, I could see why a self-driving car would work well for her. It's like you're a built-in chauffeur. But why do young, able-bodied people need that? Why is it an improvement for people our age? I think that it depends a lot on the individual. I sometimes find it fun to drive.
You know, if I, I don't know, take my daughter out on the road, drive around, that's fun.
But sometimes if I'm driving to work in traffic, it's like, boy, I wish someone else could do the
driving for me. And if a computer could do that, so I could maybe even sit in the backseat and, you know, play with my daughter, I would rather do that than be stuck in traffic.
So I think it depends a lot on the individual. It's funny because I asked this having just
yesterday I had to go to the city. I'm in New Jersey for the summer. I had to go to the city.
It's a couple of hours and I had the choice of driving myself or sometimes we use a driver.
And I said, you know, I'm going to use a driver because I had a bunch of interviews to do today.
And I said, I want to read all my stuff. And so it's a dumb question, right? It's basically,
you can read everything. If you have a self-driving car, it's going to make your life
really convenient if it doesn't kill you or all the people around you.
And again, I know you have kids, right? My kids are really young. A lot of me worries, you know, when they grow up, will they ever get in a car accident? So when my daughter grows up, if, you know, there's a computer that can drive her more safely than if I were to drive or she were to drive herself, I think it'll make all of us better off.
You know, in the AI world, we've made a lot of predictions, and sometimes we're not very good at predicting the timeline on which this will happen.
I think that self-driving cars are kind of getting there in limited environments.
So I'm seeing exciting progress.
For example, if you're driving around in a constrained environment of a port, you know,
shipping stuff or in a mine or sometimes on a farm, that's actually kind of getting there.
If we're willing to rejigger some of the cities, I think we'll be there pretty soon.
I don't know.
I think it'll still be quite a few years.
It'll still be many, many, many years before we can drive in downtown New York or downtown
New Jersey.
Yeah, I understand that they're not as good at picking up things like the hand signals
that a construction worker might be issuing to
you that they don't totally understand those things yet. So they're not quite where they need
to be. Okay. So let's talk about other ways in which AI is going to be helping our lives and
how you see it. Because one of the things that Nick said that concerned me was we're probably
headed toward total unemployment, eventually, in the distant future, once the machines become as smart
as we think they're likely to get.
And that concerns me.
You know, I don't know what life looks like if nobody works for a living and if the machines
are in control of everything.
So what is the journey from here to there look like in terms of technological advances?
You know, I think that total unemployment,
I'm actually skeptical it'll ever happen
or if it does happen,
maybe I don't know how many thousands of years away.
You know, it turns out,
just let's demystify AI.
What can we make AI do and what can we not?
It turns out, to get a little bit geeky and technical,
almost all of AI today is about input to output mappings,
such as input an email, output: is it spam or not? Or input a picture of what's in front of your car and output the
position of the other cars. Or input an x-ray image and output a diagnosis, does this person
have pneumonia or not, or some other condition? So that's sort of the one idea, input-output,
that is creating 99% of the economic value of today's AI systems.
Turns out this is a ton of economic value.
The large ad platforms have an AI that inputs an ad and some information about the user and outputs whether you are going to click on this ad or not. Because it can get people to click on more ads, this has a direct impact on the bottom line of the large ad platforms.
So it's creating tons of economic value.
But frankly, beyond this input-output thing, if we think about what people do, there's just so much more people can do.
I don't think anyone in the world has a realistic roadmap for getting to AGI.
So I think sometimes that concept has been over-hyped and fearmongered. I do worry about unemployment. With every wave of technology, looking back, the Industrial Revolution, the invention of electricity. I mean, all the people working on steam engines, they unfortunately, really sadly, lost their jobs. Or we used to have human-operated elevators,
right? There was someone standing in the elevator that would dial it up and down.
When someone invented automatic elevators,
those jobs went away.
So I worry about that for AI.
That would create some amounts of disruption
and affect work,
but complete total unemployment,
this input-output mapping,
I don't see that piece of software
replacing, you know, you anytime
or me anytime soon.
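To make that input-to-output picture concrete, here is a minimal sketch of the spam-filter example in Python. The four toy emails and the scikit-learn setup are illustrative assumptions, not anything Ng describes building:

```python
# Minimal sketch of an input -> output mapping: a toy spam filter.
# The tiny "dataset" below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize now", "claim your free money",      # spam
    "meeting moved to 3pm", "see you at dinner tonight",  # not spam
]
labels = [1, 1, 0, 0]  # output to learn: 1 = spam, 0 = not spam

vectorizer = CountVectorizer()               # turn raw text into word counts
X = vectorizer.fit_transform(emails)
model = LogisticRegression().fit(X, labels)  # learn the mapping from examples

new_email = vectorizer.transform(["claim your free prize now"])
print(model.predict(new_email))              # likely [1], i.e. flagged as spam
```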
Can you talk about the radiology thing?
Because I read about the work being done.
Is it Stanford with the AI and radiology, but the conditions have to be just so?
Can you just talk about that?
Sure.
So I think that I'm excited about AI and its potential to improve healthcare.
But actually, some of my friends and I worked on AI
that can input a picture of an x-ray
and output, you know, the appropriate diagnosis.
And it turns out we were able to show in the lab
that we could diagnose or recognize many conditions
as accurately as a board-certified, highly trained radiologist.
But it turns out that it worked great
if we trained on data we collected from Stanford Hospital, and then tested whether the system worked well on data from the same hospital or from the same set of x-ray machines. It turns out if you
take that AI system and walk it down to a different hospital down the street with maybe an older x-ray
machine, maybe the technician has a slightly different way of imaging the patient, the performance gets much worse. Whereas any human
doctor can walk down the street and diagnose at this other hospital, you know, kind of roughly
equally well. So I think that one of the challenges of AI is we have a lot of prototypes in the lab
that you read about in the news. You know, you see, oh, AI does this, diagnoses as well as a human radiologist or something. You may read about it in the news. But it turns
out that we collectively in the AI field, we still have a lot of work to do to take those lab
prototypes and put them into production in a hospital setting. It will happen. It's just that
this will be some additional years of work before some of the things that, you know, have been promised, right, come to fruition.
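What he is describing is a distribution-shift problem: the model is only as general as the data it was trained on. Here is a schematic sketch of that evaluation gap, with synthetic arrays standing in for x-rays from two hospitals; the numeric offset is an invented stand-in for scanner and technician differences:

```python
# Schematic sketch of distribution shift: train on "hospital A" data, then
# compare accuracy on held-out hospital A data vs. "hospital B" data whose
# images differ systematically. All data here is synthetic, for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def hospital_data(n, offset):
    # "offset" stands in for a different x-ray machine or imaging technique
    X = rng.normal(loc=offset, scale=1.0, size=(n, 20))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)  # same underlying condition
    return X, y

X_a, y_a = hospital_data(1000, offset=0.0)   # the hospital we trained at
X_b, y_b = hospital_data(1000, offset=1.5)   # the hospital down the street

model = LogisticRegression(max_iter=1000).fit(X_a[:800], y_a[:800])
print("same hospital: ", model.score(X_a[800:], y_a[800:]))  # stays high
print("other hospital:", model.score(X_b, y_b))              # drops sharply
```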
Well, the medical field is so ripe for help from this kind of technology.
I can think of a million ways in which it could change lives and save lives, but it's really every industry.
I know you've been making the point.
It's every industry that's going to be touched by this eventually.
But before we move off the medical field, may I just ask you about a report in the Wall
Street Journal that got my attention?
OK. Among other things, they're talking about what we should expect in the next few years.
Toilets that screen for disease.
It says researchers at Stanford have developed a prototype toilet that uses an artificial intelligence trained camera to track the form of feces and monitor the color and flow of urine. Why is this necessary? Because it could potentially analyze micro stool samples to detect viruses like COVID-19 and blood. It could potentially detect irritable bowel syndrome or colorectal cancer. And here was
the part, forgive me, because I'm really just a 12 year old boy at heart, that I wanted to ask you
about. So the toilet could identify individual users by scanning their anus's unique characteristics, or anal print. Now, no one wants an anal print going off to
some AI researcher, but this is happening. This is actually, they're saying these units could
cost between 300 and a thousand bucks. They could be rolled out in the next couple of years. Is this
what life is going to hold for us? Yeah, let's hope not. I think a lot of, you know, the description you read sounds disturbing. Having said that, I think there
are doctors that have to do many disturbing things for the good of the patients. But I think
a lot of us will not want this in our homes anytime soon.
But we'll see.
Doctors got to innovate.
We'll see what the FDA approves
and what seems to be appropriate
for patients that may need it,
even if it doesn't seem like
the right thing for everyone.
Because you know that's going to turn into
one of these things
where you get false alarms every other day
and you're in the doctor saying,
oh, my anal print suggested
I've got colorectal cancer.
I don't know. It sounds like there's a ton of internet memes to be created off what she just said. Listen, as somebody who's on camera for a living, a lot of my life, there are limits to how
far I'm willing to go. And I think I speak for a lot of people. So what about the other industries?
How else could AI improve or negatively impact our lives over the next 10, 25 years?
One of the challenges I see is AI, as of today, has clearly transformed the consumer software internet industry.
The large website- and app-operating companies in the US, almost all of them, I mean, all of them, use AI to great effect.
One of the challenges that still faces us, is ahead of us,
is figuring out how to use AI to improve, transform,
create value for all of the other industries out there.
So, for example, one thing I'm personally passionate about is manufacturing.
I think that for American manufacturing to be more competitive, the road
forward is not to just try harder to do the jobs that were around 20 years ago. I think America and, frankly, all nations around the world should race ahead to figure out how this
technology can work for manufacturing and for all those other industries. So, for example, it turns out that in many factories around the world today,
there are tons of people standing around using their eyes
to inspect a manufactured thing, like an automotive component or a pill bottle or a food and beverage component,
to see if there's a defect on it.
I think AI is clearly going to be able to do a lot of that work in the near future
in an automated way. And if we in America want to embrace this technology, figuring out how to use AI for automatic visual inspection is the road forward. I'm working on it, my friends are working on it.
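As a rough illustration of what automated visual inspection looks like in code, here is a minimal sketch of a small convolutional network that maps an image of a part to an OK-or-defective judgment. The architecture, image size, and random input are illustrative assumptions, not the actual systems he describes working on:

```python
# Minimal sketch of AI-based visual inspection: an image of a manufactured
# part goes in, a defective / not-defective judgment comes out.
import torch
import torch.nn as nn

inspector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 64x64 image -> 32x32 feature maps
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # two outputs: [ok, defective]
)

part_image = torch.rand(1, 1, 64, 64)  # stand-in for one grayscale camera shot
logits = inspector(part_image)
print(logits.softmax(dim=1))           # untrained here; in practice you train
                                       # on labeled photos of real parts first
```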
I think that's how many industries become competitive. But it turns out,
getting AI to work for manufacturing, for healthcare, for agriculture, these industries, there's actually a different recipe. It turns out the
stuff that I was doing at Google and other internet companies, it doesn't quite work.
So there's something a little bit more needed. But again, a bunch of us in the AI field are
working on this, I hope we'll get it. Don't leave me now. We got more coming up in 60 seconds.
Can you talk about Baidu for a minute, and just talk about China and its approach to data? Because I know that they really want to be leaders in the field, and the United States is watching them and they're watching us. Do you think that the Chinese are any better than the Googles of the world, where you were also the top guy, at collecting information,
synthesizing it, keeping an eye on people's habits and so on?
Yeah, I think that China is phenomenal at some types of technology. The US is phenomenal at some types of technology. I think we do live in a multipolar world where I see innovations in the US, in Europe, in China, really, frankly, all around the world.
And the AI community tends to be very global.
There is a global network where researchers in Singapore may publish a paper and then two weeks later it's running in some site in the United States.
And then someone in the UK will read it too and figure out something to apply and deploy
in Europe.
So I think we live in a global world where different teams sometimes collaborate and
different teams sometimes compete.
I think, actually, one thing I will say, a lot of people underestimate the importance
of government support in the early days of AI.
So not many people know this. When I was working on AI way back before modern deep learning became popular, a lot of the reason I was able to do my work was because DARPA, the defense research agency in
Washington, DC, was willing to fund some of my work. So I think without DARPA funding some of
my research work, I don't know that I would ever have gone to Google to propose starting the Google Brain Project. So
I think just ensuring American competitiveness is something I would love to see.
Where are we on the scale? Are we the world leaders? You look at the military superpowers
and we know where we are, but where is America when it comes to AI?
I think that the two leading countries in the world in AI are quite
clearly the US and China. I think the US is the world leader in a lot of the basic research
innovations, but this is not a lead that we should take for granted. And we just got to keep on
working really hard. And what about the creation of super intelligence? Because I read something about you creating something where a computer can recognize a cat. I don't know. You can tell me what it was. But to me, that sounded like working toward developing super intelligence, you know, a computer that can learn on its own and, you know, develop and improve its own intelligence. But can you talk about that, about where we are on it, what you've done on it, and whether
you think how far along we are?
Yeah, the cat result.
When I was in the Google Brain team, one of the early results we had was we built an AI
system called a neural network and had it watch tons of YouTube videos. Basically, we had it sit in front of the computer and watch YouTube videos for like a week. And then we asked it, hey, what did you learn? And to our surprise,
one of the things it learned was it had figured out, or had learned, to detect this thing, which turns out to be a cat. Because it turns out when you have an AI system watch YouTube videos a lot, it learns to detect things that occur a lot in YouTube videos. So people's faces occur a lot on YouTube, so it figured out how to detect that. There are also a lot of cats, right? That's another internet meme on YouTube, so it also figured out how to detect that. It wasn't a very good cat detector, but the remarkable thing about that was that it had figured out that there's this thing. It didn't know it was called cat, C-A-T, but there was this thing that it just learned: boy, I see a lot of this thing, whatever it is, I don't know what it is. It's pretty remarkable the AI system, the neural network, had figured that out by itself. But again,
between that and
superintelligence or AGI,
I think it's very far away.
I think that worrying about
AI superintelligence today is
a bit like worrying about overpopulation
on the planet Mars.
I actually hope that we will manage to colonize Mars and maybe someday we'll have so many
people on Mars that we'll have children dying because of pollution on Mars.
And you may be saying, hey, Andrew, how can you be so heartless to not care about all
the children dying on Mars?
And my answer is, well, you know, we haven't even kind of landed people on the planet yet.
So I don't know how to productively defend against overpopulation there.
So I feel a little bit like that about AGI. I think it's fine if academics study it, you know, publish some theories on what to do when we have AGI.
But it's so far away. I personally don't really know how to productively work on that problem.
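For the curious, the cat result was an exercise in unsupervised feature learning: a network trained only to reconstruct its own unlabeled input ends up discovering recurring patterns. Here is a toy autoencoder in that spirit; the random tensors stand in for video frames, and the real Google Brain experiment was vastly larger:

```python
# Toy autoencoder in the spirit of the "cat neuron" experiment: learn
# features from unlabeled data by reconstructing the input. Random tensors
# stand in for video frames purely for illustration.
import torch
import torch.nn as nn

frames = torch.rand(256, 784)  # stand-in for 28x28 grayscale video frames

autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),    # encoder: compress to 64 features
    nn.Linear(64, 784), nn.Sigmoid()  # decoder: reconstruct the frame
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

for step in range(100):  # note: no labels anywhere in this loop
    reconstruction = autoencoder(frames)
    loss = nn.functional.mse_loss(reconstruction, frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Trained on real frames, individual hidden units can end up responding to
# recurring patterns (faces, cats) that no one ever labeled.
```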
Now, you are the co-founder of a group called Coursera.
Is that how you pronounce it?
Yes, Coursera.
And I feel like this dovetails very nicely with one of the things that Nick was recommending
when I talked to him about the future, our children, and so on.
And he was saying the one thing the kids of the future are going to need to be able to
do is understand that learning is a lifetime process, right? That nothing is as
static as it used to be. And the world is changing so rapidly and our kids are going to need to be
able to handle information at an even more rapid pace than it now comes into their life, which is
already faster than ever. And I feel like this is one of the missions of Coursera, to nurture
lifelong learning.
Can you talk about it?
Because it sounds really interesting and it's been hugely successful.
Yeah.
So, yeah, through Coursera, hopefully we can give anyone the power to transform their lives
through learning.
I was teaching at Stanford University about a decade ago, actually over a decade ago,
and put my class on machine learning, a type of AI, on the internet.
And to my surprise, 100,000 people signed up for it. And I kind of did the math. I was teaching
400 people, 400 students a year. But when I did the math, I realized that for me to reach a similar audience, 100,000 people, teaching 400 people a year, I would have to teach at Stanford University for 250 years.
So based on that early traction, I got together with a friend to start Coursera,
to create a platform that now works with over 200 universities and other institutions and companies
in order to create online learning courses that,
you know, pretty much anyone in the world can access. That's so great. I mean, so it's like
for those of us who didn't go to Stanford or Harvard or what have you, but want access to
that kind of education, though not full time, we can go here. Yeah. In fact, you know, to actually
share two thoughts relevant to all of you watching this,
if you want to learn about AI and, you know, cut through the hype, one of the courses I'm most proud of is AI for Everyone on Coursera. I tried to give a non-technical
presentation of AI. So if you want to know how will AI affect your life in the future,
how will AI affect your job, your industry,
there's several hours of video that I hope will give anyone
that's interested a non-technical introduction to AI
so you can think about this strategically
and know how it will affect you,
but also learn to recognize it and ignore some of the hype.
There's one other trend I'm excited about,
which is, you know, with the rise of tech,
I think we may, I hope we'll eventually shift
toward a world, and this is relevant to all of you,
you know, with children, for example,
but I hope we'll shift toward a world
where almost everyone will know a little bit
about coding.
And I say this because many, many hundreds of years ago,
we lived in a
society where, you know, some people believed that maybe not everyone needs to read, right? Maybe
there are just a few priests, you know, and monks, they had to learn to read so they could read the
holy book to the rest of us or something. And the rest of us, we didn't need to read, we just sit
there and listen to them. Fortunately, society wised up and now with widespread literacy, we've figured
out that it makes human-to-human communications much better. I think that with the rise of
computers in today's society, for good and for ill, this is a very powerful force, I would love
to see a lot of people able to just learn to code. So not all of us need to learn to be great
authors. I can write, but I'm not a great author. I don't think everyone needs to be a great programmer,
but for many of us, there will become a time where if you could write a few lines of code,
get your computer to do what you want, just like literacy has created much deeper human
to human communications, I think if everyone can learn a little bit of coding or computer literacy, then all of us can have much deeper interactions with our computers.
And it would be very powerful for all of you in the future.
Well, it certainly had a massive impact on your life, just reading your background.
How did you get into it at such a young age?
It was your dad, I understand?
Yes.
So my dad's a doctor. And when I was a teenager, I was born in the UK, but I was living
in Singapore at the time. And my dad was interested in AI for healthcare. So he kind of
taught me about his attempts to use, frankly, like 1980s AI, which is not that advanced,
to do medical diagnosis. So that sparked off a
lifelong interest in me. I do remember when I was in high school, I once had an internship. I once
had a job as an office admin. And I don't remember much from that job. I just remember doing a lot of
photocopying. And even though I was, like, whatever, 15, 16 years old, I remember thinking, boy, why am I doing so much photocopying?
If only we could write some software,
have a robot or something, do all this photocopying,
maybe I could do something even more interesting and more valuable.
And I think that for me was part of my lifelong inspiration
to just write software that can help, you know,
automate some of the more repetitive
things so that all of us collectively can tackle more challenging and exciting things.
Well, it's so great because I tell you, I went out to Google and I spoke to a bunch
of executives there a couple of years ago.
And I know that they try to give the coders some stress relief, some like a break, because
it can be very intense work. And one of
the stations on campus was sword fighting. I'm like, this is so great, you know, just because you spend all day doing that. It's very intense. And you do need a mental break, a break for your eyes, a break for your body. So it's just a totally
different way of approaching the workplace. Yeah, I think, you know, I find that I think coding is hard work, but I find that almost, you know, when I look at problems in our society, I think almost everything is hard work.
When I walk into a manufacturing plant, doing some of the work that my company, Landing AI, does for manufacturing, I see the men and women on the manufacturing shop floor and they're really smart at what they do. And then I meet up with my friends from Google and I think they're really
smart at what they do. I think that the world has lots of challenging, intellectually stimulating,
or physically challenging work for us to do. And hopefully AI tools can help make things
a little bit better for everyone.
Well, I like that you sort of decide
where you're going to put your energies
because I understand looking at you today
in your blue shirt, it is no accident
you are wearing that blue shirt
and it is one of the areas of your life
in which you've chosen to simplify
and streamline your decision-making.
Yeah.
I think, yeah, a few friends have asked me,
you know, there's actually a Quora question, someone actually asked publicly, why does Andrew wear a blue shirt all the time?
So I used to wear either blue or like a light purple, but then I realized every morning
is like, oh, do I wear a blue shirt or a purple shirt?
I can't decide.
It's like, forget it.
I'm just buying a full stack of blue shirts and doing that. I don't know. So you don't have to think about it in the morning.
Vera Wang does the same thing.
Vera Wang, who dresses the most beautiful, successful, prominent people in the world,
just wears sort of a black, a column of black every day.
That's her uniform.
I did not know that.
Because she doesn't want to think about it.
Same as you.
Yeah.
Turns out there is a downside to this.
One day, one of my friends was working on an AI for fashion thing. And I tried to express an opinion, like, well, you want to do AI for fashion? How about this? How about this? And she said, Andrew... You used a 3D printer to make your wedding rings, which brought up a lot of things for me, which is, number one, I do not understand the 3D printer at all. My kids are using it at school. It scares me. I don't get it. What is it? And how does it print out a wedding ring? How does it produce a wedding ring? Yeah. So, Carol, um,
she's from Flint, Michigan, but we're now in Washington State.
So 3D printer takes, you know, one way that 3D printing works is it takes little bits of metal and melts them and kind of, you know, deposits little drops of metal until gradually you end up building a ring.
I'm not wearing a ring now, I haven't had enough stares. And then you end up with this incredible shape.
Whatever you can, almost anything you can imagine and program into a computer, it can
just by putting little drops of plastic or little drops of metal or some other substance,
just create this incredible 3D shape that's maybe difficult to manufacture via other ways.
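The layer-by-layer idea he is describing can be sketched in a few lines: slice a shape into thin horizontal layers and trace a deposition path at each height. This toy loop prints the waypoints for a simple ring; real slicers emit G-code and handle infill, supports, and temperatures:

```python
# Toy illustration of 3D printing a ring: deposit material along a circle,
# one thin layer at a time, until the band reaches its full height.
import math

layer_height = 0.2   # millimeters of material per layer
ring_height = 2.0    # total height of the band, in mm
radius = 9.0         # ring radius, in mm

for layer in range(int(ring_height / layer_height)):
    z = round(layer * layer_height, 2)
    for step in range(12):  # 12 waypoints around the circle
        angle = 2 * math.pi * step / 12
        x = round(radius * math.cos(angle), 2)
        y = round(radius * math.sin(angle), 2)
        print(f"move to ({x}, {y}, {z}) and deposit a drop of material")
```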
So I don't know.
Actually, this is one fun thing about technology.
3D printers are still a really, really cutting-edge technology,
but now we have high school students able to use it.
I hope we'll have that for AI too, frankly. I find that today AI seems a little bit mysterious, maybe a little bit overly so.
But actually, last week, I was chatting with a few high school students
in different parts of the country talking about, you know,
they're taking online AI classes from Coursera or from whatever.
And now we have high school students able to do things
that if done just five or six years ago,
would have been a chapter in a PhD thesis at a place like Stanford, right?
Really? Like what?
Actually, so one thing happened to me.
I was attending a fair, a Maker Faire,
where I met this student that was demoing his robot
that was taking pictures of plants,
trying to figure out if they were diseased,
if they had a disease on their leaves or not.
So I looked at his work and I thought,
boy, if this had been done
five or six years ago, this would have been a chapter in someone's PhD thesis at Stanford
University. And you know what? I asked him, how old are you? And he said, oh, I'm 12 years old.
So this is today's world. I think anyone in the world can go and learn this stuff and then implement this.
And even though some of the technology seems so cutting edge,
I think that if someone out there is watching this
and wants to learn it, a lot of tools are now on the internet.
Go learn it online from DeepLearning.AI or Coursera.
And then on your computer, you could actually start developing stuff
that, while not cutting-edge stuff, right, is actually still pretty difficult.
You could actually do stuff that was kind of state of the art just a few years ago.
All right.
On this subject, I have a confession to make to you.
We were moving into a new home, moving towns, and I decided to not make my new home a smart home because my old smart home was annoying me. My dishwasher was yelling at me
and my microwave was yelling at me. And I was walking around my apartment all day saying,
you are not the boss of me. I am the boss of you. Shut up. I will unload you when I'm good and
ready. And the TV required 40,000 buttons to turn on. And it's like, I just want a dumb home
for me because maybe it says I'm a dumb person, but it seemed easier to me. And yet all
of these appliances are getting smarter by the day. And they're saying there's going to be a
refrigerator that's going to tell you whether things are spoiled on the inside and so on.
So do you have a smart home? Do you recommend a smart home? And how, if at all, concerned
should we be about people spying on us, for lack of a better term?
You know, I think people, they distrust Google.
They think Google's amassing information on them.
They distrust the government.
They think the government could possibly hack into one of these appliances.
You know, these are real concerns you hear from people.
Yeah, so I know.
I think that a lot of people are concerned about privacy.
So am I.
But also, I have friends at many of the large internet companies.
And I know that my friends, I trust them to tell me the truth.
Many of my friends are genuinely concerned, but also very respectful of privacy.
So a lot of the large internet companies, some better than others, really do have stringent
privacy controls.
It makes it incredibly difficult for anyone to just spy on you.
Now, having said that, I actually would be disappointed.
I have no reason to think the U.S. government can hack into these devices,
but frankly, I'd be a little bit disappointed if they can't.
So, you know, by the way, I used to work on speech recognition, right?
So I worked on these voice-activated devices.
One thing I'm not proud of: for a long time, even while working on these devices, I had exactly one light bulb in my home
that was connected to my smart speaker because the configuration process was so annoying. So I
got through, you know, configuring one light bulb so I could turn it on with a voice command,
but after that, I couldn't be bothered. So I think we still got to make these things better.
You know, we technologists tend to inject tech into all manner of things.
Sometimes it's really great and I love it.
But sometimes you do wonder if we're really helping solve people's problems.
Hey, if we have more people working on it, maybe we can all collectively make all this tech much better.
Yeah, no, I've said in this day and age, it's not enough to pretend; you actually have to be a good person, because someone's probably always listening, watching, amassing data. They're going to know one way or another. It's disconcerting, but I don't know, if you're not a criminal and you're not dealing with terrorists and so on, how worried do you need to be? I don't know. I think AI is the new electricity. The rise of electricity starting about 100 years ago transformed every industry.
I think AI is now on a path to do the same.
So I think, really, to anyone wondering: it's worth learning about it, jumping in, trying to help all of us collectively navigate the future.
I think every citizen, every government, all of us individuals should jump in and play
a role in shaping
a better future for everyone in light of this amazing technology.
Wonderful talking to you.
Thank you so much for your expertise and your insights.
Thank you, Megyn.
It was really fun to do this with you.
So as I mentioned in our other episode this week, we're scaling back a little for this week and next week on our episodes.
Just as we get ready to launch on Sirius, my team, especially, has a lot they need to be doing.
So we're going to launch five days a week starting on September 7th.
But in the meantime, we're a little bit of a scaled back schedule for those of you wondering.
But our next guest, who's going to be coming up on Monday, is one we've really wanted to have on for a while. Controversial guy because he worked for
Trump. And, you know, he's been completely excoriated by the mainstream media. But
fascinating and really smart dude, Stephen Miller, is going to be here. You saw him on The Kelly File all the time. Then you saw what the press did to him once he went inside the Trump team. But even just, you know, I've
spent years talking to him. There is no better person to talk to if you want to understand what's
happening in this country with our southern border, our northern border and our approach
toward immigration in general. So I'm really looking forward to the conversation, Stephen
Miller, Monday. Don't miss it. In the meantime, go ahead and subscribe so you don't miss it.
Download.
Give me a five-star rating while you're there.
And give me a review.
Let me know what you think.
What do you think of AI?
Are you in favor?
And what would you like me to ask Stephen Miller?
Taking your thoughts right now
in the Apple review section
or wherever you download your podcasts.
Thanks.
Thanks for listening to The Megyn Kelly Show.
No BS, no agenda, and no fear.
The Megyn Kelly Show is a Devil May Care media production
in collaboration with Red Seat Ventures.