a16z Podcast: Artificial Intelligence and the 'Space of Possible Minds'
Episode Date: November 15, 2015
What is A.I. or artificial intelligence but the 'space of possible minds'? So argues Murray Shanahan, scientific advisor on the movie Ex Machina and Professor of Cognitive Robotics at Imperial College London. In this special episode of the a16z Podcast, brought to you on the ground from London, Shanahan -- along with journalist-turned-entrepreneur Azeem Azhar (who also curates The Exponential View newsletter on AI and more) and The Economist Deputy Editor Tom Standage (the author of several tech history books) -- discusses the past, present, and future of A.I. ... as well as how it fits (or doesn't fit) with machine learning and deep learning. But where are we now in the A.I. evolution? What players do we think will lead, if not win, the current race? And how should we think about issues such as ethics and automation of jobs without descending into obvious extremes? All this and more, including a surprise easter egg in Ex Machina shared by Shanahan, whose work influenced the movie.
Transcript
Hi, everyone. Welcome to the a16z Podcast. I'm Sonal. And today we have another episode of the a16z Podcast on the road, a special edition coming from the UK. We're in the heart of London right now. I'm here with Murray Shanahan, who is a professor of cognitive robotics at Imperial College London. And he also consulted on the movie Ex Machina. And so if you didn't like the way that movie turned out, no spoiler alerts here, you can blame him.
And then I'm here with Tom Standage, who's the deputy editor at The Economist and also the author of a few books.
Yeah, six books. The most recent one was Writing on the Wall, which was the history of social media going back to the Romans.
And probably the best known one in this context is The Victorian Internet, which is about telegraph networks in the 19th century being like the Internet.
That's great. And I'm here with Azeem Azhar, who publishes an incredibly interesting and compelling newsletter that I'm subscribed to, The Exponential View.
He used to be at The Guardian and The Economist, and then most recently founded and sold a company that used machine learning heavily.
So welcome, everyone.
Thank you.
Hello.
So today we're going to talk about a very grand theme, which is AI, artificial intelligence, and just sort of its impact and movements.
This is really meant to be a conversation between the three of you guys.
But Murray, just to kick things off, like you consulted on the movie Ex Machina.
What was that like?
Oh, it was tremendous fun, actually.
So I got an email out of the blue from Alex Garland, famous author.
So that was very exciting to get this email.
And the email said, oh, my name's Alex Garland.
I've written a few books and stuff.
and I read your book on consciousness, Embodiment and the Inner Life,
and I'm working on a film about artificial intelligence and consciousness
and would you like to kind of get together and talk about it
so of course I jumped at the chance and we met and had lunch
and I read through the script and he wanted a bit of feedback on the script as well
whether it hung together from the standpoint of somebody working in the field
and then we met up several times while the movie was being filmed, and I have a little easter egg in the film. Oh, you do. What was your
Easter egg? I saw that movie three times in a theatre, so I will remember it, I bet.
Oh, fantastic. So there's a point in the film where Caleb is typing into a screen to try
and crack the security and then some code flashes up on the screen at that point and that code
was actually written by me. Oh, yay. It's a real code. It's not the usual rubbish code. And it just
sort of flashes up. But what it actually does is if you actually type it into a Python interpreter, it will print out ISBN equals and then the ISBN of my book.
Oh, that's so great.
So it's Python as well.
I'm even more thrilled.
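For anyone curious what that kind of easter egg looks like, here is a minimal sketch of the general idea only: a line you paste into a Python interpreter that prints an ISBN. The actual code shown in the film is not reproduced here, and the ISBN below is a placeholder, not the real one for Shanahan's book.

```python
# A toy illustration of the general idea, not the actual code from Ex Machina.
# The ISBN below is a placeholder value, not the real ISBN of the book.
isbn = "978-0-00-000000-0"
print("ISBN = " + isbn)
```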
I think it's fascinating that you say that the part that you didn't go into detail about
is the name of the second part of the title of your book, embodied consciousness?
Yeah, there's a long subtitle, which is Cognition and Consciousness in the Space of Possible Minds.
And I very much like that phrase, the space of possible minds.
I think if you were to kind of pin me down on what I think is my most fundamental
and deepest interest. It's this idea that what constitutes possible minds is much larger than
just humans or even the animals that we find on this earth, but also encompasses the AI that we
might create in the future, either robots or disembodied AI. There's a whole space of possible minds. That's beautiful. Yeah, a huge kind of space of possibilities. I mean, it's a really
interesting idea, and it's something that comes across in a couple of your other books as well,
which is this notion that we quite often think of artificial intelligence as that plastic white mask that you see on many a film poster or book cover.
But of course, as we start to develop these new AI systems,
they might take very, very different shape.
They may be embodied in different ways, or they may be networked intelligence.
So one of the areas I think is interesting is what's happening with Tesla
and the Tesla cars that learn from the road, but they all learn from each other.
Now, where is that intelligence located
and what would it look like
and where will it sit in your space of possible minds?
Yeah, absolutely.
It's a completely distributed intelligence
and it's not embodied in quite the sense of...
Of course, a car is a kind of robot in a way
if it's a self-driving car,
but it's not really an embodied intelligence.
It's sort of disseminated or distributed
throughout the internet
and it's a kind of presence.
So I can imagine that in the future,
rather than the AI necessarily being
the stereotype of a robot standing in front of us,
it's going to be something that sort of is hidden away on the internet
and is a kind of ambient presence that goes with us wherever we go.
Well, that's another sci-fi stereotype there, isn't it?
That's the Univac or the Star Trek computer.
But as I understand it, your work starts from the presumption
that embodiment is a crucial aspect of understanding intelligence,
which is why you're interested in both the robotic side and the intelligence side.
So certainly I would have taken a stance, you know, if you'd asked me 10 years ago,
that cognition and intelligence is inherently embodied because what our brains are really for
is to help us get around in this world of three-dimensional space and complex objects
that move around in that three-dimensional space and that everything else about our intelligence,
our language, our problem-solving ability is built on top of that.
Now, I'm not totally sure that it's impossible to build AI that is kind of disembodied. Maybe. In my
latest book
I use the phrase
vicarious embodiment, or I should say
vicarious embodiment for a US audience.
So it can kind of embody itself temporarily in a thing
and then go somewhere else. Well, that's another
thing. You can have sort of avatars.
But what I mean by vicarious embodiment
is that it uses the embodiment of
others to gather data.
For example, the enormous repository of videos
there are on the internet.
There are zillions of videos of people picking up objects
and putting things down and moving around in the world.
And so potentially it can learn in that vicarious way
everything that it would otherwise need to learn by being actually embodied itself.
And this goes right back to the neurological basis, I think, of some of your research. You started off doing symbolic AI and then moved over, as kind of the whole field has, to more of this neurological approach.
And by neurological approach, Tom,
you mean more like in the deep learning sense?
Well, exactly.
As I understand it,
your approach there was the idea that the brain itself can rehearse motor neuro sort of combinations
and that that's how we kind of predict how the world will behave. We kind of say, what would
happen if I did this? Which is very much like what the DeepMind AI is doing when it plays Breakout or whatever, these kind of deep Q-networks, which is all about feedback based on predicted actions and remembering how things worked out in the past.
Certainly, I've always thought that this idea of inner rehearsal is very important. Our
ability to imagine different possibilities.
So watching YouTube videos of people doing things can function as a form of inner rehearsal,
or if you have a system that can learn from that, the sort of dynamics of the world and the
statistics of actions and their effects and so on, then it can use that, so it sort of builds
a model of how the world works, and then it can use that model to construct imaginary scenarios
and rehearse imaginary scenarios. Actually, just going back very quickly to the DeepMind DQN. For the bit of work that they actually published, I think one of its shortcomings, actually, is that although it has done all that learning about what the right action to do in the right circumstance is, it doesn't actually do inner rehearsal. It doesn't actually work through scenarios. It's just reactive. It's just remembering how things worked out in the past.
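To make that distinction concrete, here is a toy, schematic sketch (not DeepMind's code): a model-free Q-learning update that only remembers how things worked out, next to a model-based "inner rehearsal" that imagines rollouts with a learned world model. The states, actions, and world_model function are placeholder assumptions.

```python
GAMMA, ALPHA = 0.9, 0.1
Q = {}  # Q[(state, action)] -> estimated long-run value

def q_learning_update(state, action, reward, next_state, actions):
    """Model-free (DQN-like in spirit): learn only from experience that actually happened."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def rehearse(state, world_model, actions, depth=3):
    """Model-based 'inner rehearsal': imagine rollouts with a learned model instead of acting."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for action in actions:
        next_state, reward = world_model(state, action)  # imagined transition, not executed
        best = max(best, reward + GAMMA * rehearse(next_state, world_model, actions, depth - 1))
    return best
```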
Yeah. Murray, what exactly then is inner rehearsal? Because I think I'm actually confused; we're describing three different things. There's sort of a predictive aspect. There's sort of this decision-making framework, and then there's also something that reacts to the world
in a dynamic environment that's constantly changing
and reacting to that information
in a very proactive and intentional way.
Those are all different qualities.
So what is inner rehearsal exactly?
So I think that the sort of architecture of intelligence
is putting all of those aspects together, really.
So the inner rehearsal is when we close our eyes,
of course we're not really necessarily going to close our eyes, especially if we're on the Underground.
But it's when we close our eyes
and imagine going through some particular scenario.
Imagine doing an action
and inwardly realizing that this would have a good or a bad outcome.
It's like a planning scenario.
So planning, yeah.
Some of the same bits of our brains light up.
And if I imagine punching you,
then actually parts of my brain
that will be involved in punching you
are partly, fortunately, they're not actually...
Punching you, but there's an envisioning that...
Similar, right.
But there's more to it than just sort of
thinking of the scenario. In some sense
the brain does rehearse the scenario
in other ways, doesn't it? There's quite a bit of evidence
that the way the brain does it, as you say,
is to actually use the very same
bits of neurological apparatus
that it uses to do
things for real. So the planning is almost
interchangeable. It's just kind of
turning off the output and the input. And there's the classic experiment where a cat acts out its dreams because it had had some part of its brain modified, so that the part that normally suppresses the intention to act out the things you're rehearsing was taken away. And so the cat was imagining swiping mice and this
sort of thing while being asleep. So that's actually kind of fascinating because it's a reversal
of how I've always thought of the human brain, which is you're basically saying almost that
there's always a bunch of scenarios and actions that can play out in any given moment in the brain
and that we're actually already acting on in essence by the neurological impulses that are being
fired in the brain. But in reality, what's holding it back is some kind of control that's stopping
something from happening as opposed to saying, I'm going to do X, Y, or Z and then acting on something
intentionally. So it's more of a negative space thing than a positive thing. Or a kind of veto
mechanism. Right. Yeah. I mean, in fact, I think you've actually proposed there two very good
rival hypotheses for what's going on. And I wouldn't want to venture what I think is the answer
there. And it's the kind of thing that neuroscientists study. But it doesn't feel like current
AI, certainly the stuff that's implemented commercially, or even that's published at a research level,
is really bridging into this area
that we're talking about
these rehearsal mechanisms.
Yeah, I think it's actually one of the potentially hot topics
to incorporate into machine learning
in the not too distant future.
And so one of the fundamental techniques in DeepMind's work is reinforcement learning.
Which is very popular in developmental psychology.
It has its origins, really, in things like classical conditioning.
That's right, classic Pavlovian bell signals.
So within the field of reinforcement learning, there's a whole little subfield called model-based reinforcement learning,
which is all about trying to do it by building a model, which you can then potentially use
in this rehearsal sort of way. And although Rich Sutton, who's the sort of father of reinforcement learning, proposes architectures in his book way back in 1997 in which these things are blended together very nicely, I don't think anybody's really built that in a very satisfactory way
quite yet. So just to help us come along with this, concretely, where are we right now in this
evolution? There are schools of thought that may disagree with this, but just to simplify things: machine learning, then deep learning as a deeper evolution of machine learning, and then sort of full AI on a continuum. Is that sort of a fair way to start looking at it? Where do we kind of
stand on that continuum? So I have a model which says that, you know, AI and machine learning
are really quite distinct things. You know, AI is all about building systems that can
in some way, replicate human intelligence or explore the spaces of possible minds in Murray's
phrase, whereas machine learning is a very specific technique about building a system that can make
predictions and learn from the data itself. So there are AI efforts that have no machine learning
in them. I mean, Cyc is a great example, where you try to catalog all the knowledge in the world. And I think it's the mindset of the market to combine the two because it might give something more attention.
Yeah, I mean, I very much agree with that.
I see machine learning as a kind of subfield of artificial intelligence,
and it's a subfield that's had a tremendous amount of success in recent years
and is going to go very, very far.
But ultimately, the machine learning components have to be embedded in a larger architecture,
as indeed they already are, you know, in some ways, in things like DeepMind's systems.
We've had this sort of thing before, though, in the history of AI, haven't we?
where particular approaches have been flavour of the month and have then faded, so expert systems for a while.
I mean, there was the early neural nets,
which were much smaller neural nets,
and now bigger neural nets and deep learning based on that,
and sort of these systems that are sort of self-guided learning
seem to be flavour of the month.
But given that you've been in the field so long,
do you see this as something that is likely to run its course
and then we'll move on to something else?
Is it the end of history?
So I think there might be something special this time,
and one of the indicators of that is the fact
that there's so much commercial and industrial interest in AI and in machine learning.
Well, that reflects the fact that it's been making a lot more progress than any of those previous attempts.
Isn't there a problem, though? With the expert-based systems, you could ask them why they reached particular conclusions.
And with a self-driving car based on an expert system, when it decides the classic, you know, trolleyology dilemma of does it, you know, run over the...
Oh, the school bus with the children.
Yeah, exactly.
All of those sorts of things.
I mean, which I think are very interesting, because even now you have sort of implicit ethical standards in
automatic braking systems. Is it small enough? If it's that small, it's probably a dog. If it's this big, it's probably a child. So I think that the trolley problem is definitely worth
looking at and talking about because we, we as humans, don't even agree on what the correct
outcome should be. So if we're thinking about the trolley problem and one of these scenarios comes to pass, with an expert-system-based, you know, rule-based system, you could say, why did you do this? And the system will be able to say, well,
basically this rule fired and da-da-da-da.
And with these more elaborate systems where it's more like gardening than engineering the way
we build them, it's much, much harder to get any of that kind of thing out of them.
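As a toy sketch of the contrast being described: a rule-based system can point to the rule that fired, whereas a trained statistical model generally cannot give that kind of trace. The rules, thresholds, and obstacle fields below are invented for illustration only, not taken from any real braking or driving system.

```python
# A toy rule-based decision with a "this rule fired" explanation, as described above.
# Rules, thresholds, and obstacle fields are invented placeholders.

RULES = [
    ("obstacle is small and fast", lambda o: o["size"] < 0.5 and o["speed"] > 2.0, "swerve"),
    ("obstacle is large",          lambda o: o["size"] >= 0.5,                     "brake hard"),
    ("default",                    lambda o: True,                                 "continue"),
]

def decide(obstacle):
    for name, condition, action in RULES:
        if condition(obstacle):
            return action, "rule fired: " + name  # the system can say *why* it acted

action, explanation = decide({"size": 0.8, "speed": 1.0})
print(action, "-", explanation)  # brake hard - rule fired: obstacle is large
```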
And isn't that going to make them harder to regulate?
It makes them much more capable, but isn't that going to be problematic potentially?
So I think there are still objectives that we understand, right?
So the way that you build a system that predicts using machine learning is very utilitarian, right?
You say there's some cost function you want to minimize,
some objective function you want to target.
And then you train it.
And you don't really worry about the reasoning
because the ends in a way justify the means.
But the ends are going to vary.
I think we're going to see, you know, you get into a car and you can, like, adjust the ethics dial.
Right.
Because there's that recent research suggesting that people are totally fine with cars making utilitarian decisions, as long as it's not them that, you know, is in there.
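A minimal sketch of what "some cost function you want to minimize" with an adjustable ethics dial could look like. The function, weights, and numbers are hypothetical illustrations, not drawn from any real vehicle or training pipeline.

```python
# A toy, invented objective function for a hypothetical driving policy.
# "ethics_weight" stands in for the "ethics dial" discussed above.

def total_cost(travel_time, passenger_risk, pedestrian_risk, ethics_weight=0.5):
    """Lower is better; training would tune a policy to minimize this."""
    # ethics_weight near 1.0 prioritises pedestrians, near 0.0 the passenger.
    safety_cost = ethics_weight * pedestrian_risk + (1 - ethics_weight) * passenger_risk
    return travel_time + 10.0 * safety_cost

# Two hypothetical settings of the dial for the same situation:
print(total_cost(travel_time=12.0, passenger_risk=0.2, pedestrian_risk=0.05, ethics_weight=0.9))
print(total_cost(travel_time=12.0, passenger_risk=0.2, pedestrian_risk=0.05, ethics_weight=0.1))
```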
But in a way, we've lived in this world for a long time.
We just haven't had to, we haven't had to ask the difficult questions.
So any time you pick up the phone, in the UK to your utilities provider, or in the US, I understand, to Comcast customer service, you're forced through an algorithm, a kind of non-expert expert system, where the human at the other end has no discretion and has to just ply their way through a script. And we know how frustrating it is to be at that point. Very much so.
Now, as we embed these AI-based systems or machine learning-based systems into our everyday lives,
we're going to face exactly the same issues, which is my car didn't do what I wanted it to do.
My toaster didn't do what I wanted it to do, and I have no way of changing that.
And so this question about where is the utility function and what is a trade-off
has been designed into systems for 30, 50, 100 years or more.
It's just becoming explicit now.
It's becoming much more explicit because it's happening everywhere.
Also, I think there's a big issue with these kinds of systems,
which may work just the way we want them to work statistically.
So if you're a company, then you know that it makes the right decision for 99 percent of the people who... Right. Sort of an actuarial analysis. If you are the one percent person who's phoned up and got a decision which is not one that you like, you don't want to be told, well, it was right statistically. Yeah. Or just
computer says no, you know, you want to have reasons. Or more seriously, if you're in government and you're making some big decision about something, or in a company making a big decision about something, you don't want the computer to say, just trust me, it's statistics, man. You know, you want to have a chain of reasoning. That brings up another aspect of this, which I find quite
amusing, which is that there are quite a lot of sci-fi futures. So Iain Banks's future and
the Star Trek future, where you basically have a post-capitalist society because you can have a
perfect planned economy because an AI can plan the economy perfectly. But, you know, there is
this question of, you know, how plausible that is. But I wonder, you know, to the extent to
which you think AIs will start to be used in policymaking and those sorts of decisions.
Well, I suspect that they will be. And I think that's why, in fact, this whole question that you're raising of trying to make the decision-making process more transparent, even though it's based on statistics and so on, is a very important
research area. I think I'd separate out the two areas of transparency. So one is the black
box nature, right? Can we look inside the box and see why it got to the conclusion it got to?
The other part that's important is to actually say, this is the conclusion we were aiming for. And within policymaking, what becomes interesting then is forcing policymakers to go off and say: that extra million pounds we could have put into
heart research, we didn't, even though it cost four lives, and we put it into something
else because we needed to. The kind of analysis we're talking about, this actuarial analysis, we're doing it every day already with insurance, which is just distributing risk.
So we don't mind if humans do it, but we might mind if machines do it.
I think that's the big issue, is would we be happy to hand over those decision-making
processes to machines, even if they made exactly the same decisions on exactly the same basis,
you know, will society accept that being done in this automated way?
In a sense, we already have.
It's called Excel.
It's not even so much that we trust whether it works.
Let's assume it works.
What we're doing is we're allowing, using Excel,
we're allowing ourselves to manipulate much larger data sets
than we could have done just with pen and paper.
We're sort of organizing the cells in our mind into cells in a spreadsheet.
In cells of a spreadsheet.
And instead of having 100 data samples, you just look at 16 million or whatever, you know, Excel can handle.
So we've already started to explore the space of decisions using these tools, right, to extend human reach?
Those are some of the examples that kind of approximate where we can go.
Because the examples that come to mind, I think historically of Doug Engelbart's notion
of augmented cognition, augmented intelligence.
And then I'm even thinking of current examples like Stephen Hawking, Helene Miali,
wrote a beautiful book called Hawking Incorporated, about how he's essentially a collective
because, I mean, I don't agree with this turn of phrase, but describing him almost as a
brain in a vat, surrounded by this collective of a group of people,
who are anticipating his every need.
And it's not just like Obama's crew
who's helping him get elected
and his support team.
It's actually people who understand him so well
that they know exactly how to help him interpret information.
In your space of possible minds,
we have a whole bunch of minds that we could be, you know,
and some people are trying to figure out already,
which are animals,
and then you've got the sort of social animals,
the group minds there.
And this kind of brings us to another ethical question from what we were talking about previously, which is, yeah, the whole question that the evidence that octopuses (or octopodes, we should say) are extremely intelligent has made some people change their mind about whether they want to eat octopus.
So I don't eat octopus anymore.
Really?
I'm vegetarian, I don't eat anything that thinks.
And as of today, I don't eat crab either.
Anyway, so there's the question of, when a creature with a mind that we recognize turns out to be cleverer than we thought, whether it's right for us to boss it around.
But we're going to get this with AIs as well, aren't we?
Because the usual scenario that people worry about is that we are enslaved by the AIs.
But I'm much more interested in the opposite scenario, which is if the AIs are smart enough to be useful, they will demand personhood and rights, at which point we will be enslaving them.
So let me give you a practical example of that. There's an AI assistant called Amy, which allows you to schedule calendar requests. And so, you know, I'll send an email to you, Murray, and say, I would like to meet you, CC Amy. And then Amy will have a natural language conversation with you, and you think you're dealing with my assistant. One of the things that I found was I started
to treat her very nicely because the way she's been designed as a product from a product manager
perspective is very thoughtful. So you didn't say, organise lunch, slave. Exactly. I didn't do that.
And I was quite nice to her. And then I had a couple of people who are incredibly busy,
write very long emails to her saying, I could try this, I could try this. If it's not convenient,
I could do this. And I thought, this is just not right. There is a misrepresentation on my part.
So I then started to create a slightly apartheid system with Amy, which is: if you're very important, and Murray, you fell into that category, you'll get an email directly from me, and other people will get an Amy invite. And it does start to raise some of the issues that are very present-day. Right, they're very present-day, because right now we have these systems. And I think one of the ethical considerations is that
we need to think about our own attention as individuals and as people, you know. And as we start to interface with systems that are trying to be a bit like the Turk, the chess-playing device that pretended to be human, we're giving attention to something that can't appreciate the fact that we're giving it attention.
And so I'm now using a bit of computer code to impose a cost on you.
Well, actually, it's like when I speak to an automated voice response system.
You know, I speak in a much more precise way when it says read out your policy number.
I know I've got to help the algorithm.
We're shaping our behaviours to sort of adapt to it.
We already do that when we type questions into Google, we miss out the stop words.
And we know that we're just basically helping them.
We don't ask questions anymore.
We peck things out in keywords.
I think Google will expect us to do that less and less as time goes by, and expect the interactions to be more and more in natural language. But can I come in here? So I think that between the two of you,
you've raised the two kind of opposing
sides of this deeply important
ethical question about the relationship
between consciousness and intelligence
and consciousness and artificial intelligence.
Because on the one hand, there's the
prospect of us failing
to treat as conscious
something that really is, something that's very intelligent, and that raises an ethical issue for how we treat them. Then on the other side of the coin, there's the possibility of us inappropriately treating as conscious something that is not conscious and is, you know, perhaps not as intelligent. Both of those things are possible. We can go
wrong in both of those ways. And I think this is really one of the big questions we have to
think about here. And the first, I think the first really important point to be made is that
there's a difference between consciousness and intelligence. And just because something is
intelligent doesn't necessarily mean that it's conscious in the sense of capable of suffering. And
just because something is capable of suffering and conscious
doesn't necessarily mean that it's terribly bright.
So we have to separate out those two things for a start.
That's a great point.
And the thing we seem to care about is consciousness
from an ethical perspective
because we care a lot about the 28-week pre-term baby,
which is not very intelligent.
We care about dogs and cats as well.
So wait, where are we then, when people have expressed fears?
Because one of the things I think has compelled me
to invite all three of you in this,
in this discussion is none of you fall into one of these extremes of like completely, you know,
cheerleading, like the future is dead and, you know, we're going to be attacked and taken over
or the other extreme, which is sort of dismissive, like, this will never happen ever.
Where are we?
We're all in the sensible middle, aren't we?
I guess that you're asking, where are we, you know, historically speaking now, right?
In this evolution and this moment.
And I think the answer is we just don't know.
But again, there's a very, very important distinction to be made.
This is the trouble with academics, where we just want to make distinctions.
They are kind of important.
Journalists want to make generalisations.
They are kind of important.
And this is a case where it's really important
to distinguish between the short-term
specialist AI,
the kind of tools and techniques
that are becoming very, very useful
and very economically significant,
and general intelligence,
artificial general intelligence,
or human-level AI.
And we really don't know how to make that yet.
And we don't know when we're going to know how to make that.
You don't sound like a believer in the kind of take-off theory
that, you know, the AI is able to develop a better AI in, you know, exponentially less time.
And so you get this sort of runaway.
And I think that's a very unconvincing argument.
It assumes all sorts of things about how things scale.
So I think the takeoff argument, it has a sense of plausibility.
It's the timing that's the issue.
So I can't deny the possibility that we could build systems that could program better systems
and that could start to program better systems.
But the point is that a system that's twice as good, if it's, say, an order, you know, an order,
It might scale non-linearly.
So it might be 256 times harder
to build a system that's twice as good.
And so every incremental improvement
is going to take longer
and it's going to take a lot longer.
And improvements in other areas
like Moore's Law and so on
are not fast enough
to allow each incremental generation
of better intelligence
to arrive sooner than the previous one.
So there's a simple scaling argument
that this need not be linear.
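A small worked version of that scaling point, under invented assumptions: if effort grows as capability to the eighth power, a system twice as good is 2^8 = 256 times harder to build, and at a Moore's-law-style doubling of compute every couple of years, each new generation never arrives sooner than the last. The exponent and doubling period below are illustrative assumptions, not claims from the conversation beyond the 256x figure.

```python
import math

EFFORT_EXPONENT = 8            # assumption: effort ~ capability ** 8
COMPUTE_DOUBLING_YEARS = 2.0   # assumption: usable compute doubles every ~2 years

effort_per_doubling = 2 ** EFFORT_EXPONENT                                    # 256x more effort
years_per_doubling = COMPUTE_DOUBLING_YEARS * math.log2(effort_per_doubling)  # 16 years

print(f"each doubling of capability needs {effort_per_doubling}x the effort")
print(f"and waits about {years_per_doubling:.0f} years for compute to catch up")
```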
There's also a classic complexity break argument.
I mean, there's so many different arguments.
There are lots.
And you know, as we start
to peel apart the brain and our understanding of the neurological basis for how kind of cognition
functions work, we learn more and more, and we see more and more complexity as we dig into it.
So in a sense, it's a case of we don't know what we don't know, but we've been here before,
before we had understood this idea of there being a magnetic field and needing to, you know,
represent physical quantities with tensors rather than with scalars or vectors.
We didn't see magnetic fields.
We didn't understand them.
We didn't have mechanisms for manipulating them because we couldn't measure them, and therefore we couldn't affect them. And there would have been this whole
set of physical crystals and rocks that were useless, because we didn't know that they had these
magnetic properties, and we didn't know we could use them. Silicon dioxide being a great example,
totally useless in the 17th century, quite useful now. And so at some point, we might say that
the reason we think this looks very hard or it's not possible is because we're actually just not
seeing these physical quantities. When we touch on this idea of consciousness, you know, there
is this idea of integrated information theory, which is this theory that, you know, consciousness
is actually an emergent property of the way in which systems integrate information and it's
almost a physical property that we can measure. Yeah, or we could be like, I suppose,
like Babbage saying, I can't imagine how you could ever build a general purpose system using this
architecture, because he can't imagine a non-mechanical architecture for computing. Right.
Murray, where do you fall in this singularity debate? And you're not allowed to say you'd make any distinctions. Well, without making any distinctions, I'm still going to be
boringly academic because I want to remain kind of neutral because I just don't, I think we just
don't know. I think these arguments in terms of recursive self-improvement, the idea that if
you did build human-level AI, then it could self-improve. I think there's a case to be answered
there. I think it's a very good argument. And certainly I do think that if we do build human-level
AI, then that human-level AI will be able to improve itself. But I kind of agree with Tom's
argument that it doesn't necessarily entail that it's going to be exponential. Yeah, right. So actually, to pause there for a moment: you started off very early on talking about some of the drivers for why you're excited about this time, why this time might be different. What are some of those, more specifically? Like Moore's law, which we've talked about. Well, because that's obviously one of the scalers that sort of helps. Yeah. Yeah. So basically, what's driving the whole machine learning revolution, if we can call it that, is, I mean, there are three things. One is Moore's Law, so the availability of a huge amount of computation; in particular, the development
of GPUs or the application of GPUs to this whole space has been terrifically important.
So that's one.
Two is big data, or just the availability of very, very large quantities of data, because
we have found that algorithms that didn't really work terribly well on what seemed like a lot
of data, you know, 10,000 examples, actually work much better if you have 10 million examples. So they work extremely well. So the unreasonable effectiveness of data, as some
Google researchers called it. And so that's two. And then the third one is some improvements in
the algorithms. So there have been quite a number of little tweaks and improvements to
ways of using back propagation and the kind of neural network architectures and so on. So I'd add three more to that list. One is that, in practical software architectures, we're starting to see the rise of microservices. What's nice about microservices is that they are very, very cleanly defined systems.
So you don't need generalized intelligence.
You just need very specialized optimizations.
And as our software moves from these hideous spaghettis
to these API-driven microservice architectures,
you can apply machine learning or AI-based optimizations
to improve those single interfaces.
Right, that's actually closely tied to the containerization of code
at the server level.
There's so many connected things with that.
So it's much easier to insert a bit of intelligence into a process.
And then the other two are, so there's this phrase,
which I'm sure Andreessen Horowitz is familiar with,
which is software is eating the world.
And as software eats the world,
there are many more places where AI can actually be relevant and useful.
So you can start to use AI in a food delivery service
because it's now a software coordination platform,
not chefs in a kitchen,
and therefore more places for it to play.
And this is a commercial argument.
And so Murray has explained some of the technical reasons.
The third commercial argument is accelerating returns.
So as soon as you start within a particular
industry category to use AI and get benefit from it, the increased profits you get, you
reinvest into more AI, which means your competitors have to follow suit. So you can't now
build an Xbox video game without tons of AI. You can't build a user interface without using
natural language processing and natural language understanding. So that forces the
allocation of capital into these sectors, because that's the only way that you can compete.
So given those six drivers, not three, who are the entities that are going to win in this game?
Like, is it startups? Is it the big companies? Is it government?
Well, if you were to ask me to place a bet at the moment, I would place it on the big corporations like Google.
Basically, they have access to the data and everything else you can buy, but that you can't.
Yeah. Right. And also, they have the resources to buy whoever they want.
Right.
And an interesting phenomenon we're seeing in academia these days is that it used to be the case that the people who, you know, were very interested in ideas and intellectual things wouldn't necessarily be tempted away to the financial sector, so we'd still retain a good chunk of them in universities to do PhDs and so on. But now companies like Google and
Facebook can hoover up quite a few of those people as well, because they can offer intellectual
satisfaction as well as a decent salary. But also they are getting the, you know, the Silicon
Valley is the new Wall Street argument. They are getting the people who used to go into financial
services. Which is a good thing. I remember the head of a Chinese sovereign wealth fund saying a few years
ago, you Westerners are crazy. You educate your people in these fantastic universities and then
you take the best people and you send them into investment banks where they invent things that
blow up your economy. Why don't you have them do something useful? We used to say that too.
And the whole of, you know, the Chinese Politburo, they're all engineers, and they really value sort of engineering culture and engineering skills, and they can't believe
that we've sort of wasted it in this way. So I think it's fantastic that, you know, now there's
less money to be made on Wall Street than maybe there is in Silicon Valley, and people are going
west. I think that's only got to be a good thing.
Coming back to who the winners might be, I mean, I think there is a strong argument to say
that having the data makes a lot of the difference. Yeah, no, I think that's the crucial
distinction. I think you'd be hard pushed to, say, look at voice interfaces, you know, between Apple, Microsoft, Google, Baidu, and Nuance. That's quite a crowded field already. So it does
feel like there are a lot of AI startups who are going to run up against this problem of both
data and distribution. But that said, there are particular niche applications where you can
imagine a startup being able to compete because it's just not of interest to a large company
now. And they may then be able to take a path to becoming independent.
Look at, say, Boston Dynamics, because one of the ways you train machines to walk
like animals is not to use a massive internet data set of how cats walk. So in that case,
not having access to that data is not an impediment. And you can develop amazing things. And
they have done. And then, of course, it's been acquired by Google.
Actually, DeepMind are another example of the same thing, because if you want to
apply reinforcement learning to games and that's enabled them to make some quite fundamental
sort of progress, you don't need vast amounts of data either.
We're just reinforcing your thesis, though, Murray, which is that Google's going to buy
all these companies.
Ah, well, yeah. But, well, I ought to put in a little pitch for academia.
Because the one thing that you do retain by staying in academia is a great deal of freedom and the ability to disseminate your ideas to whoever you want.
So you're not in any kind of silo.
And some of these companies are very generous
in making things, making stuff available.
Right, so TensorFlow.
TensorFlow is a great example of that
that we've just seen Google release.
But nevertheless, you know,
all of these companies are ultimately driven by profit motive
and are going to hold things back.
We've just seen, for example, Uber has snaffled
the entire robotics department from Carnegie Mellon.
Presumably the motivation of the people there
is that, you know, finally, the work that they've been doing on self-driving vehicles and so on can actually get out into the world. And you can actually make a difference.
And, yeah, I'm sure they get much better pay.
But, I mean, the main thing is that rather than doing all of this in a theoretical way,
here is a company that's prepared to fund you to do what you want to do.
You can build and have impact.
In the real world, in the next decade.
And that must be amazingly attractive.
It is incredibly attractive.
And, of course, many, many people, you know, will go into industry in that way.
But there's also something attractive for a certain kind of mind in staying in academia,
where you can also explore maybe some larger and deeper issues. I mean, for example,
you know, Google aren't going to hire me to think about consciousness.
No.
Or they might.
They might do.
There's also this question about the kind of questions that you will look at as an academic.
So the trolley problem being a good one, but there are all sorts of ethical questions that
don't necessarily naturally play a part in your thinking when you're thinking about, you know, Wall Street.
That's right.
And corporate entities aren't set up to think about that.
Like Patrick Lin, who studies the ethics of robotics and AI.
And that entire work is funded by government contracts and distributed through universities.
So, okay, so the elephant in the room: AI and jobs.
What are our thoughts on that?
Well, I think look at where we are today, which is that we're quite far away from a generalized intelligence.
And McKinsey just looked at this question about the automation of the workforce.
And they did something very interesting.
They looked at every worker's day and they broke it down into the dozens of tasks they did
and figured out which ones could be automated.
And their conclusion was, we'll be able to automate quite a bit,
but by no means the entirety of any given worker's job,
which means the worker will have more time for those other bits,
which were always the social, emotional, empathetic, and judgment-driven aspects of their job,
whether you're a delivery person.
Yeah, no, I read that and I thought, hang on a minute, though,
because what they're looking at is they're looking at the jobs of basically well-paid information workers
and saying, well, you can't automate their jobs away,
but the bits you can automate are the bits that are currently,
many of them are bits that are currently done for them by other people.
So the typing pool, you know, we've got rid of the typing pool,
because we all type for ourselves.
Factory workers.
Exactly.
So, you know, this means that the support workers for those people
are potentially put out of business by AI.
Or they're moved up.
Yeah, or they have to find something else to do.
But I think just because the architects are safe
doesn't mean that the people who work for the architects are...
If you walk down a British High Street,
the main street today,
one of the things you'll notice is a plethora of massage parlors,
nail salons, and barber shops.
Service businesses.
Because these are the things that you can't do through Amazon.
Everything else you can do through Amazon. So exterior design, yoga, Zumba, whatever, that's the future of employment.
At coffee shops.
Okay, so we've talked a lot about some of the abstract notions of this.
And, you know, this is not a concrete answer
because we're talking about a fiction film.
But how possible in reality is the Ex Machina scenario?
And a warning to all our listeners
that spoilers are about to follow.
So if you're really bitter about spoilers,
you should probably sign off now.
How realistic is it that the character, the main embodied AI,
Ava, could essentially fight back against her enslavement?
To me, the most fascinating part of this story,
and we have no time to talk about it right now,
but I do want to explore this at some point in the future,
is sort of the gendering of the AI,
which I think is incredibly fascinating.
How real is that scenario?
Yeah, so the whole film is predicated on the idea,
well, it seems to be predicated on the idea
that Ava is not only a human-level AI,
but is a very human-like AI.
Very human-like. Of course, she looks like a human.
But, I mean, human-like in her mind.
And her objectives.
And her objectives.
Her mood.
Her emotions.
So, you know, so she, if you were a person in those circumstances, you would want to get out, right?
And in fact, very often science fiction films that portray AI, that's a fundamental premise
that they use for how they work, is that they assume that we're going to assume that
AI is very much like us and has the same kinds of motives and drives.
for good or for ill. They can be good motives or bad motives. They could be evil or they could be good, you know. But it's not necessarily the case that AI will be like that. It all depends how we build it. And if you're just going to build something that is very, very good at making decisions and solving problems and optimising...
It may just sit down and say, I just want to sit here and do math. We really have no idea what their motivations will be. Yeah, I mean, if the AI had been modelled on a 45-year-old dad, it would have been perfectly happy being locked up in its shed at the bottom of the garden with an Xbox.
And some magazines, right?
But then, just moving on a little bit from that, though,
it is worth pointing out some of the arguments
that people like Nick Bostrom and so on have advanced
that you shouldn't anthropomorphize these creations.
You shouldn't think of them as too human-like.
In the film, I saw it three times, as I mentioned.
On the third watching, I noticed that there's a scene
where Nathan has a photo of himself on his computer
where he programs, like he's at his computer all day,
like hacking the code, which I think is so fascinating
because there's almost this narcissistic notion,
which kind of ties to your notion of the anthropomorphization of the AI.
You used the term anthropomorphism, and I've noticed you use the word creatures to refer to AI.
And I think that's really telling
because they are going to be more like aliens
or more like animals than they are like humans.
I mean, the chances of them being just like humans are very small.
We might try and architect their minds
that they are very, very human-like.
But can I just come back to the Nick Bostrom kind of argument
because he points out that although we shouldn't anthropomorphize the AI,
nevertheless, if we imagine this very, very powerful machine capable of solving problems and answering questions,
that there are what people who think about this refer to as convergent instrumental goals.
You'll have to break that down for us really quickly, yeah.
So anything that's really, really smart is going to have a number of goals that any such agent is going to share.
And these are going to be things like self-preservation and gathering resources.
If it's sufficiently powerful, then any goal that you can think of,
if it's really, really good at solving that goal,
then it's going to want to preserve itself, first of all,
because how can it, you know, maximize the number of paper clips in the world
to use Nick Bostrom's argument if it doesn't preserve itself
or if it doesn't gather as many resources as it can.
So that's their argument for why we have to be cautious
about building something that is a very, very powerful AI,
very powerful optimizer.
Because it'll always be optimising for that.
So I think the very important thing here is that the media tends to get the wrong end of the stick here
and think of this as some kind of evil terminator-like thing.
And so we might think that those arguments are flawed, the arguments by Bostrom et al.
Maybe we do, maybe we don't.
But I think there's a very, very serious case to answer there.
And in order to answer it, you have to read their arguments.
You can't just kind of assume what you think their arguments are.
Right, the derivative.
That's the problem with a lot of technology discussion in general: it always revisits these arguments in a very derivative way versus reading the original.
But putting that exhortation aside,
how do people make sense of this?
Like, how do they make sense of what is possible?
So how do we think about the future, really,
when it comes to artificial intelligence?
And I think the only way to do it
is actually to kind of set out the whole tree
of possibilities that we can imagine
and try to, you know,
not sort of fixate on one particular way that things might go
because we just don't know where we're going to go down that tree at the moment.
So there's a whole tree of possibilities.
Is AI going to be human-like, or not?
Is it going to be embodied or not?
Is it going to be a whole collection of these kinds of things?
Is it going to be a collective?
Is it going to be conscious or not?
Is it going to be self-improving in this exponential way or not?
I don't think we really know, but we can lay out that huge range of possibilities
and we can try to analyze each possibility and think,
what would steer us down in that direction and what would the implications be?
It's a great way to approach it. Well, that's another episode of the a16z Podcast.
Thank you so much for joining, everyone.
Thank you.
Thank you.