a16z Podcast: The History and Future of Machine Learning
Episode Date: June 19, 2019

How have we gotten to where we are with machine learning? Where are we going? a16z Operating Partner Frank Chen and Carnegie Mellon professor Tom Mitchell first stroll down memory lane, visiting the major landmarks: the symbolic approach of the 1970s, the "principled probabilistic methods" of the 1980s, and today's deep learning phase. Then they go on to explore the frontiers of research. Along the way, they cover:

- How planning systems from the 1970s and early 1980s were stymied by the "banana in the tailpipe" problem
- How the relatively slow neurons in our visual cortex work together to deliver very speedy and accurate recognition
- How fMRI scans of the brain reveal common neural patterns across people when they are exposed to common nouns like chair, car, knife, and so on
- How the computer science community is working with social scientists (psychologists, economists, and philosophers) on building measures for fairness and transparency for machine learning models
- How we want our self-driving cars to have reasonable answers to the Trolley Problem, but no one sitting for their DMV exam is ever asked how they would respond
- How there were inflated expectations (and great social fears) for AI in the 1980s, and how the US concerns about Japan compare to our concerns about China today
- Whether this is the best time ever for AI and ML research, and what continues to fascinate and motivate Tom after decades in the field

The views expressed here are those of the individual AH Capital Management, L.L.C. ("a16z") personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments and certain publicly traded cryptocurrencies/digital assets for which the issuer has not provided permission for a16z to disclose publicly) is available at https://a16z.com/investments/. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision.
Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
Transcript
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.
Hi, and welcome to the a16z podcast. I'm Frank Chen. Today I'm here with Carnegie Mellon's Professor Tom Mitchell, who has been involved with machine learning basically his entire career. So I'm super excited to have this conversation with Tom, where he can tell us a little bit about the history and where all of our techniques came from. And we'll spend time talking about the future, where the field is going. So Carnegie Mellon's been involved in sort of standing up the fundamental teaching institutions and research institutions of, you know, the big
areas of computer science, artificial intelligence and machine learning. So take us back to the early
days, you and Newell and Jeff Hinton are teaching this class. What was the curriculum like?
Like, what were you teaching? Pretty different, I imagine, than what we teach undergrads today.
That's right. Well, at the time, this was the 1980s. So artificial intelligence at that point
was dominated by what we would call symbolic methods, where things like formal logic would be used
to do inference.
And much of machine learning was really about learning symbolic structures,
symbolic representations of knowledge.
But there was this kind of young whippersnapper, Jeff Hinton,
who had a different idea.
And so he was working on a book with Rumelhart and McClelland
that became the very well-known Parallel Distributed Processing book
that kind of launched the field of neural nets.
And they were, if I remember, psychologists, right?
Yeah, Jay McClelland was a psychologist here at CMU.
Rumelhart, kind of a neuroscientist, and more.
He was a very broad person.
Yeah.
And Jeff.
So the three of them were kind of the rebels who were taking things off in a different paradigm.
Right.
The Empire wanted us to do research on knowledge representation and inference and first order logic.
I remember as an undergrad, I took this computer-aided class that John Etchemendy called Tarski's World,
where we learned all about first-order logic, right?
What could you prove?
What could you not prove?
Right.
And so that's what the establishment, quote unquote, was teaching.
And then Jeff was the rebel off in neural network land.
And, you know, he gets his reprise later.
So take us back to the world of knowledge representation, because I'm actually seeing a lot of startups these days who are trying to bring back some of these techniques to
complement deep learning because, you know, there are well-known challenges with deep learning, right?
Like we're not encoding any priors, we're learning everything for the first time.
We need tons of labeled data sets to make progress.
And so take us back to the days of knowledge representation.
What were we trying to solve with those set of techniques and how might we use them today?
So back in the 80s and the 90s, and I have to say that some of the really senior people in the field were totally
devoted to this paradigm of logical inference, logical representations.
People like John McCarthy, for example, were very strong proponents of this.
And really, essentially, just saw reasoning as theorem-proving,
and therefore, if we're going to get computers to do it, that's what we have to do.
There were some problems with that, and there still are.
One example that I remember from back then was the banana-in-the-tailpipe problem.
These logical systems were used to do things like plan a sequence of actions to achieve a goal.
Like how would you get from here to the airport?
Well, you'd walk to your car, you'd turn the key in, turn the car on, you'd drive out of the parking lot, get on the interstate, go to the airport exit, etc.
But what if there's a banana in the tailpipe?
Even back then, before it became a meme in Beverly Hills Cop, we were worried about the banana in the tailpipe.
That's right.
And the point of the banana in the tailpipe is that there are an infinite number of other things that you don't say when you spin out a plan like that.
And any proof, if it's a proof, really, is going to have to cover all those conditions.
And that's kind of an infinitely intractable problem.
You couldn't encode enough to do the inference you needed for your plans to be successful.
Right.
And so one of the big changes between the 80s and 2019 is that we no longer really think in the field of AI that inference is proving things.
Instead, it's building a plausible chain of argument.
And it might be wrong.
and if it goes wrong, if there is a banana in the tailpipe,
you'll deal with it when it happens when you figure it out.
Right, so we move from certainty and proof
to sort of probabilistic reasoning, right?
Bayesian technique started becoming popular.
Right, and so in the late 90s, in fact.
So if you look at the history of machine learning,
there's an interesting trajectory where in maybe up to the mid-80s,
things were pretty much focused on symbolic representations.
Actually, if you go back to the 60s, there was the perceptron,
but then it got swallowed up by the end of the 60s
by symbolic representations and trying to reason that way
and trying to learn those kind of symbolic structures.
Then when the neural net wave came in around the late 80s, early 90s,
that started competing with the idea of symbolic representations.
But then in the late 90s, the statisticians moved in and probabilistic methods became very popular.
And at the time, if you look at this history, you can't help but realize what a social phenomenon advances in science and technology are.
People influencing each other at conferences.
Right.
And shaming them into adopting a new paradigm.
And so one of the slogans, or one of the phrases you kept hearing when people started working on statistical, probabilistic methods, they would never call them that.
They would have called them instead principled probabilistic methods, just to kind of shine a light on the distinction between neural nets, which are just somehow tuning a gazillion parameters and the principled methods that were being used.
And so that became really the dominant paradigm in the late 90s and kind of remained in charge of the field up through until about 2009, 2010 when now, as everybody kind of knows, deep networks made a very serious revolution showing that they could do all kinds of amazing things that hadn't been done before.
Yeah, we really are living in a golden age here in deep learning and neural network land.
But let's go back to the original sort of rebel group, right?
This is Jeff Hinton hanging out in the shadow of sort of first order logic and saying,
no, this is going to work.
I think they were loosely inspired by the architecture of the brain.
Is that?
Definitely.
Definitely.
The kinds of arguments, Jerry Feldman was one of the people who gave some of these arguments.
He said, look, you recognize your mother in about 100 milliseconds.
Right.
Your neurons can't switch state faster than a few milliseconds.
And so it looks like at most the chain of inference that you're doing to go from your retina to recognize your mother can only be about 10 deep just from the timing.
Oh, fascinating.
So it was an argument of sort of how long it took to recognize your mother.
Right.
And then how slow your neurons are, right?
Because they're basically, these are biochemical
processes, right?
Right.
Fascinating.
Really a computational efficiency argument.
And therefore, Jerry would say, there must be a lot of stuff happening in parallel.
It must be a very wide chain of inference if it's only 10 layers deep.
And then he says, look at the brain.
Look at visual cortex.
Yeah.
Got it.
And so neuroscientists at this time were making progress in understanding the structure of neurons
and how they connected to each other and how they form connections,
and those connections could change strength over time, right?
All mediated by chemical interactions. And the computer science community was inspired by this.
Definitely.
And the level of abstraction at which the computational neural nets met up with the real biological neural nets
was not a very detailed level.
But where they kind of became the same was this idea of distributed representations:
that, in fact, it might be a collection of hundreds or thousands or millions of neurons
that simultaneously were firing that represent your mother, instead of a symbol.
Right, right.
So it's such a completely different notion of what it even means to represent knowledge.
And really, one of the most exciting things that has come out of the last decade of research in deep networks is
a better understanding, although we still don't fully understand, of how these artificial neural
networks can learn very, very useful representations.
And for me, a simple example that summarizes it in a sentence: we have neural
networks now that can take as input an image, a photograph, and output a text caption for
that photograph. What kind of representation must be in the middle of that neural network
in order to actually capture the meaning well enough that you can go from a visual stimulus
to the equivalent textual content? It must be capturing a very basic
core representation of the meaning of that photograph. Yeah, and one of my favorite things
about the brain, which is otherwise this very sort of slow computer, right, if you just look at
neuron speeds is that not only can they do this, but they can actually use this, the representation
they're deriving to actually inform our actions and our plans and our goals, right? So not only is it,
like, this picture has a chair in it, but, like, I can sit in that chair. I can simulate sitting
in that chair. I think, like, that chair is going to support my weight. And all of these things
happen in, like, milliseconds, despite the fact that the basic components of the brain are very slow.
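To make Tom's point concrete, here is a minimal, hypothetical sketch in PyTorch of the kind of image-to-caption network he describes, where a single vector in the middle has to carry the meaning of the photo before it is unrolled into words. The architecture, layer sizes, and vocabulary size are illustrative assumptions, not any specific published system.

```python
# Illustrative sketch only: a CNN encoder squeezes the photo into one vector
# (the "representation in the middle"), and an LSTM decoder emits a caption.
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: a small CNN mapping a 3x224x224 image to a single vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # Decoder: an LSTM language model that emits one token at a time.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # The image vector becomes the decoder's initial hidden state:
        # everything the caption "knows" about the photo passes through it.
        meaning = self.encoder(images)            # (B, hidden_dim)
        h0 = meaning.unsqueeze(0)                 # (1, B, hidden_dim)
        c0 = torch.zeros_like(h0)
        words = self.embed(captions)              # (B, T, embed_dim)
        out, _ = self.lstm(words, (h0, c0))
        return self.to_vocab(out)                 # (B, T, vocab_size)

model = CaptionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```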
Yeah, it's an amazing thing. In fact, now that you mention it, I have to tell you, a half of my research
life these days is in studying how the human brain represents meaning of language.
We use brain imaging methods to do this.
And in one set of studies, we put people in an fMRI scanner, and we showed them just common
nouns like automobile, airplane, a knife, a chair, and so forth.
And we would get a picture, literally, with about 3 millimeter resolution of the three-dimensional
neural activity in their brain as they think about these different words.
And we're interested in all kinds of fundamental questions.
Like, what do these representations look like?
Are they the same in your brain and my brain?
Given that they don't appear instantaneously, by the way, it takes you about 400 milliseconds to
understand a word, if I put it on the screen in front of you.
What happens during that 400 milliseconds?
How do these representations evolve and come to be?
And one of the most interesting things we found, we studied this question by training a
machine learning system to take as input an arbitrary noun and to predict the brain image
that we will see if a person reads that noun.
Now, we only had data for 60 nouns at that time.
So we didn't train it on every noun in the world.
We only trained it on 60.
In fact, what we did was we trained it only on 58
so we could hold out two nouns that it hadn't seen.
And then we would test how well it could extrapolate
to new nouns it had never seen.
Fascinating.
By showing it the two held out nouns
and having it predict the images.
Then we'd show it two images, and we'd say,
well, which of those is strawberry
and which of those is airplane?
And it was right 80% of the time.
Wow.
So you could actually predict essentially brain state, right?
I'm going to show you a strawberry.
Let me predict the configuration of your neurons
and who's lighting up and who's not.
Right.
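For readers who want the mechanics, here is a toy sketch of the leave-two-out "2 vs. 2" evaluation Tom describes, run on synthetic data. The linear model, feature dimensions, and voxel counts are illustrative assumptions; the actual study's word features and fMRI data were, of course, different.

```python
# Toy sketch of the leave-two-out protocol: fit a linear map from word
# features to voxel activity on 58 nouns, predict images for the 2 held-out
# nouns, then check whether the predictions match the true images better
# than the swapped assignment. All data here is synthetic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_nouns, n_feats, n_voxels = 60, 25, 500
W_true = rng.normal(size=(n_feats, n_voxels))
X = rng.normal(size=(n_nouns, n_feats))                       # word features
Y = X @ W_true + 0.5 * rng.normal(size=(n_nouns, n_voxels))   # "fMRI" images

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

wins, trials = 0, 0
for i, j in combinations(range(n_nouns), 2):   # every held-out pair of nouns
    train = [k for k in range(n_nouns) if k not in (i, j)]
    # Ordinary least-squares fit on the 58 training nouns.
    W, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)
    pred_i, pred_j = X[i] @ W, X[j] @ W
    correct = corr(pred_i, Y[i]) + corr(pred_j, Y[j])
    swapped = corr(pred_i, Y[j]) + corr(pred_j, Y[i])
    wins += correct > swapped
    trials += 1

print(f"2-vs-2 accuracy on synthetic data: {wins / trials:.2f}")
```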
And so then we had a model that we trained with machine learning
that captured something about representations in the brain.
We used that to discover
that the representations are almost identical in your brain and mine. We could train on one set of
people and decode what other people were thinking about. And we also found that the representations
themselves are grounded in parts of the brain that are associated with perception. So if I give
you a word like peach, the parts of your brain that code the meaning of that are the ones
associated with the sense of taste, and with manipulation, because sometimes you pick up a peach,
and with visual color.
Yeah, that is fascinating.
Well, it's so exciting to think that the brain structures are identical across people because,
you know, what everybody wants is sort of that, remember that scene in the matrix where you sort
of like, you know, you're jacked straight into your brain and you're like, oh, now I know
Kung Fu, right?
Like, this is what we want, right?
We want to learn new skills and, you know, sort of new facts and new inferences just like, you
know, like loading an SD card, right?
And so the fact that we are sort of converging to the same structures in the brain, at least, makes
that theoretically possible.
We're a ways away from that.
But I'm with you.
Yeah.
Awesome.
So another area that interests you is finding biases.
And why don't we start by distinguishing sort of two types of biases?
Because, you know, when you hear the word bias today in machine learning, you're mostly thinking
about things like, gee, let me make sure my data set is representative,
so I don't draw the wrong conclusion from that, right?
So the classic example being here that I don't do good recognition on people with darker skin
because I didn't have enough of those samples in my data set.
And so the bias here is you've selected a very small subset of the target data set that you want to cover
and make predictions on, and therefore your predictions are poor.
So that's one sense of bias.
But there's another sense of bias, that is, statistical bias, which is kind of what you want out of an algorithm.
So maybe talk about this notion.
Yeah, sure.
And this is really a very important issue right now because now that machine learning is being used in practice in many different ways, the issue of bias really is very important to deal with.
You gave an example.
Another example would be, for instance, you have some historical loan applications and which ones were approved.
But maybe there is some bias, say that people of one gender receive fewer loan approvals just
because of their gender.
And if that's inherent in the data
and you train a machine learning system that's successful,
well, it's probably going to learn the patterns
that are in that data.
So the notion of what I'll call social bias,
socially unacceptable bias,
is really this idea that you want the data set
to reflect the kind of decision-making
that you want the program to make
if you're going to train the program.
And that's the kind of
common sense notion of bias that most people talk about. But there's a lot of confusion in the
field right now, because bias is also used in statistical machine learning with a very
different meaning. We'll say that an algorithm is unbiased if the patterns that it learns, the
decision rules that it learns for approving loans, for example, reflect correctly the
patterns that are in the data. So that notion of statistically unbiased just means the algorithm's
doing its job of recapitulating the decisions that are in the data. The notion of the data
itself being biased is really an orthogonal notion. And there's some interesting research going on
now. So, for example, typically when we train a machine learning system, say, to do loan approval,
A typical thing would be, you can think of these machine learning algorithms as optimization algorithms.
They're going to tune maybe the parameters of your deep network so that they maximize the number of decisions that they make that agree with the training examples.
But if your training examples have this kind of bias that maybe females receive fewer loan approvals than males,
there's some new work where people say, well, let's change that objective that we're trying to optimize.
In addition to fitting the decisions that are in the training data as well as possible,
let's put another constraint that the probability of a female being approved for a loan
has to be equal to the probability of a male being approved.
And then subject to that constraint, we'll try to match as many decisions as possible.
So there's a lot of really technical work right now trying to understand if there are ways of thinking more creatively, more imaginatively, about how to even frame the machine learning problem so that we can take what might be biased datasets, but impose constraints on the decision rules that we want to learn from those.
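Here is a minimal sketch of the kind of constrained objective Tom describes: fit the historical decisions while penalizing any gap between group approval rates (a soft demographic-parity constraint). The synthetic data, group encoding, and penalty weight are illustrative assumptions, not a production method or the specific work he mentions.

```python
# Illustrative sketch: train a loan-approval classifier to match (possibly
# biased) historical decisions, while penalizing the difference between the
# approval rates it gives the two groups.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 1000, 8
X = torch.randn(n, d)                   # applicant features (synthetic)
group = (torch.rand(n) < 0.5).float()   # 1.0 = female, 0.0 = male (synthetic)
# Historical labels are biased against group == 1 in this toy setup.
y = (X[:, 0] - 0.5 * group + 0.3 * torch.randn(n) > 0).float()

model = nn.Linear(d, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
bce = nn.BCEWithLogitsLoss()
lam = 5.0                               # strength of the fairness penalty

for step in range(300):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    p = torch.sigmoid(logits)                         # predicted approval prob.
    fit_loss = bce(logits, y)                         # match historical data
    # Soft demographic-parity constraint: equalize mean approval per group.
    gap = (p[group == 1].mean() - p[group == 0].mean()).abs()
    (fit_loss + lam * gap).backward()
    opt.step()

with torch.no_grad():
    p = torch.sigmoid(model(X).squeeze(1))
    print("approval rate (group 1):", round(p[group == 1].mean().item(), 3))
    print("approval rate (group 0):", round(p[group == 0].mean().item(), 3))
```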
Yeah, that's super interesting.
We're sort of envisioning the world we want rather than the data of the world
that we came from, right? Because we might not be happy with the
representativeness, I guess, right, of the data that we came from. And it's causing people to
look a lot more carefully, even the very notion of what it means to be biased and what it means
to be fair. Are there good measures for fairness that the community is driving towards, or do we
not really have a sort of an objective measure of fairness? We don't. We don't have an objective
measure. And there's a lot of activity right now discussing that, including people like
our philosophy professor David Danks, who is very much part of this discussion. And social
scientists, technology people, all getting together. And in fact, there are now a couple
conferences centered around how to introduce fairness and explainability and trust
in AI systems. It's a very important issue, but it's not only technical. It's partly
philosophical, social, trying to get our heads around what it is that we really
want. That's a beautiful thing about AI and about computers in general. It forces you to be
way more precise about what you want when you are getting a computer to do it. Right. And so even if
you just think about self-driving cars: when I was 16, I took a test,
and I was approved to be a human driver.
They never asked me questions about whether I would swerve to hit the old lady
or swerve to hit the baby carriage.
Right.
The trolley problem was not on the DMV test.
Exactly.
But it's on the test for the computers.
Right.
Yeah, it's really interesting that we sort of hold computers to a different standard
because we're programming them.
Right.
We can be explicit and we can have them sort of hit goals or not, right?
And those are design decisions rather than sort of, you know,
bundled into a brain, right, of a person.
Yeah.
And so I think of, you know, look, banks historically have hired loan officers.
Those loan officers may or may not be fair, right, according to the definitions that we're sort of talking about now.
But we kind of hold those humans, those human loan officers to a different standard than we would hold the algorithms.
That's true.
And, I mean, who knows which way it will go in the future.
Right.
If we continue to have human loan officers and some computer loan officers, will we up the constraints on
the humans so that they pass the same qualifications, or will we drop the constraints
on the computers so that they're no more titrated than the people?
Yeah.
Yeah, that's fascinating, right?
Who is the master?
Who is the student, right?
The intuitive thing is, let's make the humans the models that we train our systems
to approach, right, like human competence being the goal.
The other way to think about it is, no, we can actually introduce constraints like, you know,
an equal number of men and women, or an equal number of this
ethnicity versus another ethnicity, and our algorithm, as a result of those constraints, could be
more fair than humans. And so we invert it, right? Let's get humans up to that level of impartiality.
Right. In fact, maybe the algorithm can end up teaching the human how to make those decisions in a
different way, so that the fairness outcome you want is really achieved. Yeah, that's fascinating.
And it's great that sort of not just computer scientists are involved in this conversation, but
the ethicists and the social scientists are weighing in.
So that gives me hope that, you know, sort of smart people across disciplines are really grappling with this.
So we get the outcomes that we want.
Well, sort of a related topic to this, right, sort of social impact of AI.
You recently co-wrote a paper with MIT's Erik Brynjolfsson about the workplace implications.
And I think you also testified on Capitol Hill about what AI is going to do to jobs.
So why don't you talk a little bit about what you guys found?
in the paper. Well, this actually started with Eric and I co-chairing a National Academy study
on automation in the workforce, which was a two-year affair with a committee of about 15
experts from around the country who were economists, social scientists, labor experts,
technologists. And in that study, I think we learned so much. It turns out when you really dig
into the question of what's going to be the impact of AI and automation on jobs, you can't
escape noticing that there are many different forces that automation and technology is exerting
on the workforce. One of them, of course, is automation. Toll booth operators are going away. Do not
sign up to be a toll booth operator. But in other kinds of jobs, instead of the job going away,
there will be a shift, a redistribution of the tasks.
So take, for example, doctor.
A doctor has multiple tasks, for instance,
they have to diagnose the patient,
they have to generate some possible therapies,
they have to have a heart-to-heart discussion with the patient
about which of those therapies the patient elects to follow.
And they have to bill the patient.
Now, computers are pretty good
at billing, but they're getting better at diagnosis, and they're getting
better at suggesting therapies. For example, you know, just in the last couple of years, we've seen
computers that are at the same level, if not a little better than doctors, at things like
diagnosing skin cancer and other kinds of diseases. The radiologists, the tissue biopsies, right?
All of these things, we're using these computer vision techniques, right, to get
very good performance. Right. So what does this mean about the future of doctors? Well, I think what
it means is automation happens at the level of the individual tasks, not at the job level. If a job is
a bundle of tasks like diagnosis, therapy, heart-to-heart chat, what's going to happen is computers will
provide future doctors with more assistance, to some degree, hopefully automating billing,
but some amount of automation or advice giving.
But for other tasks, like having that heart-to-heart chat,
we're very, very far from when computers are going to be able to do anything close to that.
Yeah, good bedside manner is not going to be a feature of your Robo Doc anytime soon.
Right.
And so what you find if you look into this,
and Eric and I recently had a paper in Science with a more detailed study of this,
is that the majority of jobs are not like toll booth operators, where there's just one task.
And if that gets automated, that's the end of the job.
The majority of jobs, like podcast interviewer or computer or professor or doctor, really are a bundle of tasks.
And so what's going to happen is that, according to our study, the majority, more than half of jobs are going to be influenced,
impacted by automation, but the impact won't be elimination.
It'll be a redistribution of the time that you spend on different tasks.
And we even conjecture that successful businesses in the future will, to some degree,
be redefining what the collection of jobs is that they're hiring for.
Because they still have to cover the tasks through some combination of automation and manual
work, but the current bundles of tasks that form jobs today might shift dramatically.
So the key insight is to think of a job as a bundle of tasks, and that bundle might change over
time as AI enters and says, well, look, this specific task I'm very good at in algorithm
land. And so let's get humans to focus on other things. We just need to think of them as
differently bundled. Well, the last topic I wanted to talk with you about, Tom, is around whether
this is the best time ever for AI research, right? So we started the grand campaign,
you know, some would argue summer of 1956 with the Dartmouth conference. And we've had several
winters and summers. Where are we now? And then what are you most excited about looking into the
future? I think we're absolutely at the best time ever for the field of artificial intelligence.
And there have been, as you say, ups and downs over the years. And for example, in the late 80s,
AI was very hot and there was great expectation of the things it would be able to do.
There was also great fear, by the way, of what Japan was going to do.
Yeah, this is the Fifth Generation Computer Systems project and the entire national policy of Japan, right, focusing on this area.
Right.
And so in the U.S., there was great concern that this would have a big impact.
Japan would take over the economy.
So there are some parallels here.
Now, again, AI is very popular. People have great expectations. And there's a great amount of fear, I have to say, about what China and other countries might be doing in AI. But one really, really important difference is that, unlike in the 1980s, right now, there's a huge record of accomplishment over the last 10 years. We already have AI
and machine learning being used across many, many different, really economically valuable
tasks.
And therefore, I think really there's very little chance that we'll have a crash.
Although I completely agree with my friends who say, but isn't AI overhyped?
Absolutely, it's overhyped.
But there's enough reality there to keep the field progressing and to keep commercial interest
and to keep economic investment going for a long time to come.
So you would argue this time it really is different.
It really is different because we have real working stuff to point to.
And over the next 10 years, we'll have a whole lot more real working stuff
that influences our lives daily.
So as a university researcher, I look at this and I say,
where is this going and what should we be doing in the university?
If you want to think about that, you have to realize just how much progress
there was in the last 10 years.
When the iPhone came out,
I guess that's 11 years ago,
computers were deaf and blind.
Right?
When the iPhone came out,
you could not talk to your iPhone.
Right.
This sounds like such a weird idea,
but you could not talk to your iPhone
because speech recognition didn't work.
And now computers can transcribe voice to text
just as well as people.
Right.
Similarly, when you pointed your camera at a scene, it couldn't recognize with any accuracy
the things that were on the table in the scene.
And now it can do that with about the same accuracy comparable to humans.
And in some visual tasks, like skin cancer detection, even better than trained doctors.
So it's hard to remember that it's only been 10 years.
And that's the thing about progress in AI:
you forget, because it becomes so familiar, just how dramatic the improvement has been.
Now, think about what that means.
That means we're really in the first five years of having computers that are not deaf and blind.
And now think about what are the kinds of intelligence that you could exhibit if you were deaf and blind?
Well, you could do game playing and inventory control.
You could do things that don't involve perception.
But once you can perceive the world and converse in the world, there's an explosion of new applications you can do.
So we're going to have garage door openers that open for you because they recognize that's your car coming down the driveway.
We're going to have many, many things that we haven't even thought about that just leverage off this very recent progress in perceptual AI.
So going forward, I think a lot about how I will
want to invest my own research time. I'm interested still in machine learning. I'm very proud
of the field of machine learning. It's come a long way. But I'm also somebody who thinks we're
only at the beginning. I think if you want to know the future of machine learning, all you need to do
is look at how humans learn and computers don't yet. So we learn, for example, we do learn statistically
like computers do.
My phone watches me over time
and statistically it eventually learns
where it thinks my house is
and where it thinks my work is.
It statistically learns what my preferences are.
But I also have a human assistant.
And if she tried to figure out
what I wanted her to do
by statistically watching me do things a thousand times,
I would have fired her so long ago.
A lot of false positives and false negatives, right?
Right.
So she doesn't learn that way.
She learns by having a conversation with me.
I go into the office and I say, hey, this semester I'm team teaching a course with
Katerina on deep reinforcement learning.
Here's what I want you to do.
Whenever this happens, you do this.
Whenever we're preparing to hand out a homework assignment, if it hasn't been pre-tested
by the teaching assistants two days before handout,
you send a note saying, get that thing pre-tested.
So what I do is I teach her, and we have a conversation, she clarifies.
So one of the new paradigms for machine learning that I predict we will see in the coming
decade is what I'll call conversational learning.
Use the kind of conversational interfaces that we have, say, with our phones to allow people
to literally teach their devices what they want them to do instead of have the device statistically
learn it.
And if you go down that road, here's a really interesting angle on it.
It becomes kind of like replacing computer programming with natural language instruction.
So I'll give you an example of a prototype system that we've been
working on, together with Brad Myers, one of our faculty in HCI.
It allows you to say to your phone something like,
whenever it snows at night, I want you to wake me up 30 minutes earlier.
If you live in Pittsburgh, this is a useful app.
And none of the California engineers have created that app.
And today, I could create that app if I took the trouble of learning the computer language of the phone.
I could program it, but far less than 1% of phone users have actually taken the time to learn the language of the computer.
We're giving the phone the chance to learn the language of the person.
So with our phone prototype, if you say, whenever it snows at night, wake me up 30 minutes earlier, it says, I don't understand, do you want to teach me?
And you can say, yes, here's how you find out if it's snowing at night.
You open up this weather app right here, and where it says current conditions, if that says
S-N-O-W, it's snowing.
Here's how you wake me up 30 minutes earlier.
You open up that alarm app, and this number you subtract 30 from it.
So with a combination of showing, demonstrating, and telling by voice, we're trying to give users the opportunity to create
their own apps, their own programs, with the same kind of instruction, voice, and demonstration
that you would use if you're trying to teach me how to do it.
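To make the example concrete, here is a toy sketch of the kind of trigger-action rule such a conversation might be compiled into. The helper functions standing in for the weather and alarm apps are hypothetical, not an API from the actual prototype Tom describes.

```python
# Toy sketch only: the "snow at night -> wake me 30 minutes earlier" rule.
# check_weather, get_alarm, and set_alarm are made-up stand-ins for the
# phone's weather and alarm apps.
from datetime import datetime, timedelta

def check_weather():
    """Stand-in for opening the weather app and reading 'current conditions'."""
    return "SNOW"  # pretend it is snowing tonight

def get_alarm():
    """Stand-in for reading tomorrow's alarm time from the alarm app."""
    return datetime(2019, 1, 15, 7, 0)

def set_alarm(t):
    print(f"Alarm moved to {t:%H:%M}")

def snow_rule():
    # "Here's how you find out if it's snowing at night": read conditions.
    if check_weather() == "SNOW":
        # "Here's how you wake me up 30 minutes earlier": subtract 30 minutes.
        set_alarm(get_alarm() - timedelta(minutes=30))

snow_rule()  # -> Alarm moved to 06:30
```

The point of the prototype, as Tom describes it, is that the user never writes anything like this; the phone assembles the equivalent rule from spoken instructions and demonstrations.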
I love that.
It's sort of a natural language front end to, we have an investment in a company called
IFTTT, If This Then That, which is, you can program those things, but you have to be a little
sophisticated.
You'd like to just be able to talk to your phone and have it figure out, how do I fill
the slots that IFTTT wants.
Exactly.
And If This Then That is a wonderful thing.
It has a huge library of these apps that you can download.
But as you say, you still have to learn the language of the computer to create those.
We're trying to have the computer learn the language of the person.
If that line of research plays out, and I believe it will this decade,
we'll be in a very different world, because we'll be in a world where instead of the elite few,
less than 1% of phone users, being able to program, it'll be 99% of
phone users who can do this. Now, think about what that does for the whole conception of how we think
about human-computer interaction. Yeah, that's a profound shift in society, right? Just like, you know,
everybody became literate, not just the priests, and look what happened to society. Exactly.
Yeah. Think about what it means for the future of jobs. Right now, if you have a computer introduced
as your teammate, you, the human, and the computer are a team, well,
the computer is frozen, and the only teammate who gets to do the adapting is the human.
Right.
Because the computer is a fixed functionality.
What if in that team the human could just teach the computer how they want the computer to help them do their job?
It would be a completely different dynamic.
It would change the future of work.
Yeah.
That's fascinating.
And then I think another thread that you're super interested in on the future of machine learning is something around never-ending learning.
So tell us about that.
Sure. Again, I just go back to what do humans do that computers don't yet.
And computers are very good at, say, learning to diagnose skin cancer.
You give it some very specific task and some data.
But if you look at people, people learn so many things.
We learn to do all kinds of things.
You can tap dance. You can do double-entry bookkeeping.
Right, right?
Right.
You can add numbers.
You can play music, all kinds of
things. And a lot of those things we learn over time in a kind of synergistic way, in a staged
sequence. First you learn to crawl, then you learn to walk, then you learn to run, then you learn to
ride a bike. And it wouldn't make any sense to do them in the other sequence because you're
actually learning to learn. When you acquire one skill, it puts you in a position that you now
are capable of learning the next skill.
So I'm very interested in what it would mean
to give a computer that kind of capability
to do learning for days and weeks and years and decades.
And so we have a project we call our never-ending language learner,
which started in 2010,
running 24 hours a day trying to learn to read the web.
Oh, fascinating.
And there's something about sort of
longitudinal, right? We started in 2010 and it just keeps on going. So it's not just
like transfer learning from one model to another. It's like long running. It's long running and
it has many different learning tasks. It's building up a knowledge base of knowledge about the
world. But to keep it short, I'll just say we've learned so much from that project about
how to organize the architecture of a system so that it can invent new learning tasks as it goes
so that it can get synergy once it learns one thing to become better at learning another
thing, how, in fact, very importantly, it can use unlabeled data to train itself
instead of requiring an army of data labelers. So I just think this is an area that's relatively untouched
in the machine learning field, but looking forward, we're already seeing an increasing number
of embedded machine learning systems in continuous use. And as we see more and more of those
in the Internet of Things and elsewhere, the opportunity for learning continuously for days and weeks
and months and years and decades is increasingly there. We ought to be developing the ideas,
the concepts of how to organize those systems to take advantage of that.
Yeah, I love both of these design approaches, and they're sort of inspired by humans,
and sort of humans are mysteriously good at learning and adapting, and they sort of shine a
spotlight on where machine learning algorithms are not yet, right?
So it's such a fertile area to look for inspiration.
Well, Tom, it's been a great delight having you on the podcast.
Thanks for sharing about the history and the future of machine learning.
We can tell you're still fired up after all of these decades.
And so that's a great delight just to see somebody who has committed, basically, their life to understanding the mysteries of learning,
and wishing you many good decades to come as you continue working on it.
Thanks, and thanks for doing this podcast.
I think it's a great thing to get a conversation
going, and it's a great contribution to do that.