Microsoft Research Podcast - 052r - Chris Bishop
Episode Date: December 18, 2019. This episode first aired in November, 2018. Dr. Christopher Bishop is quite a fellow. Literally. Fellow of the Royal Academy of Engineering. Fellow of Darwin College in Cambridge, England. Fellow of the Royal Society of Edinburgh. Fellow of The Royal Society. Microsoft Technical Fellow. And one of the nicest fellows you’re likely to meet! He’s also Director of the Microsoft Research lab in Cambridge, where he oversees a world-class portfolio of research and development endeavors in machine learning and AI. Today, Dr. Bishop talks about the past, present and future of AI research, explains the No Free Lunch Theorem, talks about the modern view of machine learning (or how he learned to stop worrying and love uncertainty), and tells how the real excitement in the next few years will be the growth in our ability to create new technologies not by programming machines but by teaching them to learn. https://www.microsoft.com/research
Transcript
When I first talked to MSR Cambridge Lab Director Chris Bishop last November,
we covered the big issues in AI, past, present, and future.
He's since made another appearance on the podcast talking about gaming and research
with the head of Microsoft's gaming division, Phil Spencer.
Whether you're just getting to know Chris for the first time,
or you've heard him twice before and the third time's a charm,
I know you'll enjoy episode 52 of the Microsoft Research Podcast,
Machine Learning and the Learning Machine.
The amount of data in the world is, guess what?
It's growing exponentially.
In fact, it's doubling about every couple of years or so.
And that's set to continue for a long, long time to come
as we instrument our cities,
as we have the Internet of Things,
as we instrument our bodies, as we gather more data. And that will fuel this revolution in machine learning.
You're listening to the Microsoft Research Podcast, a show that brings you closer to
the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen
Huizinga. Dr. Christopher Bishop is quite a fellow, literally. Fellow of the Royal Academy
of Engineering, fellow of Darwin College in Cambridge, England, fellow of the Royal Society
of Edinburgh, fellow of the Royal Society, Microsoft Technical Fellow, and one of the
nicest fellows you're likely to meet. He's also Director of the Microsoft Research Lab in Cambridge,
where he oversees a world-class portfolio of
research and development endeavors in machine learning and AI.
Today, Dr. Bishop talks about the past,
present, and future of AI research,
explains the no free lunch theorem,
talks about the modern view of machine learning or how he learned to
stop worrying and love uncertainty
and tells how the real excitement in the next few years
will be the growth in our ability to create new technologies
not by programming machines
but by teaching them to learn.
That and much more on this episode of the Microsoft Research Podcast. Chris Bishop, welcome to the podcast.
It's great to be here. You are quite a fellow, literally. In broad strokes,
what gets a fellow like you up in the morning? What does a day in the life of Chris Bishop look
like? Great question. If I had to summarize my work life in one word, I'd say it's varied. I have to do many, many different
things. Of course, think about the strategy for the research lab, think about research directions.
Recruitment's a very big part of what I do, really finding great talent and then looking
after the career development of people that we've hired, nurturing that great talent. I think a lot
about inclusion and diversity,
but also thinking about our external visibility, giving presentations, engaging with universities,
engaging with customers, but also scanning the horizon, thinking about new opportunities for us.
So no two days are ever the same. Well, as the lab director of MSR Cambridge in Cambridge,
England, not to be confused with Cambridge, Massachusetts over here.
Correct.
Give our listeners a sense of the vision for the work in your lab and what constitutes
what you call thought leadership in AI today.
Yeah, that's a great question.
The field of AI is really evolving very rapidly and we have to think about what the implications
are, not just a few years ahead, but even further beyond. I think one thing that really characterizes the MSR Cambridge Research Lab is that we have a
very broad and multidisciplinary approach. So we have people who are real world experts in the
algorithms of machine learning and engineers who can turn those algorithms into scalable technology.
But we also have to think about what I call the sort of
penumbra of research challenges that sit around the algorithms, issues to do with fairness and
transparency, issues to do with adversaries, because if it's a publication, nobody's going
to attack that. But if you put out a service to millions of people, then there will be bad actors
in the world who will attack it in various ways. And so we now have to think about AI and machine learning in
this much broader context of large-scale real-world applications. And that requires people
from a whole range of disciplines. We need designers, we need social scientists,
a whole spectrum of different talent. And then those people have to come together and collaborate.
I think that's quite a special feature of the MSR Cambridge Lab.
So on the work that's happening in machine learning, how are you pushing the boundaries
when it comes to developing and furthering the science of machine learning and artificial
intelligence? So really we take a very bottom-up view in that we hire very smart, creative people
and give them a lot of flexibility to go and explore the
many different frontiers of machine learning. But part of it too comes back to this multidisciplinary
approach. So one of the areas, for example, that we're very interested in is confidential
machine learning. Machine learning, of course, is fueled by data. And we know that data is precious.
We need to protect it. It may be very personal data if it's healthcare data, for example. And so how can we make sure that the technology has access to the data,
but at the same time preserve the appropriate levels of privacy? And then what technologies
can we create to support that? And so we need to bring together people who understand the
algorithms with people who understand security and privacy and the engineering skills as well
to create viable, scalable technologies that we can actually use in the real world.
Let's talk about you for a second. You started out in physics and then moved to computer science.
What prompted that move and how do you see the two fields complementing each other
in what's going on in computer science today?
Right. Yes, I started out in physics, as you say, because as a teenager, I was just fascinated by
quantum mechanics and relativity, and it was a very exciting time in physics. So I actually did
a PhD in Edinburgh with David Wallace and Peter Higgs in quantum field theory. And after that,
I wanted to do something more applied, and I worked on the fusion program.
So that's the challenge of heating hydrogen up to hundreds of millions of degrees and
getting it to fuse into helium, much as happens in the sun.
And one day we'll crack that, and that will give humanity unlimited amounts of clean energy.
But that's still a long way off.
But while I was working on that, of course, I developed a lot of expertise in certain
kinds of mathematics, in particular linear algebra and continuous math, multivariate calculus
and probabilities.
And it turns out that those are just the kinds of math skills you need for machine learning.
In fact, much more so than traditional computer science, because traditional computer science
is really based on logic and determinism, whereas machine learning requires continuous maths and dealing with uncertainty. And so physics actually turns out to be a pretty good starting point for machine
learning. In terms of how I made the switch, that's actually quite interesting. I was
working on the Fusion program, and Hinton published his paper on neural nets, on backpropagation,
and it got quite a bit of attention. And I thought this sounds pretty interesting. I'd never really
been interested in traditional computing in the sense of telling a machine what to do step by step and how to do it
step by step. But the idea that a machine could learn from experience so that it could acquire
its own intelligence was just incredibly fascinating. And so when this research was
published, I got very interested. I persuaded my boss to buy me a workstation. I taught myself how
to program,
got some software, and started to play about with these neural networks. And the first thing I did with the neural nets was to apply them to data from the fusion program, because I was working
down in Oxford on the world's biggest fusion experiment. And in its day, it was the big data
of the day, very high frequency, high spatial resolution diagnostics, huge amounts of
data pouring off, lots of interesting data analysis problems. And I found myself almost
uniquely in that field, in possession of this rather flexible nonlinear technique of neural
nets. So I published a lot of papers, solved a lot of problems in that space, had great fun for a
couple of years, and then decided this field was so interesting that I actually wanted to move out of physics and actually do machine learning full-time. That was nearly 30
years ago now. Well, let's talk about MSR Cambridge. You've been there from the beginning
and you said at one point that you've noted over the years that progress in artificial
intelligence and machine learning has been both much slower and much faster than you expected.
Right.
What do you mean by that?
Okay. So I think the underlying reality is that progress has actually been relatively steady and quite good. But the perception of it is that nothing much
happened for a very long time and then suddenly it all took off. And I think what really happened
is that there were some particular developments, specifically around multi-layered neural nets, so-called deep learning, which allowed ideas that have really been around for quite a long time to improve in accuracy to the point where they became of great practical value. And this was noticed particularly in speech recognition and also in certain image analysis problems, detecting objects in images, for example. And the qualitative improvement in performance crossed the threshold at which these techniques became of great practical relevance. And so we went from a world where I would have said
insufficient attention was being paid to machine learning. It had a lot of potential, and yet it
was sort of being ignored and it was rather frustrating. And now we're almost in the opposite situation where there's this huge amount
of attention and excitement around it. We're kind of running just to keep up.
The middle child is getting attention, finally.
That's right.
You once said that being a researcher is better than being a rock star. I don't know if you remember that, but I did.
Did I say that?
It was funny. I started laughing. So what do you know that Mick Jagger doesn't? And why do you feel research is so rewarding?
Well, I find it strange that I said this. I'm not entirely sure what it's like to
be a rock star, but I can tell you what's great about being a researcher. And it's the fact that
every day is new. By definition in research, you always do new things. There's no point being the
second person to discover something. And so you have that sort of endless variety that will last an
entire career. And I always think it's just wonderful to be in a field where you're always
doing new things. It's always fresh rather than doing the same thing over and over again. So for
me, that's just one of the great things about research. And also another great aspect about
being a researcher as a career is that there's this ocean of possibilities. I may have been quite lucky: I worked in very abstract theoretical physics for my PhD, I worked on a very applied area, fusion research, for a while, then switched into machine learning. And within machine learning, I've been
interested for a while in algorithms, then I've shifted my interest more to applications. There's
this infinite ocean of possibility. And so this is why I think people don't retire, because why would
you? It's just so interesting. And there's always more to be done and always new things to explore.
I would propose that every field is searching for its own version of a silver bullet or a grand theory of everything.
And for AI, some people have suggested that there might be a universal algorithm for machine learning.
Should researchers be spending any time on that?
And if not, what should we be looking for instead?
That's a really interesting question.
So there's this theorem in machine learning.
It has a wonderful name.
It's called the no free lunch theorem.
I love it.
It basically says that if you look at all possible problems that you might apply machine
learning to, then on average, any algorithm is just as good or bad as any other.
In other words, the theorem says there cannot be a single universal machine learning algorithm
that will solve all problems.
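Stated a little more formally, in Wolpert's formulation (a sketch added here for reference, not Bishop's own wording): for any two learning algorithms $A_1$ and $A_2$ trained on $m$ examples, averaging uniformly over every possible target function $f$ makes their expected off-training-set errors equal,

$$\sum_{f} \mathbb{E}\big[\text{error} \mid f, m, A_1\big] \;=\; \sum_{f} \mathbb{E}\big[\text{error} \mid f, m, A_2\big].$$

No algorithm can beat any other until assumptions restrict which target functions $f$ are plausible.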
Now, we have to be a little bit careful because it's a piece of abstract theory, so it's correct.
We've got to be careful when we interpret it because it may be that there are certain algorithms that are very good at solving all the kinds of problems that we're going to encounter in the real world.
So it may be that techniques like deep neural networks are really
quite generic and broadly applicable. However, what the no free lunch theorem does teach us
is that you cannot learn just from data. You learn from data in the context of assumptions,
or they're sometimes called prior knowledge or constraints. The terminology varies, but it's data
in the context of a model or set of
assumptions that allows you to learn or allows the machine to learn. And those assumptions are
dependent on the particular problem you're solving. So what it means is that instead of
searching for the single universal algorithm that will solve all problems, instead you need to think
about the particular problem that you're trying to solve and finding the best technique. And that involves thinking about the assumptions you want to make or the domain
knowledge. Well, let's talk about the concept of uncertainty for a minute. I had a researcher on
the show a couple of weeks ago, and he talked about sometimes we need to embrace the idea of
well-calibrated uncertainty in complex autonomous systems. So how might we work to quantify
uncertainty? This is really fundamental to machine learning. I call it the modern view
of machine learning. So traditionally, we thought of machine learning as a kind of a
function that you fitted to some data, fitting a curve through data so that you could make
predictions where you tune up the parameters so that the neural net gets it right on the training
set and you hope that it works on the test set. And I think there's a broader view of machine learning in which we say that what's
really happening is the machine is building a model of the world. And that model of the world
is quantified through uncertainty and the unique calculus of uncertainty is probability. And so the
machine is built on probabilities and its understanding of the world carries uncertainty. But as it sees more
data, that uncertainty typically will reduce. So it becomes less uncertain. In other words,
it's learned something, it's learned from the data. And that notion is all captured in a very
elegant piece of mathematics called Bayes' theorem. And so I view Bayes' theorem and the quantification of uncertainty through probabilities as being the bedrock of machine learning. And from that, everything else can follow. So I agree. I think it's totally fundamental to the field.
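In its standard form (added here for reference), with $\theta$ the model's parameters and $D$ the observed data, Bayes' theorem reads

$$p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{p(D)},$$

where the prior $p(\theta)$ captures the machine's uncertainty before seeing the data and the posterior $p(\theta \mid D)$ captures its typically reduced uncertainty afterward.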
So you've used a phrase, model-based machine learning. Is that what you're talking about here?
Right. The idea of model-based machine learning is really taking that idea of prior
knowledge or constraints, domain knowledge, and making that a first-class citizen.
Think of it less as being a specific technique. Think of it more as a viewpoint, a way of
understanding what machine learning is about. So imagine you're a newcomer to the field of
machine learning. The first thing you discover is that there are thousands and thousands and
thousands of papers with hundreds or thousands of different algorithms
with lots of different names. It's like you're at sea without a compass. Do you have to read all
those papers? Do you have to understand them all? You want to solve some practical problem,
but you can't possibly be familiar with all of the different techniques. So what are you going to do?
Well, instead, you can adopt this model-based viewpoint. And the model-based viewpoint says,
think about the assumptions that you want to make
in your machine learning solution
and actually write them down, be explicit about it,
and then translate those assumptions into a model.
So the model is just a mathematical representation
of your assumptions.
And you can then combine that with the data. And then when you turn the crank, the machine will learn. And if you've made good assumptions, the machine will learn very efficiently from the data. So if
you're able to make strong assumptions and they're correct, you get a lot more information out of the
same amount of data. The risk though, is that if you make a strong assumption and it's wrong,
then not only can the machine make bad predictions, but it can be very confident
about those bad predictions. So you have to be careful.
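A minimal sketch of that recipe in code, using a toy Beta-Bernoulli coin-flip model chosen here for illustration (the data and numbers are hypothetical, not an example from the episode): the prior is the written-down assumption, the update is the turn of the crank, and a strong wrong prior shows the confident-bad-predictions failure mode just described.

```python
# Model-based viewpoint, toy version: (1) write the assumption down,
# (2) turn it into a model, (3) combine the model with data and learn.
# Assumption: flips are i.i.d. Bernoulli(theta) with prior theta ~ Beta(a, b).

def learn(a, b, flips):
    """Conjugate Bayesian update: Beta(a, b) prior + coin-flip data -> Beta posterior."""
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails          # posterior is Beta(a + heads, b + tails)

data = [1, 1, 0, 1, 0, 1, 1]             # observed flips (1 = heads)

a, b = learn(1, 1, data)                 # weak uniform prior Beta(1, 1)
print("posterior mean:", a / (a + b))    # ~0.67, driven mostly by the data

a, b = learn(50, 50, data)               # strong prior belief that the coin is fair
print("posterior mean:", a / (a + b))    # ~0.51: a strong wrong assumption yields
                                         # confident predictions that ignore the data
```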
So we're in the middle of what many people are calling an AI revolution. And you've suggested that we're seeing machine learning
usher in a sort of Moore's law of data, even as we see Moore's law in the traditional sense,
sort of on the wane. But it's changing the way that we write software. Can you tell us a bit more about what we're seeing in this revolution?
Yeah, I'd love to talk about this. It's really a viewpoint on all the hype and excitement that
we have right now around artificial intelligence. So the term artificial intelligence, for me,
refers to that grand aspiration, that very long-term goal of producing human-level
intelligence and beyond. We're a long way from
that. So you might ask, well, does that mean that all this excitement around AI is just misplaced,
or it's just way too early, that it's just a hype bubble, it will go away? And I say not.
I think there is something happening which is very profound and very transformational,
and it's not to do with artificial intelligence. It's to do with a revolution
in the way we create technology. So I can explain that by analogy with hardware. So you need
hardware and you need software to build technology. And the hardware, if you think about computers
over the years, all the time, hardware is getting faster and cheaper and better. And that progression, though, is not linear. It was sort of linear
up until a certain moment when a particular technology was created called photolithography,
and that allows us to print transistors. So instead of manufacturing the components of a
computer and then assembling them, instead you print the whole circuit in one go on a silicon
chip. And that was profound because it switched the progression to exponential, and that's Moore's law.
And everything else follows.
The existence of Microsoft, the fact that you have a supercomputer in your pocket, all follows from Moore's law.
So I think we're seeing in the so-called AI revolution, which is really a machine learning revolution, a similar singular moment in the history of software.
So go back to the beginnings of software.
Ada Lovelace, she was the world's first software developer,
and she wrote software for Babbage's Analytical Engine.
She had to specify exactly what every gear wheel did at every moment.
Software developers today are sort of much the same, but they're much more productive.
But nevertheless, software developers today still have to tell the machine how to solve the problem. The bottleneck is the human intellect.
But with machine learning, we have a radically different way of creating software,
because instead of programming the machine to solve the problem, we program the machine to learn,
and then we train it using data. The rate-limiting step now is the fuel that powers machine learning,
it's the data. So we write these machine learning algorithms. The computer can learn from experience. And now we train it using data.
And what's really interesting is that the amount of data in the world is, guess what,
it's growing exponentially. In fact, it's doubling about every couple of years or so.
And that's set to continue for a long, long time to come as we instrument our cities,
as we have the Internet of Things, as we instrument our bodies, as we gather more data. And that will fuel this revolution
in machine learning, which is why I think the hype around artificial intelligence is not
incorrect. It's just misplaced. The real excitement for the next few years is going
to be this exponential growth in our ability to create new technologies,
not by programming machines, but by having them
learn. Let's come back to healthcare. I know this is a passion of yours. So talk about some of the
strategies you're working on to improve healthcare with AI and ML.
Healthcare is possibly the biggest opportunity of all for machine learning, but it's a very,
very difficult field in which to work. And in many ways, healthcare as an industry is still in a relatively primitive state compared to some areas of manufacturing or other sectors.
The healthcare industry still buys fax machines, for example, which is extraordinary.
I didn't realize that fax machines were still being manufactured, but they are, and it's the health industry that buys them.
So in a way, before we even think about machine learning in healthcare, we have to think about
the digital transformation of healthcare. My personal medical records are probably stored
on bits of paper scattered across various different cities in the UK, according to where
I've lived and so on. We first of all have to think about digital transformation of healthcare,
and then we can think about the machine learning that builds on top of it. So it's a bit of a
long-term bet. On the other hand, the societal benefit that could come by taking a more evidence
driven approach to healthcare is phenomenal. One of the things that it can allow is the potential
for personalized healthcare, because we're individuals. Personalized healthcare, though, is something, if we're to deliver that at scale, it's got to be done in an automated way. And machine learning
offers that potential in just the same way that a machine can learn your preferences for movies.
It can learn what would be an appropriate course of treatment for you as an individual,
but it can only do that by learning about the patterns that occur in large numbers of people.
So by analyzing data from millions of people, we have the potential to create personalized health solutions for each of those people as individuals.
I call it the paradox of personalization.
So in terms of the strategies, we come with two things, but really only two things.
One is our cloud technology,
and the other is our machine learning expertise. But for everything else, we need to work in
partnership. So all of our healthcare work is done in collaboration with medics, with clinicians,
we work with local hospitals, actually hospitals around the world, bringing together domain experts
from the healthcare sector with machine
learning experts jointly to work on solutions. If we have personalized healthcare that's digital
or machine learning based, where does that put the doctor? How much of this is maybe going to
displace medical professionals? Any? There's always a question asked about whether machine learning
will replace people or whether it will help them. One particular project we're working on is using
machine learning to find the boundaries in three dimensions of tumors, brain tumors, for example,
in order to be able to use that information for radiation therapy planning. Now, this is a job
which is ideal for a machine
because the machine can do it very much faster than the human with less variability, more accurately.
And the clinicians that we work with, the radiation oncologists, love this technology.
They're very excited to have it because this is a part of their job which is tedious and
time-consuming, but they have to get it right. They have to pay attention to the detail.
So they're excited to work with us to help to create tools that will allow them to do that
piece of their job more effectively, to free up time to do things that machines aren't very good
at. Of course, you need the machine to know where the tumor is in order to plan the radiation
therapy, but you also need a conversation with somebody about whether you want that therapy or
not. What are the outcomes going to be? What are the implications going to be for you and your family?
You've got to make these complex decisions.
And I, for one, wouldn't want to do that by interacting with an app.
I would like to talk to a clinical expert and really understand, based on their experience,
but also on their human empathy, what is the best path forward for me?
Circling back to the healthcare issue and the big data,
what's your take on how we can work to protect privacy and trust then in a world where big
data sets are essential to machine learning and data is even a new form of currency?
This is a really important issue and part of it is technical, but part of it is more general.
Somebody told me the other day of a lovely Portuguese expression, apparently, which says that trust arrives on foot, but it leaves on a horse,
meaning that it's hard won, but easily lost. So trust is not just about technology,
it's about perception, and it's about the confidence that people have in the technology.
That said, though, technology can help with some aspects of trust. So in particular,
one of the areas we're working on
in the MSR Cambridge Research Lab, we call confidential machine learning. And the idea
is to be able to take data, which normally would be protected by encrypting it. So it's scrambled
in a way that others can't access if you don't have the keys. So that's standard. And it would
be stored in an encrypted form. Again, that's standard practice.
But now, when you want to process that data, for example, you want to use it as the fuel for machine learning, then you have to decrypt the data. And once the data is decrypted, it becomes vulnerable to attack. So the technology that we've been developing, and it's been deployed on Azure, the Microsoft Cloud, allows for the data to be decrypted only inside what are called
secure enclaves. These are very tightly controlled software environments protected with certain
hardware technologies, making them very secure and meaning that only those with access to the
keys to the data could ever access the data itself, even when it's being processed, not just
when it's being stored, even to the extent that Microsoft itself can't access the data of its
customers if it's being decrypted inside these secure enclaves.
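In outline, the pattern looks like the following sketch. Real confidential computing enforces the boundary with hardware-backed enclaves rather than an ordinary encryption library, and the record, key handling, and function names below are hypothetical stand-ins, but the shape is the same: ciphertext everywhere, plaintext only inside one trusted boundary.

```python
# Simplified illustration of the confidential-computing pattern; a real secure
# enclave enforces this boundary in hardware, which a library-only demo cannot.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in the real setting, keys never leave the enclave
cipher = Fernet(key)

record = b"patient_id=123; diagnosis=..."   # hypothetical sensitive record
stored = cipher.encrypt(record)             # encrypted at rest: only ciphertext is stored

def inside_trusted_boundary(ciphertext: bytes) -> int:
    """Stand-in for enclave code: plaintext exists only within this scope."""
    plaintext = cipher.decrypt(ciphertext)
    return len(plaintext)                   # return a computed result, never the raw data

print(inside_trusted_boundary(stored))      # the caller sees only the result
```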
So that's a kind of technology which can help to protect the most valuable kinds of data. But it's not enough just to have the
technology, you need to have the trust. But it's important, I think, also to understand
the benefits that can arise from the application of data to machine learning so that we are able,
as a society, to find the right balance between how we use data and how we protect data. Because
at one extreme, we don't want a free-for-all where data is readily available to everybody
when it's clearly private. On the other hand, we don't want to miss out on the enormous opportunity to improve
lives and save lives that could come through applying machine learning in fields like healthcare.
So technology can help, but it's by no means the complete solution.
So let's talk about talent. You've said that you're looking for researchers that
not only are the world's best, but also fit with our values. How would you define those values?
In other words, what kind of incentives can any company offer top researchers now?
That's a great question because there is a tremendous competition right now for top
talent in the field of machine learning. What I would say to somebody going into the field or
somebody who's on the job market and
thinking about coming to Microsoft, in particular to Microsoft Research, is to say that we have this
great combination of, on the one hand, the opportunity to have a lot of freedom and to
set the direction of your research and to go after things that you're particularly excited about,
but at the same time, mechanisms to take the outputs of your research and get them out there
into the real world. So the people that I really want to attract are those who want to go after
really hard, deep, challenging research problems, but with a view to the output of their research
being used to make the world a better place, but actually to see that work being used at scale.
And that's one of the great things about Microsoft is that we can reach hundreds of millions or even billions of people with our
technologies. We're at a stage now where we have unprecedented compute power. We have huge data
sets and sophisticated algorithms. But I'm hearing that people in the field are starting to recognize
that we need more than computer scientists to solve these problems.
So is that true?
And if so, talk about the trend toward this interdisciplinary approach to problem solving.
Sure. So that's definitely been one of the transformations in the field over the last 30 years.
For the first 25 years or so that I've been in machine learning, the goal was to get the error rate down, to have the performance
of the algorithm be sufficiently good that it was interesting. And so that was really all that
anybody was focused on. But now that the error rates are low, now that the algorithms are working
on real-world problems with high accuracy and therefore becoming of great interest for practical
application, we now have to explore and understand a whole
swathe of new research problems that arise when you take that algorithm and you put that in the
real world. So first of all, there's going to be an end user and that user will have some sort of
experience. So they're not going to want to engage directly with an algorithm. There'll be some sort
of user interface, some kind of user experience. So having designers who can design that user experience is crucial. But also we need social scientists who can understand the way in which people interact with these systems. Again, that opens up a whole range
of research challenges. We need to think about explanations and you don't want an explanation
expressed in terms of mathematics. If you're just a regular end user, you need something expressed
in normal language that you can understand. So how do we go about addressing that? And then one of
the real challenges is getting people from different fields working successfully together,
because they often have different cultures and different language and different terminology. And yet they
need to collaborate if we're going to tackle some of these problems. You've just answered two of the
last three questions I have, but I do want to ask you, you know, we talked about what gets you up in
the morning and I always ask my guests, is there anything that keeps you up at night? And you've
alluded to a couple of the things that we need to be concerned about when we're developing, designing and implementing AI and machine learning technologies. And most people are talking about this idea of bias and fairness and transparency. But there are other concerns out there about AI in general. Some of them are a bit fantastical. Chris, what would you say to the,
could we call them fear mongers of AI, beyond bias in data?
So I think there is another danger that we haven't talked about, but it's not the risk of
superhuman robots taking over the universe. I think that is fantastical and far-fetched and
at best lies, you know, many years in the future or at worst lies many years in the future.
So I don't think we need to worry about that. The very real concerns around bias and fairness
and transparency are incredibly important. But the good news is people are thinking about it.
There's a lot of discussion about this. A lot of very smart researchers are working on this.
So I feel good about that. Not because they're not hard problems or they're not important,
but at least we are aware of them. We are talking about them. We're researching them
and we're making progress. There's another danger though, that if there's anything that keeps me up
at night about machine learning and AI, it's this: somewhere along the way, we'll have some sort of bump in the
road. Perhaps it will be something around bias. Perhaps it will be something around privacy.
Perhaps there'll be some security issue.
There'll be something which causes us to turn our backs on the technology.
And we would forego the amazing opportunities which machine learning can offer us, let's
say, just in the healthcare space, where this could literally improve the lives and
save lives of countless people around the world for decades and centuries to come. So I think we have to be very careful at the same time as we discuss
all the challenges and the risks of the technology to keep in mind the enormous potential benefits
so that we find the right balance. Tell us a bit about your history, Chris. How did you
come to MSR and come to lead a lab at MSR?
The way I came to join MSR was actually quite interesting. I'd been in the field of machine
learning for six or seven years, something like that. And I was an academic in Birmingham,
a research professor. And I submitted a proposal to what's called the Isaac Newton Institute in
Cambridge, which is an
institute for mathematical sciences, and it runs six-month research programs. And I proposed a
program on neural networks and machine learning. This is back in 1997. And the proposal was
accepted. And so I got to bring to Cambridge over a six-month period, essentially all of the top
people in the field of machine
learning at the time. And we had a tremendous time. It really was an amazing six-month research
program. What was interesting though, was that program began on the 1st of July, 97, which
happened to be the exact same day that Microsoft set up its first ever research lab outside of the
US. And it chose Cambridge in the UK as the location
for that lab. So I arrived in Cambridge on the exact same day that Microsoft Research started.
And so the whole of Microsoft Research Cambridge came to see me in a taxi, three of them,
the founding lab director and a couple of deputies came to visit me at the Newton Institute and said,
hey, we're setting up this lab and we're going to do all this great stuff. And do you want to join us? And I thought about
it for a few nanoseconds and said, yes, I'd love to. And of course, I stayed on to finish the six
month program. So I didn't actually start working in the lab until the January, but it was an
amazingly happy coincidence. And so I've really been at the MSR Cambridge lab since it began,
leading the machine learning group, really helping build the machine learning group,
and leading it for many years. And then three years ago, the opportunity arose to become lab
director. And that's an exciting and new and different challenge. And I've been having a lot
of fun doing that. As we close, perhaps you could give some parting advice to
our listeners, many of whom are just getting started in their research careers. We talked
about some of the most exciting problems and challenges that you see on the horizon already.
What would you tell your 25-year-old self if you were listening to this podcast?
I think my top-level advice would be not to worry too much about optimizing
your career and planning your trajectories and paths through different job opportunities and so
on. But instead, just go after something that you really care about. Because at the end of the day,
you'll have much more energy and you will most likely achieve a lot more and the career will
kind of sort itself out. And especially if you're in a field anything like machine learning, there are so many opportunities out there that really
just focus on the thing that really excites you, whether it's working on the algorithms, whether
it's working in healthcare, whatever it is, focus on the thing that you're most passionate about,
and the rest will take care of itself. Would 25-year-old Chris Bishop listen to you?
Probably not, but I didn't listen to anybody.
You know what?
That's why you're successful in research.
Chris Bishop, thank you for joining us on the show today.
It's been fun.
Thank you very much.
To learn more about Dr. Christopher Bishop and the innovative research he directs at MSR Cambridge, visit Microsoft.com slash research.