Behind The Tech with Kevin Scott - Dr. Peter Lee: Microsoft Research & Incubations
Episode Date: September 22, 2021
Kevin talks with Peter Lee about crisis response scientific research, deep learning and neural networks – all related to COVID-19 and Microsoft's growing healthcare and life sciences initiative. Listen in as they also discuss the key role that public trust plays in the work of scientists and researchers.
Transcript
What we're seeing today is that more and more what we do, and even just to survive as a civilization, depends on researchers and scientists being able to get drawn in and solve problems, respond to crises, help us become hopefully more resilient to the future.
And that sort of crisis response science, I think is getting to be incredibly
important and it won't work if society doesn't trust what we're doing.
And that trust is, is so hard to earn.
Hi, everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with
some of the people who made our modern tech world possible and understand what motivated
them to create what they did. So join me to maybe learn a little bit about the history of computing
and get a few behind-the-scenes insights into what's happening today.
Stick around.
Hello and welcome to Behind the Tech.
I'm Christina Warren, Senior Cloud Advocate at Microsoft.
And I'm Kevin Scott.
And today our guest is Dr. Peter Lee.
Dr. Lee is a distinguished computer scientist
whose research spans a range of areas,
including artificial intelligence,
quantum computing, and biotechnology.
And currently he's leading a team of researchers
here at Microsoft with eight labs across the globe.
Yeah, we are super lucky to have Peter on our team.
I've known about Peter since I
was a computer science graduate student in the 90s.
He was a professor at Carnegie Mellon University
when I was a PhD student at the University of Virginia.
We were working in pretty similar academic spaces.
And I was always a huge admirer, not just of his work, but of the work of his PhD students.
So it's a real honor and a privilege for me to now be able to work with Peter here at
Microsoft.
It's a strange, strange journey.
I love that. I love that.
I love that you've been aware of him for so long and now you get to work together, which is fantastic.
Yeah, it's super fun.
And he has a really big job here at Microsoft.
So he runs all of Microsoft Research, which as an institution turns 30 this year.
Wow. And over its lifetime,
it has been one of the most important research institutions
for computer science and related areas
for the past three decades.
Again, I'm a little bit biased.
Microsoft Research is in my group at Microsoft,
and I was an intern at Microsoft Research 20 years ago.
Oh, God, that's a terrible thing to think.
So anyway, Peter is awesome.
That's great.
That's great.
I can't wait to hear your conversation.
All right, let's talk with Dr. Lee.
Hello, and welcome to Tech Fit for Europe,
a new podcast series looking at the big policy questions behind today's technologies and the people who shape them.
My name is Casper Klynge, and I'm the Vice President for European Government Affairs at Microsoft.
We believe in the power of dialogue and finding common ground, and that's exactly what our new podcast is about.
Join us as we discuss some of the most pressing digital policy issues of our times, affecting us here in Europe and beyond.
Can we protect and preserve democratic values in an era of digital disruption?
What is the role and responsibility of the tech industry in ensuring a climate-neutral future?
And how can technology companies support governments in driving economic recovery from COVID-19?
Europe's policymakers are grappling with these and many other questions as they look
to set out the rules for developing and deploying technology. But the global reach of technology
regulation and the digitalization of geopolitics means that these issues have implications far
beyond our continent's borders. Nowadays, what happens in Brussels matters worldwide.
In this podcast series, you'll hear from some of the most influential voices on
key digital policy issues, whether in government, from academia, the private sector, or civil
society. So tune in on your preferred podcast platform: Tech Fit for Europe.
Our guest today is Dr. Peter Lee.
Peter is a computer scientist and corporate vice president of research and incubations at Microsoft.
Before joining Microsoft in 2010, he was the head of the Transformational Convergence Technology Office at DARPA,
and before that, chair of the computer science department at Carnegie Mellon University.
He's a member of the National Academy of Medicine, serves on the board of directors of the Allen Institute for Artificial Intelligence, and was a commissioner on President Obama's Commission
on Enhancing National Cybersecurity.
Welcome, Peter.
Thank you, Kevin.
It's great to be here.
Yeah, the thing that your intro doesn't say is that when you were at Carnegie Mellon,
you were a functional programming expert.
And when I began my journey as a graduate student,
that was the particular area of compilers and programming languages that I was studying.
My first PhD advisor was a guy named Norman Ramsey, who
went to Harvard and is, I think, at Tufts now. And yeah, and it's a very small community. So
like, you know, even before we ever met, I felt this weird sense of familiarity. You know, like
I knew who your PhD students were. I knew your writing. I read books you had written and contributed to.
So I'm sort of curious to just start at the beginning of your journey.
You as a kid, how did you get interested in a set of things that took you to functional programming?
Norman Ramsey, I, of course, know very well, and he is great. And I think, in fact, I'm pretty sure I had encountered you while you were a grad student. And so it's
amazing how things kind of intersect. It is. Well, yeah. So to go back to the beginning,
you know, I grew up in a hardcore physical science household.
My parents immigrated from Korea. My mom became a chemistry professor. My dad became a physics professor.
Wow.
So the joke is I was a big disappointment to them. It seems everybody has a pecking order in their head about which of the disciplines are better than the others, which is sort of just an outrageously ridiculous thing.
Well, you know, I think, of course, then I compounded the problem by going to grad school, not in math, but in computer science.
Obviously, my parents became very, very proud of me in time. But, you know, it's actually something
that I think all researchers encounter, you know, because what researchers do, it's not
clearly useful to anyone. You know, people oftentimes don't understand what it is that you and I do, or people in a
place like Microsoft Research do. Society has to actually tolerate the burden and the cost
of all of these great research institutions around the world. And so we're oftentimes
encountering questions like that. So, you know, as you say, there's a pecking order; even I grew up with that in my own household.
Yeah, and I was thinking about this yesterday. I think tolerate is one word; I think trust is another. And we're at this weird moment in time right now, where I do feel that with science, and particularly scientific research, it takes a while before the thing that you're spending all of your time on is going to have an impact on human beings, and sometimes it's very indirect. Like you make a contribution to a thing that's going to have to have hundreds or thousands of different things contributed into it before, you know, you get a medicine or a breakthrough product or whatever it is.
And I think part of the challenge with earning people's trust and tolerance is on us just figuring out how to better tell folks what it is that we're doing. My mom used to...
I was a weird teenager. I would have the Transactions on Programming Languages and Systems laying around my house when I was 16 or 17. I think it'd be weirder if I were 13 or 14. But she'd look at me reading these computer science papers and textbooks and she would be like, you know, what are you doing?
Like, all of those squiggles hurt my head.
And it's like a perfectly legitimate point of view.
And I never did a great job of explaining to her what I did, whereas, you know, I was playing around in my machine shop a couple of days ago, and I made this little part that I needed for a microphone holder.
And I posted a picture of that on Instagram, and a gazillion people jumped on and said, oh, wow, that's neat because it's a thing and you can see it and I can explain pretty easily what it's good for.
I don't know.
What are your thoughts there? Like, how do we do a better job
helping people understand what we do?
Because it is really necessary.
The world doesn't work without all of this research.
Well, and it's become even more important.
You know, the need for scientific research
has just gotten incredibly important.
You know, my frame growing up the way I grew up,
my frame for scientific research, you know, was formed by stories about, you know, Isaac Newton sitting under a tree and then an apple falls and hits him in the head.
And he's just wondering, what the heck is that about?
So it's just pure curiosity driven research.
And that's sort of the frame that I grew up with. But to your point, what we're seeing today is that more and more, what we do,
and even just to survive as a civilization, depends on researchers and scientists being able to get
drawn in and solve problems, respond to crises, help us become hopefully more resilient to the future.
And that sort of crisis response science, I think, is getting to be incredibly important.
And it won't work if society doesn't trust what we're doing. And that trust is so hard to earn.
You know, another story, when I was a professor,
I was an assistant professor.
I didn't have tenure.
And we had a change in department head,
a very good friend of mine now, Jim Morris.
But at the time he became department head,
I didn't know who he was.
And so he was brand new department head.
He was going to have one-on-one meetings
with all the faculty.
So it was my turn.
And he asked me what I did.
And I explained all this functional programming stuff to him.
And he sort of crunched his nose and said,
well, why would anyone work on that stuff?
You know, what is it good for?
And I was so nervous about the meeting,
I just sort of stammered out,
well, it's just so beautiful.
And Jim's response was, well, if it's beauty that you care about,
maybe you should be a professor in the fine arts college
instead of computer science.
That's brutal.
I know.
And, of course, you know, in time, we came really close
and even did some research together.
But it's that kind of thing where part of what researchers do, there's
a portion of it that is sort of curiosity driven, that's searching for truth and beauty.
But now more and more, there's another part of it that is really important to like making sure
voting machines work correctly, to helping us find, you know, drugs and vaccines for things like COVID-19,
understanding, you know, where the next wildfires might happen because of climate change,
and all of these sorts of things that are so important. You know, if an asteroid that has
the power to destroy life on the planet were to come towards Earth, you're going to call in researchers to try to figure out
how to prevent that from happening.
That mode of what we do is just getting so, so important.
And especially at a place like Microsoft Research,
where we have an obligation to situate our work in the real world,
it's gotten really important.
You're right, how we explain what we do so that people have
the trust in us so that we can respond,
I think ends up being everything.
I want to go back to this idea
of doing things because they're beautiful.
It always struck me that you've got many different
reasons that you do research. Part of the reason that you do research and you try to tackle really,
really, really hard problems is because it's almost like exercise, right? You just need to
be in the habit of doing that so that when the moment comes, and you may not even realize when the moment has
arrived, but like when it does arrive, that you will be prepared to like actually throw your full
mind and energy at a thing and have a higher chance of being able to solve the problem.
I mean, another reason I've always thought that working on these hard problems is important is just solving them gives us a catalog of things to draw upon,
even if it's not immediately obvious what they're useful for.
And what we do know from the world that we live in is everything that we have, like an mRNA vaccine or an AI-powered coding assistant,
or pick your thing that you think is a really interesting achievement.
We have it because it's a layering of all of these discoveries
and accomplishments and abstractions and tools.
And no one, when they were thinking about the part of the problem that they were solving,
they were not imagining this thing that came out in the end. And so, I don't know,
maybe there are other things as well, but I think working on beautiful problems, hard problems,
has a lot of value, even if it's not immediately obvious to everyone else why it's
important. Yeah, I've always wondered if there's a part of our brains that is like our muscles,
that if we don't work them out all the time, they kind of atrophy. But one other thought that
your comments triggered is actually 100 years ago this year in 1921, a guy named
Abraham Flexner wrote an essay. He was writing it to the board of the Rockefeller Foundation,
trying to explain exactly your point, you know, that people work on really hard problems just
to satisfy their curiosity. And lo and behold, more often,
way more often than you would expect, that new knowledge ends up being really important.
And he wrote that in 1921 to try to explain to the Rockefeller Foundation why they should support
research. And then more than 10 years, 15 years later, when there was the desire to rescue people from Europe, bad things happening in the late 1930s in Europe, really important people like Albert Einstein or, you know, von Neumann and others, to justify the cost and expense and political risks of immigrating them and forming the Institute
for Advanced Study at Princeton, he published that memo publicly.
And it's an essay called The Usefulness of Useless Knowledge.
And he just ticks through, like, even in this world where really bad things are happening,
you know, World War II was brewing and all these other things, you know, there are things that we need to do.
There are problems we need to think about and work on.
There's new knowledge to discover.
And it really matters and maybe matters even more than ever in the context of a world that's struggling.
And I reread that essay, and it's available free online.
Just search for The Usefulness of Useless Knowledge.
I read it about once a year because it's important.
You know, look, you and I are consumed with helping Microsoft become a more successful company.
It's all grounded in the real world and so on.
But it's important to not lose a grip on those sorts of enduring values.
And so it's sort of a pilgrimage for me.
And I swear, you read the first page of that essay
and it could have been written yesterday.
Yeah.
It's that timeless.
And it's a little flowery and dramatic
to call it The Search for Truth and Beauty,
but it is spiritual in that way.
You know, at the same time,
I think Microsoft Research has a special obligation to put its
brainpower into the real world. You know, that real world might be far in the future. Like,
you know, I know, Kevin, you're thinking so hard about the inevitability that general artificial
intelligence will become real.
Maybe that's a few years into the future, but it's inevitably going to happen.
And so that's situating research in the real world, because it's a world we know is going to become real.
And so in that sense, we're different than Isaac Newton or Galileo.
But that doesn't mean we aren't still obligated to have a firm grip on these
enduring values of research. Yeah, I could not agree more. I mean, like one of the other things
along those lines that you and I spend a bunch of time talking about is how to create the right
incentives and culture for people taking the right types of intellectual risk.
I think trying to solve a hard problem and failing in general, I will go ahead and make
this bold assertion, is more valuable than spending a bunch of time trying to make incremental
progress on something that's already reasonably well developed.
And that is a hard, hard thing to get people to do.
And I want to ground it.
You have a particular example that I'm familiar with from your days as a Carnegie Mellon professor.
So you had a PhD student, George Necula, who wrote what I think is one of the most beautiful PhD dissertations ever.
And it really affected me as a graduate student.
It was this idea called proof carrying code.
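For readers who haven't run into it: proof-carrying code is the idea that whoever produces a piece of code also ships a machine-checkable proof that the code is safe to run, so the consumer only has to run a small, fast proof checker instead of trusting the producer. Here is a minimal toy sketch of that division of labor in Python; the names are illustrative, and the real system works with formal logic over machine code, not lists and dictionaries.

```python
# Toy sketch of the proof-carrying-code idea (illustrative only, not
# Necula's actual system). The producer ships a "program" (here, a list
# of array indices it will access) together with a certificate claiming
# a safety property. The consumer runs a small, cheap checker instead
# of trusting the producer.

def make_certificate(accesses, array_len):
    # Producer side: claim every access falls in [0, array_len).
    # Constructing the proof is the producer's (hard) problem.
    return {"array_len": array_len, "max_index": max(accesses)}

def check_certificate(accesses, cert, array_len):
    # Consumer side: verify the claim against the code before running it.
    # Checking is simple and fast, which is the whole point.
    return (cert["array_len"] == array_len
            and cert["max_index"] < array_len
            and all(0 <= i <= cert["max_index"] for i in accesses))

accesses = [0, 3, 7, 2]
cert = make_certificate(accesses, array_len=8)
assert check_certificate(accesses, cert, array_len=8)  # safe to run without runtime checks
```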
And that is a risky idea to let a PhD student go pursue because he could have failed.
And, you know, like you have to have a dissertation at the end of your graduate studies to get that PhD.
So talk to me about how that happened and what can we learn from good examples like
that?
Or Mendel Rosenblum is another example.
His resulted in a ton of the virtualization stuff that we have now and like VMware and whatnot.
So there are positive examples, but it's a lot of energy that gets thrown into incrementalism.
It's true. And I think what happens also is it's a real test as a mentor or manager or advisor.
You know, I think part of the struggle for you and me is sort of the tension between,
you know, like you and I are opinionated. We have our own clear ideas. And it is not the case that the people that work for us across Microsoft Research share our points of view. Correct.
Which is a good thing. Which is a good thing, but it's still really hard.
And when I was a professor,
I started off my career as an academic thinking,
wow, I'm going to be able to work with these graduate students
and form them all in my own image.
And they're going to amplify how I view the world
and the kinds of scientific research I like to do.
And it'll all be grand and glorious.
And of course, you learn pretty quickly.
It just does not work that way.
Well, there might be some second-rate graduate students that do that.
But at Carnegie Mellon, everyone is first-rate.
And wow, they have their own opinions.
And no, they're not going to just take my lead on things. And so George Necula, you know, was one of those students. And he had this idea, you know, which you refer to as proof carrying code. And it's true. I thought he was really on the wrong track, you know, that this would be just way too difficult.
The first drafts of some of the early papers and proofs that he wrote, it would take me
less than 10 minutes to find horrible bugs in the proofs.
They would be simple little proofs, less than 10 lines long, and they would be wrong.
And so it just sort of casts doubt over the whole thing. But you have to decide, are you going to give the freedom to fail here and learn and grow from that or not?
And one of the golden rules then to translate to our current jobs that we have now is to decide, are you betting on the person and their commitment to something or are you betting on the idea?
Yep. And time and time again, you learn that you're better off trying to make an assessment of
betting on the person than on the idea.
And that makes it then super important for us to make sure that we're viewing things
fairly, that we're not engaging in any kind of favoritism or biases.
Ultimately, when we're leading research,
what we're doing is we're trying to understand where is the passion and the commitment
to really go deep to follow through.
If a researcher came to me and said,
I have a better idea for growing cabbages faster,
I might think it's a crazy thing to work on.
But if that passion and
that drive to really, really go deep is there, I have to really stop myself and decide, well,
maybe it's worth giving a little bit of time and rope for this to play out because you just never
know where the next big thing is going to come from. You know, George ended up writing an amazing thesis, became a professor at Berkeley, then went into industry, and he's had amazing impact, an amazing career.
Yeah, I think you make such a brilliant and important point around betting on people, not ideas.
And this other thing of like giving people the ability to fail is also important.
The learning that I have been able to get in failure is so much more powerful than the learning that I get in success.
And the fear of failure is just a terrible thing.
Right.
I mean, it really is crippling.
It is.
And it is painful.
There are growth experiences.
You know, Satya Nadella, our CEO, talks about the growth mindset. And I joked with him once that growth mindset is
a euphemism because when you grow through failures, it's incredibly painful. I think
we've all had failures that have made us want even just to give up. There have been times I've
thought about quitting from Microsoft because of a failure. And then you somehow lick your wounds and you find a way to overcome it.
And you find out that you emerged as a better person for it.
Yeah.
I had a boss a while ago who was running a part of a business that was responsible for just enormous amounts of money.
And so whenever you made an engineering mistake in this part of the business, it wasn't, you know, reputational loss or,
you know, like your pride was wounded because something went down and then you had a tough
time debugging it. No, like failures in the things that he was responsible for,
the meter started running on
the dollars that were going out of the door.
We made mistakes. It's impossible not to
make mistakes when you're building complex systems.
He would be very calm and collected.
It never made anyone feel bad about
this colossal amount of money that was, you know, just being lost. And he would patiently guide everyone through the crisis, and then at the end of it ask us, okay, what did we learn from this? The real tragedy here would be to have experienced this and not have learned anything at all.
Like we can't let this crisis go to waste.
Yep.
You know, you're reminding me also, there's another way to fail.
One way to fail is to make mistakes.
But another way is to be wrong about an idea.
I think one of my most recent examples that really kind of stopped me dead in my tracks,
I joined Microsoft Research in 2010.
And I joined and I was doing a review of a bunch of projects.
And there was one project
that was in the speech recognition group
at Microsoft Research.
And in 2010, everybody knew
that the way to do speech recognition
was to use some form of hidden Markov models or Gaussian mixture models.
But here, the speech team was describing the use of a layered cake of neural nets.
And they even explained that, you know, the summer before, a guy named Jeff Hinton had spent the summer along with a postdoc of his and maybe
some students and suggested the idea. And the research team decided to give it a try to see
how well it worked. And I knew Jeff because Jeff and I were both professors at Carnegie Mellon.
Jeff, after 1991 or so, left and went to Toronto. But he was at CMU when I started there.
And I remember Jeff was working on neural nets back in the late 1980s.
And so my first thought was, wait a minute.
People are still working on this stuff?
Yeah.
On neural nets?
And why on earth would anyone do this?
You know, everyone knows Gaussian mixture models are the future of speech recognition.
And, of course, you know, three or four months later, when the engineering results came in, you know, we realized, wow, we have a real revolution here because it just works so well.
And then maybe six months after that, Andrew Ng and Jeff Dean over at Google showed
that the same things held up for computer vision. Look at where we are 10 years later. It's amazing.
But I've reflected that if I had joined, if I had been hired to Microsoft Research a year earlier,
none of this would have happened. And it just makes you think, how many times have I inadvertently held the whole world back by making a judgment like that?
It's one of those near misses that really makes you think.
Yeah, and it's a hard thing because even at a company like Microsoft that invests a lot in R&D, we still have finite resources and you have to have some way to
focus. Because at the end of the day, the types of things that we're building now
rarely are the work of a lone genius cranking away in their office and they have their Archimedean epiphany and all of a sudden
this big problem is solved. It's usually the work of
layering and combining and collaborating.
So you do have to focus in some way,
but I totally agree with you. In a certain sense,
Jeff Hinton is almost heroic
in the extent to which he stuck with that idea.
Because people, I think now,
you're just like, oh, deep neural networks,
this is clearly the way,
it's the same way that the hidden Markov models
and the Gaussian mixture models
were clearly the way that you did speech recognition
20 years ago or 10 years ago. I think both 20 and 10 years ago. But just as obvious as that was then, it's as obvious now that, oh, well, this is clearly the way that you do computer vision and speech recognition and natural language processing. In 1991? Not obvious at all.
In fact, quite to the contrary,
I remember AI throughout my entire academic life,
which was off and on from 1990 until 2003 when I joined Google.
AI was not the hot thing to study.
And neural networks, particularly so,
were this sort of weirdly looked upon thing.
And yet, he was convinced that this was something that had merit
and stuck with it and had to listen to all of the people
for years and years and years telling him he was wrong. And then all of a sudden he wasn't, and he helped catalyze this huge amount of progress, and now has a Turing Award.
Well, this sort of relates back to what we were saying at the start of the conversation, because there is a stick-to-itiveness in all of this,
in the face of a lot of doubts or even skepticism. And I think it actually even relates to the trust
issue that you raised earlier, because there's something about that, you know, when you
demonstrate that sort of commitment, it's one path, one ingredient
in earning people's trust. If I think about the speech group 10 years ago at Microsoft Research,
they probably in the back of their minds, maybe it wasn't conscious, but they had to think,
well, maybe this is worth a try. After all, this guy, Jeff Hinton, has been at this for more than a decade.
And earning trust in that way, I think, ends up being maybe one ingredient in all of this.
And then it all does come around to more urgent priorities. It looks now like some of the things that we need to be able
to do to remove carbon from the atmosphere or, you know, find drugs for global pandemics faster.
These sorts of things, it looks like they're really going to depend on things like deep
neural nets in a really fundamental way. And thank God that people did stick to these ideas and were willing to experiment.
Yeah. You know, the really interesting thing that wasn't obvious to me, even when I started doing
machine learning work in 2003, is, so I left graduate school before I finished my dissertation,
which was on dynamic binary translation. So I was doing a bunch of this deep systems stuff to try to figure out how much information you could recover from an executing stream of binary-level instructions.
You know,
could you do alias analysis like with high enough precision that you can do
any sort of like memory safety analysis at the binary level and like a whole
bunch of other things like that. And I stopped doing that and went to Google and pretty quickly
was doing machine learning stuff. And I thought I would never, ever use any of my compiler domain
specific knowledge ever again. And like one of the things that we're seeing right now with the deep learning revolution
is that there's a whole bunch of really interesting algorithmic stuff happening and how you architect
these neural networks and, you know, like what you do with data and whatnot. But the systems work
that sits beneath it is very reminiscent, to me at least, of 90s era high-performance computing
and high-performance system software work.
Because we're building supercomputers to train these things.
It's a bunch of numerical optimization.
It's like programming languages matter again.
And they're very interesting sorts of programming languages
often built on top of other PLs.
So I don't know,
it's like, this is another lesson for me, like, you know, things just seem to come around.
Yeah, well, it makes perfect sense. Because when we're talking about machine learning,
and AI systems today, they are staged computations, you know, right at the highest level,
there's the training stage, then there's the inference stage. But then when you break those down, you know, each of those big stages is broken down into smaller stages.
And whenever you have that staging, all of those sort of dynamic compilation ideas become super relevant.
It becomes sort of the key to making them practical and affordable to run at all.
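To make that staging concrete, here is a minimal sketch in PyTorch; the model, shapes, and data are placeholders rather than anything from the episode. Stage one trains the network, and stage two specializes the trained network for inference, which is where the dynamic-compilation ideas earn their keep.

```python
# A minimal sketch of staged computation in PyTorch (illustrative model
# and shapes). Stage 1 is training; stage 2 specializes the trained
# network for inference, much like a staged/dynamic compiler
# specializing a program once part of its input is fixed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Stage 1: training. (Random data here just to make the sketch run.)
for _ in range(100):
    x, y = torch.randn(64, 16), torch.randn(64, 1)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Stage 2: inference. Tracing compiles the network down to a fixed
# graph for the shapes it will actually see.
model.eval()
compiled = torch.jit.trace(model, torch.randn(1, 16))
print(compiled(torch.randn(1, 16)))
```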
Yeah, and a bunch of these computations,
like the way that you express some of them,
looks very functional,
and there are a bunch of functional language compilation ideas that are useful now as well.
Yes.
Really interesting.
It is.
In fact, it is functional.
I mean, you're operating over some large network,
and each one of these stages is referentially transparent.
You can remove one stage and replace it with another one and there's a modularity there, which is purely functional.
Yeah, and it may be the most effective demonstration
of the power and the promise of functional programming that
anyone has ever had. Because the beautiful thing about these machine learning training programs
that you express in something like PyTorch is they're short, they're functional, and they're
brief and concise, and you understand exactly what they're saying. It's not like you're writing hundreds of thousands of lines of imperative code to build a transformer.
It's like usually a few hundred or very small thousands of lines of code that you're writing.
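As a rough illustration of that point, here is about how much PyTorch it takes to express a model as a composition of stages, with one stage swapped for another without touching the rest. The modules and shapes are made up for the sketch; the referential transparency Peter describes is exactly what makes the swap safe.

```python
# Sketch of the modularity point (made-up modules and shapes): a model
# is a composition of stages, and because each stage is just a function
# from tensors to tensors, one stage can be replaced without touching
# the others.
import torch
import torch.nn as nn

def make_model(mixer: nn.Module) -> nn.Sequential:
    # Same outer stages; the middle stage is interchangeable.
    return nn.Sequential(nn.Linear(16, 64), mixer, nn.Linear(64, 10))

relu_block = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
gelu_block = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.LayerNorm(64))

model_a = make_model(relu_block)
model_b = make_model(gelu_block)  # one stage replaced, pipeline unchanged

x = torch.randn(8, 16)
assert model_a(x).shape == model_b(x).shape == (8, 10)
```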
I find it really interesting for people who are working on the cutting edge of machine learning in AI.
They have to be multilingual today in terms of programming languages.
They have to have a facility
to work back and forth
between the mathematics
and the algorithms
and the systems kind of architecture
kind of all at the same time.
And then increasingly,
they have to be sensitive
to fairness and ethical issues. And if you just
think about the growth that a human being has to go through to be able to kind of think through
that span of things, it's no surprise that those people are rare today. Hopefully,
they'll become much less rare five years from now, but right now, they're kind of hard to find. And it's also no surprise that more and more of the most brilliant minds
on the planet are drawn to this field. It's not just the goal of artificial intelligence, but
it's the fact that it kind of covers all of these different things to think about in such a wide span.
It just attracts a certain type of brilliant mind.
Well, and I think it also points to how important it is to have a computer science education
for either undergraduates or graduates where you really are getting people exposed
to a very wide range of ideas,
like everything from classic ethics
all the way to pretty serious statistic linear algebra
and differential and integral calculus
to just sort of the core ideas of computation.
And I think it's less important
that you graduate with a four-year degree
and you know the particulars of a programming language
and all of its accordant APIs and whatnot.
Because the thing you and I have learned
is all of that's going to change over
and over and over and over again. So the important thing
is that you get the core concept
so that you can adapt as
new things come along and so that you can
integrate things across all
of these things
that should not be silos of knowledge or
expertise. Yeah,
I think one thing that we've both become
is we've both become students again.
We spend a lot of time just reading papers,
and it's fun in a way.
It's also humbling because you just realize
how hard and deep some of the technical ideas are.
But I feel like my own personal growth has really accelerated just from having a student mindset and taking the time to try to read what people do.
Yeah, so I want to spend a few minutes before we run out of time on societal resilience.
And one of the things that you have
certainly had a student mindset on
is all of the things related to healthcare
and the biosciences.
So it was really a bit of good fortune that you had already immersed
yourself in this area and you were running Microsoft Health prior to the pandemic. And
I had just asked you to take over Microsoft Research, and then the pandemic starts, and then the company asks you to help coordinate the
work that we were trying to do to help support people with pandemic response. So like talk a
little bit about that whole experience and how that's informed what it is you're trying to do
right now with societal resilience and research. Well, I blame you, Kevin.
Because, you know, I was happily helping the company build a new health technology business.
I was focused on that.
And then you decided to hire me to lead Microsoft Research.
And so I agreed and I took that job on March 1st, 2020.
That's the date. And I remember that very clearly because then less than a week later,
you and a couple of others like our CEO
asked me to put that aside temporarily
and help coordinate Microsoft's science technology response
to COVID-19.
And it was a heck of a way to start a new job.
And it was total chaos because, you know, this pandemic, people were grokking just how serious this was.
And we had within Microsoft Research and across the company, you know, hundreds of people stepping forward, wanting to help, looking for ways to help. Most of them had their own ideas,
and they all had in their own personal networks
connections to people outside of Microsoft
that also had ideas, wanted help,
or were parts of organizations
that were in desperate need of our help.
And so there was just this huge kind of cacophony
of stuff going on.
And we had to very quickly get ourselves organized and mobilize a way to get focused attention on a manageable number of efforts so that we could actually help make a difference. And so, you know, all the
work that happened. But then this created another problem. Because in my mind, this all started in March of 2020.
And I thought, and in fact, you and I both thought, well, the pandemic is going to be with us through the summer.
But by the fall of 2020, we'll be past it.
And we'll be able to get back to our normal jobs.
I'll be able to get back to the job you hired me for.
And so August comes, September comes,
and it's clear that this thing is not over.
And then I had a management problem
because I had a large number of researchers
that were spending full time
not doing their normal research jobs,
but instead were working on projects
in pandemic response.
And I looked around and I realized that it wasn't just pandemic response.
We had researchers working full-time looking at the security of voting machines.
We had researchers doing predictive analytics to understand
where to put firefighting resources for major wildfires in Australia and California.
We had researchers working on machine learning to accelerate new diagnostics for COVID-19.
None of these were in anyone's job description in Microsoft Research. And yet, it would be wrong to say, you should stop doing those things and get back to your normal research.
And it also made us realize there's something going on here.
There's a form of scientific research that we now call crisis response science
that actually is legitimately a part of some people's jobs in Microsoft Research.
And so with that whole thought process, we wanted to allow some of our researchers to
actually have as their full-time jobs doing crisis response science.
And so we formed a new group called the Societal Resilience Group, and it's led by Chris White
under Johannes Gehrke's Research at Redmond organization at Microsoft Research.
And one of the first tasks besides creating those job descriptions is to define this new field of scientific research.
And it reminds me a lot back in the 1980s when the field of bioethics emerged.
We were mapping the human genome and it became important to understand
what the ethical considerations are
in the future of genetic engineering.
And a whole new research discipline called bioethics emerged, and it's now really vibrant and important.
In fact, I went and gave a keynote
at one of the recent bioethics conferences
just to understand this better.
I think we're starting to see today
the emergence of a new field
in the same way that we saw the emergence of bioethics
and then somehow there's something about
crisis response science
or the scientific research
that helps make societies and communities
and people more resilient to future crises,
I think is emerging as a new discipline. And it's something that we really
are taking very seriously. How do we build our capacity to anticipate, absorb, and adapt
to disruptions that might threaten people, communities, or societies? And I think it's something that leads to some surprising structures.
For example, community empowerment, grassroots community leaders,
end up being really important.
It helps establish trust, but there's knowledge and insight there.
And so having elite researchers shoulder to shoulder with grassroots community leaders working on research problems together, it's a new form of collaboration that wasn't that common a few years ago, but is becoming sort of an everyday thing in this societal resilience group.
Yeah. I'm really happy that you found a way to structure all of this energy and enthusiasm and
intellect that people want to be able to focus on these problems, because I fully agree with you that we are facing an increasingly complex world,
which isn't necessarily a bad thing. It just sort of is. Like, there are more of us humans now. I was thinking about this the other day: there are twice as many humans on the planet in 2021 than there were in 1972 when I was born, actually a little over 2x.
Population growth is slowing down, but we won't hit peak population, I don't think,
until the end of this century or later in the century.
But where the population growth is happening is interesting.
What the impacts of climate change will be on the conditions for those parts of the growing population is interesting.
I mean, like even the basic thing that you just mentioned, like these grassroots organizing things, like one of the things my family foundation wanted to do throughout the pandemic is we're focused on how to break systemic poverty cycles,
like the things that hold generation after generation of families into structural poverty.
And we were trying to figure out how to, as quickly as possible, get these underprivileged
kids in a bunch of the communities that are our communities here in Silicon Valley
back to high quality education,
because that is one of the things
that holds people in poverty.
And you've got a whole bunch of things
that you have to do to get education
to resume safely in a pandemic.
And one of those things is
you got to get people vaccinated.
And the way trust networks work, like the way that people get to a level of comfort in taking a vaccine or a medicine that didn't exist 24 months ago, is really different. So for you and I, the trust networks are fundamentally different than they are for other folks.
And a grassroots example of this was The Fight Is In Us, a coalition looking at the importance of convalescent plasma as a treatment for COVID-19 patients.
And if you're the U.S. government
or if you're a big corporation like Microsoft
and you step into some community somewhere in the world,
you don't automatically
earn people's trust at all. And that trust is actually not warranted because we also don't
understand everything that's going on in the context of those communities. And so in the
Fight Is In Us, that coalition, yes, it included big tech companies like Microsoft. Uber was
involved. It involved big healthcare organizations
like the Mayo Clinic and Johns Hopkins Medicine and so on. But it also included grassroots community
leaders, people like Chaim Lebovits, who's a community leader in the Hasidic Jewish community
in New York City, or Joe Lopez, who's in the Hispanic community in the Houston area.
And these people were absolutely first-class citizens in this coalition and actually emerged as real leaders, not just for relationships, but actually contributing to the science.
Yep.
Actually earning, you know, named recognition on scientific research papers.
And so it's an element that I think is going to be incredibly important
because when you're responding to a crisis, yes,
there's a research component, there's a science component,
there's a financial component,
but there's also a political component to these things.
Yes.
And so you have to find ways to be inclusive and work together in order for any of this to work.
Well, I mean, this is one of the interesting things to me in general. So I do think that
there is crisis response research that is thinking about what the trends are in human society and in technology and science
so that we focus our research efforts and build a toolkit of ideas and concepts and actual
scientific artifacts and tools. But there's also this component that blurs the line between science and engineering and
politics and sociology and all of these things. And I think these lines have been blurring more
and more over the past decade or so as technology has had such a large impact on society at large.
It may sound like a small thing,
but I think one of the very encouraging signs to me is that you
can have people from all of these different disciplines
participating in these works as equals.
It goes back to this, you know,
this thing we were laughing at earlier, like this,
you know, you have, and you have a different telling,
like, you know, mathematicians think they're better
than the physicists, and physicists think they're better
than computer scientists, right?
But you just can't have that in crisis response research.
Like, everybody has to have a full seat at the
table. That's right. I think one of the biggest challenges is that normal scientific research,
when it transitions to the real world, it normally has the luxury of time. So if there's a new
drug to treat some disease, you know, you go through a whole bunch of trials, you publish
papers, it gets debated at conferences. And over a course of, say, five to 10 years, it gets
thoroughly discussed and the scientific consensus emerges. When you're dealing with a crisis,
that luxury of time evaporates. And so another reason that crisis response science,
I think, is a different discipline is because of that. And if the crisis has the power to bring
down power structures, bring down governments, like a global pandemic has that power, then it
also becomes political and very public. And all of the debate that normally happens in the kind of cloistered halls of academia and big research labs like Microsoft Research,
it becomes exposed to the world.
All the sausage making gets exposed. As researchers and as a research community, we're all going to have to learn how to do that well and do that correctly.
And there's tremendous power in being explicit about it, recognizing this is what's going on and understanding that context.
Because once you understand that, then you have a chance to write it down on paper, teach it, and become better at it in the future.
Yeah. I think one of the big challenges there,
and we're not going to solve this in this podcast today,
but there are many, many challenges.
One of them is,
as everybody gets exposed to the sausage-making of science,
it can be a little bit disconcerting if you've never seen it before.
I mean, time and again in this pandemic, people have looked and continue to look to science for a degree of certainty that science can probably never provide. Because like the idea of science is it is a process to discover truth. And it is a messy process.
Well, to my mind, we're coming full circle in our conversation because we started it off with
researchers, a researcher's life, always confronting skepticism and doubt. And, you know,
we're kind of going through that now because the public, let's just take the vaccines.
You know, scientists are being confronted with, you know, the doubt and skepticism because they're being forced to be much more open and more preliminary with the work that they're doing than they would normally be.
And it's not easy for anybody. I actually have a lot of empathy for doubters.
Because, you know, in fact, as researchers,
you know, you and I were trained to be skeptics.
Yes.
And, you know, that's normally a good thing.
But it just becomes...
And in fact, honestly, you and I are probably dispositionally born skeptics. Yep. Like, I was constantly asking why, why, why. I want to understand why, and if I was unconvinced by your why, I wanted more. Yes. And so I think what we want to do is to
understand that it's fine and in fact appropriate to be skeptical,
but to not allow your skepticism to become such a hardened position that you're closed off to
future evidence and future learnings. And that is at core the scientific method that we are hoping
that the world can adopt. I think that is very well said.
Because this is what we've understood throughout the whole history of science, and particularly since the Enlightenment, when we got a scientific method. Scientific theories rise and they fall. We believed all sorts of things about the world that have proven outright false or, you know, that were a special-case understanding of a more complicated, nuanced reality. And so,
this whole scientific pursuit is just dealing with all of this messy complexity and trying to
get closer and closer to what truth looks like, which means that sometimes you have to, you know,
backtrack. And that, I think, is just hard for folks in general: to watch, you know, very smart people who believe something, and then, as is very natural to them as researchers, say, okay, we were wrong about that. Like, you know, here's the thing that looks more accurate now.
Like that can be a very confusing thing that makes you wonder,
well, can I trust these folks or not?
And, you know, if you just understood how the scientific process works, you're like, yeah, actually, I trust someone who goes through that journey
way more than I trust someone who is just absolutely dogmatically rigid about a point
of view. Right. So we're almost out of time. But one thing that I like to ask everyone in these
podcasts before we end, and I suspect I know the answer for you, is what do you do for fun when you're not thinking about medicine or computer
science or running Microsoft Research or like any of the cool stuff that you get to do in your day
job? Yeah, that's, I always feel a little embarrassed by that. So thanks for outing me publicly.
But, you know, all my life, I've been interested in cars and auto racing. In fact, I became a
certified auto technician and all this other stuff when I was younger. And then one of my
sisters and I were very interested in auto racing and got into kart racing and then
Formula Ford and then later sports car racing. But then, you know,
you have a life and kids and so on, and that all stops. Last March, at the same time that you hired
me into this role, all of the major professional car racing series like Formula One, IndyCars,
NASCAR, they all got delayed. They all normally start in March every year, but their
starts all got delayed because of the pandemic. And what happened is that a remarkable number of
the very best professional car racers on the planet migrated to online simulation racing
on platforms like iRacing. And what was cool was that if you were also in iRacing, you might be able to go wheel-to-wheel with Dale Earnhardt Jr. or Lando Norris or Fernando Alonso.
It's incredible.
And so for me, this was like, I had to do this because, okay, I was never going to become a professional race driver, but I could actually drive with these guys.
And so all of the time I would normally spend in airports and airplanes, you know, flying around somewhere in the world has been channeled into simulation racing.
It's awesome.
And I have seen your simulated driving rig, which is really cool. And like, I just wasn't aware of how good this
simulation tech had gotten. Like you can have a pretty seriously immersive experience in these
things, you know, and as far as hobbies go, like I'm guessing it's no more expensive than, you know,
being an amateur woodworker and like filling your garage full of woodworking equipment, right?
Right.
Well, iRacing, which is the largest simulation racing platform, has about 200,000 subscribers.
So in our business, that's not a huge number.
But it is a community that takes this very seriously and invests in some pretty significant equipment.
Yeah, and you got pretty good, right?
I'm doing okay.
I'm still an amateur, but yeah, I'm having some success.
I think it's awesome. It's really amazing to me, the interesting things that humans put themselves up to doing.
And like the thing that I love is just watching that intensity of like someone really, really,
really getting into something and just learning everything about it and trying to get as good
as they possibly can at it. Like whether or not you're a professional, like just that journey is so inspiring.
Well, these things intersect
because I should publicly thank you.
You've, you know, you've 3D printed
some nice parts for my sim rig.
Yeah, I mean, that is the thing
that I have gotten really into,
especially over the past couple of years,
because a lot of my hours that would have been spent in airports and on airplanes have gone into learning how to be a better machinist.
And yeah, so it's always fun.
It's always fun when things can intersect.
Well, someday maybe we'll both retire and we can form a business that, you know, builds immersive simulation rigs for people.
Yeah, that would be awesome.
Well, I think we are officially out of time.
This was so awesome, Peter.
Obviously, on behalf of Microsoft,
I'm very grateful for everything that you do,
and especially the extent to which you went above and beyond
over the past year to help the world with pandemic response.
Just as a human being, I'm super grateful for that.
As always, this has been a super interesting conversation.
Well, the thanks is all mine.
I think working all together like this, it's allowed us to accomplish a few things.
Awesome. All right. Well, with that, thank you so much for your time.
So that was Kevin's chat with Dr. Peter Lee, Corporate Vice President of Research and Incubation at Microsoft.
What an amazing conversation.
Yeah, thank you. I always enjoy chatting with Peter. You know, we share these roots from earlier in our careers, like we were experts, he more so than me by a mile, in a particular flavor of computer science. And yeah, he just transformed several times over the course of a career,
which, you know, sort of an interesting thing that we all do. But, you know, it's sort of
culminated in this very interesting place. And like, particularly over the past 18 months,
he was just sort of, you know, in the right place at the right time to be able to really apply all
of these things that he's learned and all of his
leadership skills to helping with crisis response with research and using that even as a pattern for
like how to systematize a new type of research, so hopefully we can be better prepared for the next set of crises that inevitably will come.
Yeah, no, I thought that was so interesting and obviously so great for the world and for us that
he was in that kind of right place at the right time. But I was really struck, you know, given
his background, and it makes sense because he is, you know, he was a professor and he has had,
as you said, you know, this distinguished career across different areas. But I love how he was
talking about his student mindset and that he never stops reading and learning and trying to figure out the next thing. That's amazing.
The world is, in my opinion, infinitely fascinating. And one of the things that I
do see that's sort of correlated with an ability to have a lot of impact is just having a lot of curiosity, like not being satisfied. And the more time you spend learning, the easier it is to learn.
And so just having that student mindset throughout your life and like not just saying, okay,
well, you know, that stage of my existence is over with and like everything's going to
be in stasis now.
That is not a winning strategy for the complicated world that we live in.
No, it's not. It's not. But it's interesting because, and maybe this is just anecdotally,
but I do run into people who I think sometimes are afraid or feel like, oh, well, good. I've
reached this certain stage. I don't have to learn anymore. So seeing someone like him who
obviously has this insatiable curiosity and has this student mindset and then take that from a
leadership perspective and take that into the areas in the groups that he runs, I think is
really fantastic. Yeah, I totally agree. Helping other people become better learners and encouraging the curiosity that I think we all have in us is a really important leadership trait.
And I think he's had that for a while.
Like you just wouldn't choose to be a computer science professor
if you weren't interested in cultivating
that learning process in other people.
But sticking with it and like understanding
that that's just sort of an important part
of your job as a leader is just important and great.
Yeah, no, I totally agree.
The other thing, and I was struck by the conversation that you had, and this kind of ties into the learning a little bit because they are a little bit related, was learning from that fear of failure.
And as he pointed out, there are real growth opportunities that come from that.
But so many times, you know, with innovation, people are afraid to try because they don't want to fail when that's what you have to do. I mean, I know
just my own experiences and some of the failures I've had in life have been the most instrumental.
But it was great hearing you two talk about that because I think a lot of times people just assume,
especially people who've been very successful, that they either have always succeeded or that they don't still have that
in the back of their mind, you know?
Yeah.
I think this is such an important
part of the human experience.
The fear of failure causes people
to do all sorts of weird stuff.
So in lots of people,
fear of failure prevents people
from even making an attempt.
Right.
And sometimes it makes people attempt things
that aren't nearly as ambitious
as the things that they're truly capable of accomplishing.
And I understand why.
Like, failing is deeply unpleasant.
Yeah.
Like, it never gets to the point
where failure feels great.
But, you know, I learned this from my dad
who failed many times when I was a kid. And the
extraordinary thing that I always watched him do was he just dusted himself off and got back up,
even when the failure was excruciatingly painful, and tried again. Part of that is we were poor,
and so he didn't have much in the way of choice.
Right.
But having that resilience and just, you know, like, okay, well, I failed, like,
no sense wallowing in it. Let's just try again. And like, we will use what we learned from last time to try to make it better this time.
Yes, exactly. And I mean, I think that ties in so well with what Peter does and the work he works
on, because it is research, it's incubation, it's about innovation. And you're going to have those things that work or that don't. But if you weren't willing to try, if you weren't willing to fail, think about all the things that we wouldn't have.
Yeah, and it reminds me of that story about how the researchers, early in his tenure at Microsoft, were showing him the new deep neural network stuff for doing speech recognition. And it's another aspect of betting on people, sort of saying, look, these are really smart people.
I'm going to trust them to let them potentially fail in the attempt at something interesting versus like, oh, I'm going to protect them from failure by shutting this down now.
Like, that is a very hard thing to do. And it is. Yeah, and look, curtailing these interesting new avenues of exploration can have catastrophic consequences for all of us.
And he let them do that, because think about the innovation and all the massive changes in the neural nets and in the speech recognition that we might not have, you know, if they hadn't taken
those chances.
So I love that.
So great.
Yeah.
Okay.
That's our show for today.
You can send us an email anytime at behindthetech at microsoft.com.
We'd really like to hear from you.
Thanks for listening and stay safe out there.
See you next time.