Microsoft Research Podcast - 108 - Neural architecture search, imitation learning and other ML “defense against the dark arts” techniques with Dr. Debadeepta Dey
Episode Date: February 26, 2020
Dr. Debadeepta Dey is a Principal Researcher in the Adaptive Systems and Interaction group at MSR and he's currently exploring several lines of research that may help bridge the gap between perception and planning for autonomous agents, teaching them to make decisions under uncertainty and even to stop and ask for directions when they get lost! On today's podcast, Dr. Dey talks about how his latest work in meta-reasoning helps improve modular system pipelines and how imitation learning hits the ML sweet spot between supervised and reinforcement learning. He also explains how neural architecture search helps enlighten the "dark arts" of neural network training and reveals how boredom, an old robot and several "book runs" between India and the US led to a rewarding career in research. https://www.microsoft.com/research
Transcript
We said, like, you know what, agents should just train themselves on when to ask during training time.
Like, when they make mistakes, they should just ask and learn to use their budget of asking questions back to the human at training time itself.
When you are in the simulation environments, we used imitation learning as opposed to reinforcement learning.
Because you are in simulation, you have this nice programmatic expert.
An expert need not be just a human being, right, or a human teacher.
It can also be an algorithm.
You're listening to the Microsoft Research Podcast,
a show that brings you closer to the cutting edge of technology research
and the scientists behind it.
I'm your host, Gretchen Huizinga.
Dr. Debadeepta Dey is a principal researcher in the Adaptive Systems and Interaction Group at MSR,
and he's currently exploring several lines of research that may help bridge the gap between
perception and planning for autonomous agents, teaching them to make decisions under uncertainty
and even to stop and ask for directions when they get lost. On today's podcast, Dr. Dey talks about how his latest work in meta-reasoning
helps improve modular system pipelines,
and how imitation learning hits the ML sweet spot
between supervised and reinforcement learning.
He also explains how neural architecture search
helps enlighten the dark arts of neural network training,
and reveals how boredom, an old robot,
and several book runs between India and the U.S.
led to a rewarding career in research. That and much more on this episode of the Microsoft Research Podcast.
Debadeepta Dey, welcome to the podcast.
Thank you.
It's really great to have you here.
I talked to one of your colleagues early on because I loved your name.
You have one of the most lyrical names on the planet, I think.
And he said, we call him 3D.
That's right. That's right. Yeah.
And then you got your PhD and they said, now we have to call him 4D.
That's right. Oh, yes. Yes.
So the joke amongst my friends is like, well, I became a dad, so that's 5D. But they're like, well, we'll have to wait like 20, 30 years to see if you become the director of some institute.
I'm so glad you're here. You're a principal researcher in the
Adaptive Systems and Interaction, or ASI group, at Microsoft Research, and you situate your work
at the intersection of robotics and machine learning, yeah? That's right. So before I go
deep on you, I'd like you to situate the work of your group. What's the big goal of the Adaptive
Systems team, and what do you hope to accomplish as a group or collectively?
ASI is one of the earliest groups at MSR, right?
Like, you know, because it was founded by Eric Horvitz, and if you dig into the history of how MSR groups have evolved, many groups have spun off from ASI, right?
So ASI is more, I would say, instead of a thematic group, it's more like a
family. ASI is a different group than most groups because it has people who have very diverse
interests. But there's certain common themes which tie the group together. And I would say
it is decision-making under uncertainty. There's people doing work on interpretability for machine
learning. There's people doing work on human-robot interaction, social robotics.
There's people doing work in reinforcement learning, planning, decision-making under uncertainty.
But what all of these things have in common is, like, you have to do decision-making under bounded constraints.
What do we know? How do we get agents to be adaptive?
How do we endow agents, be it robots or virtual agents, with the ability to know what they don't know and act
how we would expect intelligent beings to act. All right, well, let's zoom in a little bit and
talk about you and what gets you up in the morning. What's your big goal as a scientist?
And if I could put a finer point on it, what do you want to be known for at the end of your career?
You know, I was thinking about it yesterday. And one of the things I think which leaped out to me
is like, you know, I want to be known
for fundamental contributions to decision theory.
And by that, I don't mean just coming up with new theory,
but also principles of how to apply them,
principles of how to practice
good decision science in the world.
Well, let's talk about your work, Debadeepta.
Our big arena here is machine learning. And on the podcast, I've had many of your colleagues who've talked about the different kinds of machine learning in their work. And each flavor has its own unique strengths and weaknesses. But you're doing some really interesting work in an area of ML that you call learning from demonstration and more specifically imitation learning. So I'd like you to unpack those terms for us
and tell us how they're different from the other methods
and what they're good for and why we need them.
First of all, the big chunk of machine learning
that we well understand today is supervised learning, right?
You get a data set of labeled data
and then you train some,
basically a curve-fitting algorithm, right?
Like you are fitting a function approximator
which says that if you get new data samples, as long as they're under the same distribution
that produced the training data, you should be able to predict what their label should be,
right? And the same holds even for regression tasks. So supervised learning theory
and practice is very well understood. I think the challenge that the world has been focusing
or has a renewed focus on in the last five, 10 years has been reinforcement learning.
Right. And reinforcement learning algorithms try to explore from scratch.
Right. You are doing learning tabula rasa. Imitation learning, on the other hand, assumes that you learn a policy, or a good way of acting in the world,
based on what an expert is showing you, right?
And the reason this is powerful is because you can bootstrap learning.
It assumes more things, namely that you need access to an expert, a teacher,
but if the teacher is available and is good,
then you can very quickly learn a policy which will do reasonable things because
all you need to do is mimic the teacher. So that's the learning from demonstration.
The teacher demonstrates to the agent and then the agent learns from that. And it's somewhere
between just having this data poured down from the heavens and knowing nothing.
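To make "mimic the teacher" concrete, here is a minimal behavioral cloning sketch that frames imitation learning as plain supervised learning on (observation, expert action) pairs. The toy expert, observations, and model choice are illustrative assumptions, not the specific systems discussed in the episode.

```python
# Minimal behavioral cloning sketch (illustrative, not Dey's actual system).
# The "expert" here is a hand-written controller; in practice it could be a
# human demonstrator or, as discussed later, a planner running in simulation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_policy(obs):
    """Toy expert: steer away from whichever side is closer to an obstacle.
    obs = [distance_left, distance_right]."""
    return 0 if obs[0] > obs[1] else 1  # 0 = steer left, 1 = steer right

# 1. Collect demonstrations: observations labeled with the expert's actions.
rng = np.random.default_rng(0)
observations = rng.uniform(0.0, 10.0, size=(1000, 2))
actions = np.array([expert_policy(o) for o in observations])

# 2. Fit a function approximator to imitate the expert (plain supervised learning).
policy = LogisticRegression().fit(observations, actions)

# 3. At test time the learned policy acts without querying the expert, and should
#    do well as long as it stays on the expert's state distribution.
new_obs = np.array([[2.0, 7.5]])
print("learned action:", policy.predict(new_obs)[0],
      "| expert action:", expert_policy(new_obs[0]))
```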
And knowing nothing, right? And mostly in the world, especially in domains like robotics,
you don't want your robot to learn from nothing, right?
Like, you know, to begin tabula rasa,
because now you have this random policy
that you will start with, right?
Because in the beginning,
you're just going to try things at random, right?
And robots are expensive, robots can hurt people,
and also the amount of data needed is immense, right? Like the sample complexity, even theoretically, of reinforcement learning algorithms is really high. And so it means that it will be a long, long time before you do interesting things.
Well, let's move on to another interesting exploration in what you call neural architecture search, or NAS, we'll call it for short. What is NAS? What's the motivation for it? And how is it impacting other areas in the
machine learning world? So NAS is this subfield of this other subfield in machine learning,
colloquially called AutoML right now, right? Like where AutoML's aim is to let algorithms search
for the right algorithm for a given data set. Let's say this
is a vision data set or an NLP data set, and it's labeled, right? So let's assume in the simpler
setting instead of RL, and you're going to be like, okay, I'm going to, like, you know, try my favorite
algorithms that I have in this toolkit, but you are not really sure, is this the best algorithm?
Is this the best way to pre-process data? Or whatnot, right? So the question then becomes, what is
the right architecture, right?
And what are the right hyperparameters for that architecture?
What's the learning rate schedule?
And these are all things which we call the dark arts of training and finding a
good neural network for, let's say, a new data set, right?
So this is more art than science, right?
And as a field, that's very unsatisfying.
Like, it's all great.
The progress that deep learning has made is fantastic. Everybody is very excited. But there's this dark art part which is there. And people are like, well, you just need to build up a lot of practitioner intuition once you get there. Right. And this is an answer which is deeply unsatisfying to the community as a whole. Right. Like we refuse to accept this as status quo.
Well, when you're telling a scientist that it's art and he can't codify it,
that's just terrible.
That's just terrible. And it also shows that like, you know, we have given up or we have like lost
the battle here. And our understanding of deep learning is so shallow that we don't know
how to codify things.
All right. So you're working on that with NAS, yeah?
Yes. So the goal in neural architecture search is let algorithms search for architectures.
Let's remove the human from this tedious dark arts world of trying to figure out things
from experience.
And it's also very expensive, right?
Like most companies and organizations cannot afford armies of PhDs just sitting around
trying things, and it's also not a
very good usage of your best scientist's time, right? And we want this ideally that you bring
data set, let the machine figure out what it should run, and spit back out the model.
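As a rough sketch of "bring a data set, let the machine figure out what to run," here is a toy random-search loop over a tiny architecture and hyperparameter space. Real NAS methods are far more sophisticated and sample-efficient; the search space and budget below are made up for illustration.

```python
# Toy "search for the architecture" loop (random search; real NAS methods are
# far more sample-efficient, but the overall shape is the same).
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hypothetical search space: depth/width of the network plus a hyperparameter.
search_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32), (128, 64)],
    "learning_rate_init": [1e-1, 1e-2, 1e-3],
}

best_score, best_config = -1.0, None
for _ in range(8):  # try a handful of sampled configurations
    config = {k: random.choice(v) for k, v in search_space.items()}
    model = MLPClassifier(max_iter=200, random_state=0, **config)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)          # validation accuracy
    if score > best_score:
        best_score, best_config = score, config

print(f"best config {best_config} with validation accuracy {best_score:.3f}")
```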
Well, the first time we met, Debadeepta, you were on a panel talking about how researchers
were using ML to troubleshoot and improve real-time systems on the fly. And you published a paper just recently on
the concept of meta-reasoning to monitor and adjust software modules on the fly using reinforcement
learning to optimize the pipeline. This is fascinating, and I really loved how you framed
the trade-offs for modular software and its impact on other parts of the systems, right?
Right.
So I'd like you to kind of give us a review
of what the trade-offs are
in modular software systems in general,
and then tell us why you believe meta-reasoning
is critical to improving those pipelines.
So this project, so just a little bit of fun background,
like actually started because of a discussion
with the Platform for Situated Interaction team and Dan Bohus, who is in the ASI group and like,
you know, sits a few doors down from me, right? And so the problem statement actually comes from
Dan and Eric. I immediately jumped on the problem because I believed reinforcement learning,
contextual bandits provide feasible lines of attack right now.
So why don't you articulate the problem writ large for us?
Okay. So let me give you this nice example, which will be easy to follow. Imagine you are a self
driving car team, right? And you are the software team, right? And the software team is divided into
many sub teams, which are building many components of the self driving car software. Like let's say
somebody is writing the planner, somebody is writing low level motor controller, somebody is writing vision system, perception system,
and then there is parts of the team
where everybody's integrating all these pieces together
and the end application runs, right?
And this is a phenomenon with software teams,
not just in robotics, but also like
if you're developing web software or whatnot,
you find this all the time.
Let's say you have a team which is developing the
computer vision software that detects rocks. And if there are rocks, it will just say that these
parts near the robot right now are rocks, don't drive over them. And in the beginning, they have
some machine learned model where they collected some data and that model is, let's say, 60, 70%
accurate. It's not super nice, but they don't want to hold up the rest of the team. So they push the
first version of the module out so that there is no bottleneck, right?
And so while they have pushed this out on the side,
they're trying to improve it, right?
Because clearly 60, 70% is not good enough,
but that's okay.
Like, you know, we will improve it.
Three months go by, they do lots of hard work
and they say, now we have a 99% good rock detector, right?
So rest of the team, you don't need to do anything.
Just pull our latest code.
Nothing will change for you.
You will just get an update and everything should work great, right? So everybody goes and does that,
and the entire robot just starts breaking down, right? And here you have done three months of
super hard work to improve rock detection to close to 100%, and the robot is just horrible,
right? And then all the teams get together. It's like, what happened? What happened is, because the previous rock detector was only like 60, 70% accurate,
the parameters of downstream modules had been adjusted to account for that. They're like,
oh, we are not going to trust the rock detector most of the time. We are actually going to like,
you know, be very conservative. These kinds of decisions have been made downstream,
which actually have been dependent upon the quality of the results coming out upstream in order to make the whole system
behave reasonably. But now that the quality of this module has drastically shifted, even though
it is better, the net system actually has not become globally better. It has become globally
worse. And this is a phenomenon that large software teams see all the time. This is just a canonical example, which is easy to explain.
Like, you know, if you imagine anything from like Windows software or anything else.
Any system that has multiple parts.
Yeah. So improving one part doesn't mean the whole system becomes better.
In fact, it may make it worse.
In fact, it may make it worse.
Just like in NAS, how we are, like, you know, using algorithms to search for algorithms, this is another kind of AutoML where we are saying, hey, we want a machine-learned
monitor to check the entire pipeline and see what I should do to react to changing conditions,
right? So this monitor is looking at system specific details like CPU usage, memory usage,
the runtime taken by each component, like it's
monitoring everything, the entire pipeline, as well as the hardware on which it is running and
its conditions, right? And it is learning policies to change the configuration of the entire pipeline
on the fly to try to do the best it can as the environment changes.
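A minimal sketch of that monitoring idea, using an epsilon-greedy contextual bandit that picks a pipeline configuration from the system context it observes. The context features, candidate configurations, and reward signal are hypothetical stand-ins, not the actual meta-reasoner described in the paper.

```python
# Epsilon-greedy contextual bandit choosing a pipeline configuration from
# observed system context. Hypothetical contexts and configs for illustration.
import random
from collections import defaultdict

CONFIGS = ["high_res_slow", "low_res_fast", "skip_frames"]  # candidate pipeline configs

def context_bucket(cpu_load, free_mem_gb, detector_latency_ms):
    """Discretize the monitored signals into a coarse context key."""
    return (cpu_load > 0.8, free_mem_gb < 2.0, detector_latency_ms > 50)

value = defaultdict(float)   # running reward estimate per (context, config)
count = defaultdict(int)
EPSILON = 0.1

def choose_config(ctx):
    if random.random() < EPSILON:                       # explore occasionally
        return random.choice(CONFIGS)
    return max(CONFIGS, key=lambda c: value[(ctx, c)])  # otherwise exploit

def update(ctx, config, reward):
    """Incremental average of observed end-to-end reward
    (e.g. task accuracy minus a latency penalty)."""
    key = (ctx, config)
    count[key] += 1
    value[key] += (reward - value[key]) / count[key]

# Usage: each time the pipeline runs, observe context, pick a config,
# measure end-to-end quality, and feed it back to the learner.
ctx = context_bucket(cpu_load=0.9, free_mem_gb=1.5, detector_latency_ms=80)
cfg = choose_config(ctx)
update(ctx, cfg, reward=0.62)  # hypothetical measured reward
```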
As the modules change, get better and impact the whole system. How's it working?
We have found really good promise, right? And right now we are looking for bigger and bigger
pipelines to prove this out on and see where we can showcase this even better than what we
already have in a research paper. Real briefly, tell me about the paper that you just published
and what's going on with that and the meta-reasoning for these pipelines.
So that paper is at AAAI. It'll be presented in February, actually, in New York next week.
And there we show that you can use techniques like contextual bandits, as well as stateful
reinforcement learning to safely change the configurations of entire pipelines all at once,
right? And let them not degrade very drastically under adversarial changes in conditions,
right? You know, just as a side note, my husband had knee replacement surgery. But for decades,
he had had a compressed knee because he blew it out playing football. And he had no cartilage.
So his body was totally used to working in a particular way. When they did the knee surgery,
he gained an inch in that leg.
Suddenly he has back problems. Yeah, because now your back has to like, you know, it's the entire
configuration, right? You can't just... No. And it's true of basically every system, including
the human body. As you push down here, it comes out there. Yeah, no, that's true. Cars, like people
go and put, oh, I'm going to go and put a big tire on my car. And then the entire performance of the car is degraded because the suspension is not adapted.
It's a cool tire.
Yeah, it's a cool tire. The steering is now rock hard and unwieldy.
But the tire looks good, though. Well, let's talk a little more about robots, Debadeepta, since those are your roots.
Yes.
So most of us are familiar with digital assistants like Cortana and Siri and Alexa.
And some of us even have physical robots like Roomba to do menial tasks like vacuuming.
But you'd like us to be able to interact with physical robots via natural language and not only train them to do a broader variety of tasks for us, but also to ask us for help when they need it.
Yeah.
So tell us about the work that you're doing here.
I know that there's some really interesting threads of research happening. This project actually, the one that you're referring to, actually started with
a hallway conversation with Bill Dolan, who runs the NLP group, after an AI seminar on
a Tuesday where we just got talking, right?
Because of my previous experience with robotics and also AirSim, which is a simulation system
that I built with Ashish and Shital and Chris Lovett.
And we found that, hey, simulation
is starting to play a big role.
And the community sees that.
And already, for home robotics, not just outdoor things
that fly and drive by themselves and whatnot,
people are building rich simulators.
And every day, we are getting better and better data sets,
very rich data sets of real people's homes scanned
and put into AirSim-like environments with Unreal Engine as the backend or Unity as the backend,
which game engines have become so good, right? Like, I can't believe how good game engines are
at rendering photorealistic scenes. And we saw this opportunity that, hey, maybe we can train
agents to not just react reasonably to people's commands and language instructions in indoor scenarios, but also like ask for help.
Because one of the things we saw was that at the time we had dismal performance on even the best algorithms, very complicated algorithms.
We were doing terribly, like 6% accuracy, on doing any task specified via natural language.
Right.
But just like any human being, like, you know,
imagine you ask your family member to, hey, can you help me? Can you get me this, right?
While I'm working on this, can you just go upstairs and get me this? They may not know
exactly what you're talking about, or they may go upstairs and be like, I don't know,
I don't see it there. Where else should I look? Human beings ask for help. They know when they
have an awareness that, hey, we are lost or I'm being inefficient.
I should just ask the domain expert.
Ask for directions.
Exactly.
Ask for directions.
And especially when we feel that we have become uncertain and are getting lost.
Right.
So that scenario, we should have our agents doing that as well.
Right.
So let's see if we give a budgeted number of tries to an agent.
And this is almost like if you have seen those game shows where you get to call a friend, a lifeline, exactly, right? Like,
you know, you and let's say you have three lifelines, right? And so you have to be strategic
about how you play those lifelines. Don't call me. Or at least don't use them up on easy questions,
right? Like, you know, something like that. But also there's this trade-off like,
hey, if you mess up early in the beginning
and you didn't use the lifeline when you should have,
you will be out of the game, right?
So you won't live in the game long enough, right?
So there's this strategy.
So we said like, you know what?
Agents should just train themselves
on when to ask during training time.
Like when they make mistakes,
they should just ask
and learn to use their budget of asking questions
back to the human at training time itself, right?
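Here is a small sketch of the budgeted "lifeline" idea: during training the agent queries the expert only when its own uncertainty is high and it still has questions left, and those answers become extra supervision. The entropy threshold, budget, and interfaces are illustrative assumptions; as described next, in their simulated setting the expert can be a programmatic shortest-path planner.

```python
# Budgeted "ask for help" during training (illustrative sketch, not the actual
# model from the paper). The agent asks the expert only when its uncertainty is
# high and it still has budget; the answers become extra imitation supervision.
import numpy as np

def entropy(probs):
    """Uncertainty of the agent's action distribution."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def training_step(agent_probs, expert_action, asks_left, threshold=1.0):
    """Decide whether to spend a 'lifeline' on this step.

    agent_probs   -- agent's current distribution over actions for this observation
    expert_action -- what the expert (e.g. a shortest-path planner in simulation)
                     would do; only consulted if we actually ask
    Returns (action_taken, asked, asks_left).
    """
    if asks_left > 0 and entropy(agent_probs) > threshold:
        return expert_action, True, asks_left - 1          # ask, imitate, burn one question
    return int(np.argmax(agent_probs)), False, asks_left   # act on own best guess

# Tiny usage example with made-up numbers: uncertain early on, confident later.
asks_left = 3
for agent_probs, expert_action in [
    (np.array([0.26, 0.25, 0.25, 0.24]), 2),   # near-uniform belief -> asks
    (np.array([0.90, 0.05, 0.03, 0.02]), 0),   # confident belief -> acts alone
]:
    action, asked, asks_left = training_step(agent_probs, expert_action, asks_left)
    print(f"action={action} asked={asked} asks_left={asks_left}")
```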
When you are in the simulation environments, we used imitation learning as opposed to reinforcement learning,
and we were just talking about imitation before.
Because you are in simulation, you have this nice programmatic expert.
An expert need not be just a human being, right, or a human teacher.
It can also be an algorithm which has access to lots more information at training time.
You will not have that information at test time.
But if at training time you have that information,
you try to mimic what that expert would do, right?
And in simulation, you can just run a planning algorithm, which
is just like shortest path algorithm,
and learn to mimic what the shortest path algorithm would
do at test time, even though
now you don't have the underlying information to run the planning algorithm. And with that,
we also like built in the ability for the agent to become self-aware. Like I'm very uncertain right
now. I should ask for help. And it greatly improved performance, right? Of course, we are asking for
more information strategically. So I don't think it's a fair comparison to just compare it to the agent
which doesn't get to ask. But we show that like, you know, instead of randomly asking or asking
only at the beginning or at the end, like various normal baselines that you would think of,
learning how to ask gives you a huge boost. Well, Debadeepta, this is the part of the podcast
where I always ask my guests, what could possibly go wrong? And when we're talking about robots and autonomous systems and automated machine learning, the answer is,
in general, a lot. That's why you're doing this work. So since the stakes are high in these arenas,
I want to know what you're thinking about specifically. What keeps you up at night?
And more importantly, what are you doing about it to help us all get a better night's sleep?
So in robotics, in self-driving cars, drones, even for home robotics, like safety is very critical, right?
Like, you know, you are running robots around humans, close to humans in the open world and not just in factories, which have cordoned off spaces.
Right. So robots can be isolated from humans pretty reasonably, but not inside homes and on the road, right?
Or in the sky.
Or in the sky, absolutely.
The good thing is the regulatory bodies are pretty aware of this.
And even the community as a whole realizes that you can't just go and field a robot running machine learning algorithms or decision-making that aren't well tested, right?
So there's huge research efforts right now on how to do safe reinforcement learning.
I'm not personally involved a lot in safe reinforcement learning, but I work closely
with, for example, the reinforcement learning group in Redmond, the reinforcement learning
group in New York City.
And there's huge efforts within MSR on doing safe reinforcement learning, safe decision
making, safe control.
I sleep better knowing that these efforts are going on.
And there's also huge efforts, for example, in ASI and people working on model interpretability,
people working on pipeline debugging and ethics and fairness, including at other parts of MSR and Microsoft and the community in general.
So I feel like people are hyper aware. The
community is hyper aware. Everybody's also very worried that we will get an AI winter if we over
promise and under deliver again. So we need to make our contributions very realistic and not
just overhype all the buzz going around. The things that I'm looking forward to doing are like,
for example, like meta reasoning, we were thinking about like how to do safe meta reasoning, right? Just the fact that the system knows that it's not very aware and I should not be taking decisions blindly. These are beginning steps. Without doing that, you won't be able to make decisions which will evade dangerous situations. You first have to know that I'm in a dangerous spot because I'm doing decisions without knowing what I'm doing, right? And that's like the first key step. And even there, we are a ways away.
Right. Well, interestingly, you talk about Microsoft and Microsoft research,
and I know Brad Smith's book, Tools and Weapons, addresses some of these big questions
in that weird space between regulated and unregulated, especially when we're talking
about AI and machine learning.
But there's other actors out there that have access to and brains for this kind of technology that might use it for more nefarious purposes or might not just even follow best practices.
So how is the community thinking about that?
You're making these tools that are incredibly powerful.
Yeah. So that is a big debate right now in the research community, because oftentimes what happens is that we want to attract more VC funding. We want to grow bigger. It's a land grab.
So everybody wants to show that they have better technology and racing to production or deployment.
First to deploy.
First to deploy,
right? And then first to convince others, even if it's not completely ready, means that you maybe get like, you know, the biggest share of the pie, right? It is indeed very concerning, right? Like
even without robotics, right? Even if you have like services, machine learning services and whatnot,
right? And what do we do about things which are beyond our control, right? We can write tooling to verify any model which is out there and do interpretability, find where the
model has blind spots. That, we can provide, right? Personally, what I always want to do is be the
anti-hype person. I remember there was this tweet at the recent NeurIPS where Lin Xiao, who won the
Test of Time Award, which is a very hard award to win, for his paper from almost 12 years ago, started his talk saying, oh, this is just a minor extension of Nesterov's famous theorem, right?
Like, you know, and Subbarao Kambhampati tweeted that, hey, in this world where everybody has pretty much invented or is about to invent AGI, it's so refreshing to see somebody say, oh,
this is just a minor extension of...
It's an iteration.
Yeah. And most work is that, right? Like, you know, irrespective of the fancy articles you see
in pop-sci magazines, robots are not taking over the world right now. There's lots of problems
to be solved, right?
All right. Well, I want to know a little more about you,
Debadeepta, and I bet our listeners do too.
So tell us about your journey,
mostly professionally,
but where did you start?
What got a young Debadeepta Dey
interested in computer science and robotics?
And how did you end up here at Microsoft Research?
Okay, well, I'll try to keep it short,
but the story begins in undergrad
in engineering college in New Delhi.
The Indian system for getting into
engineering school is that there's a very tough all-India entrance exam. And then depending upon
the rank you get, you either get into good places or you don't, right? And that's pretty much it.
It's that four-hour or six-hour exam and how you do on it matters. And that is so tough that you
prepare a lot for that. And often what happens is after you get to college, the first
year is really boring. Okay. Because I remember we already knew everything that was in
the curriculum for the first two years of college. Yeah, just to get in. So you're like, okay, we
have nothing to do. And so I remember the first summer after the first year of college, we were
just a bunch of us friends were just bored. So we were like, we need to do something, man,
because we are going out of our mind.
And we were like, hey, how about we do robotics?
That seems cool.
Okay, first of all, none of us knew anything about robotics, right?
But this is, like, the brashness of young people, right?
You don't know what you don't know.
Yeah, like confidence of the young.
I guess that's needed at some point.
You should not get jaded too early in life.
So we were like, okay, we are going to do robotics and we are going to build a robot
and we are going to take part in this competition in the US in two, three years time. But we need
to just learn everything about robotics, right? And okay, you must understand this is like, you
know, the internet was there, but the kind of online course material you have now, especially
in India, we didn't have anything. There was nobody to teach robotics.
And this was a top school, right?
And there was like one dusty robot in the basement of some, I think, the mechanical
engineering department, which had not been used in like 10 years.
Nobody even knew where the software was and everything.
Like we went and found some old dusty book on robotics.
But luckily what happened is, because we were in Delhi,
somebody had returned from CMU, Anuj Kapuria, and had started this company called Hi-Tech
Robotics. So we kind of got a meeting with him and we just started doing unpaid internships there,
right? We were like, we don't care. We don't know anything, and he actually knew what robotics was,
right? Because he had come right from CMU after finishing his master's and he was starting this company. He would sometimes go to the U.S. and it was so dire that we would be like,
will you buy this book for us and bring it back from the U.S., right? Because there's nobody here.
We can't even find that book, right? And so I got my like first taste of modern day robotics
and research there. And then in undergrad, after the end of my third year, I did an internship at
the Field Robotics Center at Carnegie Mellon. And then after that, I finished my master's and PhD
there. I came back to India, finished, and then went back to the U.S. And that's how I got started,
mostly because I think it was, I would say, pure perseverance. I'm well aware I'm not the smartest
person in the room, but as somebody who is now at Google told me right before I started at Intel Research, finishing a PhD is 99% perseverance.
And research is, as almost all big things in life, it's all perseverance.
You just got to stick at it, right, and through the ups and the downs.
And luckily enough, I also had fantastic advisors.
CMU was a wonderful place.
When I came to MSR, it also
re-energized me in the middle of my PhD. Would it be fair to say you're not bored anymore?
No, no, not at all. Like, you know, nowadays we have the opposite problem. We are like,
too many cool problems to work on and yeah, not enough time. Yeah.
Tell us something we don't know about you. I often ask this question in terms of how a particular character trait or defining moment led to a career in research, but I'm down for whatever you want to share.
So growing up, I was given all kinds of books, not just history, like literature and everything.
And I was very good at English literature.
And I wanted always to be an English professor.
I never wanted to do anything with CS.
In fact, I was actually kind of bad at math.
I remember I flunked basic calculus in grade 11, right?
Mostly because of not paying attention and whatnot.
But all of that was very boring.
And the way math was
predominantly taught at the time was in this very imperialistic manner. Here's a set of rules, go do
this set of rules and keep applying them over and over. And I was like, why? This all seems very
punitive, right? But my mother one day sat me down and said, look, you're a good student. Here's the
economic realities. At least in India, only one in a
thousand makes a living from the humanities. Most people don't and will not make it. And
it's very difficult to get actually a living wage out of being an English professor, at
least in India. And you're good at science and engineering. Do something there. At least
you will make enough money to pay your bills. Right. But there's always this part of me
which believes
that if there was a parallel life, if only I can be an English professor at a small rural college
somewhere, that would work out great as well. As we close, I want to frame my last question in
terms of one of your big research interests, and you started off with it, decision-making
under uncertainty. Many of our listeners are at the beginning of their
career decision trees, but absent what we might call big data for life choices, they're trying
to make optimal decisions as to their future in high-tech research. So what would you say to them?
I'll give you the last word. The one thing I have found, no matter what you choose, be it technology
or the arts, and this is particularly true for becoming good at what you do, is pay attention to the fundamentals, right? Like I have never seen a great researcher who doesn't
have mastery over their fundamentals, right? This is just like going to the gym. You're not going to
go bench press 400 pounds the first day you go to the gym. That's just not going to happen, right?
So a lot of people are like, well, I'm in this calculus 101. It seems boring and whatnot. And I don't know why I'm doing this.
But all of that stuff, especially if you are going to be in a tech career, math is super useful.
Just try to become very, very good at fundamentals.
The rest kind of takes care of itself.
And wherever you are, irrespective of the prestige of your university, even that doesn't matter. One of the
principles that we have found true, especially for recruiting purposes, is always pick the candidate
who has really strong fundamentals, because it doesn't matter what the rest of the CV says,
with really good fundamentals, we will make something good out of that. So if you just focus on that,
wherever you are in the world, you will be good.
Debadeepta Dey, this has been so much fun. Thanks for coming on the podcast and sharing
all these great stories and your great work. Thank you. I had a lot of fun as well.
To learn more about Dr. Debadeepta Dey and how researchers are helping your robot make
good decisions, visit Microsoft.com slash research.