Behind The Tech with Kevin Scott - Dr. Daniela Rus, Director of MIT Computer Science and AI Lab
Episode Date: January 18, 2022. Imagine nature inspiring us to create robots of the future! Kevin explores the future with Dr. Daniela Rus, who heads MIT's CSAIL. Hear about the latest research on everything from AI and ML to Human-Computer Interaction and Computational Biology.
Transcript
I believe that we can stretch ourselves and go to a different stage where we think about soft robots that are inspired in shape by the animal kingdom, with its form diversity, by the natural world, with its form diversity, and even by the built environment.
Because then we would have so much more potential for applications.
Hi, everyone. Welcome to Behind the Tech.
I'm your host, Kevin Scott, Chief Technology Officer for Microsoft.
In this podcast, we're going to get behind the tech.
We'll talk with some of the people who made our modern tech world possible and understand what motivated them to create what they did.
So join me to maybe learn a little bit about the history of computing
and get a few behind-the-scenes insights into what's happening today.
Stick around.
Hello and welcome to Behind the Tech.
I'm Christina Warren, Senior Cloud Advocate at Microsoft.
And I'm Kevin Scott.
And our guest on the show today is Daniela Rus.
Daniela is a researcher, MIT professor,
and the head of MIT's Computer Science and Artificial Intelligence Lab,
better known as CSAIL.
She's a U.S. expert member for the Global Partnership on AI.
She's a member of the board of Scientific American and was
recently recognized, along with another Behind the Tech guest,
Fei-Fei Li, as one of the eight most influential women in AI.
Daniela is really impressive,
as is the institution that she leads.
I've always been an admirer of her work in AI robotics.
She's just a tremendous computer scientist.
Yeah. I've always been an admirer of what happens at MIT,
especially with the robotics stuff,
and I can't wait to hear the conversation that you two have.
Yeah. Me too.
All right. Let's chat with Daniela.
Our guest today is Daniela Rus.
Daniela is one of the world's leading roboticists.
She's an MIT professor and the first female head of
MIT's Computer Science and Artificial Intelligence Lab,
where her research focuses on robotics, mobile computing,
and data science.
Daniela is a class of 2002 MacArthur Fellow,
a member of the National Academy of Engineering,
and the recipient of the 2017 Engelberger Robotics Award
from the Robotic Industries Association.
Her work is dedicated to envisioning a future where robots are integrated into the fabric of everyday life. Welcome to the show,
Daniela. Thank you so much. I'm excited to be here. Yeah, we're super excited to have you. And
so I would love to start our conversation today by talking a little bit about how you got interested
in technology in the first place. Were you a kid when the spark got lit or was it
later? So I grew up in Romania and during the time of my childhood in Romania, there was not much
activity on TV, but I watched Lost in Space with great fervor. That was one of the few shows there was on TV at that time.
I also read the books of Jules Verne, and I fantasized about faraway places and about superpowers.
But I was also interested in poetry and art and history and math and literature
and just about anything else I could get my hands on.
But the constraints of growing up in Romania and being good at math and physics put me directly on the STEM track. And so at that time, it was standard practice for high school age
students to spend a week each month working in a factory. The Romanian government believed this would help us get some
trade skills and it would be easier to join the proletariat this way. So for some time, one week
each month, I worked in a factory that made spare parts for locomotives. And I was a teenager at that
time. This work didn't feel very useful to me then. But as I look back, I can now see many ways
in which this experience contributed to my career journey, because I learned how to use machines
such as the lathe. I made screws from scratch, and I began to understand the physical aspects of
making things. And as the math we learned in school became more abstract,
I realized that I wanted to do something related to STEM, but also something with a physical
component. And in my opinion, getting to be at MIT to work with the most extraordinary students
on building new robots, designing and inventing new machines,
is really living the dream. It's getting to work on an area that brings together the world of
computation and the physical world of mechanisms and materials. That is so interesting. I mean, it's hard to imagine what it must have been like as a teenager,
but I got to say, I'm pretty impressed that you learned to cut threads on a lathe when you were
in high school. That's one of the coolest things I've heard in a while. So you got all of this
experience as a high school student. So
where did you go to college and what did you major in?
So I went to college at the University of Iowa. My family migrated to the U.S. just around the
time I was finishing high school. And as an undergraduate student, I studied computer science, mathematics, and astronomy.
And then one day, I had a chance to speak with our then distinguished lecturer. This was Professor John Hopcroft.
John is one of the giants and founding fathers of computer science.
So he gave a talk.
And after the talk, I had a chance to talk with him.
And he told me that classical computer
science was solved. Okay, so by that, he meant that a lot of the algorithmic architecture and
systems problems that were posed at the beginning of the field had solutions. And then John said,
it is time for the great applications of computing. I think he used the word grand applications of
computing. And the application he was most passionate about at that time was robotics.
And he saw robotics as a way of enabling computation to interact with the physical
world. Now, that to me was absolutely fascinating. It was a tremendous opportunity.
And I decided to go and work with John.
So I recall an early conversation with John where he said, wouldn't it be great to have robots that could make and bring coffee?
And this kind of became a challenge for my PhD studies. But the robots that existed back then were just these big, bulky, large
industrial manipulators that could not execute the kind of delicate operation that's needed to pick
up a coffee cup or a teaspoon. So John's coffee challenge enabled me to realize
the important role of the body of the machine
in the process of generating autonomous behavior.
And I started working on algorithms for computing that interacts with the physical world.
I did not have much of a chance to think about different types of robot bodies, but
I was thinking about what a different kind of robot could in principle do. So I worked
on in-hand manipulation, so essentially moving things with your hands. I worked on pushing
objects, on rotating objects, on moving furniture. I developed a lot of algorithms that connected
computation with the actual movement of the object. And so even though I could not solve the coffee challenge,
I was able to work on various aspects
of fine manipulation planning for dexterous manipulation
and multi-robot manipulation.
And it really turned out that the theory was hard
and the experiments were even harder.
And in fact, the robots couldn't
really implement the theory. We had these results in theory, but we had no robots that could
implement them. And so after that, I became very interested in the connection between body and
brain as a way of solving the kind of computation for interacting with the physical world problems that are so prevalent in
the field of robotics. So I want to tell you one more thing about that. So get this.
So some years later, maybe about two decades later, let's just say, the robots became more
dexterous and their computation and the set of algorithms became richer. And then it was my turn to give my students the coffee challenge.
And so when I said, hey, can you make me a robot that would bring me coffee?
They said, sure thing.
And it only took a little bit of time to tackle the sort of version of the coffee challenge.
They didn't really solve the whole thing, but they certainly solved way more than I could when I was a student. And so interestingly, it took over 20 years for
the hardware and computation to be able to deliver on the kind of operations required by dexterous
manipulation. And we are not even fully there yet. And I would say that the biggest problems,
the biggest technical challenges
require this kind of time horizon and require this long ramp to go from science fiction to science
and then to reality. It's so interesting. I think what you have been working on over the years is
one of the things that still fascinates me. Because if I think about how I might go solve
or try to solve the coffee challenge
if I were a graduate student now,
I would probably grab one of these commodity
six-axis robotic arms.
I would 3D print a custom end effector for it.
I'd put it on some sort of mobile cart.
And it feels to me a little bit like that's cheating.
That's still not the dexterous manipulation of
objects that humans can do
quite easily from the time that they're infants.
How do you think about how the field is progressing?
On the one hand, it's fantastic that we have all of
this robotic technology that is practically solving a bunch of important problems, but we still aren't
quite there yet with full human level dexterity. Well, absolutely. This remains a very big
challenge in the field. But as you said, we're making progress and we are also understanding the
impact of the various components of the machine for solving the problem. And so I really believe
that in order to solve any meaningful problem, you really need to have the right machine body
for that problem. You really need to have the right machine brain and the right interaction with a
machine. So what do I mean by this? Well, let's say you want to run GPT-3 on your mobile phone.
You can't because you don't have a body that's capable of that task. Let's say you want a robot
that needs to climb stairs, but all you have is a robot on wheels. The robot is not going to be able to deliver on the
challenge. And so the body plays an important role in how we think about solutions to problems.
But then once you have a body, you also need to have a brain that's able to control the body to
deliver on what it's meant to do. And that requires advancements in theory, advancements in algorithms,
advancements in systems and architectures.
So really, we need to connect the two pieces.
We need to think about the body and the brain
as being connected, as influencing one another.
And in my research, I started thinking about design
as a process that simultaneously considers the robot body and the robot brain or the machine body and the machine brain because these ideas are not limited to robots.
Computers also fall under the same category. And this led to our work on the robot compiler, where the idea is you give the machine or you give the computational design
platform the task you want the machine to do. And what you expect is a solution that involves
the hardware configuration and also the software and programming environment required
to deliver the function of that machine. And oh, and I didn't tell you about interaction.
So if you want to use the machine, then you have to understand it, right? And that requires that
we think seriously about how machines and people interact with one another. And until very recently,
in order to use a computer, you really needed to be an expert. Like if you think about where
computing was 20 years ago, 20 years ago, only experts could use computers. You really needed to understand
what to do. You needed to have a lot of money because computers were so expensive.
But all that changed with a smartphone, which has democratized computing. And I believe that the
same can be true for physical work. And so we should ask
ourselves, in this world so changed by computation that helps us with number crunching tasks,
what might it be with machines that can help us with cognitive and physical work? How much work
can we offload to machines? And how would machines interact with us? Now, I believe that in the future,
we will have increasingly machines that adapt to people rather than the other way around,
because today we have to adapt to the machines that we use.
Yeah, I totally agree. Well, I'm sort of interested, maybe this is a little bit
related to what you just said. One of the big changes in artificial intelligence over the course of our careers,
it has been this adoption,
particularly over the past,
let's say 15, 20 years of machine learning techniques.
I know when I was in grad school,
I was a compiler and programming languages person,
but all of my friends who were working on
artificial intelligence or robotics were working on
planning algorithms and it's a bunch of
knowledge-based systems and expert systems and trying
to build these systems that
emulated intelligence and that had agency in the physical world
by developing algorithms that effectively manipulated symbols and rules and just had
mathematically precise understandings of what they were doing and how they were interacting
with the world. And the thing that's really changed since I left academia and defined my
entire career in industry is machine learning, where you just have lots and lots of data and
you are trying to train models from those data that allow you to do things. And this has obviously
changed how robotics works a little bit. I'd love, though, to get your perspective on how you've seen this change impact your work. And there are lots of shortcomings to machine learning technologies as well. And so I'd love to observe a few things. First of all, AI, machine learning,
and robotics are really transforming the scope of problems that we can solve. And
they're pointing to so many opportunities for the future. So with these technologies,
we can imagine a world with no traffic accidents and no wasted time in congestion.
We can imagine a world where everybody has personalized and individualized healthcare,
a world where we can better engineer medicines and better monitor, diagnose, and treat disease.
We can imagine a world where we can have instantaneous communication between people,
no matter what language we speak.
We can imagine a world where machines take on the routine tasks, allowing people to focus on the more creative, cognitive, and physical tasks.
So that is the promise.
But we are quite far from that promise. So even though we have tremendous successes due to the advancements in AI, machine learning,
and robotics, there are also limitations.
And I would like to remind us that the successes we see today are due to decades-old ideas
that are augmented with tremendous power and data.
So this is why they work.
They didn't work 20 years ago, but they work now
because we have so much more compute power and so much more data.
And so I think that it's important to harness these ideas
and deliver the maximum that we can from these ideas.
But I also think that it's super important
to think about new ideas. New ideas are really critical to advancements and without new ideas
and also funding to back them, more and more people will be plowing the same field. And that
means that the results will be increasingly incremental. So we really need major breakthroughs if we're going to live up to the promise,
but also if we're going to manage the major technical challenges facing the field,
the major technical challenges facing how we deploy this technology in a way that is responsible
and that ensures the greater good.
We also need a computational infrastructure in order to enable progress.
And this is an infrastructure that could deliver data and computation to us like we get utilities,
like we get water and energy.
And so one way to harness this challenge is to look towards the natural world and see
if we can learn something by studying the natural world. And this is actually super important because as we define ourselves as
researchers advancing the science and engineering of intelligence or the science and engineering of
autonomy, it's useful to focus both on the science part, meaning understanding how the
natural world works, and also on using the new insights, let's say, to create new types of
machines, to create new solutions. I think that there is a lot to be learned, but there is also
a lot to be discovered about the world. We know so little about intelligence
and understanding natural intelligence remains one of the most profound problems that is facing
humanity today. There are so many other problems, including understanding what we can do to save
the planet, understanding how to be equitable about the use of technology, understanding how to ensure
sustainability. But understanding life is among these very important problems. And so I think
people who study the science and engineering of autonomy or the science and engineering
of intelligence can get new insights about life, about ourselves. And then we can harness those
insights to create different types of
machines. So this doesn't mean that what we have now is not sufficient. What we have now is
fantastic. It's great. We can harness it, but we also can do more. And this fires innovation.
Yeah, I very, very, very strongly agree with that. There are a lot of interesting things I think that we can still
discover by trying to exploit scale in some of these problems.
But we do have this widening gap,
I think, between how much computation is involved in training a big machine learning model versus
just how much energy you expend to train one of these models versus the quiescent energy that a
biological brain consumes to implement its intelligence. And so that's a really inspiring gap there.
I mean, there's much, much to be discovered
between these systems that we're building
that aren't yet fully intelligent
and these biological systems that are.
Yeah, I completely agree, Kevin.
I mean, just think about what a kid can do
on a chocolate bar, right?
So you give a kid a chocolate bar
and then you have hours and hours
of extraordinary intelligent and autonomous activity.
And as compared to that,
if we look at what our machines need,
if we look at the cost of training a model like the GPT-3,
it's really extraordinary.
And honestly, as we move forward,
I really believe we need to get serious about sustainable AI.
And I believe we can actually have good solutions.
So you pointed out that the machine learning models
are getting bigger and bigger
and the data needed to train them is greater and greater.
And this is because the more data the system sees,
the better the scope or the bigger the scope of the system. But that uses a lot of energy.
That process consumes huge amounts of energy and produces huge amounts of carbon dioxide as a result
of doing those computations. For instance, the researchers at the University of Massachusetts at Amherst
estimated that training a large-ish deep learning model
produces 626,000 pounds of carbon dioxide.
This is equivalent to the lifetime emissions of five cars.
And that's just one average model.
Now, it costs $4.6 million in energy
to train GPT-3. And so, if I think about it, I have 50 to 100 models that are being trained in
my lab right now. And that's just one researcher. If we think about all the activity that is going
on in the space, it's challenging, right? I mean, so we have to
be responsible and responsive to the needs of the planet. And so this is an area where
I think technological innovation can truly contribute and extend the scope of our tools.
So the systems are so costly because each system contains hundreds of thousands of neurons
and billions of interconnections. But if you can develop simpler models, this can drastically
reduce the carbon footprint of AI. And this isn't really a hypothetical statement. It's an area where
many MIT researchers are already making progress. They are making progress in multiple ways.
We are looking at how we can take a huge machine learning model
and compress it so that we throw out all the redundancies in the model.
And we have shown that we can throw out up to 90% of the parameters
and still get approximately the same performance.
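To give a rough sense of what throwing out redundant parameters can look like, here is a minimal sketch of generic magnitude pruning. It is illustrative only, not the specific compression methods described here, and the weight matrix is just random data standing in for a trained model.

```python
# A minimal sketch of magnitude pruning: zero out the smallest-magnitude
# fraction of weights, then (in practice) fine-tune the remaining ones.
# Illustrative only; not the specific MIT compression technique.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Example: prune 90% of a random weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.90)
print(f"nonzero fraction after pruning: {np.count_nonzero(w_pruned) / w.size:.2f}")
```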
So that's already good. We are making good progress towards protecting our planet. We also are looking at
inspiration from the natural world to create more compact solutions from the beginning.
And so in particular, my group has been developing a mathematical structure we call neural circuit policies.
And the structure is inspired by nature.
It's inspired by the neural systems of small creatures like worms that have been studied by biologists to the point where the entire neural structure is understood and carefully characterized.
And so, for instance, C. elegans, a very small organism, has 300 or so
neurons. And on 300 neurons, this worm lives a good life, right? The worm finds food, moves in
the world, reproduces, and that's really extraordinary. That's much less than what we
have in our machine learning models. But each neuron employed by
this creature is actually a pretty complex mathematical function. It turns out that the
function of the neurons was characterized to be differential equations, not the typical step
function that is used in deep neural networks. And so the question is, can we, from the get-go,
create models where the neurons are more capable, where the neurons can compute more than a step function?
And so in our research with neural circuit policies, we have done exactly that.
We have allowed our neurons to compute what we call liquid time differential equations.
So these are differential equations where time can be varying.
So time is not constant. And we have also allowed that we have specialized neurons.
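To give a rough sense of what a neuron with a varying, input-dependent time constant can look like, here is a minimal sketch that Euler-integrates a simple ODE of that flavor. The equation form, the parameter names (tau, A, W_in, W_rec), and the toy input are illustrative assumptions, not the exact CSAIL neural circuit policy formulation.

```python
# A minimal sketch of a "liquid" (input-dependent time constant) neuron layer.
# Illustrative only; parameters and dynamics are simplified assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.01):
    """One explicit-Euler step of dx/dt = -x/tau + f(x, u) * (A - x)."""
    f = sigmoid(W_in @ u + W_rec @ x + b)   # input-dependent gate
    dx = -x / tau + f * (A - x)             # effective time constant varies with the input
    return x + dt * dx

rng = np.random.default_rng(0)
n_neurons, n_inputs = 19, 4                 # 19 neurons, echoing the steering example
x = np.zeros(n_neurons)
W_in = rng.normal(size=(n_neurons, n_inputs))
W_rec = rng.normal(size=(n_neurons, n_neurons)) * 0.1
b = np.zeros(n_neurons)
tau, A = 1.0, 1.0

for t in range(100):                        # roll the dynamics forward on a toy input
    u = np.array([np.sin(0.1 * t), 0.0, 1.0, 0.0])
    x = ltc_step(x, u, W_in, W_rec, b, tau, A)
print(x[:5])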
And we observed really interesting effects. And so for instance, we have looked at what it takes
to learn from a human, how to drive a car, how to steer a car. That means how to accelerate and how to steer.
And we built a deep neural network model
that required about 100 neurons
and a half a million parameters.
And we had pretty good performance.
But then we also built a neural circuit policy solution.
And our solution has 19 neurons.
And with these 19 neurons, we are so much more able
to visualize what is happening in the engine in the black box. We can create a decision tree that's
associated with that computation. And then we have kind of an understanding of how the engine
does what it does. And this is important for safety-critical applications.
Yeah, that's really, really fascinating. I'd love to learn more about that work.
So just switching gears for just a second, you run the MIT Computer Science and AI Lab,
which is one of the iconic computer science research institutions.
So tell us a little bit about what it's like to do your job.
Well, Kevin, I am honored and humbled to be able to work with such brilliant colleagues
and students at MIT and at CSAIL. Everyone here is advancing computer science
with the objective to contribute towards the future of computing and to making the world
better through computing. And through this work, there are foundational contributions to computing,
but also there is a lot of inspiration for applications and businesses.
And so being part of this community is inspiring and mind-bending.
The community has an extraordinary history.
Now you can think of CSAIL as having two parts, CS for computer science and AI for artificial intelligence.
The AI part of our name goes back to 1956,
when Marvin Minsky decided one summer to gather his friends. They went to the woods of New Hampshire.
They spent a month discussing the deepest questions in science. And when they emerged
from the woods, they told the world we coined a new field of study, artificial intelligence, which is about the science and engineering of creating machines with human-like characteristics in how they move in the world, how they see the world, how they play games, even how they learn.
And so our members have been advancing these ideas from the very beginning with such imagination and insight. The CS side has an
equally proud history that goes back to 1963, when the big dream was for two people to use the same
computer at the same time, right? And those computers were about as big as the rooms we sit
in. And so can you imagine in 60 short years, we moved from dreaming that two people might use the
same gigantic machine to a world where everybody computes, everybody benefits through smartphones
and other means of computation.
And so it's really extraordinary.
So I will tell you that CSAIL has always been about moonshots, about big dreams, about how
to go from science fiction to science and then
reality. And this is so inspiring. And for our students and researchers, no question is too crazy.
No future is too far off. Everyone takes pride in imagining the impossible and then finding ways to
make it possible. And so it's really an extraordinary privilege to be part of
this tradition and to have a chance to develop programs and opportunities to support the dreams
of our researchers. Yes, I'd love to hear what you think some of those big moonshots are. I know
one of the things that I'm fascinated by that you're working on is
soft robotics. So, you know, if you could tell us a little bit about that and like any other
like big interesting things that you all are working on, we'd love to hear about.
So let me say that there are things I work on as a researcher. And then there are things that
CSAIL as a community works on together. So CSAIL is a
very large community. There are 125 faculty here and 1300 members and so there's just a lot of
activity. Now one of my passions is to bring machines, materials and people closer together.
I want to have more intelligent materials,
and at the same time, I want to have more flexible, safer, more dexterous machines.
And one way to think about this is to consider what robots were like when they were introduced
in 1961, also 60 years ago. The first industrial robot was Unimate.
It was introduced in 1961,
and it was invented to do industrial pick-and-place operations.
Now, since then, the number of industrial robots in production reached tens of millions,
and these industrial robots are true masterpieces of engineering
that can do so much more than people do.
And yet these robots remain isolated from people on the factory floor because they're large and heavy and dangerous to be around.
So we'd like to have machines that are safer to be around and that can be teammates for people. Now, if we compare industrial robots with organisms in nature, organisms in nature are
soft and safe and compliant and more dexterous and more intelligent. How can we get to the point
where we have robots that are like that? And so as I think about our interaction with machines
and the natural world, I actually feel inspired to rethink what a robot is.
Because while the past 60 years have defined the field of industrial robots and empowered
hard-bodied robots to execute complex assembly tasks in industrial settings, I really wish for
the next 60 years to be ushering in robots for human-centric
environments and robots that can help people with cognitive and physical tasks. Now, as we think
about what these robots might look like, I'd like to ask us to look back at what our current robots
look like. So when you think about a robot today,
the images that come to mind are like an industrial manipulator,
a humanoid, or a box on wheels, right?
These are the robots that are most used today.
And so these robots are primarily inspired by the human form or by boxes on wheels.
And so what I believe is that,
I believe that we can do more than that.
I believe that we can stretch ourselves and go to a different stage
where we think about soft robots
that are inspired in shape by the animal kingdom,
with its form diversity,
by the natural world with its form diversity,
and even by the built environment,
because then we would have so much more potential for applications.
I also believe that we can consider a wider range of materials that we have available to us to make these extraordinary machines.
The robots of the past 60 years have been made mostly of hard plastics and metal.
But what about machines that are made out of all materials available to us? And so we can consider
plastic and silicone and wood and paper, even food. And we can also consider synthesized materials.
I think there is so much opportunity to create a whole new type of machine
that will be a good teammate for people,
that will be a more capable tool for people who need help with physical and cognitive work.
Yeah, I'm really excited about the possibility.
So it feels like we're at this point in time where we're really ripe for new
breakthroughs. I'm a hobbyist machinist, and one of the things that I'm seeing in a bunch of
machine shops now, and one of the things that people are thinking more and more about, is how to integrate simple things like six-axis robotic arms into their workflows. So how you can have
a thing that will pick a raw piece of metal up, you know, open a door on a
milling machine, like place it into a fixture in the machine, like cycle start,
you know, the part gets made and then you reverse the whole process.
You pull the finished part out,
put it on a pallet and that can be
an amazing thing in some of these shops where you can run
an extra shift and you keep
these really expensive machines running all the time.
But they are sort of simple things. You program them by basically having a human guide them
through a bunch of waypoints in the process you want them to accomplish. And you usually are
custom designing some sort of end effector so that it can pick up the things you want it to pick up.
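As a rough sketch of that waypoint style of programming, here is a hypothetical example: a few recorded poses plus gripper states that a controller simply replays in order. The data structure, pose values, and functions are made up for illustration; real arms each have their own teach-pendant formats and vendor APIs.

```python
# A rough sketch of waypoint-style robot programming: a human jogs the arm to
# a few poses, the poses are recorded, and the controller replays them.
# Hypothetical names and values; not any particular vendor's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    name: str
    joints: List[float]        # six joint angles (degrees) for a 6-axis arm
    gripper_closed: bool       # gripper state to hold at this waypoint

program = [
    Waypoint("above_stock",   [0, -45, 60, 0, 75, 0],  gripper_closed=False),
    Waypoint("grip_stock",    [0, -30, 70, 0, 50, 0],  gripper_closed=True),
    Waypoint("machine_door",  [90, -40, 55, 0, 70, 0], gripper_closed=True),
    Waypoint("place_fixture", [95, -25, 65, 0, 45, 0], gripper_closed=False),
]

def run(program: List[Waypoint]) -> None:
    for wp in program:
        print(f"moving to {wp.name}: joints={wp.joints}")
        # a real controller would interpolate joint motion here
        print("  gripper:", "close" if wp.gripper_closed else "open")

run(program)
```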
But it's really exciting to think about things that aren't that simple, that have really
complicated dexterous end effectors and that can be programmed in more robust ways.
Well, and Kevin, let's even go beyond that.
Let's even bring more cognition to these tools. And let's say that these tools,
these machines will be able to watch you and understand what you want to do and come and give
you a hand. So let's say you're trying to lift a heavy box and a machine comes to help you lift it
up just like a friend would today. Yeah, that's a great vision. I'm so glad you all are working on these things.
I'm just curious, what technological breakthroughs
do you think we are going to have to
have in order for some of these things to happen?
We've got a bunch of things that have gotten really cheap
over the past handful of years and those things
where we clearly understand what the scaling path looks like.
But what breakthroughs do you think we are waiting for
in order to see this next round of progress?
Like to have a machine that can do what you just said,
that can notice that I'm struggling to pick something up
and come to assist me.
Well, Kevin, we will need progress in all aspects of using the machine. And so remember
that the machine has a body, the machine has a brain, and the machine has the interaction
with people. So we need progress on how we build and design machines. And here, I think computation,
machine learning, AI can play a tremendous role.
I think we're slowly moving towards computational design and computational fabrication
of machines. And this allows us to experiment with different types of design. It allows us to
experiment in simulation so that the time required to create the final product is greatly reduced.
This is back to our robot compiler that I mentioned earlier in our conversation.
And as we think about designing these machines,
we have to consider the shape, we have to consider the materials,
we have to consider energy requirements. So we also need to think about the energy aspect of the machine. Because honestly,
thinking about a machine flying through free space is simpler from a computational point of view
than a self-driving car that has to drive through congested traffic. So I'm excited about those
possibilities. So those are technical challenges on the hardware side. We also need to consider challenges on the computational side, or what I call the brain. So we need to make more capable algorithms, more capable machine learning solutions, more compact machine learning solutions, more solutions that can deliver safety, and algorithms that are characterizable
with respect to what they're able to do.
And then we need advancements
on how machines and people interact with one another
so that we can have more intuitive interactions
that will allow people at large to use machines
without needing to become experts in robotics. So as you look forward to the next
5, 10, 20 years, what are you most excited about? Well, I'm really most excited about this vision
of building machines that truly help people with cognitive and physical work. And doing that in the process of advancing the science and engineering of intelligence,
or if that's too ambitious, maybe it's at least the science and engineering of autonomy.
And so I think that there are so many really exciting problems with respect to automating
how we think about the hardware piece of the machine, expanding the
capabilities of the machine, thinking deeply about how machines interact with each other
and interact with people. This is all tremendously important. But I think that as we think about
these challenges, it is also important to consider how the challenges benefit people and humanity in general.
And so we talked about positive impact in healthcare, positive impact in transportation, positive impact in so many aspects of our lives.
I think that AI and computation holds so much potential to help, but we actually have to do it very carefully. And so, for instance,
in healthcare, there is tremendous potential to improve diagnosis. In fact, machines today will
look at more data in a day than a doctor will see in a lifetime. And there was a fairly recent experiment where an AI-based approach and a doctor were tasked to look at scans of lymph node cells and diagnose cancer or not cancer.
And it's interesting to see that the machine had an error rate of 7.5%.
The doctor had an error rate of 3.5%. But when the AI system and the doctor worked together, the error rate went down by 80%, to 0.5%.
This is really substantive.
And so today, these systems may be deployed in the world's most advanced cancer treatment centers.
But imagine a future where every practitioner, even those working in
small practices in rural settings, had access to these systems. An overworked doctor may not have
time to review every new clinical study, every new paper, but these systems could provide the
doctor with pointers that will enable the doctor to offer patients the most cutting-edge
diagnosis and treatment options. But now I'm rushing because I want to make sure we realize
that this work requires data. And every time you need data, you need to consider the risks to
privacy. And here regulation can help, but there are also potential advancements in a technological
context.
So technological breakthroughs will help us get through the issues of privacy in a much
faster, much easier way.
And in fact, technological breakthroughs can help with broad challenges in computation, like the cybersecurity
challenges that we see every day when we turn on the news. And here what I mean is advancements
in homomorphic encryption that can enable us to compute on encrypted data without decrypting it,
so that organizations that need information, like your insurance company,
for example, can post queries, can use the data without actually looking at the details.
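To make the compute-on-encrypted-data idea concrete, here is a toy additively homomorphic scheme in the spirit of Paillier encryption. It uses tiny primes, is purely illustrative and not secure, and is not the scheme any particular system uses; the point is only the property that a party can add numbers it never sees in the clear.

```python
# Toy Paillier-style additively homomorphic encryption (tiny primes,
# NOT secure, illustration only). Multiplying ciphertexts adds the
# underlying plaintexts, so a server can aggregate data it never decrypts.
import random
from math import gcd

def keygen(p: int = 61, q: int = 53):
    n = p * q                                       # public modulus
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                            # modular inverse of lam mod n
    return n, (n, lam, mu)                          # public key, private key

def encrypt(n: int, m: int) -> int:
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return ((1 + m * n) * pow(r, n, n2)) % n2       # uses the g = n + 1 simplification

def decrypt(priv, c: int) -> int:
    n, lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % (pub * pub)                     # homomorphic addition of ciphertexts
assert decrypt(priv, c_sum) == 100                  # the server never saw 42 or 58
```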
And so I'm so excited about the possibility for technology to work closely with policy
and with leaders of businesses to get to the point where we have safe and responsible deployment of technology.
Yeah, I really agree with everything that you just said.
And there's a bunch to be inspired by.
We've been excited about homomorphic encryption and the general bag of techniques
for making sure that you get all of the benefits of machine learning and AI
without the trade-off that you mentioned, between making the data available to the
model and preserving privacy, being quite as stark as it is right now, or as it can be in some cases.
Maybe the most inspiring thing that you just said is this example of a machine learning system and a doctor working together to get to a fundamentally better outcome than either of them could achieve on their own.
I think this cooperative AI or collaborative AI is really an exciting thing to think about.
I like to think of our machine learning solutions of today
as sort of like interns.
They're running in the background,
they're doing tasks,
but then they're bringing the results to the human
to make an informed decision on.
And I think that that's the best way
to use AI and machine learning,
especially in safety-critical applications.
Yeah, very, very, very strongly agree with that.
So we are running out of time here, but before we end, I would love to ask you a question
I ask everyone, which is, what do you do in your quote-unquote spare time?
I'm sure you're one of the busier people on the planet and you've got lots of
fun stuff to occupy your time when you are working. But what do you do when you aren't?
Well, I actually like to spend time with family and friends. I also like to enjoy nature and I
love to ski in the winter. I love to scuba dive in the summer.
I'm trying to get back into tennis.
I used to play tennis.
I used to be pretty decent,
but I haven't played in about 20 years.
I love to cook.
I love to host dinner parties
where we could have conversations
like we had today, Kevin,
but this has been kind of tough
during the pandemic.
Yeah, it has been.
What's your favorite thing to cook?
Oh, that's like asking who's your favorite child.
So I actually have a wide range of things I like to make.
So for my girls, I like to make my very special meat sauce pasta
that they enjoy every time.
But I also like to get a little bit fancy and make higher skill recipes that mostly come from French cookbooks. Or I really like the cookbooks of
Annabel Langbein. She's a New Zealand chef who has really, really wonderful cookbooks. In fact,
sometimes after a long day at work, I might pick
one of her books before I go to sleep and I might just read through her recipes as a way of relaxing.
And then what that does is it creates a kind of a cache of ideas. So oftentimes when I'm in the
kitchen, I actually don't follow a cookbook. I just look at the ingredients in the fridge and I make whatever I dream up on
the spot. I love it. That sounds like so much fun. So thank you so much for taking time to
chat with us today. I think you've given us a ton to think about and I am so glad that you are at MIT in CSAIL doing the work that you're doing.
It was really great to have you on the show.
Thank you so much, Kevin.
Well, that was Kevin's conversation with Daniela Rus.
Okay, so I have to start with this. I kind of, I'm still trying to wrap my mind around the fact that as a kid, she grew up watching Lost in Space, which I have to admit I've never seen.
But I know the memes. "Danger, Will Robinson."
I know that has something to do with robots.
There are robots in it.
She grew up watching this show in Romania, and now she literally builds robots and works on kind of the future of this stuff.
How cool is that?
That is super cool.
And it goes to this point that I make a lot,
that inspiration and role models are so important when you're a little kid.
Yeah.
I think a lot of people of her generation and mine are doing what we do today because of the TV shows and the science fiction books and the things that the news media was consumed with talking about when we were little kids thinking about what we wanted to be when we were grown up. Yeah, no, I think we've talked about this before
on other episodes, but kind of showing off
like what that future world might look like,
you know, encourages and kind of creates opportunities
for technologists to actually go out
and make those things a reality,
which, I mean, I just, I love that
because now, you know, she's literally working on
the future of how...
Robotics is such a fascinating space and there's so many interesting implications
as you both were chatting about.
And I love that that is coming out of,
as you were just kind of saying,
like having that role model,
having that modeling, I guess you could say,
for someone when they're younger.
Yeah.
The other thing too that was fascinating to me is
that she was working in a factory for,
I think a week out of a month and learning how to use
a manual lathe and single-point turning threads.
All of this stuff is so awesome.
I think she mentioned it herself;
it's one of the reasons why she thinks about the work that she does as a mashup across a bunch of different disciplines. Yeah, and I probably would have been absolutely unenthused to be forced to go work in a factory when I was a teenager, but like, it had to have been a really interesting, formative part of her experience as a kid growing up.
Yeah, I was struck by that too, because as she said, you know, she said that it kind of really did inform and contribute to her journey now.
But yeah, like you, I would have been completely unenthused.
But when you look at the work she's done, having that experience and having like that fundamental understanding of how those parts work and having that experience of seeing that machinery up close, actually touching and
feeling it, you know, as a teenager had to have an impact when you're creating these other
types of machines, right? Yeah. You can sort of see a thread that runs through a bunch of our
guests and, I suspect, runs through you and me as well.
It's like a lot of people who end up in
tech are super curious about a pretty broad range of things.
They also pretty early on
learn that the world doesn't work on magic.
The world works because lots and lots and lots of scientists and engineers and scholars and people of all walks of life have come together to build really complicated things.
And if you know where to look, I think that's the mindset that she probably had inculcated in her pretty young, because she was working in a factory.
Yeah. Yeah, no, I think so.
And I think it's also an interesting perspective to bring, having that kind of that wonderment and that curiosity when you're teaching students, when you're overseeing, you know, this group of people at MIT.
I wanted to ask you, you know, towards the end of the conversation, you talked a little bit about
maybe kind of the future of what might happen next. And she was talking about some of the
different materials and things that might be used in robotics and whatnot. But I'm just curious,
from your perspective, what is exciting to you in the robotics space specifically, looking toward the future?
I'm excited at how fast the technology is evolving and has been evolving over the past couple of decades. One of the things that is pretty thrilling to me is to see robotics and automation
being used in rural communities
to revitalize the economies in these places,
to sort of give some of these communities
like a new lease on life
because through the leverage of all of that automation, you can take the workforce that is available in these places and do an awful lot with it, which is, to me, really incredible. And I think this is going to be really important for the future, because you sort of look at the demographics of most of the industrialized world right now.
And we have a growing population of retired people and a smaller population of working
age people coming in behind them. And you need lots of this robotics and automation and technology
in general so that the world can keep on functioning. I'm excited to see that I think the technology
is actually up to the challenge ahead of us.
Yeah, no, I totally agree with you.
I think that is really interesting.
Like you said, it can bring a new lease on life,
so to speak, to some of these communities.
I think that's an interesting way to turn on its head
what has been one of the typical narratives of
the last more than 50
years, you know, that the robots are going to take over the world, seeing it as a negative. And
instead, no, actually this automation could, as you said, help when we have a workforce that is
in industrialized parts of the world where, you know, more people are retired and fewer people are
working, but we still need to get things done. Yeah. I know as a young engineer earlier in my
career, every time I was able to build a piece of automation to help me with a job that I was doing,
it was because the job that I was doing was repetitive and annoying.
I mean, look, I will be the first to admit I've been very, very guilty of spending three times
as long to automate a solution than to actually go through the repetitive task over and over again,
just on the off chance that I might need to use that script or automation again in the future.
It is a virtue of a good programmer.
All right. Well, that was a fantastic conversation with Daniela.
We are out of time for today.
Thank you again, Daniela, for sharing her time
and her amazing insights
and her great story with us.
Remember, we always love to hear
your ideas for guests.
So please email us anytime
at behindthetech at microsoft.com.
Thanks for listening.
See you next time.