Microsoft Research Podcast - 027 - The Democratization of Data Science with Dr. Chris White
Episode Date: June 6, 2018
When we think of medals, we usually picture them over the pocket of a military hero, not over the pocket protector of a computer scientist. That may be because not many academics end up working with the Department of Defense. But Dr. Chris White, now a Principal Researcher at Microsoft Research, has, and he’s received several awards for his efforts in fighting terrorism and crime with big data, statistics and machine learning. Today, Dr. White talks about his “problem-first” approach to research, explains the vital importance of making data understandable for everyone, and shares the story of how a one-week detour from academia turned into an extended tour in Afghanistan, a stint at DARPA, and, eventually, a career at Microsoft Research.
Transcript
I got approached to work on this very short-term project in Washington, D.C., and I said, no thanks.
I got asked a second time. I said, that's great, but no thanks.
And then a third time as kind of a personal favor, and so I said yes, of course.
Turns out, I went down for one week. One week turned into two weeks. Two weeks turned into three months.
And instead of going back to Harvard, I went to Afghanistan.
You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I'm your host, Gretchen
Huizinga. When we think of medals, we usually picture them over the pocket of a military hero,
not over the pocket protector of a computer scientist.
That may be because not many academics end up working with the Department of Defense.
But Dr. Chris White, now a principal researcher at Microsoft Research, has.
And he's received several awards for his efforts in fighting terrorism and crime with big data, statistics, and machine learning.
Today, Dr. White talks about his problem-first approach to research, explains the vital importance
of making data understandable for everyone, and shares the story of how a one-week detour
from academia turned into an extended tour in Afghanistan, a stint at DARPA, and eventually
a career at Microsoft Research.
That and much more on this episode of the Microsoft Research Podcast.
Chris White, welcome to the podcast.
Hi, Gretchen. Thanks.
So you're a principal researcher at MSR and you work on special projects.
We'll talk about that in a second, what that means.
But for now, let's talk in general about what gets you up in the morning.
What are the big problems you're working on and the big questions that you're asking?
Well, there's a bunch of questions and problems worth looking at.
And from my point of view, I looked at problems that were happening in society,
problems that technology companies could build technology to address,
technology that could also have a business purpose.
And it sort of inspires me in lots of ways.
One way to think about it is, you know, what do terrorist financing,
human trafficking, propaganda, and ransomware have in common?
Well, those are all things that appear online.
They appear in the vastness of big data and the darkness of usernames.
And so if one were trying to address those problems, technology would be a helpful aid.
And a company like Microsoft, who has a huge technology platform and has the responsibility
to maintain trust worldwide, would be a great place to work on it.
So the term research covers a broad range of approaches to discovery, or, more colloquially, finding stuff out.
And the special projects approach you bring to the mix here by way of
the Defense Advanced Research Projects Agency, or DARPA, is a little different from traditional
academic research. Can you talk about that? How is this approach different? What advantages does
it offer for specific kinds of research questions?
Sure. One way to think about research is not knowing what you're doing. That's
why it's called research, right? And given that, there are many ways to approach solving
problems that you don't know how to solve. Sometimes the question depends on the scope
of the problem. Sometimes it depends on the maturity of the approaches to
solve the problem. Sometimes it depends on the state of society and its ability to adopt and use
and afford solutions to problems. And so at DARPA, the way it's approached is by identifying the
problem first and then understanding how to organize money, technology, talent, people,
organizations to best execute
against solving that problem.
And that usually means that you assemble lots of different kinds of people, as opposed to
a classical view of research from the academic point of view, where you study a problem,
you write papers that are reviewed by your peers, and you advance the field by that kind
of approach. In the projects kind of approach,
you bring together people with different skills and then you organize them to approach a problem
with larger scope than you could address as a single person. And the hope is that you can do
something impactful.
Yeah. Do you find purchase with that method, as they say, in the academic realm?
Well, for sure. In my world, we work on data analytics. We work on how to enable people to
interact with computers to make decisions from information. We want them to have help from AI,
and we focus on how to help them organize and structure information and how to help them
interact and visualize that to make decisions. To do that requires people with different backgrounds. It requires
user interface application developers. It requires big data distributed computing developers. It
requires people familiar with machine learning and artificial intelligence techniques.
Any one of those people can address part of the problem, but to address it end-to-end requires
organizing them together. And so that's the project's approach we take here.
Okay. Let's talk about data science writ large for a minute.
The massive amounts and multiple sources of data that we have today have prompted a need for big data analytics and data visualization tools.
So what are the specific challenges of big data problems and how are you tackling them?
Well, big data, much like artificial intelligence, are terms that are vague.
In fact, they don't mean anything, neither big nor data.
They're not qualified.
Just like artificial intelligence.
And so that's good and bad. It's good because there's a movement.
That movement has funding and interest from policymakers.
It has the need for understanding implications.
But that movement is still very large and very vague.
And so I think about big data really in terms of publicly accessible information as a starting point, because that's something people are familiar with.
They've all gone to a search bar.
They've all issued a bunch of queries.
They've all had a bunch of browser tabs open and had that familiar feeling of, God, there's just like a lot of information
out there. How do I find what I need? How do I organize it? And when that problem is a business
problem, it's even bigger. And so I think of it like that. Sometimes there's an image like an
iceberg: what you see from a search bar, what you see if you did a Bing search for a product or a
celebrity or an event, and you get a list of links and an answer card, people think of that as data
interaction. And it's true, but behind that, there's a lot more. There are databases, there are
APIs with streams like Twitter and Facebook, there are public records from FOIA results,
there are all kinds of things that you
have to have a different kind of skill to access. And a lot of the big data approaches are how to
use technology to access that information and how to present it to people so they can make use of it.
One way I think about it sometimes is I go all the way back to the beginning of computers,
the 1830s. Charles Babbage is envisioning a computational machine
of some kind.
In the end, one gets built;
they call it the Difference Engine.
And really that was an engine that let you compare numbers.
But what is happening now,
it's the same fundamental operation of comparison,
but it's not zeros and ones.
People want to compare concept-like entities.
They want to compare events. They want to understand the reaction to things.
And those are the kinds of questions you can answer with big data. It's not a fact. It's more
understanding the situation, understanding what's happening, who's involved, how to do the analysis.
And those are the kinds of abilities we want to empower our users with.
What's your particular focus in the machine learning space?
Right. So I view the process as one where people are more or less doing the same thing they've
always been doing in the world. But what's changed is now we have lots more partial and noisy
observations of that behavior. And with
that, we can start to infer what was going on and what to do about it, what's changing. And so we
view that process through the lens of data analysis. How do we take in lots of partial,
noisy observations from streams of sensors like social media, like news, like webpages and
documents, internal reports, measurements of various kinds,
to organize them in a way that lets people understand what might have been going on
and to understand what might be related and what they might be able to do about it.
And so we employ methods of graph analysis and graph statistics to posit a data generating
process that we can measure and then to evaluate that as a good
representation. And then to bridge to the user, we find that's not enough. So we need ways to
visualize and explain it. Those require advances and inventions in that visual space of representation
and in the space of interaction. And so we have focused in that space as well. So I say that in a way, my home is in machine learning and information theory, and I'm a tourist in the HCI space,
but really now there's like a second home. When I went overseas, I was there to do a machine
learning problem. It turned out that I needed to do visualization and HCI because people wouldn't
use the results if they didn't understand them. People wouldn't
take advantage and take action on information if they couldn't interrogate it. And so very quickly,
that process of visualization, of interaction, of application development became as important,
if not more, than the machine learning algorithms to transform data into a data structure
for use. And so we continue focusing on advances in modeling that are more realistic, that are
more efficient, that are more expressive, as well as advances in HCI that are more representative,
that have more points of view for interaction, that allow for different kinds of users to
understand things more quickly, and to need less training material and fewer tutorials. And those are the basis for our research.
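To make that pipeline a bit more concrete, here is a minimal sketch in Python of the general pattern described above: accumulating partial, noisy co-occurrence observations into a graph and computing simple statistics to suggest what might be related. The entities, observations, and noise threshold are hypothetical, and this illustrates the idea rather than the actual system:

```python
# A toy illustration of building a graph from noisy, partial observations
# (e.g., entities mentioned together in documents) and computing simple
# statistics over it. Data and thresholds here are hypothetical.
from collections import Counter
from itertools import combinations

import networkx as nx

# Each observation is the set of entities co-mentioned in one document.
observations = [
    {"account_A", "account_B"},
    {"account_A", "account_B", "account_C"},
    {"account_C", "account_D"},
    {"account_A", "account_B"},
]

# Count how often each pair co-occurs across all observations.
pair_counts = Counter()
for entities in observations:
    for u, v in combinations(sorted(entities), 2):
        pair_counts[(u, v)] += 1

# Keep only pairs seen more than once, to damp noise in the observations.
G = nx.Graph()
for (u, v), count in pair_counts.items():
    if count > 1:
        G.add_edge(u, v, weight=count)

# Simple graph statistics suggest which entities are central and which
# clusters of activity might be related.
print("central entities:", nx.degree_centrality(G))
print("clusters:", list(nx.connected_components(G)))
```

A visualization and interaction layer, as described above, would then sit on top of a structure like this.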
Let's go back a little bit. You have a fascinating background. Talk about that for a minute.
Well, I grew up in the Midwest, in Oklahoma, and focused on electrical engineering. And over time,
became more and more academic, got a PhD, did a postdoctoral fellowship, was planning to be a
faculty member. That's kind of what you get taught to do when you're in that kind of schooling.
And I got approached to
work on this very short-term project in Washington, D.C., and I said, no thanks. I was really enjoying
the summer in Cambridge. I got asked a second time; they said it was good for my professional development.
I said, that's great, but no thanks. And then a third time as kind of a personal favor, and so I
said, yes, of course. Turns out, I went down for one week, one week turned into two weeks, two weeks turned into three months. And instead of going back to
Harvard, I went to Afghanistan. And that began a very odd detour into fieldwork, where the reality
of people using technology to understand information and make decisions really dramatically
affected me and the way that I think about problem solving and the way I think about research.
That led to several years of understanding and fielding technology for use in the Middle East,
understanding how people adopt technology, what they need, how they prioritize their problems.
And then all of those lessons got to come back with me and be
part of major investments for DARPA. That included XDATA, which is a program that was part of
President Obama's Big Data Initiative. It was the lead project for DARPA. And Memex, which is named
after Vannevar Bush's famous article, As We May Think, where he anticipated Wikipedia and hyperlinks.
And he describes this machine called the Memex
that lets you dig into information and make sense of it.
And the Open Catalog, which lets us publish and share the results
from research funded by taxpayers at DARPA.
And with all of these, my view is that if we're investing taxpayer dollars,
unless there's a compelling security reason to keep something secret,
we should make it free and easy to access.
And those projects, those were opportunities to invest in small companies, large companies, universities,
to bring both general purpose technology, but then to bring it together to solve important
problems like human trafficking and terrorism financing.
Let's talk about that for a minute.
You went from doing military work in Afghanistan to digital crime fighting. Can you
talk about that era of your life for a minute?
Sure. Many of these technologies, data analysis
technologies, machine learning technologies, human-computer interfaces, these are very general. Again,
they're almost like utilities from my point of view. That's why having them as open source
projects makes a lot of sense to me, or having them as low-cost products makes a lot of sense to me. And the observation coming
back from Afghanistan was that many parts of the government had similar problems. And there were
many law enforcement and related organizations that had similar problems. And so we decided to
pick a few of those and focus on them as applications, knowing that if we could address
those as well as build general
purpose technology, then other people might help apply them to other problems. And coming from the
government to Microsoft, there was a real opportunity. And therefore, we thought this is a
great place to work on that problem. We have a digital crimes unit. We have the ability to apply
those techniques we've learned. And we have the technology platforms that Microsoft has.
So we took a shot at it.
And?
Well, the first problem we worked on with the Digital Crimes Unit was a cousin problem to ransomware.
There's a version of that problem called tech scams, and it works like this.
You're on your computer, and either you click on something or you get an email,
or somehow your computer gets into a state where there's a pop-up that says, whoa, hey, you have a virus.
You need to call tech support.
And here's the number.
Call it.
Now, first of all, none of you should ever do that: never call tech support if you're prompted to from a computer.
It doesn't work that way and you're going to get in trouble. We get tens of thousands of complaints in writing a month where people say, there was this thing that popped up on my computer, pop-ups, pop-ups, pop-ups, wouldn't go away, wouldn't go away, couldn't delete it, they'd lost control of the computer, someone using it
remotely, all these kinds of things. And it's just a difficult situation. And so this problem,
it's pervasive. It's something like, you know, one in 12 people that were interviewed had this
problem happen to them. And of those, many of them were ensnared. And so we took on this problem.
And the way we approached it was by building a web-scale
crawling architecture to find where all of these scams are happening on web pages anywhere.
And this is not easy.
No, this is at the same scale as crawling the web to build a search index.
It doesn't have the same difficulties in one sense because you're not enabling millions
or billions of people to access that simultaneously,
which is a very hard operational problem. You're talking about instead the analysts at the Digital
Crimes Unit or the Federal Trade Commission that we cooperate with or other internal groups. So
it's a smaller number of users, but the size of the data is still very large.
More of a needle-in-the-haystack kind of problem.
Yeah, exactly. It's more targeted. And so with this problem, it's a crime problem, but we need something that people can use. How can they understand
the vastness of this tech scam problem? How big is the problem? Where is it occurring?
How many scams? How organized are the scams? These are the kinds of analysis questions
that one can answer with big data access and these kinds of tools, but we have to build them.
And so we built a web crawling,
distributed backend infrastructure
that would find where these scams were happening online.
One of the challenges was to find them
as they were happening and to capture that,
because one detail around law enforcement and digital crime
is you have to have evidence.
By building the ability for people
to organize this kind of content,
organize it with provenance, with comparability, with the ability to query and reason, we were
able to then start to build tools that analysts could use. The second half of that problem was,
okay, now we maybe have found all of this content. Maybe we've started to use machine learning and
artificial intelligence to structure it. How do we make that available to people? How do we make that
artificial intelligence visible and usable? Well, we have to build a bridge, and that bridge is
through user interfaces or through HCI approaches in general. And so we had to build many of those
and organize them into an application and make that available to these analysts.
The outcome, though, was satisfying.
The outcome was that we worked with the Federal Trade Commission on Operation Tech Trap last year.
We were able to supply them with the appropriate and relevant information to contribute to indictments.
And they brought several indictments and conducted raids. One of them involved a group in Ohio that had been defrauding 25,000 people of $40 million.
Wow.
So the ability to go end-to-end, to identify the problem, to organize the technologies, to find relevant information, to make it accessible to an analyst, and then to matriculate those results into action, that to me is the real challenge of
the modern era of computing using data and evidence-based decision making.
And so that's why our research focuses on both that organizational aspect, but then
also how do you present it?
How do you make it navigable?
How do you make it understandable and cheap?
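As a rough sketch of the evidence-capture idea just described, the following Python fetches a single page, checks it for tech-scam-style phrases, and records provenance (URL, fetch time, content hash) so that a match could later be queried and reasoned about. The URL, indicator patterns, and record fields are hypothetical; the real system was a distributed, web-scale crawling architecture, which this toy example does not attempt to reproduce:

```python
# Toy sketch: fetch one page, look for tech-scam indicator phrases, and
# record provenance (URL, fetch time, content hash) so a match could be
# used as evidence later. Indicators and URL are hypothetical examples.
import hashlib
import re
from datetime import datetime, timezone

import requests

SCAM_INDICATORS = [
    r"your computer (has|may have) a virus",
    r"call (tech|technical) support",
    r"do not (close|restart) (this page|your computer)",
]

def capture_page(url: str) -> dict:
    """Fetch a page and return a provenance record with any scam matches."""
    response = requests.get(url, timeout=10)
    html = response.text
    matches = [p for p in SCAM_INDICATORS if re.search(p, html, re.IGNORECASE)]
    return {
        "url": url,
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        # Hashing the raw content preserves exactly what was seen.
        "content_sha256": hashlib.sha256(response.content).hexdigest(),
        "scam_indicators_found": matches,
    }

record = capture_page("https://example.com/")  # hypothetical target
print(record)
```

Records like this, kept with provenance and comparability, are what the analyst-facing tools described above would then organize and query.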
And it seems like it'd be satisfying as well. I mean, you've got this huge problem,
and then the outcome is good. People were able to catch the bigger fish using those kinds of tools.
Absolutely. With those tools, those are the kinds of questions you can ask. What is the biggest fish?
What's the most recent fish? Using that to stay ahead and work at the speed and scale
of those that are doing exploitation,
that's the opportunity. And we're in a good position to do it.
So let me ask you this, because it sounds like you're matching wits. I mean,
the people that are doing these scams, do they employ guys like you to do the bad stuff? And
then you've got this cat and mouse game of who's going to stay ahead of whom in the big tech
picture?
One of the lessons I learned in Afghanistan was that people are capable of a lot. People see movies, they read books, but very few people really are exposed to what humans are capable
of doing to each other and for money. And so the problem you mentioned, this problem of keeping up
with the adversary, is also one of the limitations of the work that we've done with the Digital Crimes Unit: it still has a little bit of a whack-a-mole kind of approach to fighting crime and catching bad people.
And that is useful. It's useful for deterrence. It's useful for characterizing the problem, but it has limitations in terms of the long-term nature of
its impact. And so from my point of view, the other reason we focus on the general purpose-ness
of the technology is because the real impact on those problems is likely to come through
economics, through understanding how people are making money doing this stuff, and how to then
use that as a way to approach a more systematic way to deal with the problem.
And so given all of that, to me, it allows us to think beyond the whack-a-mole approach,
beyond the case-by-case approach.
Because if we think about how spheres of influence like cyber and information are being used systematically,
then we can start to approach them with tactics and understanding.
For example, if we now see that there are organizations that are trying to influence
groups of people, we can ask questions like, how long does that take? What kind of measurement
system would we need to understand that? And if we had such a system, such a big data and
technology measurement system, how might we use it to protect ourselves?
It sounds like new fronts and new frontiers.
Well, I think what we're seeing with the rise of the cloud and with the pervasive increase in sensors collecting and storing information
is we're seeing how people are starting to use that.
And in the end, a lot of this really is about people. The way they're using it to make money and exploit each other is something that's really happening, just as people are also using it for business purposes, for normal
everyday life, for just getting their work done. And so we have to acknowledge that and make sure
that we can protect our platforms as a company built on trust, as the "designated driver" of the
IT industry, as has recently been reported. And then at the same time, when we do see these things happening,
how can we make sure to empower the people that are protecting us
with the tools they need to make decisions using information?
You've been called the man who lit the dark web.
Despite the sensationalism of that headline, how did you do that?
How did you light the dark web?
I mean, the better question is what the context was for why they said that. Well, as I mentioned, with big data in general and the rise of publicly accessible
information and the way it's being used now for both exploitative purposes as well as
constructive purposes, we were trying to understand the right place to start applying the technology
we were investing in from DARPA's point of view. And we looked around and we found,
to our surprise, that there was a tremendous use of the internet and communication networks on the
internet as a mechanism for connecting buyers to their products where the products were people.
And that's the way I talk about it because in the end, it does seem to be a function of economics,
that there is a demand for products, and they're willing to pay for them. And there is a supply,
and those are people who are willing to take risk from law enforcement in order to make money
meeting demand. And the way that the internet is used for advertising and connecting the
buyer and the product is where there is both an opportunity as well as a use that seems suspicious.
And so when people are going online and they're doing these searches, some of them are not looking for anything in particular.
And then those that are trying to look for something a little bit more risky or a little bit more dangerous, start to find places online where
they are sought out. And so our view was that these places were, before then, places where you
could operate with relative impunity because no one could see what you were doing. If we could
start to make it available for people to see what you're doing, even without judging exactly
whether what you're doing is good or not,
because that's what our legal system is for, to help us arbitrate those distinctions,
that at least then people could have the evidence to know what you were doing and then decide
whether it's worth prosecuting under our legal system or not.
And that was the big opportunity.
It was that this darkness of different types of networks, of webpages, of usernames and large databases, of FOIA documents and leaks.
This darkness was something where if we could make the information within it visible to regular people, subject matter experts, then maybe they would do something about it. And so we took on the worst of the worst people who were abusing children
and who were abusing women and men in labor and sex situations.
And they were doing it a lot and without really much consequence.
And so that was why we picked that problem.
And I think that we had good impact, although there's still a lot of work to do.
I really like the framework of illuminating simply by putting a light on something and then allowing people to discern and follow up if they can.
Oh, for sure. Well, and when things are dark and when things are vague or unknown, they can be scary because you don't know their qualities.
And once you start to make something visible, then you can operate on it. You can ask questions like, okay, we're in New York and we have a special victims bureau and there are 500,000 ads for sex in a year in this jurisdiction. How do we prioritize who to go after? How do we complement that work with work in domestic violence? How do
we understand what hospital staff are needed? What victim outreach services? These are analysis
questions. Analysis questions require the granularity to answer them. And from our point
of view, a lot of that was available online. And so if we could enable people to have access to it
in an understandable way, then they wouldn't have to make those decisions by gut instinct or by precedent or by highest paid opinion.
They could put it into a framework that let them evaluate it.
And that way the decision didn't even have to be forever.
They could make a decision.
They would then have a measurement system to see the effect of their decision.
And then they could decide to keep doing it or not.
Right.
That to me was a much more
maintainable, workable situation.
Yeah, and this is a distinctly digital problem. I mean,
the ways that people communicated about exploiting people or themselves for money in the past were
much more, you know, seeable. And so now what you've done is transfer that into this digital realm, and you're doing fingerprinting in cyberspace.
No such protections really exist even here.
I mean, very few.
There's a large burden placed on platform companies to protect our customers and their data.
And for now, if we want to proceed with business, if we want to have a market where people have value assigned to the services we offer, including the trust of our platform, then we have to protect
it well. It's often the case, especially in the research community, that people think of users in
terms of novice and advanced. I just think that's the wrong approach. There are not novice and
advanced users. There's really technical experts and domain experts. And domain experts know a lot
about what they're doing. They know
the patrol, they know their patients, they know their company, they know the issues. They may not
be comfortable with random variables or different AI techniques, but they certainly have been doing
something well for a while. And those are the kind of people that we have to enable, that we have to
protect, we have to provide information to. And understanding that then also affects research because you then design and build for those people, not just based
on what the literature says is an innovation.
Let's talk specifically about the background you brought: the same fundamental technological thinking, but applied in different areas. It's been useful in law enforcement, useful in the military, useful in digital crime. How is it playing out now?
Well, our approach to applied research is to
take a look at organizing data on one side, creating data structures using graph modeling,
graph analytics, graph statistics, using natural language processing and computer vision,
basically turning streaming access points of unstructured information into a data structure that one can work with,
and at the same time, bridging the gap to the user through user interfaces, through
HCI that has AI enabling it.
That combination is possible partly because of our cloud, but also because of Power BI.
Power BI is an Excel-like tool for business intelligence, ostensibly,
but it's grown and growing into more of a platform for data analysis. The business intelligence
community was one that had a dashboard approach to looking at graphs of numbers and status updates,
but it also was a marketplace where people were going to be doing more
complicated kinds of analytics. And so we used that as our approach to organizing our research.
And so we decided that we could take that business intelligence market and expand what that meant
so that it didn't just mean tables of numbers, but it meant metadata and graphs and streams of content. To do that,
we took the two areas of our research, streaming graph analytics and visual analytics with user
interfaces, and we built them into Power BI. We built them by enabling end-to-end processes that
would transform data using AI techniques, and we built them using new visual
representations for interacting with content. That was something that allowed us to bridge the
gap between the abstract notion of AI, the API notion of AI, the algorithmic notion of AI,
and more like a real application experience. And the outcomes were quite useful. And it was a way that we found
to take the abstract notion of AI and make it approachable and workable with something that
you can download and see and start to work with. But one part of the impact was, to me, very
important. If the thing you're building in research can only be used by a Fortune 10 company, costs millions of dollars, and requires a PhD in computer science, then there will be limited impact, almost by definition.
If instead we want to enable a billion people, how do we get a billion people to understand how to use data?
It's got to be cheap, and it can't require a PhD in computer science.
Right.
And so to me, that outcome, while it seems like an economic or a business issue,
is actually to me a research issue because it helps confirm that as a priority,
we can still build useful things but have the constraint of low cost and ease of use.
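The end-to-end pattern described in this answer, turning streams of unstructured content into structured rows that a tool like Power BI could then load, can be sketched very simply. In this hypothetical illustration, regular expressions stand in for the natural language processing and computer vision models mentioned above, and the documents and fields are invented:

```python
# Toy sketch of structuring unstructured text into rows for analysis.
# Real systems would use NLP/CV models; here simple regexes stand in.
import csv
import re
import sys

documents = [
    "Report filed 2018-03-02: payment of $40,000 linked to account X9.",
    "Tip received 2018-03-05: account X9 advertised on three sites.",
]

rows = []
for doc in documents:
    date = re.search(r"\d{4}-\d{2}-\d{2}", doc)
    amount = re.search(r"\$[\d,]+", doc)
    account = re.search(r"account (\w+)", doc)
    rows.append({
        "date": date.group() if date else "",
        "amount": amount.group() if amount else "",
        "account": account.group(1) if account else "",
        "source_text": doc,
    })

# Emit a flat table that a BI tool could load alongside other data.
writer = csv.DictWriter(sys.stdout, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
```

The point of the sketch is the shape of the output: once unstructured text becomes rows with shared fields, it can be joined, graphed, and visualized with ordinary analysis tools.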
I've heard Microsoft Research described as a values-forward organization,
and I hear that over and over when I talk to the researchers in this booth.
It's interesting where you can marry business interests with global good interests.
How could we move toward making sure those two stay married?
Well, my dad always tells me that our goal is to do well by doing good. And with Microsoft, there's the opportunity because of its position in the marketplace,
because of its size, where things like trust, things like responsibility,
those things are core to our business interest.
They're not just company values.
But for me, you know, on the research side, I often ask researchers, you know,
what's a high-risk project for you?
How do you think about risk?
Because to me, research is where you should take risk, which means that you should do things that product groups can't do, can't afford, or don't have on their roadmap.
With research, we can take risk, and then, if successful, make sure that the rest of the company benefits.
And so this problem of how do we do well in the world?
How do we address impacts to society?
How do we approach problems that may have a non-market value at first?
Well, research is a great place to take that risk because when successful, it helps the company in many ways.
And there's room for all kinds of people.
Sometimes I think about complicated research as like baseball pitching,
where with baseball, occasionally one person can pitch nine innings and succeed.
But often you need three pitchers, a starter, a middle reliever, and a closer.
And they often have to have different skills.
And so in research, the starter, these are people that have really original ideas that are groundbreaking.
You know, people like Yoshua Bengio, who works with us on deep learning, they set the field for everyone else to work in.
Middle relievers, they take work and then they advance it to a usable degree, maybe.
And then closers, they have to have the patience to deal with the tediousness of deployment
and fielding and the reality of operations, right? All those skills are very necessary to stay
innovative. And so we need those different people. And so to me, having a large research organization,
almost an institution, comprised of these different kinds of people is the best chance we have to stay innovative,
to stay on the edge of what's relevant for society, and to make sure that Microsoft has
businesses of the future.
Proving once again that there's a baseball analogy for everything.
Listen, I ask all the researchers that come in this booth, what keeps you up at night? I'm not
even going to ask you that question because most of the stuff you said at the beginning keeps me up at night. But I do want to ask you, as we close, if you could give some advice to aspiring researchers who might look at you and say, what should I be thinking as I plunge into what I'm going to do after my PhD?
Right. My point of view is that being flexible, being relaxed, having an open mind,
that those are really important characteristics. Right before I went over to Afghanistan,
there was this former three-star general, and he gave me a piece of advice. He said that when I was
in charge, if I were organized, if I was on top of the situation, then I could really see an
opportunity walk in the door. I could see it,
I could take advantage of it, and I could execute on it. But if I was too concerned with my position,
with my career, with what I thought of myself, with my identity, then I would miss them. And I
really took that to heart. And it's not that there's one way to approach any of these things,
but I do think that, given the pace of technology change in computer science and the changing roles of companies and governments, the locus of action might shift from
one kind of institution to another. And so my point of view is to roll with that and to take
advantage of those opportunities and then to try to make it about the work. Because when I make it
about the work, then it's not about me, and we can debate about the work in ways we can measure.
And then other people can contribute things.
And if no one cares where that comes from, then the work can proceed.
And that's to me also why I left DARPA.
I wanted to make sure I could leave in time for the work to survive my own point of view.
Because if it were good enough, then it should.
And if it requires a single person's personality or oversight, then it's
fragile. And so I would encourage any prospective researcher to take a broad view of research
and to avoid getting stuck too much on how they think of themselves and their career.
And to say yes on that third ask.
Yeah. Yes and.
Yes and. Chris White, it's been a delight. Thank you so much for coming in and talking to us today.
Happy to. Thanks for having me.
To learn more about Dr. Chris White and how data science, AI, and the cloud are solving big data problems at scale, visit Microsoft.com/research.