Microsoft Research Podcast - 058 - Scaling the Everest of software security with Dr. Jonathan Protzenko
Episode Date: January 9, 2019
When people first started making software, computers were relatively rare and there was no internet, so programming languages were designed to get the job done quickly and run efficiently, with little thought for security. But software is everywhere now, from our desktops to our cars, from the cloud to the internet of things. That’s why Dr. Jonathan Protzenko, a researcher in the RiSE – or Research in Software Engineering – group at Microsoft Research, is working on designing better software tools in order to make our growing software ecosystem safer and more secure. Today, Dr. Protzenko talks about what’s wrong with software (and why it’s vitally important to get it right), explains why there are so many programming languages (and tells us about a few he’s been working on), and finally, acts as our digital Sherpa for Project Everest, an assault on software integrity and confidentiality that aims to build and deploy a verified HTTPS stack.
Transcript
What we're concerned with in my day-to-day work these days is the security of the internet.
When you come to think of it, it's absolutely everywhere.
And it's not just between you and a website, it's also in the cloud, it's also in all sorts of software.
And the big awakening is that for the past 20 years, this has never been secure.
And the consequences are so huge, there's so much at stake, that people are ready to try something different.
You're listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it.
I'm your host, Gretchen Huizinga.
When people first started making software, computers were relatively rare and there was no Internet.
So programming languages were designed to get the job done quickly and run efficiently, with little thought for security.
But software is everywhere now, from our desktops to our cars, from the cloud to the Internet of Things.
That's why Dr. Jonathan Protzenko, a researcher in the RiSE, or Research in Software Engineering group at Microsoft Research,
is working on designing better software tools in order to make our growing software ecosystem safer and more secure.
Today, Dr. Protzenko talks about what's wrong with software and why it's vitally important to get it right,
explains why there are so many programming languages, and tells us about a few he's been working on, and finally,
acts as our digital Sherpa for Project Everest, an assault on software integrity and confidentiality
that aims to build and deploy a verified HTTPS stack.
That and much more on this episode of the Microsoft Research Podcast. Jonathan Protzenko, welcome to the podcast. Hi. You're a researcher in the RiSE
or Research in Software Engineering group. What are the big problems you're trying to solve? I
usually ask my guests what gets you up in the morning, but I can't help myself. What makes you rise and shine? Well, RiSE is focused on many aspects of
software engineering. And the specific thing that I'm concerned with is programming languages.
And what gets me up in the morning is the fact that there's a lot of software out there that
needs us to intervene and do something about. I was recently seeing an ad
for a pet feeder that has a camera and that can report to me on my smartphone. And the immediate
thing that I thought about was that's going to be a security failure. And soon enough, there will be
videos of my cat all over the internet. That's not a huge deal. But when you think about the millions of devices that are out there and that are ready to be hacked, and that are ready to be covered by the New York Times in the next wave of cyber attacks,
I'm very concerned about the state of the software ecosystem.
And in RiSE, what we're trying to do is address problems related to software through
research and trying to make software better in general.
Before we plunge into the specific research you're doing, I do want to talk about programming
languages in general. To paraphrase a colleague of yours, Ben Zorn, from a software perspective, you build the things
people build things with. And there wouldn't be any big technical revolutions if there weren't
programming languages. That said, there are so many of them. They're legion. Why are there so many?
So it's actually a good question.
There have been like hundreds,
if not thousands of programming languages being developed
and there's always new ones popping up.
I think one of the reasons is
there's that holy grail of programming languages
that you're always seeking the right programming language
that will compromise on nothing.
The one that will allow you to get the speed that you want,
the programmer productivity that you want,
and also zero bugs.
And I don't think we've attained that holy grail yet,
so the quest continues.
There's always room for improvement.
You realize that there exist large ecosystems of software
and that we're now reaching millions,
hundreds of millions of lines of code,
and any improvement in the tool
that you use every single day of your life is going to be an improvement over software in general. So
the responsibility is enormous in a sense. We have to design the right tools that will make people
write better tools for other people. So I usually frame my more general questions from what I like
to call the 10,000-foot view. But since we're going to land on Everest in this podcast, I'll frame it from the 29,000-foot view. Let's talk about
software verification and security writ large. These are the kind of reasons for being or the
raison d'etre for you guys. Give us an overview of what's wrong here and why it's important to
get it right and what you're doing with the research to tackle this.
So it kind of ties into what we were mentioning earlier.
There's a lot of software that's absolutely critical
where either gigantic breaches of security are at stake
or human lives or accidents.
And there's usually a large set of examples
that you can draw from,
ranging from DNA testing software to planes
to nuclear software, medical devices.
It's a very large array.
And there are, I think, a few of these areas where we've hit the right spot.
And by the right spot, I mean that the consequences are so dramatic
and the past incidents have been so wide-ranging and have had such an impact
that the people in the field are ready to try something different.
I think for instance, for medical devices, we're nearly getting there.
There have been hackable pacemakers.
There have been demonstrations that you can actually control someone's medical device
at a distance.
And I think for instance, in this specific configuration, soon enough, people will realize
that the previous approaches no longer work, that they need to try something different and that they need to start from the ground up and actually
think about how to write better software.
And in our case, and what we're concerned with in my day-to-day work these days is the
security of the internet.
When you come to think of it, it's absolutely everywhere.
And it's not just between you and a website.
It's also in the cloud.
It's also in all sorts of software.
And the big awakening is that for the past 20 years,
this has never been secure.
And the consequences are so huge.
There's so much at stake
that people are ready to try something different.
They might be willing to let go of the technology
from the 70s that I was mentioning
and try something a little strange,
a little functional,
but that might give them greater confidence
that maybe there won't be a huge failure after all.
This technology from the 70s you're referring to?
I'm thinking of the C programming language.
That remains the language that people use
to write the majority of the software.
And it's extremely easy to shoot yourself in the foot
with that language.
So what we're trying to do, and we're working with some great people, is to say: this has been broken so often and so badly that we have a new methodology.
Let us try to help you.
Like, here's how we're going to do it.
What do you think?
How much of an education curve is this requiring?
It's an extremely steep curve.
And I think that's also the big challenge in my field.
How can we make these tools usable by people outside of our field?
There's a reason why people stick with these well-established programming languages: it's
because they're very usable.
And I think the tools that we use to write more secure software are sometimes not so
usable or require a very high degree of training.
That's also a challenge for us. How can we make software verification more usable
for someone that doesn't have that background?
Yeah, that trade-off between power and ease of use.
Absolutely, and that's why new languages keep showing up.
It's because sometimes you're ready to have an expert
that is going to use a language that will make them very productive,
but maybe you can't find 10,000 experts. Or maybe your software is already written and you can't swap it out and
rewrite everything from the ground up. Wow. My mind is just reeling because I've had a couple
of cryptographers in the booth here and the revelation that things aren't as secure as you
think. I mean, even if you go to an HTTP, not an HTTPS,
most people don't even know that there's a difference there.
And I'm talking about the vast majority of people
that are using the software.
Yes, and that's why we need to help these people
because we can't rely on the fact
that people are going to check the padlock on their website.
It's just too much to ask from a regular user.
So we need to help them out.
And there's been really great initiatives.
The number of people who use HTTPS is skyrocketing
in no small part due to an initiative called Let's Encrypt
from Mozilla that is nudging people to switch to HTTPS.
Nice.
And that has achieved really excellent results.
And so if on top of that,
we hope to give them more secure software to perform HTTPS, then I think the world would be a slightly better place.
Or maybe even more than slightly.
You're being modest.
Because like you said, software is pervasive and we're all just sort of onboarding it into our lives.
The trend is that software is going to be everywhere and we're delegating extremely important decisions to software. When you think about cars, for instance, the millions, tens, maybe hundreds of millions of lines of code that are running in a car is just frightening. And you see these videos on YouTube of people taking remote control of a car. And it's not just cars, it's absolutely everywhere. And we're just putting an enormous amount of stress into it.
And we will come back in a while to what keeps you up at night, but let's get happy for a minute.
Yes.
Let's talk about the cool projects you're working on or have worked on. And one of them
is a language, a functional language called F-star.
Yes. And I keep looking at it going F-sharp, but it's a star.
And it's a general-purpose functional programming language
with effects aimed at program verification.
Yes.
So F-star draws inspiration from F-sharp, hence the similar name,
and also the general family of functional programming languages,
such as OCaml and the general ML family.
But it adds verification on top of it.
The idea is that if you're a functionally trained programmer
and you're comfortable with ML,
maybe you've written a bunch of OCaml in your life,
here's a language that allows you to go further
and to start thinking about what your programs do,
what their specification is,
to reason about them using pre- and post-conditions,
and to generally use the
assistance of an SMT solver to scale up verification. So in that sense, I think that F-star
is also at a very interesting point in the design space and that it's trying to cater to an audience
of people who want to write software first. It's not a tool that you would use to do a massive
mathematical proof about some theorem, but it's a tool that you would use to write your software in.
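To give a flavor of what pre- and post-conditions look like, here is a small sketch in Python rather than F-star, purely for illustration: in F-star, an SMT solver proves such conditions hold for all possible inputs at compile time, whereas an unverified language like Python can only check them one call at a time, at runtime.

```python
# A runtime sketch of pre-/post-conditions. In F*, these would be proven
# statically for every input; here Python merely checks each call as it happens.

def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,         # caller must pass a non-empty list
          post=lambda r, xs: r == xs[0])      # result really is the first element
def head(xs):
    return xs[0]

print(head([3, 1, 2]))  # 3; head([]) would fail the precondition
```

The shape is the same as a verified F-star program: a specification attached to the function, separate from its body, and a mechanism that holds the body to it.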
Who's your audience for this?
It's actually a good question.
A good way to think about that is to look at who's been working on F-star.
F-star is a large joint project, the brainchild of Nik Swamy and a lot of his collaborators,
including INRIA and many other institutions.
And I would say that the audience is academics
that are trying to write verified software.
It can be both for learners, master students, PhD students
who are warming up to verification.
It can be for people who want to write a verified piece of code
that they're hoping to land in some existing software,
and I'm sure we'll talk about that soon.
What we're hoping our audience is, is the enlightened programmer. Someone who's functionally trained, realizes that it's great,
but that there's also something beyond that and who's willing to learn more and go further and
deeper to actually convince themselves that their software is doing the right thing. So another thing you're working on is called Kremlin.
I'm going to spell this.
Big K, little r, little e, big M, big L, little i, little n.
Yes.
Why the change up in lowercase, uppercase?
Well, I just wanted to highlight ML.
I like puns.
But all jokes aside,
the interesting bit that you see in Kremlin
is that there's K and R and ML,
and it's kind of, you know,
Kernighan and Ritchie meets ML.
Kremlin is a compiler from F-star to C,
and here I'm tying together two things
that we talked about before.
The fact that you want to use
a high-end, sophisticated,
advanced programming language
to prove important things about your software,
but also the fact that the majority of software these days
is written in C, and that it's not going away anytime soon.
And one interesting thing is that
you need to meet software where it is.
So the idea behind my work these days
is that we need to reconcile these two visions,
the vision of the enlightened
programmer that works in F-star and does their very advanced verification result, and the fact
that people want C code if you want your software to be used. And so Kremlin is a compiler that will
transform a subset of F-star into C code that is ready to be integrated into existing software.
And you can use all of F-star, you can write code as you normally would. You can conduct proofs that are just like you would normally, except that
at the end of the tool chain, you get C code and you can walk to people and say, hey,
now I have code that you might want to use. Speaking of acronyms, there's another program
you're working on called HACL-star. H-A-C-L stands for High Assurance Cryptographic Library.
What is that and what does it allow you to do?
What's cool about this particular project?
So we talked about how there exists that programming language called F-Star
and the companion compiler that will give you C code that people actually want to use.
And once you have that tool chain, the question is, how do you make the best use of it?
And I've mentioned that there's a lot of software disasters.
And in my opinion, cryptography is one of the biggest ones
because it's the underpinning of almost every secure software.
And if cryptography breaks, your whole scaffolding breaks
and everything shatters.
And so cryptography is perhaps the place
where we have the right combination of factors
that make people want to use different
software, new software, verified software. And so HACL-star is our answer to that. It's a library
of cryptographic code verified in F-Star that comes out as a bunch of C code that people want
to use. It's been designed in collaboration with INRIA, the French Institute for Computer Science,
and it's already been used by
several people. What's the Venn diagram overlap with cryptographers and programming language
people? There's a lot of layers of people working on different things and then needing to get
together and make it usable. There's the cryptographers that design algorithms, and for
me, that's not my job. I know that algorithms exist. I know how they're done,
but I would have a really hard time creating a new one.
So don't let me do that.
That would be a disaster.
There's us, the people who implement the cryptography,
taking a reference or a design or some math on a paper
that describes the inner workings of an algorithm.
We just transform it into actual software.
And then there's the consumers,
people who know that they need cryptography and are going to make use of it in their own software.
And we're kind of bridging the gap between the cryptographers and the consumers by taking existing implementations that have been done before, but re-verifying them.
So we don't design new code. What we design is verification methodologies that will allow us to show that either the existing code was broken and we found broken code,
or that the existing code or our code is fine and that we're ready to give it to consumers.
And we work closely with our consumers, such as Firefox,
and we spent a lot of time making sure that the C code that comes out of our tool chain is something that they actually want to use.
And that turned out to be surprisingly difficult.
The question just in my head when you said that is,
how hard is this?
So there's two halves to the work in a sense.
First, you want to verify code,
and that is tedious, difficult,
but in a sense, we already know that we're going to make it.
It's just a matter of throwing enough energy at it.
Sure.
But then once you've done that,
actually going to meet people in software,
telling them that you have something new
and that really they should use it,
it's a different problem.
And you need to build trust.
You need to show them that you're serious.
You need to show them that your code makes sense.
They're probably not going to trust
your very exotic verification methodology
and they're going to reread the C code
that comes out
of it just to convince themselves that it makes sense. So we spent a lot of time and energy
engaging with them, polishing what comes out of our verification suite and making sure that it
was something that looked solid to them. So let's talk about miTLS, a verified reference
implementation of TLS. Why don't you give us a little bit of history of that project?
Who's involved in it?
What are you doing with it?
And why do we need other organizations involved in order to tackle this problem?
Yes.
So maybe I can say a little word about TLS first.
Please.
TLS is the protocol that you're using when you're connecting to a website over HTTPS.
And I was talking about the cryptography being at the bottom of the stack. It's the base building block, but on top of the cryptography,
you can work out a protocol that will allow you to negotiate keys, parameters, sessions, and then
establish a secure channel between you and a third party. So there's two key properties that you want
to establish when you have a channel with a third party, integrity and confidentiality. If you're going, I don't know, to a web search,
confidentiality is making sure that no one can see what you're looking for. And integrity is
making sure that no one can tamper with your search while you're receiving the results.
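The integrity half rests on message authentication codes, one of the cryptographic building blocks underneath TLS. Here is a minimal sketch using Python's standard library, assuming a shared key already exists; negotiating that key between the two parties is exactly what the TLS handshake does.

```python
import hashlib
import hmac

# A shared secret -- in TLS the handshake negotiates this; here we assume one.
key = b"negotiated-session-key"

def tag(message: bytes) -> bytes:
    # The sender attaches a MAC (message authentication code) to each message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # The receiver recomputes the MAC; compare_digest avoids timing leaks.
    return hmac.compare_digest(tag(message), mac)

msg = b"search query: hiking boots"
mac = tag(msg)
print(verify(msg, mac))                          # True: message intact
print(verify(b"search query: snow boots", mac))  # False: tampering detected
```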
So the protocol that does this is called TLS. It was previously known as SSL. It's from the early 90s, originated at Netscape,
and it's the one you use
every time you go to an HTTPS website.
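For a concrete sense of where TLS sits in everyday code, here is a minimal Python sketch that configures a client-side TLS context with the standard library; no network connection is involved, and the settings shown are the machinery behind every https:// URL.

```python
import ssl

# Build a client-side TLS context with certificate verification on.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse the old, broken SSL versions

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: peer certificates are checked
print(ctx.check_hostname)                     # True: the cert must match the hostname
```

Wrapping a socket with this context runs the TLS handshake described above: negotiating keys and parameters, then establishing the secure channel.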
And so miTLS is our attempt
at making TLS more secure.
It's a fairly old project
that started as a joint collaboration
between Microsoft and INRIA.
Hence the name mi:
Microsoft, INRIA, TLS.
We've gone through successive languages.
The first version of miTLS was using a precursor of F-star called F7.
And we've just evolved both the verification artifact and the language that enables us to verify the software.
So the two go hand in hand.
We both designed the tools that enable verification and take it upon ourselves to actually use them to perform verification.
And I would say that that's the best way to design a tool,
is to use it yourself, realize where you have shortcomings,
and just try to fix it and improve it.
So it's actually a continuous back-and-forth process
where you try to verify something,
realize that the tool could help you better
or could do something more easily for you,
and then you fix the tool, and the tool keeps improving,
driven by the needs of the verification.
So how much time do you spend trying to break your stuff?
Well, theoretically, because it's verified, it should never be broken. So that's the theory. The practice is, we also try to find flaws. And it's an interesting question. There are occasionally flaws in our tools, and we
need to either fix the tools, improve the tools,
or ideally verify the tools themselves
to show that our tools have no flaws.
And also, sometimes we don't cover everything.
For instance, you can always write
a very secure implementation of TLS
that just declines to connect to any website,
and you get the property that every connection is secure.
So we also need to make sure that we can actually interoperate with the rest of the world.
Now let's climb the big one. Be our digital Sherpa on Project Everest. Your goal, I'll
tee this up by saying your goal is to build and deploy a verified HTTPS stack. Tell us
all about Everest.
So Everest is, as you mentioned, our answer to that very sorry state of the world, which
is that there hasn't been a secure TLS implementation since the beginning of TLS.
It's always broken.
So what we set out to do a few years ago is to say, all right, we've been working
on that miTLS technology, can we push it to the next level, verify the entire stack
and also get people to use it?
So it was an ambitious effort that we set out both in terms of breadth, we want to
verify the entire stack and also in terms of impact.
We want people to use it.
And so for that, we got together people at several institutions.
There's Microsoft Research here in Redmond, but also Cambridge and Bangalore.
There's people at Carnegie Mellon University.
There's people at University of Edinburgh.
And of course, there's our very close collaborators at INRIA. And so
what we're doing is that it's so wide and big that we have different people working on different
things. INRIA, as I mentioned, is mostly concerned with the base building blocks to cryptography.
We have another tool that we use to verify assembly code for those bits that really need
absolute performance. And that's between,
say, CMU and here. We have in Cambridge our expert cryptographers and protocol experts that find the
flaws and try to fix the standard by submitting feedback whenever they find a conceptual flaw in
TLS. And we try to bring all these people together behind that effort.
Yeah. All right. So Everest is a goal mountain for a lot of people who climb.
Would you frame this as a goal mountain to summit for software engineering?
I would say that for us, yes. There's that theorem that I was alluding to earlier that you get
integrity and confidentiality, and that's kind of the top of the pyramid, the summit of the mountain.
Once we get that theorem, our entire stack will be verified.
So we're hoping to get there.
And for us, it will also be a summit because it's a very wide ranging and ambitious verification
effort.
One thing to realize is that we're aiming for a very comprehensive set of software.
Just looking at the cryptography, we're aiming to verify not just one algorithm,
but entire families.
And we're aiming to offer
a comprehensive cryptographic library
that has everything you need
to do crypto.
And that covers
all the popular algorithms
that you might need.
So it's a lot of work
and it's a lot of software.
So I think for us,
we're hoping that by the time
we reach that final theorem,
we will have advanced both what we're capable of doing in terms of size and also like the state of the art in terms of verification tools.
So what's the relationship of miTLS with Project Everest?
miTLS is so far the main component of Project Everest and the one that depends on almost everything else and the one that will be our implementation of TLS.
I think there will be other things afterwards.
It depends on what you see at the top of the mountain.
The top of the mountain could be miTLS in C, but it could also be, say, a verified web
server that uses miTLS and then we're setting another goal.
So I guess that remains to be determined.
But yeah, the V1 would be
the TLS library, and maybe the V2 will have other things such as a small server or even other
components of the stack. Academics by nature are open source people. You do some work, you share
it with everybody. You build off the work of others and they build off your work. But that hasn't
necessarily historically been the case in industry with IP and competitive advantage issues. But lately, it seems there's been a bit of a shift towards an embrace of a more open source mentality.
So why do you think we're seeing more sharing in the open source community among industry
researchers? And why is that a good thing?
So, yes, one thing that I should have mentioned is that all of our work is open source,
and we work completely in the open
with academics and universities.
All of our code is on GitHub,
and we have public discussion channels.
And I think there's two aspects to consider.
The first one is we just work better when it's in the open.
I find that it fosters better collaborations, more productive ideas, and generally a better atmosphere.
It's more motivating to know that you're doing software for the general good that will benefit a lot of other people than just software for a small subset of the planet.
So that is something that motivates me personally.
And I very much believe in the societal value of having code
that is for everyone to use.
And the other more
pragmatic reason, perhaps,
is that in security,
if people can't see your code,
they're just going to have
a hard time convincing themselves
that you're secure.
It's a hard claim, right,
that your software is secure,
and you usually want
to provide evidence for that.
Sure.
And if your code is closed source,
then you're going to have
a hard time convincing other
people that really you have secure software. So it's just better to do it in the open.
That way everyone can go and see for themselves. Do you feel like you've noticed a sea change in
this area? I know that, you know, in certain decades prior, a lot of companies kept their
cards really close to their chest. It's proprietary. We don't want anyone to see the source code.
We don't, you know.
I have seen like a big change at Microsoft in just the four years that I've been here.
I think I came in just at the right time when the mentalities were changing.
And it's been just getting better and better.
I found that I can use a Linux computer to do my day-to-day work.
And I can post on GitHub.
I don't have to worry about anything. And I think it's just the reality of software development now that so much software that you depend on
is open source that you're just more productive if you can
contribute back.
And we have done that occasionally
in my day-to-day work.
I use an open source project.
There's a bug in it.
Fine, I'll go and fix it.
And it's just a better way to work.
And maybe somebody else has found your bug and helped you out there.
Oh yeah. And people can come and sometimes report bugs on our stuff. If you look at F-Star,
for instance, we have contributors, people we've never met, who just come in out of the blue and
submit improvements. And for us, that's just... Thank you very much.
It's, yeah, we're very happy with that. And it just makes you feel better when you wake up in the morning and you have someone who just fixed one of your bugs.
I wish that would happen to me. Not software, but you know. Jonathan, you're working on security and verification in software, and you know the realities. We've talked a little bit about how scary that world is, and you're working on it every day.
Is there anything that keeps you up at night, particularly as it pertains to the state of software that you haven't discussed already?
And what should we be aware of?
I think the general awareness of the fragility of software is not as it should be. What I mean by that is there seems to be, in my opinion, too much enthusiasm for software without the awareness of what this entails.
It's Christmas season.
People are going to get gifts and I'm being bombarded with smart appliances.
Every time I see one of these, I'm wondering what are the consequences?
And so far it's kind of the wild west.
If you think about all the IoT things that
are going to be spread out all across the world, what's going to happen to these devices? Who's
going to maintain the software? Who's even going to look at the software before it's released and
sold in millions? And it's actually happening faster than we realize. And that's something
that keeps me up at night. I see everyone getting smart light bulbs. What if your light bulb is being taken over by someone else? I'm very worried about these things, but I'm also hoping that
soon enough there will be a change, both in terms of the software and also in terms of expectations.
And that could be, you know, people just don't want broken software, or maybe there's a regulation
that says you can't put on the market something that is obviously broken and that's going to be actually a threat.
When I talked to Ben, it was actually a year ago, we talked about the Internet
of Things and how many devices get put out in the world and the key is verification and
that was when he alluded to Everest. How is that project informing this world of devices that are
quote-unquote smart and unverified? What I would hope for, at least, is that by the time we manage to get a working version of miTLS, then it can become a viable option. Say you're doing a smart security camera, and of course you need to upload to the cloud, to a server.
Well, hopefully, miTLS can be a viable option for people who want to establish a secure channel between the little device and the cloud. That won't solve every problem, far from it, but I
really firmly believe in one brick at a time and one step at a time. My hope is that we have at
least a better brick in that big software ecosystem.
One way to sum it up is software has consequences and everyone needs to be aware of it. Software
is not benign. Software could do all sorts of terrible things and it hardly ever does. But we
need to be cognizant of the fact that it could, and we need to exercise caution. You install an app? Think twice. You're getting a smart appliance? Also think twice. But the problem is the binary option, you know. Do you agree? If you don't, you can't have the app. So I hit agree.
Yes, and for me, that's the big question. How do we create the social and economic dynamic for better software? And for me, that's a big open question, because I'm just like you,
I click on agree all the time. I couldn't like go into existential mode whenever I install an app.
So yes, I do click on agree, but where's the incentive going to come from?
For internet security, the interesting bit is that there's a big economic
incentive. If you're broken, people are going to have to be paged on a weekend. There's going to be
like an incident response, a crisis. So this resonates very well with the people that we
work with. And that's why they're ready for better software. What is the equivalent for all these
devices and the security of that software? I think that's something we need to figure out and it can
come from many different places. But my hope is that ultimately, yes, there is an incentive to make
software in general more secure. Jonathan, tell us a bit about yourself academically and otherwise.
What's your story? How did you come to be working in software engineering research at MSR? What
don't people know about Jonathan Protzenko? I did my PhD in France at INRIA.
And as oftentimes happens at the end of one's PhD, I was pretty fed up by the end of my
dissertation writing.
And I wasn't sure what I wanted to do with my life.
I had a lot of existential moments and I was ready to quit research and go work in software
because I thought research was too hard.
And then my PhD advisor really nudged me and said, well, Jonathan, at least try Microsoft
Research. You know, you might find a good vibe there. So I did. And that's how I arrived here.
I didn't believe in it at first. I didn't think I would make it. I remember chatting with my
advisor about it being like, come on, Francois, they're never going to take me. But I tried and
yes, they did take me. So that was good. And I started out as a postdoc here.
I worked in a variety of projects and then I became a researcher after a year and a half, I think.
And I've been here for four years now.
Prior to that, you did your PhD in France.
Did you grow up in France?
Yes, I did grow up in France.
I'm a pure product of the French system.
Because you keep talking about existential moments and so on. It's like, oh, Jean-Paul
Sartre. Where are you from in France?
I grew up near Bordeaux. I stayed in Bordeaux for high school, then did my master's degree
in Lyon, and then my PhD program in Paris.
Nice. What's your future goal, aside from conquering Everest? Where do you see yourself in five years? Tackling more fundamental questions about software,
about the things that we were talking about,
social consequences of software,
and that it can bring greater awareness of what is software,
what happens with it,
and that we can bring the message of software matters beyond just the niche areas
where people have come to that conclusion already
and make people more aware of what software entails.
And maybe that's where the incentive to have better software is going to come from,
from better software literacy. You're likely going to be dealing with computers almost every
single day of your life. So you need a little bit of computer literacy. You need people to understand
how software works because that's actually an essential skill in the modern world. And maybe
eventually people are aware enough that they're
going to start demanding better software, and hopefully we'll be there.
As we close, give us an idea. This will sound like a large question, but you can
scale it down if you like. What's left to discover or solve in the field of research
in software engineering? What advice or encouragement would you give to listeners
who might be interested or inspired
by the work you're doing?
One thing to keep in mind
is that we're getting a lot of people
to use our software.
Our software runs in Firefox.
Our software runs in Windows.
This is great.
And hopefully soon it will run in Linux.
So it all seems wonderful
from an outsider's perspective,
but it's a lot of hard work
and it's not as easy as it sometimes seems.
So we need a lot of help.
And I think the next frontier is
how can we make this more usable?
How can we make it more accessible to people
who don't have the luxury of spending,
you know, almost a decade training in computer science
and who will still be writing software for a living?
How can we bring the technology
to the masses of programmers that are out there? And I think that's an enormous challenge
and any help would be most welcome. Jonathan Protzenko, thanks for joining us today. It's
been really fun. It's been a pleasure. Thanks for having me.
To learn more about Dr. Jonathan Protzenko and how researchers are trying to summit the
verification mountain so you don't have to, visit Microsoft.com slash research.