Microsoft Research Podcast - Ideas: Bug hunting with Shan Lu
Episode Date: January 23, 2025
Struggles with programming languages helped research manager Shan Lu find her calling as a bug hunter. She discusses one bug that really haunted her, the thousands she's identified since, and how she's turning to LLMs to help make software more reliable.
Transcript
I remember, you know, those old days myself, right?
That is really, like, I have this struggle that I feel like I can do better.
I feel like I have an idea to contribute.
But it just, for whatever reason, right, took me forever to learn something,
which I feel like it's a very mechanical thing, but it just took me forever to learn, right?
And then now, actually, I see this hope, right?
With AI, you know, a lot of mechanical things
that can actually now be done
in a much more automated way, you know, by AI, right?
So then now truly, you know, my daughter,
many girls, many kids out there, right?
Whatever, you know, they are good at,
their creativity, it will be much easier, right, for them
to contribute their creativity to whatever discipline they are passionate about.
You're listening to Ideas, a Microsoft Research podcast that dives deep into the world of technology research
and the profound questions behind the code.
I'm Gretchen Huizinga.
In this series, we'll explore the technologies that are shaping our future and the big ideas that propel them forward.
Today I'm talking to Shan Lu,
a Senior Principal Research Manager at Microsoft Research and a computer science professor at the University of Chicago.
Part of the systems research group, Shan and her colleagues are working to make our computer systems, and I quote, secure, scalable, fault-tolerant, manageable, fast, and efficient. That's no small order. So I'm excited to explore the big ideas behind Shan's
influential research and find out more about her reputation as a bug bounty hunter. Shan Lu,
welcome to Ideas. Thank you. So I like to start these episodes with what I've been calling the
research origin story. And you have a unique, almost counterintuitive story about what got you started
in the field of systems research. Would you share that story with our listeners?
Sure, sure. Yeah, I grew up fantasizing that I would become a mathematician. I think I was good
at math. And at some point, actually, until I think I entered college, I was still, you know, thinking about, should I do math?
Should I do computer science?
For whatever reason, I think someone told me, you know, doing computer science will help you.
It's easier to get a job.
And I reluctantly picked up a computer science major.
And then there were a few years in my college
when I had a really difficult time with programming. And I also remember, I spent a lot of time learning one
language. We started with Pascal. And I felt like I finally knew what to do. And then there's yet another language, C, and another class, Java.
And I remember the teacher would ask us to do a programming project.
And there were times I just didn't know how to get started.
And I remember at that time in my class, I think we only had like four girls taking this class that required programming in Java.
And none of us had learned Java before.
And when we asked our classmates, when we asked the boys, they just naturally knew what to do.
It was really, really humiliating, embarrassing. I had the feeling that, I felt like,
I'm just not born to be a programmer. And then I came to graduate school. I was thinking about,
you know, what kind of research direction I should do. And I was thinking that, oh, maybe I should do theory research, like,
you know, complexity theory or something. You know, after a lot of back and forth, I met my
eventual advisor. She was a great, great mentor to me. And she told me that, hey, Shan, you know,
my group is doing research about finding bugs in software. And she said her group is doing system research.
And she said a lot of my current team members are all great programmers.
And as a result, they are not really well motivated by finding bugs in software.
Interesting.
And then she said, you were really motivated, right, by, you know, helping
developers find bugs in their software. So maybe that's the research
project for you. So that's how I got started. Well, let's go a little bit further on this mentor
and mentors in general. As Dr. Seuss might say, every what has a who. So by that, I mean an
inspirational person or people behind every successful researcher's career. And most often,
they're kind of big names and meaningful relationships, but you have another unique
story on who has influenced you in your career. So why don't you tell us about the spectrum of people
who've been influential in your life and your career?
Yeah, I mean, I think I mentioned my advisor,
and she's just so supportive.
And I remember when I started doing research,
I just felt like I seemed to be so far behind everyone else.
You know, I felt like, hmm, how come everybody else knows
how to ask, you know, insightful questions, and like they know how to program really fast, bug free.
And my advisor really encouraged me, saying, you know, there's background knowledge that you can pick up.
You just need to be patient.
But then there are also like, you know, how to do research, you know, how to think about things, problem solving.
And she encouraged me, saying, Shan, you're good at that. Well, I don't know how she found out. But anyway, she was an emotionally supportive, sensitive person. I would just, you know, move the timeline to be kind of more recent.
So I joined Microsoft Research as a manager.
And there's something called Connect that, you know, people write twice every year, talking about what they've been doing.
So I was just checking, you know, the members of my team to see what they have been doing over the years,
just to get myself familiar with them.
And I remember I read several of them.
I felt like I almost had tears in my eyes. I realized, wow.
And just to give an example, for Chris, Chris Hawblitzel,
I read his Connect, and I saw that he's working on something called program verification.
It's a very, very difficult problem.
And as an outsider, you know, I've read many of his papers.
But when I read, you know, his own writing, I realized, wow, you know, it's almost two decades, right?
Like, he just keeps doing this very difficult thing. And I read his words about,
you know, how his old approach had problems, how he's thinking about how to address that problem.
Oh, I have an idea, right? And then spend multiple years to implement that idea and get improvement,
find a new problem, and then just find new solutions.
And I really feel like, wow, this is kind of like, you know,
there's a, how to say, hero-ish story behind this, you know, this kind of goal. And he's willing to spend many years to keep tackling this challenging problem.
And I just feel like, wow, I'm so honored,
you know, to be in the same group with a group of fighters, you know, determined to tackle
difficult research problems. Yeah. And I think when you talk about it, it's like, this is a
person that was working for you, a direct report. And often we think about our heroes as being the ones who mentored us, who taught us, who managed us. But yours is kind of 360: my colleagues, my direct reports, Dan Ports and Jacob Nelson.
And again, this is something like their story really inspired me.
Like they were, again, spent five or six years on something.
And it looks like, oh, it's close to the success of tech transfer.
And then something out of their control happened.
This happened because Intel decided to stop manufacturing a chip that their research
relied on. And it's kind of like end of the world to them. And then they did not give up.
And then, you know, like one year later, they found the solution, you know, together with their
product team collaborators. And I just feel like, wow, you know, I feel like I'm inspired every day. Like, I'm so happy to be working together with all of them. Well, let's get into your work next, but I think it warrants a little explication up front for those people in the audience who don't spend all their time working on concurrent systems themselves.
So give us a short 101 on concurrent systems and explain why the work you do matters to both the people who make it and the people who use it. Sure, yeah. So I think a lot of people may not realize,
so actually the software we're using every day,
almost every software we use these days are concurrent.
So the meaning of concurrent is that you have multiple threads
of execution going on at the same time in parallel.
And then when we go to a web browser, right?
So it's not just one rendering task that is going on.
There are actually multiple concurrent renderings going on.
So for software developers, to develop this type of concurrent system,
a challenge is the timing.
So because you have multiple concurrent things going on, it's very difficult to manage and reason about what may happen first, what may happen second. And also, there's an inherent non-determinism in it. What happened first this time may happen second next time. So as a result, a lot of bugs are introduced by this. And it was a very
challenging problem because I would say about 20 years ago, there was a shift. Like in the old
days, actually most of our software is written in a sequential way instead of concurrent way.
So, you know, a lot of developers also had a difficult time shifting their mindset from the sequential way to the concurrent way.
So, like, behind the scenes, from a reasoning perspective, how do we keep that from happening to our users? How do we identify the bugs? Which we'll get to in a second.
Thanks for that.
Your research now revolves around what I would call the big idea of learning from mistakes. And in fact, it all seems to have
started with a paper that you published way back in 2008 called Learning from Mistakes,
a comprehensive study on real-world concurrency bug characteristics. And you say this strongly
influenced your research style and approach. And by the way, I'll note that this paper received the most influential paper award
in 2022 from ASPLOS, which is the Architectural Support for Programming Languages and Operating
Systems. Huge mouthful. And it also has more than a thousand citations, so I dare say it's influenced
other researchers' approach to research as well. Talk about the big idea behind this paper
and exactly how it informed your research style and approach today.
Yeah, so I think, again, going back to the days that I,
you know, my PhD days, I started working with my advisor, you know, YY [Yuanyuan Zhou].
So at that time, there had been a lot of people working on bug finding.
But then now when I think about it, people just magically say, hey, I want to look at this type of bug.
Just magically, oh, I want to look at that type of bug.
And then my advisor at that time suggested to me, saying, hey, maybe, you know, actually take a look. At that time, as I mentioned, software was kind of shifting from sequential software to concurrent software.
And my advisor said, hey, just take a look at those real systems' bug databases and see what type of concurrency bugs are actually there, you know, instead of just randomly saying, oh, I want to work on this type of bug.
And then also, of course, it's not just look at it.
It's not just like you read a novel or something, right?
And again, my advisor said, hey, Shan, right, you have a connection, natural connection, you know, with bugs and the developers who make them.
So she said, you know, try to think about the patterns behind them, right?
Try to think about whether you can generalize some characteristics
and use that to guide people's research in this domain. And at that time, we were actually
thinking we don't know whether, you know, we can actually write a paper about it because
traditionally you publish a paper just to say, oh, I have a new tool, right? Which can do this and that.
At that time, in system conferences, people rarely, you know, just say, here's a study, right? But we studied that. And indeed, you know, I had this thought that, hey, why do I make a lot of mistakes? And when I study a lot of bugs, more and more I feel like, you know, there's a reason, sure, behind it, right? It's like, I'm not the only dumb person in the world, right? There's a reason that, you know, some part of this language is difficult to use, right? And there's a certain type of concurrent reasoning that's just not natural to many people, right? So because of that, there are patterns behind these bugs. And so at that time, we were surprised that the paper was actually accepted, and because I'm just happy with the learning I get. But after this paper was accepted, in the next, I would say, many years, there were more and more people who realized, hey, before we actually, you know, do bug finding things, let's first do a study, right, to understand. And then this paper was, yeah, I was very happy. It was cited many, many times. Yeah. And then gets the most influential paper many years later. Many years later, yes.
Yeah, I feel like there's a lot of things going through my head right now, one of which is that AI is a pattern detector. And you were doing that before AI even came on the scene, which goes to show you that humans are pretty good at pattern detection.
Also, we might not do it as fast as an AI.
But so this idea of learning from mistakes is a broad theme.
Another theme that I see coming through your papers and your work is persistence.
And you mentioned this about your team, right? I was like, these people are people who don't give
up. So we covered this idea in an abstracts podcast recently, talking about a paper,
which really brings this to light. If at first you don't succeed, try, try again.
That's the name of the paper.
And we didn't have time to discuss it in depth at the time
because the abstract show is so quick, but we do now.
So I'd like you to expand a little bit on this big idea of persistence
and how large language models are not only changing
the way programming and verification happens,
but also providing
insights into detecting retry bugs. Yes. So I guess maybe I will, since you mentioned this
persistence, you know, after that learning from mistakes paper, so that was in 2008, and in the
next 10 years, a little bit more than 10 years in terms of persistence, right?
So we have continued, me and my students, my collaborators,
we have continued working on finding concurrency bugs,
which is kind of related to why I'm here at Microsoft Research.
And we keep doing it, doing it.
And then I felt like a high point was that I had a collaboration
with my now colleagues here,
Madan Musuvathi and Suman Nath.
So we built a tool to detect concurrency bugs
and after more than 15 years of effort on this,
we were able to find more than 1,000 concurrency bugs.
It was built in a tool called Torch
that was deployed in the company. And it won the best
paper award at the top system conference, SOSP. And it was actually a bittersweet moment. This
paper seemed to put an end to our research. And also, some of the findings from that paper are that we used to
do very sophisticated program analysis to reason about the timing. And in that paper,
we realized actually sometimes if you're a little bit fuzzy, don't aim to do perfect analysis, the resulting tool is actually more effective.
So after that paper, Madan, Suman, and me, we kind of, you know, shifted our focus to
looking at other types of bugs.
And at the same time, the three of us realized the traditional, very precise program analysis
may not be needed for some of the bug finding.
And so then for this paper, this retry bugs, after we shifted our focus away from concurrency bugs,
we realized, oh, there are many other types of important bugs, such as in this case, like retry, right?
When your software goes wrong, right?
Another thing we learned is that it looks like you can never eliminate all bugs.
So something will go wrong.
And so that's why you need something like retry.
So if something goes wrong, at least you won't give up immediately.
The software will retry.
And another thing that started from this earlier effort is we started using large language models, because we realized, yeah, you know, traditional program analysis sometimes can give you very strong guarantees.
But in some other cases, like in this retry case, the kind of fuzzy analysis, you know, not so precise, offered by large language models is sometimes even more beneficial.
Yeah, so that's kind of, you know, the story behind this paper.
So, Shan, we're hearing a lot about how large language models are writing code nowadays. In fact, NVIDIA's CEO says, mamas don't let your babies grow up to be coders.
Because AI is going to do that.
I don't know if he's right,
but one of the projects you're most excited about right now
is called Verus.
And your colleague Jay Lorch recently said
that he sees a lot of synergy between AI and verification
where each discipline brings something to the other.
And Rafah Hosn has referred to this as co-innovation
or bidirectional enrichment.
I don't know if that's exactly what is going on here, but it seems like it is.
Tell us more about this project, Verus, and how AI and software verification are helping each other out.
Yes, yes, yes, yes. I'm very excited about this project now.
So first of all, starting from Verus.
So Verus is a tool that helps you verify the correctness of Rust code.
So this is a relatively new tool, but it's creating a lot of excitement in the research community.
And it's created by my colleague, Chris Hawblitzel, and his collaborators outside Microsoft Research.
And as I mentioned, this is part that really inspired me.
So traditionally, to verify your program is correct, it requires a lot of expertise.
You actually have to write your proof typically in a special language.
And you know, so a lot of people, including me, right, who are so eager to get rid of
bugs in my software, because there were people who told me, just to learn that language.
So they were referring to a language called Coq.
Just to learn that language, they said, it takes one or two years.
And then once you learn that language,
then you have to learn about how to write proof in that special language.
So people, particularly in the bug finding community,
people know that, oh, in
theory, you can verify it. But in reality, people don't do that. Okay, so now going back to this
Verus tool, why it's exciting. So it actually allows people to write proofs in Rust. So Rust is
an increasingly popular language. And there's more and more people picking up Rust. The first time
I heard about, oh, you can write proof in a popular language. And also another thing is,
in the past, you cannot verify an implementation directly. You can only verify something written
in a special language. And the proof is proving something that is in a special language. And then finally, that special language is maybe then transformed into an implementation. So there's just too many special languages there.
A lot of layers. A lot of layers. So now this Verus tool allows you to write proof in Rust to prove an implementation that is in Rust.
So it's very direct.
I just feel like I'm just not good at learning a new language.
Interesting.
So when I came here and learned about this Verus tool by Chris and his collaborators, I felt like, oh, looks like maybe I can give it a try.
And surprisingly, I realized, oh, wow, I can actually write proofs using this Verus tool.
And then, of course, you know, I was told if you really want to write proof for a large system,
it still takes a lot of effort.
And then this idea came to me that, hey, maybe,
you know, these days, like large language model can write code, then why not let large language
model write proofs, right? And of course, you know, other people actually have had this idea as well,
but there's a doubt that, you know, can large language model really write proof, right? And
also people have this feeling that, you know, large language model seems not very disciplined,
you know, by nature.
But, you know, that's what intrigues me, right?
And also I used to be a doubter for, say, GitHub Copilot.
Used to.
And so, because I feel like, yes, it can generate a lot of code,
but who knows whether it's right.
Yeah.
Right.
So I feel like, wow, you know, this could be a game changer, right?
Like if AI can write not only code, but also proof.
Yeah.
So that's what I've been doing.
I've been working on this for one year.
And then I gradually get more collaborators, both, you know, from Microsoft Research Asia and experts here like Chris and Jay Lorch.
They all help me a lot.
So we actually have made a lot of progress.
Like now, we've tried it on some small programs, benchmarks, and we see that actually the large language model can correctly prove the
majority of the benchmarks that we threw at it. Yeah, it's very, very exciting.
Well, and so we're going to talk a little bit more about some of those doubts and some of those
interesting concerns in a bit. I do want you to address what I think Jay was getting at, which is that somehow the two help each other.
The verification improves the AI.
The AI improves the verification.
Yes.
How?
Yes.
My feeling is that a lot of people, if they're concerned with using AI, is because they feel like there's no guarantee for the content generated by AI, right?
And then we also all heard about, you know, hallucination.
And I tried it myself.
Like, I remember at some point if I asked AI, say, you know, which is bigger?
Is this three times three or eight?
And the AI would tell me eight is bigger.
And...
What?
So I feel like verification can really help AI get better, because
now you can, you know, kind of add, you know, mathematical rigor into
whatever is generated by AI, right? And I would say it not only helps AI, it will also help people who use
AI, right? So that they know what can be trusted, what is guaranteed by this content generated
by AI. And then of course AI can help verification because, you know, verification, you know,
it's hard. There's a lot of mathematical reasoning behind it.
And so now with AI, it will enable verification to be picked up by more and more developers so that we can get higher quality software. Yeah, and we'll get to that too about what I would call the democratization of things. But before that, I want to again
say an observation that I had based on your work and my conversations with you is that you've
basically dedicated your career to hunting bugs. And maybe that's partly due to a personal story
about how a tiny mistake became a bug that haunted you for years. Tell us the story and explain why and how it launched a lifelong quest to understand, detect, and expose bugs of
all kinds. Yes. So before I came here, I already had multiple times, you know, interacting with
Microsoft Research. So I was a summer intern at Microsoft Research Redmond almost 20 years ago. I think it was in the summer of 2005.
And I remember I came here, you know, full of ambition.
And I thought, okay, you know, I will implement some smart algorithm.
I will deliver some useful tools.
So at that time, I just finished two years of my PhD.
So I kind of just started my research on bug finding and so on.
And I remember I came here and I was told that I need to program in C#.
And, you know, I just naturally had a fear of learning a new language.
But anyway, I remember I thought, oh, the task I was assigned was very straightforward.
And I think I got ahead of myself. I was thinking, oh, I want to quickly finish this. And I want to do something more
novel, you know, that can be more creative. But then this simple task I was assigned,
I ended up spending the whole summer on it. So the tool that I wrote was supposed to process very huge logs.
And then the problem is, my software, when you run it, initially I can only run
it for 10 minutes, because my software used so much memory that it would crash. And then I spent
a lot of time. I was thinking, oh, my software is just using too much memory. Let me optimize it, right? And then, so I, you know, tried to make sure to use memory in a very efficient way. But then, as a result, there's a bug in my code, and I spent a lot of time,
and there's an engineer helping me checking my code.
We spend a lot of time.
We were just not able to find that bug.
And at the end, the solution is I just sit in front of my computer
waiting for my program to crash and restart.
And at that time,
because there was very little remote
working option,
so in order to finish processing
all those logs,
it's like, you know, after dinner.
You had to stay all night.
I had to stay all night.
And all my intern friends,
they were saying,
well, Shan, you work really hard. And I'm just feeling like, you know, what am I doing, just sitting in front of my computer, waiting for my program to crash so that I can restart it. And near the end of my internship, I finally found the bug.
It turns out that I missed a pair of brackets in one line of code.
That's it.
That's it.
Oh, my goodness.
And then it turns out, because I was used to C, and in C, when you want to free, which means deallocate, an array, you just say free the array. And if I remember correctly,
in this language, C Sharp,
you have to say free this array name
and you put a bracket behind it.
Otherwise, it will only free the first element.
And it was a nightmare.
And I also felt like the most frustrating thing is, it's not even a clever bug, right? I feel like I ended up achieving very little
in my summer internship. But maybe the humility of making a stupid mistake is the kind of thing that makes
somebody who's good at hunting bugs. It's like missing an error in the headline of an article because the print is so big,
and you're looking for the little things in the fine print. I know, that's a journalist's problem.
I actually love that story. And it kind of presents a big picture of you, Shan, as a person
who has a realistic self-awareness and humility, which I think is rare at times in the software
world. So thanks for sharing that. So moving on, when we talked before, you mentioned the large
variety of programming languages and how that can be a barrier to entry, or at least a big hurdle
to overcome in software programming and verification. But you also talked about, as we just mentioned, how LLMs have been a democratizing force in this field.
So going back to when you first started and what you see now with the advent of tools like GitHub Copilot, what's changed?
Oh, so much has changed.
Well, I don't even know how to start.
I used to be really scared about programming.
You know, when I tell this story, a lot of people say, no, I don't believe you.
And I feel like it's a trauma.
I almost feel like it's like, you know, the college day me, right, who was scared of starting any programming project.
Somehow I felt humiliated when I was asking those very, I feel like, stupid questions to my classmates.
It almost changed my personality.
It's like, for a long time, whenever someone introduced me a new software tool, my first reaction is I probably will not be able to successfully even install it.
Like whenever, you know, there's a new language, my first reaction is, no, I'm not good at it.
And then like, for example, this GitHub Copilot thing.
Actually, I did not try it until I joined Microsoft.
And then I actually,
I haven't programmed for a long time.
And then I started collaborating
with people in Microsoft Research Asia
and he writes programs in Python, right?
And I have never written
a single line of Python code before.
And also this Verus tool, it helps you to verify code in Rust. But I have never
learned Rust before. So I thought, okay, maybe let me just try GitHub Copilot. And wow, you know,
It's like I realized, wow, I can do this. And of course, sometimes I feel like my colleagues may
sometimes be surprised, because on one hand, it looks like I'm able to just finish, like, you know,
writing a Rust function. But on some other days, I ask very basic questions. And I have those questions
because, you know, GitHub Copilot just helps you finish. Right. I just started something,
and it just helped me finish.
And I wish when I started my college,
if at that time there was GitHub Copilot,
I feel like my mindset towards programming
and towards computer science might be different.
So it does make me feel very positive, you know, about, you know,
what future we have, you know, with AI, with computer science.
Okay. Usually I ask researchers at this time, what could possibly go wrong if you got everything
right? And I was thinking about this question in a different way until just this minute. I want to ask you,
what do you think that it means to have a tool that can do things for you that you don't have
to struggle with? And maybe is there anything good about the struggle because you're framing it as it sapped your confidence.
Yes.
And at the same time, I see a woman who emerged stronger because of the struggle with an amazing
career, a huge list of publications, influential papers, citations, leadership role.
So in light of that, what do you see as the tension between struggling to learn a new language versus having this tool that can just do it that makes you look amazing?
And maybe the truth of it is you don't know.
Yeah, that's a very good point.
I guess you need some kind of balance.
And on one hand, yes, I feel like, again, this goes back to my internship.
I left with a frustration that I felt like I have so much creativity to contribute, and yet I could not because of this language barrier.
You know, I feel positive in the sense that just from GitHub Copilot, right,
how it has enabled me to just bravely try something new.
I feel like this goes beyond just computer science, right?
I can imagine it will help people to truly unleash their creativity,
not being bothered by some challenges in learning the tool.
But on the other hand, you made a very good point.
My advisor told me she feels like, you know, I write code slowly,
but I tend to make fewer mistakes.
And the difficulty of learning, right, and all these nightmares I had definitely made me more cautious.
I pay more respect to the task that is given to me. So there is definitely the
other side of AI, right? Which is you feel like everything is easy and maybe you did not have the
experience of those bugs, right? That a software can bring to you and you have over-reliance, right, on this too.
So hopefully, you know, some of the things we are doing now, right,
like, for example, say verification, right,
like bring this mathematical rigor to AI, hopefully that can help.
Yeah. You know, even as you unpack the nuances there,
it strikes me that both are good. Both having the struggle and
learning languages and understanding the core of it, and the idea that in natural language,
you could just say, here's what I want to happen. And the AI does the code, the verification, et cetera. That said, do we trust it? And this was where I
was going with the first what could possibly go wrong question. How do we know that it is really
as clever as it appears to be? Yeah, I think I would just use the research we are working on now, right? Like, I think, on one hand, I can use AI to generate proof, right, to prove the code generated by AI is correct. But having said that, even if we're wildly successful, you know, in this thing, human expertise is still needed. Because just take this as an example.
What do you mean by correct, right?
Sure.
And so someone first has to define what correctness means.
Yeah.
And then so far, the experience shows that
you can't just define it using natural language
because our natural language is inherently imprecise.
Sure. So you still need to translate it to a formal specification in a programming language.
It could be in a popular language like in Rust, right, which is what Verus is aiming at.
And then, like, for example, some of the research we do is to show that, yes, you know,
I can also use AI to do this translation from natural language to specification.
But again, then who's to verify that, right?
So at the end of the day, I think there still does need to be a human in the loop.
But what we can do is to lower the burden and make the interface not so complicated, right? So that it will be easy
for human beings to check what AI has been doing. Yeah. You know, everything we're talking about
just reinforces this idea that we're living in a time where the advances in computer science
that seemed unrealistic or impossible, unattainable, even a few years ago,
are now so common that we take them for granted. And they don't even seem outrageous, but they are.
So I'm interested to know what, if anything, you would classify now as blue sky research in your
field. Maybe something in systems research today that looks like a moonshot.
You've actually anchored this in the fact that you kind of have, you know, blinders on for the
work you're doing, head down in the work you're doing. But even as you peek up from the work, is there anything
that might be outrageous, anything else? I'd just like to get this out there: you know, what's going on 10 years down the line?
You know, sometimes I feel like I'm just now so much into my own work. But, you know, occasionally, like say when I had a chat with my daughter and I explained to her, you know, oh, I'm working on, you know, not only have AI to generate code, but also have AI to prove,
right, the code is correct. And she would feel, wow, that sounds amazing.
So I don't know whether that is, you know, a moonshot thing. But that's the thing that I'm
super excited about, about the potential. And then they also have, you know, my colleagues,
we spend a lot of time building systems,
and it's not just about correctness, right? Like, the verification thing I'm doing now is related to
automatically verifying it's correct. But also you need to do a lot of performance tuning, right?
Just so that your system can react fast, right? It can have good utilization of computer resources. And my colleagues are also
working on using AI, right, to automatically do performance tuning. And I know what they are doing,
so I don't particularly feel that's a moonshot, but I guess... I feel like because you are so
immersed, you just don't see how much we think.
Yeah.
It's amazing.
Well, I'm just delighted to talk to you today, Shan.
As we close, and you've sort of just done a little vision casting, let's take your daughter, my daughter, all of our daughters.
Yes. How does what we believe about the future, in terms of these things that we could accomplish, influence the work we do today, as sort of a vision cast for the next Shan Lu who's struggling in undergrad or grad school?
Yes, yes. Oh, thank you for asking that question. Yeah, I have to say, you know, I think we're in a very interesting time, right, with all this AI thing.
Isn't that a curse in China? May you live in interesting times?
Before I fully embraced AI, indeed, I had my daughter in mind. I was worried, when she grows up,
what would happen? There will be no job for her because everything will be done by AI.
Oh, interesting.
But then now, now that I have, you know, kind of fully embraced AI myself, actually, I see this
more and more positively. Like you said, I remember, you know, those old days myself, right? That is really, like, I have this struggle
that I feel like I can do better. I feel like I have an idea to contribute. But just for whatever
reason, right, it took me forever to learn something, which I feel like it's a very mechanical
thing, but it just took me forever to learn, right? And then now, actually, I see this hope, right, with AI, you know, a lot of mechanical things that can actually now be done in a much more automated way by AI, right?
So then now, truly, you know, my daughter, many girls, many kids out there, right?
Whatever, you know, they are good at, their creativity, it'll be much easier, right, for them
to contribute their creativity to whatever discipline they are passionate about. Hopefully
they don't have to, you know, go through what I went through, right, to finally be able
to contribute. But then, of course, you know, at the same time, I do feel this responsibility of me, my colleagues, MSR.
We have the capability and also the responsibility of building AI tools in a responsible way so that it will be used in a positive way by the next generation.
Yeah.
Shan Lu, thank you so much for coming on the show today.
It's been absolutely delightful,
instructive, informative, wonderful.
Thank you.
My pleasure. Thank you.