Microsoft Research Podcast - 056 (rerun) - Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones
Episode Date: December 26, 2018. This episode first aired in January, 2018. When we look at a skyscraper or a suspension bridge, a simple search engine box on a screen looks tiny by comparison. But Dr. Simon Peyton Jones would like to remind us that computer programs, with hundreds of millions of lines of code, are actually among the largest structures human beings have ever built. A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of the programming language Haskell, and a Fellow of Britain’s Royal Society, Simon Peyton Jones has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research, reveals how a desire to help other researchers write and present better turned him into an unlikely YouTube star, and explains why, at least in the world of programming languages, purity is embarrassing, laziness is cool, and success should be avoided at all costs.
Transcript
My podcast in January with Simon Peyton Jones was one of the first shows to ring in 2018
and one of the most popular downloads of the series,
so it seems only appropriate to ring out the year with him as well.
Whether you've already downloaded the podcast or you're just discovering the notorious SPJ,
I know you'll enjoy Episode 7 of the Microsoft Research Podcast,
Functional Programming Languages and the Pursuit of Laziness.
I like to put it like this. When the limestone of imperative programming has worn away,
the granite of functional programming will be revealed underneath.
You're listening to the Microsoft Research Podcast, a show that brings you closer to
the cutting edge of technology research and the scientists behind it.
I'm your host, Gretchen Huizenga.
When we look at a skyscraper or a suspension bridge,
a simple search engine box on a screen looks tiny by comparison.
But Dr. Simon Peyton Jones would like to remind us that computer programs,
with hundreds of millions of lines of code, are actually among the largest structures human beings have ever
built.
A principal researcher at the Microsoft Research Lab in Cambridge, England, co-developer of
the programming language Haskell, and a fellow of Britain's Royal Society, Simon Peyton Jones
has dedicated his life to this very particular kind of construction work. Today, Dr. Peyton Jones shares his passion for functional programming research,
reveals how a desire to help other researchers write and present better
turned him into an unlikely YouTube star,
and explains why, at least in the world of programming languages,
purity is embarrassing, laziness is cool,
and success should be avoided at all costs.
That and much more on this episode of the Microsoft Research Podcast.
Simon, welcome. You're in the Programming Principles and Tools group at Microsoft Research
in Cambridge.
What do you spend most of your time doing there?
Well, programming languages are the fundamental material out of which we build programs.
When a builder builds a building, they can build out of bricks or out of straw, out of bananas or out of steel girders.
And it makes a difference what you build out of, how ambitious your building can be and
how likely it is to fall down. So when developers write programs, the material that they use, the fabric of their programs,
that is the programming language, is super important to the robustness and longevity
and reliability of their programs. So programming language researchers study programming languages
with the aim of building more robust building materials for developers to use.
What role does research play in making good programming languages?
Well, at first you might think that a programming language was, well, you just kind of throw it
together. But actually, when you build a programming language, you want to be sure that you know what
it means. That is to say, if you write a program, you'd like it to be clear what the program means,
what should happen when you execute it. That's called its semantics. So having a good way to specify in a rigorous way
what that program means, what it does, is really important. So we need to find formalisms that we
can write down rigorously what a program means, and then we need to implement it. So if we're
going to build a compiler that, say, translates a high-level language program into low-level
machine code that's going to run on your machine, you'd like to be confident that the compiler itself was correct.
That is, that it didn't change the meaning of the program along the way. So research gives us the ideas and theories that will enable people to build programming language designs and implementations that will be robust.
I wouldn't say that programming languages tend to arise specifically from academics having clever ideas
about what a language design might look like.
They're very often born in a much more random way in the white heat of,
oh, I just need to get something done.
And then retrospectively, programming language designers and researchers
start to look closely at the design and try to improve it.
So there have been dozens and dozens of papers about JavaScript, for example,
but JavaScript was not initially designed in that academic way.
I've seen your talks and you use some slides that show
sort of the trajectory of a lot of different languages.
You've suggested that there's hundreds of languages. Most of them share the fate of an
early death with only one or two at the memorial service. And then there's some that just resonate
and take off. What does it take to make it big? And is that something you should aim for?
So I think every computer scientist wants their language to be used.
That's one of the exciting things of working in Microsoft Research is that there's a real
chance your stuff might get used and have impact.
So we all want to make the world a better place.
In programming language research, I would say that while everybody would aspire to have
languages that have impact and are successful, it's pretty random which ones are.
The ones that are wildly successful are not necessarily the ones that are technically beautiful or well-designed.
They just hit some sweet spot at some particular moment. That's a bit frustrating in a way. I think
Haskell, the language that I've been involved in, has been quite successful, but it could easily
not have been. There's a lot of randomness in the process. You mentioned two giants in computer science, Alan Turing and Alonzo Church,
who came up with ideas at about the same time that have had a big impact on programming languages in two different streams.
I think you talked about declarative and imperative languages.
Can you talk about that for a minute?
So my entire research life, ever since I first got excited about functional programming when I was studying at Cambridge in 1979 or thereabouts,
my entire research life has been following through the idea of what might purely functional programming mean.
And if you look back a long way, as you say, it does all date back to Alonzo Church and Alan Turing to pick just two giants from the literature.
So Turing said, what is computation? What does it
mean to compute something? And he designed this thing that we now call the Turing machine,
that was very much step-at-a-time: do this, do this, read a thing from the tape,
write that onto the tape. It was a very imperative machine. Meanwhile, at the very same time,
actually in the same place, it was in Princeton, Alonzo Church was designing the lambda calculus,
which seems much more abstract, a kind of algebraic thing. It's like rewriting expressions. And he
discovered this tiny language in which expression rewrites could also apparently model computation.
So then it seemed obvious to ask, is there anything you could compute with the Turing
machine that you couldn't compute with lambda calculus, or vice versa? And in the end, it turned out, very surprisingly, that these two notions of
computation were the same. That is, anything you could do with the Turing machine, you could do
with lambda calculus and vice versa. But although they were equally powerful in the sense of what
can you in principle do, they gave rise to very different language streams. So Turing machines, ultimately,
you could see this is a bit of a retrospective justification, but you could see Turing machines
as the basis for all imperative languages, right? Do this and then do that step-at-a-time computation
in which the program is a sequence of steps that you do in sequence. Lambda calculus is then the
grandmother of functional programming, in which your program
executes by evaluation.
You evaluate an expression.
And these seem like completely different ways of thinking about your program.
You have to think about programming in a completely different way.
But nevertheless, they're equally expressive.
So the interest for me has been, what would it mean to take this much less popular but
nevertheless universal programming
paradigm of functional programming, and really push it through to see what could that mean in
a practical way for writing practical programs? So talk about the difference between functional
programming languages and other programming languages. The imperative approach, step-at-a-time
programming, is what everybody's used to. It's what C is like, Java is like, C++ is like, Python is like, Perl is like, Ruby is like, you know,
you name it, they're mostly imperative programming languages. Functional programming is very
different. It's more like a spreadsheet. Everybody's used a spreadsheet, and in a spreadsheet cell, you say,
here is a formula that gives the value of a cell, and you compute the value of a whole spreadsheet full of cells by computing each cell, perhaps one at a time, perhaps in parallel,
but in data dependency order.
If cell A1 depends on cell B3, you must compute cell B3 first, and then A1.
But there's no notion of open a valve or launch the missiles or print something.
You can't do that in the middle of a formula.
It wouldn't make sense.
So that's functional programming, right?
All of Excel's built-in functions are functions.
That is to say, they take some inputs and they produce some outputs.
They have no side effects.
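To make the spreadsheet analogy concrete, here is a minimal Haskell sketch (Haskell being the language discussed later in the episode); the cell names b3 and a1 are just illustrative bindings:

```haskell
-- "Cells" as pure bindings: a1 depends on b3, so b3 gets evaluated
-- first, but only because of that data dependency. A binding has no
-- side effects -- it cannot open a valve or launch the missiles.
b3 :: Double
b3 = 100 * 1.05

a1 :: Double
a1 = b3 + 20

main :: IO ()
main = print a1   -- prints 125.0
```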
And so the surprising thing, really, is that this purely functional approach to programming
is, in fact, universal.
If you think about it in a spreadsheet way, you'd think, well, that's good enough for writing business plans, maybe, or computing my bank
balance, but it couldn't do anything useful. Could you write a word processor in a spreadsheet? Well,
probably not, right? But the insight of functional programming, which stems right back to Church,
is that this programming paradigm is universal. You can do anything. And so the functional programming language researchers have
said, supposing we took that execution by evaluation idea and scaled it up, what would
that mean? And that's what my whole research life has been about, really.
Why did you get interested in that, I mean, at the very beginning?
Because it's like a radical and elegant attack on the entire enterprise of programming. Rather than just
being, well, let's just try doing this a slightly different way. It's like saying,
let's just attack programming from a completely different direction.
Moreover, it's very close to mathematics. The whole idea of lambda calculus really grew out
of logic. And there's very beautiful dualities between programming on the one hand and logic
on the other. It's called the Curry-Howard isomorphism. Let's say I have a function whose type says it takes two integers and produces an integer. Well, that type tells you something about the program. So in a sense, it's a weak theorem about the program: it tells you something about the program, but not everything. And indeed, you could regard the program as a proof of that theorem. So the idea of types as theorems and programs as proofs is a very deep connection between logic on the one hand and programming on the other. And this duality is very immediate in functional programming, but it's rather distant in imperative programming.
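As a small, hedged illustration of this types-as-theorems idea in Haskell (the names proj1 and plus are mine, purely illustrative):

```haskell
-- The polymorphic type is a theorem, and the function is its proof.
-- "From A and B together, you can conclude A":
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- By contrast, Int -> Int -> Int is a weak theorem: it tells you
-- something (two integers in, one integer out), but says nothing
-- about which integer comes out.
plus :: Int -> Int -> Int
plus x y = x + y
```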
So I tried to give you a sense for what got me excited about it. I just got excited about it because I thought it was such a beautiful, simple, elegant way of thinking about the enterprise of programming. Let's see if we can make it practical.
I love that. Now, how many people are in your, is camp the right word?
Because you have people writing imperative languages all over the place.
Is this something that needs to be evangelized, functional languages?
Sure, yes. So, you know, I like to put it like this. When the limestone of imperative
programming has worn away, the granite of functional programming will be revealed underneath.
So imperative programming is very appealing,
don't get me wrong, right? It's sort of what real machines do. If you look at what a microprocessor
does, it does loads and stores and adds and it sets things in registers that make valves go open
or launches the missiles or prints something, right? Functional programming is a bit more
abstract. So, that's why it's been a sort of minority pursuit for a long time. And over,
I guess, the 40-year period of my adventure with functional programming, it's
gradually infected the mainstream more and more, but not too fast.
That's quite important, right?
Avoid success at all costs is one of my little mottos, right?
Because if you're too successful too quickly, you get sort of stuck and you can't change
anything anymore.
But functional programming has become more and more influential.
We can talk about ways in which that has happened.
Well, I do want to talk about Haskell and what you've just said about the slow burn, the slow rise, and the benefits of not getting too successful too quickly or dying an early death,
but having the tenacity to stay there for long enough to start to grow and get more useful.
Yeah. So for me, one of the glories and privileges of being a research computer scientist is that
you're not just allowed, but actually paid to work on a simple and elegant idea and to do so
for 35 or 40 years. That's amazing that society allows us to do that.
So as far as Haskell goes, I mean, you don't just want to work on abstract ideas. You want to work
on things that have impact. So Haskell was developed by a group of research colleagues
around the world, including myself. And our idea was just to embody the current consensus among
ourselves about what pure, lazy functional programming might actually look like. And at that time, it was very
much a university enterprise. But by having an actual language and then turning it into an actual
compiler that people could actually use to get their job done, and then extending the compiler
so we could deal with input-output, and we could deal with foreign function interfaces and talk to C and so forth, and we could develop the type system to actually be useful. Over time, we've turned Haskell into something that is useful for practical applications, and now, in fact, it's really
quite widely used by developers in mostly small companies. So let's talk about laziness for a little bit. When I was growing up, that wasn't a virtuous
quality in our household, but somehow lazy functional computing is a good thing. Why is that?
Yes. So at first, it was just an amazingly clever and elegant thing.
So laziness is the idea that if you call a function in a normal imperative language or call-by-value language,
then before calling the function, you're going to evaluate the arguments to values,
and then you pass them to the function.
In a lazy functional language, you don't evaluate the arguments before passing them
to the function. You create recipes, or suspensions, or thunks, which you pass to the function. And if it
needs that argument, then it will evaluate it. So you can write a function that might evaluate
one or other, but not both, of its arguments. And that can be super important. Just think of a
function like if, a conditional, where you don't want to evaluate both the then branch and the else branch.
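This is easy to see in Haskell. A minimal sketch, with myIf as an illustrative name: because arguments arrive as unevaluated thunks, you can define your own conditional as an ordinary function, which is impossible in a call-by-value language.

```haskell
-- myIf forces exactly one of its branch arguments, never both.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = print (myIf True 42 (error "the else branch is never forced"))
-- Prints 42: the error is passed in as a thunk but never evaluated.
```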
So why did that happen? Well, firstly, it was because we could: a program in the lambda
calculus is an expression that you evaluate. And when you evaluate an expression, like
if I evaluate the arithmetic expression (3 plus 4) times (7 plus 8), then I could evaluate
the 3 plus 4 first or the 7 plus 8 first. There isn't an
inherent order in expression evaluation, except that I must evaluate the 3 plus 4 and the 7 plus
8 before I multiply them, right? So there's some data dependencies, but there's a lot of fluidity
about evaluation order. And it's the same with the lambda calculus. And it turns out there was a lot
of study in the theoretical literature about evaluation order.
And some of these evaluation orders, called normal order, naturally led to lazy evaluation.
We thought, oh, that's interesting.
Oh, it just sort of naturally arises.
What would that be good for?
At first, we just thought it was cool.
And then John Hughes wrote this very interesting paper called Why Functional Programming Matters,
in which he said laziness is not just cool, it's useful. And he did that by describing how laziness gives you a new form of modularity.
And his classic example was this. Supposing I'm writing a program to play chess. Well,
one thing I might do is explore the tree of possible moves. He could move this way,
then I could move that way, then you could move that way. There's a big tree.
Suppose I first generated the tree and then pruned it to figure out the best move.
Well, that tree would be too big.
So usually we would have to generate and prune at the same time.
And John said, well, no, with lazy evaluation, you can generate in one piece of program and prune in another.
And that gives you a new form of modularity.
So that was really an interesting idea that's worked out exactly that way.
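Hughes's example can be sketched in a few lines of Haskell. This is an illustrative reconstruction, not the code from his paper: the tree of positions is generated in one piece of program, the pruning lives in another, and laziness ensures only the pruned portion is ever built.

```haskell
-- A rose tree of game positions.
data Tree a = Node a [Tree a]

-- Lazily generate the (potentially enormous) tree of positions
-- reachable from a start position, given a move generator.
gameTree :: (pos -> [pos]) -> pos -> Tree pos
gameTree moves p = Node p (map (gameTree moves) (moves p))

-- Prune to a fixed depth, written quite separately. Laziness means
-- gameTree only constructs the nodes that prune demands.
prune :: Int -> Tree a -> Tree a
prune 0 (Node x _)  = Node x []
prune n (Node x ts) = Node x (map (prune (n - 1)) ts)
```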
I love it.
And I'll probably use it. That laziness is not just cool, but useful.
It is not just cool, but useful. Yes.
How did laziness and purity come together?
So Haskell's initial defining characteristic was that it was a lazy language. That's what
brought that particular group of people together, what we thought was exciting and cool.
But in retrospect,
I now think what was much more important was that laziness forced Haskell to be a pure language,
by which I mean, in a call-by-value functional language like ML or Lisp, if you wanted to print something, it was too tempting to have a function, in quotes, which, when you call it, would print
something as a side effect. That is, it wouldn't just return, well,
what would print return? Unit or three or something, but it would print something on the side. So we
couldn't do that in the lazy language because we couldn't predict the evaluation order well enough.
So laziness kept us pure. And purity was embarrassing for a long time because you
couldn't really do much by way of input output. You couldn't print things or open files
or launch missiles or sail the boat. So that forced us to invent what came to be called monadic
input output. And there was another classic example in which Phil Wadler, my colleague at Glasgow,
took ideas from the logic world, the theory of monads developed by various people, but he was
particularly drawing on the work of Eugenio Moggi, who was very much a theorist. Phil Wadler wrote this wonderful paper, Comprehending Monads,
in which he described monads as a programming idiom. And then he and I subsequently wrote a
paper called Imperative Functional Programming, which showed how you could apply monadic programming
to do input-output, to affect the world.
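In modern Haskell, that monadic input-output looks like this minimal sketch:

```haskell
-- main is a pure *value* of type IO (): a description of effects
-- that the runtime executes. do-notation is monadic sequencing.
main :: IO ()
main = do
  putStrLn "What's your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name)
```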
And that idea has been wildly infectious. That's spread to all sorts of places. So people now use the monadic thought pattern as a design idea for designing their programming languages; you can see it all over the place now. But it only happened because we were stuck with purity, because we had laziness. It was another place where the theory both helped the practice and almost forced the practice, because we would have had to break with our principles too much to just have side effects. So we were stuck with no side effects and were forced to invent this alternative way of going about things.
Aside from your pioneering work in functional programming,
a good part of what you do involves inspiring the next generation
to take up the computer science baton and run with it.
How have you gone about doing that?
What have you done in the inspiration business
for computer science? I started with this about 10 years ago when my children were at school,
and we would sit around the dinner table. They would tell me what they did at school,
and they had complete contempt for their lessons in ICT, information and communication technology.
And so in talking to them, I was unable to make any connection between the subject that I thought
was so interesting that I devoted my professional life to it and the subject that they were learning
at school. And that was different to, say, biology, in which I think a biologist sitting
around the dinner table with their children would be able to make a connection between the subject
discipline that their children, even at primary school, were learning at school and the subject
discipline that they thought was so interesting they devoted their professional life to it. So that seemed like a very big disconnect.
The more people I talked to, the more people said, well, yeah, it doesn't make sense, but that's the way it is. So I helped start an outfit called Computing at School, which is based in the UK but open to anybody anywhere in the world, whose sole mission was to try to say: what might it mean to teach computer science as a subject discipline
to school children and to teach it at the same levels and for the same reasons that we teach
natural science or mathematics. That is, not because they're going to become physicists or mathematicians, necessarily; a few will, but most will not. But because knowing some elementary principles about the physical, chemical, biological, or digital world that surrounds them will make them more empowered, better-informed citizens. And that applies from primary school onwards. So that was the mission of Computing at School.
It's now part of the core curriculum in the UK.
That's right. So we were unexpectedly
successful. And we started in 2007-8. It was
like we felt as if we were at the bottom of a deep well, you know, shouting up towards the
daylight, you know, computer science is important, you know. We got lucky. We wrote a curriculum.
Serendipitously, there was a review of the entire national curriculum, started by the
then-Conservative government. So we were ready to make input to that curriculum debate. And in the end,
we achieved almost all our policy goals. The new national curriculum for computing in England
pretty much says in black and white, all children should learn the fundamental principles of
computer science and should do so from primary school onwards. So that's amazing. And that came
into force in 2014. But there's a big challenge after that. It's like when you scale one apparently
insurmountable mountain, what do you find behind it? Another bigger mountain. And in this case,
it's how do we turn that aspirational idea into a tangible and living reality in every classroom
in the land? And that is a big challenge, because while teachers are willing and committed and hardworking and able, they're by and large not qualified in computer science. So there's a lot to do. There's a lot to do. The state of play in this country is pockets of excellence, but overall, it's quite fragile.
I think that most countries in the world, at various stages, are facing the same issues with policy goals and implementation.
And then how do you prepare teachers?
We're watching the UK, I think.
Yeah, I think pretty much every nation in the world is thinking hard about what they teach their children about computing and how they go about teaching it.
And I don't think anybody has a monopoly on truth here.
We're all trying to figure it out as we go along. Do you think there's any room in the research
community for this kind of line of inquiry?
Oh, tremendous. Yes. So both among computer scientists, who I think, individually and collectively, should be active in talking to their local school teachers, in being on school boards of governors,
because there's a seismic change taking place. It's like establishing an entirely new subject
at school level. And what is that entirely new subject? Well, it's called computer science.
And who would know about that? Well, computer scientists, particularly research computer
scientists. So we may not know how to teach. We may not know much about children, but we know
the subject discipline, so we should get involved. But the other thing at the research end we need
is research in education, right? Because computer scientists know nothing
about education. What is good pedagogy for computer science concepts? How might you teach
computational thinking? What role does formative assessment play? How could you use, you know,
hinge point questions to teach computing more effectively? When we teach programming, does it
make sense to start from a blank sheet of paper and say, write a program to do X? Or should we instead spend a lot of time
showing programs and saying, please explain to your neighbor how this works, or here is a program
with a bug in it, please find the bug and explain what's wrong and fix it. There are a lot of
different approaches to how you go about teaching, and we need educational research, in the end,
backed by research evidence to say which of these approaches works better.
I think you've just given any number of listeners to this podcast some ideas about where they might want to go with research in the future, if they have a passion for education and for computer science.
Yeah, this is it.
The intersection of education and computer science is a very rich area at the moment. And everybody wants to make a difference to the education that
we give our children, because many of us have children and want to see them succeed.
Listen, let's talk about another intersection that you're really interested in,
theory and practice.
Computer science is unusual. If you're in biology, then just finding out something that is true
is progress. So novelty has value in its own right. That's true
of any natural science. In computer science, novelty has no value in and of itself. It's too
easy to make up new stuff. It's kind of like a practical discipline: everywhere you dig, you can turn up new detail. We're creating ideas out of nothing, out of pure thought stuff. Fred Brooks gave this wonderful Newell Award lecture called The Computer Scientist as Toolsmith. And he says, computer science and its theories only have value insofar
as they demonstrate utility. So that's a question I ask about every paper, every research proposal
I see. It's not just ideas, but utility. So to return to your question then about theory and
practice, nevertheless, it's much more fun if theory and practice live quite close together.
If you can use a piece of theory to give practical results and make that crossover without bending
the theory too much out of shape. And in functional programming, that's particularly true.
So, for example, in the compiler that we built for Haskell, which is called GHC, we were struggling in the very early '90s to think: what should its intermediate language be like? Haskell is a very large source language;
we compile it into a small intermediate language that we will then transform,
optimize, transform and optimize, and then finally spit out machine code.
What should that intermediate language be? We want it to be strongly typed itself. And I was
worrying about, oh, where could we put the types and how would they live and how would they survive transformation? And Phil Wadler said to me, you know what, Simon,
we should use System F. I sort of rocked back in my chair and thought, System F? I learned about
that in an extremely theoretical seminar, run by Samson Abramsky, that I went to. I thought it was of purely theoretical interest, but it turned out we ended up directly implementing System F in GHC,
and it's still there to this day. It's a very pure embodiment of an idea that was developed
solely in a theory context, but turned out to have immediate practical utility. And that happens again and again in functional programming.
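To give a flavor of what a System-F-style intermediate language looks like, here is a toy sketch in Haskell. This is purely illustrative; GHC's real Core language has more constructs (data types, literals, coercions) than this fragment.

```haskell
-- Types: variables, function types, and explicit quantification.
data Ty
  = TyVar String
  | TyFun Ty Ty
  | ForAll String Ty            -- forall a. ...

-- Terms: every binder carries its type, and type abstraction and
-- type application are explicit, which is what lets typed programs
-- survive transformation after transformation.
data Expr
  = Var String
  | Lam String Ty Expr          -- \(x :: t) -> e
  | App Expr Expr
  | TyLam String Expr           -- /\a. e   (type abstraction)
  | TyApp Expr Ty               -- e @t     (type application)

-- The polymorphic identity function: /\a. \(x :: a). x
identity :: Expr
identity = TyLam "a" (Lam "x" (TyVar "a") (Var "x"))
```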
I love that. I want to ask you about a couple of videos you're in that have tens of thousands of views on YouTube
about how to write a research paper and how to give a research talk.
Could you talk about that a little bit and why that was important to you
and how that came about that you became a video star on YouTube?
Well, a lot of research is about communicating.
As I say in these talks, no matter how brilliant you are, if you sit in a sealed room and have fantastic ideas but don't tell anybody, then all you've done is heat up the universe.
You've not really made it a better place.
So communication is key. I think I wrote the first
of these, about how to give a talk, with John Hughes and John Launchbury when we were
colleagues in the same department. We'd been to a lot of research talks. We started talking to
each other about, couldn't a lot of these talks be a lot better with some quite simple suggestions?
So then we wrote them down in a SIGPLAN Notices paper, and I gave a talk about it. And then
subsequently, I developed a talk about how to write a research paper, which has been extremely popular. And it arose in the same way.
I just thought, I'm reading a lot of papers, I'm reviewing a lot of papers, and some quite simple
ideas I feel could make them a lot better. And so I thought that it was worth putting a bit of
effort into trying to articulate or distill the techniques or ideas that I used and hope they'd
be useful to others. And to my astonishment, they seem to have been quite widely looked at, including by people in
completely different disciplines, like psychology and history. It's really strange. I get email from
the most remarkable places. I think, in terms of citations or views or web page hits, all the rest of this functional programming stuff is, you know, nothing by comparison.
One of the most interesting things I heard you say is that computer programs are among the largest structures or the largest things humans have ever built.
And when we look at other structures, they seem enormous to our eyes, but people don't usually see the millions of lines of code behind a very small thing like a search engine box.
Why do you tell that story and what's important for us to understand about that? 99.9% of the population has no visceral sort of gut feel for just how complicated, remarkable,
and fragile our software infrastructure is. The search box looks simple, but there are millions of lines of code behind it. If you could see that in the way that you can see an aircraft carrier
or some complicated machine you can see inside, you'd have a more visceral sense for
how amazing it is that it works at all, still less that it works so well. But you don't get that
sense from a computer program because it's so tiny, right? All of my intellectual output for
my entire life, including GHC, would easily fit on a USB stick. On that little thumbnail-sized
thing, I've just changed some ones to zeros and some
zeros to ones, and all the ones and zeros were there to begin with. All I've done is change the
state of some of them as my entire professional output. And yet, these artifacts are so complex
and so large, they need entirely new techniques for dealing with them. So if you think about how
a large piece of software is built, we built it with layer upon layer of abstraction. We built
libraries which hide their insides, but provide an API that you can call.
And you build another library on top of that and another library on top of that.
And so we manage the complexity of these gigantic systems by building abstractions and learning how to describe those abstractions.
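In Haskell terms, to stay with the episode's language, that kind of abstraction shows up as a module exporting an API while hiding its representation. A minimal, purely illustrative sketch:

```haskell
module Counter (Counter, new, tick, count) where

-- The Counter constructor is not exported, so callers can only go
-- through the API; the representation can change without breaking
-- any client code. That is the layering being described here.
newtype Counter = Counter Int

new :: Counter
new = Counter 0

tick :: Counter -> Counter
tick (Counter n) = Counter (n + 1)

count :: Counter -> Int
count (Counter n) = n
```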
I mean, that's another big part of what programming language people are interested in, right? So why is that important? One, I would like people who are not computer
scientists to have the idea that there's something rather amazing going on, and also that it's so
complicated, it's not surprising if it goes wrong occasionally. We shouldn't place too much trust
in it, right? It's not magic. Sometimes I think people are too guilelessly trusting of computers.
But also, for computer scientists or people thinking about, is this a field I'd like to be interested in?
The idea of this whole remarkable wonderland of interest and complexity and creativity, right?
Programming is one of the most creative disciplines in the world where you can create completely new things that nobody has ever built before.
That's something I'd like to get across to people.
What's the best thing about being a researcher to you?
And why would a young computer scientist want to follow in your footsteps in the field of research?
Well, for me, it's been a great privilege just to be able to take one idea and follow it through,
take the idea of functional programming and run with it. And I've been able to do that both at university for about 15 years, 17 years, and then subsequently at Microsoft for rather longer now, actually,
coming up on 20 years at Microsoft. And for me, this mixture of elegant theoretical
ideas that have direct practical impact has always been a powerful motivator. So why might a young
person want to be interested in computing,
whether in research or not? Because you can build amazing things out of this pure thought stuff.
Why might somebody want to go in research specifically? Well, typically, if you're
working in industry, you're building amazing programs out of nothing. In research, you build amazing ideas out of nothing. So as we close, what thoughts would you share about your long life of research that would
give the next generation, say, a vision for what might be next?
So I never had a long-term research plan.
I never had a, oh, here are the three big things I'm going to do with my life,
and I'm on this 20-year trajectory to do it.
I was always just doing the next thing.
So I'm not really a very long-range planner,
but I did have hold of one idea, this functional programming idea.
I didn't know how it would turn out, but I just found it fascinating.
So I would suggest to younger people, just start with something.
I remember when I started as a researcher at University College London, I didn't have a PhD.
My head of department gave me some time off to do research, but I had no idea what to do.
So I just sat there with a sharp pencil and a blank sheet of paper, hoping for great ideas to come, which, of course, they didn't.
And then my colleague, John Washbrook, he said to me, Simon, just do something, anything, no matter how humble and simple, just start
something. And so I did. I wrote a little parser generator for a functional language
called SASL. And that eventually turned into a research paper as it happened. So the wonderful
thing about computer science is if you start almost anything, it'll turn into something
interesting. Don't be too worried. Just get started on something that interests you.
Simon Peyton Jones, thanks for coming all the way over
from England on Skype with us today.
Oh, it's been fun.
To learn more about Dr. Simon Peyton Jones
and his work in the field
of lazy functional programming languages,
visit microsoft.com slash research.