Embedded - 316: Obviously Wasn't Obvious (Repeat)
Episode Date: August 18, 2022
Professor Barbara Liskov spoke with us about the Liskov substitution principle, data abstraction, the software crisis, and winning a Turing Award. See Professor Liskov’s page at MIT, including her incredible CV.
Transcript
Welcome to Embedded.
I am Elecia White, here with Christopher White.
And this week we are talking to Professor Barbara Liskov of MIT.
You may have heard of her.
Hello, Professor Liskov. Welcome.
Thank you.
Could you tell us about yourself as though you were on a panel?
Well, as you said earlier, I'm a professor at MIT.
I'm in the electrical engineering and computer science department.
I'm a computer scientist.
I got into the field quite early, and this meant that I had the opportunity to work on some really interesting
things that have led to the way that computing is today.
Yes, you've worked on many interesting things.
Before I ask more about those, we want to do lightning round,
where we'll ask you short questions, we want short answers. Are you ready?
Yeah.
Favorite college course to teach? Well, I have been teaching computer science for many years. And I guess that a couple of the
courses that I taught were my favorites. One of them was the course I developed myself with my colleague John Guttag, which was a course in how to write big programs.
Another one was a course on computer systems.
Do you have a Hopper nanosecond?
Do I have a what?
A Grace Hopper nanosecond.
No, I don't.
I don't actually know what that is.
When somebody finds out what you do, what question do they tend to ask you?
I'm not sure that there's any particular question that comes to mind.
It's very hard to explain what I do to people who are not computer scientists, because it has to do with how you organize programs and other kinds of technical details that don't relate directly to applications.
So it's a little difficult.
Do you have a favorite programming language or a least favorite programming language?
Well, I guess that my favorite programming language is the one that I developed myself in the 1970s.
CLU?
CLU, yes.
I'm not sure that I have a least favorite language.
I'm going to ask you a random question, and you may not know the answer.
Was CLU the inspiration for the character in Tron?
You know, it might have been, but I don't know the answer for sure.
Okay. Do you have a favorite theorem or algorithm?
I wouldn't say so.
Equal opportunity.
Equal opportunity, right.
I guess we should have asked a favorite abstraction.
Favorite abstraction, right. Yeah.
So you're a professor at MIT. How long have you been there?
I have been at MIT since 1972.
You've been a professor at MIT since 1972.
That's right. Yep.
How have things changed in computer science since you were one of the first women to graduate with a PhD in computer science?
Well, when I got my PhD, computer science was still in its infancy.
And people didn't understand very well how to organize big programs. There weren't any distributed systems like what we're
familiar with today, with the cloud and all the different computers connected by networks.
People were just sort of getting started and trying to figure out how to do things.
So it's changed tremendously.
How is it different to make a big system versus an iPhone app or some other application?
What are the first few things you tell your students that they have to start thinking about?
Well, first of all, some apps might be big. But when you're
building big systems, the computer programs are really big. There are millions of lines of code.
They need to be worked on by many different people. They live for a long time. And so after the original development is done,
additional people may get involved in maintaining them and adding new features.
So you have to have a way of organizing them that allows people to understand them and reason about
them. When you're working on a small program and it's just you, you can kind of throw
it together. You're the only one involved. But when you start to work on big systems, you have to
break them up into components that we call modules and think about each module independently.
And that is the core problem: how do you break systems up into modules, and what are these modules that impact software development? I mean, I think about good APIs, as few
cross-module variables as humanly possible, and trying to keep data in one area and not anywhere else.
I mean, these all have real formal names,
but what else are you looking at when you're making modules?
Is there advice that maybe people don't know?
Well, the things that you just mentioned are important things. And
it's not easy when you're designing a large system to figure out what the module should be.
No, it isn't. You also have to think about how the program might change in the future because you can design in such a way that it's easy
to accommodate certain kinds of changes.
So that's another thing you think about when you're designing a big system.
And you did your dissertation when you got your PhD on playing chess.
How did you get from there to big systems?
So when I went to Stanford, I started working with John McCarthy.
And John was a specialist in artificial intelligence.
And so I did my thesis in AI.
And at the time I did that work, the idea with artificial intelligence was that you wanted your programs to work sort of the way people think about things.
And so my thesis was about picking up certain ideas about how to play chess endgames and getting
them into the program so the program could work the way it was recommended that people think about
these endgames. I knew partway through my PhD that I actually wanted to switch out of AI and
into computer systems, but I decided to wait until I finished my PhD,
since I thought I would finish it faster that way. And as soon as I finished at Stanford,
I switched out of AI and into computer systems. What made you switch?
I was just, AI has changed a lot. And I felt that it had an awful long way to go before it
was going to be able to tackle really interesting problems.
You were right.
They didn't have deep neural nets in 1968.
No, they did not.
Not only that, but the computers were quite small and not very fast.
And so this limited greatly what you were able to do.
When I start looking at big systems, or even small systems that are complex, I always do block diagrams.
That's my initial how am I going to make sense of the world solution.
Do you, I mean, that's how I've been doing it.
Are there better methods?
Is there a formal theory to modularity?
Yeah.
Well, there is this thing called computational thinking.
And the idea, what I tell my students is that you start by inventing abstractions.
So you think about data types, you think about functions, that if you had a machine that did all that stuff, this would be the ideal machine on which to build the program to do whatever it is you're working on.
You work with that for a while.
So you actually write that program.
It's just a bunch of comments.
And you define the meaning of all the abstractions that you invented. And after you're confident that you have a system that would work, then you pick up one or more of those abstractions that you invented and you do the whole thing again.
And of course, you document it.
And the document is this thing I call a module dependency diagram that shows, you know, what abstractions each piece of the program depends on.
So I think that's probably a bit like your block diagram.
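[A minimal sketch of that abstraction-first pass, in C++. The RingBuffer abstraction, its spec, and the comment-form dependency diagram are invented for illustration; they are not from the conversation.]

    #include <cstddef>
    #include <vector>

    // First pass of the approach described above: invent the abstractions,
    // write down what each one means as comments, and only then fill in code.

    // Abstraction: a fixed-capacity buffer of samples.
    // Spec: push() stores a sample, discarding the oldest one when full;
    //       size() is the number of samples currently held (never > capacity).
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t capacity) : capacity_(capacity) {}

        void push(int sample) {
            if (capacity_ == 0) return;
            if (data_.size() == capacity_) data_.erase(data_.begin());
            data_.push_back(sample);
        }

        std::size_t size() const { return data_.size(); }

    private:
        // Representation: hidden from clients; could later be swapped for a
        // true circular array without changing the spec above.
        std::size_t capacity_;
        std::vector<int> data_;
    };

    // Module dependency diagram, in comment form:
    //   main -> RingBuffer   (main relies only on the spec, not the representation)
    int main() {
        RingBuffer buf(2);
        buf.push(1);
        buf.push(2);
        buf.push(3);               // oldest sample (1) is discarded, per the spec
        return buf.size() == 2 ? 0 : 1;
    }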
And when you say abstractions, it's one of those words that people kind of run away screaming,
oh my god, abstractions. But it's not that complicated of an idea? No, it's not. And for people who work in computing,
I think it's a pretty natural idea. So an example of an abstract operation is a routine that would
sort arrays. An array itself is an example of a data abstraction. A file is another example.
So there are many, many examples of what these abstractions are.
And if you're working with programming, you learn about those, and then it's not that difficult to think of other things that are like that.
And it isn't just object-oriented programming that has abstractions?
No, abstraction is just a basic way in which you build programs.
Although I think most programs that are built today are object-oriented just because
it's a natural way to think about what's going on.
Even working in C on low-level devices, I tend to take an object-oriented approach because I think about
sensors or buses or motors or whatever I'm working on and that those are objects.
That's right. That's right. And they exist in all programs.
But sometimes figuring out which is an important object is difficult.
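[A small C++ sketch of the "peripheral as an object" idea described here. The Motor class, its register offset, and its addresses are made up; the register write is stubbed with a print so the sketch stays runnable.]

    #include <cstdint>
    #include <cstdio>

    // Treating a peripheral as an object: one small interface, register
    // details kept inside. No real hardware is assumed.
    class Motor {
    public:
        explicit Motor(std::uint32_t base_address) : base_(base_address) {}

        void set_speed(int percent) {
            if (percent < 0) percent = 0;
            if (percent > 100) percent = 100;
            write_register(0x04, static_cast<std::uint32_t>(percent));  // made-up speed register offset
        }

        void stop() { set_speed(0); }

    private:
        void write_register(std::uint32_t offset, std::uint32_t value) {
            // Stand-in for a real memory-mapped write so the sketch runs anywhere.
            std::printf("write 0x%08x <- %u\n",
                        static_cast<unsigned>(base_ + offset),
                        static_cast<unsigned>(value));
        }

        std::uint32_t base_;  // hidden representation: where the device lives
    };

    int main() {
        Motor left(0x40000000u), right(0x40001000u);
        left.set_speed(50);
        right.stop();
        return 0;
    }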
I think that there's an art to design, and that was the one thing I felt that I could not teach my students.
I could tell them what abstractions were, I could give them examples,
but coming up with the right abstractions makes a huge difference, and that is a kind of an art.
Are there any tips on that?
She just said she couldn't teach it.
It is an art.
I think, you know, you practice, you look at programs, you see what works and what doesn't work.
And a lot of this has to do with abstractions and being able to reuse subtypes.
Right.
Is this what you call it? Do you call it the Liskov substitution principle? No. What happened was I defined the idea in a talk I gave around 1986 or 87.
And about four or five years later, I received an email from someone asking me, is this the correct definition of the Liskov substitution principle? So that was
the first time I knew there was such a thing as the Liskov substitution principle.
What was your reaction? What did you say?
Well, I was astounded. My reaction was I looked at whatever it was that was presented and sent
back an answer, but it was a big surprise. I didn't realize.
Then at that point, I realized that there was all this chatter on the Internet
where people were talking about the Liskov substitution principle
and trying to figure out exactly what it was, but it was all news to me.
Could you explain it to us?
Well, the way that I defined it was I said, you have a program and you've written it in terms of a particular data type.
And that means that you assume not only that that data type has certain operations, but that they have a certain behavior.
And you want that program to be able to work correctly, even if it's past an object that
belongs to a subtype of the type that you were assuming. And this imposes certain constraints
on the subtype. Namely, the subtype, of course, obviously has to have the same operations that
you're expecting, but they also have to behave exactly the same way that the supertype would behave
as long as you're using it in this context of the supertype
and you're not looking at any of the extra behavior that the subtype provides for you.
Okay.
So a concrete example, if I had a string object that I'd subclassed
to be a special type of string with some extra operations. A string search would still work on the sub,
you know, a search function should still work
on the subclass as well as the superclass.
That's right.
So the string subtype that you define
might have some extra operations
that aren't being used by this program
that was written in terms of the supertype, the strings.
But it better behave the same
with all the operations that string provides so that that program will continue to work correctly.
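[A rough C++ rendering of the string example, with invented names (Log, TimestampedLog): the subtype adds an operation but leaves the supertype's behavior alone, so a search written against the supertype still works.]

    #include <string>
    #include <utility>

    // Supertype written against a behavioral spec.
    class Log {
    public:
        explicit Log(std::string text) : text_(std::move(text)) {}
        virtual ~Log() = default;
        // Spec: true if `word` occurs anywhere in the stored text.
        virtual bool contains(const std::string& word) const {
            return text_.find(word) != std::string::npos;
        }
    protected:
        std::string text_;
    };

    // A subtype with an extra operation; contains() behaves exactly as before.
    class TimestampedLog : public Log {
    public:
        TimestampedLog(std::string text, long long t) : Log(std::move(text)), t_(t) {}
        long long timestamp() const { return t_; }   // extra behavior, unused below
    private:
        long long t_;
    };

    // Written against the supertype only.
    bool mentions_error(const Log& log) { return log.contains("error"); }

    int main() {
        TimestampedLog log("boot ok; error at init", 1234);
        return mentions_error(log) ? 0 : 1;   // substitution works
    }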
I found this a little easier to understand in its failure. If I have a dime and I call that
a subtype of a coin, I want to flip a coin. Okay, I can use a dime. But if there is some
cubical form of money or spherical sort of money, and I call that a subtype of a coin,
when I flip it, the whole universe crashes. Yes. So I think that what we have there is
an example of why it's such an important principle. We write lots and lots of code.
It's all modular, so the code is written in terms of other modules and relies heavily on the expected
behavior. And if the expected behavior isn't there, then all the code that depends on it doesn't work.
And that's what's really important about what you refer to as the Liskov substitution principle.
Technically, it's called behavioral subtyping,
because if you don't obey the rules, your program doesn't work.
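[And a sketch of the failure case that follows, using invented types: the misbehaving subtype compiles fine, which is exactly why the rule has to be a behavioral one.]

    #include <stdexcept>

    class Coin {
    public:
        virtual ~Coin() = default;
        // Spec: returns true (heads) or false (tails); never fails.
        // Deterministic here just to keep the sketch short.
        virtual bool flip() const { return true; }
    };

    class Dime : public Coin {
    public:
        bool flip() const override { return false; }   // still a valid outcome
    };

    class SphericalMoney : public Coin {
    public:
        // Violates the spec: flip() is supposed to always succeed.
        bool flip() const override { throw std::runtime_error("it just rolls away"); }
    };

    // Written against Coin's specification.
    bool call_it_heads(const Coin& c) { return c.flip(); }

    int main() {
        Dime dime;
        call_it_heads(dime);       // fine: behaves like a Coin
        SphericalMoney sphere;
        call_it_heads(sphere);     // type-checks, but aborts at run time
        return 0;
    }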
That seems so basic.
I mean, if you don't obey the rules, your program doesn't work.
But you have to know the rules.
Yeah. But you have to, yeah.
Even with this, I wouldn't call a motorcycle a car.
Messing up abstraction is confusing.
Why, this seems really obvious.
Is it just hindsight?
Not obvious.
I don't think it's obvious.
Well, at the time that I invented it, it obviously wasn't obvious.
And I think that what happened was I had been working on CLU and data abstraction, and I was thinking about it behaviorally.
So I would think about a data abstraction.
It has a specification.
This is how it's supposed to behave.
And I didn't think at all. I had a very clear separation between what a module was supposed to do and how it was implemented.
In the object-oriented world as it existed at that time, it was much more implementation-oriented.
And people would talk about subtypes by talking about how the implementation of the supertype was changed in order to build the subtype.
Because they were working on inheritance. And so I guess that because I was thinking about it from the point of view of behavior, I was able to see this rule that other people hadn't been able to see.
What was the state of object-oriented programming in the mid-'80s?
Was it small talk was around?
Smalltalk was around, and there were a few other languages being looked at, like, I forget the names, you know, some version of Lisp and so forth.
And people were just, they were searching for an understanding of what subtypes should be like,
but they just didn't understand it. They hadn't quite got there yet. And I happened to start
thinking about this because I was giving a talk at OOPSLA, which is the big object-oriented
conference. And I decided I better look at what's going on with object-oriented programming. And I
discovered they were interested in this idea of subtypes, but they didn't understand what it was.
And to me, it seemed, you know, kind of obvious. You want the subtype to behave like the supertype
when used where you expect the supertype.
I mean, it is kind of common sense, but for some reason I was able to see it.
I don't know why.
It is both common sense and yet so much of computer science is being able to figure out which common sense applies and which part doesn't.
I guess that's true. At the talk that you gave, did anybody stand up and shout,
wow, this is the best thing ever? Or was it? You've been to technical conferences. Is there
a lot of shouting? So honestly, I have no memory of this. I think the talk was well received, but
I don't remember any particular, I mean, as I said,
you know, five years later, all of a sudden out of the blue, I got this email. And meanwhile,
I was working on other stuff. I wasn't thinking about it at all. So, you know, I don't know.
Clearly, it had a big impact because it sort of took off on a life of
its own. It really did. And did that, once it came back five years later and people were talking
about it, did you revisit it and think about it further or had you kind of moved on? Well,
I had moved on, but Jeannette Wing had gotten in touch with me. She
was interested in trying to come up with a formal definition of what it was all about.
Because what I just gave you was a very informal sort of intuitive explanation.
And so I did do work with Jeannette in the 90s. And we wrote some papers based on that work about
what exactly this substitution principle means if you try to pin it down.
Did you make any refinements that you can
explain here given, you know, not an hour's worth of lecture?
And the fact that it's a long time ago.
Well, that too.
Yeah, right.
I don't know. The mid-90s seems so much closer than the mid-70s.
That's true. But no, I don't think I can. So sorry.
No, no, don't worry about it.
If I went back and thought about it, yeah, I probably could, but I haven't thought about it for quite a while.
When we talk about computer science, this is the science part. But there's also this area of how do we know which goals or ideals or methodologies work best? Is there some metric that says this is better, other than common sense?
I'm certainly not aware of metrics. There was quite a bit of work
on metrics in the 70s and 80s, and I think it never got anywhere. But in truth, most of my research
starting in the 80s has been in distributed systems. And I haven't been following the
research that goes on in the programming engineering, what's called software engineering
or programming methodology. And so I can't really answer the question whether there are metrics.
I can tell you that the proper use of the Liskov substitution principle is not something that can be enforced by a compiler.
And so people have to apply it on their own and think about it.
It would be better if there were a way that it could be enforced.
And then when somebody makes a mistake and doesn't quite get it right,
rather than having to find out later because some module breaks.
You know, you could find out just when you ran your program through the compiler.
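[One common stopgap, offered here as an editor's sketch rather than anything from the conversation: state the supertype's contract once as a test and run it against every subtype. The Counter hierarchy is invented.]

    #include <cassert>
    #include <memory>
    #include <vector>

    class Counter {
    public:
        virtual ~Counter() = default;
        virtual void increment() { ++n_; }
        virtual int value() const { return n_; }
    private:
        int n_ = 0;
    };

    class LoggingCounter : public Counter {
    public:
        void increment() override {
            // extra behavior (e.g. logging) would go here...
            Counter::increment();   // ...but the supertype's contract is preserved
        }
    };

    // Contract: one increment raises value() by exactly one.
    void check_counter_contract(Counter& c) {
        int before = c.value();
        c.increment();
        assert(c.value() == before + 1);
    }

    int main() {
        std::vector<std::unique_ptr<Counter>> counters;
        counters.push_back(std::make_unique<Counter>());
        counters.push_back(std::make_unique<LoggingCounter>());
        for (auto& c : counters) check_counter_contract(*c);  // run the same contract on every subtype
        return 0;
    }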
I have been thinking for a while now that I kind of need,
I think a friend called it a daily devotional for software engineers.
Something that would, like every day would remind me,
oh, the Liskov
substitution principle says this, and don't forget about it. And then the next day, maybe
here's a good, you know, don't forget to have good posture as part of good software engineering. And
all of the things that I know, that I learned, that I read about. But then when I'm sitting at my keyboard trying to get
things done, they don't always sink in. Yeah. It's tough. That's why it would be nice if the
compiler could enforce it. That is true. I don't worry about semicolons destroying my programs
anymore. Right. And you probably don't worry so much about data types either. You know,
if you have a strongly typed language, the compiler can enforce the types themselves.
And so you don't have to worry that, you know, I'm expecting a particular type and something else
comes in. But they've made a lot of progress in program verification. So I suspect that tools are on the way. When you started working on CLU,
was that around, actually, when did you start working on CLU?
Well, I invented data abstraction. I did this work with a friend, a colleague of mine, Steve Zilles, and this was in 1972, 73.
And so that was the first step,
was just this idea of a data abstraction,
which people didn't understand what it was.
And that, in fact, is a bigger, in my opinion, a bigger contribution than the Liskov substitution principle
because without the notion of data abstraction,
we don't have the modules that we know today.
Wait a minute, stop there. I invented data abstraction.
Yeah.
I mean, this seems...
Like she said, it's more important.
It is way more important. And I mean, that's amazing. And I was like,
oh, come on, that's been around since computers started. And then you said 1972. I'm like,
okay, yeah.
Yeah. Yeah.
Yeah.
So let me think.
70s, there'd be Fortran then, right?
And COBOL.
There's no data abstraction there.
No data abstraction.
Okay.
Okay.
That is a big leap.
It was a big leap.
And when I got the Turing Award, I went back and I read a lot of the papers
that I had read at the time about programming methodology.
And what I see in retrospect is that the idea of data abstraction is lurking in those papers.
It's just that nobody quite managed to pull it out and say what it was. And I think if I hadn't done it, you know, somebody else would have done it at some point not too far after when Steve and I came up with that original paper.
But it didn't exist before then.
And so what was the impact of that? How quickly did that start getting adopted?
Well, that had a huge impact in the sense that, I mean, everybody, we wrote a paper on it.
Everybody who read that paper knew this was a really important idea because people had been groping.
They wanted to figure out how to do modular program construction, and this gave them a way of doing it.
So it had a big impact.
And what I did next was I worked on CLU, and that took several years, because having an idea of what a data abstraction is, well, that's just an idea. Figuring out the rules and getting it into a language, so that here's how you write programs with it, it actually performs okay, you can really use it in practice, that takes time.
So CLU, what is it based on?
Kind of like a class, except that it had complete encapsulation.
So the implementation that was present inside a cluster was not visible to any code.
There was no way that code outside the cluster could access the details of how you represented the data type. And, you know, what happened with CLU was
finally in the 90s, Java appeared.
So if you think about Java,
Java picked up the ideas of data abstraction
and strong typing from Clue,
and it picked up the idea of inheritance from Smalltalk,
and put them together, and then it moved into the mainstream of computing.
So it was a research project and research often takes many years
before it moves from the research world to the real world.
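[A rough C++ analogue of that complete encapsulation, with an invented IntSet type standing in for a CLU cluster: callers see only the operations; the representation is private and could be swapped without touching them.]

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Data abstraction: clients depend on the operations and their spec,
    // never on how the set is represented.
    class IntSet {
    public:
        void insert(int x) {
            if (!contains(x)) elements_.push_back(x);
        }
        bool contains(int x) const {
            return std::find(elements_.begin(), elements_.end(), x) != elements_.end();
        }
        std::size_t size() const { return elements_.size(); }

    private:
        std::vector<int> elements_;   // hidden representation; could become a hash table later
    };

    int main() {
        IntSet s;
        s.insert(3);
        s.insert(3);                  // duplicates ignored, per the abstraction
        return (s.size() == 1 && s.contains(3)) ? 0 : 1;
    }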
On one hand, I'm sitting here thinking we've always had these things.
We've always had data abstraction.
I remember when inheritance was the big new thing and it was super cool and everybody should use it.
And yet it hasn't been that long.
No, it has not been that long.
The idea that the concept of data abstraction, which now we
build upon, we're like, okay, objects
everywhere. Yep.
What do you think
is the next thing?
Invent something new.
No, I mean, I can't answer that question.
Anyone who can, please contact the show.
No, they're too busy inventing things.
You mentioned you won the Turing Award, and that is significantly different than the Turing test.
Yes, significantly.
And you won it before it came with the lovely cash prizes.
Oh, no, I got quite a bit of money.
Oh, that's nice. Yes. I mean, the award's gotten bigger, but it was still significant.
And do they award it to a career or to a particular idea or to a series of papers?
That's a good question. There has to be a significant contribution that you can point to.
So it's not that somebody has had a distinguished career you can sort of point
out but nothing sticks out. If you look at all the awards, there's always been a specific
thing that they got the award for. So I got the award for the work on data abstraction and the
Liskov substitution principle, and also
later work that I did on replication techniques. But, you know, other examples of awards are
Ron Rivest got one for inventing RSA. And, you know, most recently, Mike Stonebraker got one
for all his work on databases. So, you know, you have to point to specific contributions.
And how did that change your career?
Well, I don't think it did change my career, but it's very nice to be recognized.
Oh, absolutely. Do you think that if the Liskov substitution principle had ended up being called
behavioral subtyping,
you would have gotten as much credit for all of this?
You know, I think that's an interesting point because having your name attached to something does make your name recognizable. You know, data abstraction doesn't have my name on it.
So yeah, but you know, on the other hand, if you work in research, the things that you invent are known to people who are working in the area.
Yeah. And as you said, data abstraction is kind of a bigger contribution as a whole. Although, you know, it is very closely tied to the Liskov substitution principle,
because it does talk about how you can arrange data types in a hierarchy and have it make sense.
Who were your biggest influences as you decided to go into computer science and throughout your career?
Well, that's a very hard question for me to answer because there's no particular person that I can point to.
I think that, as I said, computer science was in a very early stage.
And I was just, you know, reading what people
wrote and thinking about the problems I thought were important. And it wasn't like there was some
specific person that I felt was an influence. Was there some reason you decided computer science was the path for you?
Yeah, I got my bachelor's degree at Berkeley in math.
I decided that I wasn't ready to go to graduate school, so I should get a job.
I didn't know there was such a thing as a computer at that point.
But I was offered a job as a programmer. And as soon as I started working
in the field, I knew it was the field for me. It's a wonderful combination of math and engineering.
And it sort of fit my skill set. And I really enjoyed it.
But then you went back to school and got a PhD.
Well, I worked for a couple of years and I decided that I was essentially self-taught.
You know, my first day on the job, they handed me a Fortran manual and said,
write a little program to do something. I forget what it was, you know, so
I had to learn it all by myself. And partway through this two-year period, I decided that
maybe I could learn faster if I went back to school. Were these the days of punch cards?
Oh, yeah. Punch cards and, you know, the computer room where you would submit your deck and then
maybe you'd get it back tomorrow or the day after.
It's kind of hard to really think about abstraction when your compiles take a couple of days.
Well, there's one advantage to that, which is that you have to think very carefully about what you're going to accomplish in a run. So one of the things I used to tell my students was that you need to think
carefully about what test cases you want to run and what you plan to learn from them. Because you
can waste an awful lot of time sitting at the computer just doing random stuff. Monkey typing.
Which is how most development is done now. That's right. And I, well, I'm not so sure about that,
but I think that there was, there was a discipline in, you know, this was your shot, and so you better think carefully about what you were going to accomplish.
Yes.
Is it possible to have too much abstraction? How do you decide when you've had too much?
I think that there's an art to designing an abstraction. It needs to be as simple as possible, but it also has to have all the right stuff in it.
This is an art.
Yeah.
So, you know, you need to think about what is this abstraction being used for?
What operations does it need to have?
Is it really essential that it have this one, which is a sort of a little change to
something that's already there? Or could you get by without it? And it has to be
what we used to call complete. So you can do everything with it you need to do. But there's
no point in getting it over elaborate, because that just makes it confusing.
Means you write code you didn't have to write, and that's usually money down the drain.
So I'm not sure you can have...
I mean, the other thing is the process of design has to stop, so you have to be making progress.
So you were programming, and you enjoyed it, and you went to get a PhD,
and then you went to be a professor and do research
and teach. Well, actually I first went and worked in research at industry for four years.
Ah, but then you left that. And I mean, programming and being a professor
seem like pretty different jobs. I mean, one's a very introverted job and one has students asking you questions all day long.
Well, of course, when you're working on large projects, you're always working with other people.
So it's not as introverted as all that.
But as I said before, I changed from AI to systems. And those four years when I worked in industry gave me space to make that change.
And not only that, but it was during my work in industry that I started working on what was called the software crisis, which was that programs didn't work.
And millions of dollars and hundreds of man-years would be wasted in project after project,
and then they'd have to announce a failure.
And that's what I started to work on that led to the ideas of data abstraction.
I don't follow.
I've never heard that term before. This is very interesting.
So we probably should resurrect it. Yeah, well, actually, and if you, you know,
even up into the 80s, there were still in the newspapers reports of failures of software.
So it was a really big problem for many years.
Was that with certain classes of applications?
No, it was across the board.
It was just that people didn't know how to build big systems
because they didn't understand how to do modularity.
And that's where data abstraction came in.
That gave you an idea of a pretty big, powerful module,
but one that was separated from everything else
and had a well-defined behavior.
So would you say that the computers got more powerful, but our thinking about how to
build larger systems didn't quite keep up with that at that time? Is that sort of what happened?
So, I used to think about software versus electrical engineering with a little bit
of envy, because when you're doing
stuff with circuits,
you have to have well-defined interfaces and you have to sort of break things
up into pieces and they have to be independent.
But with software, it's completely open. There are no rules.
Nothing is enforced.
You have to make it all up yourself.
So we had to figure out what those rules ought to be and how to define them.
And with electrical engineering, at least, you have physics, sort of.
You have physics, right. And, you know, you've got wires, and you can't have too many wires.
But with data, with programs, you know, they used to be, they used to call them bowls of spaghetti. They were just all a mess with no notion of locality and modularity.
And we think about our programs now. I mean, we talk about spaghetti code, and I've definitely seen programs that seemed impenetrable and unmanageable, but they're still using libraries and modules and data abstraction.
It wouldn't even be possible without that.
So we still have messy code, but it's a different kind of messy now.
Well, I think that you just have to keep plugging away at it.
You mentioned testing as an element in this.
Yeah.
And there's this agile methodology.
Were you involved?
And I think the Liskov substitution principle is mentioned in much of the agile methodology.
It could be, but, you know, I don't actually know much about that because, as I said, I had moved over into distributed systems and I wasn't paying any attention.
Cool.
What do you do in distributed systems?
Well, I started working on replication in the 80s.
At that time, people were starting to develop programs where,
or rather, another way of saying it is
there could be a program that had many servers connected by an internet. And people were starting
to think about using remote file servers. And it occurred to me at the time that
when you had all your files on your own machine, if your machine was down, you couldn't use them.
But if it was up, you knew you could get access to your files. But if your files were sitting on
a remote server, then even if your machine was up, you might not be able to access your files
because the internet wasn't working or the computer where your files were stored was crashed. And so I started to work on replication techniques that would provide better behavior.
You invented Dropbox.
Well, no.
I invented something called viewstamped replication, which is also called Paxos. And it's a replication technique that allows continued service even when
some computers have failed by crashing. And then later I worked on a more complicated kind of
failure called a Byzantine failure where the computer is still running, but it's misbehaving.
And so you might say to it, here, store this
piece of data for me. And it comes back and says, okay, but in fact, it's lying.
And at some time in the future might decide just to forget about you altogether.
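[For a sense of the arithmetic behind these techniques, not the protocols themselves: the standard results are that tolerating f crash failures takes 2f + 1 replicas, while tolerating f Byzantine failures takes 3f + 1. A trivial sketch that just encodes those counts:]

    #include <cstdio>

    // Standard replica counts: crash faults need a majority quorum (2f + 1
    // replicas to tolerate f failures); Byzantine (lying) faults need 3f + 1.
    int replicas_for_crash_faults(int f) { return 2 * f + 1; }
    int replicas_for_byzantine_faults(int f) { return 3 * f + 1; }

    int main() {
        for (int f = 1; f <= 3; ++f) {
            std::printf("tolerate %d fault(s): %d replicas (crash), %d replicas (Byzantine)\n",
                        f, replicas_for_crash_faults(f), replicas_for_byzantine_faults(f));
        }
        return 0;
    }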
So I feel like I've met these computers.
Well, actually, you know, we meet them all the time because when somebody breaks into your computer and brings it down, they can cause it to behave in a Byzantine way.
Yes.
Yeah.
This is all very much cloud computing, but this was the 80s?
Well, the viewstamped replication was in the 80s. The Byzantine fault tolerance was in the late 90s.
We still had computers.
We had networks and computers back then.
Remember, I worked on networks in the 90s.
They just started calling stuff cloud.
That's all marketing.
Yeah, more.
We've been doing this forever. In your CV as of 2009, you had 137 papers and 102 reports. Which one was your favorite?
Oh, geez.
Well, I'm not sure that I have a favorite, but I think some of those papers are more
important than others. So the original paper that introduced data abstraction,
that was a very important paper.
You know, the work I've just been describing on viewstamped replication and then Byzantine fault tolerance, there are quite a few papers in there I like.
I confess.
Are you working on new research these days or are you enjoying a life of retirement?
I am still working part-time on a couple of different projects that I work on with colleagues now because I'm semi-retired and I don't have any students anymore.
So, yeah, still, you know, it's fun.
Oh, yeah.
Yeah.
Have things gone the way you expected?
Or did you not think about things that way when you were doing research?
I would say that I'm somewhat dismayed by the state of the Internet these days.
And I hope that we will be able to come to grips with the not so good uses of technology that we all see around us.
I didn't foresee that stuff.
Although, when you think about it, you know, I think I and the research community were maybe awfully naive.
I think a lot of folks working in tech were, with the exception of maybe a couple of sci-fi authors.
Yes, but of course, sci-fi authors.
Yeah, I mean, it's interesting. It's interesting where we are now. And I worry about, you know, what's going on with artificial intelligence. And I worry about fake news and, you know, a lot of the misuses that are happening on the Internet. We have to come to grips with this new technology and figure out how to keep it under control.
Which parts of artificial intelligence concerns you the most?
Well, you know, I mean, many things are good. So voice recognition is good and so forth. But I think AI can produce fake news.
Yes, definitely.
I think, you know, they can, you can no longer, you know, it used to be seeing is believing.
I don't know that seeing is believing anymore.
So I worry about stuff like that.
And then, you know, there's some really bad stuff like robotic warfare and stuff like that.
So, you know, there are many things about computers that are wonderful. In fact, I went to see Little Women the other day with my granddaughter and my daughter-in-law.
And Joe, the character in the movie, is writing a book on paper with a pen.
And I'm thinking, wow, writing was so much harder then than it is now. I mean, it's really, just think about what was involved
and what happens when you want to make a change.
You know, how much labor was involved in doing stuff
that's really pretty easy to do now using a computer.
We thought we were pretty cool when we invented Whiteout.
That was a major advance.
It was, and you know, the ability,
I remember when I still had a secretary who was typing stuff for me, and she was a genius at cutting and pasting, you know, so you didn't have to retype pages. You could actually move them around. But, you know, what you can do just by working online with an editor is really amazing.
It is amazing that we say cut and paste for code.
I mean, it's control C, control V.
It's what we call them, but it actually used to mean cut and paste.
That's right.
Yes.
Are there science fiction concepts that have come true over your career that you find very interesting?
Well, I don't read science fiction, so I don't.
Oh, okay. That was a terrible question then.
Well, one that doesn't apply to me. Yeah.
Do you have, what do you hope for the future? What do you think will change?
Well, as I said, I hope that we, you know, figure out how to deal with this new technology in a way that's beneficial for mankind and keeps some of the more criminal elements under control.
So that's my hope is that we can do that.
And I think we probably need to teach our students more about ethics and think about when you're designing software, you can think about how, you know, what's good about it, what it's going to do
that's beneficial and not just sort of blindly go in creating something that can be immediately
misused in a bad way.
I think that's something that is just starting to be talked about extensively,
including ethics and CS and engineering curriculum, which would be great.
Yeah.
Yeah.
We do need more ethics and we need more continuing education.
Do you have any good resources for people who have heard more about data abstraction or abstraction in general than they usually do and want to read something?
Well, first of all, you know, there's a lot of online stuff, education online.
And courses are offered at MIT and at many other places.
And so that's a very good way to become educated about things like how do you design and stuff
like that.
Fair enough.
That is the good part of the internet, though.
Yeah, that's right. There's a lot of good stuff. It's just that there's also a lot of bad stuff.
Barbara, are there any thoughts you'd like to leave us with?
Well, I just thought I would tell you a story, which is because it relates to the stuff we were talking about earlier about how, oh, yeah, this stuff really didn't happen all that long ago.
So when I got the Turing Award, one thing I discovered was that my students, in fact,
did not know that there was a world in which there was no data abstraction. But also, my husband was
online every day reading all the chatter that was, you know, people were talking about how I got the Turing Award and
so forth. And one day he saw a comment from somebody that said, why did she get the award?
Everybody knows this already, something like that. And, you know, it wasn't meant in a kind way, but it was actually
an incredibly nice
thing because it was true
the work that I had done
in the 70s
had become so embedded in computer
culture that everybody did know
it by then
it just emerged fully formed in the world
Right.
But, you know, I mean, it's wonderful when your work has that kind of impact.
So that was, yeah.
It becomes the water that the fish just don't even know exists.
Right.
Right.
Yeah.
Our guest has been Professor Barbara Liskov, Institute Professor, Department of Electrical Engineering and Computer Science
at the Massachusetts Institute of Technology. Thanks for joining us again.
Thank you. Thank you to Christopher for producing and co-hosting. Thank you to our wonderful patrons
for Barbara's mic, and thank you for listening. You can always contact us at show at embedded.fm
or hit the contact link on embedded.fm.
Now a quote to leave you with from Dijkstra in his 1972 Turing Award lecture.
The major cause of the software crisis is that machines have become several orders of magnitude more powerful.
To put it quite bluntly,
as long as there were no machines,
programming was no problem at all.
When we had a few weak computers,
programming became a mild problem.
Now we have gigantic computers.
Programming has become an equally gigantic problem.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are Logical Elegance and listeners like you.