ACM ByteCast - Vint Cerf - Episode 9
Episode Date: January 5, 2021

In the latest episode of ACM ByteCast, host Jessica Bell chats with former ACM President Vint Cerf, Vice President and Chief Internet Evangelist at Google, an Internet pioneer widely recognized as one of “the fathers of the Internet.” His many recognitions include the ACM A.M. Turing Award, the National Medal of Technology, the Presidential Medal of Freedom, and the Marconi Prize. Cerf takes us along on an amazing voyage from seeing his first tube-based computer in 1958 to his work on ARPANET and TCP/IP with Bob Kahn, providing a brief history of the Internet in the process. Along the way, he explains how they approached the problem of building a network architecture that scaled astronomically over time. Cerf also points to important social, business, and ethical problems yet to be resolved, and explains why it’s an exciting time to be a student in computing.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery, the
world's largest educational and scientific computing society. We talk to researchers,
practitioners and innovators who are all at the intersection of computing research and
practice. They share their experiences, the lessons they've learned, and their own visions
for the future of computing. I'm your host, Jessica Bell.
All right, welcome everybody to another episode of the ACM ByteCast. Today we have a really exciting guest. Vint Cerf is here to tell us about some of the history of the internet,
his amazing career, and what he's thinking about for the future. So,
Vint, will you please introduce yourself to our audience? Well, thanks so much, Jessica. It's a real pleasure. My name is Vint
Cerf. I've been the Vice President and Chief Internet Evangelist at Google since 2005, but my career
goes pretty far back into the late 50s and early 60s. Awesome. Great. So, yeah, right off on that
point, let's start at the beginning. I'd
love to have you contextualize for our audience, especially for our younger members of the audience,
what it was like to be in the computing field at the very beginning of your career,
and sort of talk about your path of how you got involved, and then how you sort of moved through
this computing world before there was all this
stuff we take for granted, like, you know, all of the TCP/IP stuff that you were so pivotal in,
and the internet and things like that. So yeah, take us back and talk about that
time. So we need some kind of weird audio effects, like, you know, where we go back.
Let's go back to the late 1930s for just a second.
Konrad Zuse in Germany is beginning to play around with computing based on, you know, switching systems.
Things that you would have associated with the telephone network, reed switches and things like that, or vacuum tubes.
When you get into the 1940s, World War II has hit and there is a focus,
attention on computing, partly to do things like ballistic calculations, but most importantly,
of course, code cracking. Everyone knows about Alan Turing and the cracking of the Enigma,
a German encryption system, at Bletchley Park. But I bring this up because John von Neumann was the mathematician
who worked with Turing and others and conceived the basic structure of computing as
we think of it today: a CPU, a bus, and a memory, and things move back and forth across
the bus. So that classic von Neumann architecture emerges in the 1940s and shows up
in commercial quantity with the UNIVAC machine in the early 1950s. So we're seeing tube-based
machines and eventually, of course, the transistor gets invented in 1947 and eventually turns out to
be a replacement for tubes, much more efficient, much smaller. So we start to see transistor-based
machines coming out of IBM, for example. My first introduction to a tube-based computer comes in 1958. I was all of 15 years old.
And my father got permission to take me to visit something called the semi-automated ground
environment, a machine that was physically so big, because it was made out of vacuum tubes,
that you literally had to walk into the computer, inside the building. You walked inside the computer
to use it. So the tubes are glowing red. It looked like Dr. Strangelove, except Strangelove was four
years in the future from the time that I was seeing this thing. There are guys looking at,
you know, 24-inch radar screens. The system, semi-automated ground environment, was taking radar information from the distant early warning radars in the northern part of Canada to detect Russian bombers coming over the pole.
It was supposed to automatically generate an alert when that happened.
So at this point, I don't have access to anything.
I'm just goggling.
But years later, my best friend and I got permission to use computers at UCLA while we were still in high school. So we would commute to UCLA and make
use of a paper tape based machine, a Bendix G15, which we normally use for computer controlled
milling. But we were programming it to do some interesting transcendental function calculations.
You'd type up the program on a paper tape,
feed it in, and it would run for a while and it would punch out a bunch of paper tape and you'd
put that into a flexo writer and print out whatever the answers were. So we both got very
excited about using those computers for that sort of thing. We were all of 16, 17 years old.
Then I went to Stanford University as an undergraduate in the math
department, but I took every computer science course that I could. And when I graduated in 65,
after using Burroughs 5000s and 5500s, which are very sophisticated computers, I went to work for
IBM as a systems engineer. And I ran a timesharing system called QUIKTRAN. Now,
you have to understand that timesharing was invented in the early 1960s at MIT with
John McCarthy and others. So it was fairly new. And the fact that IBM had a commercial
timesharing system running in 1965 was pretty amazing. So I ran that for a couple of years
and realized at the end of the two years that I
didn't have the theoretical base that I really needed to pursue a career in computer science.
I returned to school at UCLA as a graduate student to learn, you know, what's a compiler,
how do operating systems get designed, theory of computation, all those things. But right in the middle of that process of working on my PhD,
I got involved in a project from the U.S. Defense Department called the ARPANET.
Right.
That was the Advanced Research Projects Agency Packet Switch Network that was exploring a way
of hooking a wide range of different brands of computers together over a homogeneous packet switch net.
Now, at the time, this is the late 1960s, packet switching was heretical.
If you were doing any kind of network switching, it was supposed to be circuit switching,
which is the way the telephone system works.
But that would have been really slow.
We were hooking a dozen university computer science departments together
with their machines from all kinds of vendors: Digital
Equipment and IBM and HP and so on. And so, instead of having each machine dial another up when
it needed to send something, which took too long, we introduced this packet switching idea. And
this turned out to be just stunningly successful. It worked very, very well. And by 1971 or 72, networked electronic mail gets
invented as one of the several applications. We could do remote login to a time-sharing
machine on the other side of the network. We could do file transfers. Then we could do electronic
mail. And we could see, emerging out of the electronic mail, the social aspect of that kind of communication.
Distribution lists got created.
The first one that I knew about was called SF-Lovers, because we're all geeks and, you know, we're arguing over who's the best science fiction writer.
The next one was Yum Yum, which was the Stanford University Restaurant Review. And so 50 years ago, we were already seeing the sort of the roots of social networking emerging from email distribution lists.
We saw a variety of things emerging in nascent form.
So that's the early 1970s.
We do a public demonstration in October of 72 in Washington, D.C., of this ARPANET concept.
And then I go to Stanford University to join the faculty with a joint appointment between computer science and electrical engineering.
And in the beginning of spring of 1973, the guy that I had worked with on the ARPANET project, Robert Kahn, had gone from Bolt, Beranek, and Newman,
which is the contractor that built the basic underlying packet switch network,
went to ARPA.
And so he shows up in my office and he says, ARPANET really worked well.
We are thinking of using computers in command and control.
But that means some of the computers have to be in ships at sea and in mobile vehicles.
But ARPANET was running on dedicated telephone circuits, connecting the packet switches together.
So you can't tie the tanks together with wires, because the tanks run over the wires and break them, and the airplanes never make it off the tarmac.
So he had already started working on a mobile packet radio system, which we use today effectively
carrying our mobiles around. But back then, we're talking in the early 70s, this was
amazing stuff. The radio was a cubic foot and cost $50,000 each.
Oh, my gosh.
Imagine how much has happened over the years since. So he had a packet radio
system in the San Francisco Bay Area and a packet satellite system over the Atlantic.
So now we have a problem.
How do we hook the packet radio, the packet satellite, and the ARPANET together in order to make it look uniform?
And that was the Internet problem.
And ironically, about a mile and a half from my laboratory at Stanford is Xerox Palo Alto Research Center.
And Bob Metcalfe and David Boggs are experimenting with Ethernet in May of 73.
And that idea they got from the University of Hawaii, which had been running a program for a few years called AlohaNet.
It was called AlohaNet because you just transmit whenever you want to,
and if there's a collision in the air, this is a radio-based system,
if there's a collision and the data doesn't get to the central computer
and you don't hear an acknowledgment, then you just retransmit.
But instead of retransmitting after a fixed delay,
you randomize the delay so that you don't have another collision.
Aloha is sort of, you know, hang loose, you know, do whatever you want.
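As a rough sketch, the randomized-retransmission idea behind AlohaNet can be written in a few lines of Python. The function names are hypothetical, and the widening delay window is an illustrative assumption rather than the original protocol:

```python
import random

def aloha_send(transmit, max_attempts=8, base_delay=1.0):
    """ALOHA-style sending: transmit, and if no acknowledgment comes back
    (a collision destroyed the frame), retransmit after a *randomized*
    delay, so two colliding senders are unlikely to collide again."""
    for attempt in range(max_attempts):
        if transmit():          # True means an ACK was heard
            return attempt + 1  # number of tries it took
        # Randomize the retry delay rather than using a fixed one.
        delay = random.uniform(0, base_delay * (2 ** attempt))
        # A real sender would sleep for `delay` here before retrying.
    raise TimeoutError("gave up after repeated collisions")
```

The key point Cerf makes: if two stations retransmitted after the same fixed delay, they would collide again forever; drawing the delay at random breaks that symmetry.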
Right. And so Ethernet was a little more sophisticated because it could detect the
collisions very quickly and then stop transmitting in order to make it more efficient. So we had
four different kinds of packet switch nets. Bob and I developed something we call TCP,
transmission control protocol. We then engaged three different groups at Stanford University,
Bolt, Beranek, and Newman in Cambridge, Mass., and University College London to build
the first versions of the TCP protocols. So we're trying to do that implementation in 1975. We
iterate through several instances of the protocol design. We split the internet protocol off from the TCP part
in order to allow for real-time but unreliable communication so we can handle radar traces,
real-time voice and video, which I have to point out was part of our objective. So voice and video
that we're doing, well, we're doing voice right now, but doing voice and video on a regular basis,
we were planning for that. Wow. In the 1970s, we were doing experiments with it in the early 1980s,
but we just didn't have very much capacity to do it.
Right. How did you think about, like, so I was reading an interview of yours and
someone had asked you, oh, well, you know, did you have any idea what this would have become
today? And you're like, well, you know, we designed this network to be really future-proof.
How do you go about breaking down a problem like that to sort of be strong enough to and flexible
enough to accommodate a future that feels like it just exploded into this thing that we call
the internet now? Yeah, how did you break that problem down? Two things. First of all, we made
some fundamental
assumptions. The first one was that we couldn't change any of the networks that were going to be part
of the internet, because they'd already been built. And second, we said, we don't want them to know
that they're part of the internet. So they have to be interconnected by computers. The computers
that interconnect the networks, we called them gateways; today we call them routers. And
those gateways had to know that they were part of the internet, even though the networks they
were connected to didn't. And so the network addressing, the global addressing of the internet
was not known by the networks, but it was known by the gateways and the host computers that were
talking to each other end to end. Second, the end-to-end principle was important. Whatever you put
into the net popped out the other side, no matter how many routers it went through or gateways it
went through. Just like when you throw a postcard into the post office, it may be carried in a
variety of different ways, but it comes out intact at the other end most of the time.
It's a best effort system. And we said, we will make the packet switching system,
the core of the system, best efforts, but we won't make any guarantees. If you need guarantees,
then you have to have an end-to-end process for detecting loss, retransmitting, detecting
duplicates. That's what TCP did. The IP layer and the adjacent user datagram protocol that sat on
top of it was a real-time unreliable service, didn't guarantee
sequencing or anything else, but it was fast and that's good. If you want to know where the missile
is now, you don't want to know where it was 10 minutes ago. So for real-time applications,
we needed that. So the problem sort of dictated what the solutions looked like.
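The loss-detection, retransmission, and duplicate-discard loop Cerf describes can be sketched as a toy stop-and-wait protocol in Python. This is a simplified stand-in for what TCP does, with hypothetical names and a one-bit alternating sequence number; the channel underneath stays best-effort:

```python
def make_receiver():
    """Receiving end: deliver each payload exactly once, discard
    duplicates, and acknowledge the last frame received in order."""
    delivered, expected = [], [0]
    def receive(seq, payload):
        if seq == expected[0]:      # new in-order frame: deliver it
            delivered.append(payload)
            expected[0] ^= 1        # alternate the 1-bit sequence number
        # A duplicate is re-acknowledged but not re-delivered.
        return expected[0] ^ 1      # seq number of last delivered frame
    return receive, delivered

def send_reliably(payloads, channel):
    """Sending end: keep retransmitting a frame until the matching
    acknowledgment arrives."""
    seq = 0
    for p in payloads:
        while channel(seq, p) != seq:   # wrong or stale ack: frame lost,
            pass                        # so the loop retransmits it
        seq ^= 1
```

The network in the middle never needs to know any of this; the guarantees live entirely at the two ends.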
Two very important principles that I think need to be understood in order to realize why this system has been so dramatically capable of scaling and of adapting to and supporting new applications.
The first one is that we didn't put a limit on the number of networks that could be connected, although there were some addressing considerations we had to deal with as the number of networks grew.
Second, we said that the Internet packets won't know technically how they're being carried, just like the postcards don't know.
Right, right.
That was important because since they don't know, they don't care.
And when you add optical fiber, for example, which was not part of the original design, the Internet protocol layer didn't care, didn't know.
Right, right.
All it knew is that
it just got dumped down into some network that was going to carry the internet packets. The second
thing though, equally important is that the packets don't know what they're carrying,
just like a postcard doesn't know what you wrote on it. And the consequence of that is that if you
introduced a new application, the only places that needed to know what the bits meant in the
packets were at the edges of the net, where the applications were, not in the core of the net.
So the net is actually application ignorant. You could have made the applications more efficient
if the network knew about the details, but we didn't want that because we didn't know what
the applications were going to be over time. And we didn't want the network to constrain the applications.
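A minimal sketch of that layering idea in Python. The names and the JSON "application" are hypothetical (real IP headers are binary, of course), but the shape is the point: the forwarding function reads only the header, while the payload stays an opaque byte string that only the two edges know how to interpret:

```python
import json

def encapsulate(src, dst, payload: bytes) -> dict:
    """An IP-style packet: a header the routers read, plus a payload
    the routers never parse."""
    return {"src": src, "dst": dst, "payload": payload}

def forward(packet, routing_table):
    """A gateway (router) consults only the destination address; the
    payload passes through untouched, so brand-new applications need
    no changes in the core of the net."""
    next_hop = routing_table[packet["dst"]]
    return next_hop, packet

# Only the two edges agree on what the bits mean:
def app_encode(obj) -> bytes:    # a hypothetical new application...
    return json.dumps(obj).encode()

def app_decode(data: bytes):     # ...deployed without touching any router
    return json.loads(data.decode())
```

Swapping in a different `app_encode`/`app_decode` pair changes nothing in `forward`, which is exactly why the network did not constrain future applications.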
And you can see from the origins of the ARPANET at 50 kilobits a second in the backbone
to the present-day internet, whose core backbones run at 400 gigabits a second
and will go on to a terabit in the next year or two.
The system has scaled by a factor of roughly 10 million, 6 or 7 orders of magnitude.
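A quick back-of-the-envelope check of that scaling figure in Python:

```python
import math

arpanet_bps = 50e3     # ARPANET backbone: 50 kilobits per second
backbone_bps = 400e9   # modern core links: 400 gigabits per second

factor = backbone_bps / arpanet_bps
orders = math.log10(factor)
print(round(factor), round(orders, 1))   # 8000000 6.9
```

So 50 kb/s to 400 Gb/s is a factor of about 8 million, just under 7 orders of magnitude.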
It's very rare to see an architecture that can do that.
The number of applications, the number of protocols now is in the hundreds.
With the arrival of the World Wide Web in December of 1991, a new layer of protocol was put on top of TCP/IP by Tim Berners-Lee.
And that opened up a whole new batch of potential applications. They were enhanced by Marc Andreessen
and Eric Bina at the National Center for Supercomputing Applications in 1993 or so,
when they said, why don't we make this a graphical interface
instead of a text interface, which is what Tim had produced. So the graphical user interface of
the Mosaic application, the Mosaic browser, was a stunning achievement because it transformed the
network. It suddenly looked like a magazine with formatted text and imagery, eventually streaming
audio and video. Jim Clark, who was the founder of Silicon Graphics, takes one look at Mosaic and
says, holy crap, that's a big deal. He brings Andreessen and Bina to the West Coast and starts
Netscape Communications in 1994. In 1995, they go public. This is very unusual to go public after a year. The stock
goes through the roof. Suddenly, everybody's throwing money at anything that looks like it
might be part of the internet. That's the big boom. Then in April of 2000, there's a big dot
bust when a whole lot of those companies didn't have a business model. Capital, which they ran
out of, and theo scratching their head saying what
happened and the answer is you know capital is finite revenue is supposed to keep going if you
have a business model and some of them didn't feel familiar at all does it so so they you know they
sort of just flap that fell on their faces but the worldwide web and the Internet continued to expand dramatically. People kept throwing new content into the web, not to get paid for it,
but simply because they wanted to know that what they knew was useful for somebody else.
Right, right.
So now we're awash in information, and we can't find anything, which promotes the search engines.
AltaVista originally from Digital Equipment Corporation, and then
Yahoo, and then Google, and Bing, and, you know, others. So, you know, here we are; that's
the 1990s. And then along comes the mobile phone, the smartphone. Right. That has a long
history, which we don't have time to talk through. But it started in 73 with Marty Cooper at Motorola, the same
year Bob Kahn and I are starting to work on the internet. So Marty's working on mobile phones,
we're working on internet. And my son is born in 73 and he wants to know whether he's the brother
of the internet. And everybody says, okay, so you and Bob are fathers of the internet. So
who are the mothers? Another long half hour conversation.
Yeah, yeah.
What's important, though, is the milestone of the smartphone coming from Steve Jobs.
The reason it's so important is that the two technologies, mobile telephony and Internet, had been going in parallel for quite some time.
Suddenly in the smartphone, they come together and they are mutually reinforcing.
So the smartphone makes it possible for you to get access to everything on the internet,
wherever you are and can find a radio link. And of course, the smartphone makes the internet
more useful because you can get to it from anywhere you can find a link. So the two are
dramatically powerful. And we see that today as we see smartphones proliferating around
the world. People experience the internet primarily through applications on the smartphone.
Right, right. And I think that brings us now back to the present day and thinking about this
extremely powerful network that has now been connected to us in so many different ways. Like you said, it's in our pocket.
We can deal with it all the time. I'm curious to hear what you think the major challenges and
problem spaces are around this network today. Do they feel very similar to the challenges and
problem spaces when you were starting out to think about this or they feel new and different and yeah,
sort of speak about what you think is our next big hill to climb.
It's a very big hill. At the beginning, even though this was being done for the Defense
Department, we were not focused heavily on security technology and cryptography like that.
Now, I will say as a side observation that I was working with NSA in 1975 on a secure version of the net
using classified technology to secure the packet switch system.
But in the commercial sector or in the university sector,
I can't imagine relying on the graduate students to be disciplined about their use of cryptographic keys and other
kinds of things. So we didn't try to do that. So security was not an afterthought. It's just that
the technology of public key crypto wasn't available in the earliest periods when we were
doing the design. It didn't become available until somewhat later in the mid to late 1970s
and early 80s. But we retrofitted it in. That's why we have SSL and we have TLS and
we have IPsec and DNSSEC, and all these other things are retrofittable. So that's the good news.
Security is still a big challenge. And I want to come back to that later in the conversation about
why is security such a big problem. Second thing is information in the internet and misinformation and disinformation
and the side effects of social networking, which have built in feedback loops, which lead to some
fairly serious problems associated with people's behavior. Right, right. And this is just a major problem that we are experiencing right now,
because it's very hard for an algorithm to figure out that someone has spoken an untruth,
for example, or misrepresented something. And so we are now challenged by the social
networking environments to figure out how do we protect users from the harmful side
effects of social networking, some of which are by accident, you know, people spreading
misinformation because they don't know any better. Or worse, they spread misinformation
and disinformation deliberately, whatever, you know, is motivating them. It might be political, it might be pecuniary.
And so we, and of course, scientists will tell you that whatever you think is true now may not
be true 10 years later when you discover your theory was wrong. Right, right. So we
are challenged right now by the harmful hazards that show up in the net, including malware and
distributed denial of service attacks and
other things: identity theft, bullying, and so on. So now the reason
that's such a big problem is that governments are looking at this saying, oh, our citizens are at
risk. We need to do something. And who do we blame? And who do we make responsible for fixing everything? And you see the companies that offer social networking being hammered on by members of Congress here in the U.S. and Parliament.
You also are starting to see fragmentation of the Internet where nation states are drawing boundaries around the network, claiming they have data sovereignty inside of their countries. They want to introduce rules that would frankly make the internet not work very well because now
data transfer from country to country is no longer easily accomplished. There are rules that
are in conflict as you cross from one international boundary to another. So
you start to hear a call for digital cooperation among countries in order
to find a way for the countries to cooperate with each other and come to common agreements
of how they will deal with abuse on the network, how they'll deal with law enforcement, how they
might deal with extradition treaties and other kinds of things, how to deal with digital evidence,
how do we establish a chain of
custody of digital content? How do we assure that digital evidence hasn't been tampered with?
You can easily extrapolate to a wide range of problems that already exist in the physical world
and have their counterparts in the online world. And we're struggling to figure out how to cope with those
in a way that doesn't just essentially fragment the internet into a useless collection of islands.
I would argue that we've already seen how powerful the internet can be in terms of enabling people
to share information, discover information, to be educated, to do just-in-time learning,
like going to YouTube and saying, how do I cook Chinese eggplant?
Right, right.
So there is this huge upside and this very difficult downside, and that is the big arm
wrestling match that we have to cope with.
And I would say that it also introduces one of the big problems.
Why do we have so much trouble with safety and security and reliability?
The answer is bugs in software.
Now we are faced with the problem of teaching people who are interested in computer science or just want to use it,
that if they're going to build software that others are going to rely on,
they have to take responsibility for and be accountable for the mistakes that they make. So we have to create incentives and we have
to create technology that will help programmers discover stupid mistakes before they get out into
the field. That won't be perfect either. So then we also have to build in mechanisms for updating
software safely and securely.
We have to know where the update came from.
We have to know that it hasn't been altered on its journey from the source to the destination.
It's especially important with the Internet of Things where you have boxes all over the place,
full of software able to communicate in addition to compute.
Right.
And if there are bugs, we need to be able to fix them.
There are questions about how long will they be supported? What if this is a heating, ventilation,
and air conditioning system with a lifetime of 30 years? Will the IoT aspect of it be supported by
the manufacturer for that period of time? And if not, how do you get your hands on the source code
to fix something after they
say, we don't want to support it anymore? Right, turn to a third party. And I haven't even gotten
into the intellectual property problem, and how to store, you know, digital content for hundreds of
years, which is yet another huge area of concern. Yeah, yeah, yeah. And I'm curious, you
touched briefly upon where the responsibility lies in creating this code. You know, is it the responsibility of the manufacturer, of the programmer, of the company, of the nation state? I'd be curious how you think this then ties back to our education of the kinds of people who are developing these technologies, writing code, you know, dealing with these problem
spaces. How do you think these big challenges are affecting or shaping the way that we're
thinking about computer science and computer science research now? So two ways. First of all,
I think everyone should have the experience of writing a program, discovering how hard it is
to write a program that doesn't have a bug, and how hard it is to find the bug
and fix it. So that forces you into a certain mode of thinking. Sometimes we call it computational
thinking. It is critical thinking. It is break down the problem, find evidence, compare that
with your theory. It's very scientific. And I think everyone should have that experience,
not necessarily because they're going to be programmers, but because it establishes a modus operandi, which will serve you well in a wide range of
disciplines, a wide range of jobs. So that's one thing. The second thing is better tools.
The research community needs to help us build tools that will help us track down bugs or avoid
making mistakes that are exploitable. So that's a big research thing.
And the third thing is to figure out where and how accountability should be applied
so that we don't lose the value of the enabling power of computers
in our zeal to protect people from harm.
We want to also provide them with the enabling power of computing to let them do things
that a human being could not normally do. So when you do a search, you're doing something no human
being could do, because the scale of the search is so big. When you do translation
among 100 different languages, very few people speak 100 languages, as far as I know.
And even if the translations are not perfect, they enable you to do something that you would not otherwise be able to do,
which is to get some useful information, even if it's not precisely right, or at least the gist of something.
And, you know, what about things like real-time transcription, so that people who are deaf can see what's being said?
Cochlear implants, which are yet another way of neural interfacing to electronics.
The field of computing and electronics is an endless frontier.
You're limited only by what you can figure out how to program.
And from my point of view, this is a
fantastic field to be in, but it does have some real challenges, on the ethics side and on
the technical side and on the business side, that will be rich territory for students to
contemplate as they try to figure out their place in the economy.
Yeah. And thinking about that, as we wrap up our time together, I
always like to hear from guests, especially guests who've been as pivotal and involved in the
creation of the internet as you: what keeps them so excited? What are you just really pumped to hear
about in the next, you know, 5, 10, 15 years of computing? What keeps you here and,
you know, continues to fuel your passion?
Well, we haven't talked about artificial intelligence and machine learning, but it's proving
to be an incredibly powerful tool. Multi-layer neural networks are doing things that we
used to think were not possible. Moreover, they also make mistakes. And so figuring out why and how they make mistakes
and being able to anticipate that and plan around it is a super important thing because some of
those mistakes could be fatal, literally. Right, right. Self-driving cars being an obvious example
of that. So that's one thing. The second thing from the network point of view, which is my world,
primarily, we've already gone off planet.
Starting in 1998, we began thinking about how to design and build an interplanetary Internet.
NASA, the Jet Propulsion Laboratory, and now the other space agencies like ESA and JAXA and the Korean Space Agency have been working for the last 22 years to standardize a set of protocols that will work at interplanetary
distances, unlike TCP/IP. We now have standardized
interplanetary protocols on board the International Space Station, and we have prototype software running on Mars right now
in the rovers, and they will be available for the return to the moon in 2024. So as we send out more scientific spacecraft,
as they complete their scientific missions,
we can repurpose them to be nodes
of an interplanetary backbone.
So for me, this is like the beginning
of a fantastic science fiction novel.
Yeah.
I'm not seeing the end of it,
but I'm having a ball at the beginning.
Yeah, yeah.
Well, Vint, I want to thank you so much
for giving us your time.
This was a wonderful conversation. I wish we had many more hours to talk about all these
different rabbit holes we could go down. But thank you so much for being with us today.
It's a real pleasure, Jessica. Thanks for taking the time to chat. I look forward to
another opportunity someday. Awesome. Thanks so much.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioners Board. To learn more about ACM and its ByteCast, visit learning.acm.org.