Advent of Computing - Episode 10 - Networking for a Nuclear War, the Americans
Episode Date: August 11, 2019. In this episode we are going to explore the ARPANET. This is a companion to the last episode, which covered contemporary Soviet attempts to create an early internet. Like last time, today we are still in the Cold War era. Now, this won't be a point-by-point comparison of Soviet to US networks. They are totally different beasts. Instead, what I want to do is look at how ARPANET was developed, what influenced it, and how it would kick-start the creation of the internet.
Transcript
On October 29th, 1969, the first message was sent over what would one day become the internet.
At this point, the entire network was called ARPANET and it consisted of just four computers, spanning from UCLA to Stanford Research Institute, back to University of Utah and over all the way to UC Santa Barbara.
Traveling all the way from UCLA in, well, Los Angeles,
to SRI at Stanford, the first message was just two letters.
L-O.
I know, it's not a very exciting start to the network that we now know and love.
The plan was to test out this new network by logging into a computer at SRI from a terminal at UCLA.
And to do that, they had to tell the computer to log in.
The L went over just fine, so did the O. But when the letter G was sent, the computer at
Stanford decided that this was just all too much for it to handle and crashed.
Behold the beauty of the internet. While this is far
from an impressive first show, it does portend what is to come. We all know the end of the story.
The world would end up being connected with a single, unified network. But how about the
beginning? ARPANET would end up networking a whole nation, but how did it start, and what was its original purpose?
Welcome back to Advent of Computing.
I'm your host, Sean Haas.
This is episode 10, Networking for a Nuclear War, The Americans.
This is going to be a sort of companion episode to the last episode, where we looked at attempts to network the Soviet Union. So if you haven't
already, I suggest you go and check out episode 9. Just like last episode, today we're still going
to be in the Cold War era. Now, I won't be doing a point-by-point comparison of Soviet to US network technology since they're
totally different beasts.
Instead, what I want to do is look at how ARPANET was developed, what influenced it,
and how it would kickstart the creation of the internet.
And before we get too far into this episode, I want to reiterate a point that I made last
time.
This isn't really the birth of the internet we're talking about.
There isn't one conclusive event that is the birth of the internet. Instead, things like ARPANET are
a step towards our modern networks. Now, in the case of ARPANET, this was a big step, but there
were steps before it and there would be steps after it. And while ARPANET may be the groundwork
for what would become the internet,
it would be unrecognizable to an internet user today. So what are some key traits of the internet?
Well, it's open access. Anyone with a computer can get onto the network. It's civilian. The military
or the government doesn't run it. And it's distributed. That final descriptor may be new
to you, so let me take a second to explain it. A distributed network means that the operations
of the system are spread over many computers instead of being centralized in one or a handful
of machines. Distributed networks are advantageous for a few reasons. They're redundant. If a computer goes down, the rest
of the network can stay online. They're also scalable. If you need more storage or computing
power, you just add more computers to the network. This leads to the question, where
did these ideas come from? The internet didn't just pop up fully formed. So let's get into
the context around the creation of ARPANET and how it set
the stage for a recognizable online experience. The first large-scale network in the US was called
SAGE, the Semi-Automatic Ground Environment. This was a military defense network that started way
back in '58. SAGE connected a series of radar stations, computers, and terminals. This was
to create a detection perimeter for an imminent Soviet attack. As you can guess, this was a
single-purpose network, and it was purely military-owned and operated. For this story,
though, the network itself isn't that important. What matters is one of the researchers who was largely involved with
the project, J.C.R. Licklider. Now, Lick is someone that I've known would come up on this podcast
since day one, but so far he's been waiting in the wings. As our story goes on, I'm hoping you'll
understand why his influence on the field of computing is a little bit too large to contain in any one topic.
But for now, just know that he will end up having a hand in most of the influential advances made
during this era. But in the mid-50s, he was a professor of psychology at MIT. While at the
university, he became involved with Project SAGE,
and worked as the head of an internal team to handle human-computer interfaces.
Around this time, leading up to his work at SAGE, Lick started to become what we
would now think of as a computer scientist. Keep in mind that we're still in the 50s.
This is very early in the story of computers.
The idea of a computer scientist wasn't yet set in stone,
so computer researchers in this era had a wide spread of backgrounds and motivations.
In the case of Licklider, he was interested in the idea of computer-human symbiosis,
or how to use computers to better people on a personal level.
Already, we start to see a different approach to computing than Lick's Soviet contemporaries.
Cyberneticists like Kitov and Glushkov looked to computers, and largely networking,
as a means to enact reform for their nation, whereas many American computer scientists saw computers as a way to
improve a person's capabilities. After his work on SAGE and some intervening years working at
other computer research gigs, Licklider would find himself at the nexus of our story, ARPA,
the U.S. DoD's Advanced Research Projects Agency. Today, the agency is better known as DARPA,
and they are responsible for working to develop cutting-edge technology for the U.S. government and Defense Department.
Founded in early 1958, ARPA was a direct response to the Soviets' 1957 launch of Sputnik, the first man-made satellite.
This is the Cold War, after all, and the US didn't want to fall behind technologically.
This meant that ARPA was not only well-funded, but heavily backed politically.
Lick would join ARPA in 1962 as the head of their Information Processing Techniques Office.
This was an internal department within ARPA responsible for advancing
computer science in the states. At this new government job, Lick would have all the access
and funding he needed to enact his vision for a future of computing. Now, he would separate that
vision into three interconnected goals. Goal one was to start computer science departments and degree programs
at U.S. universities. Two, to create time-sharing systems. And three, to create a nationwide network.
Point one is relatively simple. To make any advancements sustainable, you're going to need
a way to make future generations of researchers.
There's not going to be one scientist who lives for the next hundred years.
And especially in this era, you needed some way to make computer scientists, since they didn't really exist yet. Point two boils down to the creation of systems for sharing large computers
amongst multiple people. If you want
to learn more about that, I have a few episodes on the subject in the archive. Check out episode
four and five. The crux of that matter was finding a way to quickly switch between different jobs on
large computers to make it seem like a lot of people could each have their own smaller systems. And the final point is, well, the creation of some kind of widespread computer network.
None of these ideas on their own were new.
Timesharing, for instance, had been suggested as a goal for researchers going back to the early 50s.
But what was new was the combination of these ideas into a single long-term plan.
Putting it all together, you get a self-sustaining system, powerful computers that can be used by many people at once, accessed from anywhere over a network.
That access to computing power is key, because it's used to increase the amount of work and research that any one person
can accomplish. In turn, that means better computers and networks can be designed and created.
And by having computer science programs at colleges, you have a home for this kind of research,
and, hopefully, a source of ongoing funding. The other large innovation that Lick spearheaded
was the idea of general-purpose networks.
Earlier military networks like SAGE were only designed for one purpose.
They were just for military defense and for sending signals about incoming attacks.
We should keep in mind that this shift from single-purpose to general-purpose networking was a revolution.
But it wasn't just happening in the states.
Concurrent to this, cyberneticists in the USSR were also proposing general networks.
But Lick and his cohort didn't know about this parallel evolution that was taking place.
All of this sounds great in theory, and luckily, Lick wouldn't have much trouble getting his plans approved; anything that did need approval, he would get quickly.
So that just leaves the tricky part of figuring out how exactly to implement a nationwide network.
In the Memorandum for Members and Affiliates of the Intergalactic Computer Network, Lick would write,
It seems easiest to approach this matter from the individual user's point of view,
to see what he would like to have, what he might like to do,
and then to try to figure out how to make a system within which his requirements can be met.
End quote. I think this cuts to the heart of his philosophy in the years leading up to ARPANET.
How can a network best help someone on a personal level,
and how can we get that to happen?
And a lot of his contemporaries would share this view.
But all this good hope and human-centric ideas
wouldn't solve the brass tacks of designing a network.
And while Lick and his confidants had designs on a lot of the details, there would be outside architects.
Now, I know things are becoming an alphabet soup of government agencies.
I mean, we have ARPA, DOD, IPTO, but I need to introduce one more agency, and then we should have all the major players on the board.
That organization is RAND.
And while technically a private company, it was actually established to carry out research for the US military from within the civilian sector.
Started in 1948, RAND established itself as a useful tool in America's Cold War arsenal.
And one of those useful projects was networking.
Specifically, researchers at RAND were looking at how to maintain command and control functionality
during a nuclear strike.
There were a few different ideas on why and how to implement such a network, but they boil down to something like this.
In one camp, you have the plan to be able to launch a counterattack and maintain control of armed forces during a Soviet surprise attack.
The other side of the argument was a lot less bellicose.
Having a reliable communications grid during a disaster could prevent a dangerous
mistake. With nuclear arms on the table, no one really wanted to go to war, since that would
result in mutual destruction. But at the same time, both the US and the USSR were jumpy about
the possibility of an attack. And with the possibility of natural disasters, mistakes,
or just plain systems errors causing false alarms, it became increasingly important to communicate.
The thought being that if a false alarm did come in, it would be better to be able to confirm the
situation instead of acting on bad information, because any mistake or false step would result in a nuclear conflict.
In fact, some researchers would end up suggesting that the US share their eventual network designs with the USSR
so that both sides would be better equipped to avoid this deadly mistake.
RAND's networking research started in 1959, headed up by a man named Paul Baran.
Prior to being employed at RAND, Baran had worked as an electrical engineer on the
UNIVAC project, one of the first commercially available computers.
Over the next five years, a plan for a fault-tolerant network would start to
take shape, and a lot of the decisions that
Baran would make in the early 60s end up being visible in the networks of today. His plans and
designs for a nationwide network were laid out in a massive document spanning 11 volumes.
Looking at the massive report, one of the first and most noticeable departures from earlier networks was its
structure. Baran's network was distributed, just like the modern day internet. If you look at a
map of the network, you'd see it laid out like a mesh, where every computer on the system is
connected to multiple other nearby computers. This means that there is more than one way to
connect any two computers on the network.
These are called redundant routes.
The reason for this is laid bare in Baran's report.
The high redundancy of a distributed network gives it a better chance of surviving an attack.
The entire point of a distributed network in this report is to protect communication,
even when most of the network has been destroyed.
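Baran's survivability argument is easy to sketch in code. The five-node mesh below is a made-up toy topology of my own, not anything from his actual report; the point is just that a breadth-first search can still find a route after a node has been knocked out:

```python
from collections import deque

# A toy mesh: each node links to several neighbors, so there is
# usually more than one route between any two nodes (hypothetical topology).
mesh = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "E"}, "D": {"B", "E"},
    "E": {"C", "D"},
}

def reachable(net, start, goal, down=frozenset()):
    """Breadth-first search: can start still reach goal with some nodes down?"""
    if start in down or goal in down:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in net[node] - down - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(reachable(mesh, "A", "E"))              # True, via A-C-E
print(reachable(mesh, "A", "E", down={"C"}))  # still True, rerouted A-B-D-E
```

In a centralized network, by contrast, knocking out the single hub would cut off every pair of nodes at once.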
To put it in Baran's words, quote, it appears theoretically possible to build large networks
able to withstand heavy damage whether caused by unreliability of components or by enemy attack,
end quote. But those aren't the only new designs here.
Baran goes on to suggest that the network be entirely digital and use existing communication infrastructure, such as telephone and telegraph lines.
These two points go together because, well, it wasn't exactly set in stone that the best way to communicate would be over digital messaging.
But it turns out that digital has some big advantages when it comes to sending messages over long distances when compared to analog.
And one of those is error tolerance.
Since a digital signal is either off or on, there's less gray area that can get distorted over a noisy communication line.
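That error tolerance is easy to demonstrate. The toy Python below is my own illustration, with a made-up noise level: bits are transmitted as two voltage levels and snapped back to 0 or 1 at the receiver, so as long as the noise stays under half the gap between levels, every bit survives intact:

```python
import random

def send_digital(bits, noise=0.3):
    """Transmit each bit as a 0.0 or 1.0 level, distorted by line noise,
    then recover it by thresholding at the halfway point."""
    received = []
    for bit in bits:
        level = float(bit) + random.uniform(-noise, noise)  # noisy line
        received.append(1 if level > 0.5 else 0)            # snap back to 0/1
    return received

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(send_digital(message) == message)  # True: noise under 0.5 can't flip a bit
```

In an analog system the distorted level itself would be the data, so the same noise would come through as distortion directly.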
If you've ever made a phone call, which is usually analog, then you should be familiar with the idea of a noisy communication line. Another quote from Baran that sticks out to me is this, quote,
It is appropriate to redesign user input-output instruments, such as telephones and typewriters, for the described system in order to gain the full benefit that accrues to an all-digital communication network.
Now, that's just one sentence, but there's a lot in it, especially if you've been reading into as many early 60s CS papers as I have lately.
The surface of this is simple.
We're going to need to update a lot of stuff to take advantage of this cool new digital network.
The telephone point is something oddly specific in Baran's report that keeps coming up over and over.
In another part of the paper, he talks about the network being able to transfer
real-time voice data. Today, we know that as VoIP, or Voice Over Internet Protocol.
The other part of this quote that sticks out to me is the word benefit. Remember,
most of this proposal is talking about a network designed to survive a nuclear conflict.
But Baran talking about the benefits to be had from better human interfaces is an exact mirror of Lick's idea of computer-human symbiosis.
There's one other key aspect of this network, and that's how it actually sends data.
Specifically, it uses a method called packet switching.
This is where any message that's sent over the network is broken down into smaller packets of data as it's sent,
then travels as packets, and is reconstructed when all the packets are received.
The kicker that makes the idea of packets go so well with a distributed network
is that each packet can take a different route from sender to receiver.
This means that if part of the route a message is planned to take is damaged, it can easily be shifted onto another route without losing any data.
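A minimal sketch of that idea in Python (my own toy example, nothing like the actual ARPANET packet formats): the message is split into numbered packets, which can then arrive in any order and still be put back together:

```python
import random

def packetize(message, size=4):
    """Split a message into numbered packets so each can travel independently."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message once all packets arrive, whatever the order."""
    return "".join(data for _, data in sorted(packets))

packets = packetize("LOGIN AT SRI")
random.shuffle(packets)     # packets may arrive out of order
print(reassemble(packets))  # prints LOGIN AT SRI
```

The sequence numbers are what let the receiver undo whatever detours and reorderings happened along the way.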
When you put all these factors together, a distributed network sending packets of digital data, you get, well, the current infrastructure of our internet.
It turns out that Baran's networking design is nearly a one-to-one match for what the internet looks like today.
One of the chief differences between this RAND document and what would become ARPANET was the intent.
Baran had designed a military network to survive a nuclear conflict.
ARPANET was going to be based in part on Baran's plans, but in the end it wasn't
going to be purely military and it wasn't going to be used during a nuclear war.
Despite the differences between the final network and this earlier design, I think this
is an important part of the story. It underlines the time and place that the internet came out of,
the depths of the Cold War. And even though there would be changes and other contributors,
it's plain to see how the spirit of this era remains in our modern network technology.
So that sets up all the pieces. We have Lick and his team at ARPA providing the
ideological framework for a new network and the funding, and we have Baran coming from RAND with
concrete plans for how to make a military network. So how do the two halves come together and what
does ARPANET end up looking like? Luckily, a route for this was already in place.
Remember that one of Lick's plans was to set up computer science departments at U.S. universities.
Well, that was about to pay off in a big way.
In 1964, Lick had already departed from ARPA, but he had left a mark on his successors.
By 67, a new face was on the block, Lawrence Roberts.
Prior to his ARPA career, Roberts had been working as a researcher at MIT investigating
long-distance networking. Specifically, he had been working on a packet-based network
connecting a mainframe at MIT to another computer in Santa Monica. He was scouted by ARPA scientists while
presenting his work at a conference and was quickly brought into the government's fold.
This was an opportunity too good for ARPA to pass up. Roberts was one of the first researchers in
the states to create a working network of this kind. Over the next few years, Roberts would be tasked with turning the idea of an American network
into the actual ARPANET. He was already well familiar with Licklider's work, so ideologically,
he was a good fit for the existing nebulous plans. The interesting part of this contribution to our
story was his ability to combine the ideology with the real. Part of that was working with Paul Baran's earlier plans for a distributed network.
Baran himself would end up contributing greatly to the final plans for ARPANET.
As I mentioned, Roberts had practical experience with creating a long-distance network.
But there is a slight complication.
It turns out that Baran and Roberts weren't alone in their work with packet switching.
Across the ocean in the UK, a researcher named Donald Davies, a computer scientist at the
National Physical Laboratory, was also building a network. Davies' work would not go unnoticed by
ARPA, and while not joining the team directly, a lot of his ideas and contributions would work their way into ARPANET.
The crux of Davies' invention involved the idea of packet switching, or how the packets that make up a message are routed around in a large distributed network.
It turns out that managing how best to route packets can take quite a bit of computing power, which could become a problem.
Davies was able to put a final piece in the designs for ARPANET. He was the first to propose
a solution to this issue, which in retrospect seems pretty obvious. He was the first to think
up the idea of a special purpose device just for routing packets. In the end, these devices
would end up being small mainframes in their own right. In ARPANET, these were called IMPs,
or Interface Message Processors, but today, you may know of them as routers.
The addition of IMPs to the network was one of the last big steps towards a finalized plan.
By having specialized machines to handle the heavy lifting, you free up a lot of power from other computers on the network.
It also pushes you towards a standardized interface, since each computer on the network only has to know how to talk to an IMP, and not to every other computer in the world.
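A quick back-of-the-envelope calculation shows why that standardization matters. With direct connections, every pair of hosts needs its own arrangement, which grows quadratically; with IMPs in the middle, each host only needs the one host-to-IMP interface:

```python
def direct_pairs(n):
    """Hosts speak directly to each other: one arrangement per pair."""
    return n * (n - 1) // 2

def via_imps(n):
    """Hosts speak only to an IMP: one interface per host."""
    return n

# At 4 hosts the difference is small; at the 61 nodes ARPANET
# eventually reached, it is dramatic.
for hosts in (4, 61):
    print(hosts, direct_pairs(hosts), via_imps(hosts))
```

This is the same reasoning that still underpins routers today: hosts implement one network interface, and the routing machinery worries about everyone else.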
By 1968, under the direction of Roberts,
with help from Baran, Davies, and countless other researchers, the designs for ARPANET were finished.
A series of IMPs would be put up around the country at universities. These computers would serve as the routers to let mainframes at each campus connect up to the larger network.
The next year, 1969, construction of the network would begin. The first IMP was installed at UCLA,
followed shortly by the Stanford Research Institute, the University of Utah, and UC Santa Barbara.
University computer science departments ended up being the perfect homes for the first stages of ARPANET.
Licklider had already fostered a strong relationship between ARPA and many of the larger universities
in America.
But beyond that, universities were some of the only places during the 60s that really
had computers that could be put to use on cutting edge research, and ARPANET was the
very cutting edge at this point.
After the first four installations, ARPANET would only grow. After just seven years,
the network would expand to cover 61 IMPs, spanning ARPA, military, DOD, and academic
systems. The network truly covered the nation at this point, from Hawaii all the way to Massachusetts.
Part of the reason for this growth was simply the money behind it. The program was fully
government-backed. Part of the adoption was convenience. ARPANET enabled fast and easy
and, most of all, secure communication like never before. In fact, a large amount of the early traffic on ARPANET
ended up being email.
But more than all that, ARPANET was enticing because it was groundbreaking.
It was able to reliably connect computers for the first time.
And that really goes back to Licklider's vision.
Networking wasn't ever an end unto itself.
Instead, it was another tool to bring humans and computers closer together. A single person
at a single keyboard can get a lot done, sure. But when you expand that out to a whole network
of people working together, you suddenly get to a point where the whole is greater than the sum of the parts.
And that's the true power of the internet.
So I think it's about time to wrap this episode up.
ARPANET would continue on into the late 80s, finally being discontinued around 1990.
While impressive as both a feat of engineering and cooperation,
it was still missing a key feature of the future internet, openness. ARPANET's use was limited to
people working at institutions, be those government labs, the military, or universities. To quote from
MIT's Getting Started at the AI Lab, an orientation handout for working at one of MIT's larger computer labs,
quote,
It is considered illegal to use the ARPANET for anything which is not in direct support of government business.
Sending electronic mail over ARPANET for commercial profit or political purposes is both antisocial and illegal.
By sending such messages, you can offend many people,
and it is possible to get MIT in serious trouble with the government agencies which manage the ARPANET.
Doesn't really sound like the same world we know online today.
In the 80s, purely civilian networks would start to form, and eventually surpass ARPANET.
But a lot of the technology used by ARPA remains in our networking infrastructure in the modern day. So, going back to last episode, why did
ARPANET succeed while Soviet attempts like OGAS and ESS failed? I think it comes down to the stated
goals. A network like OGAS was billed as a computer revolution. It was set to digitize
all of life and governance in the USSR. Remember that all three of the network proposals I covered
last episode were intended to save the national economy of the Soviet Union by automating it.
On the other side of the Iron Curtain, ARPANET was sold as a tool. True,
there were revolutionary ideas baked into the network, but it was never sold as a revolution
itself. In the end, I don't think anyone in the world could have guessed how revolutionary a
worldwide network would truly be. Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with a new episode. And since the last two have been kind of heavy,
I'm planning on working up something a bit more fun. If you like the show, then take a minute to
share it with your friends. You can rate and review on Apple Podcasts. If you have any comments or
suggestions for a future show, go ahead and shoot me a tweet.
I'm at Advent of Comp on Twitter. And as always, have a great rest of your day.