Advent of Computing - Episode 59 - ALOHANET
Episode Date: June 27, 2021

ALOHANET was a wireless networking project started at the University of Hawaii in 1968. Initially, it had relatively little to do with ARPANET. But that relative isolation didn't last for long. As the two networks matured and connected together, we start to see the first vision of a modern Internet. That alone is interesting, but what brings this story to the next level is the protocol developed for ALOHANET. Ya see, in this wireless network data delivery wasn't guaranteed. Every user shared a single radio channel, and terminals could talk over each other. So how did ALOHANET even function?

Selected sources used in this episode:

https://archive.org/details/DTIC_AD0707853 - The initial 1970 ALOHANET report

https://archive.org/details/jresv86n6p591_A1b/page/n3/mode/2up - Summary paper by Kuo, contains a map of ALOHANET

https://sci-hub.do/10.1145/1499949.1499983 - Kahn's 1973 PRNET paper

https://www.eng.hawaii.edu/wp-content/uploads/2020/06/abramson1985-Development-of-the-ALOHANET.pdf - 1985 wrap-up of ALOHANET, by Abramson

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
I might be a little bit jaded, but there's something bland about the internet these days.
Sure, the technology is breathtaking.
Its reach is amazing.
Our lives, and really the lives of everyone on the planet, have been irrevocably changed
just by the fact that the internet exists.
But at the same time, the sheer scope of the network can make it hard to grasp. Its
ubiquity has only made that problem worse. It's not just that everyone has a website, but it's
also the infrastructure needed to support so much stuff online. You can't just make a checklist of
everything on the internet anymore. Really, you can get lost in a maze of pages that all feel and look
very similar. And I guess that's a big reason why I always get drawn back to ARPANET.
Folk like to reminisce about the so-called Wild West days of the internet. Well, it doesn't get
much more wild than the extreme early days at ARPA. Despite seeding the data
revolution that we're currently living in, ARPANET existed as a relatively small network.
It was such a manageable size that a map of the entire network fit on a single piece of paper.
Now, that's not just some hyperbole. Over its life, maps of the network were made, circulated, and very regularly updated.
Over the years, these maps got a little crowded,
but you can still make out every computer connected to the network.
But don't be tricked into thinking that ARPANET doesn't have its own secrets.
I've pored over these maps a lot myself.
It's a fun way to see how the proto-internet grew over the years.
They just look like a page full of squares and circles connected up with straight lines,
but each node has its own story to tell.
One of the nodes labeled BBN is where Colossal Cave Adventure was written.
The node labeled Xerox hooked into some of the first graphics-based computers.
And off to one of the extreme sides is a particularly special node.
It shows up in 1973.
It's the first computer on the map to be connected by a zigzag instead of just a straight solid line.
It's labeled Hawaii. And it's where wireless networking was invented.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 59, AlohaNet.
Today, we're examining a pivotal event in computer history that's remarkably easy to overlook.
The traditional story of the first message ever sent over ARPANET is, at least in some circles,
pretty well known. It's usually told as the first packet of data sent over the early internet.
It goes something like this. In October of 1969, a few letters were transferred over the first leg of the network,
rocketing from a computer at UCLA to another system at the Stanford Research Institute.
Thus, the first network, and the predecessor of the internet we all know and love, sprang to life.
Of course, as with all neat little stories, there's issues.
This doesn't get into the whole myriad of details that surround the early internet.
ARPANET was the most high-profile network that the US government funded,
but it wasn't the only project in the government's digital portfolio.
The chronology of ARPANET to internet is a nice and easy story,
but it's only nice and easy if you disregard a lot of other projects that were involved with shaping the network. Today we're going to be talking about one
of those projects. AlohaNet was a networking research project that started in 1968. By that
time, ARPANET was already pretty far into development, at least four years, maybe a little
bit more depending on how you count the beginning of ARPANET. AlohaNet wouldn't come fully online
until 1971. But here's the important thing to consider. AlohaNet was crucial in pushing forward
new networking technology. Ethernet was initially based off technology developed for AlohaNet. Early mobile
phones drew on techniques from AlohaNet to handle data transfer. Even 3G, a technology that's easier to consider obsolete than vintage, has traces of Aloha inside it. Wi-Fi shares the
same distinction. Its wireless transmission works by using technology
developed for AlohaNet. ARPANET is often remembered as the forerunner to the internet.
In a lot of ways, it is. But the specifics of how we use our modern network are pretty far
from the days of the ARPANET. So, should AlohaNet be remembered instead? Well, that gets complicated.
Partly because Aloha and ARPANET start bleeding together in really interesting ways. There were
a lot of researchers behind the development of AlohaNet, but the one that was nominally in
charge of the new project was Norm Abramson. Also, he wrote the most about AlohaNet, so he makes a good
touchstone source-wise. Now, if Abramson had made a few different choices in life, then he would
have probably been more deeply involved with ARPANET. By the middle of the 1960s, he already
had an impressive career. During undergrad, he studied physics at Harvard.
He earned a master's degree in the same field at UCLA. Then he graduated Stanford with a PhD
in electrical engineering. He went on to teach at both Stanford and Berkeley. Along the way,
he published Information Theory and Coding. It's a textbook that covers the, at the time,
somewhat new discipline of transmitting and handling data.
The point is, Abramson was at the right place and had the right set of skills at the right time
that ARPANET would have been a logical next project for him to become involved with.
But the timing didn't work out exactly right.
The wrench in the works here is that sometime around late 1967 or
early 1968, he packed up and moved to Hawaii. Now, I know I've certainly had a similar impulse
myself, but Norm wasn't just running off to spend time on the beach enjoying the surf.
He had received a job offer from the University of Hawaii. Now, I haven't been able to track down an explicit explanation as to why he accepted,
but I have my theories.
Part of it was probably location.
I can tell you that Hawaii is a much nicer locale than the exact vicinity around Stanford.
It's at least much warmer.
Judging by Abramson's later actions, I think another draw was a sense of a challenge.
Most university professors aren't just professors to teach.
For some, that's their passion, but usually they go into the field for a mix of teaching
and research.
How that ratio actually shakes out has a lot to do with the college you're working at.
Some schools are heavy on research. Some care more about shaping young minds. Abramson himself was more
interested in research than purely teaching. In the late 1960s, the University of Hawaii wasn't
particularly known for its research programs. It definitely wasn't on the level of a Stanford or MIT at least.
While that may have been an issue for some prospective professors, it could have been a
major draw for others. It meant that U of H could be a great place to grow a new research project,
to start something exciting without the fetters of an existing program.
It also meant that some hotshot research professor from Stanford
could be a big fish in a smaller pond. Like I said, Abramson never explicitly stated this in
the interviews and writings that I've gone through, but that's my educated guess. Regardless of reality,
in 1968, Norm started his new gig in Hawaii, teaching a split between computer science and electrical engineering.
While not in the classroom, he was looking for some new research to take on.
And progress on that front would come surprisingly quickly.
Shortly after settling into his new position, Abramson learned about a new Department of Defense project.
It was called Project Themis, and as with everything DoD related,
it's a bit of a maze. Basically, the goal of Themis was to help give new research groups
and smaller universities a bit of a financial push. The actual details are a little scarce,
I haven't been able to find any specific announcements. But I can tell you that
Project Themis was pretty wide-reaching, and it shows up as a funding source for a lot of research
in different fields. I've seen papers funded by it in medicine, biology, statistics, even computer
science. Word of Project Themis got to Hawaii at a good time. There was a growing group of professors who wanted to push the university deeper into research.
Abramson was among the rabble-rousers, but he wasn't the only one.
Norm and a crew of three other faculty began scheming up a proposal for...
well, for something.
They knew they wanted some kind of DOD-funded research program on campus,
but they weren't entirely sure what that would look like.
The crew eventually agreed on a proposal for a wireless packet network they called AlohaNet.
The broad strokes of the pitch were to create a network built for the specific needs of the University of Hawaii.
Everything would be wireless, it would be used for digital
data like ARPANET, and it would function as a testbed for new networking ideas.
The DoD jumped at the chance. And just as a quick aside that complicates Themis,
the official Aloha project was paid for by Project Themis, which was funded by the Department of Defense.
AlohaNet was supervised by ARPA, but ARPA wasn't paying for anything.
I bring this detail up because reports on Aloha-related research have this mix of
ARPA-slash-DOD-slash-Themis funding statements on their front covers.
It took me a little bit to sort
out what was going on there, so I wanted to share that bit of work. Anyway, Abramson would sum up
the early phase of Aloha like this, quote, The original project had, as its goal, research more
than operational needs of the university. I think it's possible that we may have justified the research,
as researchers often do, by pointing to its possible applications for the University of
Hawaii and other areas that had difficulty with telephone communication of data. But certainly
the goals of myself and the other people who were involved in the project were research goals
rather than operational goals." That's all pretty vague,
but it seems that Aloha started out with only vague plans. After all, the group that wrote
the proposal were researchers looking for a project. They happened to have a collective
background in communications technology and theory, so they went with what they knew.
But even in 1968, some parts of AlohaNet were already set in stone.
As Norm stated, the issue with telephony in Hawaii was crucial in defining the early network.
In the late 60s, the islands just didn't have a very large or reliable phone grid.
Why should this matter for networking?
Well, it all goes back to ARPANET's
original design. So, if you'll humor me, we need to take a quick detour to another government-funded
networking project. The touchstone I always use for ARPANET's design rationale is Paul Baran's
1964 report on distributed networks. It was written at RAND. It's not 100% the design
that the final network went with, but it's pretty close. Plus, Baran gives detailed reasoning for
each choice that went into ARPANET. Really early on, the project's whole point was to create a
nationwide network for military command and control. According to Baran's report, this network
had to be reliable, able to function if part of the network was destroyed, and as cheap to make
as possible. His solution, after meticulous research and number crunching, was a fully
distributed packet network. That's become the bedrock that the internet is built off of.
One of the specifics that Baran threw into his
work and that would stick around in ARPANET throughout its lifetime was the use of existing
phone lines. Data on ARPANET traveled through the same telephone grid that was used for placing
calls around the country. Baran's rationale for this was simple. The national phone grid was
right there waiting for use. It was cheaper than building new
infrastructure. Plus, there was a ready pool of knowledge for maintaining and repairing phone
lines. Sure, over long distances, phone lines can get noisy, but digital packets can survive
a surprising amount of noise before they become useless. There were other reasons that ARPANET
was able to leverage existing phone infrastructure.
The network was going to be used for host-to-host communications.
That is, every computer that was connected directly to the network was some kind of mini-computer or mainframe.
There's a little bit of complication with routing hardware, but we'll get to that later.
Now, that may sound like a pretty obvious thing.
Of course, every computer on the network had to be a computer.
But that one design choice had some major implications. And remember, we're not talking about personal computers.
We're talking about multi-user systems.
Host systems were shared hardware.
They would only be accessed from local terminals.
To connect to a system somewhere on
ARPANET, you had to get your terminal connected up to the university or lab's computer. Then,
from there, you could reach the larger network. Key here is that you already had to have access
to some big computer to use ARPANET. You couldn't go plug your terminal directly into the network
and be online. So while each terminal doesn't need a phone line to connect up to the network,
each host does need a phone line. You could have a hundred user terminals clacking away,
but they're all connected to the same local host, so you only need one line to feed into the ARPANET.
That matters because each host had to hog a phone circuit while it was connected to the network. This is all pretty
broad and generalized, but the overall design here meant that ARPANET used as few phone lines
as possible, and that minimal use of phone circuits was possible because it was networking host-to-host systems.
We're networking big computers, not terminals. The point I'm getting to is that ARPANET's use
of the phone grid may, at least at first, sound like a really simple general-purpose solution.
But that's not entirely the case. ARPANET's design, at least in part, made it work well with the existing US phone grid.
At least, the grid in the continental United States.
While Baran's RAND report spends a good amount of time talking about the use of unreliable phone lines,
there is a certain point where the network will just break down.
And it turns out that Hawaii's telephone system was pretty close to
that magical breakdown point. ARPANET was the new and exciting thing in networking, but its technology
wasn't as broadly applicable as some may have hoped. The complicating factor was that the Aloha
team wasn't trying to network multiple big computers. They wanted to connect many
terminals to a single remote computer. There were already existing low-tech methods for connecting
terminals to remote computers, but all of those relied on consistent telephone connections.
More than that, in those earlier schemes, each incoming connection from a terminal needed to have its own phone circuit.
That means that a call from a terminal would end up hogging part of the island's limited switchboards.
Scale that up a few hundred terminals high, and you run into a real resource contention problem.
The solution, and one of the first real concrete decisions made by Project Aloha, was to just
ditch phone lines.
In fact, they ditched wires altogether.
AlohaNet was going to be purely wireless.
So let me just summarize that really quickly so we all understand where we're at.
In 1968, a very small group of researchers started developing a long-range wireless network.
This is happening at the same time that a huge collective of researchers and government organizations are still trying to get the first large-scale network off the ground.
This kind of goes without saying, but we're dealing with something really radical here.
So that's going
to be our baseline. In late 1968 and early 1969, the Aloha team started to fill in the specifics.
The entire project gets a comprehensive progress report in 1970, so I'm going to be drawing off
that paper a lot moving forward. Just to get the lay of the land, the 1970 report explains that the university operates campuses spread across several islands, including Kauai, Maui, and Hawaii. Quote, in addition, the university operates a number of research institutes with operating units distributed throughout the state within a radius of 200 miles from Honolulu.
Even without all the specific ARPANET details that I went over, we can see that AlohaNet will
have to serve a very different purpose. The fact that this new network had to deal with inter-island communications
shaped how it was developed. In the short term, that gave Abramson and his colleagues a good
excuse for an interesting project. But long term, it would have huge ramifications on the very
technology that they would develop. It was also determined pretty early on that AlohaNet would use radio transceivers as its medium of choice.
But how exactly would that be accomplished?
You can't just pick, oh, we'll do wireless radio networking and have it be done.
Well, this is where it gets complicated to talk about the Aloha project.
You see, Aloha wasn't some early anti-ARPANET. The two are intertwined in a number
of ways, but how they meet up, especially in this early period of 1968 and 1969, isn't always super
clear. I've tried to figure out how much the crew at Hawaii knew about ARPANET, and all I can really
get is that they knew something. Later in the project,
researchers at the University of Hawaii would become personally acquainted with ARPANET figures
such as the aforementioned Paul Baran, but there must have been some cross-pollination beforehand.
Once we get to 1970 and some of the earliest published works on AlohaNet, there are citations
to ARPANET publications, but this early period
is still a little bit ambiguous. Anyway, we can start to see the connection to ARPANET in one of
AlohaNet's core technologies. That's the packet radio. Packetized data was a cornerstone of ARPANET
since inception. In this method, you don't send data as a long continuous stream.
Instead, every message is broken up into packets. Each packet is the same size and contains a short
header describing where it's going, where it came from, and a few more bits for things like
error correction. In ARPANET, of course, these packets are sent down a boring phone line.
That's easy. For AlohaNet, radio is the only way to go. There have been previous systems that sent data over the air. The trick to it has always come
down to encoding. The most simple example that I can think of is Morse code. In that system,
textual data is broken down into a series of long and short pulses. That signal can then be transmitted over a wire or via radio.
Skilled operators can even keep up a conversation this way.
One person just has to tap out a message while the recipient listens,
then once the message is over, the roles are switched,
and a new message goes back to the original sender.
It's a bit crude, but that forms a basic data transfer protocol.
For AlohaNet, you couldn't just have an operator sitting at a terminal tapping out
data for the computer. Although I guess it would be kind of charming to imagine some poor intern
sitting listening to boops and beeps on a radio and trying to quickly type out data. Instead,
they needed a way to have the computer send its own version of Morse code over the air. In 1968, there just wasn't an existing system
that did that. So Norm and the team had to create a new type of radio system, known as a packet radio.
Here, we have a wonderful example of a technology that's named really well.
The packet radio is, simply put, a radio that can send and receive packets of data instead of
just raw radio signals. This is one of the places where the line between Aloha and ARPANET gets
intertwined. This packet radio system was
a twist on what was happening back on the mainland. It goes beyond just shared use of packets, though.
On ARPANET, there was a layer that sat between the mainframes and the wider network. This was
built up from systems called IMPs, or Interface Message Processors, basically the precursor to a modern-day router. Aloha had
an equivalent of the IMP, which they called the Menehune. Now, that name is always written in all
caps, but it's not an acronym. Menehune is a reference to a Hawaiian legend of crafty elves
or dwarf-like entities. I've seen that it may be a pun, since IMP can be read as imp,
and a Menehune is somewhat like a Hawaiian form of an imp. The best sourcing I have for that is a 1984 paper where, in parentheses, the author notes that Menehune is Hawaiian for imp as kind of an aside. The pun may or may not have been contemporary. Either way, I think it's a fun joke.
The Menehune operated as the main router for AlohaNet.
It sat between a set of radio antennas and the university's fancy IBM data center.
Any incoming packets would have to first pass through the Menehune.
Same goes for outbound packets.
So this layer is where most of the actual action
and work takes place. Crucially, there's only one Menehune on the network. On ARPANET, a series of
IMPs handled traffic, but in Hawaii, we're dealing with just one central routing device. The actual
protocol is really simple, so let's go ahead and describe the entire data
transfer protocol for AlohaNet. On the terminal side was a small packet radio and modem. A message
coming from a terminal was first turned into a series of pulses by the modem, then transmitted
out on one channel by the packet radio. Then, the transmitting station just waited for an
acknowledgement signal sent on a second channel. If it didn't hear anything after a set amount of time,
the message was rebroadcast. Over at the main campus, the Menehune was just sitting around waiting for some incoming packets. Now, just to be clear, the Menehune wasn't just some mess of circuits and antennas. It was powered by an HP 2115
minicomputer. And a mess of wires and circuits and antennas. We're in this weird spot in history
where we're almost up to the microprocessor, but that's still a few years off. And while the
Menehune didn't have to do anything overly complicated, it did need to crunch some numbers and make some
decisions, so it kind of had to be computerized. Once again, the parallels with ARPANET are really
clear. On the mainland, IMPs were made by repurposing Honeywell computers. Each incoming
packet contained the actual payload data, that's what you're actually trying to send, plus identification
information and something called a cyclic redundancy check, or the CRC. The payload,
that's self-explanatory. The ID bits are also pretty simple, it's just some information about
what the Menehune should do with the packet and which terminal the packet came from. But the CRC is where some magic comes into play. CRC is what's known as an error-checking code. It sounds intimidating, but in
practice it's pretty simple. To generate a CRC code, you just need to carry out a series of
divisions on the payload. Then you store the remainder from the final division as the CRC.
In AlohaNet, that calculation was carried out by the transmitting terminal.
Once the Menehune received a packet, it ran the same CRC calculation on the packet's payload.
The idea here is that the CRC algorithm is well known. So if the data that was sent from the
terminal is the same as the data that was
received by the Menehune, then the CRC codes should be the same. To detect errors, you just
compare the freshly calculated code with the code that was stored in the packet's header.
If the two codes are equal, then you're good. The Menehune sends an acknowledgement signal back on its transmit channel, and it passes the packet on to the university's mainframe.
But if the codes don't line up, then something bad must have happened to the packet mid-flight.
To deal with this problem, the Menehune was programmed to take the best course of action.
It just drops the packet.
It doesn't send an ACK signal, and it doesn't pass on anything to the mainframe.
An error just resulted in nothing.
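To make that concrete, here's a minimal sketch of the receive-side logic in Python. The CRC polynomial, packet layout, and function names are illustrative choices of mine, not details from the actual Menehune firmware:

```python
def crc16(payload: bytes, poly: int = 0x1021) -> int:
    """Toy 16-bit CRC: repeated binary division of the payload by a
    polynomial. The remainder left at the end is the check code."""
    crc = 0
    for byte in payload:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def send_ack(terminal_id: int) -> None:
    # Stub: the real Menehune keyed an ACK on its outbound radio channel.
    print(f"ACK -> terminal {terminal_id}")

def pass_to_mainframe(payload: bytes) -> None:
    # Stub: hand the verified payload to the university's host computer.
    print(f"to mainframe: {payload!r}")

def on_packet(terminal_id: int, payload: bytes, stored_crc: int) -> None:
    """Recompute the CRC over the payload and compare it with the copy
    carried in the header. A match means a clean packet; a mismatch
    (noise or collision) is dropped silently, so the sender times out
    and retransmits."""
    if crc16(payload) == stored_crc:
        send_ack(terminal_id)
        pass_to_mainframe(payload)
    # else: drop the packet and send nothing
```

Running the same division on both ends is the whole trick: any mid-air corruption almost certainly changes the remainder.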
When a transmission from the mainframe was ready, the process just ran the other way around.
The mainframe sends a packet to the Menehune, the packet is broadcast out on the transmit channel,
and then a receiver hooked up to some terminal hears the packets on the airwaves,
sees that it has its correct ID,
and boom, you have data coming onto your screen.
If you just have one terminal on the network,
then everything's fine.
Data goes into the Menehune,
data comes out,
and a student can seamlessly,
at least seemingly so,
type away at their terminal.
But once you get two, or ten, or a hundred terminals,
then things actually get interesting. As the 1970 Progress Report puts it,
quote, a transmitted packet can be received incorrectly because of two different types of
errors. One, random noise errors, and two, errors caused by interference with a packet transmitted
by another console. The first type of error is not expected to be a serious problem. You see, AlohaNet, and this is one of the really cool things about the network,
it ran with just two radio
channels. One going out of the Menehune, and one coming back in. The channel that the Menehune
transmits on will always be clean and error-free. It has full control over that, so it's not going
to send more than one packet at once. But its receiving channel is where we get possible errors.
If two terminals try to send in a packet at the same time,
then they'll collide mid-air.
It's like if two Morse code operators tried to tap out a message at the same time.
You'd get dots and dashes, but it would be a garbled mess.
If you tried to translate it into English, it would be
unintelligible. That's why the CRC code matters so much. If packets collide and the Menehune gets
some weird mess, then the CRC check will fail, and the packet will get dropped. Neither of the
colliding terminals will receive an ACK signal, so each will retransmit. The 1970 report just says that the terminals
retransmit after some ambiguous set timeout. Later works clarified that this backoff was
a little more complicated than some pre-selected number. In general, transmitters used a random
delay. This meant that the retransmissions were highly unlikely to collide, unless somehow both transmitters
rolled the same random number. And that's it. Thus ends the description of the AlohaNet protocol.
It's surprisingly simple. It may sound a little bit janky, but it works really well.
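To put the sender's half of that protocol in code, here's a rough sketch. The timeout and backoff ranges are arbitrary stand-ins rather than AlohaNet's actual tuning, and transmit and ack_received stand in for the TCU's radio hardware:

```python
import random
import time

PACKET_TIME = 0.034  # roughly 34 ms per packet, per the 1970 report

def send_with_backoff(transmit, ack_received, packet: bytes,
                      max_tries: int = 10) -> bool:
    """Pure-ALOHA sender: transmit, listen for an ACK, and on silence
    wait a random interval before retransmitting, so two colliding
    terminals are unlikely to collide again on the retry."""
    for _ in range(max_tries):
        transmit(packet)
        deadline = time.monotonic() + 2 * PACKET_TIME  # arbitrary timeout
        while time.monotonic() < deadline:
            if ack_received():
                return True  # the Menehune heard us cleanly
        # No ACK: assume a collision and roll a random backoff delay.
        time.sleep(random.uniform(0, 5) * PACKET_TIME)
    return False  # gave up; a real terminal would keep trying
```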
The actual Aloha paper goes over mathematical proofs that show that, at least up to a certain
point, collisions are infrequent enough that packet delivery can be guaranteed.
Now, I know how exciting that is.
A mathematical derivation makes for some really good podcast material, but I'm going to skip
that for now, since I don't want to try to describe pages of math.
Their math model basically shows that collisions happen infrequently enough that they're negligible as long as you have under a certain amount of clients.
For the initial AlohaNet spec, that magic number is 162 terminals.
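For the curious, the heart of that analysis is the now-famous pure ALOHA throughput result; what follows is my paraphrase of the standard derivation, not the report's exact working. If terminals offer a combined load of G packets per packet-time, a packet survives only if no other transmission starts within a two-packet-time window around it, which gives:

```latex
% Pure ALOHA throughput: offered load G, useful throughput S.
% A packet survives only if no other packet starts in the
% two-packet-time window around it (assuming Poisson arrivals):
\[
  S = G\,e^{-2G}, \qquad
  S_{\max} = S\Big|_{G=1/2} = \frac{1}{2e} \approx 0.184
\]
```

In other words, at best about 18% of the raw channel carries useful data, and a terminal count like 162 falls out of dividing that usable capacity by the expected load per terminal.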
That's the theoretical upper limit,
but AlohaNet could actually support way more traffic than math alone suggests.
And this all comes down to the fact that computers operate way faster than humans do.
Since all parameters on AlohaNet are constant, we can figure out how long it takes to transmit a packet.
Luckily for us, Norm and his co-conspirators already worked up the math in their pages of derivations.
It works out to 34 milliseconds per packet. That's counting the actual data transmission
and some added time for system overhead. Now, go ahead and count to 34 milliseconds for me.
Maybe there's something wrong with me, but I personally can never get it just right. It goes by too quickly.
Just around 30 packets could be sent on the network per second. While that may not sound like very fast data transfer compared to modern standards, it's fast enough that the Aloha team
was able to get away with some tricks. Now, I guess this is a corollary to my last fun game,
but try counting how long it takes to type out 80
characters. That number's important because a single packet on AlohaNet could hold up to 80 characters of data. To supply a constant stream of data from terminal to Menehune, you'd have to
type at around 2400 characters per second. That, I can assure you, is totally outside the realm of the real
for human typing speed. The simple fact is that people spend most of their time in front of a
computer either reading what's on the screen, thinking, or, if you're a programmer, swearing.
Now, of those activities, none of them require data transmission.
So the radio channel that terminals transmitted on in AlohaNet remained relatively quiet.
This type of data pattern is often called bursty, since data comes only in bursts.
The interesting result is that, at least in practice,
Aloha net could handle way more than the theoretical maximum
load of 162 terminals. Collisions were much more unlikely than the theoretical model because users
weren't transmitting data constantly. So the upper bound on AlohaNet just was never an issue.
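If you want to sanity-check those figures, the arithmetic is quick. Here's a minimal sketch, just restating the numbers from above:

```python
packet_time = 0.034        # seconds per packet, including overhead
chars_per_packet = 80      # maximum payload per packet

packets_per_second = 1 / packet_time                # about 29
chars_per_second = chars_per_packet / packet_time   # about 2,350

print(f"~{packets_per_second:.0f} packets/s, ~{chars_per_second:.0f} chars/s")
```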
All this is showing how the protocol, despite sounding a little bit unsafe, is actually very practical.
Technically speaking, you don't have a 100% guarantee that a packet will arrive in a fixed amount of time.
But it's still fast and reliable enough that you don't run into issues.
The Aloha protocol also made it possible to cut a few more corners, which brings us to the actual client
hardware. That's the terminal control units. The network had just one Menehune, but every client
terminal had to be wired into a terminal control unit, or TCU. I said before that it's just a modem
and a radio, but it's a little more complicated. This was an all-in-one device that actually
handled talking
and coordinating with the rest of the network. The important piece here is that a TCU was much
less complicated and much cheaper to produce than the single Menehune. A TCU consisted of a serial
bridge that the terminal hooked into, an 80-character buffer, a packet radio transceiver,
a modem, and a small computer to handle all the
hardware. The protocol wasn't simple enough that it could work without a computer, but the task
on the client side was simple enough that they didn't need anything as powerful as the Menehune.
TCUs initially used small mini-computers, but once microprocessors became available,
then they adopted the newer technology. In all, the only task you really have to do is
wait to get 80 characters, calculate a CRC, and then send out your data. On the receiving side,
you just have to wait for some data to come in. It's not that complicated, so you don't need a
whole lot of computing power, or a lot of memory.
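As a toy illustration of that client-side job, here's roughly what the transmit path boils down to. The class and callback names are mine, purely for illustration; real TCUs were hardwired logic and, later, microprocessor firmware, not Python:

```python
BUFFER_SIZE = 80  # one AlohaNet packet held up to 80 characters

class TCUBuffer:
    """Accumulate keystrokes until a full packet's worth is ready, then
    hand the buffer off to the radio side for CRC and transmission."""
    def __init__(self, send_packet):
        self.buf = bytearray()
        self.send_packet = send_packet  # callback into the radio/modem side

    def on_keystroke(self, ch: str) -> None:
        self.buf += ch.encode("ascii")
        if len(self.buf) >= BUFFER_SIZE:
            self.send_packet(bytes(self.buf))  # CRC gets added at this step
            self.buf.clear()
```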
Now, I keep saying that AlohaNet used radio like that was some magic bullet. Can't rely on phone lines? Then just use the air. All your problems will evaporate. In some ways, it was a fantastic
solution. But a radio-based network comes with its own unique issues. Any solution does.
AlohaNet, at least in the initial years, used UHF-band radio.
UHF, or ultra-high frequency, can get blocked by almost anything.
As Frank Kuo, one of the main researchers spearheading the project, explains,
quote,
Since the transmission scheme of AlohaNet was by line of sight, ... beyond the range of its central transmitting stations. I'm cutting that off before Kuo gets into the juicy details,
but that's for a good reason.
There were issues with Hawaii's phone grid at the time
that made it unsuitable for networking.
Using radio introduced different issues of a similar genre.
The Hawaiian islands are volcanic.
That's how they formed in the first place.
So AlohaNet wasn't just
beaming packets across some endless flat expanse. It had to contend with geological features.
Perhaps just as importantly, the islands are lush with foliage. As someone who's lived in a forest
most of his life, I can tell you that trees kill radio signals like nobody's business.
The later development that Kuo mentioned was able to partly solve these line-of-sight issues and introduce some new
functionality. Initially, AlohaNet only had a range of 10 kilometers, which is neat, but not
all that practical. As Kuo described in a 1981 paper that I quoted above, the next step was to produce a series of radio
relay stations. This is another place where AlohaNet's radically simplified protocol really
helped. A relay station on the network was straightforward. It just listened for incoming
packets, put them in a buffer, then sent them out. Since all data was already broken into discrete
packets, it was easy to grab and
repeat at the relay's leisure. One little complication showed up in the relay station.
Since every terminal on the network was sending data over the same channel, you couldn't just
store and retransmit every packet you heard. You'd end up with this weird digital echo.
But there was a simple way around this. Each packet's header
contained the ID number of the terminal that sent it. So each relay station was configured to only
retransmit packets with certain ID codes. So in practice, terminals that needed to relay messages
could be added to this list. It would only take two of these relay stations to complete the network and connect all of the university's campuses together.
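That filtering logic is about as simple as networking code gets. Here's a minimal sketch, with an invented allow-list that doesn't reflect the real relay hardware's configuration:

```python
RELAY_IDS = {12, 17, 23}  # terminal IDs this relay is configured to repeat

def maybe_relay(terminal_id: int, packet: bytes, rebroadcast) -> None:
    """Store and retransmit only packets from configured terminals, so a
    relay doesn't echo every packet it hears back onto the shared channel."""
    if terminal_id in RELAY_IDS:
        rebroadcast(packet)
```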
All this groundwork was laid out by 1970, and over the next two years, the network grew and came into full operation. The Menehune started seeing real traffic as radio towers went up across the islands' campuses. However, it's not totally clear how much traffic we're talking here.
AlohaNet was definitely in use for research.
That's what all this paper trail points to.
I haven't been able to find really anything about how students or professors outside the core research group were using the network.
It may have been that it was used in some limited capacity, but if there was heavy use,
then there isn't much
written about that aspect of the project. That aside, AlohaNet saw major growth in the early
70s. One result of this expansion across Hawaii was that the network became very heterogeneous.
This is really clearly shown in a 1975 progress report that includes a diagram of AlohaNet.
You have the Menehune and its mainframe in the center,
surrounded by nine transceivers of varying design.
Of course, you have the original TCU boxes,
but you also have a number of microprocessor-powered transmitters.
Some use the Intel 8008, others use the 8080.
But beyond that, we have so-called concentrators.
These are marked down on the
network map as big boxes. These were mini-computers that functioned as standalone machines, but were
also connected to the network over their own TCU-like devices. Each of these concentrators
had more terminals connected to it, serving as additional access points for the network.
The map probably makes it more clear,
so I'll be sure to drop a link to the paper that includes it in this episode's description.
Basically, we start to see a pretty strangely shaped network. We have a bit of the characteristic hub-and-spoke shape of a centralized network, but the concentrators add in a little bit of
complication. It's almost decentralized, but there's really only one core system that every terminal connects to.
Contrast this to ARPANET, which is fully, you know, distributed.
In ARPANET, any node can communicate with any other node.
IMPs form this mesh-like series of connections between nodes.
On AlohaNet, you just have one destination,
no point-to-point communications inherent to the network. This may sound like another little
detail that we can gloss over, but I think that would miss a really important implication.
ARPANET was a product of the U.S. military-industrial complex. The internet may have some more soft edges today,
but it still has this bellicose origin. ARPANET was designed as a distributed network because
that made it more resilient to the destruction of nodes. It was designed, very explicitly,
to survive a nuclear war. That's why I always go back to Paul Baran's reports when I describe
ARPANET. His writing doesn't hide the fact as much as some later research does. AlohaNet didn't have
the same roots at all. It almost seems like a miracle that researchers like Abramson weren't
folded into ARPANET early on. Distance may have slightly insulated the team at the
University of Hawaii, and that gave them leeway to make different decisions.
We've already seen how there was a lot of crossover between the two networks,
but the founding goals of AlohaNet had nothing to do with military or industry. It was a purely
research-focused network. It received government funding. It
was administered by ARPA, but it wasn't started by ARPA. This gave researchers a lot more of a
free hand in its design and implementation. The network's strange topology is one example of this
freedom. I'm using some big theoretical air quotes, but AlohaNet can never be described as being as
reliable as ARPANET. If the Menehune went down, the network was gone. If the central mainframe
went down, then there was nothing to connect to. If there was too much interference on the right
UHF band, then everything just stopped working. Every single
packet would get dropped. That wouldn't have flown at all on ARPANET, but this was a totally
different project. Now, after going through all that, in the early 70s, AlohaNet did get a lot
closer to ARPANET. The Hawaiian network didn't change its design, but it did get connected up to the continental system.
The story of how AlohaNet joined the wider world is, as far as I'm concerned,
it's up there with some of the fun sagas in computing's history.
Abramson described the event in some detail in a 1985 report.
For some context, in 1972, he went to Washington, D.C. to visit Larry Roberts. He was
one of the key players in the design and implementation of ARPANET. At the time,
Roberts was looking to expand ARPANET. Abramson described the visit like this,
quote, I was visiting Roberts' office in Washington for discussions dealing with both technical and
administrative matters in the Aloha system
when he was called out of his office for a few minutes to handle a minor emergency.
1972 was a year of rapid growth for the ARPANET, as the Interface Message Processors,
IMPs, which define the nodes of the network, were installed in the first network locations.
While waiting for Roberts' return, I noticed on the blackboard in his office a list
of locations where ARPA was planning to install IMPs during the next six-month period, together
with installation dates. He continues, I took the chalk and inserted the Aloha system in his list
and beside it placed the date of December 17th, chosen more or less at random. After Roberts'
return, we continued our discussion, but
because of the rather long agenda, we did not discuss the installation of an imp in Hawaii,
and I forgot that I had inserted an installation date of December 17th for us in the ARPA schedule
on his blackboard." Now, as the fated date approached, Abramson got a call saying that the hardware to connect to ARPANET was on its way.
It seemed that he was able to slip right into the scheduled expansion and the University of Hawaii was entering deeper into the fold.
Norm, a little bit scared probably, called Roberts to fess up to what he had done,
but from what I've read, it seems like AlohaNet was already going to be on the roadmap eventually. Perhaps Norm's chalk adventure just pushed that date up.
Soon, this unique radio-based system would become another node on ARPANET,
connected with the little zigzag line I mentioned at the top. So, what did that look like? How do
you connect up two networks that are so different? And more
importantly, what do you get when you combine two divergent technologies? Hooking Aloha and
ARPANET together was a little bit harder than just penciling in an installation date. It involved
another emerging technology, the commercial communications satellite. In 1962, ComSat was founded. It was
one of these private-public companies that the government likes to spin off. The RAND Corporation, one of the research centers that proved influential on the early design of ARPANET, is another example of a private-public lab. But ComSat wasn't a think tank. This company
was concerned with monetizing space. As the name suggests,
ComSat was all about communication satellites. The corporate structure and chronology is long
and confusing. It involves multiple federal administrations and outside companies, but
none of that really matters today. We want to get to the cool stuff. In 1965, ComSat launched its first satellite, Intelsat I, nicknamed Early Bird.
During this seven-year lead-up to ARPANET's expansion,
ComSat was building up a small fleet of satellites.
Importantly, the company's ties to the federal government
meant that ARPA could get easy access to space-bound resources.
It also paved the way for connecting AlohaNet to the mainland.
The general rundown of how early comm satellites work is pretty simple,
but for whatever reason, finding precise details on their design is annoying.
My best guess is that no one is super eager to share possible trade or government secrets online.
The net result is I don't get to talk about the specific chips that made these things tick,
but I can give a general overview of their operation.
In short, it's a radio relay.
But in space.
There are a few more components, but none of that affects their core function.
You have a little booster rocket for adjusting orbits, you have solar panels and batteries for power, and some early satellites had onboard tape for storing data.
If I can find resources, I definitely want to cover these things in more depth.
But anyway, at the core, we're still dealing with a radio relay station.
Each satellite comes tricked out with a set of
antennas for receiving and transmitting signals. On board, there are the circuits needed to
independently receive, boost, and transmit a number of separate radio channels. The idea is that
someone in, say, Hawaii can essentially bounce a signal off one of these satellites. Then someone
in California could catch the new
transmission. The specific satellite that AlohaNet ended up using was ATS-1. It was an experimental
geostationary satellite. That just means that it was on an orbit such that it was stationary
relative to the ground. It worked on VHF channels, which, just like UHF, operate on line
of sight. The interesting part, and one of
the reasons that I want to know more about these early comm satellites, is the transmissions were
sent as PCM audio. The signal flow worked like this. The University of Hawaii's central mainframe was connected up to an ARPA-provided TIP, which is just a fancier IMP router. Outbound messages were run
through a modem, then converted to a stream of digital PCM audio and shot up to the commsat.
A satellite dish at NASA's Ames campus in California was connected to the satellite's
downlink, and then it would pass messages on to the rest of ARPANET. Messages going into AlohaNet followed the same path, but in reverse.
The PCM channel that was being used was originally intended for voice communications,
so we're dealing with a really, really fancy and really expensive telephone system.
But this use of PCM really sticks out to me.
For those unfamiliar, PCM, or pulse code modulation,
is an encoding scheme used to represent audio as a series of binary pulses. It's usually used for,
you know, actual audio you're gonna listen to. It comes along with all the advantages of digital
communication, namely it's easy to boost or relay and it has higher tolerance for error.
That makes it great for long-distance communications, like beaming data off satellites.
But PCM is usually used for data that you want to listen to, you know, actual audio that will eventually be turned back into analog waves.
It's how audio files work.
They use PCM to store audio data as bits on a disk.
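As a minimal illustration of the idea, here's a sine wave being sampled and quantized into 8-bit PCM code words; the sample rate and tone are arbitrary:

```python
import math

SAMPLE_RATE = 8000  # samples per second, a classic telephony rate
FREQ = 440          # test tone frequency in Hz

samples = []
for n in range(80):  # 10 ms of audio at 8 kHz
    analog = math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)  # -1.0 .. 1.0
    samples.append(int(round((analog + 1) / 2 * 255)))       # quantize to 0..255

print(samples[:8])  # the first few PCM code words
```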
Data transfer using modems,
while similar to the realm of PCM,
is kind of its own thing.
Modems turn incoming binary data into a series of audio pulses.
So there's some overlap,
but if my understanding is correct, you'd still need to do
some special tweaks to get a modem's data stream to actually play nice with the PCM specification.
I might be overthinking it, I might just not know enough about satellite relays from the 1960s, but
this stuck out to me as a little convoluted. That encoding tangent aside, in 1973, we start seeing AlohaNet show up on maps of ARPANET.
The net result is that anyone connected to ARPANET's continental network could now connect to mainframes at the University of Hawaii.
But that's the less interesting direction for traffic, really.
Going the opposite direction, any terminal connected to AlohaNet could connect to any mainframe on ARPANET.
And that was all done wirelessly.
The hop from terminal to the university's mainframe was wireless.
And the hop from Hawaii to California was all wireless. Once logged in,
you could surf the proto-internet without any cords. Beyond just the novelty factor,
we're starting to see a really modern network topology forming. The hosts, what users are
actually trying to connect to, are off at some far distant location on some larger packet switching network.
Most of what a user is trying to get access to lives outside of AlohaNet.
But the local wireless network functions as a distribution system. While not 100% identical,
we see a similar setup with Wi-Fi networks. You connect up to the nearest wireless router, which acts as a gateway
up to a much larger wired network somewhere else. It doesn't matter where, because it's all connected
at some point. In this period, we run into some sourcing issues. There just isn't a huge paper
trail on the practical use of AlohaNet, but that's just fine. The University of Hawaii became a hotbed for
networking experiments that weren't as possible on the ordinary parts of ARPANET. Shortly after
the boring PCM-based satellite connection came online, the team at Hawaii was on to newer
projects. In 1973, that same year, another open satellite channel was added to the mix,
this time using AlohaNet's standard packet-based protocol.
Abramson, Kuo, and everyone else on the team were finding more and more ways to show off their new technology,
and they were really proving that the University of Hawaii was now a top-grade research college.
The work done on AlohaNet also planted the seeds for a number of future projects.
Ethernet is probably the most well-known, and a story that deserves its own place.
But I want to close out this episode by discussing one of the more wild projects inspired by Aloha, the Packet Radio Van. The actual project was called the Packet Radio Network, or PRNet, and was funded by yet
another DARPA grant. This project is equal parts hilarious and foundational for what we think of
as the modern internet. It also provides a great case study in how good design can make technology
more generally useful. The new project started in 1973, right in the same period where AlohaNet was really
hitting its stride.
It was headed up by Robert Kahn, another major figure in the development of ARPANET.
The rationale for PRNet is multifaceted.
On one hand, it was an investigation of how packet radio could be used to form quick ad-hoc
networks.
That is, DARPA wanted to see if there were military applications for the technology.
On any battlefield, information is key, so I'm sure the idea of an instant network spanning a
combat zone had major appeal to the military. There is also the more technical aspect to
consider. AlohaNet was showing that the limitations
of ARPANET's rigid wire design were a major problem, so researchers were eager to see
what it would look like to overcome that rigidity. Here I'm drawing heavily from a paper written by
Kahn in 1975, the Organization of Computer Resources into a Packet Radio Network.
It's a bit of a wordy title for the project.
The paper could have just as well have been called
Adapting AlohaNet for Fun and Profit.
Within the pages, Kahn gives a wonderful explanation
of why packet radio networking was so exciting.
Quote,
The use of packet broadcasting techniques for interconnection becomes attractive when the number
of minicomputers or microprocessors is sufficiently large and the overall traffic flow is small.
The use of wire buses for packet broadcasting appears certain to be an effective interconnection
technique. However, packet radio provides another alternative that may be useful for organizing the... end quote. The main point here is that wired networks like ARPANET are fine.
They work.
But wireless systems like ALOHAnet can work just as well,
plus they offer a level of flexibility that would be impossible with wires.
On its own, that's cool.
It's the interconnection part that makes Kahn's paper really exciting to me.
You see, PRNet wasn't just an attempt to remake AlohaNet on the mainland.
One of its major goals was to adapt the technology developed for AlohaNet as a new way to link together isolated networks.
We're starting to see shades of that with the Aloha ARPANET satellite link, but
this was taking that to a whole other level. The team in Hawaii had already shown that AlohaNet's
packet radio system could be used to send whatever you wanted. It was generic enough that the data
payload didn't actually matter. It could be text, images, or even other types of packets. Also, packet networking was
reliable enough that it could be almost transparent. That's the whole illusion the technology conjures.
To a human user, it just looks like you have a continuous connection to some far-off computer.
But in reality, there are only short bursts of data making up that connection.
So we have this technology that's reliable, it can fade into the background,
and it can provide a convenient way to let networks talk to each other.
You could call it a way to facilitate inter-network communication.
In theory, AlohaNet could be used as the glue to create a network of networks.
Kahn's paper has a handy word for this, internetting.
We can nounify that to a more familiar word.
PRNet was really the first clumsy step to create a recognizable internet.
But that's just the theory.
How did PRNet as a project actually connect multiple networks?
Well, with a networking van, of course. What else? I'm not joking here. The vehicle of choice for this early inter-networking experiment was literally a van. On the surface, this may sound
absurd, but this fits PRNet's research goals really well.
The project wanted to investigate the possible use of mobile networks.
What's more mobile than a van full of computer scientists and hardware?
There's also a big convenience factor.
Having a ready-to-go van full of networking equipment means that you can travel around and perform research anywhere the road takes you. Construction of this packet radio van, as it was known,
began at SRI in 1975. An old GMC delivery van was outfitted with experimental packet radios
designed specifically for the project. For computing power, the van sported an LSI-11-based machine.
This was a microprocessor-based variant of the popular PDP-11.
Terminals, test equipment, satellite dishes, antennas, and monitors were all jammed into the van,
along with air conditioning to keep everything operational.
So, we have a miniaturized and refined version of AlohaNet.
The LSI-11 played the combined role of Menehune, IMP, and host computer.
Radio hardware was racked up right next to it.
And just like AlohaNet, the packet radio van was positioned as a testbed for new networking tech.
The van would go into immediate service testing the breaking point of packet radio networking.
A few other packet radios were installed around the Bay Area with their own hosts to give SRI a tiny network to experiment with.
This included the obvious tests, just probing the range that connections could be maintained at.
It also veered into more complex testing, such as measuring error rates at varying speeds,
and observing how changing obstructions could affect the radio channel.
Over the course of late 1975, bugs were worked out, protocols were changed,
and the radio van proved its usefulness beyond any shadow of a doubt.
I think this is where we see the blossoming of what was developed at AlohaNet. You could almost say that
AlohaNet's link with the mainland was a kind of proto-internetworking. Almost. The issue is that
the link between AlohaNet and ARPANET was still using ARPA's protocol. Data just happened to be
sent over a satellite feed. And once on AlohaNet, there was only one host, the mainframe that was
connected up to ARPANET over the satellite. With PRNet, we see a more mature network forming.
In addition to the van itself, PRNet had multiple nodes, each with its own computers waiting to be
accessed. Those systems could all talk to each other over PRNet, and they could connect up to ARPANET to access other hosts.
That, dear listener, sounds a lot more like a real internet.
As we discussed, AlohaNet had a passing resemblance to a more modern network.
Its packet radio system functioned as a way to wirelessly connect terminals up to a much
larger wired system. But with PRNet, we can see more than just a passing resemblance. There were
real nodes on the network, peer nodes that could communicate with each other. Plus, you could also
call out to the larger ARPANET. The base technology that made that possible, the layer that glued
all of this wireless networking together, came from a little lab in Hawaii.
Alright, that does it for our latest exploration of early networking. AlohaNet is one of the less
well-known chapters in the development of the
internet, and I think that does the project a real disservice. Not only is AlohaNet important
for understanding where the internet came from, it's also just a fascinating story.
The wireless network project was started as a way to get the University of Hawaii
deeper into the research game. Along the way, it became something much bigger than just
that. We also get a fantastic example of how to solve a problem the right way. The team at the
University of Hawaii faced some very specific hurdles, but they came up with a generally
applicable solution. AlohaNet used a radically simplified protocol. It cut corners that other
researchers wouldn't have tried cutting. And in
the end, it created something both practical and adaptable. The same packet radio system that made
it easy to connect Oahu and Hawaii could just as easily connect the radio van to SRI. There's a lot
more I could say about the legacy of AlohaNet. A lot more than I ever expected, actually.
A noteworthy omission in this episode has, of course, been Ethernet. It's a wired protocol that adapted
AlohaNet's packet-based system. I have good reason for that. Ethernet needs its own separate coverage
once we start getting deeper into Xerox stuff. It's a massive story that just has to have its own space to breathe, so that's
coming eventually. And before you stop listening, I need to throw in just one quick programming
announcement. We're fast approaching the first Advent of Computing Q&A episode. If you haven't
been keeping up lately, or this is your first time experiencing the show, we recently passed
50,000 all-time downloads. So to celebrate,
I'm doing a bonus Q&A episode that should be airing in about three weeks, I think, if my math's
right. I'm going to be posting it on an off week, so don't worry. If you don't like that sort of
thing, then it won't take away from my usual content. In preparation, I'm collecting questions
from my audience up until July 16th.
So there's still time if you're listening to this as it comes out.
If you ever want to know something about the show, myself, or a topic I've covered,
then send in your questions.
I have a pinned tweet on my Twitter account over at Advent of Comp.
Or you can shoot an email to adventofcomputing at gmail.com.
Now, with that out of the way, we've come to the end.
So thanks for listening to Advent of Computing. I'll be back in two weeks' time with another piece of the story of the computer. And hey, if you like the show, there are now a few ways you
can support it. If you know someone else who's interested in computing history, then why not
take a minute to share the show with them? You can also rate and review on Apple Podcasts.
If you want to be a super fan, you can now support the show directly through Advent of
Computing merch or signing up as a patron on Patreon.
Patrons get early access to episodes, polls for the direction of the show, and bonus content.
You can find links to everything on my website, adventofcomputing.com.
If you have any comments or suggestions for a future episode, then go ahead
and shoot me a tweet. I'm at adventofcomp on Twitter. And as always, have a great rest of your day.