Advent of Computing - Episode 131 - Computer... Books?
Episode Date: May 5, 2024
I've been feeling like rambling, so it's time for a classic ramble. This time we are looking at the origins of books about computers. More specifically, computer books targeted at a general audience. ...Along the way we stumble into the first public disclosure of digital computers, the first intentionally unimportant machine, and wild speculation about the future of mechanical brains. No sources listed this time, because I want the journey to be a surprise!
Transcript
One of my favorite books back in my younger years was Peter Norton's Inside the IBM PC.
You must remember here that my exposure to computers was a little strange, to say the least.
My first machine that I actually programmed on was a hand-me-down from my father.
It was this beautiful IBM AT clone, made by a local computer company back in, I think, the late 80s. If I recall correctly,
the company's name was something like Pacific Rim Computing. Nice and generic, something that
matched its generic steel case. It only had a monochrome display adapter, so my world was
contained in this tiny amber CRT display. I can still remember the clacking of the old full-height hard drive as it booted up.
The computer ran DOS 5, which I only vaguely understood at the time. One summer, it must
have been in middle school, I decided I needed to actually learn how to use DOS. To do so,
I reached for a book on my father's shelf. That was, of course, Inside the IBM PC.
Earlier that summer, I had set up a hammock behind our house. It was tucked between two trees and the manger where we kept dry firewood.
I spent ages in that hammock reading all about the wonders of IBM's little machine,
drooling over the idea of a Hercules graphics card and transfixed by the promises of protected mode.
If it wasn't the first technical book that I read cover to cover, then it was at least one of the earliest. I still
have that same copy of Inside the IBM PC sitting on my shelf. This is, of course, the revised and
enlarged edition. I've actually been thinking about that book a lot lately. I've started to
receive boxes of books from listeners.
It's been a huge boon to my collection, and it's something I can't possibly express my full gratitude for.
But a funny side effect is that I've been continually reshuffling and rearranging my shelves in my office.
The home office really keeps going through these cycles of order and chaos,
depending on the whims of the United States postal system.
I've been keeping my books better organized as a result.
Currently, I have texts shelved by broad category.
I have a shelf for hardware reference, one for papers and proceedings of conferences,
one for programming languages, another for books about programming,
but not necessarily about any one language, one for history books, and one for contemporary texts. I was actually surprised by one category
that did turn up during my latest shuffle. I have half a shelf of contemporary books about
learning about computers. It's in that section where my beloved copy of Norton sits.
Perhaps you can see where I'm heading with this.
I knew that introductory texts were always a thing, but I didn't realize how pervasive it was,
especially back during the microcomputer revolution and earlier. So maybe there's something here worth investigating. Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 131,
Computer Books. Today, we're revisiting a classic genre here on the show, the open-ended investigation, or rather the good ol' Sean
Gets Wrecked episode. The recipe here is simple, I find something that seems interesting, I
work up some questions, and then I go in with absolutely no plan. This time, the target
is simple, introductory books about computers. At least, that may look simple on the surface.
What I have in mind is something a little more complex, so allow me to explain my meandering
train of thought.
And I think the word of the day is going to be meandering.
The history of computing is intimately tied to the history of accessibility.
That's one of the big arcs we can follow.
Computers start out completely
inaccessible to almost the entirety of the population. They're purely research tools at
big federal installations. The public doesn't even really know they exist initially. Over time,
computers become more and more accessible until today, when computers are just a casual fact of life.
There are many aspects to this transition. One is pure cost. As new technology and techniques
have been invented, computers have become cheaper. So cheap, in fact, that almost anyone can own
their own genuine, 100% authentic computer. Another aspect is the interface. This is sometimes called the
human factor. Computers start out difficult to use, but slowly become more and more user-friendly.
Punch cards give way to timesharing, which gives way to things like BASIC and DOS and
eventually GUIs, with a healthy splash of hypermedia somewhere along that path.
We can add a third pillar to this, one that I think is easy to overlook.
That third pillar is education.
On the big exciting side, we have things like, well, institutional education.
There are no computer science degrees in 1945, but over time, that number goes up.
Degree programs have become more prevalent,
and as a result, more certified computer people have been created.
Computers have also become more common in classroom settings.
Grade schools and high schools start experimenting with machines,
at least in the 1970s,
and over the years, that adoption continued.
Thus, more and more people know how to use computers
as a
consequence of compulsory education. But what about less exciting forms of education? What about
stuff that's not state-sponsored? Or maybe stuff that's more hobby-oriented? That's where these
texts come into play. Computer books, in theory, fill this gap. Maybe you never learned about computers
in school and you don't intend to get a full PhD about it. Or maybe you were exposed young and just
want to know more. In that sense, these computer books function as a means to make machines more
accessible to the general public. This also connects up nicely with another topic I covered years and
years ago. In 1979, the BBC started their Computer Literacy Project. There was this problem that was
identified with the rapid rate of digital progress. Computers could remove a lot of jobs for the
workforce, so it was imperative for citizens to learn to adapt. This was a multi-pronged initiative.
On one hand, it would introduce computers into classrooms, like the BBC Micro,
but on the other was more informal education for the general public.
That was done, in part, via introductory books to computing.
In this episode, we're going to be meandering through a series of computer books.
My goal here, besides just some good ol' fun, is to see if there's any common thread.
What's the message, or is there even one?
I want to examine the differences between these public-facing books and computer textbooks
in general.
What corners are cut, what's being simplified, and what's being emphasized?
In theory, a popular text should be
very different than a computer science curriculum. But is that actually the case? There's also the
question of origin. Where does this tradition of digital texts fall in the larger context of
computing? Are there real computer science PhDs before the first popular books on computers are written?
Or are these tomes actually a consequence of degree programs?
Are people going out getting the first PhDs in computer science and then writing publicly disseminated books on the topic?
I just don't know.
If these questions sound a little rambling, then, well, they are.
I'm going to be rambling for quite a while this
episode. As I said, it's the classic Sean Gets Wrecked style, so be prepared.
Let's kick things off with a definition of scope. I can already hear my long-suffering
product manager at work applauding that idea.
When I say computer books, I'm talking specifically about books written for public consumption.
These are targeted either at hobbyists or, more generally, at a general public audience.
I'm not going to be considering textbooks, academic papers, or collections thereof.
This means we aren't looking at any books with titles like Proceedings of...
Rather, we're looking for books that the average person could have picked up, read, and understood.
This leads into a bit of an interesting question.
When did the general public actually learn that digital computers existed?
This is a surprisingly tricky question to answer. Most of the first generation of digital computers
were war machines, developed during the Second World War in federally sponsored labs. Take,
for instance, the Colossus. That machine was built by the British government
to crack encrypted German communications. Information about Colossus was kept secret
for decades following the Second World War. My research seems to indicate that ENIAC was
the first machine to be publicly announced. Now, I am open to argument there. That may well be incorrect. As far as I
can tell, Zuse's Z3 wasn't widely publicized, but I don't read German very well, so I may be wrong
in that. There may be a news article that I'm missing or a press release that I just can't
access. Machines like ABC are also out of the question. That computer only really existed and
was only ever used in a single lab. There may be press around some of Bell's early computers,
but those were more calculating machines and were usually described as such.
All of the signs I found point to a 1946 press release from the U.S. War Department. This press release, published in February of 1946,
came just six months after Truman's announcement of the atomic bomb. You can actually hear echoes
of that earlier announcement in this ENIAC release. This new machine, termed a mathematical
robot, is, of course, ENIAC, the machine we know and love. The War Department goes on to explain
that it was developed as a weapon of war, to calculate firing tables in preparation for any
future conflicts. From there, it goes on to explain how ENIAC represents a great opportunity
for peacetime. ENIAC and machines like it will help to augment the ability of physicists, engineers, and mathematicians.
Incidentally, the atomic bomb announcement follows a very similar structure.
There's an announcement, description, and then the assurance that there is some peacetime use of the new technology,
and there is some money that will be set aside for dealing with that.
It's at this point we encounter the first pattern that I've noticed.
It seems that most computer books, or press releases in this case, struggle to explain why computers should matter to their target audience. The press release's hook is basically
that these computer things are going to be revolutionary, but trust us. You gotta just trust us. To quote,
official army sources made it known that research laboratories of several large institutional firms
have expressed active interest in the machine. These include manufacturers of electron tubes,
jet engines, gas turbines, and other types of engines. Spending vast sums of money yearly for experimentation and design research on their products,
these firms are naturally interested in any means of reducing such costs.
It is further felt that better, more scientific designs will now be possible
as a result of the new machine's facility for handling hundreds of different factors in one computation.
End quote.
It's going to be wild.
Everyone will want a computer.
Just you wait.
That's how this press release justifies itself.
On a serious note, this is actually something for us to keep an eye on.
It all falls back to my constant reminder.
Sitting here in the 21st century, we know exactly how things progressed,
more or less. We know that computers were revolutionary, and still are revolutionary
in many ways. We know that digital machines will reshape the world. We know every step that was
taken along the way, and that ENIAC falls right at the start of this long arc. But in 1946, the die was not yet
cast. The same is true for all of these texts. An author writing in the 50s, 60s, 70s, or 80s
has no idea how prevalent computing will be just a decade after their date of publishing.
They may have a dream, but that's a long way off from reality.
This means that by following the justifications given in these books, we should be able to glean
some insight into how computers were viewed in different eras, or more specifically, how authors
wanted the public to view computers. Now, of course, the press release wasn't really in public hands.
I doubt many people were signed up to personally receive press releases from the U.S. War Department in this period.
Rather, the news would come out in newspapers.
This is just an aside, but the first spate of articles about ENIAC is just wild to me.
There's this full-page spread in the San Francisco Examiner that describes ENIAC as an electronic brain that will be able to essentially see the
future and do all science for us. It's accompanied by this wild illustration of a giant robot
literally towering over clouds and standing on top of skyscrapers.
I unironically love this and kind of want a poster just of the art, because, you know,
the electronic brain of the future will tower over humanity. It's just something I love. These
period pieces are hilarious, and personally, I just think they're fascinating. But that aside, this means
that most people's first exposure to computing, if we can call it that, was very sensationalized.
We get the full range where, on one side, official press is trying to say that computers are going to
be revolutionary, please trust us, they're a big deal, please, there's a lot of money that will be spent on these.
And on the other end, you have the giant robot of doom that will automatically solve all
of humanity's problems, see the future, and trample the earth itself.
Into this fray enters our first computer book.
It is, as near as I can tell, the first book dedicated to explaining computing
for a general audience. That book is called Giant Brains, or Machines That Think, written by Edmund
Berkeley and published in 1949. That puts us just three years after the public announcement of
ENIAC. Berkeley isn't a computer scientist, but given the era, that's normal.
Computer scientists didn't officially exist yet. We're more looking at mathematicians, engineers,
physicists, and some biologists and psychologists. Actually, there are a lot more psychologists in
the field early on than you would ever imagine. Berkeley fits this mold of unexpected backgrounds.
You see, he was an insurance actuary.
Edmund started working as an actuary for Prudential in 1938.
Now, this sentence is going to sound really boring, so brace yourselves.
Insurance companies were actually drivers of innovation in this time
period. Roll any eye you want to roll, but look at it this way. Insurance companies were one of
the few places actually dealing with large amounts of data and running large-scale calculations.
You have research labs, government projects, then things like banks and insurers.
Those last two may not have had the same cool factor as, say, the Manhattan Project,
but they needed very similar tools as those developing thermonuclear bombs in this period.
As a complete aside, there were actually some very interesting
patents written by insurance folk around data management in this period and earlier.
Berkeley, however, wasn't exactly on that side of things. As an actuary, he was more of a math dude.
Before joining Prudential, he had earned a BA in mathematics and logic from Harvard, so the dude knew his math.
I think it's easy to see how this leads, in abstract, to computing. Berkeley is working
in a math-heavy field, and he has a formal education in the subject, and institutional
backing. More specifically, Berkeley had entered this realm at a wild time. In 1939, he saw a demo of Stibitz's relay calculator at Bell.
He immediately understood that, yes, computers were going to be revolutionary.
They could automate away so much of his work.
But more than that, these automatic machines could make for a more rational world.
There's this book on Berkeley called Edmund Berkeley and the Social Responsibility of Computer Professionals by an author named
Longo. I've only read a few excerpts since, well, I'm a fraud and a hack and it's a very long book,
but what I've read is fascinating. Berkeley had this view of mathematics that was almost cybernetic. He
believed that math, symbolic logic more specifically, could be used to arrive at better
conclusions both socially and politically. That it should be possible to apply logic to solve the
world's problems. Basically, this is the flip side of the sensational articles about computers saving
humanity. Berkeley sounds starry-eyed, but with good reason. So when he came face-to-face with
Stibitz's machine, it just made sense to him. It was in line with his dreams, plus it would make
his work easier and more interesting. He reported back to Prudential with the good news,
the computers were going to be revolutionary. Once we hit the Second World War, we have
actually a very familiar story. Berkeley joins the Naval Reserves and is stationed in a lab
at Harvard. He is there for the development of the Mark I and Mark II. I think that means
he would have worked directly with Grace Hopper and
Howard Aiken in this period. He is very much in the belly of the beast right as computers are
being developed. This also means Berkeley was one of the few people that knew about digital computers
before Disclosure. After the war, he goes back to Prudential and makes one more final connection.
Berkeley gets hooked up with John Mauchly.
Now, this is an especially turbulent period.
The ENIAC project is basically over.
It's shipped off to a government lab down at Aberdeen.
The EDVAC paper is leaked, and Mauchly and Eckert are attempting to create the first computer company.
Prudential Insurance becomes one of the first companies to order a computer from this new
company, at Berkeley's urging. So Berkeley, as much as anyone, has some real street cred here.
He's just about as knowledgeable about computers as you could be in this period.
Giant Brains actually comes out of this very period, when Berkeley was deep in the trenches.
He started planning the book in 1942, just as he was starting his wartime duties at Harvard.
The reason for this book is pretty clear.
As a total devotee of the machine, Berkeley thought that all peoples should hear the good news.
Computers were going to bring about a revolution.
It wasn't a matter of if, but of when.
The public needed to understand what was coming, and there were currently no resources that would do that.
The text is intentionally written to be easy to understand.
It's meant to be approachable.
That's one of the reasons that the book took so long to write.
This leads to some interesting consequences.
Giant Brains is undoubtedly easy to read.
In preparing this episode, I blew through the book,
something that isn't always easy for me to do. So hey, dyslexia seal of approval there. It's one of those books
that grows in complexity as you read it. It starts off with the basics, the qualitative
whats and whys of a computer. It then builds up to more complex fare, eventually describing
the mathematics and logic involved in machines, and surveying existing computers.
One of the weird consequences here is a pile of analogies. Specifically, Berkeley uses a lot of analogies with the human body.
Now, this is something that made me perk up my ears. In the EDVAC report, the first description
of a stored program computer with a shared memory space,
von Neumann uses a very similar analogy. In that paper, Johnny von Neumann describes EDVAC in terms of so-called E-elements. These are explained as electronic neurons. Well, artificial electronic
neurons, since neurons are actually already electronic to an extent. Von Neumann
also explains EDVAC in terms of organs, those being discrete parts of the computer made up of
these electronic neurons. I first assumed that Berkeley must have been taking notes from Johnny
Von Neumann. Berkeley would have definitely read the EDVAC report. It traveled
widely and quickly in the community. I quickly, though, had second thoughts, because Berkeley
actually has a very hefty acknowledgment section, and that section has no mention of von Neumann.
So why are these two sources using a neural analogy to describe computers?
That's weird, right?
One is a draft report on a very early computer design.
That's a technical document meant for experts.
Or at least what you would call experts at the dawn of the field.
The other is a book explicitly written for those uninitiated in the dark arts.
The answer is, I think, really, really neat.
This comes from Longo's biography.
Berkeley wasn't pulling from other traditions because, well, there were no other traditions.
He was going to first principles here.
He chose to use biological analogies because of his personal experience. He, as a human, worked as an actuary. He used
his neurons, his eyes, ears, and hands to do math, shuffle data, and make logical decisions.
These machines will be doing a lot of that work, that work that Berkeley
himself was doing. Berkeley did that by thinking. Therefore, these new machines, at least qualitatively,
must also think. It makes sense, then, to describe computers as thinking machines.
It followed that computers would then be composed
of neurons, our own thinking elements. It just happens that computers use relays and vacuum
tubes instead of gray matter. The first part of this book follows this basic idea to, I think,
its logical end. Berkeley explains what computers are in very broad terms,
always using this biological analogy.
This starts with neurons and builds up to a full machine.
And look, I get what Berkeley is going for.
His approach makes a lot of sense in the early chapters.
He's basically saying that computers take the place of humans in many thinking pursuits.
Therefore, machines must think in some capacity. Therefore, it makes sense to describe a computer
in human terms. I think that's fine. I don't see anything really wrong with that. It gets weird
when we get to one of the later chapters, though. In the slow roll, Berkeley eventually reaches the point where he can describe an entire
computer. He does so by laying out the design of a very simple machine that he calls Simon.
This is where the analogy gets a little off. Simon is a very primitive computer. It's literally a
two-bit machine. It's also important to note that it's a Harvard architecture computer,
meaning data and code live in separate memory spaces. This should be no surprise, since
Berkeley is a Harvard man after all. Plus, Harvard architectures can be a little easier to explain.
This is where you must prepare to, well, roll your eyes a little bit more.
Simon is described as a little box with two light bulbs for eyes and two tape reels for ears.
The ear part here is fine, I guess.
The left tape reel is to input data and the right is to input the program.
That's the organ that Simon uses to get input from the world. Ears.
It's a little funny, but little dude has some ears. That works. That's fine.
Then we get to the eyes. You will note that light bulbs are not input devices per se.
You don't get voltage out from a bulb. Simon's eyes are meant as its outputs. The
computer blinks its eyes to display data. So that doesn't track super well with the whole analogy,
but at this point in the text, Berkeley is getting into the harder stuff, so while weird,
I think it's okay. If you read up to the Simon chapter,
then you should already know what's up. It's just the analogy breaking down before Berkeley
falls into actual things like logic tables and relay circuits.
Simon itself is a very stripped-down machine. It only has four operations, but from that, Berkeley is able to
explain very precisely how a computer works. It's kind of neat to go from the very broad qualitative
to a very precise quantitative explanation in the text. From there, the reader gets a good basis in
computer science. That allows Berkeley to break into case studies of actual computers.
He talks about analog machines at MIT, Bell's calculators, ENIAC, the Harvard computers,
and the Kalin-Burkhart Logical Truth Calculator.
That last one I've actually never heard about and am definitely taking note of for later use.
Anyway, that's the basic rundown here.
Giant Brains comes out of research and industry.
It starts generic, gets specific, and it uses a biological analogy to try and hook the layperson.
All in all, I think it's a good first outing for the topic.
It's a very reasonable text. If you didn't know about computers and you picked up a copy of Giant Brains, you'd probably
read the whole thing and probably walk away with some understanding of what these things are.
So, the final note before we move on. How does Berkeley justify the importance of computers?
To quote directly from the preface, these new machines are important. They do the work of hundreds of human beings
for the wages of a dozen. They are powerful instruments for obtaining new knowledge.
They apply in science, business, government, and other activities. They apply in reasoning
and computing, and the harder
the problem, the more useful they are. Along with the release of atomic energy, they are one of the
great achievements of the present century. No one can afford to be unaware of their significance.
End quote. This, I think, is an improved justification. Computers are important. They're going to do
so much work for us, and you need to understand how important they are.
I think this flips the script a little bit. Instead of trying to say how cool these machines
are, Berkeley is just saying, yeah, they're important, and it's important for you to
understand that. The book itself, in that way, is the justification for computers.
There's also something else in the preface that's interesting to note.
Berkeley goes on to explain that the point of the book is to give the reader a general understanding of how machines work,
prove they're important, and that they think.
While he's explaining that, he drops this line, quote,
In this book, I have sought to tell a part of the story of these new machines that think,
end quote. In later chapters, he does tell the stories of the current field of machines.
Those are the case studies I mentioned before. But context here makes this line more impactful.
According to Longo, Berkeley had initially intended the book to focus around the story of computing, something like a contemporary history. It was at the urging of colleagues
that he shifted to the more descriptive format. I think that works for the better,
it makes it a more accessible text. But man,
that would have been cool. Imagine a book detailing the entirety of the history of computing
in 1949. That would have been slick as all get out. But I do realize that its current format is
much more impactful, let's say.
Okay, time for a little disclosure of my own. I did say this was a wild meander with no set plans,
after all. I had tentatively planned to pick out three unrelated texts to cover, something like a representational sample. But that didn't work out.
I was going through my stacks and I actually found another book by Berkeley. That book is called
Computers, Their Operation and Applications by Berkeley and Lawrence Wainwright. The second
author only contributed a few chapters, so it's still primarily a Berkeley book.
Crucially, this text is from 1956.
I say crucially because, well, that puts us in interesting territory.
Mass production of computers had just begun.
The digital age was truly upon us.
But we're still in this vacuum tube era.
Transistors barely exist.
Memory is still in a state of flux.
And programming, well, that discipline is still in its infancy.
We don't even have Fortran yet.
Like Giant Brains, Computers is an introductory text.
It's described in the preface as something of a sequel to the earlier book.
That said, it's still intended for a general audience.
This is also an openly transitionary text.
By that, I mean we're stuck in this rapidly moving era.
When Giant Brains was written, Berkeley believed that computers would be the future,
but he didn't entirely put a date on things.
He just assured us that computing was going to be a huge deal. By 1956, the pace of development
was supercharged. As Berkeley wrote in the preface, quote, development in the field is so
rapid that some of the information in this book will shortly become out of date. The sections
describing machines at the time of writing were in the top rank of effectiveness. But many of the concepts, and much of the discussion of
applications of computer machinery in various fields, are, we hope, likely to stay current
for some years. End quote. We start the book with this exciting tension. I personally feel like some
of the book is future-proofed. There are sections
where Berkeley actually mentions transistors. This is kind of funny since TX0, the first
transistorized computer, wasn't completed until 1956. That means that the text would have been
written before TX0. Maybe Berkeley had connections, or maybe he was just reading tea leaves.
Something else to point out here is the total lack of justification.
The text doesn't justify its existence or try to persuade the reader that computers are important.
It's just taken as read that computers matter and a general audience needs to understand them.
And honestly, why shouldn't it be that way?
In the seven years since giant brains,
the computer revolution had truly begun. The earlier text had case studies of just about every computer in existence. At least, every computer the public knew about. That was about
five machines. This newer book can't even take that approach. There are simply too many computers.
1956 is still very early in the chronology here, but that's been enough time for the field to boom. Berkeley
identifies 170 unique types of computers, spread across unique machines, production models, and
some oddities. In total, he estimates there are some 1,000 computers operating in the field.
That's a wild increase from the handful we saw in 1949. Given that context, it makes sense that
no justification would be needed. The closest we get is a passage in the first chapter, to quote,
It seems clear that the 20th century will be remembered in the years to come as the century of the harnessing of machinery to handle information in reasonable ways, which has quite appropriately
been called the second industrial revolution. End quote. That's all the justification you could
ever need. Computing had turned from a small emerging field to the second industrial revolution.
The world was being fundamentally changed by the introduction of automatic digital computers.
We were now in the future that Giant Brains and even the ENIAC press release promised us.
However, that doesn't mean that 1956 was a recognizable future.
I'll be frank here. I like this book. It's a wildly interesting period piece.
Giant Brains is cool and all, but that's very math-focused. Which makes sense. We didn't yet
have the language to talk about digital computers. Machines were in the domain of mathematics,
and Berkeley himself was a math man. By the time Operation and Applications comes out, some of that language is in place, but we're still in this period of flux.
For instance, Berkeley has a whole section on the advantages and disadvantages of digital computers.
Even just the fact that he has to specify digital computer should be enough of a tip-off.
We're in the Wild West period.
Now, I think we can all guess at the pros and cons here, right?
Digital computers are fast, reliable, flexible, and highly accurate.
In fact, they are as accurate as you want them to be.
Memory permitting, of course.
One of the surprising pros is actually just the memory itself.
Now, this doesn't necessarily mean RAM or random access memory. Back in this period, memory was often used to refer to any kind of digital data storage.
Berkeley specifically notes how things like tape storage give digital computers a huge advantage
over other types of machines. The ability to store, read, and manipulate data is a huge pro in and of itself.
The cons also hit the expected highlights. Computers are expensive in a number of ways.
There's the upfront cost, the cost of the machine itself, plus the recurring cost of maintenance,
power, and staff. That makes computers in their current state somewhat restricted devices.
These early machines are also delicate.
It's easy to break such a clockwork device.
We also get some unexpected twists here.
Throughout these early works, it's been emphasized that computers will replace humans
in many fields.
That was kind of the whole point of Giant Brains.
Computers are thinking machines because they'll be doing the thinking work in the near future. But Berkeley hedges that here in the cons section. He points out that machines,
while wondrous and powerful, still have limited applications. They can't reason like humans. They
can't understand information and patterns like humans do. And in many cases, they just can't
take the place of humans. So while powerful,
there are hard and fast limits to what these machines actually promise.
As we go further, we get more of this transitionary weirdness. At this point,
there are enough computers around that Berkeley can't hope to survey them all, but we still aren't
at a point where actual mass production has really kicked up.
At least, not in any true sense.
There are computers that are made in series, maybe hundreds,
but we're not at huge scales yet.
That said, that kind of scale production is right around the corner.
At the time of printing, computers were expensive and rare enough
that most institutions wouldn't actually have access to a machine.
But in the next few years, that may change.
Berkeley, in fact, was banking on that.
Operation and applications was intended, in part, to be a guide for those who would soon
come into contact with machines.
Though, in theory, that would be every human on earth.
But in the short term, that would most likely be folk at institutions that were just on the cusp of affording a machine.
The issue is, Berkeley couldn't really provide a machine-by-machine review. And even if he could,
that would be out of date in a matter of months. So he did the next best thing. He provides this massive list of priorities and
questions to ask when assessing a computer. He calls it the checklist of characteristics.
And it's seven pages long. It's everything from characteristics like digital or analog to
capacity of the machine to accept numbers from one or more input mechanisms at
the same time that the arithmetic unit is calculating. You may tell by the wording that
the list is both extensive and cast in the most general possible terms. Each characteristic is
ranked by importance, so you can figure out what actually matters. Now, at first glance, this reads like a bizarre shopping list. But I think
it's a neat way to deal with growing pains. Any review would be pretty limited, necessarily. So
providing this huge list of characteristics gives a way to generally understand what a computer is
capable of. But wait, there's more. There is so much more to this book. By 1956, it's clear that analog
computers are mostly on the way out, but Operation and Application actually has a whole section
dedicated to analog machines. That section was penned by the second author, Wainwright.
I will admit, I didn't get too far into it. I more wanted to comment on the inclusion
of analog machines at all. In theory, analog should be dead. Digital is in the process of
winning out, but that process is still ongoing. Hence why there is even discussion of analog
machines. The field is shifting digital, but there are still analog machines out there and in use.
This is due, according to the text, to the costs involved with computers.
Analog machines tended to be cheaper and require less power to operate.
The section also claims that analog machines were more simple.
But I don't know if that's actually true.
We are still dealing with highly specialized machines.
We are still dealing with highly specialized machines. That said, the fact that analog machines were older may have meant that more experienced technicians were kicking around.
There are many more sections and quirks that make operation and applications feel transitionary, but
I think this is enough to get the general idea across, right? Once again, we're at this point
where nothing is written in stone yet.
That's what's so fascinating here. Digital computers are becoming a big deal. The future
that Berkeley predicted seems to have arrived, but there's not yet a guarantee that digital
computers will persist. Let's turn back to something I mentioned earlier. In Operation
and Applications, Berkeley drops the whole analogy thing. He isn't really explaining computers in terms of neurons and organs anymore.
There are a few slips back into the old way of thinking, but by and large, he's writing
pretty directly about computers. The question is, why the shift? Sadly, my later source,
Longo, doesn't have anything about the second book.
So I want to drop some speculation here. I think, personally, there are two factors at play.
The first is the fact that Operation and Applications is clearly meant as a sequel
to Giant Brains. It's not assumed that the reader already knows the first book, but
Berkeley does explain that he tried to make this latter text stand on its own. That the second book stays away from topics covered in the
first. Hence why the whole biological analogy was dropped. The second speculative factor here is the
state of computing in general. In 1949, the language around the computer was primitive at best.
The field wasn't yet professionalized.
By 1956, that process was well underway.
Part of that process was the founding of the ACM in 1947, the Association for Computing Machinery.
Berkeley was actually a co-founder of the organization.
This was one of the first serious institutions dedicated to computing.
This is also where we get the first journals dedicated to the field. I can only imagine that the founding and then growth of ACM had a profound impact on how Berkeley viewed and talked
about computing. I think that partially explains the change in language. Operation and applications
sounds more technical and complex because the field itself is starting to sound more technical and complex.
There's one more piece of this book that I want to cover, which will set us up for the finale.
I did say this episode was going to be a ramble and a meander, right? In the middle of the book,
Berkeley lays out a fascinating argument.
It goes something like this.
A lot of early press around computers positioned them as job killers.
These new automatic machines would replace human jobs.
Berkeley argues that the opposite is true.
Computer adoption has actually created new jobs.
At least, computers created new job
categories. So while people may lose jobs in, say, accounting departments, for every job lost
there are more created for programming, dealing with the machines themselves, operations,
everything around computing. But that leads to its own issue.
There weren't actually enough programmers or system engineers or operators to go around.
There was a need for trained people to fill these new roles, but there just weren't many trained folk around.
This turned into a bit of a compounding issue.
In order to train new initiates, you needed access to a computer.
There were not many computers in the world, and the ones that existed were big, expensive, and fragile.
Berkeley calls this last piece the VIP issue.
Machines in this era were scarce, so they were only really used for very important problems, VIPs.
Training, while important, wasn't a very important issue. As a result, there was a gap.
You need to train up more digital folk to work with computers, but there weren't enough computers
for them to learn on. One possible solution, which Berkeley posits, is to create a new type
of computer. Small computers. This, I think, is a fun approach. He proposes that we could train
initiates on small, toy-like computers. That avoids the VIP issue by, well, making unimportant
computers. Look at it this way. A tiny and underpowered machine can be used to teach all the fundamentals used in larger machines.
Those machines can be made cheaply and in quantity.
Further, since they're simplified computers, they aren't capable of tackling very important problems.
This makes a whole new type of machine, a whole new approach to computing.
A, dare I say, personal computer. But I get ahead of myself.
Berkeley is talking about a new approach to building machines, about intentionally building
little dinky computers and then using those as tools to train up people. Better still,
he claims a computer exists that fits these criteria.
That computer is one we've already discussed.
Its name is Simon.
I find it only fitting to close out this episode on computer books
with an actual computer.
I did say this was a ramble, so I believe I'm permitted.
And besides, I think I've been pulled in by Berkeley's work.
And really, why not?
It's a really interesting body of work, and the historic context makes it all the more so.
So, Simon. Back in Giant Brains, Berkeley uses Simon as a theoretical machine. He argues that
the best way to understand computing is to understand a computer. So he builds up this tiny simplified
machine. And yeah, very fair point. Whenever people ask me the best way to learn programming,
I always point them towards case studies and projects. Simon fills this same role. It's just
this little 2-bit computer. It only has 4 operations. It can only blink data on two light bulbs.
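Just to make that concrete, here's a minimal sketch in Python of what a machine in Simon's spirit might look like. To be clear, this is my own toy, not Berkeley's actual design: his four operations aren't spelled out here, so the ones below, a load, an add, a show, and a halt, are hypothetical stand-ins.

# A toy "Simon-ish" machine: 2-bit registers, a program reel, a data reel,
# and two lights for output. The operation names here are hypothetical.
MASK = 0b11  # everything gets clamped to 2 bits

def run(program, data_tape):
    registers = [0, 0]       # a pair of 2-bit registers
    lights = [False, False]  # the two "eyes"
    data = iter(data_tape)   # the data reel, read in order

    for op, arg in program:  # the program reel
        if op == "LOAD":     # pull the next number off the data reel into a register
            registers[arg] = next(data) & MASK
        elif op == "ADD":    # add register 1 into register 0, dropping any overflow
            registers[0] = (registers[0] + registers[1]) & MASK
        elif op == "SHOW":   # blink the result out on the two lights
            lights = [bool(registers[0] & 0b10), bool(registers[0] & 0b01)]
        elif op == "HALT":
            break
    return lights

# 1 + 2 = 3, which shows up as both lights on.
print(run([("LOAD", 0), ("LOAD", 1), ("ADD", 0), ("SHOW", 0), ("HALT", 0)], [1, 2]))

Even a toy that small gets the fetch-and-execute rhythm across, which is exactly the teaching job Berkeley had in mind.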
Berkeley also posits that you could build Simon for a few hundred bucks.
Now, that is 1940s money, so still pretty expensive.
But if you compare that to a real VIP type of computer, well, that price tag may as well be a song and a dance.
The first real-life Simon was built at Columbia University in 1950 by two grad students and an unaffiliated mechanic.
The result was a working version of Berkeley's theoretical computer.
The grand cost was just $300, which is under $4,000 in today's money. So we're talking more used car cash and less house money.
That same year, after a demo of the Columbia Simon, Berkeley publishes an article on computing
in Scientific American. This outlines the current state of the overall Simon project and gives a
basic description of the little machine. At this point, no changes have been made to the machine. At least,
no substantial changes. The power supply has been redesigned by the Columbia team, but
I think that's it. The article does, however, share more of the ideology behind this tiny computer.
One detail that I didn't pick up before comes down to the construction. 1950 is well into the vacuum tube era, but Simon doesn't
use tubes. Instead, it's built using relays. That puts it at a huge technical disadvantage.
Relays are slower at switching than vacuum tubes. The upsides are simplicity, cost,
and power consumption. Relays are much easier to understand than tubes. You don't need
to go into the thermionic effect or heating and cooling or really much of anything. Relays are
just tiny automatic switches that use an electromagnet and a wire. The magnet, when
powered, attracts the tiny wire closing a contact. Simple, that's all you have to say.
This also simplifies the machine's circuits and power requirements. Vacuum tubes need power for switching, operating,
and heating. That draws a lot of current, and oftentimes requires a number of different
voltages. So while technically superior to relays, vacuum tubes are a bit harder to deal with. The final factor is cost.
Berkeley assures us in the article that relays can be bought surplus for 60 cents a piece. Simon
doesn't need a huge amount of relays, but it does need quite a few, so this cost cutting really pays
off. Berkeley also explains what Simon's purpose is in more absolute terms.
This is where we see the genesis of Simon as a small computer for education.
Berkeley also envisions the machine as a practical computer, just on a small scale.
If Simon was extended and tweaked, it could be used to solve more complex problems while staying out of the realm of VIPs.
The article closes with this passage, which I'm just going to read in full. I'm hoping it has the same effect on you that it had
on me the first time reading it. To quote, some day we may even have small computers in our homes, drawing their energy from electric power lines like refrigerators or radios. These little robots may be slow, but they will think and act tirelessly.
They may recall facts for us that we would have trouble remembering.
They may calculate accounts and income taxes.
Schoolboys with homework may seek their help.
They may even run through and list combinations of possibilities
that we need to consider in making important decisions.
We may find the future full of small mechanical brains working about us.
Let that sink in.
This is written in 1950.
Berkeley is proposing personal computers,
small machines that fit in the home that are owned and operated by a single
person. And this isn't just idle speculation either. Simon is the first draft of this future.
It's a silly little machine, but in a year, in just one year, it went from theory to practice.
Berkeley had already witnessed the beginning of the computer revolution in earnest, and before his eyes was witnessing the first small-scale digital computers.
He can't just be writing this as idle musings or hopes.
This is something that's going on.
This is one of those things that leads to a reassessment.
It has to, right?
Berkeley has this little and cheap machine,
and he's essentially calling it a personal
computer. That language doesn't exist yet, but I think it's clear that he's flirting with the term.
So is Simon the first personal computer? Does this push our timeline back even further?
The current reigning champ of the PC war here on Advent of Computing is the LGP30.
That's a machine that was first sold in 1956.
It's about the size of a desk, could be operated by a single person,
plugged into a normal wall outlet for power, and cost a mere $47,000.
That's prior to inflation.
This thing is in the price range of a new house once things inflate up.
I call the LGP30 a personal computer because its designer, Stan Frankel, intended it to be
such a machine. It was designed as an entry-level machine that would make computing more accessible.
It had cost-cutting measures. It's easy to operate. It uses normal power from a normal wall outlet.
In all ways, except the price, it's a very personable machine.
How does Simon stack up?
Well, it's not really a competition here.
Simon, in 1950, isn't a useful computer.
It's only useful as a toy for initiates.
It's not going to be churning away any real
problems anytime soon. So I don't think it's quite there, but it's very clear that Berkeley
is on the warpath here. The other feature to note is that Simon is very much a kit machine.
You can't buy a pre-built Simon computer. That's a big strike against personability, I think.
That brings us back to 1956 and Computers, Their Operation and Applications. This is where we see
the results of one of Berkeley's predictions. He had said in 1950 that Simon should be easily
expanded. Well, it was. This is where things get tricky. And I know, I say that every time we talk
about early computers, but these machines were just built differently. Simon was already a strange
beast, and its expansion made it more so. When you read Berkeley's description of the new Simon,
he states the machine can read numbers up to 255 and output numbers up to 510.
That may strike you as strange. It did for me. Usually when you're talking about computers,
the big descriptor is so-called bittedness. You have an 8-bit computer or a 64-bit machine,
or maybe 24 bits if you're a cool dude. That denotes the natural size of data the
computer likes to work with. 255 is, well, that's 2 to the 8th power minus 1, which, yeah, whatever.
Convention means that you start counting at 0, so that's actually 2 to the 8th power worth of values. So, in other words, that input figure implies that New Simon is an 8-bit computer.
But then we get to that output figure.
Why is it 510?
Well, that's 2 to the 9th power minus 2.
Okay, weird, but whatever.
That's just under 9 bits.
Deal with some convention stuff and maybe an extra bit.
Let's just call it 9 bits.
So we have a machine that can read 8-bit numbers and output 9-bit numbers? That kind of implies that it's partly 9-bit, right? So what's going on here? Well, weird old stuff is basically what's
going on. Simon, especially New Simon, is a cool case study in older design.
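Before digging into the registers, here's the bit arithmetic I just waved my hands at, written out so those two limits have something to anchor to. A quick sketch in Python:

# The largest value that fits in n bits is 2**n - 1.
print(2**8 - 1)  # 255 -- the input limit, which is why it reads as an 8-bit machine
print(2**9 - 1)  # 511 -- what a full 9 bits could hold
print(2**9 - 2)  # 510 -- the quoted output limit, one shy of a full 9 bits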
So check this out and maybe cringe in horror. Simon's internal storage, what we'd call memory,
is composed of 16 registers. Now, the nomenclature here is already old school.
Many older machines describe their memory locations as registers. Just go with it.
Each of Simon's registers, even after its expansion, was actually just a 2-bit memory cell.
That's just pretty weak.
But keep in mind, Simon wasn't for very important problems.
However, Simon was just strong enough to pull some cool tricks.
This specific one, the reading in 8-bit numbers, was done by grouping registers.
If you input an 8-bit number, that would actually span 4 registers.
A 9-bit number would be 4 registers plus an extra bit.
I have a feeling that it would actually be 10-bit because things go in units of two,
so you'd have five registers for that, but either way, regardless, this trick is accomplished by
grouping together registers. Of course, Simon didn't have hardware support for the trick.
It didn't know that you were linking up its registers. The trick operated all in software, and it was backed
by new instructions. New Simon did have some added memory for instructions. Specifically,
its instruction register was now upgraded to be 4 bits wide. That's huge! That means Simon could
recognize more than 4 instructions. Two of those, add with carry and negate with carry, helped with the 8-bit math
trick. Let's say you were adding two 8-bit numbers. All you'd have to do is add the first two registers,
then move to the next set of registers, and add with carry. That way, if the first result was
larger than two bits, it would spill over to the next operation. So just by adding a single new
instruction, an addition that cared about a carry bit, Simon could do a lot more. A similar trick
could be done for a subtraction, and I think also multiplication and division if you were
really ambitious.
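To make that trick concrete, here's a minimal sketch of the register-grouping idea in Python, assuming I'm reading the scheme right. This is my own illustration, not anything from Berkeley's plans: the helper names are mine, and the point is just how add-with-carry lets a pile of 2-bit registers behave like one wide accumulator.

MASK = 0b11  # a single register only holds 2 bits

def to_registers(n):
    # Split an 8-bit number across four 2-bit registers, lowest pair first.
    return [(n >> shift) & MASK for shift in (0, 2, 4, 6)]

def add_with_carry(a, b, carry):
    # One register-sized addition: return the 2-bit result and the carry out.
    total = a + b + carry
    return total & MASK, total >> 2

def add_grouped(x, y):
    # Add two 8-bit numbers one register at a time, rippling the carry along.
    result, carry = [], 0
    for a, b in zip(to_registers(x), to_registers(y)):
        value, carry = add_with_carry(a, b, carry)
        result.append(value)
    # Reassemble the grouped registers, plus any final carry, into one number.
    total = sum(value << (2 * i) for i, value in enumerate(result))
    return total + (carry << 8)

print(add_grouped(255, 255))  # 510 -- exactly the output limit quoted above

One carry-aware instruction is all it takes to turn those tiny registers into something that can do real arithmetic.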
By the time Operation and Applications is published, there were at least three Simons, one in Columbia that we've already discussed, one in California, and one in Kansas.
Were these machines built at universities? In garages? Well, we don't exactly know.
Berkeley doesn't give us more detail than that in the book. The short answer is that these machines could have been made
anywhere. In 1952, Berkeley published full plans for Simon. These included schematics,
parts lists, and instructions to build and operate the thing. You could order these plans for just
$5.50. By 1956, he had sent out around 400 copies of these plans, so in theory, there may have been more than
three Simons in the wild. I did a little searching, and I can't turn up anything else on the specifics
of real-life Simons. So, did things go
as planned? Oh no, they didn't, but we didn't really have a plan going into this anyway, and
we did stumble on a very interesting story. All these digital introduction books start with Giant Brains in
1949. Following that thread, and Berkeley's other writings, gives us a valuable look at how
computing was changing year to year in this earliest epoch. I think the lack of justification
in Computers, Their Operation and Applications is especially telling. I did do some other reading for this
episode that didn't make it in. I initially planned to have some Osborne and, yes, even
Norton in the mix. Those books, released decades later and dedicated to microcomputers,
include opening chapters that explain why the layperson should know about computers.
These later texts have digital justifications.
There's something there that I want to revisit at a later date. I don't know if it's Berkeley's
optimism shining through, a shift in target audience, or an actual change in the view of
computers. But there's something going on that I just haven't been able to untangle yet.
So, at the end here, have I been able to answer any of my initial questions?
I know I've taken a thin slice of the genre,
but I think we have enough to start working up some answers.
These popular computer books didn't come out of academics.
The tradition starts too early for digital academics to even exist.
That said, they do fill in a gap.
Operation and Applications is very explicit about this. The book itself and its main machine,
Simon, fill a gap in training and education. In that sense, these two early books are all
about accessibility. The emphasis is on fundamentals and how those can be applied
to real-life machines. I think that's fairly clear from both of these works. By learning how
machines work, you can apply that knowledge and reasoning to any real-life computer. That becomes
all the more useful once we transition into the later 50s and have too many computers to fully count.
And finally, the main theme.
I think this is also simple.
To quote Ted Nelson,
you can and must understand computers now.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of computing's past.
And hey, if you like the show, there are a few ways you can support it. If you know someone else who'd be interested in the story of computing,
then please take a moment to share the show with them. You can also rate and review the podcast on Apple Podcasts and Spotify. If you want to be a super fan, you can support the show directly
through Advent of Computing merch or signing up as a patron on Patreon. Patrons actually just finished closing out a poll for a bonus episode.
I need to officially close that, but a special bonus episode should be coming later this month.
You can find links to information on merch and the exciting bonus episodes
over at adventofcomputing.com. And as always, have a great rest of your day.