Advent of Computing - Episode 110 - The Atari 2600
Episode Date: June 18, 2023

I don't usually cover video games. When I do, you know it's for a weird reason. This episode we are looking at the Atari VCS 2600, its strange hardware, and how it fits into the larger story of the rise of microprocessors. These new tiny chips were already changing the world, but they brought along their own problems.

Selected sources:
https://spectrum.ieee.org/atari-2600 - Inventing the Atari 2600
https://archive.computerhistory.org/resources/access/text/2012/09/102658257-05-01-acc.pdf - Al Alcorn Oral History
https://www.digitpress.com/library/interviews/interview_bob_whitehead.html - Bob Whitehead Interview
Transcript
Something that's never really made sense to me is the whole gaming console vs. computer argument.
If you run down the bill of sales for each side, you wind up with pretty similar parts lists.
I mean, both the PS5 and the Xbox Ones are now x86-based machines.
They each use AMD processors.
You can get very similar processors for a desktop computer.
You could build something with a very similar spec to a PS5
and just use that as a normal everyday computer.
There are differences in libraries, different graphics hardware,
and some customization to the processors,
but ultimately, they're just fancy computers.
A desktop PC can do anything that a modern console can.
This isn't just a recent trend, either.
I remember when the PS3 came out, folk were excited that it could run Linux.
Maybe that gives you an idea of the company I keep.
Anyway, the PS3 was able to do something that's usually relegated to desktop computers.
Sony even officially supported this feature, at least for a while.
Before the PS3, the PS2 could pull a very similar trick.
And that time, Sony even released their own Linux distro for the thing.
I mean, the original Xbox natively ran a version of Windows. The point
I'm getting to is there's this blurry line between consoles and computers. But you must ask,
has it always been this way? Surely this is some kind of newer development, right?
Well, dear listener, I'll let you in on a secret. It's actually software
all the way down. And when there's software, there's bound to be a computer somewhere nearby.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 110, the Atari 2600.
That's right, we're doing a video game episode for the first time in, well, quite a long time.
I think the last game episodes were all the way back in October for Spook Month.
Now, I want to clear something up here.
I do episodes on video games somewhat infrequently. That's not because I dislike games. I, too, can have fun.
Sometimes. I don't hold onto some stodgy belief that, oh, mere games don't deserve a place in
the annals of computer history. Far from it. Video games tend to push the state of
the art, which gives them a very unique position in the story of computing. Rather, I tend to shy
away from coverage of games simply because, well, there are historians out there that are doing
a really good job. One of my big goals with Advent of Computing is to avoid stepping on
other people's toes.
I don't want to duplicate work.
I'd rather find a way to contribute something new to the conversation or bring that conversation to a wider audience.
With that said, why have I decided to throw my hat into the Atari arena?
Simply put, I think the story of the 2600 fits into a larger narrative I've been covering. I think I
can take this down an interesting path that aligns with some of my previous episodes.
One of the big story arcs on the podcast has been how microprocessors totally changed computing.
How these dense little chips had the power to reshape the world. Last episode, we discussed
this kind of tangentially.
We were looking at how microcontrollers arose as a way to make smaller and cheaper calculators.
We've spent countless episodes discussing early microprocessors themselves and how they led to
everything from the Altair 8800 to the PC. But we haven't really discussed how the advent of the microprocessor changed the world
of video games. It's time to tackle that page of the story. The Atari VCS 2600, or just the 2600,
gives us a fantastic window into what the microprocessor was able to do when it wasn't
being used to, you know, crunch numbers or run beautiful spreadsheets.
So, come along as we travel to a land of beeps, boops, and blocky graphics. This episode,
we'll see just what a microprocessor could do for gaming, how that changed the market,
and what potential downsides this change could bring. Along the way, I want to pay close attention
to how these new machines
were programmed. Atari wouldn't be the first company to make a microprocessor-based console,
but they were early enough to be entering into pretty uncharted territory.
I think I've hit this fun point in the show where we have a critical mass of topics, finally.
As such, this episode is going to start with a bit of a callback.
I'd like to direct your attention to episode 48, Electric Ping Pong.
There's this weird group of inventions that show up during the earliest era of video games.
Virtual table tennis.
Pong-like games have been independently invented at least three times,
depending on which sources you want to trust and who you think is a liar. I think this comes down
to simplicity. If you're bootstrapping from nothing to a game on a TV screen, you usually
start by making a blob, then moving that blob around somehow,
adding some way to control the blob, you know.
From there, it's a pretty small jump to add in collision detection, and that, dear listener,
leads pretty directly to table tennis.
This early era of gaming, this tennis era, starts at the beginning of the 1970s. This is well into the digital age,
and these machines are digital, but they aren't programmable. These aren't really computers.
Instead, they're built from custom digital logic, so-called transistor-transistor logic. That just means you have a transistor plugged into another one. Each game was built
from a circuit board covered in discrete transistors and sometimes diodes or other
low-density integrated circuits. Everything was wired up to perform one specific operation.
So, for instance, Atari's Pong board had a specific circuit just to handle moving each paddle.
In 1972, we get the Magnavox Odyssey, which is the first home console, as well as the first home electronic tennis game.
Now, the timeline here is kinda weird.
You may assume that arcade gaming came well before home gaming, but that's not exactly the case.
Computer Space, the first electronic digital arcade cabinet, shows up in late 1971.
The Odyssey hits the scene just about a year later, so there isn't really an era where arcade games exist totally separate from home games. That said, there are some major differences between the two markets. It all comes down to money. Really, most things do.
This is kind of basic stuff, but we should be explicit here so we're all on the same page.
Arcade cabinets are coin-operated. They bring in actual money on a per-play basis. A cabinet will sit in some
predetermined location. As users come through, coins are deposited, games are played, then the
user leaves. For an arcade game to be profitable, you don't necessarily need a lot of actual cabinets.
In other words, there may only be a few dozen copies of each machine, maybe a hundred.
You aren't setting out to manufacture millions or even thousands of copies of a single arcade game.
That means a single cabinet could actually be pretty costly to produce. That upfront cost
doesn't matter quite as much because the money flow is based off how much each cabinet is used.
Home consoles, on the other hand, are a very, very different beast. Let's take the Odyssey
as an example. That console was sold for $100 in 1970s money. That's a per-unit cost.
In this mode of production, Magnavox made money from sales of the actual hardware,
not on a per-play or contract basis. So the cost of the unit mattered a lot more.
There's this inbuilt drive to make consoles cheaper. Now, the Odyssey is a pretty primitive
machine. It relied on discrete components to operate. We're talking straight-up transistors
and diodes soldered right onto a circuit board. This isn't exactly rocket science yet. While this
will work, it's actually a pretty expensive way to build a product. Discrete components are pretty
cheap, usually selling for less than a penny in bulk, but if you need to use a few dozen
of those, well, that starts to add up. Lots of components also lead to big, complex circuit
boards. These boards are usually billed by the size when you manufacture them, so it's in your
best interest to have the smallest circuit board possible. Plus, complexity is usually a bad thing. More traces
and components means more places for something to go wrong. That means more time spent debugging
and more money spent on development. It's a compounding kind of issue. The Odyssey only
really had one game. Tennis. The best video game ever created. Now, that is a bit of a reduction.
You could actually play a number of games on this console by switching out cartridges,
but they were all just tennis.
Each cartridge was really just a set of jumper cables in a nice little package.
These cartridges reconfigured how tennis worked.
Add in a transparent screen overlay and some
instructions, and you can technically have more than one game. It's just ultimately that each
game had a wall, one or two paddles, maybe a ball. Things could be turned off and on or made solid or
pass-through, but come on, it's still tennis. This is the dirty secret behind many early video games.
They're really just tennis. It somewhat reminds me of the phenomenon of carcinization.
It's this evolutionary fact that everything tends towards becoming a crab. Just instead of
developing pincers and carapaces, all these games are developing paddles and balls.
Around the same time the Odyssey hit shelves, a newly formed Atari released their first arcade game, Pong. If we want more evidence of the theory of tennisification, well, here it is.
Pong is tennis. That's it. It's a little more complicated than what shipped with the Magnavox
Odyssey, but it's still tennis. Pong kept score, and it had a dotted line between the paddles
instead of a solid one. There are also some differences in the ball physics that technically
make Pong more complex than Odyssey-style tennis. This complexity, believe it or not, was possible thanks to better hardware. Pong actually used integrated circuits. But that's really neither here nor
there. The point is, Pong was tennis, and it made Atari enough money to establish themselves
as a contender in the arcade space. Atari's jump to the home market is a little weird to discuss. Al Alcorn was the original
designer of Pong, the arcade game. In an oral history interview with the Computer History Museum,
Alcorn explains that he initially designed Pong as a homebound video game. To quote,
It was set up to run through a little modulator that I built for channel 3 or 4 because it was
going to be a home set, and put it in a box and the coin-off thing was a secondary thing that
happened. I remember going to Christmas at my mother's home that year. I guess it had to be
72, December 72, and I brought it to Christmas. And my brother had two young kids, about 5 years... End quote.

The original Pong console would even use a stock TV as its display.
Atari could have, in theory, put Pong in a nice case and sold it to consumers.
But there was that issue of cost.
As Alcorn explains in the same interview,
Pong used a lot of chips. We're talking 70 in total. These are all pretty simple,
pretty off-the-shelf chips, but still, on a pure bill-of-parts basis, Pong would have been too
expensive for an untested market. It couldn't have really competed with the Odyssey, especially since they're both tennis games. There's not that much of a differentiating factor. So we can digress back to the same
position. Atari had an arcade game, there were some very simple home games, and they wanted to
get in on the market. So we need some kind of leap. And at least as Alcorn tells it, this was a pretty
wild leap. Nolan Bushnell, the co-founder of Atari, had pretty big plans for the company.
He wanted to be, quote, bigger than Bally. At the time, Bally was one of the de facto arcade
manufacturers. At least, they cornered the market for arcade-like machines. They mainly
produced pinball and slot machines, which did make them a force to be reckoned with.
Atari would easily meet that goal. By 1974, the dream was a reality. Atari's arcade business,
which was more than Pong at this point, was bigger than Bally. This must have been an especially satisfying victory
due to the fact that Bally had initially provided funding for the fledgling Atari.
From there, things could only go up. While the arcade side of the company was grinding away,
Alcorn decided to head down a new and exciting path. During this period, Nolan was keeping lists
of plans for the company. One item on one of these
lists that stuck out to Alcorn was a home console. And so a wild project started, or at least an
attempt at a wild project. At the time, Atari didn't know anything about consumer electronics.
Remember that arcade games and home games were,
for all intents and purposes, totally different markets. Alcorn's gut feeling was that they would
need some custom hardware. This was partly to lower per-unit cost, but partly to prevent
intellectual property theft. This gets to be a pretty complicated topic. Atari's early machines all used off-the-shelf
parts. There's no code, just well-labeled chips on a circuit board. Copying the original Pong
cabinet was really a simple task. Just open it up, take a few photos, and you're ready to go.
You can make a copy in your garage if you needed to. Throwing that type of design into the home market, that would spell doom. At least,
a Pong cabinet was ostensibly kept under lock and key, so custom chips kind of ended up becoming a
must. The problem was, Alcorn wasn't a chip designer, so he hired one. In 1973, he brought on one Harold Lee. Alcorn threw all
these big ideas in front of the new recruit. He wanted to make a set of chips that could be
repurposed for multiple Atari consoles. Basically, a new toolkit, a new platform for the company to
build around. Lee, the expert here, thought that might be a little too ambitious for
a first attempt. He gave a counteroffer. A direct port of Pong. That would be a fitting start to
Atari's new venture, and hopefully it wouldn't be too difficult. Alright, so this is an episode about the Atari 2600, so why are we still talking about Pong?
Well, here's the deal.
The home version of Pong was relatively successful, but there were huge problems with this approach.
This isn't so much on the technical side of things.
The issues that crop up have more to do with market forces.
Harold Lee designed a single custom silicon chip that could play Pong.
It's literally tennis on a chip.
The chip is then fabricated by AMI semiconductors.
This is occurring right in the era of this wonderful new technology called ASIC,
Application Specific Integrated Circuits.
ASICs are these prefabricated grids of transistors and diodes that can be wired up with a custom
conductive layer. It's kind of like the next step up from a mask ROM. A client sends over how they
want transistors wired together, and the manufacturer just adds a single custom layer.
This technology was used in the mid-70s, but Atari either didn't know about it or didn't want to use it. The result is that
the Pong on a Chip project was pretty expensive, maybe more expensive than it needed to be.
Estimates vary, but I've seen numbers between $80,000 and $100,000.
So, conservatively, that's about $500,000 once we adjust for inflation.
This Pong chip, this fully custom silicon version of Tennis, would find its way into the Sears Telepong console.
It's not really a full console, more just a tennis player, but let's call it a console for now. Games at this point are kind of just tennis anyway. Telepong was released in time
for the 1975 holiday season, and it sold for just $80. At the time, it was feared that even that
price would be too high for the home market. Those fears were at least somewhat unfounded.
Telepong was successful, or at least successful enough that I can't find complaints from any Atari employees.
However, that success was short-lived.
In early 1976, Coleco released the Telstar, yet another tennis emulator. Of note is the fact that the machine was powered by one General Instrument AY-3-8500 chip. This was another tennis-on-a-chip IC. The Telstar sold for $50, and it could play a handful of tennis and tennis-like games. In other words, it was strictly better and cheaper than the Telepong console.
But it gets worse.
General Instrument didn't just sell their chip to Coleco.
Anyone could order an AY-3-8500 tennis-on-a-chip IC.
That same year, Texas Instruments would announce their own chip, the TMS-1955.
It was a full-on clone of General Instruments' Tennis chip. In other words, Atari had the market
cornered for mere months. After spending so much on R&D costs, they got swooped by more established
semiconductor companies. Atari would make a
number of these custom chip consoles, but they all faced the same issue. It was just too slow
and too expensive to do business that way. Luckily, these misadventures provided crucial
lessons for future developments. There were three key problems facing Atari's home console business in the mid
1970s. First was, of course, development cost. But this isn't just a simple running over budget.
This has to do with the second issue, obsolescence. It's okay to spend millions of dollars developing
a product if that product can bring in millions of dollars
in revenue. For home consoles, this meant you need to move a lot of units. That could either
be in a huge burst or more modest sales over a long period of time, a long product lifespan.
Atari's consoles weren't able to hang in the market very long because, as we've discussed, there was
competition. Telepong became obsolete as soon as the Telstar was announced. That cycle would
continue on. This brings us to problem three, which also brings us to the root issue at hand.
Atari's approach just wasn't flexible. Custom silicon is cool and all, but it's a very inflexible medium. It's
literally a carving, something that can't be changed on the fly. Luckily, there were better
solutions to draw from. At least, in theory there were. The home and arcade game markets were in
their infancy, but computer games were already pretty old at this point.
Atari as a company was well aware of that fact. Nolan Bushnell had first come into contact with
computer games back during his college days. During that period, he was exposed to Spacewar,
one of the first recognizable graphical video games. This was played on a big, expensive computer using fancy vector displays.
Before founding Atari, Bushnell and one of his friends, Ted Dabney,
built their own arcade version of Spacewar.
But instead of using a computer, they had to compromise on custom circuits.
There's this split between what's possible with circuits and what's possible
with code. Software ends up being far more flexible than hardware alone. If you have a
machine that can run code, then you can change what that machine does. Take Intel's 4004 as an
easy example. It was originally contracted and designed to be used in desktop calculators,
but with different code, it could just as easily be the controller for a pinball machine or a
terminal. It's flexible because it can be issued instructions. Because it's a computer.
Atari wanted to leverage that software-based flexibility. As early as 1974, their R&D team was putting
together plans for a microprocessor-based video game. This seemingly small shift from custom
chips to an actual computer could solve many of Atari's issues. It could provide a platform for
many games, a home for future products. Instead of churning out custom ICs
for each new game, Atari could become a software house. This could also open up a third-party
market, something that was hitherto impossible, although that wasn't quite on Atari's radar yet. The shift introduced its own issue: cost. Atari R&D initially wanted to use the Motorola
6800 as a base for this new console. This was a new and very capable processor. It supported a
total of 64 kilobytes of RAM, which would provide plenty of space for very complex game software. It also cost $175 per chip.
Bulk prices could knock that down, but not by nearly enough.
A console built around the 6800 would be pretty powerful,
but it would also cost an arm and a leg.
This is where we intersect with another show alumnus, Chuck Peddle. He was an engineer at
Motorola who worked on the 6800 chip, or at least worked with it. Around the same time Atari was
crying at price tags, Peddle was designing a new chip. This chip would come to be known as the 6502.
It was a cost- and feature-reduced version of the 6800. Now, before anyone gets mad,
that really simplifies things, but that's the aspect that matters to our story. It's what
mattered to programmers and designers at Atari. In 1975, Peddle left Motorola and joined up with
MOS Technology, another semiconductor manufacturer. The brass
at Motorola wasn't too keen on this new 6502 design, but MOS was willing to take a chance.
The chip would be announced that same year, and Peddle hit the road to drum up new business.
That's how Peddle would meet Ron Milner and Steve Mayer, two Atari R&D engineers. Now, Milner and Mayer were out on the hunt for a processor for their theoretical console. While on this quest, they stopped by
Wescon, a trade show in San Francisco. At the time, the 6800 was selling for $250 a piece, and the 6502 was marketed at $25 per chip.
This is where we enter into what sounds almost like a myth. There are a few tellings of this
story. The 6502 was a really new chip. Peddle only had a small number of them, but he wanted to make
an impact. He needed to instill confidence in prospective buyers.
So, Peddle pulled a trick. In some tellings, he fills a barrel with every production model of
the 6502 that MOS had on hand. In other tellings, he made a false bottom to the barrel so his
handful of chips could look like thousands. You gotta love this image. Just imagine an engineer
in a suit standing next to a big barrel of chips holding a sign that says, microprocessors for sale,
$25. The story has the right amount of charm and trickery, but things didn't exactly go down like
this. From Chuck Peddle's own oral history, it sounds like
their booth was actually barrel-themed. He explains that they had barrels for 6502s, for ROM,
for RAM, and a few other chips. The only trickery was standard trade show stuff.
While this may sound disappointing, I think it makes the story even better.
The 6502 was exciting purely on merit, no tricks needed.
It was cheap and powerful enough for most users' needs.
Milner and Mayer certainly agreed with that sentiment.
They thought the 6502 could be the chip that they had been looking for.
But they didn't go with that chip. At least,
not the original model. The eventual Atari VCS uses an MOS 6507. Welcome to the number maze.
So what's the difference between these two chips? Frankly, very, very little.
The 6507 is a cost-reduced version of the 6502, itself a cost-reduced version of the 6800.
The old 07 straight up has fewer pins.
That's right, it's literally as simple as that.
Now, the normal 6502 comes in a 40-pin package.
That means the physical chip that you wire up has 40 external pins on it.
The 07 has a paltry 28.
That may seem like an insignificant distinction,
but this is one of those small changes that snowballs.
A smaller package usually means a cheaper chip, and that's the case here. It just costs less to have fewer pins. This also
makes the surrounding circuit more simple. You straight up don't have as many pins to wire up
to things, so the circuit just has fewer complications. You don't need, say, 16 traces for
your address bus if you only have 13 bits to wire. That saves some more pennies. And a less
complicated circuit means you can create a smaller printed circuit board. That, once again, saves some skrilla. In practice, the 6502 and 6507 are kind of the same chip. They run the same code
and they use the same die. Each chip is just wired up differently. There are some small changes
having to do with interrupts, but we can kind of just gloss over that. The 07 just doesn't support
interrupts. Its interrupt lines just aren't brought out of the smaller package.
Here's the kicker, though.
The cost-cutting came at, well, its own cost.
Crucial here is that address bus.
The 6507 has a 13-bit address bus that's down from 16 on the 02.
That works out to a maximum memory of,
drumroll please, 8 kilobytes. That's not a lot. The 6502 architecture uses single-byte opcodes.
That is just a fancy way of saying that the operation code of each instruction is 8 bits wide, with the shortest instructions fitting in a single byte.
That means that a program on the 6507, with its smaller address bus,
can be made up of, at most, about 8,000 instructions.
Here's something else to think about.
The 6502 architecture is very simple.
It doesn't even have instructions for division or multiplication.
So a programmer has to write those out long form. That means if you need to multiply, you have to tack on some
boilerplate code. The same is true for many tasks on the chip. So in practice, you get fewer than
8,000 instructions that you can actually play with. Keep this all in mind as we continue.
Every time we check something off, the memory space will get smaller.
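Just to make that boilerplate concrete, here's a minimal sketch of a shift-and-add multiply in 6502-style assembly. This isn't code from any actual Atari game, the zero-page addresses are made up, and the exact syntax varies a bit from assembler to assembler, but it shows the kind of routine a programmer has to hand-roll because the chip simply has no multiply instruction.

```
; A generic 8-bit by 8-bit multiply, written by hand because the 6502
; has no multiply instruction. Zero-page addresses here are hypothetical.
num1    = $80        ; multiplicand
num2    = $81        ; multiplier (destroyed as we shift through it)
prodLo  = $82        ; low byte of the 16-bit product ($83 gets the high byte)

Multiply:
        lda #0       ; A accumulates the high byte of the product
        ldx #8       ; eight multiplier bits to walk through
        lsr num2     ; pull the first multiplier bit into the carry flag
Loop:   bcc Skip     ; bit was 0, nothing to add this time around
        clc
        adc num1     ; bit was 1, add the multiplicand into the high byte
Skip:   ror          ; shift the running high byte (accumulator) right...
        ror prodLo   ; ...and catch the bit that fell off in the low byte
        lsr num2     ; next multiplier bit into the carry
        dex
        bne Loop
        sta prodLo+1 ; store the high byte; the product now sits in $82/$83
        rts
```

That's a couple dozen bytes of precious ROM spent on something a bigger machine would do in one instruction, and you pay that tax every time you need it.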
Okay, so why was the 6507 even considered? It had this glaring issue, so why not use the better chip?
Well, that just comes down to price, price, price. Just how cheap was this chip?
That gets tricky to answer, since we're dealing with business-to-business sales.
Peddle recalls quoting Atari $12 per unit for a 6507 and a mysterious RAM-slash-IO-slash-timer chip.
So, the 6507 was being sold for, I don't know, $8? That sounds like a good
estimate that feels nice. The bottom line is that Atari was getting a really, really good deal.
I'm skipping ahead a little, but the 2600 would have an introductory price of $170.
That means that only 7% of the console's retail price went towards MOS chips.
All in all, it's a pretty good deal if you can get it.
I mentioned a mystery chip here, the other constituent of that $12 price tag.
This is the fabulous MOS 6532, better known as the MOS Riot. That is such a cool name
for, admittedly, one of the most reasonable chips ever produced. Not all computerized machines need
a whole lot of RAM. We like to fixate, or I like to fixate, on upper bounds. That kind of
gives us an idea of maximum complexity or power in a roundabout sort of way. However, not all
applications require maximum power. Sometimes you just need a little. That's where the Riot comes
into play. This one chip provides 128 bytes of RAM, a programmable timer, and two
I/O ports. Interestingly enough, this was static RAM. So in theory, data could be left on the Riot,
but I don't think that was the intended use case. Now, 128 bytes is just not a lot of memory to work with, but it's enough for a few variables.
Between the Riot and the 6507, you could actually build a full-on computer. Just add a ROM chip
somewhere, and that's a three-chip machine. Sure, it's not a very useful machine, but it would be a computer. Those are all the
chips that Atari would outsource. In practice, the 2600 involves some ROM chips, but those were
removable and largely unremarkable. It also had a secret tiny chip on the board, but it's just a
hex inverter. It's just one of these support
chips that you use to glue stuff together. Once again, hardly interesting. The final part that
made the 2600 actually functional was, perhaps in a twist, a custom IC. Now, Atari had been trying
to get away from this mode of production. That was kind of the whole point of the new console.
So why fall back on a custom chip? In a weird way, this was another cost-cutting measure.
Look at the inventory I've already laid out. The 2600 had no RAM. It straight up had no memory to play with. Sure, we have 128 bytes, but that's nothing. This paragraph that I'm reading
to you right now would be too big to fit into the riot. This presents a bit of a problem.
How can you handle video output without memory? There's this weirdness that comes from the jump between analog and digital. A standard TV
expects an analog signal, this continuous signal of some fluctuating voltage on a wire.
This newfangled console was working in the digital realm. To it, an image was composed of pixels,
discrete units of data. That data needs somewhere to live before it can be converted
to analog and sent out to a TV. Earlier consoles got around this by, frankly, not doing anything
fancy. Something like the Magnavox Odyssey only knows how to generate paddles and a ball.
The same was true of other early consoles at Atari. A programmable console, though, would
need some way to generate more flexible graphics. A ball and paddles just doesn't cut it anymore.
The traditional approach was to generate your image in memory, then convert and output that
image. That memory was called a frame buffer, and that would require RAM, which came at a premium.
Just to ballpark things, how much memory are we talking? How much RAM would you need to store one
frame of a video signal on a 2600? Once again, I'm going to indulge in a little anachronism.
The production model of the 2600 had a resolution of 160 by 192 pixels, with some caveats that we can ignore. That comes out to around 30,720 pixels in total.
It renders those pixels in 128 possible colors, which means each pixel would have to be represented by a 7-bit number.
That's a bit of an awkward size, so let's round it to 8 for an error bit or something of that
nature. Oftentimes that happens when you have 7-bit numbers anyway. So this theoretical frame
buffered Atari would require 30 kilobytes of RAM, and that's just
to store a single frame.
Not only would that blow the budget, the 6507 actually can't support that much memory.
This would be impossible to do with the current spec for the 2600.
If frame buffering just wasn't possible, then how did the 2600 actually create an image?
Tricks and compromises, dear listener, were the order of the day.
All those tricks were tucked into this final chip, the TIA, the Television Interface Adapter.
When you get down to it, the TIA is actually kind of a programmable Pong machine. I know, that sounds
like another one of my absurd reductionist quips, but it's kind of true. The TIA could only generate
a handful of things on screen. Two player objects, a ball, up to two missiles, and a playfield, a
background image that you play on. The chip was also able to detect
collisions, produce sounds, and read from controller inputs. That, dear listener, sounds
a lot like a tennis chip. You have controllers, collisions, paddles, and balls. The great innovation here is adding a little extra on top, but tennis chips
were kind of already doing that. The key difference is that the TIA wasn't hard-coded for anything,
it was programmable. Now, this doesn't mean programmable like a processor, but more that
it could be configured on the fly. I know, it's not the best word, but it's convention.
A programmer was able to change the shape of player objects and the playfield, which
were both stored as bitmaps. That means that, in effect, they were tiny frame buffers,
but we're talking bit-sized, byte-sized frame buffers, just small little things.
Each of these objects could be configured to be a certain color, and their location could be changed.
Sound was configurable in a similar way, allowing a savvy programmer to even generate music.
This is really, really basic, but it does work.
You don't have a lot of fine-grained control, you're basically
reconfiguring tennis in a thousand ways, but that can lead to some interesting outcomes.
There's just enough wiggle room to make unique and fun games, and almost no RAM is required.
But here's the kicker, or rather, another kicker. The TIA was memory mapped. That's a nice $10 word you can use
to impress computer nerds. This just means that the TIA showed up in the 6507's address space.
There were certain addresses in memory that didn't connect to RAM or ROM, but instead to the TIA
itself. That's a nice approach to take. It means that a programmer
just has to muck around with memory to set up graphics. Just put your little bitmap in the
right place and your player's shaped like an alien or a person or ET or a pong paddle. This is a
pretty common approach. However, this further restricts the already tiny memory space of the 6507.
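To give you a rough feel for what mucking around with memory looks like, here's a tiny sketch in 6502-style assembly. The register names and addresses follow the commonly documented TIA layout, but treat this as an illustration rather than code from a real game.

```
; Setting up the TIA by storing to memory-mapped addresses. These are
; the usual documented TIA register locations; note there's no RAM
; involved at all, just stores to magic addresses.
COLUBK  = $09            ; background color register
COLUP0  = $06            ; player 0 color register
GRP0    = $1B            ; player 0 graphics: an 8-bit bitmap, one line's worth

        lda #$00
        sta COLUBK       ; black background
        lda #$1E
        sta COLUP0       ; give player 0 some color
        lda #%00111100
        sta GRP0         ; player 0 now draws as this 8-pixel slice
```

From the 6507's point of view those are just ordinary store instructions. It's the TIA, sitting at those addresses, that gives them meaning.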
At first, this RAM-free approach sounds weird.
I mean, it is, right?
Maybe, or maybe not.
There is some precedent for weird low and no memory computer systems.
The best example is probably the TV typewriter.
This was basically the first cheap digital machine that could be plugged into a TV and just throw some text on screen.
It wasn't really a computer per se, more like a tech demo that you could expand into things like
a terminal. The TV typewriter appears in 1973 as a series of schematics and an obligatory kit in the magazine
Radio-Electronics. Full-on RAM was still pretty expensive, so the typewriter used a cheap serial
memory for data storage. This was essentially a big recirculating text buffer, so in effect,
the TV typewriter had no real random access memory.
It used tricks to get around the price of RAM.
In 1974, the Altair 8800, the first pre-assembled computer that a consumer could really buy,
shipped with a solid 256 bytes of RAM.
That's twice as much RAM as a Riot chip has, sure, but it's still a laughably small amount of memory. This made the Altair a remarkably cheap machine, and it worked out because users could
later buy RAM expansion cards. Low RAM strikes yet again to save costs. There was precedent for
cost-cutting tricks even before Atari laid out their console.
That's kind of what we're getting at here. Another factor that softened the sting of a
custom chip was the fact that the TIA was an actual ASIC. This meant that the actual design
of the TIA was relatively simple. At least, it was more simple than a full-on custom silicon chip.
An ASIC is an almost off-the-shelf part. The actual silicon layer of the die is already set up,
just a grid of diodes and transistors. The only custom part is a conductive metal layer that the
manufacturer adds to the chip right before finalizing it. That metal layer is the only part that the client has
to send in. It's the only part that Atari would have to design. The TIA was a custom chip, but
even this chip is part of a larger scheme of cost cutting. Now we should have a pretty good view of
what the Atari VCS is. It's a three-chip computer. ROMs, actual stored programs, were housed in
little plastic cartridges that slotted into the 2600's motherboard. All of the chips involved
were chosen to cut costs. That was the name of the game. The result is a surprisingly simple
circuit board, and a surprisingly small one. I don't have exact measurements because, come on,
you already know this, I'm a hack and a fraud, but trust me, it's a small board. Nice hand-sized
kind of thing. This all comes together to form a cheap and kind of practical machine.
I do have to mention another machine here, that's the Fairchild Channel F. You see, the 2600 was really an early
console to embrace software, but it wasn't the first. The Fairchild Channel F hit the market
nearly a year before the Atari 2600. There's even some crossover between contractors, but
that mainly has to do with the industrial design and marketing. The Channel F was, unambiguously, the first home
console to use removable cartridges that actually contained software. However, the technical details
are totally different. For one, the Channel F had a frame buffer. From what I can tell, it was
closer to a straight-up computer than the 2600. I don't want to go too far afield, just mention that the Channel F existed.
It was earlier than the Atari machine, and it was a very different type of console. If you want to
know more about the Channel F, the Gaming Historian on YouTube actually just did an episode about the
Channel F. Like, it came out while I was working on the script. It's a weird coincidence, but hey,
lines up pretty well.
Okay, so we've talked a lot about chips and circuits. That's all lame. This brings us up to the actual important part, the software. This is the whole point of the 2600, after all.
You're offloading games from hardware into software. The console itself was really just
a platform for games. It's just a pile of neat toys for programmers. And, as always,
a computer is only useful with software. I still have to print up a little sign to tap.
Two of my big questions coming into this were how the 2600 was programmed and how that process
was affected by this new hardware.
We're still at a point in time where the microprocessor is a very new technology.
The 2600 is officially released in 1977, which is around the same time as the first wave
of cheaper home computers.
This tech is just starting to become accessible.
Software for the 2600 was burned onto ROMs, which were put on these little circuit boards
with exposed edge connectors. That connector slotted into a socket on the 2600 itself.
Looking at the pinout for that connector, we run into an uncomfortable reality. We have 8 pins for data, there's a couple pins for power and ground, but the address pins, we only get 12 of those.
That means that, at most, a simple cartridge can only have 4 kilobytes of memory.
That's just 2 to the 12, or the size of the address bus that's wired to
the cartridge slot. There are some tricks you can do. There are ways to do bank switching,
for instance, but 4K is the maximum supported size on that physical interface.
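For the curious, here's a hedged sketch of one common bank-switching idea, again in 6502-style assembly. The hotspot addresses shown follow the usual convention for the later "F8"-style 8K cartridges, but schemes varied from cart to cart, so take this purely as an illustration.

```
; "F8"-style bank switching: an 8K cart holds two 4K banks, and merely
; touching a hotspot address tells the cartridge hardware which bank to
; map into the single 4K window the console can see. Real carts duplicate
; this routine at the same address in both banks so execution continues
; cleanly after the switch.
SelectBank0:
        lda $1FF8        ; any access to this address swaps in bank 0
        rts
SelectBank1:
        lda $1FF9        ; any access to this address swaps in bank 1
        rts
```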
There is also just some variance here to begin with. Not all 2600 games are even 4 kilobytes.
Most early titles only used 2 kilobytes of code, while some later games did end up
pulling some sick tricks.
The 6507 already had a pretty small address bus, and with the TIA and Riot taking up address space, well, 4K is all poor
programmers have left. Another unfortunate reality is that the 2600's hardware is just plain simple.
It does provide a platform for programmers, just not a very rich one. It's not a very comfy environment.
All of the heavy work is offloaded into software.
The only exception is the final graphics generation routines,
but that doesn't save you very much.
A 2600 programmer is in the situation where
you have to do a lot without much code.
With all these issues, it's plain to see that
programming the 2600 was a bit of a challenge. Before we get to that, though, I want to examine
a more basic question. How was the 2600 programmed? As in, what was the actual process?
Believe it or not, this is somewhat of an annoying question to answer.
This is one of those good problems, though, I will admit.
You see, the 2600 actually has a thriving homebrew scene.
There are new games released for the platform almost yearly.
So if you go looking for information on programming the console, you'll find it, but it's all
pretty modern stuff.
This is complicated by the fact that Atari didn't
really think about third-party software. That wasn't even a thing when the 2600 was developed.
So, at least early on, Atari wasn't providing programming guides or special development
hardware. We don't have a pile of contemporary tools that were used with the console.
We eventually run into third-party development kits that could be bought and used by outsiders,
but when it comes to actual first-party stuff, well, that gets tricky.
The information we do have comes from a weird route.
Activision.
For this to make sense, we need some more context, as always.
Activision was one of, if not the, first software-based gaming companies.
Sometimes they were called the first video game publisher.
The company was started from a conflict within Atari itself.
Apparently, Atari wasn't treating programmers too well.
Atari refused to credit programmers in games, they didn't pay royalties on game sales, and in general, just didn't pay programmers nearly enough. They weren't
valuing their new talent. So in 1979, a group of four developers left the company and formed
Activision. These four programmers, Kaplan, Crane, Miller, and Whitehead, were now operating on their
own. The plan was to create third-party games for the 2600, the platform they knew best.
But after the split, they didn't have access to the internal development resources at Atari.
They had to come up with their own way to program the 2600. Whitehead explained the process in an interview with Digital Press.
To quote at length, I might add,
Dave Crane, with some help from simple reverse engineering and a little input from Al Miller,
designed the hardware, and I wrote the debugger software.
Simply, it was a ROM simulator, with an RS-232 terminal interface which plugged into the Atari cartridge slot.
You would download an assembled program and then run it in a simple debugging environment.
The system needed a cross-compiling and source editing environment provided by a minicomputer to assemble the program.
A dumb terminal was used as the interface to the
development system and the minicomputer. End quote. This, I think, should give us some insight into
what Atari themselves were doing. After all, this group of four isn't that far removed from Atari.
There's also some mention of a similar setup in other interviews with Atari employees, but Whitehead
gives the most details. So here's the wrinkle that this process is working around. The 2600
wasn't a self-hosting machine. It couldn't assemble its own programs. You couldn't program
on the 2600 itself. That should probably go without saying, but it's important to point this out.
This was the case with a number of early microprocessor-based machines. The tech just wasn't 100% there yet, or
in the case of the 2600, the hardware was just too restrictive. You have to do all the development
on something else, something bigger. The first step of this process was actually writing code.
This was done on a minicomputer, possibly a PDP-11 of some sort. At least, given the era, that's a good guess for the
platform, I think. Editing here is easy. They may have even been using Vi at this point.
As for the code itself, well, that's where things get interesting. 2600 games were programmed in assembly language, specifically 6502 assembly.
That's then assembled into a program, a binary, using an assembler.
But not just any old assembler will do.
You see, the PDP-11 is very different from the Atari 2600. Most computers are. You need
a special purpose tool called a cross-assembler. This is an assembler that can handle code intended
for a different platform. So the assembler ran on the PDP-11, assuming that's the right computer,
but it output code that could only run on a 2600,
or on any 6502-like machine. It's a little muddy, but I think you get the point.
After this step, you have a nice little binary, what we'd call a ROM image today,
but that's still stuck on a minicomputer. The gap between development machine and testing machine,
computer and console, was spanned by a ROM simulator, or what you'd also call a ROM emulator.
This was really the piece that I was interested in confirming.
I'd read some threads and rumors that internally Atari just wrote test code out to physical ROM chips.
That would have been really, really annoying to deal with.
But what Whitehead explains makes a lot more practical sense to me. A ROM emulator is basically what it sounds like. It's
a circuit that can pretend to be a ROM chip. Under the hood, these are usually some type of memory,
some kind of RAM, that's wired up so data can be transferred in and temporarily stored.
The same chip is also wired up to whatever ROM interface you're emulating.
In this case, that's the 2600's cartridge slot interface.
Physically, we're talking some big, funny-looking cartridge that slots into the console,
and then on the other end, you'd have some wires running out that connect to your development computer.
This approach is more useful
than burning a pile of ROMs because it's faster and it's just more flexible. Burning a ROM takes
time, partly because the chips of this era were all UV erasable. You couldn't just plug a ROM chip
into a programmer, hit a button, and be good to go. You have to expose the chip to a UV lamp in order to erase the last
program you burned to it. So you either have to waste a lot of time or have a lot of chips.
Either way, that kind of sucks. A ROM emulator, on the other hand, that just doesn't care.
It actually does work at the press of a button. You can also pull more tricks with emulation.
Whitehead mentions a debugging interface.
I'm unclear on the details, but I can take a stab at it.
Usually these types of interfaces let you get processor state data and start and stop program execution.
Usually you can also examine some parts of memory.
It's all done from the development machine over some type of serial interface.
This takes some tricks, as with everything on this console,
but with a ROM emulator, you have full control over what the 2600 is executing.
In effect, it gives you a way to pull more tricks.
The overall approach here fits well into the broader context of early
microcomputers. In general, there are a few ways to deal with this. One was to do the whole
Activision thing. You have some special hardware that bridges the gap between a development system
and a test system. The other approach was self-hosting, which was a bit of a hit-or-miss affair as far as support in this era.
For instance, Intel had a product called the Intellec Series.
These were pre-built microcomputers that could be programmed directly in machine code.
They ran these little monitor programs that helped you manually enter numbers and debug programs.
These Intellec machines were purpose-built development hardware.
But they were able to self-host. You had everything you needed to program, say, a 4004,
right in one big blue box. Eventually, these systems would even support things like Fortran,
of course using bigger processors, but the idea is you have a single machine that's built
specifically for development. Later systems, like the crop from 1977, were even better for this.
If you were an Apple II programmer, you could get your hands on an assembler or compiler that
would run on the system itself. You could type up a program using the Apple II keyboard,
assemble it on the
Apple II, and then run and debug it on the same machine. So while the 2600 didn't have such a
cushy environment, you could work one up using some know-how and a little extra hardware.
That's the general nuts and bolts of how people were programming the 2600, which
takes us to the final leg of this journey. That is, what were people
programming? They were making games, of course. That goes without saying. It's the code for those
games, the actual software, that's interesting. We're at the point where it's clear that the 2600
was, frankly, a lame machine. I mean that in the most respectful terms possible. The hardware was just kinda bad.
You had to have external devices just to program it, and Atari didn't even really like programmers.
Why, then, was there even software written for the platform? Why would programmers still,
in the 21st century, mind you, be writing new games for the 2600?
It turns out the weird little VCS is something of a nerd trap.
Steve Mayer was one of the first programmers to work on the 2600.
There's this great IEEE article called Inventing the Atari 2600 that has interviews with many of these early programmers.
Here's something that Mayer had to say about the platform.
Quote,
Writing the kernels that make up the game programs
is like solving acrostic puzzles with lots and lots of possibilities.
There's a certain class of programmer that can deal in microcode like that.
If it were easier to program,
we wouldn't have these programmers because they'd be bored.
The VCS is an absolute challenge."
Some weird terminology here right off the bat.
Some programmers use the term kernel to refer to programs on the 2600.
This is kind of an apt description, it just sounds kind of weird in modern context.
Today, kernel is more often used to describe a core component of an operating system.
It's the chunk of code that controls the computer.
The similarity is that most kernels have to provide everything they need to run a machine.
You're coming in fresh from boot, after all.
The 2600 games had to do the same thing.
They started up on a platform devoid of anything useful or helpful.
Microcode, well, that's another weird usage of the term.
In this case, Mayer doesn't mean microcode, microcode, rather just small code.
The first games for the 2600 were only 2 kilobytes in size.
That's around 2,000 characters for your instructions and
any graphical information needed. It very much is micro-sized code. I know, I've kind of sucked
the life out of a good quote now, but the point is that the 2600 was hard to program,
and that's what would draw programmers to the platform. One of the main reasons that I personally like to program is that I like solving puzzles.
Programming is like solving a huge puzzle every day.
Every project you come to has a set of requirements, things you want to do, and limitations.
You as the programmer have to work out the how.
How do you want to accomplish a given task with a set of tools?
How are you going to stay in the box, and how are you going to bend the edges a bit?
Now I assure you, dear listener, I'm not an oddity here.
Many such cases do exist among us keyboard jockeys.
The limitations of the 2600 are severe, to say the least.
And that's exactly what makes it a fun puzzle.
You have a machine that's essentially built to play fancy tennis.
It has no RAM.
You only get a few pages of code to work with.
That's a tough break, but there's enough silicon to do cool things.
Programming the 2600 successfully came down to tricks.
I can't give you an exhaustive list because I'm not an expert.
My apologies.
I'll link out to that IEEE article I quoted in the episode's description.
I recommend giving it a read.
What I will do is give you a rundown of one trick that I think is emblematic of these code acrostics.
This trick doesn't have a uniform name, so let's just call it player bashing. Okay, remember how I said the TIA can
display a player object in any shape you want? This is accomplished by passing the TIA a bitmap.
It's just a way to represent a grid of pixels that you want displayed.
Now, the TIA is memory mapped. All its data is just some blob in memory. So to set up a player object, the programmer just has to write that bitmap to memory. You with me so far?
Allow me to introduce another fun limitation of the console.
You don't generate an image one screen at a time.
You do it one line at a time.
A TV screen is made up of many scan lines.
The TIA only draws one line at a time, and that's under software control.
You're supposed to set up the TIA's data, then tell it to draw a bunch of lines,
thus rendering players and balls and backgrounds and everything else to the TV.
The trick is that the TIA and the processor, they kind of run on their own, at least for the most part.
Once the TIA is told to draw a line, it just does it. It draws the line until it's complete.
That takes a little time,
during which the processor is freed up to do whatever it wants. That could be anything,
including reconfiguring the TIA's data. I mean, come on, everything's in memory. There is literally nothing to stop you from doing that.
This works in conjunction with another TIA feature, player duplication. You can set up the TIA to display copies of a player object. This was initially intended to be used to display two or
more identical player objects. But here's the thing. You can kind of just overwrite the
player's bitmap whenever you feel like it. You can do that while the TIA is in the middle of
drawing a line, or between line draws. So in effect, you can trick the TIA into drawing
different player images. It's all just data to the chip. Who cares if it's changed?
The TIA goes to draw a duplicate, the data's a little different, that's fine, it still shows up on screen.
You can get different objects. I just think it's neat. This is a smart way to work around
the limitations of the console. This is also exactly how games like Space Invaders are possible.
The invaders are just the two player objects.
To the TIA, they're just some duplicates.
But by changing the bitmap for these objects, the game can display different aliens.
It can also drop out aliens that have been destroyed.
It's a total hack job, but it works. In general, games on the 2600 work because the platform has just enough wiggle room to pull these kinds of tricks. It's a pretty small
box to program in, but its edges are pretty easy to bend.
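To give you a feel for what that bending looks like in practice, here's a minimal sketch of the per-scanline rewriting trick in 6502-style assembly. The TIA register addresses follow the usual documentation, the sprite table and its height are made up, and a real kernel juggles far more than this, so take it as an illustration of the idea rather than working game code.

```
; Rewriting player 0's bitmap between scanlines. The TIA just keeps
; drawing whatever byte sits in GRP0, so swapping that byte every line
; builds a tall, detailed sprite out of a register that only ever holds
; eight pixels at a time.
WSYNC   = $02            ; a write here stalls the CPU until the scanline ends
GRP0    = $1B            ; player 0 graphics register

        ldx #0           ; X indexes which row of the sprite we're on
Kernel: lda SpriteRows,x ; grab this row's 8-pixel slice from ROM
        sta WSYNC        ; wait out the rest of the current scanline
        sta GRP0         ; hand the TIA the new slice during the blank
        inx
        cpx #8           ; a hypothetical 8-row-tall sprite
        bne Kernel

SpriteRows:              ; one byte per scanline of the sprite
        .byte %00011000
        .byte %00111100
        .byte %01111110
        .byte %11011011
        .byte %11111111
        .byte %00100100
        .byte %01000010
        .byte %10000001
```

Do the same kind of rewriting while the TIA's duplicate feature is switched on, and that's roughly how a row of invaders ends up looking like different aliens.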
Alright, this brings us to the end of today's episode.
What's the moral of the story?
Is this another victory for the microprocessor?
I think it's actually more complicated than that.
The rapid development of better and better silicon technology,
that definitely changed video games.
Was that change for the better? Eventually it
would be. Games made it into the home because of the leaps made at companies like Fairchild and
Atari. But this first generation of software-based consoles were kinda hack jobs. I don't think
there's a better way to put it. The 2600 is a really good example here. It's cheap
enough to sell to consumers, which means that it's missing a lot of useful features. You end up with
this really primitive machine. It uses a cost-reduced version of a cost-reduced processor.
It has almost no memory. Its graphical capabilities are so primitive that programmers have to trick the
console into doing things. The appearance of the microprocessor didn't magically solve any issues.
The microprocessor gave an alternative to a lot of older technologies. Custom ICs could be
replaced with a CPU and some code. However, that also introduced its own problems.
On the other hand, these computers on a chip introduced new possibilities. The old Magnavox
Odyssey could play different games by switching out a cartridge, but that was all smoke and mirrors.
With CPU-based consoles, you could actually load up new games. That led to a subsidiary market, to publishers like Activision
who didn't even make consoles themselves. So while the microprocessor didn't solve all the
woes of early video game companies, at least not overnight, it did change the landscape pretty
quickly. Here's what I want to leave you with. This weird transitionary period isn't something unique to video game consoles.
We've seen very similar phenomena in the wider history of computing.
The jump from analog to digital machines was a hard one.
It introduced new problems, but also changed the horizon of possibilities.
New mathematical methods were developed on the
first generation of computers, methods that could only work on a digital machine.
These computers literally made it possible to do new kinds of math. That's a fact that I will
always throw around whenever I get the chance. In the same way, the jump to microprocessor-based consoles would change
the type of games that could be made. I think that speaks to the profound power of the computer.
This technology can make a lot of new troubles, but it can also change the world.
Thanks for listening to Advent of Computing. I'll be back in two weeks' time with another
piece of computing's past.
If you like the show, there are a few ways you can support it. If you know someone else who'd
also be interested in the history of the computer, then why not take a minute to share the show with
them? You can also rate and review the podcast on Apple Podcasts. If you want to support the
show more directly, you can become a patron on Patreon. Patrons get early access to episodes,
polls for the direction of the show, and bonus episodes. I have a poll up right now for the next
bonus episode, so if you want to squeak in and get in on the action, then I highly recommend
signing up now. You can find links to that and everything else at my website, adventofcomputing.com.
If you have any comments
or suggestions for a future episode, then go ahead and get in touch. You can reach me on
Twitter at AdventOfComp, and you can also just shoot me an email. As always, have a great rest of your day.