Advent of Computing - Episode 106 - Digital Eyes
Episode Date: April 16, 2023

Back in episode 90 I made a passing reference to the Cyclops, the first consumer digital camera. It's this masterstroke of hackery that uses a RAM chip as a makeshift image sensor. In this episode I'm coming back around to the Cyclops and taking a look at the origins of digital imaging in general.

Selected Sources:
https://www.youtube.com/watch?v=1gmSeVfmZHw - Terry Walker CHM lecture
https://sci-hub.ru/10.1109/6.591664 - The origins of the PN junction
https://sci-hub.ru/10.1364/AO.11.000522 - The silicon vidicon photometer
Transcript
The seasons are finally starting to change up here at Advent of Computing HQ.
I live up in the Redwoods, which I don't think is a huge secret.
Basically, any image that I post online will have one of those big distinctive trees in the background.
So, the scenery works against my assumed air of mystery, I guess you could say.
Anyway, we tend to have these long and wet winters that bleed far into spring.
This winter, it even got cold enough to snow at sea level, which only happens once every handful
of years. It's very pretty, but there are some issues with that type of weather. You see,
I love the outdoors. Whenever I have the chance, you'll find me outside, usually out on some trail.
But winter? Ah, around here, that season comes with some special problems.
The roads where I live are a little sketchy.
The major highways going in and out of the county tend to get blocked by downed trees or sometimes rock slides whenever the weather gets rough. Most of the really cool trails,
or at least the remote ones that I like to hit up, are only accessible via dirt or gravel roads.
Needless to say, if the highways get sketchy in the winter, then those lesser roads are
pretty far out of the question, so I've been stuck a little bit closer to home.
But seasons change, and we've just started to get sunny days once again.
As I write this, I'm actually sitting out in my garden enjoying some stray rays of UV radiation.
That's one of the cool benefits of semiconductor technology.
I can comfortably work from a computer that fits on my lap and only weighs a
few pounds. I remember when I was a kid, that dream was still a few years off, at least practically
speaking. This wouldn't be possible if we were still stuck using vacuum tubes, but I'm getting
slightly off topic here. Listen, I'm no photographer, but I like to take a few photos when I'm out in the back
country.
I like to share the natural beauty I encounter.
It's also a nice way to provide proof that I've actually completed some of the tougher
routes that I claim to have done.
Now, I don't lug around a big film camera everywhere I go.
Like I said, I'm not a photographer.
I'll usually just reach a beautiful
vista, whip out my phone, snap a few quick photos, and move on. It's a very secondary thing for me,
just a quick little way to save some scenery for later. This is all, yet again, thanks to silicon.
More specifically, this is thanks to the CCD, the charge-coupled device.
That's the actual chip that a digital camera uses to detect light and form images. It's a
fascinating little piece of technology. It's cheap, possible to mass-produce, small, and it only sips
on power. Without the CCD, I couldn't snap photos of my hikes as a second thought.
The same technology is what makes larger machines like, say, the Hubble or the James Webb space
telescopes possible. Like I said, it's a neat little device. How was it developed, you ask?
Well, some of the steps towards this chip, at least a good number of them,
were lucky accidents.
Welcome back to Advent of Computing. I'm your host, Sean Haas, and this is episode 106,
Digital Eyes. I'm actually sticking to my word for once and
coming back to a topic that I mentioned in a past episode. In episode 90, titled,
What Happened to the S100, I mentioned this curious device called the Cyclops. It was a
very early digital camera that happened to be made by a group of engineers that would
be instrumental in the success of S100 bus computers.
In that story, the Cyclops is really a stepping stone.
It's a neat project that would lead to bigger things.
But I made a promise to circle back around to that technology.
So today, we're going to look at the development of the digital camera.
There's one big reason that I'm interested in discussing the topic. The Cyclops is a masterstroke
of hackery. Its detector, the part of the camera that turns light into some sort of digital
signal, was a normal RAM chip that had its lid taken off. Certain types of memory chips are actually sensitive to light.
Normally, this is used for erasing data that's stored on a read-only memory chip.
Or, alternatively, this property can just destroy your data by accident.
These fragile chips, more properly called UV-erasable ROMs, are relatively commonplace.
At least, they were at the time of the Cyclops.
If you've ever seen a microchip with a little sticker covering its center,
then you've come face-to-face with one of these UV-erasable ROMs.
That sticker is, at least, supposed to prevent light from leaking into the chip.
Underneath, you'll find a little window that lets you view the integrated circuit.
The Cyclops uses a somewhat similar technology.
Instead of UV-erasable ROMs, it uses a RAM chip.
I always have this thing for exploiting technology in weird ways,
so the Cyclops scratches a certain itch for me.
Now, this planted a question in my head, one that I want to address this episode.
Was the Cyclops emblematic of early digital imaging? Or is it just an isolated curiosity?
To answer that question, I'm going to be diving headlong into the history of the digital camera.
This should take us all the way from the accidental discovery of the PN junction up to functioning cameras.
Is there a straight path of research from start to finish?
Or are there jumps and hacks along the way?
Will we run across more professional cameras than the Cyclops?
Only one way to find out. Before I get started,
I want to give my traditional plug for notes on computer history. We're getting really close to publishing issue zero. So right now, I really just want to get one more article. I think that will
round out the slate of articles that I have. So if you want to write about the history of computing
and get published in the very near future, I'm hoping, then go to history.computer. That website
has information on how to submit. Now, if you're worried that you might not have the experience
needed for this, well, perish the thought. I'm looking for anyone interested in talking or
researching about the history of computing.
No experience is necessary. So that website again is history.computer. Anyway, let's get back into
the show. Let's kick off this episode by kicking it old school. What even is photography? What does a camera actually do? The simple viewpoint here is that photography
allows for the duplication of an image. This is a very broad definition, but I'm being loosey-goosey
here for at least a bit of a reason. I want to be able to include things like scanning and image
digitization in the larger scope here. Normally you think of a camera as
one of those point-and-shoot types of things with lenses and all that, but when it comes down to the
digital side of things, I want to make sure we don't leave scanners out in the dark.
For us to build an understanding of digital imaging, we're going to need to understand some of the basics of analog imaging. An analog
camera works by projecting a light image onto some photosensitized medium. This projection is most
commonly accomplished using lenses, but there is some wiggle room here. Some of the earliest cameras,
so-called pinhole cameras, used very small holes in metal plates instead of lenses.
Whatever the mechanism, the effect is the same. You're trying to project a scene outside the
camera onto a plane inside the camera. But that plane inside, well, the image projected there is
a fleeting thing. It's just a region where the outside image is nicely in focus. To capture
that image, you need something that reacts to light. Once again, there's actually a lot
of wiggle room here. Analog methods rely on chemical properties of certain substances,
at least usually. The first photographs used this type of asphalt that was slightly light-sensitive.
Under bright light, the asphalt would harden.
If you slather the stuff over a metal plate, then expose it to a pattern of lights,
you end up with hardened asphalt wherever the light was brightest.
The unreacted asphalt is then washed away.
Thus, you've duplicated the pattern of light on your plate. Of course,
there are some issues with the early process. Asphalt photography never really caught on.
A better method was to use silver chloride or some other silver halide chemical. When exposed
to light, the properties of silver chloride change. Its solubility is altered and it darkens.
This is the same general idea as the asphalt method.
You slather a silver chloride solution over a plate.
Then you expose it to focus light in your camera.
You finish up by washing away the unreacted solution to reveal an image.
To actually get a nice photograph, there are a lot
more steps and there's a lot more chemistry, but I want to focus on the sensing part of the overall
process. We're dealing with specific chemical reactions that are sensitive to light. So what
other phenomena are photoreactive? It stands to reason that anything that will react to light could be used in a camera.
Human skin, for instance, can get burned with exposure to UV light.
So in theory, you could strap some lenses to your arm and create a fun image.
But I think it's clear to see there are some issues with that setup.
For one, it's kind of slow.
You'd have to sit perfectly still for maybe an hour or so to get a good image.
I haven't tested this myself, but I'd wager you'd have a pretty bad resolution on the final image.
I don't think you can burn finer details into your skin.
There would also be the side effect of, you know, increased
risk of skin cancer. I don't think skin-type photography will take off anytime soon, at least
I hope not. So, in other words, not everything that's photoreactive makes good photographs.
that's photoreactive makes good photographs. It's best for the reaction to be relatively quick.
Early silver halide plates could take minutes to be properly exposed. More modern chemical processes can require exposure times measured in fractions of seconds. The timescale here has to do with how
reactive the medium is. Early silver halide plates were only somewhat reactive
compared to newer chemical processes. Resolution is also another crucial factor. That is, what's
the smallest detail your medium can detect? This can sound a little strange in the context of analog technology. In general, analog data isn't
subdivided. It's this continuous blob of fluctuating stuff. For the most part, that's true,
but there are very technical limitations to how fine the continuum of blob can be.
In the case of silver halide photography, your resolution is limited by this thing called
grain size. That's the actual size of the crystals of silver halides that are on your plate.
These grain sizes are, well, they're really small. We're talking tens of microns here,
but that's still a limitation. What that means in practice is a little complicated to say.
We can think of a grain as somewhat analogous to a pixel. It's the reaction of these grains that
forms the final image. But this isn't like an image file. We're talking about this combination
of the representation of the image as well as the
detector for generating that image.
If your grain size is 50 microns, then you can't render anything that's projected onto
your plate if it's smaller than 50 microns.
The projected size of an image on your plate depends on the lenses you're using and the
position relative to the plate.
If you have a 50 micron grain size,
you might be able to pick up more fine detail in a photo
than if you were using 100 micron grains.
But once again, these aren't exactly pixels.
They're not uniform.
This is more a gross generalization of resolution.
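To make that gross generalization a little more concrete, here's a quick back-of-the-envelope sketch. The grain sizes and the 100 millimeter plate are just illustrative numbers, not real film specs, and real grains are neither uniform nor grid-aligned, so treat this as an upper bound on resolvable detail.

```python
def effective_resolution(plate_mm, grain_microns):
    """Rough 'pixel count' along one edge of a plate: how many
    grains of a given size fit side by side. Grains aren't
    uniform or grid-aligned, so this is only a gross upper
    bound on resolvable detail, not a true pixel grid."""
    plate_microns = plate_mm * 1000
    return plate_microns // grain_microns

# A hypothetical 100 mm plate:
print(effective_resolution(100, 50))   # 50-micron grains: 2000 grains per edge
print(effective_resolution(100, 100))  # 100-micron grains: 1000 grains per edge
```

Halving the grain size doubles the detail you can, in principle, record, which is exactly why finer-grained film resolves more.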
Now, besides medium considerations, there are just
overall issues with chemical-based photography. You have to develop a photo before it's ready to
view. This can be as simple as shaking a Polaroid picture, but it's still an added step to the
process. Most of these chemical processes are also irreversible, meaning that
once a plate is exposed to light, that's it. You're done, finito. It doesn't matter if that
exposure was accidental or on purpose. The reaction has taken place. Put together, you get
this messy, fussy process. It works, but it could be better. So what else can detect light? What
non-chemical processes exist? Surprisingly, there are a lot of options here. I'll level with you
here. My physics background is purely theoretical. I do have a degree in astrophysics, but I've only ever gone to,
you know, telescopes or observatories to check out what observationalists are doing. I don't
have a good handle on how things work there. I've seen them pull image sensors out of big
dewars of liquid nitrogen. So in my head, things that are reactive to light
must be very rare and very special. Now, I have friends who are more material kinds of physicists,
but that's just not what I studied. I had just kind of assumed that non-chemical photosensitivity
was special. You know, dewars of liquid nitrogen used to protect image sensors and all that.
But once again, my assumptions have turned out to be pretty wrong.
I'm going to take us through a few of these options here because, well, they're all pretty
darn cool.
One of the first materials to discuss is selenium.
You know, that element with 34 protons. Selenium is a weird
element. It was discovered in sulfur mines in the early 19th century. It would take the better part
of the century to discover its weird properties. Selenium can come in multiple forms, called red,
gray, or black. Each of these forms are just made up of selenium organized in different structures.
Think of, for instance, coal and diamonds. Each of those materials are simply different
configurations of carbon. Chemistry nerds, at least so I'm told, call these different forms
allotropes. So if you want to sound cool in front of chemists, there's a nice word to drop.
Carbon allotropes don't just look different, they also have different properties.
Coal will burn easily but doesn't let much light through.
Diamonds are harder to burn, they will burn, but they're mostly transparent.
The different forms of selenium also exhibit unique properties.
Black and red selenium are insulators, meaning that they don't really conduct electric current.
That's useful, but only in a boring way.
No one cares about rubber.
Gray selenium, however, well, that's where it's at.
You see, gray selenium is a semiconductor.
This is a bit of a catch-all category.
A semiconductor is a material that's not fully conductive, but also not fully resistive.
That in-between-ness leads to some weird properties.
This doesn't necessarily mean that they're just more resistive than a perfect conductor
and less resistive than a perfect insulator.
Rather, a semiconductor exhibits properties that conductors simply do not.
One example has to do with heat. Normally, if you heat a conductor like steel or copper or gold,
its electrical resistivity will increase. It becomes less conductive, less of a conductor. In a semiconductor, the opposite is true.
An increase in heat energy will make it more conductive. Another interesting property is photosensitivity. This effect was first observed in 1839 by, I'm going to butcher this French name,
Edmond Becquerel. I think that's probably okay.
He found that silver chloride crystals, funnily enough, would actually produce a small charge
when exposed to light. A similar effect was observed in selenium in 1873 by Willoughby
Smith. I'm glossing over the earlier discovery here because the later one is, well, it's frankly
a more interesting story. It was already known that gray selenium could be used as a nice resistor.
Well, the good Mr. Smith decided, hey, I'll get some of this selenium stuff, make some
new big power resistors for my telegraph. It'll be great. But things didn't really go so well.
As Smith reported in a letter to the Journal of the Society of Telegraph Engineers, quote,
the early experiments did not place the selenium in a very favorable light for the purpose required,
for although the resistance was all that could be desired, some of the bars giving
1,400 megaohms absolute, yet there was a great discrepancy in the tests, and seldom did different
operations obtain the same result. While investigating the cause of such great difference
in the resistances of the bars, it was found that the resistance altered materially according to the
intensity of light to which they were subjected. When the bar was fixed in a box with a sliding
cover, so as to exclude all light, their resistance was at its highest and remained very constant,
fulfilling all the conditions necessary to my requirements. But immediately the cover of the box was removed,
the conductivity increased from 15 to 100 percent, according to the intensity of the light falling
on the bar. End quote. I think this is the first point in favor of digital cameras all being
hack jobs and accidents. The photoresistive properties of selenium were discovered by mere chance.
Some dude with a fancy name just couldn't get his resistors to work.
Things didn't add up, and then he comes across something much more interesting.
The physics at work here are a little complicated.
It all has to do with valence bands and how electrons can move across a semiconductor.
I'm going to hand wave most of that low-level stuff away to try and give a straightforward
explanation. Electrons aren't fully mobile in a semiconductor. We can think of a normal conductor
as a pipe that electrons can just easily flow through. Semiconductors, in this analogy, are kind of
like a pipe with a valve. I know I'm going to make some other physicists mad at me, but
you know, they can get bent. I'm just giving a very simplified overview of what these act like.
Now, this valve can be opened or closed to control the flow of electrons.
The possible positions and how the valve operates are all dependent on the specific semiconductor.
In the case of gray selenium, the valve can be opened via exposure to light. In reality,
this is adding a little more energy to the system, which makes it easier for electrons to flow.
The result is that as the intensity of light increases, the conductivity of gray selenium also increases.
This is a type of photoreaction, just not a chemical one.
There are a lot of benefits here.
For one, the reaction is reversible.
If the light goes away, the chunk of semiconductor simply becomes more resistive. It never vitrifies into some dark crystal, so this makes for a repeatable
way of sensing changes in light. That fact, on its own, makes gray selenium a very enticing choice
for light detection. The effect is also analog, at least
once you're at observable scales. If we get into quantum, then nothing is actually analog, but
I'm going to steer us away from that for the time being. If you want to know more,
go to your local college. They will talk to you about it. Now, you can create a selenium sensor capable of telling you
how bright light is at a given position. In theory, that's all you need to make a digital camera.
Just rig up a plate covered in tiny selenium light detectors, then throw that in your camera,
run some wires, work out a way to commit that to an image, and you're all set. Now, sadly, there are some
issues with the process. I know, I would have loved to see some old-timey photos taken with
selenium plates. As near as I can tell, we don't have a selenium camera from this early era.
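Just for fun, here's a toy sketch of how such a never-built sensor could have worked: a grid of selenium photoresistors whose conductivity rises with light. The 15 to 100 percent conductivity boost comes from Smith's letter; the linear mapping between intensity and boost is my own simplifying assumption.

```python
def read_cell(dark_conductivity, light_intensity):
    """Conductivity of one hypothetical selenium cell.
    light_intensity runs from 0.0 (dark) to 1.0 (bright).
    Per Smith's observation, light boosts conductivity by
    15% up to 100%; the linear scaling is an assumption."""
    if light_intensity <= 0:
        return dark_conductivity
    boost = 0.15 + 0.85 * light_intensity
    return dark_conductivity * (1 + boost)

# "Scan" a tiny 3x3 scene of light intensities into conductivities:
scene = [[0.0, 0.5, 1.0],
         [0.2, 0.8, 0.1],
         [1.0, 0.0, 0.6]]
image = [[read_cell(1.0, px) for px in row] for row in scene]
```

Each cell's reading is reversible: take the light away and the conductivity falls back to its dark value, which is exactly the property that made selenium so enticing.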
It seems that the big problem, at least the one in my head, would come down to resolution.
To get a selenium detector that, you know, worked as well as a silver halide plate,
you'd have to create micron-sized little slices of selenium.
You'd need a huge array of these tiny sensors, not to mention matching wires for everything.
With the technology at the time, I don't think that was possible. That said, I'm going to hold onto the hope that out there somewhere was someone trying
to make a discrete image sensor back in the 1880s or something.
Luckily there are more practical effects that can be exploited.
Remember, selenium isn't special. There are many
other materials that are affected by light. This brings us, perhaps, to the most high-profile
phenomenon, the photoelectric effect. Fun fact here, Albert Einstein actually received his Nobel
Prize for his theoretical work on the photoelectric effect. It's a bit of a common misconception that Einstein won the prize for something having to do with relativity.
So, what is this photoelectric effect?
Sounds promising.
Put simply, when hit with light, a material will emit electrons.
There are, of course, caveats to this.
Certain materials exhibit this effect more strongly.
Certain wavelengths of light work better.
But the overall effect is the same.
The mechanism at play here is, once again, pretty neat.
Every atom in the universe has these things called electrons that are bound to them.
This binding is mediated by the electromagnetic
force. It's not some absolute chain that cannot be broken. There's an associated energy
with breaking that binding. Photons, the little particles that mediate light, also happen to have
an associated energy. A photon's energy is based off its wavelength,
the color of the light. When a photon hits an electron, it imparts its energy to the electron.
If there's enough energy to overcome that electromagnetic binding, then the electron can escape.
Over a large surface with a large amount of light, relatively speaking, of course,
this will create a flow of electrons, a measurable current. That, dear listener, is an effect that can be exploited.
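The threshold behind all this can be written in a few lines. A photon's energy is E = hc divided by wavelength, and an electron escapes only if that energy beats the material's work function. The cesium work function of roughly 2.1 eV below is a textbook figure I'm assuming, not something from the episode.

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, E = hc / wavelength, in eV."""
    return (H * C) / (wavelength_nm * 1e-9) / EV

def ejects_electron(wavelength_nm, work_function_ev=2.1):
    """True if one photon carries enough energy to free an electron."""
    return photon_energy_ev(wavelength_nm) > work_function_ev

print(ejects_electron(500))   # green light, ~2.48 eV: True
print(ejects_electron(700))   # red light, ~1.77 eV: False
```

Notice that the decision depends on wavelength, not brightness. Dim blue light ejects electrons while blindingly bright red light may not, which is the observation that won Einstein his Nobel.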
The iconoscope was the first device to really leverage the photoelectric effect for sensing
images. Funnily enough, the scope was meant for video, so we kinda start out by jumping the gun
here.
The iconoscope was designed in the 1920s by Vladimir Zworykin at RCA.
One of the really cool things about this device is that it functions on a totally different
principle than selenium light sensors.
It's also fully analog. We're dealing with one of these big
vacuum tube-like things here. The iconoscope is a big evacuated tube that bends at an angle before
flattening out into a big flat dish. It looks kind of like a very fragile ladle, all things considered.
That last part, the flat end of the tube, is where all the
action takes place. Inside is a thin mica plate that acts as an insulator. One side of the plate
is coated in an even conductive layer. That's usually a mixture of silver and cesium for some
reason. I don't know why, it just makes the whole thing a little more dangerous,
which is kind of cool, I guess. The other side of the mica screen was covered in tiny conductive
specks. You could call these grains if you felt like it. These specks even contained silver,
as well as cesium, so I think the comparison here is somewhat apt. This grainy side of the plate was facing the outside world,
so that it would come in contact with light.
As photons stream in, they strike the metal grains,
which, in turn, start kicking off electrons.
That will release any charges stored in these grains.
The amount of charge released is proportional to how much light strikes a grain,
so after a little exposure, you get a pattern of charges that relates to an image.
As with most systems of this kind, the readout is the tricky part. So you have all these charged
flecks of metal. How do you figure out which flecks are charged and which aren't? This is where the cathode ray comes into
play. As always, the literal particle accelerator comes in clutch. The ray works in a cycle. First,
it's scanned across the grains to apply a nice even charge. This is just a smear of electrons
across the entire plate. At this point, the grains have a known charge and
are ready to record an image. The astute among you might recognize what's going on here. We have a
bunch of small conductors, a big insulator, and then one big conductor. In effect, each grain is
its own tiny capacitor. A capacitor can store charges, but only up to a certain point.
Once a capacitor is fully charged, it stops accepting new electrons. It basically closes
up shop. If you were to try and blast a fully charged grain with the cathode ray,
you wouldn't be able to squeeze in any more electrons. They would just bounce off the grain.
That bounce back can be detected using a simple metal loop, basically a splash zone detector.
Here's where it all comes together. The photoelectric effect will knock electrons off
any grains that are struck by light. That lowers their charge, which means they'll actually accept new electrons. After the
initial charge is spread, the beam comes back for another cycle. This time, each grain that's
blasted will rebound those incoming electrons unless some room has been made on that grain.
A grain that was kept in the dark, well, that'll bounce back everything, while a grain that
was hit by some photons will bounce back fewer electrons. Thus, you can turn the stored image
into an electric signal that will come out of the tube. That signal can then be picked up and
turned into an image by a television. We have here a fully electric way to sense images, but nothing about this is digital.
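The charge-and-read cycle described above can be sketched as a little simulation. The grain capacities, the charge units, and the perfectly linear light response are all simplifying assumptions on my part; the real tube is messy and analog.

```python
FULL = 100  # arbitrary "fully charged capacitor" value per grain

def expose(grains, light):
    """Photoelectric step: light knocks charge off each grain,
    proportional to intensity (0.0 dark .. 1.0 bright)."""
    return [[charge - int(FULL * lx) for charge, lx in zip(row, lrow)]
            for row, lrow in zip(grains, light)]

def scan(grains, beam=FULL):
    """Beam pass: a grain accepts electrons only up to FULL; the
    rest bounce off into the collector loop. The bounce-back per
    grain is the video signal, and every grain ends up recharged."""
    signal = [[beam - (FULL - charge) for charge in row] for row in grains]
    recharged = [[FULL] * len(row) for row in grains]
    return signal, recharged

# One frame: start charged, expose to a 2x2 scene, then read out.
plate = [[FULL, FULL], [FULL, FULL]]
scene = [[0.0, 1.0], [0.5, 0.0]]   # 0 = dark, 1 = bright
plate = expose(plate, scene)
signal, plate = scan(plate)
print(signal)  # dark grains bounce back everything, bright ones less
```

In this toy model, dark grains return the full beam while lit grains absorb some of it, so the bounce-back pattern is an inverted record of the light that hit the plate.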
There are discrete grains, but they're not evenly spaced like pixels.
Each grain can have a range of charges, so it's not a 1 or a 0, or even something like
a number between 0 and 16.
The signal produced is continuous.
The cathode ray doesn't stop at fixed locations and take a reading, it just scans across the plate. It scans left to right,
then down a tad, then right to left, then down a tad, and so on until the plate's covered.
Then it goes back to the top and just does it again. So the output generated is just this
wiggly waveform of the approximate brightness
at every location on the plate. This continuous data stream isn't necessarily a pro or a con,
it's just not digital. We're looking at a profoundly analog technology. That said,
it does have a lot of advantages over chemical imaging. As with all electronic light detection, the process
is reversible. You aren't using up any materials, just reading voltages across some fancy components.
In theory, you could use an iconoscope to produce a still image. But that would be a little hard
still. You see, it wasn't meant for recording video. This was a purely
live style of imaging. The signal that came out of the tube with minimal massaging was the same
signal that a standard television would read. To make a still image, the signal would have to be
stored in some type of buffer that somehow committed it to film. You could probably just point a normal
camera at a TV and snap a photo at that point. So while devices like the iconoscope did make
for useful video image sensors, it just wasn't the right tool for still frames. In fact, nothing
we've discussed so far leads directly to the Cyclops. We still need to take a step forward and look at a different type of effect.
We need to look at another accidental discovery.
In 1940, Russell Ohl discovered the P-N semiconductor junction while working at Bell Labs.
This was, by all accounts, a bit of a fluke.
At the time, Ohl was trying to create a better type of rectifier,
a diode used to turn alternating current into direct current. By restricting the flow of
electrons to only go in one direction, you can chop the waveform up, getting just a positive
part. At the time, this was used in radar and radios, and the rectifier of choice was the vacuum tube.
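The chopping operation itself is simple enough to sketch. An ideal diode, which is what Ohl was chasing in solid-state form, just blocks the negative half of the waveform.

```python
import math

def rectify(samples):
    """Ideal-diode half-wave rectifier: negative half-cycles are
    blocked entirely, positive ones pass through unchanged. Real
    diodes also have a forward voltage drop, ignored here."""
    return [max(0.0, v) for v in samples]

# One cycle of a sine wave, 8 samples:
ac = [math.sin(2 * math.pi * i / 8) for i in range(8)]
dc_ish = rectify(ac)
# The second, negative half-cycle is chopped to zero, leaving
# pulsating DC that a filter could then smooth out.
```

A real supply would follow this with a smoothing capacitor, but the one-way valve is the part that tubes, cat's whiskers, and Ohl's silicon were all competing to provide.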
The problems with vacuum tubes were, frankly, myriad. They're hot, they can be unreliable
under certain circumstances, they're annoying to manufacture, and they can straight up shatter,
since they're made out of glass. Plus, they require relatively high voltages.
The issues go on.
Researchers had been trying to replace tubes for years,
for reasons that I think are very understandable.
Ohl was one of the latest to take up the challenge.
When you break things down, Ohl was really just trying to figure out how to make a reliable diode that didn't use a tube. There were existing
options, but they also kind of sucked. The most common solid-state diode was the so-called
cat's whisker diode. These were crude, rude, and full of tude. At least, they didn't like to play
nice with their operators. These diodes were a chunk of some type of crystal with a little adjustable metal whisker
touching their surface.
The whisker could be moved around and, often times, had to be moved to just the right spot
for the diode to work.
These crystals were, in reality, very crude semiconductors.
One common option was galena, a crystalline form of lead ore. Now,
I want to assure you that this technology is even more crude than it sounds at first glance.
Galena is a prime example of this, well, primitive approach to rectification. Galena isn't some
compound made in a lab. It's a mineral that you take straight out the ground.
Chemically speaking, galena is a lead sulfide crystal, a combination of lead and sulfur.
Iron pyrite was another common crystal used in these rectifiers. Pyrite is, once again,
a mineral you dig out of the ground with a shovel. It's also a sulfide compound and a semiconductor. By placing
a metal whisker in contact with one of these crystals, it's possible to make a metal semiconductor
junction, which will function as a diode. However, you can't just poke a wire into a random rock you
pull from the ground. These are rough and unrefined ores. They're full of impurities, and they have
uneven surfaces. In practice, a radio operator would have to move the whisker around to find
a spot on the crystal's surface that actually formed a good junction. This is literally scrying
over a crystal in order to glean energy from the air. It's all very high fantasy once
you strip away all of the physics. As arcane and frankly cool as a wire and rock detector is,
well, these were really low-budget options. Cat's whisker detectors start being used in 1906,
at least that's when the technology was patented. That same year,
the world would bear witness to a much more amazing creation. Silicon-based detectors.
At least, it seems amazing to us. We know the potential hiding in little wafers of this
silicon stuff. At the time, these silicon detectors were just slightly better rectifying
diodes. It had been observed by multiple inventors that a refined silicon crystal could rectify
just as well as galena or anything else. What's more, these metal-silicon junctions didn't require
the same fine-tuning of metal-mineral junctions. This was cool technology, but it turns
out that vacuum tubes are just better than crystal magic. A cat's whisker, although marvelous, isn't
nearly as flexible as a vacuum tube. A tube can rectify, it can amplify, it can serve as a logic element or a relay. A crystal detector can rectify.
Sometimes.
Under certain circumstances, and if you don't look at it wrong.
So, when possible, tubes started to replace crystals.
Of course, this came with trade-offs.
You got all the inherent issues of vacuum tubes, plus one I haven't actually
discussed so far. Tubes in the early 20th century couldn't handle high-frequency signals.
There was a limit to the types of signals they could rectify. You had to be at a relatively low frequency for tubes to work. So, we have an interesting problem building here. How do you rectify high-frequency signals reliably? A crystal detector could do the job, if it felt like it. Ohl figured the solution
was to make a more reliable type of crystal diode. He also believed he knew the issue with
existing cat's whisker detectors, impurities. The silicon being used in these diodes wasn't
pure silicon. Even the nice ones still had impurities in it. It might be polished up,
but there were other chemical compounds in the mix. This gets us back into the more
solid-state material physics that scares me. The best I understand it, Ohl worked out that crystals
with diamond-shaped lattices made for the best semiconductors. At least, they had more reliable
properties. Pure crystalline silicon, one of the element's allotropes, exhibits this diamond-shaped
lattice. The issue was getting to a reasonable level of purity. This was easier said
than done. The solution that Ohl and his colleagues at Bell eventually landed on was, well, it was a
little extreme. It's described by Michael Riordan and Lillian Hodson in the article The Origins of
the P-N Junction, from which I'm going to paraphrase.
Ohl started with relatively pure silicon powder. The article claims it was at 99% purity. That was dirty enough to mess up the lattice and possibly lead to erratic behavior. At least, that was Ohl's theory. To get this powder into a crystal, it had to be heated to well over 1,000 degrees Celsius.
At those temperatures, weird chemistry can occur.
The molten silicon can pick up impurities from the air and even from the very crucible
it's melted in.
To combat this, Ohl's ingots had to be melted in quartz crucibles in an inert helium atmosphere. By melting the silicon in this non-reactive environment, then leaving it to stew, the impurities should settle out.
Heavier elements would drop to the bottom, and lighter ones would float to the top,
leaving a nice region of clean silicon somewhere in the rod. It's a good theory, but it didn't always work as intended.
Ohl kept getting samples of silicon that acted unpredictably. Eventually, this would have been around 1940, one of his colleagues came into his office with a sample that simply made no sense. It didn't have a measurable resistivity. As Ohl recounts in an interview with AIP,
I decided to see what the dickens was the matter with it. So I set it up for testing,
applied the usual AC voltage. I displayed the current on the oscilloscope and I saw the rectifying properties. I saw a peculiar loop. He continues,
I didn't understand this, but I noticed that if I did certain things to it, the loop changed and this was the key.
I had it held over water and thought it was the water vapor that made it change.
I tested different things and finally went to a light, an incandescent light over it, and we found that it made a change.
I eventually took a neon lamp whose light passed through a chopper and put that on the silicon.
I found changes in the characteristics which followed the chop of the light. Not only was this weird silicon rod detecting light, it was very sensitive.
In fact, it was generating appreciable energy.
Well, converting, but, you know, potato, potahto. Ohl was able to get about a half volt out of the rod when he shined a light on it.
That's a lot better than selenium. This means that, in effect, the weird silicon thing could
be the building block of a fantastic image sensor. However, that's a bit
of a leap that we can't make just yet. What exactly had Ohl discovered? This is where we enter into
this weird sequence of events. He'd tell someone at Bell that he had this silicon rod that could
generate current from light. They didn't believe him. He'd come back to their desk
with the circuit, turn on a light, and the whole crowd would go wild. It was eventually worked out
that the foundry process was just unreliable. The idea was to settle out impurities and then
cut them away. But you can't really see a difference between 99 and 100% pure silicon. The method was kind of
spray and pray, just cut and hope you're left with the good part. If you aren't careful,
then you get a chunk of highly pure silicon next to a chunk of impure silicon. That barrier between
the regions has become known as a P-N barrier, or a P-N junction. The name comes from the fact that one side of the barrier is slightly positive, while the other is slightly negative. This has to do with baked-in electron deficiencies or surpluses. It's not necessarily a
flow of current. This barrier, in addition to detecting light, also acts as a diode. The current can only flow across the barrier in one direction.
Unlike a vacuum tube rectifier, this new silicon one just works.
No heating or vacuum needed.
And unlike a cat's whisker, there's no fiddling to do since everything is,
very literally here, set in stone.
It can also handle high-frequency signals.
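If you want to see this one-way behavior in numbers, the standard description is the Shockley ideal-diode equation, worked out years later by Bell's own William Shockley. This little sketch is my own illustration with textbook component values, not anything from the episode:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Shockley ideal-diode current (amps) at a bias of v volts.

    i_s: reverse saturation current, n: ideality factor,
    v_t: thermal voltage at room temperature, about 25.85 mV.
    """
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

forward = diode_current(0.6)   # forward bias: on the order of 10 mA
reverse = diode_current(-0.6)  # reverse bias: pinned near -i_s, essentially zero
```

The asymmetry is the whole trick: forward current grows exponentially with voltage, while reverse current never exceeds the tiny saturation current, so the junction passes a signal in only one direction.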
Ohl had found what he was looking for. The basic PN junction becomes, really, the basis for all
modern digital technology. There's the semiconductor diode, of course, which can be used to make most
logic gates needed for a computer. Stacking junctions leads to the transistor, which can do everything else.
Layer those junctions on a wafer, and you get an integrated circuit.
Of course, these technologies required massive breakthroughs in their own right.
You don't easily go from a PN junction to a microprocessor.
It takes a lot of steps.
That said, every digital device we use today is made up of these semiconductor junctions.
Keep that idea in the back of your pocket for a little bit later.
We can take the PN junction another, less computational route.
We can actually jump straight into an image sensor.
At least, we can jump to an interesting transitionary technology.
Enter the silicon vidicon photometer.
This is a bit of an interlude that will definitely be worthwhile.
So, let me ask you a question.
Armed with all the technology we've discussed so far, how would you create a digital image sensor? That is to say, a sensor that reads an image as a grid of pixels, each pixel being some discrete numeric value. You could actually
just take some of these new semiconductor junction detectors and throw them in something
like an iconoscope. Instead of having a screen with random dots, you arrange a set of diodes in a regular grid.
A diode has some basic inbuilt capacitance, so the engineering involved here is really similar to an iconoscope.
Now, I've partly structured this episode so I could talk about one paper that I find really neat.
Hey, it's my show. I get to do this type of stuff.
So let me introduce you to the
silicon vidicon photometer proper, perhaps one of the most brute-force examples of digital imaging.
The device was developed in 1971 by Thomas McCord and James Westphal. I'm technically jumping over the real first CCD, but the silicon vidicon represents something
we don't get to see very often.
You see, the first true CCD was created in 1970 at Bell Labs, or 1969, depending on if
you want to go by the first paper or the first theory, but whatever.
It took some time to be adopted, that's the main point here.
So the CCD didn't
show up and change everything on day one. There was some lag time. In between the accidental
discovery of the PN junction and the full adoption of the CCD, we get transitional technology.
The silicon vidicon is exactly this. It's trying really hard to be a CCD using the framework of earlier analog technology. A normal vidicon is really just a different take on the old iconoscope. There were a whole host of these analog "-icon" and "-scope" tubes back in the day. The gist is that they all were refinements on the original. In the case of the vidicon, the capacitor grains were replaced with a slab of selenium.
But the idea was exactly the same.
Regions on the plate reacted to light, then the plate was scanned by an electron beam to detect those reactions.
The silicon version swaps the selenium plate for something more sophisticated.
It's a regular grid of p-n junction diodes.
But that might make it sound a little rough.
We aren't talking about a big bundle of discrete diodes.
Instead, think of this as something closer to an integrated circuit. The plate is a solid blob of n-type silicon with regularly spaced
p-type silicon embedded in its surface. Metal stubs, called top hats in the paper,
are then bonded to each p-type blob. This makes for a single semiconductor plate that functions
like an array of diodes. By 1971, the p-n junction was well understood.
We had the physics down. Some smart researchers figured out, for instance, exactly how much
current a single photon would induce when it hit a junction. Thanks to that, a silicon-based
photometer could be exceedingly precise. The instrument described by McCord and Westfall
could tell you exactly how many photons hit each point on the grid. That's a lot better than some
relative measure of brightness. The silicon photometer was also a fully digital device. Its grid contained 256 points, 256 discrete pixels.
This is in stark contrast to all earlier detectors that had loosey-goosey resolutions.
We aren't dealing with a lot of pixels here, but we are dealing with a number, a specific number of pixels.
This is getting us deeper into the digital realm. Furthermore,
each pixel reported back some discrete number. Sure, there was some conversion in there as
physical readings made their way into storage, but at the end of the day, the silicon vidicon
reported an image of 256 discrete numbers. And it gets better. How were images stored?
Not on film, not as waveforms, but on digital tapes. Sure, tape might not be the most digital
thing in the world, but they were used for digital data storage in this period.
The photometer blasted its data straight to tape, which was then loaded up into a computer for later use.
A digital computer, mind you.
We have here the total package.
A digital detector that produces digital data for use on a digital machine.
I think it's fair to say that we're looking at one of the first
digital cameras. It wasn't some fluke or some hack job either. This was a very intentionally
designed device. And here's the best part, or I guess my favorite part. The silicon photometer
was used for astronomy. This tube was hooked up to a telescope and used to
take readings of stars and planets. I spent a good portion of my undergrad career running
digital analysis on this kind of data. For me, it's wild to just kind of stumble upon the roots
of this familiar discipline. Now, of course, there is some room for improvement in this fancy vidicon.
In a lot of ways, we're looking at a weird dead-end technology. Maybe we should call it a stem group and not a missing link, a weird mixtape of digital imaging ideas. Besides, I sort of teased that there was an honest-to-goodness CCD created before the lovely silicon vidicon photometer. For that, we have to go back to Bell Labs, the undisputed champions
of silicon in this era. This will also bring us full circle to the Cyclops, and why some
deft hackery could produce a working digital camera. The first CCD started sensing images in... 1969. This technically
predates my beloved silicon vidicon. I mean, there's no technically about it, it's just earlier.
I really wish the photometer was a little bit earlier because then we'd have this really nice, clear story about transitionary technology,
but no, it just comes after the CCD. There's quite a bit of mythologizing around the creation of the
charge-coupled device. This is due, I think, to the fact that a Nobel Prize was given out to the
inventors. The CCD is really that big of a deal. The tale traditionally starts at a magical place
called Bell Labs. The setting is only matched by a magical device. Bubble memory. Back in the day,
Bell was at the epicenter of many groundbreaking developments. The PN Junction is just one. The
transistor also comes out of Bell Labs.
So does Unix, for that matter.
Around the time Ken Thompson and Dennis Ritchie were starting to take a stab at programming
their own operating system, a weird device was taking shape.
Bubble memory is, for all intents and purposes, a special type of magic.
It's this form of digital memory that's essentially a
more refined and sci-fi-esque form of magnetic core memory. The basic operating principle comes
down to creating these magnetic bubbles on the surface of a thin film. Current is then used to
move those bubbles across the film, packing them into a dense structure.
You send in ones and zeros, which are translated into magnetic fields and then
shuffled around on the device. Readout is accomplished by carefully shepherding those
bubbles to a magnetic pickup on the far end of the film. In other words, you have a serial
memory device that stores data as little magnetic bubbles.
In theory, bubble memory can store wild amounts of data, and it's also non-volatile.
It's a really wild, nonsense-sounding technology that, at least for me, ranks up there with special rocks that can see.
It's also something I need to do a deep dive on.
Bubble memory is developed out of some weird accidental discoveries with this equally weird
type of machine-built magnetic core memory. It also fails to really catch on. It's a perfect
topic for its own episode, I think. Anyway, back to the topic at hand. Bubble memory was developed at Bell in 1967. At this point, the lab was a pretty freewheeling place. Researchers were more guided toward interesting projects instead of being ordered to do certain tasks.
One such project that higher-ups at Bell wanted examined was something of a fusion of ideas.
Can bubble memory be implemented in silicon?
The project was taken up by Willard Boyle and George Smith.
The overall idea here is to build a semiconductor analog of bubble memory.
Supposedly, it took all of a half hour to draft up the new circuit.
One of the reasons this was so easy, or at least quick,
is that bubble memory is really just a fancy type of delay line.
You put a number in on one side, and after some clock pulses and some waiting,
the number comes out the other end.
In that respect, bubble memory wasn't new.
The only new thing was how it stored and moved those numbers.
It's the magnetic slide part.
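That delay-line behavior is simple enough to sketch in code. A serial memory is just a first-in, first-out queue of bits: whatever you clock in on one side falls out the other after a fixed number of pulses. This is my own toy model of the idea, not any particular device:

```python
from collections import deque

class DelayLine:
    """Toy serial memory: a bit written now reappears after `length` clocks."""

    def __init__(self, length):
        self.cells = deque([0] * length)

    def clock(self, bit_in=0):
        """One clock pulse: the oldest bit falls out, bit_in enters."""
        bit_out = self.cells.popleft()
        self.cells.append(bit_in)
        return bit_out

line = DelayLine(8)
outputs = [line.clock(b) for b in [1, 0, 1, 1]]  # still zeros: our data is in flight
later = [line.clock() for _ in range(8)]         # the written bits emerge in order
```

To keep data alive, you'd just feed each emerging bit back into the input, exactly the recirculating trick delay-line memories relied on.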
Semiconductors aren't really big on magnetism, it's just not a semiconductor thing,
but Boyle and Smith figured you could do the same thing with charges.
One cool part of bubble memory is that its storage medium is just a uniform film.
Well, that doesn't really work too well with current storage.
Currents like to, well, flow.
Electrons don't want to stay in one place. They're one of those all-who-wander-are-not-lost kind of particles.
You need some way to contain them, so a uniform medium is kinda out. Boyle and Smith
decided the proper storage medium would be a capacitor. Specifically, the duo settled on
what's called an MIS capacitor. Now, this is a bit of a silly name. MIS stands for Metal Insulator
Semiconductor. It's nothing fancy or scientific. The name literally just
explains how the capacitor is built. You have a layer of metal, an insulator, then, well,
a semiconductor. As we've discussed, a capacitor will store a charge up to a point, after which
it will accept no more electrons. That's a convenient way to temporarily hold some kind of
data. MIS capacitors are especially useful here because they can be laid down on an integrated
circuit. You can have a chip manufactured that's full of tiny capacitors. Now, of course, there is
another interesting property here. An MIS capacitor has, well, a semiconductor in it.
It also has some junctions.
Those react to light, which causes an electrical charge to build up.
Just like we saw with the silicon-viticon tube, those charges can be stored and then read out.
The readout method here is similar to that of bubble memory, at least in practice.
Boyle and Smith decided to wire up multiple MIS capacitors as a shift register.
A pulse will move the current from one capacitor to its neighbor.
Repeating that pulse lets you pull every charge across the circuit and out to a readout.
The actual mechanism is...
Well, it's a little cool and a little confusing. Essentially,
each third capacitor is wired up together. So if you had a long row of caps, then capacitors 1,
4, and 7 are connected. The same is true for capacitors 2, 5, and 8, and 3, 6, and 9, and so on down the line. By applying voltages to these groups in sequence, stored charges can be shuffled down the line. This works by something-something semiconductors. I've read Boyle and Smith's
CCD paper a few times, and I only vaguely understand how this works. It has something
to do with manipulating potential wells to pull charges. And then I get
a little bit lost in the weeds. The point is, these control pulses let you shift charges around
in a manner similar to how magnetic bubbles slide around on bubble memory. This can be used for
something really boring, like plain, normal, sequential memory. You can totally use a CCD that way.
In fact, that was kind of the original point. That's not really how we use CCDs today, though.
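Hand-waving past the potential-well physics, the behavior of a clocked CCD row can be modeled pretty simply: each full three-phase clock cycle moves every charge packet one cell toward a readout at the end. This sketch is my own simplification, it collapses the three phase pulses into a single shift, so it's the bucket-brigade idea, not Boyle and Smith's actual device physics:

```python
def shift_ccd(cells, cycles=1):
    """Shift stored charge packets one cell per full clock cycle.

    cells: a row of charge values. In the real device, cell i would belong
    to clock phase i % 3, and one "cycle" means pulsing all three phases
    in order. Here that whole dance is one shift toward the readout end.
    """
    cells = list(cells)
    readout = []
    for _ in range(cycles):
        readout.append(cells[-1])  # the last cell spills into the readout
        cells = [0] + cells[:-1]   # everything else slides one cell over
    return cells, readout

row = [5, 0, 9, 2]                # four pixels' worth of collected charge
emptied, out = shift_ccd(row, 4)  # four cycles clears the whole row
# out == [2, 9, 0, 5]: charges arrive at the readout last-pixel-first
```

That serial, one-at-a-time readout is why a CCD image pours off the chip as a stream of values rather than being addressable like RAM.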
In 1970, Boyle and Smith published a paper on their new device. They give one alternative use
for the technology. Quote, charge transfer in two dimensions is possible as well as
the ability to perform logic. An imaging device may be made by having a light image incident on
the substrate side of the device, creating electron hole pairs. The holes will diffuse
to the electrode side where they can be stored in the potential well created by the electrodes. After an appropriate integration time, the information can be read out via shift register action. End quote.
A little technical, but the idea is this. You make a 2D array of these MIS capacitors,
you let some light charge them up, then you do the silicon hokey pokey to ship the image off the chip. You end up with a digital image.
You have discrete pixels which each have discrete numeric values. This is the heart of a digital camera right here. Better yet, the design works. That's pretty good for a half-hour brainstorming
session. Boyle and Smith's plans were passed off to other researchers
at Bell Labs, who were able to build functioning CCDs pretty quickly. I think it's easy to see the
benefits of this new technology. You can now print an image detector on a silicon wafer.
That benefits from all the same upsides as other microchip technology. As integrated circuit methods improve, you can print smaller MIS capacitors,
and thus make higher and higher resolution CCDs.
This means that imaging could now be subject to Moore's Law.
The CCD could only get better from here.
Or, you could forget all that and do some silly stuff. Even though the CCD and the silicon vidicon existed, digital imaging technology wasn't generally accessible. Consumer digital cameras
wouldn't hit the scene until the late 1980s. We're looking at lab and professional equipment here.
These devices were expensive and delicate. There's
no way a hobbyist could ever hope to get their hands on a digital image sensor, right?
This is where we come to the Cyclops and its creator, Terry Walker. In the middle of the 1970s,
Walker was in the electrical engineering graduate program at Stanford. Now, Walker is a
really interesting dude. A few years back, he did a presentation about his early career at the
Computer History Museum, which is what I'm working off for this part. Walker was always obsessed with
electronics. During high school, he'd go from making radios to building an oscilloscope with salvaged and scrounged parts.
Needless to say, the guy knew how to cobble and hack things together.
This is going to be one of those stories that's all about connections.
At the time, around 1974, Walker was working in a medical imaging lab on campus. He was dealing with really early ultrasonic imaging machines. He had this
unnamed friend that worked in the integrated circuit lab. Now, labs usually aren't walled
gardens. There's often some bleed over, so sometimes the image lab might need help from
the IC lab, for instance. This unnamed IC friend had a second job. He worked for a semiconductor
manufacturer called AMI. So, one day when Walker saw an interesting ad for a new AMI chip,
he knew just who to get in touch with. Soon, Walker had his hands on a sample of this new chip,
plus some, perhaps illegally derived, drawings of its internals.
What was this mystery IC?
At first, it may sound underwhelming.
Walker was after an AMI S4006, a 1-kilobit RAM chip.
This was all happening in 74, so a 1K RAM chip wasn't really earth-shattering technology. It was... fine.
It's new-ish technology, but Intel had been making these types of chips since 1969. But
this specific AMI chip was special. Well, kinda. Walker points out in his CHM lecture that most RAM chips were
somewhat complex affairs. The fundamental unit of storage for these early chips is, well, a single
bit. Those bits were stored in capacitor transistor circuits. Contemporary Intel chips
grouped these storage elements into a few different banks,
so we aren't dealing with just a big plane of little IC elements. Rather, something like a few
regions of these storage elements wired up together. The S4006, on the other hand,
did group all its storage elements into a single region.
So with an Intel chip, you might have four or more separate chunks of storage.
The 4006 had one big square of capacitors and transistors.
More specifically, it was a nice 32x32 grid.
Maybe you can guess where this is going.
There's another feature of the S4006 we need to discuss.
It was what's known as pseudo-static RAM.
Early RAM chips were dynamic.
These chips could store a few bits for a few milliseconds,
after which time the charge on each storage capacitor would dissipate,
so they had to be replenished regularly by an external
circuit. The 4006 bundled that refresh circuitry with the chip, so you didn't have to manually
refresh every bit. One last nice little feature that rounded everything out is that the S4006
had a really primitive interface. In newer RAM chips, you have a data bus and an address bus.
To run a read operation, you put the address to read on the address bus. You tell the chip you
want to read, then the chip sends a number over the data bus. The 4006, it didn't do any of that.
Instead, you had an X and Y bus that directly addressed the columns and rows
of that capacitor grid. You'd tell the chip you wanted the bit at location 32, 5, for instance.
Then you'd get a single bit out of the read pin. As soon as Walker got his hands on an S4006,
he set to work.
These chips were packed in little ceramic boxes with a metal lid on top.
With a little application of force, supposedly a butter knife and a hammer, he snapped off the little metal cap without harming the chip.
Normally, you would not want to do this.
ICs are delicate little wafers. Once the cap was removed,
Walker was faced with a nice regular grid of storage elements. Checking the pilfered designs
confirmed his suspicions. This IC would be light sensitive. This is due to one very specific
junction that each storage circuit had. The actual bit is stored as a charge on an MIS capacitor.
Read and write operations are controlled by a few transistors nestled around the capacitor.
The bond between capacitor and transistor forms a light-sensitive silicon junction.
This is the same type of junction used in Bell's fancy CCDs.
The operating principle here is a little different, though. In dynamic RAM, the charge on each
capacitor will fade over time. Because of how this junction is situated, exposure to light will cause
the charge to dissipate more quickly. Walker figured he could exploit this to create an image. First, he filled out each bit of memory with a 1. That charged up each capacitor. Then he went back and read each
bit. Any bit that was exposed to enough light would now be a 0. That's a good start, but it
only generates a black and white image. To get grayscale, Walker repeated the read cycle multiple
times. He wound up with a circuit that would read the chip 15 times between each write cycle.
This gave him 15 levels of relative brightness. Shockingly, this kind of just worked. At least,
Walker makes it sound like it didn't take much fiddling. And really, why should it?
AMI had accidentally made an almost-CCD.
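Walker's readout trick is simple enough to sketch as a simulation. In this toy model, the more light a pixel sees, the faster its DRAM cell leaks charge, and a pixel's raw value is just the number of read passes its bit survived. The decay model and all the names here are my own illustration, not Walker's actual circuit:

```python
def cyclops_frame(light, passes=15):
    """Estimate per-pixel exposure from DRAM decay, Cyclops-style.

    light: 2D list of per-pixel light levels (0 = dark, higher = brighter).
    Each cell starts charged (a stored 1) and flips to 0 once enough light
    has leaked its charge away. A pixel's raw value is the number of read
    passes its bit stayed 1, so darker pixels score higher.
    """
    rows, cols = len(light), len(light[0])
    charge = [[passes] * cols for _ in range(rows)]  # write all 1s
    image = [[0] * cols for _ in range(rows)]
    for _ in range(passes):
        for y in range(rows):
            for x in range(cols):
                charge[y][x] -= light[y][x]  # light leaks charge away
                if charge[y][x] > 0:         # the bit still reads as a 1
                    image[y][x] += 1
    return image

scene = [[0, 1],
         [3, 15]]  # bottom-right pixel is the brightest
frame = cyclops_frame(scene)
# frame == [[15, 14], [4, 0]]
```

Note the raw counts come out inverted, since darker pixels survive more reads, so a display step would map brightness to something like 15 minus the count.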
Of course, there were some rough edges.
At this point, Walker was working with a very crude prototype.
He was able to throw a 1024-pixel image onto an oscilloscope.
It worked, but it could be better.
Once things were working, Walker drafted
up a printed circuit board, which was manufactured over at the campus IC lab. A final refinement was
the addition of pilot lights. Now, these are a bit of an oddity at first. The complete circuit board
had two flashlight bulbs placed on either side of the S4006.
You'd think this would just blow out the picture, but that's not the case.
As Walker explains, this was actually inspired by some earlier experience he had tinkering
with an iconoscope.
Those older tubes had a similar pilot light mechanism.
By adding a little background light, an iconoscope could be made more
sensitive. This basically primed the scope's plate so it took fewer outside photons to register an
exposure. Walker found that this old design worked just as well with his newer digital sensor.
From here, the story meets up with the tale of Cromemco. Walker showed his camera to his friends Harry Garland and Roger Melen. They had been writing articles for this magazine called Popular Electronics and
figured the digital camera would make a good story. The camera, dubbed the Cyclops,
would make the cover of the February 1975 issue of Popular Electronics. Incidentally,
the January issue cover story was the earth-shattering Altair 8800.
The newly formed Cromemco would go gangbusters churning out Cyclopses, and eventually become a leader in the S-100 marketplace.
There is one final, interesting piece of the story of the Cyclops.
As Cromemco set up operations, they had a special assembly line. They would order bulk batches of S4006 chips,
delid them, and then glue a little quartz window on top of the IC to protect it.
Walker recalls that the company was getting these chips for about $4 apiece.
What can I say? Wholesale really gets it done.
The Cyclops was sold either as a kit or a pre-assembled camera.
The instructions for the kit called the RAM chip an image sensor without mentioning its actual origins.
The Popular Electronics story even goes so far as to say the Cyclops was possible thanks to advances in CCD technology. Now, I can only speculate, but I like to imagine
this was all done to hide Walker's deft hackery. If that's the case, then it was helped along by
circumstance. You see, microchips are all labeled. They'll have a manufacturer stamp, a model number like S4006, and other information printed on their tops.
When you have these metal-lidded chips, that information is usually put on that lid.
Pulling the lid not only turned these weird RAM chips into image sensors,
it also hid their true identity from the digital public.
Alright, that does it for our dive into the digital camera.
This episode only really had one question, and I think we can answer it now. Were all digital cameras accidents and hack jobs, or is it just the Cyclops? On the surface, the answer is obviously no. Bell's CCD, the silicon vidicon photometer, and even earlier electric image tubes were all purposeful devices.
These were serious stuff, serious engineering, and produced serious data products. However,
there is this accidental undercurrent driving all of this.
The photoconductive properties of selenium were discovered by accident.
The same goes for the P-N junction.
Even selenium itself was found by accident.
Some sulfide miners kept getting this weird red powdery contamination in their product.
These miners were using sulfide minerals to make sulfuric acid. In their mind, arsenic was a common contaminant, so they assumed that it was just
arsenic. But they had their reservations. They pulled out the powder and sampled it by... smell.
It didn't smell like arsenic. All things considered, I don't know why you'd try to smell
something to figure out if it's a poisonous heavy metal, but, you know, whatever. Different times.
It also didn't behave like tellurium, another known contaminant of sulfide minerals.
After consulting a chemist, they figured it had to be something new, something that fit onto the periodic table near arsenic and tellurium,
but was a distinct element.
Perhaps the beautiful hackery of the Cyclops is a fitting tribute to all of these accidental roots.
Thanks for listening to Advent of Computing.
I'll be back in two weeks' time with another piece of the tale of the
computer. If you like the show, there are a few ways you can help support it. If you know someone
else who'd be interested in the history of computing, then please take a minute to share
the show with them. You can rate and review the show on Apple Podcasts as well as Spotify now.
If you want to be a super fan, you can support the show directly through Advent of Computing
merch or signing up as a patron on Patreon. Patrons get early access to episodes, polls for the direction of the show, and bonus
content. You can find links to everything on my website, adventofcomputing.com. If you have any
comments or suggestions, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.