Advent of Computing - Episode 52 - THE SOURCE
Episode Date: March 21, 2021

One of the great things about the modern Internet is the wide range of services and content available on it. You have news, email, games, even podcasts. And in each category you have a wide range of choices. This wide diversity makes the Internet so compelling and fun to explore. But what happens when you take away that freedom of choice? What would a network look like if there was only one news site, or one place to get email? Look no further than THE SOURCE. Formed in 1979 and marketed as the information utility for the information age, THE SOURCE looked remarkably like the Internet in a more closed-off format. The key word here is: looked.

Like the show? Then why not head over and support me on Patreon. Perks include early access to future episodes, and bonus content: https://www.patreon.com/adventofcomputing
Transcript
Who doesn't like a fun afternoon of surfing the net?
Whether you're doing some kind of work or just killing time,
the prevalence of the internet has changed how we live our day-to-day lives.
If you step back for a moment, it really is amazing how we all have data piped directly into our homes nowadays.
And the infrastructure that makes idly scrolling through websites possible is, honestly, nothing short of a
technological marvel. One of the cornerstones of our online lives, what makes the internet so
interesting and eminently browsable, is the sheer diversity of content. It's not just a single game,
a news site, and some email provider. There are countless sources of news and information,
ranging from credible and scholarly to a little more on the dubious side of things.
You have plenty of options for email hosts, or instant messaging, or even multimedia streaming.
I mean, there's even more than one podcast about the history of the computer.
So how do we have so many choices at
our fingertips? Why is the internet so full of options today? It comes down to what the internet
is composed of. That's thousands upon thousands of separate servers owned and operated by many
different organizations and individuals. The diversity of content stems directly from this diversity in the
network itself. Everything from my tiny personal website up to Google are connected up to the same
grid of information. But what would happen without this wide spectrum of players? What would a network
with only one source of information look like? Well, we don't have that far to go to find an answer.
We just need to look at an obsolete service known only as the source.
Welcome back to Advent of Computing.
I'm your host, Sean Haas, and this is episode 52, The Source.
In case you were wondering, the title being all caps is in fact intentional.
The Source is supposed to be stylized in Cruise Control for Cool.
In preparing last episode, I spent some time reading through old IBM press releases about the PC.
Tucked away in their initial announcement was this little passage.
Quote,
Expandability.
A starter system consisting of a keyboard and system unit can be connected to a home television set with a frequency modulator. It can then be expanded
to a system with its own display, printer, and auxiliary storage cassette or discs.
The computer can be used with color or black and white television sets. Information from
centralized databanks such as Dow Jones News Retrieval Service and the source can be accessed and displayed. End quote. You know, fairly dry
explanation about expandability. You can get a cheap PC, upgrade, use different monitors, and
access the source. But what exactly is the source? Why is it in all caps? And that's when I sort of went off the rails.
So now you get to come along with me on a pretty strange trip into the information superhighway.
Started in 1979, or maybe 1978, but we'll get to that, the source billed itself as the first information utility company.
And, well, I think that's the best way to describe it. The source was a service that
computer users could pay for access to, billed by the minute. You could connect up to a central
bank of servers over a phone line. From there, you got the news, messages, could access databases, and even trade
stocks. It's like a tiny walled-off precursor to the modern internet. The strange part, and what
has kept me really fascinated throughout researching this episode, is that the source was a very
familiar-looking service, but it operated in a pretty unfamiliar way.
There have been a number of other centralized information systems both before and after the
source. For instance, Plato worked in a somewhat analogous way, broadly speaking. In that system,
a central machine at the University of Illinois served data out to thousands of terminals.
Users could send messages, play games, access data, and there were even newsletters.
But Plato was strictly an educational project.
You had to be a student to actually use the system.
Services like CompuServe and Timeshare also had roughly similar functionality. They offered
pay-for-use access to mainframes. Once online, you had access to messages, data, and the like.
Timeshare, the name at least, should give this category away. These kinds of services focused
on access to timeshared environments up on some remote mainframe.
So while you could get email and news, the core offering was still centered around raw computing
power. Across the Atlantic, in France, we have Minitel. That's another roughly similar system.
Minitel connected terminals up to a central server where users could get messages, news, and shop online,
amongst many other things. But the service was run by the French government as a fully-fledged
utility, and was structured in such a way that third parties could host Minitel sites somewhat
independently. The bottom line is that during the 70s, 80s, and even 90s, we get a whole host of internet-like services.
They all have some features of the modern-day net, but all operate in a unique way and for
a unique purpose.
The source's purpose was to operate something like AT&T, a utility that was in pretty much every household in America.
You'd have power, phone service, water, and hopefully you'd also have the source.
This episode, we're going to break down the environment that led to the creation of the source.
What factors made this specific type of product offering possible?
How did the source itself form? What was service on this new
information utility actually like? And ultimately, why don't we all use the source today? So let's
get into it. If we want to get a well-rounded image of what makes the source so interesting,
we're going to need, as always, to gather up some context. Specifically, what consumer networking was like leading up to
1979. So here's a good place to take stock and have a look at networking infrastructure in the
middle of the 20th century. The best way to describe digital network backbones in this era
would be lacking and a little bit hacky. Fiber wasn't really a thing in the 1960s. Digital traffic
didn't really travel unadulterated across America. Most often, it was ferried from site to site over
analog phone lines as digital pulses, or sometimes over radio or microwave. If you've ever used dial-up internet, then that shouldn't sound so out of the ordinary to you.
But why would anyone want to send digital-ish data
over phone lines in the first place?
Well, it gets complicated.
There are a number of good reasons
for this kind of distribution method.
On the most simplistic level, the US was already wired up with a phone grid. So just plug into that, add some hardware to deal with turning sound to data and vice versa, and you have pre-built networking infrastructure. But there
were also more calculated reasons. The best explanation that I can point to comes from a series of reports drafted by Paul Baran in the early 1960s at the RAND Corporation. These reports would eventually influence the development of
the ARPANET. The series is called On Distributed Communications and basically lays out all the
particulars about making a nationwide network to help coordinate the Cold War. There's a specific focus on how to create a
network capable of surviving a nuclear war, but that's a whole other component. The key for us
isn't nuclear holocaust per se, it's specifically phone lines. Just keep in mind that the name of
the game for ARPANET was reliability. And, strangely enough, the early
internet was devised pretty explicitly as a tool for fighting communism. Anyway, in the final volume
of the series, the summary volume, Baran wrote about building, quote, highly reliable and error-free digital communication systems using noisy links, end quote.
The noisy, current-day links Baran references are, in fact, telephone and telegraph lines. Baran found that digital pulses could be sent over the phone grid with little to no issue. Provided you make some smart
choices about how fast you transmit data and which voltage levels you use, information can flow
freely over existing cables. The trick that made this all possible was digital modulation. That's taking numbers
and turning them into a series of on-off pulses. Under the right conditions and with the right
hardware, these pulses can travel down a phone line for hundreds of miles without any type of
degradation. The upshot of using existing hardware, read phone lines, is convenience and reliability.
There is already a network of cables and switchboards installed all over the US at this point.
That technology is a very well-known factor.
We had tools and know-how to deal with the phone grid since the very early 20th century.
And, perhaps most importantly, it was cheap.
A telephone cable is actually just a suspended piece of copper wire. There's nothing fancy,
nothing special purpose, just some wire. Of course, Baran's work was just an initial
survey of the possibilities. That being said, he pretty much hit the nail on the head. In practice,
things were a little more complicated, but long-distance digital communications over phone
lines started to become the norm. The trick to all of this was a device called a modulator-demodulator.
The street name for this is, of course, the modem. Devices like modems were actually in use prior to Paul
Baran's work. I just think that Baran gives a really good explanation of why phone lines were
being leveraged for digital traffic specifically. Essentially, a modem is two circuits in one.
It can modulate digital data into a stream of serial on-off pulses to send down some phone line. At the same time,
it can demodulate incoming pulses back into digital data. When you get down to it, a modem is really a
pretty simple device. It just has to be able to make some sound and then listen for sound back.
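If you want to picture what that modulation actually looks like, here's a rough Python sketch of frequency-shift keying, the same general trick early modems used. The sample rate and tone frequencies below are just illustrative values, loosely in the spirit of a 300-baud modem, not a faithful model of any particular standard.

import math

SAMPLE_RATE = 8000        # samples per second (an illustrative choice)
BAUD = 300                # bits per second, in the spirit of early 300-baud modems
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD
MARK_HZ = 1270            # tone used for a binary 1 (illustrative value)
SPACE_HZ = 1070           # tone used for a binary 0 (illustrative value)

def modulate(bits):
    """Turn a list of 0/1 bits into audio samples using frequency-shift keying."""
    samples = []
    phase = 0.0
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for _ in range(SAMPLES_PER_BIT):
            phase += 2 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples

def demodulate(samples):
    """Recover bits by counting zero crossings in each bit-sized window."""
    bits = []
    for start in range(0, len(samples), SAMPLES_PER_BIT):
        window = samples[start:start + SAMPLES_PER_BIT]
        crossings = sum(1 for a, b in zip(window, window[1:]) if (a < 0) != (b < 0))
        # More crossings in a window means the higher (mark) tone was playing.
        estimated_hz = crossings * SAMPLE_RATE / (2 * len(window))
        bits.append(1 if estimated_hz > (MARK_HZ + SPACE_HZ) / 2 else 0)
    return bits

message = [0, 1, 1, 0, 1, 0, 0, 1]
assert demodulate(modulate(message)) == message

The round trip at the bottom is the whole idea of a modem in miniature: bits go out as tones, and tones come back in as bits.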
Initially, modems were used to connect terminals to remote computers.
That way a user in, say, New York could log into a mainframe at MIT.
As long as there was a modem on each side of the connection and some switching was handled
by the good old phone grid, you could just pretend to be wired directly into that remote
computer.
As home computers hit the scene, modems adapted to fill a more
micro-sized role. Like I said, modems are relatively simple devices. Usually they use
a serial interface to connect to a computer. The computer just has to take its message,
turn that into a sequence of digital numbers, then send that out to the modem. The same goes
for receiving data. The computer just has to wait
around for the modem to push over a string of numbers. Any computer that supports serial
communication can, given the right software, connect up to a modem. And for early
home computers, serial support was basically ubiquitous. It's a really simple protocol to
implement. By the 70s, we start to see people
actually using modems with home computers to connect up to mainframes. One use was,
believe it or not, telecommuting. This is also probably the easiest example to explain,
since it's just point to point. Basically, let's say you want to work from home in 1977.
You have your brand new Apple II and an acoustic coupling modem.
You pick up your home phone, nestle it comfortably into the modem's waiting foam cups, and
dial your office's line.
The final key is, of course, the software.
An Apple II doesn't know how to work as a terminal out of the box, so you have to
load up some kind of
terminal emulation software. Basically, that's just a program that can talk to the modem,
put some text on screen, and send back whatever you type out. With that, a modem plus phone line
plus terminal software, you can connect up to a faraway mainframe and get down to work.
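To give a sense of just how small that terminal software really is, here's a little Python sketch of the same loop. Instead of a modem it talks to a plain TCP socket, and the host and port are placeholders I made up, but the job is identical: show whatever the remote machine sends, and send along whatever you type.

import select
import socket
import sys

HOST = "example.com"   # placeholder for whatever machine you would have dialed into
PORT = 23              # telnet-style port, purely for illustration

def terminal(host=HOST, port=PORT):
    """Shuttle text between the keyboard and a remote machine, line by line."""
    remote = socket.create_connection((host, port))
    try:
        while True:
            # Wait until the remote end or the keyboard has something for us (Unix-style).
            readable, _, _ = select.select([remote, sys.stdin], [], [])
            if remote in readable:
                data = remote.recv(1024)
                if not data:
                    break                      # the remote side hung up
                sys.stdout.write(data.decode(errors="replace"))
                sys.stdout.flush()
            if sys.stdin in readable:
                line = sys.stdin.readline()
                if not line:
                    break                      # end of local input
                remote.sendall(line.encode())
    finally:
        remote.close()

if __name__ == "__main__":
    terminal()

A real terminal program from the era also had to handle character-at-a-time echo and control codes, but the skeleton is the same.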
This is one of those ham-fisted approaches to
technology that I find really charming. Using existing technology plus something just a little
bit newer, people can create something radically new. But the key here is that old technology in
the mix. It brings along constraints that an entirely new system wouldn't necessarily suffer
from. For phone line-based networking, one of the big constraints was billing. Phone lines aren't free; they're a utility. The details vary by local, state, and federal regulation and practice, but somehow,
users have to pay for the pleasure of using a landline. For local calls, say within
your region, everything was either a flat rate or a pretty low metered rate. But if you were calling
long distance, say from San Francisco out to a friend in New York, then you were charged a higher
per minute premium. The thing is, a normal phone call to a friend got billed the same as a digital
connection to a remote computer. The phone company saw no difference, it's just a line
open from one caller to a receiver for some amount of time. So if you spend too much time
hooked up to the old office mainframe, well, you'd end up paying a pretty hefty cost.
The other downside was that connecting over the phone line meant you couldn't make or receive calls. The line was busy for the
duration of your connection, and that could range from annoying to a big problem in larger homes.
Back in the days of dial-up internet, I was pretty lucky. My mom had a separate fax line that we used to use
so that we could make calls and check emails at the same time.
Pretty spiffy if I do say so myself.
But the point here is that networking over existing phone lines worked.
It was just a little bit janky.
And with the slow pace of infrastructure changes,
this janky and suboptimal
solution stayed around for decades. Telecommuting wasn't the only way people used modems back in the
70s. Connecting up a terminal to some far-flung mainframe was a nice application, and a very
business-oriented way to get work done. But people do a lot more than just work.
We tend to like to play, so it's no surprise that the same technology used for business
was also adopted by hobbyists.
The exact start date is a little hazy, but by the tail end of the 70s,
home computer users were opening their own computers up to outside callers.
These were called bulletin
board systems, or just BBSs. The phenomenon around bulletin boards needs separate coverage,
but they're important for us to at least glance at. Basically, a BBS was a way for home computer
users to connect online for the first time. Instead of a modem serving as a gateway to some larger machine in an office,
the same device was being used as a gateway to some other user's personal computer.
The overall protocol is simple enough.
You're just ferrying around small amounts of data,
so you don't really need a big beefy computer on the remote end.
You can have, say, another Apple II or a Commodore.
The overall ecosystem on these bulletin boards was broadly similar to what we see online today.
You had chat rooms, pages roughly analogous to websites, and you could even download files.
Most importantly, you had a huge diversity among BBSs. Each was operated by a single user. Each BBS was utterly personalized.
Given the right software and a phone line, anyone could get started with a BBS.
This was showing that users were very much willing to pay for access to novel online experiences.
Sure, BBS operators usually didn't charge for access, but the phone company charged for calls, so this still cost money to explore.
On the exact other end of the spectrum, we get into larger systems,
none more well-known than CompuServe.
Now, this is actually a really interesting case.
CompuServe started out in 1969, well prior to home computers and in-home modems.
An office of Golden United Life Insurance had just bought a new computer for handling, well, I guess some kind of life insurance stuff.
They soon realized that their purchase of a DEC PDP-10 was probably a little too much computer for that office. They were using the
computer just fine during normal business hours, but after closing the machine just sat around.
Even then, during normal hours, they weren't using 100% of its power. To recoup some costs and
maybe even make a little profit, the company started to rent out time on the PDP-10.
To make everything easier to deal with,
a separate company called CompuServe was spun off to handle renting computer time to other businesses.
What made CompuServe viable was timesharing,
the ever-important software that lets multiple users share a single computer's resources.
This made it easy for CompuServe to rent out unused computer time
to multiple clients. This started off as a business-to-business kind of thing.
Large companies in the area would buy some server time to get work done,
and over time, this would expand. At the very tail end of the 1970s, CompuServe would start
providing more services over phone lines, and eventually open up to consumers.
But that comes later.
Connecting to remote mainframes was already commonplace, or at least common enough that
the hardware and software was somewhat accessible.
Universities were already giving access to scholars and researchers.
Notably, MIT allowed off-campus researchers to access their CTSS mainframes
all the way back in the 1960s. CompuServe wasn't doing anything new, at least technologically
speaking. What was new was the idea of selling this service. CompuServe may not have been
the exact first company to do this, but they were a prominent and early adopter.
That's a bit of a rushed primer,
but it should give us an idea of the state of the art in the late 1970s.
Well, you know, the consumer networking state of the art, at least.
Phone lines were being pushed to their limits to allow access to remote mainframes or a friend's BBS.
There were even companies using this distribution technology to turn a profit.
So how does this buildup lead to the source? Well, to get there, we need to take a brief
detour and talk about William von Meister, son of wealth and privilege, race car driver,
and the eventual founder of a strange networking enterprise. Von Meister is a bit of a weird
character, at least when it comes to the people I've covered on the podcast. He's not a scientist
or a researcher. He's not even really involved in the computer industry or the hobbyist scene.
A 1984 book titled The Computer Entrepreneurs gives a pretty full bio of von Meister. It opens by
stating that, quote, William's early life reads like the prototypical F. Scott Fitzgerald hero,
bored, reckless, and filthy rich, end quote. And I say that's also broadly applicable to all von
Meister's career choices. He was a serial entrepreneur,
a venture capital chaser, to use a slightly more pejorative term, an idea guy. Born to wealth and
a well-connected family, he made his way through prep school and university. At one point, he was
ditching so many classes to drive race cars that he nearly got expelled.
Nevertheless, by the early 1960s, he graduated with a degree in business and set about building a career.
For von Meister, that meant a constant scheme of dreaming up the next big idea, getting
funding, building it into a business, and then selling his stake and moving on.
Looking for that quote-unquote next big thing in that era
was dangerous, because in the 60s, that meant computers. So it wasn't long before von Meister
became enamored with the idea of digitizing anything and everything. But once again, this
isn't because he was technically involved with computing, or all that interested in
computing.
He just knew that it was profitable.
One of his first big ideas involved Western Union Telegram.
He had a connection with an employee of the company; at the time, von Meister had been passing himself off as a consultant.
The main MO here was to get hired by some company, and then just figure out the rest once contracts
were signed. To me, it sounds a little bit like a mix between a rogue visionary and some pretty
predatory business practices, but it seems von Meister was able to make it work out well, so
he continued the cycle for quite some time. Anyway, in either 1962 or 1963, the sourcing is a little unclear,
von Meister was contracted by Western Union to consult on updating their billing systems.
The idea for the project was simple.
Given a budget, von Meister set out to work computerizing their billing department.
Basically, buying up the right machines, hiring the right talent, and planning the entire system. This one project, plus some continuation work on the billing system,
helped launch his career. That being said, it was only one step in his cycle. A new venture
would come along soon enough. Importantly, the work with Western Union got von Meister
hooked into computing. He may not have been a technician or a programmer,
but he did believe that the next big thing was always going to be digital.
And throughout his early career, von Meister came to grips with the infrastructure
and management needed for that digital future.
The next important venture for von Meister,
at least the next venture relevant to our path towards the source,
was a company called TDX. What's the acronym stand for? Well, not really anything. It just
sounds cool. It's a bit of marketing. Bernard Ride, a colleague from von Meister's Western Union
days, had been working on a new call routing program. The duo formed TDX to monetize Ride's idea. To explain why this
matters, we need to get into WATS and PBX. As I mentioned earlier, making calls was never
totally free. There's always some associated cost with each call made over a phone line.
It may be a flat rate, it may be metered rates, it may be something else.
WATS, or Wide Area Telephone Service, was a fun little complication that got added
into that mix of billing. It was started in 1961 by AT&T, and for businesses, it offered a new
and exciting billing model. WATS allowed companies to buy a quota of flat-rate long-distance calls. You could
buy up, say, 100 hours of long-distance calling for a cheaper package rate than getting those
hours incrementally over time. The caveat was that the billing was based off regions, so that package
of 100 long-distance hours may work for a call from California to New York,
but not include a call from California to Mississippi.
The main application for WATS, at least the market that TDX was targeting, was 1-800 toll-free numbers.
That is to say, call-in numbers for larger groups.
Let's say, for instance, I finally get some venture capital and start up
Widgets Incorporated, the company of my dreams. The business model is simple. Consumers can call
1-800-WIDGETS and order some of our fine merchandise. Thanks to market research and
some past trends, I have projections for how many long-distance hours I need, and I have that broken
down by region. So I go over to my local AT&T
branch and sign up for some hours over WATS. By doing so, I save a little bit of upfront cash and
I can invest that back into the business. Maybe I open some more branches and start producing some
better widgets. I can get a lower bulk rate because I commit to a quota.
But what if I go over my call quota for a certain region?
What if I get a local call that would be cheaper to route over a normal phone line instead of using WATS hours?
What if I open multiple branches and need to make sure that inbound calls get to the
closest call center?
That's basically what TDX aimed to solve by creating a cheap automatic private branch exchange. An automated PBX using Ride's new
cost-cutting algorithms. PBXs are a big missing piece from the earlier explanation of phone-based
networking I gave, so hey, I need to cover it sometime, right?
Phone companies like AT&T run their own massive phone exchanges. These are switchboards that
route incoming calls to their destination. If you pick up a phone and dial out to some number,
that call first gets routed to your nearest local exchange. From there, a route is worked out to
your final destination. Then a series of other exchanges work to link up your call to its intended recipient.
There are a lot more details that I'm not well-versed in,
but it's close enough to think of the exchange system like a router-based network
that we'd be more familiar with today.
It allows many clients to connect to many other clients,
and it does
so all simultaneously.
A PBX is a massively scaled-down version of these large exchanges.
A company like my fictitious Widgets Incorporated would have a PBX sitting in each of their
offices.
When an outside line dials up 1-800-WIDGETS, the call hits the PBX and from there can be
routed to the correct internal phone line.
Or even out to another PBX at another office.
This also works the other way around.
An internal line at Widgets Incorporated can call out to the PBX.
So once again, we have a hardware system that's basically a router but for audio signals. By linking up a
PBX to another PBX system and linking that PBX system up to WATS and other larger grids,
it should be clear that we're reaching something that's similar to a full-on network. It's not
exactly distributed in the same way as the internet, but it has shades of the internet's design.
All these switches, whether we're talking about a board inside some massive AT&T building or a PBX sitting in Widgets Incorporated,
they can function in a number of different ways.
Very early on, boards were switched manually, with a human operator sitting in front of a panel of sockets and cables waiting for an incoming call. That person would then decide the best route to get
the caller to the destination. They'd flip around some cables and, very physically, patch them
through. That eventually gave way to automated analog systems, and once computers hit the mix,
we start seeing computerized switchboards.
In the latter case, we're still dealing with analog signals, just the routing is handled by a digital machine.
Now, as near as I can tell, TDX fits into this third phase.
I have that caveat because there aren't that many details on TDX itself.
It shows up in sources on von Meister's career and in a few patents,
but not very far beyond that. Anyway, TDX was aiming to undercut competition by making cheaper
and more effective PBX machines. Specifically, Ride's switching algorithms were able to calculate
the cheapest route to place a call through. So TDX switches were supposed to pay for themselves.
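I don't have Ride's actual algorithms, so take this as a toy Python sketch of the general idea rather than anything TDX shipped: given a call and a couple of possible ways to place it, pick the cheapest one. The route names and per-minute rates are all made up.

# A toy least-cost routing decision, in the general spirit of an automated PBX.
# The route names and per-minute rates are all made up for illustration.
METERED_RATES = {"local": 0.02, "regional": 0.10, "cross-country": 0.30}
WATS_EFFECTIVE_RATE = 0.12   # what a prepaid package minute works out to, amortized

def route_call(destination_region, minutes, wats_quota_minutes):
    """Pick the cheapest way to place a call, WATS quota included."""
    options = [("metered", METERED_RATES.get(destination_region, 0.30) * minutes)]
    if wats_quota_minutes >= minutes:
        options.append(("wats", WATS_EFFECTIVE_RATE * minutes))
    # Cheapest option wins.
    return min(options, key=lambda option: option[1])

print(route_call("local", 5, wats_quota_minutes=600))           # metered is cheaper here
print(route_call("cross-country", 30, wats_quota_minutes=600))  # WATS wins on a long haul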
A packet of patents from 1977 explains all the gory details.
It's not doing much that's totally new.
The patents actually cite a whole lot of earlier work.
TDX is bringing together a lot of existing technology,
adding in a little fancy new code, and making a better product.
A TDX switch is composed of a mini-computer that's hooked into inbound and outbound phone lines.
As calls come through, the computer is able to, based off the call's location,
figure out the cheapest way to route it to a final destination.
It accounts for WATS as well as local call rates and can even record audio as needed. What's important here is that TDX pulled von Meister deeper into a specialized
field, wired phone networking. Now, I've seen conflicting reports around how well TDX actually
did in the market. One short biography of Von Meister's life says that it was used by
multiple Fortune 500 companies, while another says that he got kicked out of the company by
stakeholders. Either way, by 1978, von Meister was restarting a new cycle of scheming, building,
and selling, this time with a newly honed and very specialized set of skills.
That year, von Meister would found the Data Broadcasting Service, or DBS.
This is where the series of events get a little bit fuzzy.
The original plan for DBS was to create a business-to-business network using empty parts
of the FM radio band.
The practice that he was using was called piggybacking, where you can
just cram some data into a little free space on the radio. Von Meister envisioned using specialized
transceiver boxes located at businesses and local radio towers. From there, a user could connect up
to a central computer and wirelessly grab information and read mail. He also had a backer.
Venture capitalist Jack Taub would invest a lot of initial capital into DBS. The arrangement made von Meister and Taub partners
in that new business. But for some reason that I can't fully understand, the initial idea was
dropped in favor of something a lot more ambitious, a consumer-facing network.
In 1979, von Meister and Taub unveiled The Source, the information utility of the future.
And for good measure, they also changed the name from DBS to TCA, the Telecomputing Corporation
of America. The initial business plan for the source was really savvy, and actually
pretty conservative all things considered. The network functioned something like this.
A consumer could go out and buy an account on the source for around $100, and that came with
a manual detailing how to use the service and a login. Once set up, there was a metered payment
system that charged different
rates at different times of day. At night, you could get cheap off-hour rates. During business
hours, the source cost more to use. But these off-hour rates were only $2.75 an hour, and those
were billed by the minute. So all things considered, not super expensive, but it's still not free.
All you have to do is load up some software, dial up your modem, and you're connected to the source.
Well, really, you're connected up to a rented mainframe.
Von Meister worked out a contract where TCA could rent server time from Dialcom for 75 cents an hour.
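Just to spell out the arithmetic with the two figures from this episode (everything else, like staff and overhead, is deliberately left out):

# Gross margin per connected off-peak hour, using the two figures from this episode.
OFF_HOUR_PRICE = 2.75   # dollars per hour billed to the user at off-peak rates
DIALCOM_COST = 0.75     # dollars per hour TCA paid for rented computer time

print(OFF_HOUR_PRICE - DIALCOM_COST)   # 2.0 dollars of gross margin per hour,
                                       # before staff, marketing, and everything else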
So, going by the numbers, TCA stood to make a tidy profit. Cost-wise, the source could
run pretty lean and still rake in money. So what did users see once they logged onto the source?
Well, this is where we run into a bit of an issue. And this problem is a mix of preservation and just how history went.
The source doesn't exist anymore.
Surprising, I know.
Unless I've missed something in my research, the server-side software hasn't been preserved.
There was an ever-expanding manual for the source that new users would receive when they
were onboarded.
And those manuals shouldn't be all that rare. Jumping ahead
a little bit, we know that at the peak, the source had around 80,000 active users, so I'd wager
there's somewhere north of 100,000 manuals floating around. It's just that none of them have been
digitized and I can't find any copies for sale or in archives. That being said, if you
happen to have a manual for the source, please get in touch. I would be very interested in getting
my hands on that. Anyway, barring some revelatory new information, the best I can do is piece
together a view of the source. Luckily, we actually have a handful of sources to go off. A 1983 episode of
Bits and Bytes, a TV Ontario series about home computing, has a short segment on the source,
and it's complete with a recording of one of the hosts logging into and using the utility.
I've also been able to dig up a 1984 MacTalk manual that has a chapter dedicated to the source,
and that comes complete with screenshots. The key problem here is that these are later resources,
but I'll get to why that's an issue a little bit down the road. The main interface for the
source was a series of text menus. Users were able to traverse into submenus, select items, and eventually drop down to a command prompt for certain programs.
The whole point of this decision was to hide some of the digital magic going on.
It's also a smart decision. It helps the source appeal to a wider audience.
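None of the actual menus seem to have been preserved, so here's just a guess at the shape of the thing in Python: a handful of nested menus and a loop that walks you down through them. The menu text is invented.

# A guess at the general shape of a menu-driven service like the source.
# The menu text here is invented; the real menus haven't been preserved.
MENUS = {
    "MAIN": ["News", "Business", "Education", "Mail"],
    "News": ["Top stories", "Weather", "Sports"],
    "Education": ["Math drills", "Physics demos"],
}

def run_menu(start="MAIN"):
    """Walk the user down through nested text menus."""
    current = start
    while True:
        items = MENUS.get(current)
        if items is None:
            print(f"(imagine '{current}' dropping you into a program or article here)")
            return
        for number, item in enumerate(items, start=1):
            print(f"{number}. {item}")
        choice = input("Select an item, or Q to quit: ").strip()
        if choice.upper() == "Q":
            return
        if choice.isdigit() and 1 <= int(choice) <= len(items):
            current = items[int(choice) - 1]

if __name__ == "__main__":
    run_menu()

It's about as unglamorous as software gets, which is exactly why it worked for a general audience.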
The glue that made all of this work was, of course, timesharing.
When a user logged into the source, a new process had to spin up on a remote mainframe
out at Dialcom.
That meant that, at least in theory, any user on the source had their own user account tucked
away on a mainframe.
But von Meister wasn't really billing this new creation as simply access to a beefier
computer.
It was access to information.
So the interface was built more around data than processing. One of the big features of the source was its news and information service, and that feature was usually highlighted in all its ads.
Von Meister cut a deal with United Press International, a newswire that sold news
stories and information to physical newspapers and broadcasters. So the news section of the
source was every bit as factual and official as any newspaper you might pick up from the corner store.
This type of online news service is exciting, at least for me, for one big reason. This is a novel use of the medium.
Von Meister was creating a news outlet built like any other, except for one thing. It was online.
In 1979, that was way ahead of the curve. The source also offered stock and banking data.
I can't find out if this was just end-of-day closing numbers or if it was actually live.
Airline schedules were served under another of its many menu items, one that included the ability to order tickets online. There were also courses available over the wire. Ads claimed that they
ranged from childhood education to graduate-level studies, but in the vacuum of information, I have
some doubts. Rounding out the pure information offering were selections on home and leisure,
business advice, and even a wine guide.
It's pretty clear the type of person they're marketing to.
So far, that all amounts to just a slightly interactive database.
Picking from a simple menu leads you to a more specific menu,
and slowly you could drill down to an article. That's the actual information utility part,
essentially an electronic encyclopedia with up-to-date news and market information.
Now, I don't want to sound like I'm downplaying this. It's a huge step in the general direction
of a more recognizable internet. That being said, the technology severely
limited what could actually be done with this part of the source. You won't see any links here,
or really even images. This is still all text-based data that they're sending around.
The second swath of services were interactive programs, basically just a step up in complexity from pure information.
Some menu items would drop you down to a command prompt. In the Bits and Bytes segment,
they show an example where the host traveled down menus in the Education section. Eventually,
they reach an option to compute the trajectory of a falling object. From there, they're prompted
for actual inputs. The computer runs some numbers and they get new results.
The bones of that are really interesting,
but it's presented more as a novelty than an actual useful feature.
Small computational jobs could be done on,
well, on the computer you're using to connect to the source.
The only difference is running numbers over the source was charged by the minute.
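For what it's worth, the falling-object calculation in that demo is about as small as programs get. I don't know the exact prompts or formula the source used, but plain constant-acceleration free fall, ignoring air resistance, looks something like this in Python:

import math

G = 9.81   # gravitational acceleration in metres per second squared

def fall_time(height_m):
    """Seconds for an object dropped from rest to fall the given height."""
    return math.sqrt(2 * height_m / G)

def impact_speed(height_m):
    """Speed at impact for that same drop, from v = g * t."""
    return G * fall_time(height_m)

height = 20.0   # metres, an arbitrary example input
print(f"Fall time: {fall_time(height):.2f} s")
print(f"Impact speed: {impact_speed(height):.2f} m/s")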
However, there was another option for more savvy users. If you wanted to get some more
serious work done, you could actually drop out of the menu system altogether and get
to an honest-to-goodness command prompt. From there, a user could edit files, compile and
run programs, you know, do the normal things you'd do on a remote mainframe. So if you
really needed to work on some Fortran from home, you could do that with an account on the source.
At least within reason, there were still disk quotas and hourly rates that applied.
The final category that I want to look at, and what I think is the most interesting,
comes down to the user-generated content. We've got email, chat, posts, and public files.
Email was, well, exactly what you would expect, but with one twist.
You could actually send out physical letters from the source using a service they called
Datapost.
Additional rates applied, but they marketed this feature as a huge cost and time saver.
And honestly, I think I'd even use that service once or twice myself. There's just something
fun about being able to send physical mail from a computer. Chats were, at least as near as I can
tell from the sources, a pretty standard live instant messaging system. Users on the source didn't have usernames,
per se. They had numeric account IDs, so as long as you knew your buddy's digits, you
could shoot them a live message. If they were online, it would show up as a new chat on their terminal. Post was a basic bulletin board, essentially an early online forum.
The source had a set of predefined categories that
were populated with topic threads by its users. Anyone with an account could read a thread,
post a new topic, and respond. And finally, we have public files, or in source lingo, participate.
This would be the equivalent of a personal website today. It's a way for users to publish something akin to a blog with a comments section.
The source called them conferences, but it really just sounds an awful lot like a blog with comments.
So wrap that all up, and by the 80s, we're already dealing with a diverse feature set.
This is something approaching a walled garden version of the internet.
For a fee, you could find out about a local service, learn something new, book a flight, or chat
with digital strangers. Everything was contained within the source's computers, and it was
all managed by its corporate overlords. What I find so interesting is that the source isn't
really doing anything new here, at
least not technologically.
Von Meister was just synthesizing a lot of different services into one big system, building
out a bigger utility, if you will.
Bulletin boards weren't a new idea.
Neither were forums, email, or dial-up networking; everything was a known factor.
Von Meister just realized that he could offer all those services as a single premium package.
Importantly, he was one of the first to realize that this type of super service was possible,
and that it may in fact be profitable.
But therein lies one of the strange issues with the source.
It wasn't groundbreaking.
And it wasn't some system that grew naturally over time.
Even in what I've described, we can start to see that.
The bulletin board system on the source had a lot of overlap with its participate feature.
Both are forums with a few different touches.
You have a system of menu-driven databases, plus a command line, plus some menu options
that end up turning into a command line.
If we can personify things for a minute, it seems like the source has some internal conflict.
It wanted to be the information utility of the future, but it just didn't know what
that would mean, not exactly.
And that would lead to some big problems down the road. You may be wondering
why I've been complaining about sourcing so much. Why does it matter that descriptions of the source
are all from the 1980s? The obvious perennial answer is that I always complain about sourcing.
Let's be real here. But there's a more specific reason in this case. The source, as a company, saw radical
change pretty soon after launch. It's very likely that those changes were its undoing. They led to
this split-brain set of features I mentioned. Now, I can't directly prove that causal relationship
without some early manuals to go on, but we can speculate. Just keep in mind that the
clearest picture we have of the source comes from 1982 and later. That's about 3-4 years after the
service launched, and 3-4 years into changes. Sometime in early 1980, von Meister and Taub
ran into, as near as I can tell, personal issues. Once again,
this is where the records get messy. From what I can put together from contemporary news clippings
and some later articles, the source wasn't all that profitable. At least, not initially. Taub
had sunk a lot of money into Von Meister's wild idea. We're talking to the tune of millions of dollars.
And by 1980, there hadn't been a return. Initial signs had been promising. There was a lot of
press buzz around this new and mysterious information utility. Supposedly, Isaac Asimov,
yes, the sci-fi writer, was quoted as saying that the source was the beginning of the information age.
But for all the good press, the source just wasn't living up to its financial potential.
There just weren't enough users to start recouping costs. In 1980, Taub tried to sell 51% of the
source to Reader's Digest. This was an attempt to keep the company afloat and hopefully
recoup some of the money that he put in. And this is where we enter a full-on disaster as far as
sources are concerned. Somehow, von Meister wasn't involved in this decision. It's unclear if he had
left the source already, taken a less active role, or simply was never very active
as a partner in the business once it started. What I can confirm is that von Meister still
claimed a stake in the source in 1980, at least in a roundabout way. You see, he owned somewhere
around 30% of DBS, the company that was renamed TCA and became the parent entity of The Source.
Taub went to Reader's Digest without consulting von Meister, even though he was still a stakeholder.
A messy series of lawsuits ensued over who counted as a stakeholder for The Source and
if the Reader's Digest deal was actually legal. By the end of 1980, things were settled,
the sale went through, and von Meister was totally out of the picture. I think we should see where this is
starting to go. Now, I don't doubt that this was more or less in line with von Meister's grand scheme for the
source. He drafted a plan, got investment, saw that plan through until the business was operational, and then sold off his share. It just so happened that this last step got a little complicated this time.
Von Meister did come up with the overall vision for the source, but his grand ambitions were
all about money. Taub, now the remaining partner, was just an investor. He was only interested in the source in so much as it
sounded like a good financial decision. And now Reader's Digest was the controlling partner.
They bought up the source purely as an investment decision. From what I've dug up, it sounds like
Reader's Digest saw potential in the source if it just made a few tweaks to make it more profitable.
The net result here is that the source didn't really have a direction, at least not a sustainable one. It was poised as the information utility of the future. But to Taub and his corporate partners,
that was just a means to making some money. Like I mentioned, the early design of the Source was actually really business-savvy,
no doubt thanks to von Meister's experience in this one specific industry.
Using leased computer time reduced upfront costs.
Combining existing technology was cheap and easy to do.
But it wasn't yet profitable.
Given enough time, it may well have been. The next big wrench
in the works came once Reader's Digest took control. They switched from using leased computer
time at Dialcom to running their own mainframes. While that would let the source grow, it also
introduced more immediate costs to an already failing business. Soon, they would switch again,
signing a deal with a company called Timeshare to rent more computer time.
Shortly after the Reader's Digest sale, Taub was interviewed for a piece in Kilobaud magazine.
The interview opens with a story about being impressed by the source, but frustrated with
downtime. The service had crashed the night before. Some data was lost.
The interview continues with Taub laying out grand plans to start adding new capacity, but he is stopped by this question. Quote,
But you can't provide reliable service to customers you have now. How can you talk about
1,500 new customers when the old ones suffer through
system crashes? Taub's response? Every minute of downtime is a minute too much for me. I worry
about it. I'm sorry about it, but there are better things coming. We are in the same place the phone
companies were in the 1920s. We are learning. End quote. Now, call me a cynic, but I think this is indicative of a bigger problem
at the source. They were a ship without a clear direction. They were chasing the next big thing
without creating long-term plans. The Kilobaud interview is all about how much the source is
going to grow in the coming years, but there's nothing about what it can offer in 1980. The strange overlap in some features is another big red flag for me. The
bulletin board and participate conferences sound an awful lot alike. At least, they sound like
there is overlap in features. That may seem innocuous at first, it's just a quirk of the source. However, I think that's
a symptom of a larger problem. For a big online system, especially one with multiple users like
the source, long-term planning is key. You need to plot out which features you want to add, and whether those features should be their own sections or expansions to existing software. You have to strategize; otherwise you end up rewriting
really similar code, or just chasing after features that no longer matter. To me, with
years of experience in modern IT, this looks like a sign of mismanagement.
When you get down to it, Reader's Digest did not make a good investment decision.
The source limped on until 1989.
That's 11 years of life, but it never blew up like it could.
Like I mentioned earlier, the service peaked at 80,000 active users.
But there was a lot of turnover.
It wasn't running at 80,000 clients for all that long.
It's this weird product that didn't know exactly what it was. And while it did offer
a broad array of services, much like the internet, it didn't have the user base to make those
services very useful. In 1989, the source was bought out by one of its biggest competitors,
CompuServe. Although, I don't know if I'd actually call them a competitor. CompuServe back then had
around 500,000 active users. It was on a whole different level. We talked about CompuServe back
at the beginning of this episode, so I think we better just bring this full circle. The source
ultimately failed to thrive because it was listless. From my understanding, it seems like the soulless corporate shell of a
great idea. Then, why did CompuServe succeed? What was so different? I think two huge factors
come down to ambition and natural growth. Of course, bear in mind, this isn't business advice.
I'm not a very business-savvy person. This is more a post-mortem analysis. CompuServe started out
really small. They were a secondary business spun up to offset costs for Golden United Life Insurance.
Initially, CompuServe just sold extra processing power on their PDP-10. Just like the source,
their business model relied on leveraging time sharing. As CompuServe grew, they shifted from offering just computer time to software as a service.
Now, this term is more modern than anything that would have been used when CompuServe started.
But software as a service, or SaaS, is basically where a company hosts some important program
on a server somewhere.
Then a client can log in and
use that software. It's usually billed by the resources used on the server, and it works, once again,
thanks to timesharing. Jeff Wilkins, the founder of CompuServe, developed a suite of business and
accounting software that the company offered as a service. It was a highly targeted expansion of features,
not a scattershot one. He had a market in mind, not just some nebulous idea of who might find the new
software useful. In 1978, CompuServe expanded into the home market, selling mainframe time
to personal computer users. Again, this was a slow and deliberate natural progression. Wilkins started by offering
the service to users near CompuServe's Ohio office. When that looked promising, the consumer
service expanded to allow for higher loads. Importantly, in 1978, most of CompuServe's
clients were still businesses. They logged in during business hours, 9-5 plus a little buffer
room. Wilkins was selling that unused time, the off-prime hours, to home users. He was trying to
find a way to keep his machines humming 100% of the day and extract some profit in the process.
In 1980, services expanded again. CompuServe started to offer news over the net.
Instead of exploding onto the scene as a fully-formed information service,
CompuServe got to grow into one.
The company kept lean, they avoided some grand ambition,
and they were able to outlast competitors.
We've seen the same kind of growth with the modern internet.
In 1990, when the World Wide Web first started, you couldn't do a whole lot online. But slowly,
as more users became interested and the network gained more traction and backing,
services started to appear. At least anecdotally, slow and natural growth can really help a
technology last. Sometimes, that does trump being a grand information utility of the future.
Alright, that brings us to the end of this episode.
This has been a weird one for me because the source isn't a success story.
It isn't some hidden technology that feeds directly into the modern day,
but I still find it a fascinating piece of computing lore.
It had all the features we expect out of the modern internet.
Forums, email, something like websites, news, and information services.
All that was rolled into one tidy, walled-off package.
A single information utility for the information age.
If better handled, the source could have been a really big deal.
It didn't do anything resoundingly new, but von Meister was savvy enough to see where things were going.
The source was close to the next big thing, but it faltered.
Mismanagement, overstretched goals, and a lack of direction relegated it to the trash heap. For me, what makes the source so interesting is that
it failed despite having a winning formula. It came out of the same primordial networking stew
that birthed Plato, BBSs, and CompuServe. The pieces were there. Von Meister was one of the first people
to put them all together into a sellable product. The source just never gained enough traction to
sustain itself. But it does make a fascinating case study. It shows us an alternate possibility
for what the internet could have been. Thanks for listening to Advent of Computing. I'll be back with another piece of the story
of computers in two weeks' time. And hey, if you like the show, there are now a few ways you can
support it. If you know someone else who's interested in the story of computing, then
why not take a minute to share the show with them? You can rate and review on Apple Podcasts.
And if you want to be a super
fan, you can now support the show directly through Advent of Computing merch or signing up as a
patron on Patreon. Patrons get early access to episodes, polls for the direction of the show,
and bonus content. If I remember correctly, we're up to three bonus episodes right now,
so why not go over and get a little bit more Advent of Computing? You can find links to
everything on my website, adventofcomputing.com. If you have any comments or suggestions for a
future episode, then go ahead and shoot me a tweet. I'm at Advent of Comp on Twitter.
And as always, have a great rest of your day.