Benjamen Walker's Theory of Everything - Enchanting By Numbers (2015 version)
Episode Date: October 9, 2015

We take another look at algorithms. Tim Hwang explains how Uber's algorithms generate phantom cars and marketplace mirages. And we revisit our conversation with Christian Sandvig, who last year asked Facebook users to explain how they imagine the EdgeRank algorithm works (this is the algorithm that powers Facebook's news feed). Sandvig discovered that most of his subjects had no idea there even was an algorithm at work. Plus James Essinger and Suw Charman-Anderson tell us about Ada Lovelace, the woman who wrote the first computer program (or, as James puts it, algorithm) in 1843.
Transcript
You are listening to Benjamin Walker's Theory of Everything.
At Radiotopia, we now have a select group of amazing supporters that help us make all our shows possible.
If you would like to have your company or product sponsor this podcast, then get in touch.
Drop a line to sponsor at radiotopia.fm. Thanks.

Why is there something called influencer voice? What's the deal with the TikTok shop?
What is posting disease and do you have it? Why can it be so scary and yet feel so great to block
someone on social media? The Never Post team wonders why the internet, and the world because
of the internet, is the way it is. They talk to artists, lawyers, linguists, content creators, sociologists, historians, and more about our current tech and media moment.
From PRX's Radiotopia, Never Post, a podcast for and about the Internet.
Episodes every other week at neverpo.st and wherever you find pods.
You are listening to Benjamin Walker's Theory of Everything. This installment is called
Enchanting by Numbers. Last summer, I did a number of podcasts about work and the sharing economy,
or the exploitation economy, as most of the people I talked with called it. There's one thing,
though, I wish I got in the series, and that was a closer look at
Uber's algorithms. A few years ago, Travis Kalanick, who's the CEO of Uber, said, you know, we don't
define the price. There's a market, and our algorithms sort of determine sort of what the
price of the market is. Tim Hwang is one of the most fascinating people I know
working on internet-related issues and cultural phenomena.
Currently, he's leading the Intelligence and Autonomy Project
at Data & Society.
One of the things he studies there are Uber's algorithms
and the role they play in sorting out supply and demand
in the ride-sharing marketplace.
So take demand, for instance, right?
So I'm a rider looking for a driver.
When you open the app, you would think that the cars that you see are real cars, right?
They show these cars that are kind of moving around very slowly.
And, you know, when you click on them, one of them appears to be assigned to you.
But one of the things that researchers like Alex Rosenblat and others at Data & Society found
is that there are actually a lot of phantom cars.
There's a rep from Uber that told us that, look, some of these cars are basically, they're more of a screensaver.
It's more of just a visual effect. Tim and a few other researchers at Data & Society published an article about Uber's phantom cars in Slate last month.
I'll put a link up on the TOE site.
Those ghost cars, though, are just the beginning.
It turns out Uber's algorithms generate a lot of illusion.
It actually turns out it works out the other way around as well, right?
So if you're a supplier, a driver looking for riders, you actually have a different app.
And it shows you where in the city is surging and where isn't.
And you can kind of make your decision on where to go sort of accordingly.
And the assumption would be, if it was a real marketplace, you would be seeing what the demand was, right?
Where the requests were in a given city, and prices would be based on that.
And it actually turns out that's not the case.
Based on some research and also some of the patents that Uber has filed, it actually seems
that what happens is that the surge pricing is actually based on simulated demand, right? So
it's not actually the current demand, but what the algorithm thinks that the demand will be 30 minutes in the future. And the reason for that is to lower latency, right? So
when you push that you want to get a car, the idea is if the algorithm is right, the car will just
be right there, right? Because the cars have already come there 30 minutes before anticipating
the fact that demand is going to exist. And so when you start to look at this picture,
it starts to get a little fuzzier than what the sort of quote by the Uber CEO would suggest, which is that supply sees a simulated version of demand and demand sees a simulated version of supply.
And it raises this real question of what this is if we really want to call it a marketplace.
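The prediction-driven surge mechanism Tim describes can be sketched roughly like this. Everything here is invented for illustration (the forecasting rule, the zone model, the multiplier cap); Uber's actual models are proprietary, and the transcript only tells us that surge reacts to *forecast* demand rather than current demand:

```python
# Hypothetical sketch: surge pricing driven by predicted, not current, demand.
# All formulas and numbers are illustrative, not Uber's real system.

def predict_demand(history, alpha=0.5):
    """Exponentially smooth past request counts to forecast the next window."""
    forecast = history[0]
    for observed in history[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

def surge_multiplier(predicted_demand, available_drivers, base=1.0, cap=3.0):
    """Raise the price multiplier when forecast demand outstrips supply in a zone."""
    if available_drivers == 0:
        return cap
    ratio = predicted_demand / available_drivers
    return min(cap, max(base, ratio))

# Example: ride requests per 30-minute window in one zone, rising toward rush hour.
requests = [4, 6, 9, 14, 20]
forecast = predict_demand(requests)
print(surge_multiplier(forecast, available_drivers=8))
```

The point of the sketch is the one in the interview: the number riders see is a function of a forecast, so drivers are being positioned (and priced) against simulated demand, not against requests that exist yet.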
I'm just confused as to why you keep using the word fuzzy, though.
I guess I think of it as fuzzy just
because, you know, yes, it's true. It is kind of a market in that it still kind of facilitates
riders and drivers. You know, no one would dispute that Uber connects riders and drivers.
But I think the framing of it as a traditional marketplace really kind of tries to fade out
Uber's role in this overall picture, if that makes sense. Can't you just call the whole thing fake?
I think you could, yeah.
I mean, I think it's right to call it a mirage, right?
That, you know, it's claimed to be a marketplace, but is actually not a marketplace.
Behind the mirage is really like a workplace that is sort of run by machine.
I think that's really what's fascinating about it is it's actually not really a marketplace at all.
You can think about it as sort of a distributed employer.
The really novel thing, I think, is the substitution of sort of machine intelligence as opposed to a human manager, right? You may
have had sort of a dispatcher in the past, but now it's kind of been replaced by a machine that kind
of directs resources one way or another. That's really interesting and pretty novel. You know, one
of the things we left out of the article that we did was this kind of ongoing debate in economics that's called the
socialist calculation debate. And it sounds a little esoteric, but it's a pretty basic idea,
which basically says, you know, traditional economists said, look, you know, free markets
are always going to outcompete socialist economies. And they said, well, you know,
I'll tell you why. And the reason why is because no sort of person, no government, no bureaucracy will ever be sophisticated enough to process all the data about needs and wants and supplies and resources and distribute them in an adequate way.
Right. There will always be really large inefficiencies there. And this is kind of a classic sort of conservative economic argument for why free
markets are great and why they're better than sort of human intervention in the economy.
What's interesting is that stuff like sort of the declining price of computation,
the distributed nature of computing, you know, I think all those trends are actually like
undermining the fundamental premises of that debate.
Because while it's true, probably in the past, it was really difficult to have a human bureaucracy that could process the data and then command resources in order to make things happen.
I think Uber has basically achieved that in a lot of ways. And I think that's kind of a funny sort of irony about what Uber is
and kind of the world that sort of, you know,
I think you could argue that sort of Silicon Valley is trying to create.
So algorithms can kind of deceive and give the impression of things that we know that are familiar
but under the hood are doing something kind of entirely different.
What we use on our phone to find our way from point A to point B
is an algorithm, and it's in a computer,
and there's elements of computer science in it,
but there's also a bunch of normative decisions
about what we want to prioritize,
and all algorithms tend to be that way.
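Christian's point about normative decisions can be made concrete with a routing example. The shortest-path algorithm itself is neutral math, but whoever builds the map app chooses what the edge weights mean, and that choice decides which route counts as "best." This is a minimal sketch with an invented toy road network, not any real map service's code:

```python
import heapq

# Minimal Dijkstra shortest-path sketch. The algorithm is the same either way;
# the normative decision lives in the edge weights: optimize minutes, kilometers,
# toll avoidance, etc., and you get different "best" routes.

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, weight), ...]} -> (cost, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Same toy road network, two weightings: minutes of travel vs. kilometers.
by_time = {"A": [("B", 5), ("C", 2)], "B": [("D", 1)], "C": [("D", 7)]}
by_distance = {"A": [("B", 1), ("C", 4)], "B": [("D", 6)], "C": [("D", 1)]}
print(shortest_path(by_time, "A", "D"))      # the fastest route
print(shortest_path(by_distance, "A", "D"))  # the shortest route
```

Same roads, same algorithm, two different answers: the "computer science" part is identical, and what differs is the prioritization someone decided on.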
Christian Sandvig is an associate professor
in the communications department at the University of Michigan.
He studies algorithms. But this kind of research, keeping track of the algorithms that determine
what we think and do, he says, is getting harder and harder to carry out. The difficult thing about
studying algorithms is that because they're happening inside a computer, and in many cases,
the algorithms that govern our life are happening inside some
data center somewhere, it's very difficult to determine what exactly is happening.
So in our research, we talk to users about what they think algorithms are doing.
You might even say that how they think about algorithms is more important than the algorithm.
If you had an image in your mind of how Google works, it might lead you to choose
different search terms. Or if you had an image in your mind about what you thought Facebook was
doing, it might lead you to click on different things or use different status updates. So in
some ways, how you think about the algorithm is as important as the actual algorithm because it
can determine what you do. Recently, Christian and a team of computer scientists
asked a group of Facebook users
about how they imagined the EdgeRank algorithm works.
That's the name of the proprietary algorithm
that Facebook uses for its news feed.
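As publicly described around 2010, EdgeRank scored each story roughly as the product of the viewer's affinity for the poster, a weight for the type of interaction, and a time decay. The real system is proprietary and has long since grown far more complex; the weights and half-life below are invented purely to illustrate the shape of the formula:

```python
# Toy sketch of the publicly described EdgeRank formula:
# score ~ (user affinity) x (edge-type weight) x (time decay).
# All weights here are invented; Facebook's actual ranking is proprietary.

EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "share": 6.0, "post": 2.0}

def edgerank_score(affinity, edge_type, age_hours, half_life=24.0):
    decay = 0.5 ** (age_hours / half_life)   # older stories fade
    return affinity * EDGE_WEIGHTS[edge_type] * decay

stories = [
    {"friend": "close friend", "affinity": 0.9, "edge_type": "post", "age_hours": 12},
    {"friend": "acquaintance", "affinity": 0.2, "edge_type": "share", "age_hours": 2},
]
ranked = sorted(
    stories,
    key=lambda s: edgerank_score(s["affinity"], s["edge_type"], s["age_hours"]),
    reverse=True,
)
print([s["friend"] for s in ranked])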
We actually designed the study to ask people
about how they thought about the algorithms
that filtered their Facebook news feed.
So people had ideas about, for example, well, I'm always going to like my own posts because
the fact that I give it one like really kind of gets it started. And that's going to make
sure that other people are going to be more likely to be shown that post.
Now, that's probably the saddest folk belief about how social media algorithms work,
and definitely not true.
But some of the more common folk beliefs do have some truth to them.
There are elements of product mentioning that are taken into account on Facebook that are real.
For example, I read a news story about Facebook's deals with advertisers,
and the news story said that Facebook had launched an agreement
where it would make some status updates that mention advertisers.
It would convert those status updates into ads.
And so I tried to mention a bunch of advertisers that were listed.
And then I asked my friend network,
hey, did you guys see any of these things that I posted?
And where did you see them?
And what did they look like?
And can you send me a screenshot?
And that's how I discovered that mentioning advertisers is a good way to get your posts pinned at the top and listed as a sponsored link.
It's tempting to call this the folk method of Facebook research. But it turns out that when it comes to studying algorithms,
especially the ones companies like Google and Facebook
keep under lock and key,
looking over someone's shoulder is a primary research method.
I mean, self-experimentation is a big frontier
in understanding how algorithms work.
If you look at important journalistic discoveries about platforms, you find that
many of them are people who just did things like looking over the shoulder of a friend
or a family member, and they looked at their friend or family member's Facebook, and they said,
huh, why is that thing from me there on your Facebook? Or why isn't that thing from me there?
And why does it look like that? Okay, so here's what's so mind-blowing about Christian's
research, at least for me. The average participant in this study used Facebook at least 10 times a
day. That was the average. So they were all serious users. And most of the participants
in this study were college-educated. Many even had graduate degrees. And yet, the majority of these super smart, hardcore users had no idea,
no idea that Facebook even employed something like an algorithm.
We were stunned at the degree to which people didn't realize
that the things that they saw on Facebook are filtered by Facebook.
We assumed that everyone would be familiar with that,
but in fact, the majority
of people that we interviewed didn't know that Facebook was filtering the things that
it shows them. So I think this tells us that it's not necessarily that obvious that there's
filtering going on.
In a way, all of these Facebook users had their own imaginary model for how the Facebook algorithm worked,
even the ones that were unaware of its existence.
And these imaginary models directly influenced how these people, all of them, felt about themselves.
They didn't know that Facebook was filtering, and so when something happened on Facebook,
they would make an inference about their personal relationship.
And many of these were kind of tragic.
You know, they'd sit in their room and cry
because they said things on social media and no one answered.
I mean, they would think, oh, well, I posted this to Facebook
and I didn't get any likes or comments. I'm unloved.
Of course, Christian and his colleagues informed all the algorithmically unaware subjects about what was really going on,
that most of their friends and family members weren't even seeing their posts on Facebook because of an algorithm.
And Christian says this kind of went down like that scene in The Matrix when Morpheus shows Neo that he's just a human battery for evil machines.
Stop.
Let me out.
Let me out!
I want out!
Yeah, the Facebook study subjects were pissed off too.
When we showed people that there was an algorithm,
because we revealed to them, we said,
hey, look, why is this post at the top for so long?
Or have you ever noticed that
some posts seem to be on there even though you know they're not something that was posted
recently? When we explained to them that they had a number of friends that were posting things that
Facebook chose not to show them, a number of them were angry. But when Christian and his colleagues followed up with these subjects,
six months, six months later,
they discovered that all of the anger was gone.
One big change that they reported is that they actually liked Facebook more.
So rather than their initial horror and anger and shock and surprise
at the fact that their stuff was filtered,
after they thought about it more and spent more time interacting on Facebook
with the algorithm in mind, they understood why Facebook filtered things and they found
that it made their feed better. Christian's research suggests that in the future,
companies like Facebook and Google could be a bit more transparent with their users
about the algorithms that they use and suffer no consequences.
And this troubles him because history, history tells us something entirely different.
If you like computer history, you're familiar with the Sabre system.
It's sometimes described as the first wide-scale commercial application of computing.
At the time that it was turned on, it was the largest commercial computer network in
existence in the world, and it did airline reservations.
It was a system that allowed ticket agents and travel agents to search for flights and
buy tickets with a special computer terminal that was provided by
Sabre. The funny thing about it is that users of the system started to notice that many of their
requests resulted in improbable itineraries. So the system might recommend an extremely long and
expensive flight as a top result.
And that seemed odd.
Well, it turned out that since American Airlines built the reservation system,
they had a unit at American that informally was referred to as the Screen Science Unit.
And that unit's job was to think about how they might sort the results
in the airline reservation system so that American would make more money.
This was something of a scandal at the time, but when the CEO of American was called before Congress to testify, he didn't understand why people were upset.
He said, of course American is using the system to make money.
Why would we build an airline reservation system if we weren't going to use it to advantage our own airline?
Christian Sandvig says this piece of computer history gives us what he calls Crandall's Law of Algorithms. Robert Crandall was the name of the American Airlines CEO,
the guy who testified to Congress.
And we should apply Crandall's Law, Christian says,
to all of the algorithms that we use.
We should expect this behavior from all of our algorithms.
Why wouldn't an algorithm be designed
to advantage the company that invested millions of dollars
in building it?
The thing we have to watch out for are the places where what
benefits the company doesn't benefit us.

The true history of the algorithm begins in the Victorian age in England.
Of course, there were no computers in the Victorian year of 1833,
but there were a couple of strange machines
that the mathematician and inventor Charles Babbage
kept in his drawing room.
Babbage invented a calculating machine,
a mechanical calculating machine called the difference engine,
which could do basic arithmetic.
And he went on to invent another machine that was much more complicated,
which he called the analytical engine.
Suw Charman-Anderson is an author, technologist, and the founder of Ada Lovelace Day,
a celebration of the woman who wrote the world's first computer program
for her friend and collaborator Charles Babbage's theoretical computer, the Analytical Engine.
What Ada saw was that the Analytical Engine could do more than just calculate large tables of numbers.
She saw that, given the right algorithms to start with, it could create art and music.
This was so far ahead of its time.
She was the only person in the world, on the planet, in the 19th century,
who had the insight to see what a computer could really be. She saw that the Analytical Engine,
which Babbage regarded as a sophisticated calculator,
could in fact control any kind of process
that you wanted it to control.
That's James Essinger.
He's written a number of books
about what you would call the prehistory of the computer.
And the title of his new book makes it clear just how much credit he believes belongs to Ada.
I've published a book called Ada's Algorithm,
How Lord Byron's Daughter, Ada Lovelace, Launched the Digital Age.
Lord George Gordon Byron was already famous when he married Ada's mother, Lady Annabella Milbanke.
In fact, it was Lady Annabella's cousin, Lady Caroline Lamb, who made his reputation.
She said he was... Mad, bad, and dangerous to know.
It was definitely a doomed union. They had Ada Lovelace, and then just a month or so later, the marriage basically broke up.
After the separation, Byron fled his creditors and England. Ada never saw her father again.
I think this had a massive impact on Annabella and her attitude towards her daughter.
And so I think she was very keen to try and make sure
that none of Byron's sort of poetical madness, if you like,
would infect her own daughter.
That was one of the reasons she had her schooled in maths and science
and why she had these eminent tutors for Ada was because, you know, she wanted that discipline to try
and edge out any sort of poetic tendencies. In Ada's time, few women got the opportunity to
study math and science. In fact, it was commonly believed that the female brain could not handle the stress that came with serious
thinking. But Ada excelled at math. She didn't have any airs and graces. She didn't imagine she
was an amazing mathematician. She just thought of herself as a competent mathematician. But she did devise
this term poetical science. And for her, poetical science clearly meant
bringing the imagination to the service of science. She actually had insights into technology,
which seemed to derive from her doing precisely this, marrying both her imagination and her knowledge of science.
So when she saw Babbage's plans for his analytical engine,
she became fascinated with them and collaborated with him on how to describe them.
If Babbage hoped to build his analytical engine,
he was going to need a lot of money from the British government.
Ada wanted to help him. She translated a paper about Babbage's ideas that had been published
in a Swiss academic journal. But Ada added her own notes to this translation. And it is in note G
where we find her detailed instructions on how this computer would perform an equation. She writes: "We can adequately express the great facts of the natural world, and those unceasing changes of
mutual relationship which, visibly or invisibly, consciously or unconsciously to our immediate
physical perceptions, are interminably going on in the agencies of the creation we live amidst."
That was really a major step forward in the way that people thought about machines, because
at the time machines could only do exactly what you told them to do, whereas the analytical engine
was capable of actually working something out for itself, if you like. Her idea that it could be used for creating music and creating art
really shows how she situated the analytical engine
in a creative sphere, in a humanistic sphere,
not just as this strange abstract project.
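The calculation Note G laid out, operation by operation, was the computation of the Bernoulli numbers. A rough modern rendering of that calculation (using the standard recurrence for the Bernoulli numbers rather than Lovelace's exact operation table, and exact fractions in place of the engine's fixed-point arithmetic) might look like this:

```python
from fractions import Fraction
from math import comb

# Sketch of the computation described in Note G: generating Bernoulli numbers.
# This uses the modern recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 (m >= 1),
# not Lovelace's operation-by-operation table for the Analytical Engine.

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(8))
```

A program the Analytical Engine could in principle have run, a century before electronic computers existed to run one.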
I don't know where it came from, this astonishing insight.
Maybe it was a feminine insight.
Maybe she looked at Babbage's machine and thought,
this could do anything, not just calculate numbers.
And that, for me, is something close to miraculous.
Babbage foolishly turned down Ada's partnership offer,
and he never got the money he needed to build the analytical
engine. And it would be almost another hundred years before the computer revolution really took
off. It's impossible not to wonder what could have happened had Ada Lovelace and Charles Babbage
stuck together, or if Ada hadn't been felled by cancer at such a young age.
But I also can't help but wonder
if Ada Lovelace's ideas about poetical science
might be just what we need today
as we plot out the algorithms that will determine
what we think and what we do in the future. We don't want to forget that we have the ability to do anything.
You have been listening to Benjamin Walker's Theory of Everything.
This installment is called Enchanting by Numbers. Special thanks to Bill Bowen, Celeste Lai, and Matildeo.