Benjamen Walker's Theory of Everything - Doomed to Repeat
Episode Date: February 8, 2017

Your host examines the targeted advertising and fake news that some say put Donald Trump in the White House. Plus: the colonial history of biometric surveillance. Thanks to Mira Waits, Ethan Zuckerman, and Hannes Grassegger.
Transcript
You are listening to Benjamin Walker's Theory of Everything.
At Radiotopia, we now have a select group of amazing supporters that help us make all our shows possible.
If you would like to have your company or product sponsor this podcast, then get in touch.
Drop a line to sponsor at radiotopia.fm. Thanks.

Why is there something called influencer voice? What's the deal with the TikTok Shop?
What is posting disease and do you have it? Why can it be so scary and yet feel so great to block
someone on social media? The Never Post team wonders why the internet, and the world because
of the internet, is the way it is. They talk to artists, lawyers, linguists, content creators, sociologists, historians, and more about our current tech and media moment.
From PRX's Radiotopia, Never Post, a podcast for and about the Internet.
Episodes every other week at neverpo.st and wherever you find pods.
This installment is called Doomed to Repeat.
All you're going to need to enroll is going to be, you know, you're going to take a photograph,
you're going to give a thumbprint, you're going to do a retinal scan,
and then you are now enrolled in the program.
Mira Waits is a historian who studies the art and architecture of colonial South Asia.
So no, she's not talking about Donald Trump's plan
for the American Patriotic Biometric Database of the Future.
She's telling us about a biometric database that already exists in India called Aadhaar.
This exchange of biometric data gives you this number
that then is sort of your gateway to so much that India has to offer. People who don't
have sort of voter registration cards or ID cards in the more traditional sense can actually find a
way to sort of infiltrate the state as it were. India is home to some of the world's largest
megacities. Aadhaar opens these cities up to the millions of people who want in.
The project is designed to help people advance themselves in the world to a certain extent,
and mobility is really at the heart of that question. They can now access the infrastructure
in the city, so they can open a bank account, for example, with their Aadhaar card.
They can have the opportunity to send money home to people in the village.
A whole realm of state infrastructure is opened up to you.
Aadhaar launched in 2010 as a totally voluntary program.
And that's really what's so remarkable about it.
Pretty much everyone signed up voluntarily.
They've had enormous success with this.
And if you think about it, India, I think a recent estimate
is roughly around 1.3 billion people.
And there are over 1.1 billion people who've elected to enroll in this.
And they really have this goal of having 100% of the population enrolled in this service.
That's funny.
When I hear you talk about it like that, I almost think like we could be talking about
Facebook.
Right, exactly.
No, I mean, it kind of has that appeal.
And it's this great thing that people want.
They've made it super accessible.
It has all of these connections to mobile devices.
So you install these applications.
So everything is done through one's phone.
And so it really does have that kind of sort of Facebook appeal in a way.
So I think that's a really nice parallel.
People often will ask me, do you think we could ever have a program like Aadhaar in the States?
And there's been such backlash in terms of sort of national ID systems like this that it seems significantly unlikely that we could ever get something off the ground.
But here, you know, if you actually sort of think about it, we almost do in a private world, you know, because we are willingly and voluntarily giving Apple or, you know, whatever other phone you're using your Touch ID on, we're giving them this data, and they have it, and they have it for posterity.
With Aadhaar, it does seem that Indians actually get something for this exchange of data.
Whereas with Facebook, Facebook just keeps all the benefits.
But Mira doesn't really want us to compare Aadhaar, say, with Facebook.
She's a cultural historian, so she can't help but see Aadhaar as the latest chapter in the long
history of bodily surveillance. It's sort of a little known fact, but the practice of
fingerprinting as we have it today really started as a colonial enterprise. So the British were in India looking
for a way to manage their population. And they came upon this native indigenous practice. And
we have some sort of very enterprising colonialists who see this and they think,
maybe a trace of the body is going to be somehow more legitimate than an actual signature. And so they
start going down this road of exploring prints and impressions of the body. It's here in this
colonial space that we develop this technology, and then it gets sort of implemented all over the
world, you know, back in Britain. And so these identity sciences really have this very sort of deep
and almost sort of darker backstory in terms of colonial history,
which, you know, as we know, is sort of motivated by, you know,
not the best of intentions.
British colonialists enacted numerous forms of legislation
to actually register people as criminal.
So these individuals who sort of
didn't conform to the comfortable logic of the state. So people who, you know, were nomadic people
or people who, you know, sort of were coming in from other parts of the world, sort of immigrating
into India. And so the state really had this view in mind of using fingerprinting to differentiate,
to sort of control the population
in a very specific way. And so when you turn your focus away from thinking about Aadhaar in terms of
providing people from the village with better access to state infrastructure, and you look at
something like classing groups of people as one thing and classing groups of the population as
something other, there is this sort of deep connection to the colonial histories.
In America, we insist that our tech tools have all been designed for good.
It's only in the wrong hands that they could be used for evil.
But perhaps there's another way to discern the true nature of our technologies. You see, our tech tools also have histories,
histories that show us exactly how tools like biometric databases have been used in the past and how they will be used again. Echoes of the colonial world still reverberate
in serious and consequential ways. And so for me, I think that having this awareness,
thinking back to why do we have surveillance mechanisms like biometric identification in
place in the first place, bringing to light where this actually came from, I think
gives more insights into how it can be used in the future.
We have a tool called Media Cloud that was developed at my lab a couple of years ago,
and we're able to look at when phrases come into use. It's a bit like Google Ngram search, only for us, it looks at news media. And fake news really doesn't surface until right after the election.
Ethan Zuckerman directs the Center for Civic Media at MIT. He's a long-term friend of the
show and of mine. And like me, he's very concerned about this thing people are calling fake news. It comes in as a term for folks on the left to try to explain the influence of
media in electing Donald Trump. And within six weeks, the whole valence of the term has changed.
You have the president stand up at his first press conference, refuse to take a question from CNN,
denouncing CNN as fake news.
I'm not going to give you a question.
Can you state categorically?
You are fake news.
Sir, can you state categorically?
So it's a term that's just completely transformed since its introduction.
And frankly, I think everyone is sort of trying to scramble to keep up.
In Donald Trump's reality, all news, well, all bad news is fake news.
But what Ethan wants us to see is that it's the news about fake news that is fake news.
Or as he puts it in an article he recently wrote, fake news is a red herring.
It's propaganda.
Propaganda is weaponized speech, some of it true, some of it
false. It's always been with us. You can think back to John Kerry and swift boating, you know,
we created a verb to describe what, you know, propaganda only loosely related to the truth can
do within a political campaign. The stuff that really interests me and
sort of freaks me out is the disinformatia. When it comes to disinformatia, America really is behind.
Russia, China, they're laughing at us. It's pathetic. And it's not rocket science. You simply pump out as much disinformatia as possible until people start doubting.
Doubting what is under investigation.
Doubting what was in the emails.
Doubting what was going on in the hotel room with the Russian prostitutes.
Doubting what is true.
And doubting what is false.
And the people who benefit from the sort of doubt that's sown by disinformatia
are strong, charismatic leaders. So disinformatia works really, really well for Vladimir Putin.
He can basically say the world is confusing. No one can trust anything. Don't believe anything
you see or hear. Just believe me and I'll guide you through this. And I think there's an argument
that Trump is trying to lead in a very similar way. Americans have no idea what to do with
disinformatia. We have this way of freaking out when people are talking about stuff that isn't
true. We want to sort of stand up and wave our arms and say, wait a second, that's just not true. The point is, when people start playing with
disinformatia, they don't care anymore. It doesn't have to be true. It's effective, it's persuasive,
and it does what it's supposed to do. It increases doubt and increases the strength
of strong charismatic individuals.
But isn't there a difference in trajectory here?
I mean, it seems that it's all bottom up rather than top down.
I mean, we have like trolls bragging about how they shitposted their own meme into the
White House.
Honestly, Benjamin, I wish I understood better how this stuff works.
I think what's so weird about it is that this sort of bottom-up disinformatia that's coming out of these dark corners of the internet
is really poorly understood by people like me.
For one thing, I think I am generally not the target audience for it.
I'm much more likely to go out and sort of look for
my elite media. But this idea that people talking about a topic is enough to get someone to get a
gun and go investigate a pizza parlor in Washington, DC. This stuff is clearly very, very strong. It's
clearly very powerful.
Well, let's talk about something you do know a lot about, the business model for journalism.
Is this the root of the problem?
So the economic model is one that's gotten a whole lot of attention. We know that because we've chosen in the United States, a form of news media that is commercially supported. And let's just
be super clear, it isn't the only way that you can support your news media. A lot of the rest of the
world has very strong public broadcasters, they use them as sort of the baseline for facts, and
then there's other broadcasters in there. But in the US, we've really said, we want our media to be free and fair and make a profit. And that's a tricky
thing to be. So we have this endless need for clicks. And so creating something that's highly
sensationalistic, even if it is entirely divorced from facts, turns out to be a profitable thing to
do. Okay, but, you know, over the past two years, Ethan, I've run into you at a number of conferences
where people are on stage talking about this new business model for journalism
that they believe will solve all of media's problems, native advertising.
It just seems to me that we can draw a straight line from publishers confusing their audiences over what is news and what is an ad
and where we are now with audiences totally confused over what is even a fact.
Isn't it fair to say that this whole fake news thing might just be an unintended consequence of native advertising? I mean, Benjamin, my take on this would be,
yep, native ads probably are decreasing trust in media.
But it's kind of like the car had already been set on fire
and driven into a wall,
and now someone's stealing the hubcaps.
So it's not that it's not there.
It's just, it's actually really hard to measure
because it's against the background
of this already catastrophic drop in confidence.
Wow.
Okay, I will take that.
That is a very good answer.
But you know, there is something else I was hoping to talk to you about as well.
You've been a longtime listener of the show and a supporter.
So I'm sure you're aware that it's very difficult to be doing what I do right now.
I'm not sure how to proceed.
Benjamin, I actually think you're phenomenally well positioned for this moment in the media
because you've been making civic fiction for years now.
And so I guess in some ways, I would expect a bit more sympathy for you, for Steve Bannon,
for Alex Jones, you know, for these folks who have decided that they are not attached to reality.
And I think it's going to be a battle of your civic fiction versus theirs.
I have no sympathy for those losers, Ethan. What I'm saying is I want to beat them, but I don't know how.
The big thing that I would say about it
is that a lot of this has to do with the power of secrets.
As soon as you tell people that information is secret,
that it's been leaked, that it's somehow come out,
people think the information is really important
and they start trying to interpret it.
I think if you really wanted to goose your listenership numbers,
you could stop just releasing this through Radiotopia
and call this a secret underground podcast
and try distributing it through those means.
And my guess is that your listenership would increase tenfold.
Benjamin, you need to do better with your mid-roll ads.
Andrew, what's wrong with my ads?
Well, they sound like ads, you know? The ad should be just as good as everything else on the show.
Don't you listen to how other podcasts do it?
You could find someone who works at one of these companies, like a cool secretary or a quirky delivery driver,
and just interview them about how awesome their company is.
Can't I just get an actor?
No, no. You can't have a fake ad that isn't true, Benjamin.
Yeah, but look, it's already down to the wire. The episode's super late. I have no time to find some doofus delivery guy who delivers these kind bars.
What other options do I have?
Well, we can start with what you really like about the product
That's the problem
I don't like anything about these alternative snack bars
Okay, actually, we can work with that
How?
Well, not liking things is a big
part of your brand. You know, if you start talking about something that you like, your audiences are
going to be suspicious right away. Plus, it doesn't matter what you say about the kind bars themselves.
That's why all the copy is so bad. You know, it's just about getting your voice to say the words
kind bar to your audience.
You know, as long as your listeners are hearing the words kind bar come from your mouth and the conversation around it is interesting, funny, or worth sharing,
it will be an effective advertisement.
Try it.
Just say kind bar.
Kind bar.
Okay, let's try it a couple more times.
Play with it.
Kind bar.
Put a kind bar in your mouth.
Put a kind bar in your mouth and chew it up.
Give a kind bar to the person you hate most in your life.
Okay, good.
Just don't forget to tell them the offer code.
No, I think I'm done.
There are now so many narratives competing to answer the question, how did Donald Trump become president of the United States?
Perhaps the most intriguing one I've come across involves big data and surveillance.
It's a story that ran in the Zurich-based Das Magazin last December.
A story about a big data firm called Cambridge Analytica, a firm used by both Donald
Trump and the Brexit Leave campaign. Obviously, this is a story that connects with the miniseries
we're doing here. And thanks to all 30 of you out there in listener land who sent me a link.
This piece now has unofficial and official English translations.
I have had pieces that went viral before, but not in such scale.
Hannes Grassegger's story about Cambridge Analytica was the most read piece of journalism
in Germany in 2016. Hannes and I spoke on the phone as he walked the streets of his
neighborhood in Zurich.
You know, it's a fairly common journalistic product to deliver the big explainer after an election.
And normally it's other people doing these things, not the guy Hannes Grassegger.
And so everybody in the media was kind of like looking at it and saying, can it be true?
Before we attempt to answer that question, can you just explain what it is that this
particular big data company, Cambridge Analytica, does or says it does differently than, say,
all the other big data companies out there who do this sort of work, targeting and profiling
potential voters for campaigns?
So the revolutionary approach is that they're using a certain psychological form of assessing
people's characters through analyzing personal data. They focus on the character of a person
rather than the demographics. And that falls
totally into the field of psychometrics. Psychometrics tries to measure the character
of people in numbers. The key to measuring the character of an individual is a personality
assessment. And the particular personality assessment Cambridge Analytica uses breaks down human beings into five dimensions of personality.
They are openness.
How open are you to new experiences?
Conscientiousness.
How much of a perfectionist are you?
Extroversion.
How sociable are you?
Agreeableness.
How considerate and cooperative
are you? And neuroticism. How easily upset are you? These five dimensions, the big five,
are also known as OCEAN, an acronym for openness, conscientiousness, extroversion, agreeableness,
and neuroticism. Now, researchers have been doing OCEAN assessments for years using traditional surveys.
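As a rough illustration of what one of those traditional surveys involves, here is a minimal scoring sketch. The item bank and keying below are hypothetical, not any real instrument: each question maps to one OCEAN dimension, reverse-keyed items get flipped, and a person's score per dimension is the average of their answers on a 1-to-5 scale.

```python
# Hypothetical Big Five item bank: (dimension, reverse_keyed) per question.
ITEMS = [
    ("openness", False),          # e.g. "I have a vivid imagination."
    ("openness", True),           # e.g. "I am not interested in abstract ideas."
    ("conscientiousness", False), # e.g. "I pay attention to details."
    ("extroversion", False),      # e.g. "I am the life of the party."
    ("agreeableness", False),     # e.g. "I sympathize with others' feelings."
    ("neuroticism", True),        # e.g. "I am relaxed most of the time."
]

def score_ocean(answers, scale_max=5):
    """Average each OCEAN dimension over its items; reverse-keyed items
    are flipped first (1 becomes 5, 5 becomes 1 on a 1-5 scale)."""
    totals, counts = {}, {}
    for (dimension, reverse), answer in zip(ITEMS, answers):
        value = (scale_max + 1 - answer) if reverse else answer
        totals[dimension] = totals.get(dimension, 0) + value
        counts[dimension] = counts.get(dimension, 0) + 1
    return {d: totals[d] / counts[d] for d in totals}

# Both openness answers here agree: a direct 5 and a reverse-keyed 1
# both indicate high openness, so that dimension averages to 5.0.
scores = score_ocean([5, 1, 4, 2, 5, 2])
```

Real instruments use many items per dimension; the averaging logic is the same.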
But in 2008, a couple of guys turned to Facebook to get people,
a lot of people, to fill out OCEAN personality assessments.
In 2008, a guy from Poland, Michal Kosinski,
starts doing his PhD at Cambridge University at the Psychometric Center.
And at that time, one of his colleagues had started a little side project called My Personality App,
which started as one of these little tools that you see on Facebook sometimes, where you are asked, do you want to know your IQ,
or do you want to learn about your character,
which kind of person you are,
and then you can kind of click on the answers to certain questions,
and then you get an end result.
Michal Kosinski and his colleague David Stillwell got their friends to use the MyPersonality
app.
They then got their friends to use it.
Who then got their friends to use it.
So almost overnight, thousands upon thousands of people were filling out these personality
assessments.
Of course, being university-based researchers, they have to ask for permission in order to collect and use the data people are giving them via the app.
But most people say yes.
You remember those years when the sharing hype was really crazy and everybody felt like, oh, wow, sharing is really great.
It would change the world.
And so people were sharing their personal information quite freely.
And probably you remember that before 2013, likes were actually public.
So you could actually scrape the net for likes.
And that's how that major data set evolved.
Combining the surveys with Facebook likes, Michal and his colleagues are able to do some amazing work.
For example, in 2012, Kosinski proved that on the basis of an average of 68 Facebook likes by a user, it was possible to predict their skin color,
their sexual orientation, and their affiliation to the Democratic or Republican party. All this is from
Hannes' article. Intelligence, religion, as well as alcohol, cigarette, and drug use could all be
determined. From the data, it was even possible to deduce whether someone's parents were divorced.
Before long, his model was able to evaluate a person better than the average work colleague,
merely on the basis of 10 likes of Facebook pages.
70 likes were enough to outdo what a person's friends knew,
150 what their parents knew,
and 300 likes what their partner knew.
More likes could even surpass
what a person thought they knew about themselves.
Yeah, but as amazing as all that sounds, what's truly amazing is the reverse engineering.
If you look at the data one way, it's a profiling machine. If you look at the data
the other way, it's a people search engine.
You can search for people with a certain character, if you have their data, at least.
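The two directions can be sketched with a toy model. The page names and weights here are invented for illustration, and the real models were fit on millions of users' likes and survey scores; the point is only that the same weights run forward as a profiling machine and in reverse as a people search engine.

```python
# Hypothetical weights: how strongly liking each page signals "openness".
OPENNESS_WEIGHTS = {
    "Salvador Dali": 0.9,
    "Meditation": 0.7,
    "TED Talks": 0.6,
    "NASCAR": -0.5,
}

def predict_openness(likes):
    """Forward direction (profiling): average the weights of a user's
    likes to estimate a trait score; unknown pages contribute 0."""
    if not likes:
        return 0.0
    return sum(OPENNESS_WEIGHTS.get(page, 0.0) for page in likes) / len(likes)

def search_by_openness(users, threshold):
    """Reverse direction (people search): return the users whose
    predicted trait score clears a threshold."""
    return [name for name, likes in users.items()
            if predict_openness(likes) >= threshold]

users = {
    "alice": ["Salvador Dali", "Meditation"],
    "bob": ["NASCAR"],
}
high_openness = search_by_openness(users, 0.5)  # finds alice, not bob
```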
Michal's work garners a lot of attention. But there's one guy who's also at the Psychometric
Research Center who is really intrigued. His name is Alexander Kogan, and he comes to Michal
with a proposition.
He is in contact with a company, he says. He can't disclose too much, but this company
wants to access the MyPersonality data sets.
After a while, Michal gets Alexander Kogan to tell him the name of the company that wants
the data. It's a company called SCL.
SCL stands for Strategic Communications Laboratories.
So when you look up SCL and you really start diving into it,
you find that these guys have a history of around 30 years
in behavioral change management.
So it's about influencing people.
It's about winning elections.
This is their job.
So Kosinski kind of steps back and says,
oh, wow, that's something I don't want to do.
And this is where the university research actually stops.
Here's where things get a little cloudy.
It's unclear exactly what happens next.
Some possibilities.
Kogan replicates Kosinski's data sets using mechanical turkers to fill out more surveys.
SCL works with Cambridge University to build new data sets
or accesses the originals.
As I said, it's murky.
But what is clear
is that not long after Kosinski
finds out about SCL,
he leaves Cambridge University
and moves to the psychometric lab
at Stanford.
And SCL rebrands its data operations,
calling the new company
Cambridge Analytica.
And Alexander Kogan, he rebrands himself as well.
He moves to Singapore and changes his name to Dr. Spectre.
Oh, and there's one more thing that's clear.
This new and shiny rebranded company, Cambridge Analytica, they go on a data buying spree.
So there's a video by Cambridge Analytica CEO Alexander Nix.
Welcome to the stage, Alexander Nix, Chief Executive Officer, Cambridge Analytica. He's giving a kind of a marketing presentation in New York in late 2016, where he explains
how Cambridge Analytica works.
...to speak to you today about the power of big data and psychographics in the electoral
process.
And there's a couple of logos behind him in his presentation, such as Acxiom or Experian.
This actually shows his data sources.
One of these logos also shows Facebook.
Now, this point is very contentious.
In fact, after Hannes Grassegger's article came out,
Cambridge Analytica made a statement saying
they do not get data from Facebook,
which makes sense that they would say that
because they do not want to piss off Facebook.
Their whole operation depends on Facebook allowing them access.
During the Trump campaign,
they sent hundreds of thousands of uniquely targeted messages
to potential voters.
Which brings us back to our first question.
The big question.
Does this psychological micro-targeting work?
So you have to understand our original intention was just to describe what kind of people are
using our personal data for what aim, okay? But at the end of our reporting,
like, you know, we never expected Trump to get elected, but then we figured, hey, he got elected.
Okay. I am, of course, aware that saying psychological micro-targeting works because
Trump won is about as dumb and unscientific as Trump saying
no one wants to see my taxes because I won.
And after Hannes' piece went viral,
a number of data scientists,
including some who've been on this program,
have publicly called Cambridge Analytica out
as snake oil salesmen.
But maybe that's the point.
Oh yeah, what's really interesting is that for Cambridge Analytica actually,
so when I was talking to their CEO,
for him the American election was basically just a showcase for attracting commercial customers.
Thanks to Brexit and the American election, which he can now use as like best case showcases,
he can offer his services to companies.

You are listening to Benjamin Walker's Theory of Everything.
this installment is called Doomed to Repeat. You can find links to all their work at toe.prx.org.
Special thanks, as always, to Mathilde Biot, Cara Oler, and Jesse Schaffens.
And we also had support this time around from the Alfred P. Sloan Foundation,
enhancing public understanding of science, technology, and economic performance.
More information on Sloan at sloan.org.
The Theory of Everything is a proud founding member
of Radiotopia, home to some of the world's best podcasts.
You can find links to all of them
at our nifty new website at Radiotopia.fm.
Special thanks to all the folks at PRX,
our headquarters, who made this happen.