Your Undivided Attention - Rock the Voter — with Brittany Kaiser
Episode Date: December 5, 2019
Brittany Kaiser, a former Cambridge Analytica insider, witnessed a two-day presentation at the company that shocked her and her co-workers. It laid out a new method of campaigning, in which candidates... greet voters with a thousand faces and speak in a thousand tongues, automatically generating messages that are increasingly aimed toward an audience of one. She explains how these methods of persuasion have shaped elections worldwide, enabling candidates to sway voters in strange and startling ways.
Transcript
So before we get into the show, we just wanted to provide a little update or reflection on
why we're doing this. The problems that we look at every day at the Center for Humane Technology
are really serious. They have to do with election integrity, social isolation, shortening of
attention spans, the toxification of the information environment. We have to fix these things.
And so we're a small organization; no more than 10 people work full-time on doing that.
Oftentimes people look at us and they say, we're so glad that those guys are working
on that. And we don't want that to be the case at all. This is something that requires every
single person, especially those people who are inside technology companies, to stand up and be
part of the solution. And what that means is sometimes you'll see episodes every week and sometimes
there might be a little delay. The only thing I would add here is you know how Ellen Greenspan
used to walk into the briefcase and reporters would look at the size of his briefcase and try to guess
Like, if it was really thick, they knew something was about to happen in monetary policy.
You guys can do the same with us.
If it's been a little while, that's because there's a lot going on; you can make some guesses
about what's happening behind the scenes.
So if you see a delay from us, and it's been a couple of weeks, it's not that the podcast
has stopped.
It's just that we have some big things going on.
And, you know, we want to hear from you about how you're finding this valuable.
We're doing this to try to have everybody step into being part of the solution, to put
our hands on the steering wheel and change the system,
and let other people know about the podcast.
We're growing in incredible double digits right now.
Not that that's the goal.
We don't care about metrics.
But it certainly is encouraging to hear how much it seems to be impacting people inside of technology, policymakers, and media.
So thank you for listening and on with the show.
In December 2016, everybody from Cambridge Analytica that had worked on the Trump campaign and the Trump
super PAC gave us a two-day-long presentation of every single thing that they had done.
That's Brittany Kaiser, a former business development director for Cambridge Analytica,
which harvested the personal data of up to 87 million Facebook users, without their consent, of course.
So for two days, they showed everybody else in the company what they had done,
from data collection to modeling to audience building.
And the building of the audiences was the first really shocking thing that I saw.
I had seen the word persuadables used before, especially in our commercial campaigns.
It's a different concept to a swing voter.
A swing voter means somebody that will vote one way or the other, and they might switch which candidate they're supporting.
But persuadables mean people that can be persuaded to do something or not to do something.
And unfortunately, they had persuadables categories called deterrence.
So they had deterrence campaigns to stop persuadable people
who were definite Hillary Clinton supporters
and would never vote for Trump
to deter them from going to the polls at all.
That was one of the first things we were shown on day one.
What Brittany saw that day is a new form of political campaign
in which candidates greet voters with a thousand faces
and speak in a thousand tongues.
They exploit our individual vulnerabilities
and they automatically generate messages
that are increasingly aimed toward an audience of one.
And they do this invisibly.
Even Brittany, who worked at
Cambridge Analytica, recalls the shocked reaction of her co-workers as they took in the
presentation. You should have seen the looks on some of the people's faces in the New York boardroom.
New York was just a commercial office. These are people trying to sell cars and toothpaste.
A lot of them had been, you know, executives from PepsiCo and Unilever. I think I remember my
chief revenue officer's comment, which was, wow, that's not how you sell soda, is it?
No one really knew what to do.
Today on the show, we talk with Brittany Kaiser
about the methods of persuasion she first witnessed at Cambridge Analytica.
She describes the experience in detail in her new memoir, Targeted.
And she's about to explain how these same practices are now available
to essentially any candidate with a Facebook account.
And to be clear, these methods will proliferate,
regardless of whether our data remains securely within Facebook servers,
or slips into the wrong hands.
If Cambridge Analytica was a weapon,
then Facebook is the arms dealer
and they continue to profit from those
who deploy those weapons today.
I'm Tristan Harris.
I'm Aza Raskin, and this is Your Undivided Attention.
I joined the Obama campaign in 2007
and was a part of the team that invented social media strategy,
not just for politics, but in general.
And I mean, this was the very
beginning of figuring out what social media was, because this is about three years or so
after Facebook was born. Exactly. And not too long after they removed the requirement for a
college email address. That's right. So seeing the very beginning of it, I got really excited
about the types of basic data collection we were doing because I saw that as soon as we sent
targeted messages at individuals, they were engaging. And they were engaging in a wholly different
way than with the blanket messaging that most politicians were used to sending out. We were getting
young people to register to vote for the first time. We were getting people who had been politically
apathetic to come back and actually care and engage with their government again. So I saw
data collection as wholly positive, and I did for many years after that as well. So what are
maybe some specific examples where, you know, before the data in the Obama campaign, you know,
you got X response, but then when you add the data, you get this other, higher
response in some of these examples you're talking about. Take us back. Well, yeah. I mean,
the first time I ever worked on a political campaign was for Howard Dean. And we started using
targeted emails in order to fundraise. And we broke all political fundraising records ever.
When Howard Dean lost and that translated to John Kerry, again, we used a lot of those
same tactics. Now, on the Obama campaign, we went obviously a lot further than emails going to
social media. And instead of Barack giving speeches where he might have tens or hundreds of people
all of a sudden, thousands and tens of thousands of people were showing up to these rallies.
And this was completely revolutionary.
I mean, it was exponential political engagement, which meant more than you could possibly measure
for a politician who had very little name recognition and was a one-term senator at the time.
I remember from being on the outside of that, what that felt like to me as a general public
was, oh, there's a groundswell of support for Barack Obama.
It was invisible to me that there were sort of targeted messages, or there was even a change going on in how voters are being reached.
Yeah, I mean, it was not just targeted messaging on social media, but we built new platforms for the first ever one-to-one interactions between a campaign and supporters.
So, for instance, ahead of debate watch parties, we built a platform where you would text into the campaign and you could text in your questions.
So we built a platform with basic algorithms that would sort through everyone's questions.
We had different teams in campaign headquarters that would receive the health care questions
to the foreign policy questions, to the environment questions, and we would be answering those
questions one-on-one while the debate was actually going on.
In real time, while the debate was going on.
In real time, we would spend the night on the floor of the campaign office.
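To make the mechanics concrete, here is a minimal sketch of that kind of question routing in Python. The topic keywords, team names, and fallback are invented for illustration; the campaign's actual system is not public.

```python
# Hypothetical sketch of keyword-based question routing, as described above.
# Topic keywords and the fallback team are invented for illustration.

TOPIC_KEYWORDS = {
    "healthcare": {"insurance", "medicare", "hospital", "premiums"},
    "foreign_policy": {"iraq", "troops", "diplomacy", "treaty"},
    "environment": {"climate", "emissions", "energy", "pollution"},
}

def route_question(question: str) -> str:
    """Send a texted-in question to the issue team whose keywords match best."""
    words = {w.strip("?.!,") for w in question.lower().split()}
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(route_question("What will you do about rising insurance premiums?"))
# -> healthcare
```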
I was just talking with a VC who specializes in B-to-B sales kinds of things, and he was explaining
a new sector he's really excited about, which terrifies me. And it's the idea that on sales
calls, you have essentially what you're talking about. You have an AI system listening,
diarizing the call in real time, matching it to what other salespeople have found successful
and offering prompts in real time to change the conversation. And so you think you're talking
to a real person. You are talking to a real person, but that real person is backed by, yeah, it's a true
cyborg, and you're unaware. It's an asymmetric power that the salesperson now has over you.
Wow. I'm sorry to say that I find that system incredibly attractive because I used to do phone banking for a lot of political campaigns and fundraising for nonprofits and charities. And I wish I would have had that back then. Yeah. Yeah. I mean, the first thought that popped into mind as I heard him talk about this was like, yeah, this is going to be, one, effective. And two, it's going to be used not just for B2B sales calls. This is clearly going to be used for, I don't know whether it's the 2020 or 2024 election. Like this is coming for your ears.
Absolutely. And that, again, is where we saw the evolution of our one-to-one messaging platforms that we built specifically for the debates, where a lot of answers started becoming quite similar to questions that were grouped together. So we started having blanket messaging for specific types of questions. So in healthcare, we might have five different types of general questions that came in. And so then we would eventually get those templates and suggested answers so that we could
go through them a bit more quickly.
I mean, and so when you go back to what you're talking about, the targeting.
I mean, I think there's sort of, what was 2007 targeting as opposed to today's targeting?
Because I think we throw around this word to say, oh, yeah, we're just targeting the messages.
Well, of course, we target the messages.
We tailor things all the time.
I say something to you in a different way because I know you versus if I don't know you.
But what kind of targeting back then was going on?
It was incredibly manual.
So that would be us actually collating names and saying in our own spreadsheets
that we're building by hand, this person cares about these issues. This person has been to these
events. This person has donated for these causes and trying to build a campaign database where
if there's an event around healthcare or a specific call to donation around healthcare, that those
people all see that message. This wasn't, for example, generating brand new messages just because
we know that's what you want to hear. This was, hey, we already have an event that's going on. We
already have this other thing that's going on. We just need to make sure that these people who we know
care about it, do get to hear about it.
Absolutely, which is why many years later, when I joined Cambridge Analytica, I realized
that what we were doing on the Obama campaign was incredibly basic.
We were speaking to supporters we already had, not finding new ones.
That's the big difference.
Facebook tools didn't exist at the time for you to find people who were similar to the
people you were already talking to.
If an individual that interacted with your post wanted to share that with their friends
and family, okay, great, and we would encourage that.
but there wasn't any lookalike targeting.
Did you want to define lookalike models of people?
Of course, yes.
So lookalike modeling is a concept where if you already have an audience that you're going to
advertise to, say it's 10,000 people that you already know care about the environment,
they care about climate change.
I can upload those people into Facebook and I can say Facebook, I want to find 500,000
or a million people that are as similar as possible to these individuals.
And Facebook will find everybody who has as similar as possible behavioral data to the individuals that I know for sure are my climate change supporters.
And then it'll be able to send my message or my advertisement out to as many people as I want to widen the audience, right?
To make it more concrete, sometimes I'll ask a friend, like, hey, have you ever had somebody saying like, oh, I met somebody just like you?
Like you sort of have a doppelganger.
They look like you, or they talk like you, they behave like you, you have your kind of humor.
What a lookalike model does is let Facebook say, cool, I'm going to find all of your behavioral doppelgangers, all the people that sort of act like you.
Your susceptibility doppelgangers.
Like, I know that this particular form of sugar is your thing, your bubble tea or whatever your thing is.
Right.
You know, that works with you, but your susceptibility lookalikes.
Right. Exactly.
And then that image in my head is like I sort of tap you on the shoulder, and all of a sudden I see, highlighted in a giant crowd, all the other people who sort of walk a little bit like you.
Like that's the power that Facebook has.
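A minimal sketch of the lookalike idea, assuming each person is represented by a vector of behavioral features: rank the rest of the population by similarity to the seed audience's average. Facebook's actual models are far more sophisticated; this only illustrates the nearest-neighbor intuition, with randomly generated data standing in for real behavior.

```python
# Toy lookalike model: find the people most similar to a seed audience.
# All data here is randomly generated; real behavioral features and
# Facebook's actual algorithm are assumptions beyond this sketch.
import numpy as np

def lookalikes(seed: np.ndarray, population: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k population rows closest to the seed centroid (cosine)."""
    centroid = seed.mean(axis=0)
    sims = population @ centroid / (
        np.linalg.norm(population, axis=1) * np.linalg.norm(centroid) + 1e-12
    )
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
seed_audience = rng.normal(1.0, 0.3, size=(10_000, 20))   # known supporters
everyone_else = rng.normal(0.0, 1.0, size=(100_000, 20))  # the wider platform
audience = lookalikes(seed_audience, everyone_else, k=50_000)
print(len(audience), "behavioral doppelgangers selected")
```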
Exactly. So in 2010, Facebook developed something called the Friends API, which is now quite famous, because that was the way that over 40,000 developers were given access to most people on the platform's personal data. So not just the individuals who would take a quiz, but also everybody else in their network once they consented that data would also be transferred to the developer. So that's the famous API that Cambridge Analytica used, but it was also used in the Obama campaign in 2012.
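The scale of that consent cascade is easy to see in a toy model: one quiz-taker's permission exposes their whole friend list. The friend graph below is invented; the point is only the multiplication.

```python
# Toy model of why the Friends API multiplied data collection: one consenting
# user exposed their friends' profiles too. The friend graph is invented.

FRIENDS = {
    "alice": ["bob", "carol", "dan"],
    "bob": ["alice", "erin"],
}

def harvested_profiles(quiz_takers):
    """Profiles a developer could reach: each consenting user plus their friends."""
    reached = set()
    for user in quiz_takers:
        reached.add(user)
        reached.update(FRIENDS.get(user, []))
    return reached

print(harvested_profiles(["alice"]))  # one consent reaches four profiles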
Now, that was one of very many tools that were rolled out not just for politics but for advertising in general.
And from 2010 up until the last election, the amount of different advertising tools really became
exponential in the ways I can decide to target you and everybody else like you, based off of any
different category that I decide to, including, you know, race and religion. But over those many years,
the difference between what was done in 2012 and 2016 really came with the intention of the messaging.
I didn't see micro-targeting in 2012 that used negative messaging, that used kind of counter-campaigning,
that spread the types of hate and fear and, dare I say it, but voter suppression tactics in the way that was used in 2016.
the negative and counter-campaigning just was not advanced in that way.
You specifically said Obama had a policy not to do any negative campaigning, including against other Democratic candidates?
Yes, exactly.
So actually part of the '07-'08 campaign was that I would have to have tons of volunteers that on a daily basis would go and delete off of all of our social media accounts,
anything that was negative against Hillary or other Democrats, as well as any Republicans, because we had a policy of zero negative messaging.
We didn't allow any of that.
You know, I saw that as fantastic.
Every single thing that we pushed out of the campaign in terms of messages was wholly positive.
And that's it.
It was only encouraging people to take action, to care about issues, and to believe in Senator Barack Obama's ability to accomplish those things, which was great.
So what was different about that compared to what was used in 2016, in Brexit or in, you know, Trump?
What's been different?
So in 2016, there
were PACs and super PACs and even parts of campaign messaging where the entirety of the campaign
was negative, especially the super PAC Make America Number One, which was the main Trump super PAC.
It was 100% negative messaging. There was nothing that was supportive of Trump, not even one
single message. Every single thing was negative against Hillary. I had never seen those tactics
used before, ever. Usually a campaign splits: most of its time is positive messaging
for a candidate and they'll also have counter campaigns against specific opponents. But I've never
seen entire organizations that are dedicated to negative messaging and dedicated to negative messaging
in a way that is not just an undermining of democracy, but contravening a lot of laws that we
have in the United States. We have laws against voter suppression. But somehow, on our technology
platforms, the FEC, the Federal Election Commission, has not found a way to enforce our election
laws, and any other government agencies or lawmakers have not found a way to enforce a lot of
our other laws on technology platforms. And it was really shown in 2016 how exploitable those
loopholes actually are. Imagine you're in New York City. And imagine that we get rid of the
police force. You can break the law and like no one will know.
How long does it take for the city to go crazy?
Like, how many hours is it?
How many days is it?
You have all of our previous social norms in which we have assumed there's accountability.
We do the kind of good thing.
But what happens when you discover there actually is no accountability?
And you can do whatever you want.
That's basically the world we have right now online, where, yes, it's true that people can
use the advertising, micro-targeting tools to just target shoes to people who want those shoes.
But the problem is that the bad actors will out-compete the good actors, and there's nothing stopping them.
And so when Facebook or someone says, you know, let's just keep it as it is, like, it's not that bad.
It's like because we actually haven't seen everybody abuse it all at once.
And the system allows for everyone to abuse it everywhere all at once.
This is it.
I mean, this is the election year of the United States.
And whether the Facebook engineers or the Twitter engineers listening to this leave it as the status quo and allow the complete unregulated use of algorithmic, machine-optimized toxic speech
to be the thing that wins, you know, we're talking about real consequences here.
I think, you know, one of the hopeful messages here is that if you're sitting inside of one
of these companies, especially Facebook, it could be Twitter, it could be Google, you actually
have an incredibly high amount of agency for making hugely impactful decisions. Twitter actually
banned political advertising. Like, it is all possible. And if you think about those kinds of
decisions, it's just a bunch of people speaking up and having conversations with their executives,
with their teams, asking transformative questions that do not, you know, fall down the excuse
aisle of, we're just giving people what they want.
People just, haters are going to hate.
Technology is a neutral tool.
Who are we to decide what's good for people?
These are inadequate statements that are mostly evading responsibility for what is in our
direct hands.
Even according to the Facebook employees, I think, in the Facebook employee letter,
they think that there should be blackout periods
in at least the few days before an election.
Is it so much to ask that they turn it off one percent of the time?
They're keeping it on the other 99% of the time.
So it's actually a pretty small ask.
It should be something they'd agree to almost immediately.
What do you think the pushback is?
What do you think they would say for why they wouldn't do that?
What's the defense?
I think they would be worried about the fact that it would demonstrate that they don't have a way to solve it.
So it's sort of like, you know, first they came for the election blackouts and then they came for me.
You know, first they came for the 24-hour blackout, and then suddenly, you know, Center for Humane Technology and the rest of the nonprofit civil society groups demand that they just turn off all the advertising.
You know, it's a slippery slope for them to admit that if the reason why they're turning it off is because an exponential number of advertisers targeting an exponential number of things run by machines is unsafe fundamentally, they're admitting that the entire system is dangerous fundamentally.
So that's one reason why they might be pushing back.
One of the things that sort of arises in my mind, the resistance is like, okay, I can believe that maybe other people are persuadable, but I don't believe that I'm persuadable.
Connect that for me. Like sort of show me why I can be persuaded if I'm sort of on the fence about something.
Yeah. So you would measure an impact of persuadability by what someone's activities were before you show them a certain messaging campaign,
and then what their activities or opinions are afterwards.
And you can actually tell what people are searching for
after they've seen a particular ad,
how people answer questions after they've seen a particular ad.
What do you mean, what they're searching for?
You mean how would you know what they're searching for
after they've seen a particular ad?
I mean, it depends, you know,
how you're using tracking cookies
and what platforms they're inside of.
But yeah, usually you can track what they're searching for afterwards.
So like Google, you can somehow see their Google searches?
Yes, uh-huh.
Wow.
So if people have 20% more searching for, like, Trump and the economy after you've just shown them an ad about how Hillary is terrible and economic policy,
then you know that that's specifically related to that ad because they're going and searching right after they've seen it.
Got it.
And so you're actually, you're getting a closed loop of message you show someone and the way that their behavior immediately changes right after to what they're interested in.
Yeah, exactly.
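A minimal sketch of that closed loop, assuming you can observe post-exposure searches: compare the rate of topic searches in an exposed group against an unexposed control. The users, searches, and topic string below are invented.

```python
# Hypothetical measurement of ad "lift": did exposed users search for the
# topic more than a control group? Users and searches are invented.

def search_rate(users, topic):
    """Fraction of users whose post-period searches mention the topic."""
    return sum(topic in u["searches_after"] for u in users) / len(users)

exposed = [{"searches_after": {"trump economy"}}, {"searches_after": set()}]
control = [{"searches_after": set()}, {"searches_after": set()}]

lift = search_rate(exposed, "trump economy") - search_rate(control, "trump economy")
print(f"lift: {lift:+.0%}")  # +50% in this toy data: the ad moved behavior
```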
Something that we've been talking about, the audience can't see, but I'm gesturing
to Tristan here, is that it's hard for me to see how messages on Facebook or wherever else will change my
behavior, but it's much easier to see how they might start to influence my bias. But then
bias over time becomes behavior. And so if you can own someone's bias, you eventually own their
behavior. You take an example of Crooked Hillary as a meme. Defeat Crooked Hillary, yes.
Defeat Crooked Hillary. The logo of that came from Cambridge Analytica. They invented that.
Yes.
But the phrase came from Trump when he first did that.
Yes, just like he had, you know, Lyin' Ted and Little Marco.
Right.
Crooked Hillary was his phrase, but defeat Crooked Hillary.
The campaign in the logo was made by Cambridge, yes.
Right.
And so the reason I'm going here is once you implant, you know,
what Trump does in general, as you say, you know, Sleepy Joe Biden and Lyin' Ted Cruz and Crooked Hillary,
you're doing a binding, a cognitive binding to the person with an anchor that says,
this is the bias you should have. Every time you look at Hillary, see her as crooked. Every time you see Joe Biden, see him as sleepy. Every time you see Ted Cruz, see him as lying. Yeah, I go a little bit more into that in my book. I talk about Trump's kind of pairing of every single one of his opponents with a specific negative phrase. And that was one of the first times that really the drop in support for Marco Rubio was so easy to measure.
But Lyin' Ted was pretty successful as well.
But Little Marco actually had a big effect on his campaign from our measurements.
And so I think the common narrative is, oh, yeah, the persuadables, you know, they're so easily duped.
Not any of us at this table, Brittany, Aza, and Tristan.
We're so smart.
We could never be influenced.
But if you say bias, I think we can much more easily admit that there are invisible ways that we are looking for certain evidence or other.
But something that caught my ear was when you said, well, the thing about the Big Five personality traits is
that neurotic people, which is the fifth one,
the neuroticism, always respond to fear-based messaging.
It works very well.
Do you want to talk about that?
Of course.
Because that's a very clear example of a deep bias that you can tap into.
Right.
So through all of the behavioral, clinical, and experimental psychology
that Cambridge Analytica brought into our modeling infrastructure,
we found that there are around like 32 different personality types.
And people that are very high in neuroticism respond to fear-based messaging.
I mean, neuroticism means you're a bit emotionally unstable and you can be triggered quite easily.
And we mean neuroticism in a formal psychological sense, just so people know.
We're not talking about like an adjective level judgment.
We're talking about there's a clinical sort of view of what a neurotic personality type is.
We had, you know, a team of psychologists that were working on this with the data scientists on how to measure it using large scale qualitative and quantitative testing.
And so the defeat Crooked Hillary campaign, which was run by the Make America Number One super PAC, after seeing how
successful it was sending fear-based messaging to neurotics, they only sent fear-based messaging
to people measured to have high amounts of neuroticism for the entirety of the campaign.
Really?
That was the entire point. In the beginning, they mixed hopeful messages to open-minded and
extroverted people and assertive messages with fear-based messaging, and it was only fear that
really had a massive impact. So they spent the rest of the super PAC's money on fear.
on fear, yes. Wow.
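In code, the targeting rule Brittany describes reduces to something uncomfortably simple. This is a hedged sketch: the threshold, scores, and ad names are invented for illustration, not Cambridge Analytica's.

```python
# Sketch of trait-based creative selection: choose the ad by a modeled
# Big Five neuroticism score. Threshold and variants are invented.

def pick_creative(neuroticism: float) -> str:
    """Pick an ad variant from a modeled neuroticism score in [0, 1]."""
    if neuroticism > 0.7:       # the modeled "high neuroticism" bucket
        return "fear_based_ad"
    return "hopeful_ad"

voters = {"voter_1": 0.85, "voter_2": 0.30}
for voter, score in voters.items():
    print(voter, "->", pick_creative(score))
```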
Sort of like you found, you find a crack in somebody's psyche, and you pay to take a chisel and a giant hammer, you just start whacking against that one fault again and again.
And that's sort of the image I have for what's going on with our democracies.
Yes. I mean, the first ever stark example that I saw of this, and these ads are all available on YouTube: Cambridge designed five different ads that were put out on both television and YouTube pre-roll for John Bolton's super PAC
on national security.
And some of the ones that were for, again,
the open-minded and extroverted individuals
showed families playing out in the sunshine
and bright waving American flags
and lush green hills and the hope for the future of America.
And then you saw the ad that was cut for neurotics.
And it was dark, nearly black,
really dark images of some of the most iconic buildings
in America with surrender flags
waving on them. So a white surrender flag and a nearly black-and-white, very dark image of, you know, of everything from Lady Liberty to the bridge in San Francisco. And then it has really ominous music, and it stops to a black screen and says, America's never surrendered. You know, we're not going to start now. It was so incredibly dark compared to everything else that was cut. And when I used to show that video in meetings, people would say,
Hey, I remember that. I saw that on TV. Hey, I remember that. I saw that on my laptop. It really stuck with the people that were targeted by it. They remembered it very well. They just kind of would stop and pause and their face would go a bit blank. You could tell how impactful it actually was, that it made them feel afraid that America was being attacked and that if we didn't do something about foreign policy, that we were in danger. And that was what made them feel like national security was important, because they were
afraid of being attacked, not because they had hope that America was an amazing place and that
we had a bright future. What's amazing to me is that the speaker, in this case the Trump
advertising team, is saying two completely different messages to two different audiences, and it's the same
speaker. So on the one hand, it's like, imagine you meet a friend and, you know, you talk to that person,
and then, you know, they talk to you, and they talk about this super upbeat tone, and then they
talk to someone right next to you when you're not around, and they say this totally opposite thing
about the exact same topic.
Yeah.
Like, you would call that person untrustworthy. It's a sociopathic sort of way to operate.
Yeah, someone you wouldn't ever want to do business with.
If someone sits down with you and out of one side of their mouth, they say,
I can't wait to do business together.
And out of the other side, they say, I'm going to destroy your company if you don't work with me.
That's right.
It's kind of like that.
And we've created this sort of mass infrastructure for automated sociopathy, because
each campaign company can basically run these split-tested ads and actually be
in a constant rolling state of saying different things to different people about the same topics
and being 100% self-contradictory and opposite, but it's almost like we have this phrase we've
been playing with. It's sort of like socially subliminal messaging. It's like a drive-by message.
And you say, did I just hear that thing? You try to refresh the page and it's gone. And you ask,
hey, did you see that thing that I saw? No, I didn't. What are you talking about?
Absolutely. I mean, we had a very smart group of people who built the ad tech at Cambridge,
and they were testing sometimes hundreds, thousands, tens of thousands of messages at once,
and that would be a slight change in words, images, phrases, even the coloring and the sound in ads
until it was optimized for the most amount of clicks.
And that means that most people saw an ad that maybe hardly anyone else saw.
Maybe tens or hundreds of other people saw it, or maybe it was just for them.
Definitely in the primaries, when Cambridge was working on the Cruz campaign,
there were some messages that were just for, you know, like 50 people.
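The optimization loop behind that kind of mass split-testing can be sketched as a simple bandit: show variants, watch clicks, shift impressions toward the winners. The click rates here are randomly generated; real ad systems are far more elaborate, but the converge-on-what-clicks dynamic is the same.

```python
# Epsilon-greedy bandit over ad variants: a toy version of automated
# split-testing that converges on whichever creative draws the most clicks.
import random

def run_bandit(true_ctr, rounds, eps=0.1):
    shows = [0] * len(true_ctr)
    clicks = [0] * len(true_ctr)
    for _ in range(rounds):
        if random.random() < eps:            # explore a random variant
            arm = random.randrange(len(true_ctr))
        else:                                # exploit the best so far
            arm = max(range(len(true_ctr)),
                      key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)
        shows[arm] += 1
        clicks[arm] += random.random() < true_ctr[arm]  # simulated click
    return max(range(len(true_ctr)), key=lambda a: shows[a])

ctrs = [random.uniform(0.01, 0.05) for _ in range(100)]  # 100 variants
winner = run_bandit(ctrs, rounds=20_000)
print("most-shown variant:", winner, "true CTR:", round(ctrs[winner], 3))
```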
And if Facebook or these campaigns are whispering different messages into each person's ear,
is it any surprise that we end up with societal incoherence?
Right.
Inability to agree on truths because everyone's hearing a different message.
Recently, we've also started playing with this idea that micro-targeting is a little bit of an unfortunate phrase
because it sounds so small.
It's, oh, it's just micro-targeting, but really this is human targeting.
This is like taking the world's largest supercomputers armed with enough data that the algorithms can make better predictions about you than your colleagues, your spouse, and sometimes even yourself, finding the right brains to target, and then selling the bullets to whoever the highest bidder is.
Right. And that's very much what it was. I mean, from what I understand, the Clinton campaign only served about 50,000 messages over the whole duration of the campaign, and there were over a million
that came out of the Trump campaign, even though it was run over a shorter period of time.
Another thought that came to mind is, sort of, in attention capitalism, hate has a home field
advantage.
Yes.
That's well said.
That's unfortunately how a lot of news feeds and search algorithms are built.
Something that is more inflammatory, something that is more fear-based, gets more clicks, so it rises to the top.
We now have automated
content generated by machines, uploaded to automated content ranking systems, mapped to automated
users, aka bots, mapped to automated advertising. And it's like a computer generating stuff for
computers. The question is, can algorithms know when they're being gamed and when they're
amplifying hate or false things or bad things? And according to Facebook's own logic, they can't
know. What's the example of that? The example of that is, do you remember trending topics on
Facebook? They used to have on the right-hand side, here's the most popular news stories. And they had human beings, human editors who were curating that. They had some contractors. Facebook got accused by conservatives in the United States saying, oh, you're biased against conservatives. They said, fine, fine, we're going to get rid of our human editors, and we're going to have just the machines decide what are the trending topics. So you just count up how often each of the words is mentioned, and the topics that are mentioned the most, they show up on the right-hand side. After they do that, within just like 24 hours, three out of the top eight news
stories are fake news articles. And so what do they decide to do? They say, we're shutting down
trending topics. So essentially, you have millions of pieces of content, trillions of pieces of
content surging through their system every day. And when they delegate it to machines to decide,
is this true, is this good, is this helpful to society? They don't have a way to decide.
And according to their own logic, they say, this is an unmanageable problem. We have to shut it down.
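A toy version of pure count-based trending shows the failure mode: a counter has no notion of truth, so whatever is repeated most rises to the top. The posts below are invented.

```python
# Count-based "trending": tally mentions and surface the most frequent.
# Nothing in this logic can distinguish a fabricated story from a real one.
from collections import Counter

posts = [
    "celebrity X endorses candidate Y",   # fabricated, but widely repeated
    "celebrity X endorses candidate Y",
    "celebrity X endorses candidate Y",
    "city council passes budget",         # true, but mentioned once
]

def trending(posts, k=3):
    return Counter(posts).most_common(k)

print(trending(posts))  # the fabricated story tops the list by sheer volume
```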
Now take that exact same structure and apply it to the automated advertising system.
They've got more than 6 million advertisers sloshing through their system every day,
running tens to hundreds to thousands of campaigns each,
generating millions or trillions of possible combinations of ads being matched to human eyeballs,
all run by machines.
The machines don't know what's true, what's good, what's beautiful, or what's helpful to society,
and yet they're saying we're not going to shut it down.
It's like the reverse CDC, like the Center
for Disease Control.
You know, instead of trying to block a virus from spreading virally throughout the entire
population, it's the reversal.
We've actually laid the train tracks for viruses to spread as fast as possible with as
little ability to respond and prevent that damage as possible.
And I think the fundamental tension here is that these systems are always demanding greater and
greater automation because automation means I don't have to pay people to do it, so it's more
profitable to have machines decide rather than to pay human brains to sit in rooms and make
decisions for us. So the incentive is to take as many of these human decisions and turn them
into machine decisions. But if we just categorically say that machines cannot make critically
important decisions that have to do with democracy or children's health or what's safe or
what's good, we're basically saying there's a limit to how much we're willing to automate
with machines. If you're building systems that are beyond the human capacity to course correct
or to make moral judgments, they're not safe. Yeah. You know, I sort of
want there to be an XPRIZE for trustable trending, right?
Like if somebody or groups of people could crack this, maybe it's coming from the blockchain
community, maybe it's coming from anthropology and sociobiology community, I don't know,
but that just seems like such a perfect use case to be able, if we can get to trustable
trending, that's a huge advance.
Yeah, it's sort of a unit test.
It's almost like the AlphaGo game for whether the AI can figure out the thing.
It's like, how good can the AI approximate good moral human decision-making?
Yeah, that gets me excited.
That gets my engineer design hat on, sort of like my gears are starting to spin,
being like, oh, how would I do that?
I don't know if it's possible.
You remember that solution?
It was one our listeners provided.
It was like for every hateful message, it would donate to an opposite cause an equal
or greater amount.
I wonder what the equivalent of that is for political advertising.
If you go all fear.
That's interesting.
Yeah, yeah.
The more fear you use, the more we show the other candidate's ads for free.
Yeah, exactly.
That creates the counter disincentive that actually prevents you from even wanting to do it in the first place.
Tell us about some of the other countries where there were campaigns, because I think the film The Great Hack, which, by the way, everyone should see, details the sort of unveiling of Cambridge Analytica onto the world stage.
Absolutely. So the SCL Group and Cambridge Analytica worked in over 50 different countries.
There were nine or ten national elections for prime minister and president every single year that the company was around.
Alexander Nix, the former CEO, has probably run more political campaigns than anyone else in the world.
As far as I know, in a lot of the smaller countries, such as Trinidad and Tobago or a lot of different Caribbean nations,
the company had a lot of experience there.
Now, when I joined the company specifically to work in defense and social and humanitarian projects,
I was shown a Trinidad and Tobago example of what they had done, and it looked fantastic.
I was shown a youth engagement campaign where they managed to be able to figure out how to turn out more youth and get them to the polls.
And this was a landslide victory for the political party that they worked for.
Now, throughout the years I worked there, the executives of that company got a bigger and bigger head, a bigger and bigger ego.
And near the end of my time there,
they started being a little bit more honest
about the way that they had worked in other countries.
And I would participate in meetings
where I would hear out of my CEO's mouth
really terrible, underhanded,
and probably even illegal things that were done in other countries.
Actually, only yesterday or the day before,
Trinidad and Tobago started a criminal investigation
into the last campaign that the SCL group ran there.
And the way that this youth engagement campaign was then described to me was they undertook a large-scale data collection in the country and found out that there's one party that is of an Indian background, one party that's of an African background.
The SCL group was working for the Indian party.
And through their research, they found out that the youth that supported that party were always going to listen to their parents and always show up to the polls, no matter
what, but the youth that supported the other party could be convinced not to go to the polls.
They were persuadable. They were persuadable to becoming politically apathetic. How did they know that?
How do you figure that out? So that's very large-scale, complex research that actually more comes
from the psy-ops background of the company. So psy-ops is psychological operations. It's something
that's usually used by militaries. And that is psychological research
that is used to fully understand everybody's levers of persuasion and motivations, their religious
affinities, their caste systems, whatever it happens to be. And usually you can start to see what are
the biggest triggers or what are certain triggers that are never going to work for people.
And just because of their cultural background in Trinidad and Tobago, the Indian youth are always going
to go to the polls with their families. And so what they did, in order
for it, I suppose, not to be obvious that they were doing this for a political party:
they started a youth movement called Do So, which means don't do it.
Do so.
Yeah, do so with crossed arms.
They cross their arms.
The thing everyone takes away from the film is just seeing all of these youth make the gesture
because that gesture was constructed memetically by the Cambridge Analytica creative team, correct?
Yes.
They constructed this, I suppose, youth apathy campaign, which was saying everyone in government is corrupt,
you know, turn off of politics, like they don't care about you type of thing.
And if you want to be an activist, you know, go out there and do things for yourself
because the government isn't going to take care of you.
And this movement spread.
And so the youth of the entire nation were out there and demonstrating and making videos
and graffiti and all of this stuff with the crossed arms do so logo.
Don't do it.
And so on election day, nearly half of the youth population didn't go out to vote,
compared to the election before that, but all of the Indian youth were dragged to the polls by
their parents, and they still voted. And so therefore, that party won. It's amazing, as I think
people, you know, you think about these things, as Aza and I often do. We focus on the technology
platforms as the vehicle and the delivery vehicle for all sorts of psychological, memetic flows.
But then with these examples like you're talking about, you see how it spills out into the real
world. It's almost like we have this vast oil spill, and it spills out all over the
world. We have this, like, hate spill over here, and then we have this disinformation spill over
here, and then we have this dissuasion-from-voting, democracy-is-broken spill over here. Yes. But we've
created this, like, you know, this kind of, the whole world just feels like it's spilling out from
these, from these tactics. Yes. I'd love for you also to talk about some of the other examples. I know,
I mean, you said the company operated in 50 countries. I know Nigeria, Ghana, Mexico,
Indonesia. When I saw you over the summer, we were talking with someone who said they were
from Indonesia, and they'd left the country when they were a kid, fleeing the sort of new government
or something. You said, right in front of me, oh, yeah, Cambridge Analytica
worked on that election. And I remember being like, whoa. Yes. This was always given to me as one of the
earliest examples, which was that Cambridge Analytica's parent group, the SCL group, was hired to
help build a movement in Indonesia that overthrew Suharto, which at the time was seen as a good thing
by, I suppose, whoever was paying for it, likely an intelligence agency.
And Suharto was overthrown and was replaced by someone even more corrupt.
So yes, you overthrew a dictator, and some people might see that as good.
But you destabilized a country and put in someone who is much worse,
which I think we've seen in very many countries around the world,
and you always think was there someone like the SCL group behind that?
And now, through my experience, I would say, yeah, there's probably many organizations
like SCL group around the world
who are involved in movements.
Since I left the company,
I've seen a lot of quote-unquote movements
around the world that do not look like
they were created organically whatsoever.
What are the markers?
Like, is there something to look for?
I would say a very exact unifying message
that spreads like wildfire
and spreads a lot faster than something that's organic
and that more quickly turns into protests
than a lot of other
protests, right? A lot of times a movement, a movement gathers momentum for quite a long time before
people actually go out into the streets. I would say if something is an inorganic movement,
you will see one catchphrase and one symbol that is used by absolutely everybody, whereas
in an organic movement, usually there's tons of different messaging all around the same concept,
and it takes them a while to actually physically go out in the streets, whereas, you know,
you'll see one unified message, and then people are out in the streets protesting something with
all the same poster a lot faster than you would expect to see.
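Those two markers, message uniformity and speed to the streets, can at least be sketched as a heuristic. The thresholds and example data below are invented; real detection would need far richer signals.

```python
# Hedged heuristic for the markers described above: one dominant slogan
# plus unusually fast mobilization. Thresholds are invented.
from collections import Counter

def looks_inorganic(slogans, days_to_protest):
    """Flag a movement if one slogan dominates and protests appear fast."""
    top_share = Counter(slogans).most_common(1)[0][1] / len(slogans)
    return top_share > 0.9 and days_to_protest < 7   # invented cutoffs

organic = ["save the bay", "protect our water", "clean rivers now"] * 10
astroturf = ["do so"] * 30
print(looks_inorganic(organic, days_to_protest=60))   # False
print(looks_inorganic(astroturf, days_to_protest=3))  # True
```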
What was amazing to me in the example of Trinidad and Tobago was the way that the right memes
kind of carry themselves forward because after they invented that meme of the crossing
hands, there were kids who made like YouTube videos, music videos, thinking that this is cool
for themselves.
They weren't like, you know, bought by Cambridge Analytica to do that.
They were doing this on their own.
And so if you find the right meme, it's like you're knocking the first domino off.
And then you can, you know that it's actually going to spill out, and a lot of people
are going to do it. And so I think this is a critical point to get, is that when you start
to do this, you can actually then take your hands off, and you're automatically now following
through with the memetics that you've already implanted. It's sort of a persuadables cascade.
There's this really interesting question at the root of what we're talking about here
with you. And when we first met, this was the fundamental conversation: what is ethical
persuasion? How does the persuader respect the values of the persuadee? But then if you tie that
conversation, well, people often say, well, people actually don't know what their own values are.
As persuaders, we know. And so we're just going to do it anyway, because they don't even know themselves.
So we might as well put it in there. But then you end up with the situation, which is actually
what successful advertising is, where the advertisers' values become your values. So now you think
that that's what you want, that it came from yourself, but that was actually the sort of infrastructure of
Facebook or YouTube guiding people towards that. Absolutely. And this gets to sort of a question
I have for you is, what does 2024 start to look like? What do we have to get in front of right now? How
much worse is it going to get? Well, I definitely think that right now we don't have too many
obstacles to it getting worse. We don't have legal or regulatory frameworks in place. We don't
have the technology to stop some of the abuses of the current tech that we have. So I think
that's important to say is that technologists need to be working very hard on some of these
problems. I mean, looking ahead to 2020, I'm terrified over what people are going to see in the
next year. I'm terrified at how unprotected we are and I'm disappointed in the executives at Facebook
that have made a decision that politicians will not be held to the same standards as you and I.
If I decided to libel someone or slander or put out disinformation, my content would be
blocked and removed. I might even be banned from Facebook myself as an individual. Yet,
if Donald Trump does the same thing, his content is likely going to go viral,
and millions of people will see it even if it is disinformation, and it will not be removed at all,
even if it is identified as disinformation. That's a huge problem and I'm not saying that I think
that all political advertising should be banned. No, I want everyone to care about politics. I want
people to engage with issues that are important to me. I want them to be able to hear what candidates
have to say. So, you know, Jack Dorsey's heroic action of hopefully a temporary ban of political
advertising is to try to fix the problem on the back end before letting it get worse, right?
And that's an important conversation we need to have right now, which is between now and
next November, is there going to be no political advertising on Twitter except for voter
registration?
Okay, that'll be interesting.
Let's see how that goes.
But I hope that doesn't last too long.
I hope that they're investing on the back end and identifying disinformation and hate and
racism and finding better ways to block and remove that content so that we can put political
advertising back up, and a lot of the issues groups and candidates that I think are well-intentioned
can continue to have a voice. But what Facebook has decided to do is the opposite of what Jack Dorsey
has done. Everyone can say everything that they want all of the time. And so completely unchecked
political messaging is obviously a danger, but also an opportunity for the well-meaning people out there.
And then complete blanket banning is also a stifling of political voice when we can still have
people sell us cars or petrol products. And that's, you know, that's really not productive
either, to be honest. But what we talked about when we met over the summer was, you know,
we used to have the fairness doctrine that politicians had equal air time and we guaranteed that.
And we took that away, I think, in the Reagan era. But we could actually say, well, look,
what is democratic speech from politicians supposed to be about? Is it supposed to be about
who can basically, you know, in a TV debate where you have, you know, what do you think about the
Middle East, you have 30 seconds to respond, and game theoretically, it's better to attack the other guy
than even say anything about the Middle East. Like, that isn't what we want. No. We could actually
have a thing where instead of, I think this is how it works in France and in England, you get like
one slot and you get to say one thing. Yes. And you could say, what's my message? What's the thing
I'm trying to say? And Facebook could actually introduce a kind of mass fairness doctrine where in every
country, there isn't this, like, it's-how-much-you-pay-me, with the sort of Citizens United problem that we all
know.
Right.
Facebook, Twitter could actually each introduce these sort of fair spaces of equal speech.
Facebook could, in fact, and Twitter could be the very best tools we've ever invented for humanity as a whole to make sense of the world and to have collective action to solve the existential problems that are facing us.
I agree.
But they have to stand up and say, actually, we realize that we are constructing the social world we live in as a technology platform and take responsibility for doing it.
And it's great because if you can make that
flip, you go from just being responsible to actually empowered to solve the biggest problems that we have.
Yes, and that's really where technology should be able to play a role. But right now we do not have
the laws, regulations, education, or technology to stop the negative use cases of that. And that's
where we need to concentrate in order to be able to take advantage of the good. Let me just throw
one more sort of thing that is scaring me right now. I don't know whether this is already happening
or about to. So, December 2018, Microsoft releases a paper on an implementation of an AI that, quote,
satisfies the human need for communication, affection, and social belonging. It's deployed already
to 600 million people, mostly through Asia. And here's just one little quote, which is, an emotional
connection between the user and the AI became established over a two-month period. In two weeks,
the user began to talk with the AI about her hobbies and interests; by four weeks, she began to treat
the AI as a friend and ask her questions related to her real life.
And after nine weeks, the AI became her first choice whenever she needed someone to talk to.
So when I think about the loneliness epidemic, that seems like it's about to become the biggest national security threat and election security threat.
What happens when your best friend is a computer that's for sale, that any message can pipe through?
Yeah.
I mean, that's the situation that we're already in.
That's the thing.
Our Facebook feed and our Google search feed are up to the highest bidder.
We can't consent to whether our data is given to the highest bidder in politics or commercial,
and who those people are and what their intentions are.
That lack of transparency and consent mechanism, it's just not there right now.
Brittany, thank you so much for coming on the podcast and for what you're doing on regulation.
And I know that you're working with lots of state legislatures here to try to pass new laws,
and we'll be in touch for many more.
But thank you so much for coming.
Definitely thank you guys for having me.
Yeah, thank you very much.
So owning your data and education are both really important first steps, but while they may be necessary, they're not sufficient, because it's what you can do with the data that matters, the predictions that let the machines know what you're going to do before you know yourself.
This is exactly what machine learning is good at, is detecting patterns and then mimicking those patterns.
So figuring out how you speak, mimicking it, and then modifying it in a little
way, that's, like, that's at the heart of what machine learning does best.
They can just wake up the avatar voodoo doll of you.
Each of those voodoo dolls, each of those avatars, act and think and speak more and more like
us, which means that you can actually kind of predict more and more steps ahead of what all
those avatars are going to do.
And then you can sell those predictions to an advertiser and say, hey, do you want
those future choices that you don't know you're going to make to go in this other direction
that you can pay me to create?
Yeah.
I wanted to bring up a fairly
new technology that I think many of our listeners might not be aware of, and that's style transfer.
Style transfer is where I can teach an AI: point it at Van Gogh, and it learns the style
of Van Gogh; point it at Warhol, it learns the style of Warhol; point it at Magritte,
it learns the style of Magritte. And then I can take any other image, and the AI will transfer the
style, turn that photo of you into an image that looks like it was drawn by Warhol, Magritte, or Van Gogh.
It's pretty cool, honestly. And recently,
that technology has been moving from style transfer for images to style transfer for text.
That is, I can point the machine at Shakespeare.
It learns how Shakespeare writes, and then I can give it any message, something you wrote to a friend, and I can rewrite it as Shakespeare.
That doesn't sound so bad until you realize the other ways it could be used.
Gmail could point the AI at every email you've ever written, and they can now write any message as if it's coming in your voice.
Or if you point it at every message that you've responded to quickly or positively,
it can learn the style that's most persuasive to you.
And obviously, this is a kind of asymmetric power,
because if Google or Facebook were doing this,
they could turn around and give that ability to any advertiser.
Just click a checkbox and then whatever marketing message you have
runs through their AI so that it's uniquely persuading you.
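Modern text style transfer runs on large language models, but the core move, learn someone's patterns and then generate in them, can be shown with a toy word-level Markov chain fit to a writing sample. Everything below is a crude stand-in for illustration, not how Gmail or any real product works.

```python
# Toy "style mimicry": fit a word-level Markov chain to a writing sample,
# then generate text with loosely similar patterns. A crude stand-in for
# the neural style transfer discussed above.
import random
from collections import defaultdict

def fit_style(corpus: str) -> dict:
    """Map each word to the words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def mimic(chain: dict, start: str, length: int = 10) -> str:
    """Walk the chain to produce text in the learned style."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = "to be or not to be that is the question whether tis nobler"
print(mimic(fit_style(sample), "to"))
```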
That is so creepy.
Jaron Lanier's metaphor for this is, imagine going to Wikipedia, except this is a new version of Wikipedia in which each article was personalized just to manipulate you.
So you're actually getting a different version of that article than everyone else who's getting that article.
That makes people realize how creepy that is.
It's actually sort of invisibly dividing us socially.
It sort of reminds me of that, you know, a house divided cannot stand.
This is dividing the house down to its individual people.
Which is why we say this is an unsustainable
business model and system.
The reason micro-targeting is so dangerous, and why we should have never even allowed
it and lookalike models, is because it enables, like, you know, in Othello, the Shakespeare
story, for those who,
you know, understand that,
Iago is that character who's gossiping strategically in Othello's ear, and he's able to create
a sense of distrust in one person and in another by controlling the messages that two people
receive, and then making them hate each other just enough
so that they'll never actually talk to each other and compare notes about what information each of them was receiving.
So that's essentially what micro-targeting allows, and that's why it has to stop.
It cannot be allowed because it enables the mass strategic division of society
by spreading the kind of gossip that makes it impossible for us to ever compare notes
and realize that there's this massive artificial divide.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi.
Our associate producer is Natalie Jones.
Noor Al-Samurai helped with fact-checking.
Original music and sound design by Ryan and Hayes Holiday.
Special thanks to Abby Hall, Brooke Clinton, Randy Fernando,
Colleen Hakeas, Rebecca Lendell, David J.,
and the whole Center for Humane Technology team for making this podcast possible.
We want to share a very special thanks to the generous lead supporters of our work at
the Center for Humane Technology, including the Omidyar Network, the Gerald Schwartz
and Heather Reisman Foundation, the Patrick J. McGovern Foundation, Evolve Foundation,
Craig Newmark Philanthropies, and Knight Foundation, among many others.
A huge thanks from all of us.