Making Sense with Sam Harris - #152 — The Trouble with Facebook
Episode Date: March 27, 2019. Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
Welcome to the Making Sense Podcast.
This is Sam Harris.
Okay, very short housekeeping here.
Many things happening in the news.
The Mueller report just came in.
I think I'll do a podcast on this when there's real clarity around it.
I'll get some suitable scholar on.
So I will defer that for the moment and just introduce today's guest. Today I'm speaking with Roger McNamee. Roger has been a Silicon Valley investor for 35 years. He has co-founded
successful venture funds, including Elevation, where U2's Bono was one of his co-founders.
He holds a BA from Yale and an MBA from the Tuck Business School at Dartmouth.
But of relevance today is that he was an advisor to Mark Zuckerberg very early on and helped recruit Sheryl Sandberg to Facebook.
And he is now a very energetic critic of the company and of many of these platforms, Google, Amazon, Facebook, etc. We focus on
Facebook in particular. We talk about Google to some degree. But this conversation is a very
deep look at all that is going wrong with digital media and
how it is subverting democracy, making it harder and harder to make sense to one another.
It's a growing problem that I've discussed many times on the podcast, but today's episode
is an unusually deep dive.
So now, without further delay, I bring you Roger McNamee.
I am here with Roger McNamee. Roger, thanks for coming on the podcast.
Oh, Sam, what an honor to be here.
So I got connected to you through Tristan Harris, who's been on the podcast and who many people
know has been dubbed the conscience of Silicon Valley. But another podcast guest I also got through Tristan is
another one of your partners in crime, Renée DiResta, who gave us a fairly harrowing tour of
the Russian influence on our lives through social media and other hacking efforts.
So you know both of those people, and they really have been allied with you in your efforts to deal
with the problem that we're about to talk about, which is just what is happening on our social
media platforms with bad incentives and arguably unethical business models so as to
all too reliably corrupt our public conversation and very likely undermine our democracy.
So let's just start with your background in tech and how is it that you come to have an opinion
and knowledge to back it up on this
particular problem? Yeah. So, Sam, I began my career in the tech world professionally in 1982.
And when I was going back to college in 1978, I dropped out for a period of time. My brother had
given me a Texas Instruments Speak & Spell, you know, the toy for teaching kids how to spell.
And it was brand new that year. And he hands it to me at Christmastime 1978 and says,
you know, if they can make this thing talk with a display and keyboard, you're going to be able to carry around all your personal information, a device you can hold in your hand. And it probably
won't take that long. So this is one year after the Apple II, three years before the IBM PC, and I think roughly 17 or 18 years before the Palm Pilot.
He planted that seed in my head, and I couldn't get rid of it. And I spent four years trying to
figure out how to become an engineer, discovered I was just terrible at it. And so I got a job
being a research analyst covering the technology industry.
And I arrived in Silicon Valley just before the personal computer industry started.
And that was one of those moments of just pure dumb luck that can make a career in a lifetime.
And in my case, it did both. So I start there in 1982. I follow the technology industry for a long, long period of
time. And I do this, like Zelig, I just wound up in the right place at the right time at a lot of
different moments. Beginning in the mutual fund business in Baltimore at T. Rowe Price,
but covering tech, traveling around with the computer industry as it formed. Then starting
a firm inside Kleiner Perkins Caufield & Byers, the venture capital firm, in 1991.
So I was actually in their office when the internet thing happened.
So the day Marc Andreessen brought Netscape in, the day that Jeff Bezos brought in Amazon, the day that Larry and Sergey brought in Google, those were all things that I got to observe.
I wasn't the person who did them, but I was there when it happened.
And that was, if you're an analyst, that's a perfect situation. And so in 2006, I had been in the business 24 years and I get a phone call from the chief privacy officer at Facebook saying,
my boss has got a crisis and he needs to talk to somebody independent. Can you help?
And so Mark came by
my office that day and he was 22. The company was only two years old. It's about a year after the
end of the storyline from The Social Network. The company is only available to high school students and
college students with an authenticated email address. And there's no news feed yet. It's
really early on. And he comes in my office and I say to him,
Mark, you and I don't know each other. I'm 50, you're 22. I need to give you two minutes of
context for why I'm taking this meeting. And I said, if it hasn't already happened,
either Microsoft or Yahoo is going to offer a billion dollars
for Facebook. Keep in mind, the company had nine million in revenues before that,
so a billion was huge. And I said, everybody you know, your mother and father,
your board of directors, your management team, everybody's going to tell you to take the money.
They're all going to tell you you can do it again, that with 650 million bucks at age 22,
you can change the world. And I just want you to know, I believe that
Facebook, because of authenticated identity and control of privacy, is going to be the first
really successful social product, and that you can build a social network that will be more
important than Google is today. So keep in mind, it's 2006. So Google's already very successful,
but obviously nothing like what it is today. And I said, they will tell you you can do it again,
but in my experience, nobody ever does. And so I just want you to know, I think what you have here
is unique. It's cool. And I hope you'll pursue it. What followed Sam was the most painful five
minutes of my entire life. You have to imagine a room that is totally soundproof because it was a
video game lounge inside our office.
And this young man is sitting there pantomiming thinker poses. At the first one minute mark,
I'm thinking, wow, he's really listening. This is like, you know, he's showing me respect.
The two minute mark, I'm going, this is really weird. At three minutes, I'm starting to dig holes in the furniture.
At four minutes, I'm literally ready to scream. And then finally he relaxes.
And he goes, you won't believe this, but I'm here because the thing you just described,
that's what just happened. That is why I'm here. And so that began a three-year period where
somehow I was one of his advisors. And my experience with him, Sam, was,
it was fantastic. He was the perfect mentee in the sense that he reached out to me on issues where
he was open to ideas. He always followed through. I never saw any of the antisocial behavior that
was in the movie. You know, I didn't have a social relationship with him. It was purely business.
But for three years,
it was really rich. And I saw him almost every week. And the key thing that I did in addition to help him get through the first problem, because he didn't want to sell the company
when he came into my office, but he was really afraid of disappointing everybody. And I helped
him figure out how to do that. And then he needed to switch out his management team. So I helped him
do that. And the key person I helped bring in was Sheryl Sandberg. And so you have to imagine the
context for this thing is I'm a lifelong technology optimist. And I grew up in the era, I'm the same
age as Bill Gates and Steve Jobs. So I grew up in the era where technology was something that
made people's lives better and that we were all committed to changing the world with a kind of hippie libertarian
value system.
And Mark appeared to me to be different from the other entrepreneurs.
You know, I was not a fan of the PayPal mafia's approach.
And I had consciously turned down some things where I really was philosophically out of
line with the management teams. And, you know, I look at Peter Thiel and Elon Musk and Reid Hoffman as
incredibly brilliant people who had ideas that transformed tech and transformed the world.
But philosophically, I come from a different place. And so I wasn't so comfortable
with them. But Mark seemed to be different. And Sheryl, I thought, was different. And,
you know, so what wound up happening is I retired from the investment business because
it turned out that, I guess, I'd gotten past my philosophical sell-by date, that I was seeing too
many businesses with strategies that I just couldn't sign up for, things that I knew would be successful, things like Uber and Spotify, where, you know, they
delivered a lot of value to the customer, but only by causing some harm to other people in the chain,
and I wasn't good with that. And sadly, I wasn't paying close attention to Facebook. I stopped being a mentor to Mark in 2009. So I wasn't around when the business model formed in 2011, 12, and 13. And I did a crappy analytical job. I just missed the development of the persuasive technology and the manipulative actions that really came to dominate things.
So in 2016, I'm retired from the business. I'm still a fanboy. I really love Facebook.
But all of a sudden, I start to see a series of things that tell me there's something really
wrong. And that's what got me going. So between January 2016 and October, I saw election issues in the
Democratic primary and in Brexit, where it was clear that Facebook had an influence that was
really negative because it gave an advantage to inflammatory and hostile messages. And then I saw
civil rights violations, a corporation that used the Facebook ad tools to scrape data on anybody who expressed interest in Black Lives Matter.
And they sold that to police departments in violation of the Fourth Amendment.
And then Housing and Urban Development, the government agency, cited Facebook for ad tools that allowed violations of the Fair Housing Act, the very thing that Facebook just settled the civil litigation on in the past week.
And so you have civil rights violations, you see election things. And I'm freaked out. And I write
an op-ed for Recode. And instead of publishing, I sent it to Mark Zuckerberg and Sheryl Sandberg
on October 30th of 2016, so nine days before the election. And it basically says,
I'm really, really concerned that Facebook's business model and algorithms allow bad actors
to harm innocent people. It's a two-page, single-spaced essay. It was meant to be an op-ed,
so it's more emotional than I wish. If I'd had a chance to do it again, I would have rewritten it for them. But I wanted to get it in their hands because I was really
afraid the company was the victim of essentially well-intended strategies producing unintended
consequences. And that's what led to it. And they got right back to me. Both of them did.
They were incredibly polite, but also dismissive. They treated it like a public relations problem.
But they hand me off to Dan Rose, who was one of the most senior people at Facebook
and a good friend of mine.
And they said, well, Dan will work with you.
And he's just saying to me, Roger, we're a platform, right?
The law says we're not responsible for what third parties do because we're not a media
company.
And so Dan and I talk numerous times,
and then the election happens. And I just go completely ape. Literally the morning
after the election, I'm screaming at him that the Russians have tipped the election using Facebook.
And he's going, no, no, we're cool because Section 230 of the Communications Decency Act says
we're a platform. We're not responsible for third parties. I'm going, dude, you're in a trust
business. I mean, I'm an investor. I'm your friend. I'm not trying to be hostile here. I'm trying to
save you from like killing this business. That you got to do what Johnson & Johnson did when that guy
put poison in bottles of Tylenol in 1982 in Chicago,
which is they took every bottle of Tylenol off the shelf until they could invent and deploy
tamper-proof packaging. They defended their customers. Even though they didn't put the
poison in, they weren't technically responsible. And I thought Facebook could convert a potential
disaster into a winning situation by opening up to the investigators
and working with the people who used the product to understand what had happened.
And for three months, I begged them to do this. And finally, I realized they were just never going
to take it seriously. And that's when I went looking for, you know, like I didn't have any
data. I mean, Sam, you know how hard this is when you're talking to really, really smart technical people.
You got to have a lot of data.
And all I had was 35 years of spider sense.
And I went shopping for friends.
And that's when I met Tristan Harris.
And that changed everything because I was looking at this as an issue of civil rights and an issue of democracy.
And Tristan's on 60 Minutes, and he's talking about brain hacking and the use of manipulative techniques, persuasive technology to manipulate attention and create habits that become addictions.
And then how that makes people vulnerable and how filter bubbles can be used to create enormous economic value,
but at the same time increase polarization and undermine democracy.
And I had a chance to interview him on Bloomberg a couple days after the 60 Minutes thing.
And I call him up immediately after the show's done and go, dude, do you need a wingman?
Because I'm convinced he's like the
messiah of this thing. He's the guy who gets it. And I thought, well, maybe I can help him
get the message out. And so that's how we came together. So that was April 2017. And we literally
both dropped everything we were doing and committed ourselves to seeing if we could
stimulate a conversation. And it was really clear we were going to focus on public health because I was certain that Tristan's idea was the root cause
of the problem. And so that's what we went out to do. And the hilarious thing was, he may have
told you this, but it began with going to the TED conference. Eli Pariser, the man who identified filter bubbles and wrote
the amazing book about that, got Tristan onto the schedule of the TED conference two weeks before
the conference itself. It was amazing what he did. Actually, it was Chris Anderson who got in touch
with me having heard Tristan on this podcast a few weeks before the TED conference. And that was
also part of the story.
Oh, outstanding. Well, thank you for that. Okay. So I did not know that piece of it.
No, it was super gratifying to see that effect because I wanted Tristan's voice amplified.
Okay. Well, so then we owe it to you. That's really funny, because that explains a lot of things.
So anyway, we go to the TED conference, right?
We're thinking there's a thousand people there.
We're going to make this thing a big story overnight, right?
We're going to solve this two weeks from the day we meet.
We go to Ted, right?
He gives this impassioned thing.
And you've seen the, you've seen the TED talk.
Yeah, yeah, yeah.
And, you know, we go around to collect business cards.
I think we came out of there with two.
Right.
You're talking to people whose job depends on not understanding what you're talking about.
Oh, my God.
It's just exactly right.
And so we're just, like, completely traumatized because we don't know anybody who's not in tech.
And that's when a miracle occurred.
So when Tristan was on 60 Minutes, the woman who did his makeup happened to be someone whose regular gig was doing the makeup for Arianna Huffington. And she called up Arianna, for whom she'd worked for a decade, and said, Arianna, I've never asked you to do this, but you need to meet this young man.
And so she sets up for Tristan to meet Arianna. So the two of us go to New York,
and Arianna takes Tristan under her wing, gets him onto Bill Maher, and introduces him to a
gazillion other people. And so all of a sudden, we go from not having any relationship at all,
and then this purely beautiful woman, Brenda, who did Tristan's makeup, gets him on there.
And she recurs in the story throughout because she did his makeup on Bill Maher.
She did mine when I was on Bill Maher.
Yeah, mine too.
And it's like you sit there and you go, you know, it's the butterfly's wings.
And she was the butterfly.
And while Tristan's meeting with Arianna for the first time, I get an email from Jonathan Taplin, who wrote Move Fast and Break Things, the book about the antitrust issues with Google, Facebook, and Amazon,
published in early 2017, which had really helped frame my ideas.
And he sends me a card for an aide to Senator Mark Warner. And if you recall, in
May of 2017, the only committee of Congress where the Democrats and Republicans were working together was the Senate Intelligence Committee, of which Mark Warner was the vice chair.
So to get a card for somebody who is a policy aide to him was a huge deal.
And so I called him up and I said, have you guys – I know your oversight mission is intelligence agencies, but is there anybody in Washington who's going to protect the 2018 elections from interference over social media?
You know, it was clearly outside their jurisdiction.
Anyway, he brings us to Washington to meet Warner because he goes, you're right.
If it's not us, it's not going to happen.
So we've got to find some way to get to it.
You need to meet Warner.
And it took a couple months to set up.
And in between, we get a contact from Senator Elizabeth Warren,
who has a hypothesis about the tech group that is really profoundly insightful,
where the question she asks is,
isn't this basically the same problem as the banks had
in 2008, where one side, the powerful banks in that case, had perfect information,
and their clients only had the information the banks were willing to give them? And she had this
insight that Facebook and Google, and to a lesser extent Amazon, were doing exactly the same thing,
that they were maintaining markets of perfect information on one side and starving the other side. So they were
essentially distorting capitalism, really undermining the notion of capitalism, which
requires at least some uncertainty on both sides to have a market. And, you know, using that in a
monopolistic way, which, I mean, I was gobsmacked. I've been in the investment business
for 35 years. I know a lot about antitrust. I was a first party to the Microsoft antitrust case
and to the AT&T breakup. So I really got to watch both of those up close. I'm a huge believer in
using antitrust in tech. And here is a senator who has this whole thing figured out in 2017.
And, you know, so that's the start of our day. And then we go and meet Warner, and Warner immediately gets the need to do something to
protect the elections. And he goes, what should we do? And Tristan, this is how genius he is,
Tristan, without blinking an eye, goes, oh, we need to hold a hearing. You need to make Zuckerberg
explain why he isn't responsible for
the outcome of the 2016 election. Well, I want to drill down there. I want to fast forward at
some point to those hearings, because those hearings were, I think, famously unproductive,
at least the public's perception of them is that. So let's articulate what the problem is here. You knew Zuckerberg, you knew Sandberg, you had a reason to believe that they would appreciate
their ethical obligations once this became evident that there was a problem. And the problem,
as I understand it, is this. I should remind people that we're talking about your book,
Zucked, which is about Facebook in particular, but it covers the general footprint of this problem of bad
incentives and a business model trafficking in user data. And generically, the issue here is that
misinformation spreads more effectively than facts. The more lurid story is more clickable than the more nuanced one. And you add to that the emotional component
that outrage increases people's time on any social media site. And this leads to an amplification
of tribalism and partisanship and conspiracy theory. And all of these things are more profitable
than a healthy conversation about facts. They simply are more profitable given the business
model. And one could have always said that this dynamic vitiates other media too. I mean,
this is true for newspapers. It's true for television. It's just true that if it bleeds, it leads, on
some level. But this is integrated into Facebook's business model to an unusual degree. And yet,
to hear you tell the story of your advising of Zuckerberg, and I don't think you said it here,
but it's in the book that you actually introduced Sandberg to him and facilitated that marriage. That was at
a time when the ethical problem of this business model wasn't so obvious, to hear you tell it. I mean,
were they having to confront this back in 2007 or not?
Well, they were certainly not confronting it in any way that I was aware of. To be clear,
in the early days of
Facebook, they had one objective only, which was to grow the audience. There was really no effort
made during the period I was engaged with them to build the business model. Sheryl's arrival was
about putting in place the things to create the business model, but there was a great deal of
uncertainty. In fact, Mark was initially very hesitant to
hire Sheryl because he didn't believe that Google's model would apply or work at Facebook,
and it turned out he was correct about that. So my perception of the model, I love the way you
just described that. You know, the thing that I always try to explain to people is that when you think about filter bubbles and you think about if it bleeds, it leads.
That whole notion has been with us for 150 years.
But before Google and Facebook, it was always in a broadcast model.
So when I was a kid, everybody my age saw the Kennedy funeral, the Beatles on Ed Sullivan,
and the moon landing.
And we all saw it together.
And the filter bubble brought people together because we had a shared set of facts.
And the complaint at the time was conformity, right?
Because we all saw exactly the same thing. With Facebook and Google, they create this world of, in Facebook's case, across all their platforms, 3 billion Truman Shows,
where each person gets their own world, their own set of facts with constant reinforcement where they lure you onto the site with rewards, right,
whether it's notifications or likes, to build a habit. And for many people, that turns into an
addiction. I always ask people, people say, oh, I'm not addicted. And I go, okay, great.
When do you check your phone first thing in the morning? Is it before you pee or while you're
peeing? Because everybody I know is one or the other. And, you know,
we're all addicted to some degree. And then once you're coming back regularly,
they have to keep you engaged. And this is stuff that was not happening until roughly 2011.
You know, before 2011, what they had to keep people engaged was Zynga,
right? They had social games. That was the big driver of usage time before 2011.
And, but what they realized was that appealing to outrage and fear was much more successful
than appealing to happiness because one person's joy is another person's jealousy. Whereas if
you're afraid or outraged, you share stuff in order to make other people
also afraid or outraged because that just makes you feel better.
And Tristan had this whole thing figured out.
And, you know, we obviously shared that in Washington.
And that was, you know, an important stimulus.
But when I think about the problem, all of that's one piece of it, which is the manipulation of people's attention for profit and the natural divisiveness of using fear and outrage and filter bubbles that isolate people. You know, if you start out anti-vax curious and they can get you into an anti-vax group, within a year you're going to be in the streets fighting vaccination. It's just
how it works. That constant reinforcement makes your positions more rigid and makes them more
extreme. And we cannot help that. It's about the fundamental, it's not a question of character or
whatever. It's about the most basic evolutionary wiring. I just want to cover this ground again, not to be pedantic, but I do
have this sense that there are many people who are skeptical that this is really even a problem,
or that there's something fundamentally new about this. So I just want to just cover a little bit
of that ground again. You've used this phrase, filter bubble, a bunch of times. If I recall,
that actually came from Eli Pariser's TED Talk, where many of us were
first made aware of this problem. He might have mentioned Facebook, but I remember him putting it
in terms of Google searches, where if you do a Google search for vaccines and I do one, we are
not going to get the same search results. Your search history and all the other things you've done online are getting fed
into an algorithm that is now dictating what Google decides to show you in any query. And
the problem here is that, and I think it was Tristan who, no, either Tristan or Jaron Lanier,
you might correct me here, one of them said, just imagine if when any one of us
consulted Wikipedia,
we got different facts,
however subtly curated
to appeal to our proclivities,
on any topic we
researched there, and there could be no guarantee
that you and I would be seeing the same
facts. That's essentially
the situation we're in
on social media and Google, and this is obviously the majority of anyone's consumption of information at this point.
Exactly. And so if we take that as one part of the problem, so when Eli first talked about filter bubbles, he used both Google and Facebook and showed
these examples and how essentially these companies were pretending to be neutral when in fact
they were not, and they were not honest about it.
So you know the Harvard scholar Shoshana Zuboff has a new book called The Age of Surveillance
Capitalism. And there are some
things in there where she spent a dozen years studying Google's business and gathering data
about it. And in my book, which I wrote at the same time she was writing hers, so I was totally
unaware of her work, I hypothesized a bunch of things. And Shoshana has data, so she's like,
in my opinion, a god.
But the core thing that Google did, and here's how the flow worked, because without this, what Facebook did would have been less harmful.
But when you talk about the people who are skeptical of harm, when you see the Google piece, then the two of them together make it really clear.
So Google begins like a traditional marketer.
They have one product.
It's 2002. The product is search. They're gathering data from their users in order to improve the
quality of the product for those users. And they have an insight, which is that they only need a
couple percent of the data they're gathering to improve the search engine. So they decide to figure out, is there any signal in the other 98%?
And the insight is traditionally, I think, credited to Hal Varian, an economist at Google,
that there was, in fact, predictive signal.
So they could basically do behavioral prediction based on this stream of excess data that they
were capturing from
search results. And the signal wasn't hugely strong because it was just from search. So they
had the insight, we need to find out the identity of people. And then they did something incredibly
bold. They create Gmail, which would have given them identity, which you could tie to the search
queries and therefore you'd know purchase intent and whose purchase intent it was.
But the really bold thing they did was they decided they were going to scan every message.
And they put ads into Gmail, ostensibly to pay for it.
But I think that was actually just duck food.
This is a hypothesis of mine that they knew people would squawk at the ads and force them to be removed.
But once they were removed, people would stop complaining and Google would still be scanning all the messages. So we essentially, if you're looking for data for behavioral prediction,
it's hard to get a better product than email for telling you what people are thinking.
And for whatever reason, people who signed up for Gmail went along with this.
So suddenly Google got this massive treasure trove of data about what people are going to do
with their name and their search results to tie it to actual purchase intent. Then they decide,
well, we need to know where they are. So they create Gmail, or sorry, they create Google Maps.
And so now they know where everybody is. And then they realize, wait a minute,
there's all these open spaces.
We can turn them into data and monetize them.
So they start driving cars up and down the street to create street view.
And they do this without permission.
But nobody really pushes back very hard.
There are a few complaints.
Germany got very uppity about it.
And there was a big stink over there.
But in the US, people sort of went along with it.
And then they realized, well, wouldn't it be cool if we also took pictures
of everybody's house from the top? So they do satellite view. And then they create Google Glass
so they can get up close. And that doesn't work. People blow that up. So the guy leaves,
creates Niantic, and so they do Pokémon Go, and they do all the APIs. So they get all this.
You know, people think they're playing a game, but they're really gathering data for Google.
And when you put all these pieces together,
you realize, oh my gosh,
the business model initially
was about improving the targeting of the ads,
but then they have the genius insight
that with filter bubbles
and with recommendation engines,
they can take that market of behavioral prediction and increase the probability of a good
outcome by steering people towards the outcomes that the predictions have suggested. And so that's
how they use filter bubbles; that's how they use recommendation engines. And so the way to think about it is, if you're a marketer today, Google and Facebook have all of your customers behind a paywall, but you
can do this Faustian deal with these guys, which is: you can get perfect
information on these people as long as you're willing to do it on their terms.
Now, the other side of that trade: if you're a consumer, the data you're
getting is coming primarily from Google or Facebook,
right? It's being controlled by them. So if you take the emotional component of what Facebook has been doing and that whole thing with, you know, manipulation of attention and the notion
of creating habits that become addictions and that inflaming of lizard brain emotions like outrage and fear and the use of disinformation and conspiracy
theories to essentially get past people's civility, right? Civility is a mask. And you
want to strip people of that and get to their underlying reality because that's where all the
behavioral prediction value is. And then you overlay onto that what Google was doing and you
realize, oh my God, these people have created digital avatars for each and every one of us. And they've
got this choke collar on it and a leash. And they control our digital avatars. We do not control
them. And they control them simply because they went into a place where there was this unclaimed asset called data, and
they claimed ownership of all of it, and we let them get away with it.
So on the one hand, you're talking about companies.
Let's just focus on Google and Facebook here.
I'm sure Twitter is involved as well, but I can't figure out how Twitter is functioning.
Well, Microsoft and Amazon are the other guys who really do this.
Right. Okay. Well, let's just focus on the products you've already described here. So,
Google rolls out Gmail and Maps, and the user perception of this, and search before them,
the user perception is this is adding immense value to our lives, right?
I mean, just to be able to navigate in a city based on, you know, accurate mapping data
and to understand, you know, what streets to avoid because the traffic is so bad.
This is what technology should be doing for us.
And, you know, Gmail, I was never a fan of the idea of Gmail until I started getting
spammed.
I don't know who put me on the devil's
list, but I woke up one day and I was literally getting 99 to 1 spam to real email, and no spam
detector could deal with it. And I ran my email through Google servers and all the spam magically
disappeared forever. So I was immensely grateful for this.
And there are many other instances of this
where if you're a user of Facebook,
which I'm really not,
I can imagine you like the fact
that Facebook is serving you stuff
that you find interesting.
But the general principle here
is that everything that these platforms do
that is good for a user or seems good for a user is really doubly good for advertisers.
Otherwise, they wouldn't do it.
That is the bottom line and what's so perverse about the incentives built into the business model.
Yeah.
So the way I handicap it is this way.
If all they were doing was capturing the data that you put into the system in the form of the routes for your trips to the office and back or the
emails that you send or the photos or posts that you put on Facebook, everything I think would be
fine. The problem is there is also a third leg of the stool, which is the third-party market in our most personal data.
So this is our credit card transactions data, which is sold by Equifax, Experian, and TransUnion.
It is our location sold by our cellular carrier, but also captured through the APIs
that Google has with companies like Uber. It is wellness and health data captured from
apps and devices that are outside the protection of HIPAA, the Health Insurance Portability and Accountability Act. And it is also our browsing history, which can be freely acquired.
And, you know, to me, we've never asked the question, well, wait a minute, why is it legal for companies to do commerce in our most private data? We've actually never agreed to that, right?
There's nothing that I can find in a credit card transaction that gives those people the right to sell that data. They've simply asserted and
no one has said no. And we live in this really deregulated economic environment where the
government, you know, in contrast to a normally highly functioning capitalist system where the
government would set the rules and then enforce them uniformly on everybody.
Well, it must be in their terms of service that
nobody ever reads, right? It's got to be in the fine print somewhere. Well, hang on. I don't have
a business relationship with Experian or Equifax. Right. My relationship is with Visa. Visa just
runs the technology and Equifax actually handles the transaction processing. I don't have a
relationship with them. Right. Okay. And so most of these guys have something buried in the terms of service, but I think on that one, I don't even know where it would show up.
Right?
And, you know, I can't imagine why it would be in Visa's interest to have that happen.
Also, they just have a monopoly.
You can't opt out of using credit cards, or at least you can't do that easily.
Exactly.
Yeah.
Exactly. Exactly. And so my point is, if you take those three problems, the emotional component,
the essential data capture, and, you know, the claim of ownership. So it's almost like a,
it's like they're acting like a government exercising a right of eminent domain, right?
They're claiming, okay, well, this data has touched our servers, therefore we own it forever and we can do whatever we want with it. And then
you've got these third parties who simply will trade your digital life to anybody who wants it.
So in that scenario, you wind up with this thing where the gatekeepers, in this case, Google,
Facebook, Amazon, and maybe to a lesser degree Microsoft,
can offer perfect information to marketers in exchange for access to all of their consumer
customers. And the consumers are in this extraordinary position of having their access
to information limited by those people. And my point here is not that Google or Facebook do not provide good value.
I think they provide tremendous value.
What I believe is true is that the value they're receiving in return is not only disproportionate,
it's that they have the ability to influence our choices in ways that we are not aware of,
and they're taking our agency
away. They do a lot of first-party gathering. That would be the Google apps, that would
be the Facebook apps. And then they acquire data wherever it's available. So they create
this digital, high-resolution digital avatar for each and every one of us. And they sell
access to that avatar. That's the perfect information. And so they're selling access
for cash, right? So they're getting paid. That's why they're so immensely profitable, right?
And my simple observation is if you want to understand the nature of the relationship
is ask yourself how much more value you get from Google Maps or Gmail today than you got,
say, two years ago. And then look at Google's average revenue
per user over those two years and see how much more value they got from you.
Right. And here's where the moral problem gets really dicey, is there is no opt-out. We all say,
hey, my data's out there. I don't care. And I'm an honest person. And I sit there and go,
that would be true if the only impact of the data was on you. But I don't use Gmail. And if I send an email to somebody in a Gmail account,
it is being scanned by Google and it is going into their behavioral prediction on lots of people,
including me. And I have no voice in that. And there are hundreds of examples just like that all over the economy. And so
if you sit there and think that phase one was, again, improving the quality of the ad
targeting, which is the thing you liked inside Facebook, and phase two is using recommendation
engines and filter bubbles and things like that to steer you to desired outcomes, you're
sitting there saying, oh, I maybe don't like that quite so much. Here's phase three. Anyone who is a subscriber to things like the
Financial Times has run into the Google Captcha system where they say, hey, we want to figure out
if you're a robot. So look at these photographs and touch all the photographs that have a traffic
light or all the ones that have a bus. And I think we've all seen that one degree or another.
And those things are getting harder and harder.
And we think, okay, well, they're just trying to figure out if we're human or not.
And, of course, that's not what you're doing at all.
What you're doing is training the artificial intelligence for Google self-driving cars.
That's why it's getting harder because they're getting to corner cases now.
They've figured out you're a human because of the movement of your mouse.
Now, I assume that they're keeping a log of all of that. And I assume that Amazon does the same thing and Facebook does
the same thing, which means that they may already be able to do this. But if not, very soon, when
my mouse movement becomes slower than it used to be and gets more wobbly,
that may be the first indication that I have a disease like Parkinson's.
Now, here's the problem, and this is a deep moral problem.
Whichever company captures that, whether it's Facebook, Google, Amazon,
is under no obligation to tell me.
In fact, they're not even under an obligation to keep it private.
They are free, and all the incentives point to them selling it to the highest bidder,
which would almost certainly be my insurance company, which would almost certainly raise my rates or cut off my coverage, and I still wouldn't know I'd shown a symptom.
And I would simply point out that that same technology could be used in an insurance product
that simply said, pay us $10 a year, and if you ever show a symptom of a neurological
problem, we're going to let you know.
Like, you'll be the only one.
We'll be covered by HIPAA, and it will protect your secret and get you to a hospital quickly.
And of course, all of this could be probabilistic, so it might not actually apply to you, but
it just has to apply to people like you in the aggregate to be worth trading in this
data and acting on it. Exactly. And so the issue that we have here is that in a traditional
advertising business, we would say that you're not the customer, you're the product. But in the
model of these guys, in what Zuboff calls the surveillance capitalism, you're not even the
product, you're the fuel. Each one of us is a reservoir of data
that they're pumping the data out of. And I simply make the observation that their influence,
if we simply look in democracy, their influence on democracy in every country in which they operate
is enormous, that their code, their
algorithms have so much more influence on our lives than the law does. And yet, they're not
elected. They're not accountable to anyone. And that, from a democracy point, is a huge problem.
You have all the issues on public health where, you know, why is it legal to even capture data,
much less exploit it relative to minors under 18? Yet Google has whole businesses in Chromebooks
and YouTube that do precisely that. And if you simply look at the imperatives created by the
business model they have, they sit there and their first rule of thumb is, well, we'll let any content
go on there and then the users will be responsible for telling us when there's a problem. So if,
you know, a madman kills 50 people in New Zealand, the users have to tell us first.
And then when that didn't work politically, they said, okay, well, we'll have moderators who will sit and watch stuff. But I would like to point
out that all of these things happen after the fact. And the reason they happen after the fact,
and the reason these guys are so insistent on doing it that way, is they want to capture
what Zuboff calls the behavioral surplus, the signals that come from the rawest parts of our psychology, right? They
want to strip the veneer of civility off us and get to that, you know, what are our real underlying
emotions? What are our real biases? And so they're going to fight us every step of the way. I mean,
obviously, you know, you had Renée DiResta, and Renée's completely brilliant. And one of the things that she taught me is
this notion that freedom of speech is not the same as freedom of reach, and that the
issue here isn't censoring people. The issue we're talking about is avoiding amplification of the most hostile voices in society.
And these platforms are designed to provide status as a service.
And in that model, you're basically rewarding people for being more outrageous, more angry,
more disinformed, if you will, more conspiracy-oriented, and then leaving it to the users to clean up the mess. And I got a problem with that. And I just don't think that there's any amount of value that you can get from Google Maps or Gmail or from Facebook or Instagram that offsets the damage that they're doing right now to society as a whole.
That individually, we may love these products, and I don't dispute that, but they're causing huge harm.
And my basic point here is I believe we can get rid of the harm without having to eliminate
what we like about the products.
They're going to be a lot less profitable, a lot less profitable.
But tough noogies.
I mean, you know, companies, corporations are not allowed to destroy civilization just
because it's more profitable than building civilization.
I want to zoom out for a second. I want to talk about these specific problems more,
and I do want to get to the government's response and to possible solutions, but
I just want to zoom out for a second and talk about this basic quandary, which I'm fairly
sympathetic with. The most charitable view of what these platforms are doing is
simply thinking of themselves as platforms. They insist, you know, we're platforms,
we're not media companies. And, you know, on its face, that seems like a legitimate distinction,
which could absolve them of responsibility for the content that appears on their platform.
If you'd like to continue listening to this conversation, you'll need to subscribe at
SamHarris.org. Once you do, you'll get access to all full-length episodes of the Making Sense
podcast, along with other subscriber-only content, including bonus episodes and AMAs,
and the conversations I've been having on the Waking Up app.
The Making Sense podcast is ad-free and relies entirely on listener support,
and you can subscribe now at SamHarris.org.