CyberWire Daily - Extra: Let's talk about Facebook's research. [Caveat]
Episode Date: October 11, 2021. Our guest is author and journalist Steven Levy. He's editor-at-large at Wired, and his most recent book is "Facebook: The Inside Story." Steven offers his insights on Facebook's internal research teams, Ben shares a newly decided court case on whether Big Tech companies can be sued under the Anti-Terrorism statute, and Dave's got the story of some warrantless surveillance being declared unconstitutional in Colorado. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Links to stories: Federal appeals court clears social media companies in Pulse shooting lawsuit; Colorado Supreme Court Rules Three Months of Warrantless Video Surveillance Violates the Constitution. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You would think that all steps, any step possible will be taken to change that situation, but
those steps weren't taken.
It was saying, well, you know, gee, if we change that, people would use Instagram less.
Hello, everyone, and welcome to Caveat, the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin,
from the University of Maryland Center for Health and Homeland Security. Hello, Ben.
Hello, Dave. On this week's show, Ben shares a newly decided court case on whether big tech
companies can be sued under the anti-terrorism statute. I've got the story of some warrantless
surveillance being declared unconstitutional in Colorado.
And later in the show, my conversation with author and journalist Steven Levy. He's editor-at-large at Wired, and his most recent book is Facebook: The Inside Story.
He joins us with insights on Facebook's internal research teams.
While this show covers legal topics and Ben is a lawyer,
the views expressed do not
constitute legal advice. For official legal advice on any of the topics we cover, please
contact your attorney. All right, Ben, let's jump right into our stories here. Why don't you kick
things off for us? So I have a really interesting case that was decided in the 11th Circuit Court
of Appeals. Once again, one of my favorite Twitter follows, Gabriel Malor, who always alerts me to interesting appeals court cases, flagged this one the other day.
So this case concerns the Pulse nightclub shooting in 2016. Omar Mateen, who was not an officially affiliated member of ISIS but was inspired by ISIS, committed really an atrocity at the Pulse nightclub in Orlando, an LGBTQ nightclub, killing 49 people. He was later shot by
law enforcement. The question in this case is whether big tech companies, and the three named
here are Google, Facebook, and Twitter, can be sued as aiding and abetting this level of terrorism.
So the allegation is that Mr. Mateen became radicalized in his views via social media.
He was reading propaganda tweets, Facebook posts.
He was able to communicate with known ISIS leaders via these platforms.
He was watching YouTube videos, which is part of the Google family.
So the question here is, do the survivors of the attack
and the relatives of those who perished have a cause of action?
There's something called the Anti-Terrorism Act,
a 1990s-era piece of legislation, a federal law,
that says people have a cause of action
against anybody who aids and abets
acts of international terrorism.
So these plaintiffs filed suit against these three companies alleging a violation of that
act, and they're looking for monetary damages.
The complicating issue is that the ATA, the Anti-Terrorism Act, only applies to international terrorism.
Right.
So the question here is, is this international terrorism?
And that's where things get interesting.
He's domestic, right?
He's a citizen of the United States.
He is.
The crime was committed in the United States.
He lived in Florida.
Yep.
Florida, last I checked, is still part of the United States.
Bugs Bunny hasn't yet succeeded in sawing it off.
Despite our wildest dreams.
Just kidding.
We love you, Florida.
We love your theme parks.
Yeah.
Your sunny beaches.
So, yes, this was a domestic terrorist attack.
It took place domestically.
It was a U.S. person who committed the attack, but he was radicalized by ISIS, which is an overseas group, and is recognized as a terrorist organization by our State Department.
Unfortunately for the plaintiffs here, the test for international terrorism includes the following.
It has to involve a violent act or an act dangerous to human life.
You can check that one off the list.
Right.
It has to appear to be intended to intimidate or coerce a civilian population.
I think you can make an argument that it's doing that here.
Right. The third part of the test is it either has to occur primarily outside the territorial jurisdiction of the United States or, quote, transcend national boundaries in terms of the means by which they are accomplished.
So either it has to take place abroad or it's an attack that takes place here that's supposed
to have a, you know, broader implications as part of some sort of global jihad.
What the court here is saying is the plaintiffs did not meet that third part of the test.
Well, you know, this is obviously a completely tragic scenario. But he isn't an affiliated member of an international terrorist organization. This wasn't part of a larger terrorist plot that threatens the broader national security of the United States. Just because he's claiming association with a known terrorist group doesn't mean it meets that definition of occurring either outside our jurisdiction or transcending national boundaries.
So because he was being inspired by international terrorists but was not being directed by them,
right?
Exactly.
Now, here's where it gets complicated.
So there have been other cases,
I think we've talked about a couple of them along these lines, where somebody's radicalized online.
In those cases, the terrorist group in question didn't claim any association with the terrorist
in question. Okay. That's not the case here. ISIS claimed responsibility for the attack. So,
Omar Mateen said a few days before the attack in one of the online forums, look for a terrorist
attack in the next few days in the United States. And ISIS, in the day or so after the attack, said,
we claim responsibility for it. And we were at war with ISIS at the time; it was the height of their strength, in Iraq especially. But even then, the court is saying, the companies that hosted conversations between this terrorist and terrorist organizations are not going to be held liable.
And I'm not sure that people are going to be pleased with the result here.
What about Section 230 here?
I mean, doesn't that in itself get them off the hook?
Well, so there's an interesting interplay between Section 230 and the Anti-Terrorism Act.
I don't think we've seen this fully tested in court.
So, yes, you can't be held liable for things that are posted on a platform.
But we haven't gotten to the merits of a case on how that interplays with the Anti-Terrorism Act where the specific action is for aiding and abetting members of an international terrorist organization.
So it's possible that's not something that's covered under Section 230.
But we're not—we didn't get to that part of the case because it was thrown out on this jurisdictional issue.
I see.
Is that the end of it?
Could it go on, or are we done with this?
We are done with this case. I mean, the plaintiffs could appeal and potentially try and get cert with the United States Supreme Court. I don't necessarily see
that happening. I haven't seen a sufficient split among circuits on this issue that would justify
the Supreme Court stepping in. And so I do think this is the end of the request for relief from these plaintiffs who've obviously suffered this enormous tragedy.
There is an obvious step that can be taken here.
Our Congress critters, friends in Washington, could step in and change the anti-terrorism statute.
And they could say it doesn't necessarily have to have this transcend national boundaries element to it in order to hold the tech companies potentially liable.
There seems to be a lot of hemming and hawing about these tech companies.
They're not exactly popular in D.C. among members of either political party.
You'd think this is a perfect opportunity to stick it to them, change the statute so that
they could be held liable, even if this isn't, you know, Al-Qaeda or ISIS itself committing this
act of terrorism. Interesting. Yeah. I mean, I guess it's a shot across the bow for a lot of
the big tech companies. Part of me wonders, are we in tricky territory here when someone
claims inspiration, you know. But as you said, you know, ISIS claimed the event, whether or not, you know, after the fact, right? If they said, oh, hey, look, that happened, that's good for us, yes, we'll take credit for that.
I mean, that's what is always sort of confusing with these cases.
We see this a lot where, of course, ISIS is going to want to take credit for attacks that happened in the United States because it makes them look more powerful.
Right.
Any person can be affected by propaganda online.
Most people who take in that propaganda don't commit acts of this magnitude, killing 49 people at a nightclub.
Yeah.
But, I mean, at the very least, you could say he has buy-in from an international terrorist organization.
I mean, I think where the court's coming from, and I certainly understand this argument, is that it's not like he was at an ISIS training camp for 10 years and this was part of a broader plot against the United States.
He wasn't working with bin Laden
or Khalid Sheikh Mohammed
or any of the major players
to plan a 9-11 style attack.
He was a guy who was radicalized and, in his own little micro way, was going to attack the United States by doing what he could, which was taking advantage of his access to firearms and killing lots of people in a club.
But it wasn't part of a broader terrorist transnational plot.
You know, it's up to Congress whether the anti-terrorism statute
should apply in those circumstances.
I just think this is an opportunity that's ripe for them
to expand the application of the Anti-Terrorism Act.
Interesting. Interesting. All right. Wow. Yeah, there's a lot of wheels spinning on that one,
isn't there?
It's a fascinating case. And I think we're going to see more cases of this. It takes so long to
make it through the court system. I mean, if you'll recall, that attack was now over five years ago. So, you know, we may not get a satisfying answer
on this question for a while, if we ever get one. Yeah. All right. Well, my story this week
comes via the EFF, the Electronic Frontier Foundation. They posted this on their own website, written by Jennifer Lynch. And they're
following up on some news coming out of Colorado where the Colorado Supreme Court has ruled that
three months of warrantless video surveillance violates the Constitution. And this centers
around a case called the People versus Tafoya. And basically what happened here,
this is sort of a follow-up on something we've covered here before.
Police had received a tip about some drug activity,
and they put a camera on a utility pole across from the home of a gentleman named Rafael Tafoya, and they were able to see his front yard, his driveway, and his backyard.
Now, his home had a six-foot-high privacy fence, so people walking by the house couldn't look in his yard,
couldn't look in his house, couldn't, you know, so he was—
Unless an NBA center came by, then maybe they could, yeah.
Right, right, exactly.
Right.
But his intentions were clear, right, to have a certain degree of privacy in his home.
But having this camera up on a utility pole meant that police could see in and they had remote control over this camera.
They could pan, tilt and zoom and they could store their footage indefinitely.
So Tafoya was arrested for drug trafficking, and at trial, his counsel moved to suppress all the evidence resulting from the warrantless surveillance, saying that it violated
the Fourth Amendment. The trial court denied the motion, and he was convicted of drug trafficking. The Court of Appeals reversed; they agreed with Tafoya that the surveillance was
unconstitutional. So just recently, Colorado's Supreme Court upheld the Court of Appeals opinion
and they found that continuous long-term video surveillance violated his reasonable expectation of privacy.
And there's a quote from the court.
They said, put simply, the duration, continuity, and nature of surveillance matter when considering all the facts and circumstances in a particular case.
The court held that 24-7 surveillance for more than three months represented a level of intrusiveness that a reasonable person would not have anticipated.
So, okay, that's interesting. But I think additionally, what's interesting is that now
we have some courts from around the country weighing in on this. This article points out
that Massachusetts' Supreme Judicial Court has agreed with this ruling; they've had a similar ruling of their own. But some other courts have ruled the opposite. The Seventh Circuit held that a
pole camera was fine for 18 months, that it didn't violate the Fourth Amendment. The First Circuit
overturned a district court's decision that eight months of pole camera surveillance
violated the Fourth Amendment.
So I'm interested in your take on this, Ben. Now that we're seeing disagreements from the circuits,
where do we go next with this? Does this go to the Supreme Court?
It very well might. And frankly, I think it's the Supreme Court's responsibility to resolve this issue. And I will explain why. Okay.
So in order for there to be a search under the Fourth Amendment, a person has to display
a subjective expectation of privacy, and that expectation has to be one that society is
willing to recognize as reasonable.
I think one of the determining factors in this case is the fact that Tafoya
strongly exhibited that subjective expectation of privacy by building a six-foot fence to protect his private property.
The implication of that is perhaps if he didn't build a six-foot fence, if he didn't display that subjective expectation of privacy,
maybe this would not have been a Fourth Amendment search and thus would not have been an unreasonable search and seizure.
So the Electronic Frontier Foundation is concerned that this case was decided more on that particular fact, that he had set up this six-foot fence. And I think they want a broader opinion
by the Supreme Court or perhaps other federal courts to have some sort of uniform
rule as it relates to utility pole cameras. If not utility pole cameras themselves, this type of
invasive long-term surveillance. You read what I think is the money quote from this case, which is
the duration, continuity, and nature of surveillance matter when considering all the facts and circumstances in a particular case.
The problem with the Supreme Court cases that are relevant on this issue, and they mention U.S. v. Jones and Carpenter v. United States, is that those terms, duration, continuity, and nature of surveillance, aren't specifically defined. They're not precisely delineated.
We know bits and pieces of what counts as a search in terms of duration and what doesn't.
So in Carpenter, it was seven days of historical cell site location information, for example.
But they didn't come down with a hard and fast rule on duration saying seven days not okay, eight days okay.
It was the same way in United States v. Jones.
So we don't exactly know where the dividing line is, which means that state courts, as in this case, and other federal courts have to take the general guidance from Jones and Carpenter and try to apply it to the specific circumstances.
I think it was properly applied here.
I mean, three months of monitoring somebody's house certainly seems like a long duration.
It's pretty invasive surveillance when you can zoom in, zoom out, pan, and store the video indefinitely.
I certainly think all of those qualify,
but we've had, as you say,
courts across the country disagree on this issue.
And I think they're disagreeing
because there isn't proper specific guidance
from the Supreme Court
properly delineating what counts as too long
for the purposes of a Fourth Amendment search,
what counts as too intrusive.
I don't know if we're ever going to get a satisfying answer to that question.
Yeah, that was my next question.
Would the Supreme Court be interested in making that a bright line?
I mean, there's nothing in the Constitution that would allow them to draw a line at any particular duration.
But they do do that stuff all the time in all other types of cases.
I mean, there's nothing in the Constitution about abortion or pregnancy.
But the entire Roe v. Wade decision was divided by trimesters of a pregnancy.
So if they want to make things up in terms of a relevant duration,
they've certainly done that in the past in a variety of circumstances.
It's not the best way to do it.
You know, the potential solution is looking at something besides duration,
something that's a little bit more tangible,
in order to come up with a dividing line for what counts as a search and what doesn't.
And I think the invasiveness of the technology itself
might end up being that determining factor.
So the Supreme Court might have to declare that something like utility pole surveillance is the type of technology that, prior to its existence, would have required vast law enforcement resources.
You would have had to have a dude climbing a tree.
Right.
Boy, that lineman's been working on that electrical connection there for a long time now.
Very long time, yeah.
He must have significant upper body strength to be up there.
So that's what would have been required in the past.
And in order to maintain that equilibrium, it might be wise for the Supreme Court to jump in and say, all right, utility pole cameras, because of the invasiveness of that technology, because you can keep them up there with very little effort, and I presume it's not much of a cost to law enforcement to keep one camera on a utility pole, that's going to require a warrant. I think this case, because we
have a split among state courts and federal courts,
is very ripe for Supreme Court consideration. I look forward to covering it on the show when
it finally gets there. I might have more gray hair than I do now at that point, but I do anticipate
that that's a decision they're going to have to make. All right. Yeah. Interesting for sure.
All right, well, we'll have a link to that.
Again, this is coverage from the EFF,
so obviously they're coming at this from their own point of view.
Sure.
But I think overall their coverage of it is pretty fair and covers the facts, so I think it's worth sharing.
We'll have a link to that in the show notes.
We would love to hear from you.
If you have a story that you would like us to cover
or a question for me or for Ben,
you can write us.
It is caveat at thecyberwire.com. All right, Ben, we are in for a treat this week.
I recently had the pleasure of speaking with author and journalist Steven Levy.
I will admit to being a bit of a fanboy when it comes to Mr. Levy.
We're all fanboys of something, Dave. No shame.
No, I have enjoyed his writing for a long time. In particular, his book Hackers: Heroes of the Computer Revolution was one I absolutely devoured when it came
out. And if you're interested in the history of hackers and computers and the folks who were
part of that first wave of folks using computers and figuring out how to exploit them, it's a must
read. His most recent book is titled Facebook: The Inside Story. And we reached out to him in response to this series of articles that the Wall Street Journal put out recently,
digging into some things going on at Facebook,
and my conversation with Steven Levy specifically centers on the research teams there.
Here's my conversation with Steven Levy.
Well, research started pretty early in Facebook's history. They were watching what people did almost from the get-go.
But in 2006, they hired a really bright person named Jeff Hammerbacher to make all the data very easy to search. And he created this infrastructure that allowed them to take the data and do all kinds of research. And they began to hire social scientists and statisticians to conduct the research in a more organized fashion.
Interestingly, research was part of the organization at Facebook that was devoted to growth.
So a lot of the research was devoted to finding ways that people would stay on Facebook longer, not only ways that people might use Facebook better.
A lot of companies in Silicon Valley use researchers to test how well you use the product,
what you might want to do in the product that you can't do, what you have difficulty doing.
But in terms of Facebook, they also figured out how the algorithm would work
to keep you using it more. And one of the big breakthroughs that happened in research
was when they discovered how things can go viral on the system. And they published a paper on it
called Gesundheit, because it's like a sneeze, certain things can go viral. And they thought
that was the greatest thing ever. And they never realized, the researchers who published that, that virality really is the key not only to fun things going viral but, as it turns out, also to things that create anger or divisiveness, or just misinformation that is harmful.
So the research is sort of a mixed bag there.
As this sort of information has come into Facebook, as they've realized that not everything they do in their day-to-day business is a net positive, how have their researchers responded?
You see, I got to spend a lot of time with some of the researchers, both talking on the record, and I would see them at conferences, and we would talk more confidentially.
And a lot of the people who are in Facebook research are people who could have had terrific jobs in academia. They're really accomplished people with PhDs. And what excites them is not so much publishing, because the
opportunities for publishing, while they exist at Facebook, are not abundant, because, you know,
for reasons I can get into in a second, they've been overly cautious about it, but they have a
chance to affect the product, the product that billions of people use. And if you're in academia, you usually don't
have that opportunity. You publish something and maybe your peers will see it, but you certainly
don't have a chance to change the world. In recent years, it's gotten more difficult to argue that it's a great thing to work for Facebook, because so much has come out about Facebook and the way it can be
harmful. And I think the mindset has changed among some of those people to think, well, maybe we can
mitigate this. Maybe we could do research that unearths things that will make Facebook less toxic.
Yeah, that's a fascinating aspect to me because I can't help thinking sort of that old phrase when good people work for bad companies. I'll admit I'm not particularly a fan of Facebook myself. How much of that is going on within the company? I mean, it sounds like these folks realize that there are problems and issues, but I guess what you're saying is they feel as though maybe the best way to fix it is from the inside?
Yeah.
Well, it's almost impossible to fix it from the outside because Facebook does what it wants.
So the regulators have tried, and there's been a lot of pressure on Facebook to change.
And that really doesn't move Mark Zuckerberg, who's the person that makes all the decisions at Facebook.
Ultimately, big decisions come down to Mark Zuckerberg.
So it's an interesting thing.
I mean, there are people who work on the way a product works. There's a lot of people that work on research to help Facebook get more revenue, what works to make ads more attractive to people.
And then there's research that goes on about misinformation and security and what goes on there. And the Wall Street Journal
was really fascinating, that report they did, because they had the series of presentations
that the researchers gave to show Facebook where it was failing and implying, of course, that Facebook
should improve on those areas. And in the cases that you saw in the journal, these presentations
went up to the very top. And ultimately, Mark Zuckerberg and his people around him decided not
to take decisive steps to mitigate the problems that
the researchers were finding. Things like how Instagram was creating mental health problems
in teenage girls. Is this ultimately just about growth and money and profits? I mean,
why do you suppose they're so hesitant to make meaningful changes here?
Well, growth is the North Star at Facebook.
In my book, I devoted a lot of time to tell, for the first time, the story of Facebook's growth circle, they called it, which used all kinds of means, some of them pretty dicey, to get and retain users. And that is the key to Facebook,
really. And money is important because that enables Facebook to spend money to grow more
and retain more users. It is connecting the whole world, which is important to Facebook, and doing that in light of competition now from places like TikTok, which draws people away from Instagram and Facebook. Probably those TikTok users aren't using the main Facebook app anyway, and there's only a certain amount of time people spend in a day, so that is really, really important. And as it turned out,
when push came to shove in certain ways, the way Zuckerberg chose to look at it was to say,
wait a minute, a fifth of our users, the teenage girls using Instagram, it makes them feel bad,
aggravates their mental health problems. That means like four-fifths are doing great, right?
So let's go with that.
But obviously a fifth of the teenage girls who use Instagram represent millions of people, quite literally.
So there's something really wrong if your researchers come to you and if you look at these slides, it's almost like they're begging the leadership of Facebook to do something about it.
You're saying our product is making millions of teenage girls feel bad.
And some of them with mental health problems are seeing these problems aggravated by it.
That's a serious problem for a company to make the lives of millions of teenage
girls miserable, or worse even. You would think that all steps, any step possible, would be taken to change that situation. But in this case, according to the Journal's reporting, those steps weren't taken. It was saying, well, you know, gee, if we change that, people would use Instagram less.
How do you reconcile this? I mean, obviously, in the work that you've done for your book, Facebook: The Inside Story, you spent probably more time than any other journalist with the top-level people at the company.
What is your take on how they think about these sorts of things?
I mean, do they have blinders on?
Well, I think it's not so much blinders as what they prioritize.
And there is a belief, and I think they truly believe it,
that overall, Facebook is good for society and good for people.
And it is an unprecedented situation that the company is in.
And no one has ever built a social network to get a really significant chunk of the world's population on it at the same time and allow anyone on that network
to post things that any other person in the network can see, and sometimes many, many people in the network can see. That's something that's happened for the first time, and it opens up problems that no one's had to deal with before. But when you see that unique situation develop in a way that's causing misery and danger to many, many people, it's not enough to say, well, on balance,
we're doing good. And I think they don't shift from that. They treat it more like an inconvenience, something you could weigh off and think is okay, but it's more like a plane crash than an inconvenience when you have people
feeling bad, right? We tolerate some things that we don't like, like crowded subways, right? People
will get into a crowded subway. You don't have to stop crowded subways. At least you didn't before COVID. But plane crashes, or subways where, you know, once a day you have a subway crash and people burn up, we wouldn't tolerate that.
Even if you would say, hey, we ran like, you know, 10,000 subways today, only one crashed
and burned everyone. You know, Facebook is well known
to be insular with their research, not wanting to share their results and not wanting to allow
outsiders to have access to their data. Were they more open in the past?
Yeah, there was one moment that happened at Facebook. They did a study in 2012, it's called the Emotion Study, where they were testing what happened if you put some, you know, kind of negative posts in front of people. And, you know, people charged that they were using people as guinea pigs without their knowledge. And it showed that maybe some of them were, you know, slightly less happy.
It wasn't like they went into a chronic tailspin depression,
but they felt a little worse about themselves.
And that got a lot of negative publicity.
And from then on, Facebook was much more cautious
about what they published.
Facebook is also being super
cautious about sharing data with researchers that want to come up with answers to the questions
of whether Facebook is causing mayhem in society, whether the misinformation has an effect on
politics or on voters.
And this is stuff that's important for society to know.
But Facebook, though it sometimes promises access, has in recent years been finding reasons to pull back that kind of cooperation. And I think that's partly because of the privacy problems they cite about sharing information, but there are ways to address that. It really looks like the PR aspect is something that motivates Facebook.
Now, when your own researchers come up with stuff, you don't have to publish it. You don't
have to share it. Though, in this case, we got to see it because there was a leak.
Yeah, which I think is interesting in itself.
I'm speculating, of course, but could indicate that there is perhaps some unrest within the organization.
Well, one researcher actually did post, after the Wall Street Journal series, a Twitter thread showing that he was dissatisfied. I think he's still working there.
I haven't heard that this person has been fired, but my suspicion is that he voiced a dissatisfaction that was somewhat broader than it might have been a couple of years ago.
Where do you suppose things have to go for us to see meaningful change here? Is this something where we could see – if Facebook doesn't make effective change themselves, perhaps we'll see some regulation?
I think it's more likely the more we see leaks like this coming out, which, let's say, is in the category of shocking but not surprising.
People don't really expect Facebook to be dealing honestly with them anymore.
Certainly the legislators that are trying to get information out of them,
the regulators don't think that.
There's a whole class of skeptics and critics of Facebook who wouldn't be surprised
by this. The independent board that Facebook set up, whose job it was basically to rule on
decisions that Facebook made that people are challenging, overstepped their charter
intentionally and said, wait a minute, we want to get into this. We want to look into this.
So they're going rogue in a way, which is kind of interesting. I think ultimately,
this pressure is going to lead Facebook to make some changes, maybe not willingly.
All right, Ben, what do you think?
So part of me, I want to continue to have a Facebook profile. And in case any of the powers that be are listening, I want to be careful in what I say.
Okay.
Zuckerberg is, to me, just off the rails in terms of his priorities. And what's happened with this research group, based on this interview, is just kind of tragic to me.
I mean, they've compiled evidence of how things go viral, how the things that go viral can be damaging. And Zuckerberg
seems primarily concerned still with the bottom line and making excuses on behalf of Facebook.
And I just don't see that changing anytime soon. It really reminded me of, and I think you've made
this comparison before, what we went through with the tobacco companies over the last, you know, maybe 30, 40 years.
Right.
Where they kind of tacitly acknowledge that their product is dangerous and could potentially be harmful.
But there are ways of obfuscating that and, you know, saying it's not really the fault of the tobacco, it's the smoke.
It's those types of things where they're evading responsibility.
And with all due respect to Mr. Zuckerberg, and he's a man of remarkable accomplishment,
please don't delete my Facebook profile, I just really see a similarity there.
So it was an eye-opening interview. Makes me really want to read the book. And I'm glad you did it. Yeah. I think what really sort of turned a light
bulb on for me was when Steven said that his take is that the folks at the top levels of Facebook
really do believe that they're making the world a better place. Yes, I think they do legitimately believe that. I don't think they're entirely wrong. I mean,
they are fostering some things that are very productive, keeping us interconnected,
allowing us to, you know, stay close with friends and family. I think that sort of 2004 to 2007 version of Facebook was very admirable. But then
Frankenstein got out of the lab. Yeah. And we've gone to a place where it's much more harmful.
Yeah. It does strike me, for me personally, I see it as being sort of a moral failure that,
you know, how much money do you need? How much growth do you need? You have
the number one social network in the world, you know, or certainly, you know, depending on how
you measure it. And if you find that it is doing harm, you got to fix that. You need to take that
seriously. If people are being harmed, if people, if teenagers are taking their own lives.
Right. This mental health issue among Instagram users.
How do you not immediately say, okay, stop. Everybody just stop. We need to take a serious
look at this and we're going to put growth on hold while we figure out how to first do no harm.
I will never understand the priorities that keep them from doing that.
Dave, as the great Bob Dylan once said, all the money you make can never buy back your soul.
Right, right.
Please don't delete my Facebook profile.
Right. All right. Well, again, our thanks to Steven Levy for taking the time for us. A real treat for me personally to get to chat with him. Again, his book is titled Facebook: The Inside Story. And for his latest writing, you can always head on over to Wired and see what he's up to there. Always time well spent checking out his writing.
For sure.
All right, that is our show.
We want to thank all of you for listening.
The Caveat Podcast is proudly produced in Maryland at the startup studios of DataTribe,
where they're co-building the next generation
of cybersecurity teams and technologies.
Our senior producer is Jennifer Eiben.
Our executive editor is Peter Kilpe.
I'm Dave Bittner.
And I'm Ben Yelin.
Thanks for listening.