Big Technology Podcast - Meta's Stock Decline, TikTok's Power, and Funding Transparency — With Frances Haugen
Episode Date: October 3, 2022
Frances Haugen is an ex-Facebook employee who shared thousands of internal documents with journalists in 2021 that revealed much about the company's decision-making and values. Haugen joins Big Technology Podcast for a live conversation at Unfinished Live in New York City that covers social media product, policy, funding, the stock market, and TikTok's power. Join us for a lively interview featuring debate, nuance, and a few good laughs.
Transcript
LinkedIn Presents.
All right, everyone, welcome to the big technology podcast, a show for cool-headed, nuanced
conversation of the tech world and beyond.
We are live here at Unfinished Live in New York City, and just to show you that there's
actually a live audience before us, I'm going to ask them to let us hear it. You guys got to be loud because these
microphones won't pick it up unless you give it your all. Let's hear you guys.
All right. We've got to keep that energy up through this whole thing. I don't want to see
anyone walking out. You don't always get a laugh track. So you know, you've got to lean in while
you have one. That was good. We'll put that at any good moment along the podcast.
Now everybody boo. No, I'm kidding. We're here with Frances Haugen. You might know her as
the person behind the Facebook files, the Facebook papers.
I don't quite know which name to use.
But they're the series of revelations that showed a lot of things about Facebook's business
that really were not great.
And those revelations happened a year ago,
and they've sparked a series of discussions
and much more informed conversation about the platform than I think we had previously.
As a journalist, I'm really grateful to have been able to comb through some of the material
that you put out there because it finally allowed us to put some data and some actual internal
research behind these things that people were trying to shoot from the hip and say were happening
and we didn't really know whether that was true or not.
So first of all, thank you for revealing all that stuff.
Second, let's just take a look at meta's stock price right now.
I took a look; September 13th is about when the beginning of the Facebook Files series of
articles came out in the Wall Street Journal.
Since that week, Meta stock, it's now Meta, Meta stock is down 59%.
Wow.
I only check it occasionally.
I have one Twitter follower who messages me every time it goes up.
And he's like, oh, no, the stock is recovering.
Then I look at it again.
I'm like, oh, it's actually down a lot.
Yeah, that's called unnecessary paranoia.
Yeah.
So I guess looking at the way the stock has performed, mission accomplished?
You know, sometimes people ask me questions like, you know, do you think you won? And I feel like that really misses a large part of the criticism I've given, which is I feel like I'm fairly characteristic for whistleblowers in that when you look at it statistically, overwhelmingly whistleblowers are conscientious people who actually care a lot about their companies. And I really believe that Facebook has problems, that the only way it can successfully solve
them is by working on them collaboratively.
And so my intention was never to have the stock price go down.
My hope was that Facebook could get the support it needs to be long-term successful.
And?
And unfortunately, the stock has gone down because I think the public has looked at how they responded.
Like, the biggest thing they did fresh out of the gate was they're like, actually,
we're a metaverse company now.
And in general, I think investors get quite scared when you say you have a giant problem,
and they're like, ooh, shiny.
You know, it's not really a great forward-looking indicator.
And this has also coincided.
It was the top of the tech market cap era.
It's crashed.
There are other factors involved.
We also have had the Fed raise rates, and large declines across the market.
But it's fallen substantially more than the other social media companies.
Right.
Do you attribute that to your revelations?
So in our SEC complaint, we talked about Facebook's valuation.
So I actually went and looked back at five years of Facebook's stock price.
So I'm a data scientist, and so I view the world through data.
And I looked at, I actually went into Google,
like, specifically Googling for every time Facebook's stock price
declined more than 5% versus the NASDAQ over the five-year period before I came out,
or before the first article came out.
And there's a very limited number of times that happened; there were about 27.
And overwhelmingly, the things that happened when the stock price declined
more than the NASDAQ declined were: Facebook had a decline in users, or Facebook had some kind of security incident where they had to come
out and say, we're going to have to spend more on safety, or it was implied that they were going
to have to spend more on safety. That was the case something like, on the order of, 65% of the time when you
had those deviations from the NASDAQ. Our SEC complaint said the price was overinflated,
because users used the product who would not have used it had they known
the truth. Advertisers advertised who would not have advertised had they known the
truth. And Facebook underspent on safety relative to what they would have spent if they had been
transparent. And since then, we've seen a decline in users. We've seen them having to increase
the amount they've spent on safety by two to three times, which is huge compared to the past
times they've increased safety. And so we'll never be able to know exactly what fraction of this
is because of the transparency. But I think there's a good chance that the information did cause
the stock to correct back to something closer to its natural valuation.
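For readers who want to see the shape of that analysis, here is a minimal sketch, assuming daily closing prices for the two tickers in CSV files; the filenames and column names are hypothetical, and only the 5%-versus-NASDAQ threshold comes from Haugen's description:

```python
import pandas as pd

# Hypothetical CSVs with columns "date" and "close" (one row per trading day).
fb = pd.read_csv("fb.csv", parse_dates=["date"], index_col="date")["close"]
nasdaq = pd.read_csv("nasdaq.csv", parse_dates=["date"], index_col="date")["close"]

# Daily percentage returns for each series.
fb_ret = fb.pct_change()
nasdaq_ret = nasdaq.pct_change()

# Days where Facebook fell more than 5 points relative to the NASDAQ.
relative = fb_ret - nasdaq_ret
event_days = relative[relative < -0.05].sort_values()

# Each date is a candidate to match against news: user-number declines,
# security incidents, announcements of higher safety spending, and so on.
print(event_days)
```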
A skeptic would say that Apple's anti-tracking moves probably had a larger impact.
What do you think about that?
I think unquestionably they had an impact as well.
I would never want to assess what is the relative contribution, because, as quantitative people
naturally are, I always try to be very precise in what I say, and I have no data to indicate
what the relative contribution is.
Okay.
It'll be interesting.
I mean, I know you can't speak about the SEC or those discussions.
Kara Swisher tried that at Code, didn't get anywhere.
So I won't try to press that.
I actually just don't know a lot.
Yeah.
Like, they've talked to me once.
Okay.
And apparently that's fairly standard.
Right.
They usually try to not talk to witnesses too many times because if you ever don't say exactly the same thing,
that can actually make it harder for them.
And they usually go dark for long periods of time while they're building their case.
And so I genuinely don't know much about what's going on.
Okay.
So as Facebook struggles, whether it's because its stock price has declined due to these revelations or whether Apple is kneecapping it or maybe Apple's having more success because of the revelations that you brought out, there's this other, there's a countercurrent to that.
And that's that competitors get stronger.
The one that really seems to be doing quite well right now
is TikTok. And a while ago, people would say, okay, well, the fact that Chinese companies could take the
place of Facebook is just a big tech talking point that they're trying to, you know, use to divert attention
from their own actions. But now it really is happening. And so what do you make of that?
You know, is there any credence to the idea that as a company like Facebook gets weaker, a company like
TikTok can emerge and actually leave us in a worse position than we were previously?
So I think TikTok's emergence is a great example.
So throughout, you know, I think basically all of capitalist history, we have seen time and time again that if consumers get to choose between products that make them happy and products that make them sad, as a general rule, people choose products that make them happy over products that make them sad.
And when I look at the things that make TikTok sticky, you know, I think there are genuine questions we need to ask, right?
So part of the reason why TikTok makes you happy is how TikTok emerged.
The only successful competitor to Facebook emerged from the only place Facebook wasn't allowed to play.
Like Facebook went into all the other corners of the world and sometimes even bribed the locals to say, let us become your internet.
They paid for people's data to get people to converge onto it.
The fact that it emerged from China meant also the only competitor that could emerge was one
that met Chinese goals.
And China created a social media platform
that focused on dancing, on comedy,
on lighthearted things
because those things are often not the same
as politics.
Right?
And at the same time,
they've created a product
that makes people feel good.
And so if you have a situation like that,
someone was eventually going to come along
with a product, or even BeReal.
BeReal makes people feel better
than Facebook does.
And so I think that's the real thing.
You looked at a lot of Facebook research.
Some of that research said that people feel worse if they scroll and don't participate.
Just passive consumption makes people feel worse.
If that was the internal Facebook research, how do you think it is that TikTok,
an app that literally has people scroll passively, is making people feel good?
But they actually don't.
Like, you have to choose to move on with TikTok.
TikTok, if you are truly passive, like you're not engaging at all,
if you're not choosing to do a next gesture, it just replays
the video, right? And I think it's also a thing of TikTok's algorithms, because, like,
the thing I keep trying to flag for people is TikTok is a very manipulative experience algorithmically,
right? They bias towards happiness. They bias towards humor. Like, they don't want you talking
about sad topics because political topics often are sad topics, right? And so, you know,
funny dances. You can be passive, not really engaging, not having a conversation,
and watching funny things, and you're going to laugh, and
laughter makes you feel better.
Is that what your TikTok feed is, mostly funny dances?
You know, I get a lot of home restoration.
I, like, TikTok has decided I really like cake decorating.
I don't know why.
I can't have gluten.
Who knows?
They really think I really like cake decorating,
and they really think I like renovating homes.
So that's like a large fraction of my feed.
I don't know what this says about me,
but mine is all, like, couples finding their partner cheating.
Really?
Yeah.
They're great videos.
You know, you could, you know, thumbs-down some stuff.
I like them.
Oh, well, you know.
So, anyway.
Go with God, my friend.
Indeed, I will.
So let's get back to that question, though, about whether we're worse off with TikTok
dominant.
I mean, yeah, okay, it's making us happier.
The other side of it is...
Feel happier, yeah, yeah.
I don't know if it makes us, like, existentially happier.
That's true, yeah, we could.
I think with it sucking our time in, we'd probably be happier gardening, but, you know.
That's true.
So, but speaking about that, okay, let's say feel happier, but it's doing keystroke logging,
or it's at least alleged to do that.
In its in-app browser.
Yeah.
Which is a pretty big deal.
And also, we don't fully know the extent of the oversight that the Chinese government may or may not have over that app.
So if American, I know we're going to get into global stuff, well, let's talk about U.S.
If American social media users, you know what, let's just go global.
If global social media users are in an app that has these major liabilities, isn't that worse than being
inside Facebook?
Well, I think it's this question of, you know, like, what we, I think we should have
Facebook of 2008. Like, if we're going to do any kind of magical hypotheticals, you know,
social media that is about our family, our friends, our communities, connecting with people
is better than anything where you have hypervirality. The things that are good about TikTok
are TikTok acknowledges they have power. And Facebook really doesn't want to acknowledge they
have power, right? They like to be like, oh, no, we can't do anything. We have no ability to do
anything. The secondary thing is, you know, if we had to choose between, you know, an app that
has had two genocides, so, you know, we have 200,000 people dead in Myanmar. This is not my opinion.
The UN has a multi-hundred-page report detailing what Facebook did that was negligent.
If we see the same thing happen again in Ethiopia with lots of warning, if we see this happen
over and again, I think there's this question of at some point we have to decide when do we
have to hold Facebook responsible for being an app that seems to have a pattern of causing
genocides. Because I have not seen any TikTok genocides yet.
Well, not yet.
There have been genocides before Facebook.
That's true.
So in the age of Facebook, do we now put the blame for all genocides on Facebook, or?
No, I think there's ways of acting responsibly in a space.
So, for example, you know, in 2002, excuse me, 2020, in April of 2020, in preparation for the U.S. election, they held an internal working group. It was like 60 people. I was one of the people on it who were tasked with coming up with, quote, soft interventions to prepare for the election. Right. So a soft intervention is something where, instead of taking out an account or taking out a post, you
pop up a thing saying, hey, you know, you didn't click on a link. You didn't actually click
through on this link. Do you want to click on it first? Because, like, putting a little moment
of intentionality before someone reshares something actually substantially decreases
misinformation. You haven't censored anyone. It's like a soft intervention. And one of the
things that was flagged was there was a single parameter in meaningful social interactions.
So back in 2018, Facebook switched from just seeing how long they could keep you on Facebook
to seeing could they drive you to do interactions.
They called it meaningful social interactions. So stuff that got more interactions was considered
better content. Even though six months after they did this change, they polled people and said,
hey, is your home feed more meaningful? And people said, no, it's not. It's actually less
meaningful. There was a single parameter that we had noticed for, at that point,
at least 18 months when this report came out. That single parameter was whether you don't just
give content credit for generating interactions. Like, let's say I'm trying to decide whether
I show this to you. My little virtual model of you says, how likely are you to click? How likely
are you to reshare? How likely are you to, you know, put a comment? But, you know, if you were
to reshare that, all your friends might click on it, all your friends might reshare it, all your friends
might comment. We should give it credit for that. We should give a downstream credit. The only
problem was they knew that people didn't like reshares, that they'd done these beautiful studies,
like one of my favorite graphs from Facebook shows that you like original content from your
20 closest friends. And Facebook has a ranking of all your friends. They know which ones
are your favorite friends, at least on Facebook. You like original content from your 20
closest friends, but you only like reshares from your five closest friends, because people
don't like reshares. So content that can actually get someone to engage after a reshare
has to get over the bias of not liking reshares. And so it turned out to be a scoring parameter
that overwhelmingly rewarded only really extreme content. And the study came in and said,
we've known this for a year and a half. We know that if you take this factor out, you get way
less hate speech, way less violence, like literally violent content, bloody content,
violence-inciting content. Maybe in places that are considered at-risk countries, we should
not include the scoring parameter. It doesn't decrease how long you stay on the site. It doesn't
decrease sessions. It doesn't decrease ad dollars. But it will hit our core metric of
meaningful social interactions, which people's bonuses are tied to, by the way.
And literally in the notes from the meeting it says, with regard to going broad with taking out the scoring parameter that we know increases violent content, hate speech, all these things: if it hits core MSI, we're not going to do it. Should they be responsible for that?
Yeah, I think they should. Yeah.
Okay, then there's the question of how much that they contribute to the genocide. But I think even contributing a little bit to a genocide is something you want to stay away from.
Or to do it that shamelessly? Like, do you really want a company where you get told, you're not going to lose any money,
not going to lose any sessions, all the metrics that are seen externally, they're not going to go down,
and you know there's going to be less hate speech, less violence, less violence inciting content.
It's a separate category.
You know all those things.
But you might have to, like, figure out a different way to compensate people.
And your CEO doesn't even entertain, can we figure out a way to do it?
Yeah.
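To make the disputed parameter concrete, here is a minimal sketch of what "downstream credit" in an engagement score can look like; the class, weights, and numbers are hypothetical illustrations, not Facebook's actual MSI formula:

```python
from dataclasses import dataclass

@dataclass
class Predictions:
    """Hypothetical model outputs for one viewer-post pair."""
    p_click: float
    p_comment: float
    p_reshare: float
    downstream_engagement: float  # expected engagement from the viewer's audience after a reshare

def msi_score(p: Predictions, downstream_weight: float) -> float:
    # Direct credit: the viewer's own predicted interactions (weights made up).
    direct = p.p_click + 2.0 * p.p_comment + 3.0 * p.p_reshare
    # Downstream credit: engagement the reshare is predicted to trigger later.
    downstream = p.p_reshare * p.downstream_engagement
    return direct + downstream_weight * downstream

# Because most people dislike reshares, content that still earns downstream
# credit skews extreme; zeroing the weight is the kind of "soft intervention"
# the working group proposed for at-risk countries.
post = Predictions(p_click=0.10, p_comment=0.02, p_reshare=0.05, downstream_engagement=4.0)
print(msi_score(post, downstream_weight=1.0))  # parameter on
print(msi_score(post, downstream_weight=0.0))  # parameter removed
```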
And this is, by the way, something I've been reporting on and speaking about for a while, which is the harmfulness of the share.
The share, I think, and I'd love to hear your perspective on this, because we're talking about
what I think is the deep reshare.
No, this is just the single reshare.
Okay.
So, yeah, well, we don't even, I feel strongly about deep reshares.
Okay.
Yeah, we can go, we can go shallow reshares and deep reshares.
But I always felt that the ability to share something on impulse, news especially, never leads
to good information distribution, because it takes people from being thoughtful, hey, I read this story,
I think you should, you know, check it out, to seeing in your feed that your political opponent
is, you know, a space alien Nazi child killer, eater, and being like, oh, yeah.
They're also a lizard person.
Lizard person.
Yeah.
And you hit share.
Yeah.
And, you know, if you have to put this under your own identity and copy and paste that link,
you can be much more reticent to do that
than to, you know, just hit the share button and have somebody else's, you know,
avatar show up in your friend's feed and be like, aha, I just did my part for the cause.
My parents are both life scientists.
So my mother's a cell biologist.
My father is a pathologist, and he studies viruses.
And my mother, you know, you can see the wisdom of my childhood.
My mother used to always say, be careful what you select for because you'll get it.
That's the story of cell cultures.
You have to be very careful about keeping things clean
because whatever you select for, that's what you're going to get.
You know, when we started talking about freedom of speech,
like when the Constitution was written,
if you wanted to distribute an idea, it cost money.
You know, you had to buy paper.
It took a lot of time.
It took a lot of effort.
There was a natural filter
that if you wanted to say something,
you had to invest in saying it.
I think asking people to have a moment of intentionality,
like a copy and a paste,
is not a huge amount of cost, but it adds some cost.
That way, you don't get unintentional sharing.
Yeah, and every social network seems to know this.
Clearly, Facebook knows this; the research that you shared showed it.
Twitter knew it inherently.
That's why before the 2020 election they added the speed bumps, the, you know,
do you want to quote tweet this tweet?
Do you want to click the link before you reshare?
Yet they're so addicted to the virality.
Why?
I think that's a secondary thing.
So I'm founding a nonprofit called Beyond the Screen,
because right now we are limited to what we see on our screens,
and we need to see beyond them if we want reform.
You know, I think this question of, like,
why did Twitter do it and Facebook didn't?
It's such an interesting question,
because it's like a really cheap way to get, like,
10 or 15% less misinformation.
I think the reason why Twitter did it and Facebook didn't
was Facebook's average user is substantially less literate
than Twitter's.
I know it might not seem like that, but like given the global distribution, you know, there's
huge numbers of Facebook users that are literally becoming literate to use Facebook.
It is the internet where they live.
You know, 80 or 90% of all the content in the majority of languages in the world is only on
Facebook.
And when you look at the outlier countries, there's some really good graphs in there that
show, you know, there are countries where 35% of everything that's in the newsfeed, every
impression, is a reshare. And so when such a large fraction of your content is just
reshares, any friction you add, any moment for pause or intentionality, substantially decreases
the content in the system, which decreases consumption. It's a money thing.
By the way, if you pull the reshare back or put in this speed bump that causes less resharing, you don't
need to engage in this messy content moderation and get accused of censorship.
Exactly. That's my whole point. That's been my point since my first Senate testimony,
contrary to the conspiracy theories. I mean, it's so obvious. Yeah. Anyway, you spend a lot of time
thinking about this. So it's obvious to you, but it's not necessarily, because we don't study
these things in school, it's not necessarily obvious. No, it's good to be talking about it in
public and inside the companies. Yeah. I mean, the person that built, and I'll get off my soapbox
on this in a moment, but I'm enjoying it so much. Okay. Well, I'm going to keep going then. The person
that built the retweet inside Twitter, I spoke with him and he said, look, it's like I
handed a machine gun to four-year-olds. Wow. That's a line. That's sort of what it feels like. Anyway,
that's my hope is eventually because of revelations like yours and because of, I don't know,
maybe conversations like this that have people inside the company listening, we start to
revisit this. But, you know, we may end up having product evolution before we even need to
get there. We're moving to these short-form videos. And by the way, I'm curious what you
make of this. We just had Brandon Silverman on the show, who was the founder of CrowdTangle,
which is this great software tool that let you see what was moving inside Facebook,
public links in particular. And he made a great point that Facebook actually is deprioritizing
news within its news feed. And now we don't see as many news links. I'm curious what you make of that.
Isn't that something to be applauded?
So part of what frustrates me so much about that is, so, like, the reason why
they're deprioritizing news is because they're scared, rightly.
They're getting all this criticism about things like, you know, ethnic violence.
And they're kind of, you know, taking a card from TikTok, where they're like,
you know, at TikTok, if you're not a funny dance, if you're not, like, humorous,
you're not going to get distributed.
And the thing that's so sad for me is that it really
illustrates, you know, you can either choose to design for human-scale, human-modeled communities, right? So I always say, like, is it a dinner party? Is it a cocktail party? Is it a church parish? Is it a conference? Is it a university? You know, we have models on how people can exchange ideas. They've lasted for thousands of years. Or you end up with AI censorship. And the problem is, if you must insist on content moderation, I think you should
have to be transparent about it. And that means things like you should have to put out
samples of the output of your scoring systems. So we can see what's getting labeled as any of
these categories and what the consequences of that is. Because when we take down, when we don't
let news get distributed, that's news by the definition of Facebook's AI. We don't know what
doesn't get through or what does get through. And when I talk to activists around the world that
work on issues like violence against women, gay rights, they say across the board, our accounts
all get taken down. Because these systems that are meant to keep us safe, because they're not done
in a very high quality way, counter speech looks like violating speech. And so I 100% am against
badly done content moderation. And I think we need to have some pretty large societal conversations
around, like, you know, if we must insist on doing content moderation, we need to be doing it
in a different way than we're doing it today.
Yeah.
And I feel like as many system changes as we can make before we get to that decision,
stay up or shut down, that's probably better.
So let's talk about another, I love how we're going into product.
I've always loved talking to you about product, so I'm glad we're doing it.
Another thing that we should talk about is algorithms, algorithms on the
newsfeed. And just a little interesting history, I was reading the documents that you made
available to reporters. And there was one internal study that said they removed the ranking algorithm
from the newsfeed, and it made it worse. And I wrote about this. And you said, I got it wrong.
I didn't say you got it wrong. Did I say you got it wrong? You said that she didn't see this
document or she didn't, like, blah blah blah. I don't think I said that, but I could be wrong. Anyway, we're
here to heal. Yeah, we're here to heal. Good. I'm all about truth and reconciliation, so I'm
down. I appreciate you sitting down with me. And I think that at the time what I heard from you
was, like, look, you're not looking at this
holistically. Yeah. Yeah. You didn't say it was wrong. You just said, there's some context
you might want to take a look at. And just to set the story straight, it was a study about
what happens when you take the ranking algorithm off the feed. The ranking algorithm means that Facebook's
algorithm is making a decision about what you see first, as opposed to the feed showing you
what's happening in reverse chronological order and just showing you the most recent first.
Let's actually make that a little more concrete for people who, there's a lot of big words in there.
So when we open our email, the thing that we see first is our most recent email, unless we have
some kind of fancy AI email inbox. We could have a social network where we got content from, you know,
things we chose to get, and it showed up, you know, in reverse time ordering, just like our
email.
And, or they have a time ordering for email.
But same kind of thing.
Like, there's different ways we can be doing this.
And one of the things that's true, there's a document in the disclosures that says, hey,
we tried getting rid of the ranking, like the algorithmic ranking, you know, where we try
to decide what's going to be better for you.
And guess what?
We see more bad things.
We see more violating content.
We see more hate speech.
And I think one of the most important things for people to understand.
is that back in 2008, when we had chronological ranking, you know, you saw things in the order
they came in. It was kind of like your inbox or a reverse of your inbox. So you knew when
you ran out of new things to see. We had no big groups, right? There were no million-person
groups on Facebook. Facebook had not been pushing you for years into groups that, you know,
you may not have been that interested in, or those groups were intentionally selected because
they were groups that got a lot of interactions.
For context, like in 2016, they found out that 65% of people who joined neo-Nazi groups in Germany,
where they are illegal, like you can't have a neo-Nazi Facebook group in Germany,
join them because Facebook suggested that group to people.
Over time, though, Facebook needed you to stay on the platform for longer and longer,
and your friends and family let them down.
They didn't make enough content for you to get longer and longer sessions.
So you could click on more and more ads.
We cannot have million-person groups
that are basically just like fire hoses
without having algorithms
because that group makes 1,000 posts a day
and it floods your feed.
And what the article found was
if you get rid of the algorithmic ranking,
their feeds get flooded by these high-frequency groups.
You no longer see your friends and family.
And so you can't really think about
the idea of taking the current product
that has all of these baked-in assumptions
and just removing the algorithm.
You have to really design for safety holistically.
And so your point is that if we were to go to the most recent first,
as opposed to the algorithmically selected stories,
that the company would have to redesign its product,
and you wouldn't run that experiment on the current product,
you would rebuild the product, which I found interesting.
And I think that's good context.
You'd have groups that were like Discord servers.
So you can have a Discord server that has 100,000 people in it,
and when the conversation gets too loud, you break off into little subgroups.
You can imagine a world where maybe your group's content never got inserted into your feed.
You had to navigate to your group.
So you can still have your cancer support group.
You just wouldn't have it insert all the content into your feed.
And you're not worried that the feed would be overrun by spammers
who just post every five minutes and try to get stuff in front of people?
So my favorite ranking, no one else calls it this, but I've got to brand it because I made it up, I call it chronological plus plus. So in computer science, the way you add one, if you don't want to write, like, n equals n plus one, is you use n plus plus. Imagine if you had a little bit of demotion by frequency. So it's largely chronological. But if you send 12,
20 posts in a day, you know, each progressive thing demotes you down a little bit. So it's loosely
chronological. And I think the better way to describe, like, what I would want is
describable ranking, right? That right now we have unintelligible ranking. We have a black box
neural network, and we don't know what the model really is optimizing for. You could imagine
having a publishable algorithm where you said it's basically time-based. We try to stay as close to
time-based as possible. But if you post five times a day, if you post three times a day, you know,
we might move you down a little bit.
You'd be less likely to get seen.
And that's the description.
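As a rough illustration of what such a describable, publishable ranking could look like, here is a minimal sketch of frequency demotion on top of a time-ordered feed; the demotion factor and scoring details are made-up numbers, not a formula Haugen proposed:

```python
from datetime import datetime
from typing import NamedTuple

class Post(NamedTuple):
    author: str
    created: datetime

def chronological_plus_plus(posts: list[Post], demotion: float = 0.9) -> list[Post]:
    """Mostly reverse-chronological; each successive post from the same author
    is demoted a little further, so high-frequency posters drift down."""
    newest = max(p.created for p in posts)
    per_author_count: dict[str, int] = {}
    scored = []
    for post in sorted(posts, key=lambda p: p.created, reverse=True):
        n = per_author_count.get(post.author, 0)
        age_hours = (newest - post.created).total_seconds() / 3600
        recency = 1.0 / (1.0 + age_hours)               # newer posts score higher
        scored.append((recency * demotion ** n, post))  # demote repeat posters
        per_author_count[post.author] = n + 1
    return [p for _, p in sorted(scored, key=lambda s: s[0], reverse=True)]
```

The exact numbers matter less than the fact that the whole rule fits in a dozen lines you could publish and argue about.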
But there are people who are saying we should remove protections from these platforms
that use ranking algorithms to show you content.
And even using, what did you call it, chronological plus plus?
I said chronological plus plus.
You are ranking in some way.
But I'm doing it in an intelligible way, a way that we can have a conversation about.
We can.
Because right now we never get to have a conversation on how it's prioritized.
Yeah, I understand.
And then, but because that's still going to be ranking, it would still fall under the rules where
these companies can be liable.
So, but my complaint on the current way we rank is right now we have these ranking
algorithms that are optimized for business objectives.
And you could imagine a world where you said it's going to be chronological, or it's
chronological unless you give us a defensible reason why a change improves safety, right?
You can't tell us you made more money on it,
like, that's why you did it.
You have to, like, show us a thing and show us, like, what was the safety objective, and you have to do it
publicly and show us consequences so we can have a conversation on it. Like, that's a very different
world than one where, you know, one of the problems that's outlined in the documents
is that safety teams would work for months and months and months to figure out a way to make
the product safer but not hurt ads. And because the model was unintelligible, people were just,
like, throwing new factors at the model and would often undo changes that had been
made, because you could reproduce the thing that had been pulled from the model, or the fix that
had been done, by munging together four or five other features, because people just didn't
understand how they interacted.
Yeah.
And I like this proposal.
So, but does that then mean companies should still not be held liable?
What do you think about this? I mean,
because this whole idea is go completely reverse cron, pure reverse cron, without any filtering
at all, well explained or not.
And otherwise you're liable for what you suggest.
Can I give another counter offer?
Yeah, yeah.
So let's imagine instead of, like, demoting people who post too frequently,
imagine if, when you posted more than three times a day, or maybe even twice a day,
your posts got collapsed down to a single line that said,
Frances Haugen posted two times already today, do you want to read the posts?
Okay, I like that.
Because then it's like a little shame.
We're still doing chronological, but you're like, hey, you kind of talk a lot.
Thought about listening a little, you know?
I was trying really hard to have you say this.
Making platforms liable unless they go pure reverse cron is an unworkable idea.
But we got pretty close.
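The collapse counter-offer from a moment ago can be layered onto the same time-ordered feed; a minimal sketch, reusing the Post type from the chronological plus plus sketch above, with the three-posts-a-day threshold from the conversation and everything else hypothetical:

```python
from collections import Counter

def collapse_frequent_posters(todays_feed: list[Post], limit: int = 3) -> list:
    """Keep the feed chronological, but fold anyone who posted more than
    `limit` times today into a single one-line summary entry."""
    counts = Counter(post.author for post in todays_feed)
    out: list = []
    folded: set[str] = set()
    for post in todays_feed:
        if counts[post.author] <= limit:
            out.append(post)
        elif post.author not in folded:
            folded.add(post.author)
            out.append(f"{post.author} posted {counts[post.author]} times already today. Read the posts?")
    return out
```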
I think the way we should talk about liability is the way we talk about it with cars.
Okay.
Right?
Where there's this question around, you know, people should have to demonstrate that they are actively pushing towards safety.
And, you know, if you have 100 chances at bat, and when we look at, you know, like, one of the
things I've suggested is right now, every time they run an experiment,
there are these dashboards.
And the dashboards have hundreds of metrics
that can be sliced in a ton of different ways.
Imagine if they had to publish the dashboards.
They don't have to tell us what the experiment is.
They don't have to give up the IP.
They don't have to tell us about their magic secret sauce.
But they have to show us every single time they run an experiment
and release the data so that we can analyze
whether or not there's a trend
where consistently they had chances to make us safer
and they chose not to do them.
And they had a whole bunch of chances
that made it more dangerous for us,
but it made them more money, and they shipped those.
Like, show us what experiments you run
and show us what you ship or don't ship.
And if we see a pattern of behavior,
we should hold you responsible for those consequences.
I think that's much better.
I'm all on Team Transparency.
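As a sketch of what that kind of disclosure could enable, assuming a minimal published record per experiment; the fields, conditions, and the idea of a single negligence score are hypothetical, not an actual regulatory schema:

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    """Hypothetical public record: no IP or secret sauce, just outcomes."""
    experiment_id: str
    safety_delta: float   # change in prevalence of harmful content (positive = worse)
    revenue_delta: float  # change in revenue-linked metrics (positive = more money)
    shipped: bool

def negligence_rate(records: list[ExperimentRecord]) -> float:
    """Fraction of experiments where the platform shipped a more dangerous but
    more profitable change, or shelved a change that would have made users safer."""
    bad = [r for r in records
           if (r.shipped and r.safety_delta > 0 and r.revenue_delta > 0)
           or (not r.shipped and r.safety_delta < 0)]
    return len(bad) / len(records) if records else 0.0
```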
And for context,
I appreciate you unpacking this.
I will link this podcast underneath that story.
Oh, thank you.
So whenever someone reads it, they'll get the broader discussion.
You can even do it at the top of the article
so they'll actually see it.
Consider it done.
All right.
Let's take a break.
We're here with Frances Haugen.
She's the founder of Beyond the Screen.
And the person behind all these big Facebook revelations that we're speaking about,
we'll be back right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech
news. Now, they have a daily podcast called The Hustle Daily Show, where their team of writers
break down the biggest business headlines in 15 minutes or less, and explain why you should
care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one
you're using right now. And we're back here for the second half of Big Technology
Podcast. We're here with Frances Haugen at Unfinished Live. By some miracle, or actually probably
the draw of what Frances has to say, our second-half audience is actually larger than our first-half
audience. Substantially larger. Much bigger. This has never happened for a podcast that I've done
before live. So, um, anyway, let's hear it from you guys. Thanks for showing up.
Guys, got to be loud. Thanks, everyone, for being here. All right, let's get into the second half.
One interesting thing you said in the first half
that caught me by surprise
was that TikTok acknowledges its power
and Facebook doesn't.
Really?
Yeah, I mean, like TikTok is designed to be censored, right?
Like, we've had lots of scandals around this, right?
So how many people remember, I don't know,
it was a couple years ago, there was, like, a makeup tutorial
where she's like, this is how you do a smoky eye
and like, blah, blah, blah.
Oh, and by the way, the Uyghurs are being killed.
Does anyone remember that?
I do.
Yeah, good.
So the reason why she did the smoky eye for, like, 15 seconds and then started, you know, spilling the tea on, you know, they're killing a million people, you know, or they have a million people in concentration camps, is that TikTok manually censors all their highly popular content.
And, you know, I don't agree with any system where they take down lots of content and they don't tell us why or how, or they don't show us the biases in what they take down or don't take down. I highly don't support that.
but they acknowledge the idea that social media has power,
that social media has social implications.
And I think the fact, like, they've publicly talked about the idea
that they watch for people who fall down rabbit holes
and they try to pull them away from those.
Do they actually or not? I don't know. They're not transparent either.
But they understand the idea that they have an impact in society.
And so are they rooting for our team? I don't know.
But acknowledging power, I think, is better than pretending, you know, the world is flat and, you know, everyone's the same and you're just a mirror and everything bad has always been bad. And, like, now you can see it and that's why you're mad.
Would you rather have a platform acknowledge power, but actually, like, work to do things that undermine our society or one that doesn't acknowledge its power, but...
Undermines our society.
...is trying? I don't know.
I would be highly hesitant to say that Facebook is trying, right?
Like, Facebook, if Facebook earnestly was trying, you know, when they got asked, please share the research with us that you have internally on whether or not kids are killing themselves because of your product, they would not have given back a three-page letter saying, what research, they would have given over their research.
They lied to the oversight board also.
Yeah, like, blatantly lied. Blatantly. So for context for people, when they asked the oversight board, hey, like, you know, was taking Trump down the right thing, can we have a conversation about it? The oversight board came back with, like, 15 or 16 questions. And they were on a variety of things. And Facebook wrote back with answers. And one of them was they asked, like, hey, this cross-check program, this idea that VIPs get a special pass. Like, you say you treat everyone the same. But then you also say, like,
the reason why Trump hasn't gotten touched is because, like, he's in cross-check.
Give us some more context on that.
And they said, oh, we've already explained what cross-check is.
And they linked to a blog post from 2018.
And that blog post said things like, this is not a two-tiered system.
This is just like getting a little extra double-check to make sure our policies are
accurately being enforced.
And in reality, because they weren't willing to spend a very small amount of budget,
they didn't actually put in that second check for most safety systems.
So for most safety systems, people got white-listed.
So if you were in cross-check, those safety systems no longer applied to you.
And even for the ones where it did apply, because they were willing to allocate so little budget to this VIP program,
it would take like three days before a human would look at your content.
And so people would get about 75% of all the distribution they would have gotten anyhow because of that underfunding.
Okay. What do you think of the oversight board? I mean, the meme about the oversight board among Facebook critics is that it's a public relations stunt that's a shield that, you know, will make it look like Facebook actually cares what the public has to say, but it doesn't. Do you subscribe to that?
I find the name deeply, deeply, deeply manipulative. So if we were to ask, like, what does the Facebook oversight board do? You know, if you believe in the theory that what's on
the label is what's in the can, you'd be like, oh, I mean, they do oversight. They look after Facebook
as a whole. It really should be called the Content Moderation Appeals Board, because that is
the entire scope of what they can do. They can't ask about the performance of the content
moderation systems. They can't even ask, like, what language are the content moderation
systems in? They can't ask about how systemic it is, like, are there other examples of the ones
that get appealed to them? They can't do anything. They can't ask about the algorithms in general.
And so my heart really goes out to the oversight board because it is full of a number of very conscientious people who I think genuinely try very hard.
But because they are so limited by what Facebook allows them to do, they get put in a very hard position.
What do you think about the Metaverse?
Ah, the Metaverse.
So the Metaverse will make certain problems easier and some problems harder.
Right? So on the side of reducing risk, you know, we talked about before, your friends and your family are not the problem on Facebook, hypervirality, million-person groups, five-million-person groups. In a lot of African countries, the single largest news source in the country, or maybe the only really mass news source, will be like a five-million-person Facebook group. That's the problem with misinformation, that we have a selective megaphone,
where 1,000 posts come in, but only two get delivered to you, and they're the most extreme
two, right? That's the problem. In the metaverse, it's much more about one-on-one interactions
or small group interactions, and so you're going to see probably fewer things like
misinformation problems. The big problems with the metaverse are things like, you know,
studies out of China have shown it's more addictive or more habit forming, like substantially
more so than phones, because it's immersive. I think there's going to be a bunch of psychological
things that we're going to be unpacking over the next few years. Things like you have a socially
isolated child who, you know, maybe they are of a lower socioeconomic tier. They can't afford
after-school activities. Maybe they have a single parent. They don't really like their life.
And they come home and they put on their headset and they have a way cooler apartment.
Their friends are more beautiful, have cooler clothes. They do these fun things. But at the end of the
night, they take their headset off and they look in the mirror, and they don't like
the person they see, right?
The idea that the only person you liked being
was a virtual avatar, I think, is just,
it's heartbreaking to me.
And I worry that the metaverse is going to become
this stopgap for us where we say, instead of like
making sure our old people, our elders are like integrated
into our communities and have adequate resources, we can
like plug them into the metaverse and leave them alone.
Or instead of building community centers and after-school activities,
we can, you know, leave our kids to go into their headsets.
And so I worry a lot about these kind of systemic problems.
And we know that they haven't invested in safety again.
So when it comes to things like hate speech, we're creating these spaces where, you know,
you can hear 40 people speak but you can't, you know, pause the tape and roll back two minutes
and realize the person who's been shouting racial epithets at you is that one. You basically
silence the ability of minorities or women to be in these spaces, because they just get yelled at
and they get pushed out.
And so I think there should be upfront investment in safety
and shared systems across the games
because it is very clear that the reason why Facebook wants to go to the Metaverse
is they want when someone comes to them
and complains about getting groped
or complains about getting, you know, harassed,
that they can say, I'm so sorry, that's really tragic.
But we didn't write that app.
We just made the hardware.
So it sounds like you think the Metaverse is going to happen
because that's still an open question.
I am a follower of Moore's law, so I think the headsets are going to get cheaper, I think they're going to get faster, they're going to get lighter, they're going to get higher resolution. I have trouble imagining that 10 years from now, we won't spend more time than we currently do in the metaverse. Do I think Mark's vision of spending 12 hours a day in the metaverse will come true? No. I think Mark thinks that because Mark lives that. But not all of us have to avoid cafes,
because people glare at us when we walk in.
So, you know, we have different incentives.
Another reason why Facebook wants to go to the Metaverse,
okay, maybe there's the, you know, we don't need to be responsible.
But the real reason is that they are an app on Apple's phone,
an app on Google's phone.
They want it on the phone.
They want the operating system.
No doubt about that.
An interesting thing has happened where Apple has asked its users
if they want to cut off Facebook's ability
to track them off the app
and people have overwhelmingly said yes.
Now an interesting thing has happened after that
where Apple is in the middle of doubling its ad sales team
or its advertising team, not sales,
the team that builds ad products there.
I got to ask you about this.
If the business model drives the problems that we're seeing
and Apple is kneecapping Facebook's business
and in some ways emulating it,
don't they put themselves at risk of falling into some of the same problems that we see Facebook facing today?
That is such a great question.
I do think that they are playing with fire.
Like, I think it's always hard when you have an economic incentive that is not aligned with safety.
So right now, Apple's economic incentive is safety.
They make their money selling new devices.
So if you get more privacy and having more privacy makes you want to buy their phone more,
they can make more from your phone.
I think they are, they're mixing their incentives.
And I have not sat down and, like, fully war-gamed that out, so I would want to be cautious about speaking with too much confidence.
But I can totally imagine that might become a problem in the future.
I think it's a major problem for them.
And we're just scratching the surface there.
Yeah.
It's definitely worth talking about.
Yeah.
Let's talk a little bit about your nonprofit.
I want to ask this question beforehand. I read this interesting quote from you that said,
like, your greatest hope is that you're not relevant, that you won't be needed anymore.
Not needed anymore. But at the same time, you're working on a book. You signed with CAA,
which is a big Hollywood agency. So how do you square those, your hope and the action?
Well, right now, I don't see very many people who are on the stage who are proposing and pushing
for things like, you know, let's solve the product, not the content.
At the same time, I'm someone who's almost died, right?
Like, I spent two months in the hospital when I was 29.
I am fully sensitive to the idea that, like, life is fleeting and precious.
And the way we solve real problems in ways that have the highest chances of working
is you cannot be relying on individuals, right?
We have a hit-by-the-bus problem right now.
Like, we should be wary of, like, Facebook employees
driving buses around New York, right? Like, you can imagine nothing that would be more convenient for
them. We need 100,000 people who can speak in as much detail about the choices we have with regard
to social media as I can, right? A world where there's only one person talking this way is not a
safe world. And I think anything, I had a lot of people pushing me really hard when I first came
out to be like, give us the five-point plan. How do we fix Facebook? And I
have said repeatedly, any world where, you know, I am the solution is just as autocratic as the world
we live in right now with Facebook. And I really believe that the process of democracy is about
having a lot of different people with different interests and different stakeholder groups,
having to hash out something that they're willing to feel comfortable with together.
And right now, the people who get to sit at that table are a very, very small number of people.
Like, there are only hundreds of people in the world right now who understand
these systems at the level of detail that I do, right?
The interaction between the algorithms and the product features, and how all these things
feed back on each other, how they affect different kinds of people and when.
And that is terrifying to me.
And so we need a world where that is not the case, and we need to think about a continuum of expertise.
Like, right now, it's not just that security people only get trained inside
the companies, right?
It's as if SpaceX was dependent on training its aeronautical
engineers in-house. If SpaceX was training its aeronautical engineers in-house, we would not be going to Mars, period. The reason why Twitter can't afford safety people is because there's so few of them, and Twitter can't pay high enough. All of us are endangered by the fact that we are reliant on these companies to train the experts. But if we look on the policy side, it gets even more scary. Because just like those security people are trained in-house, the only place where you become fluent in these policy issues around social media
is, like, working all these issues on the Hill.
You're a legislative aide.
Your congressperson really cares about it.
You start learning more about it.
You get more fluent.
And as soon as you start to be a little effective,
you get poached off by Facebook.
You get poached off by Twitter.
You get poached off by another tech company.
And in the couple weeks before my Senate testimony,
we met with an aide from Blumenthal's office
and one from Blackburn's office and shared with them.
We'd given them the SEC filings
and all the files associated with it.
And we asked them like, hey, like, you know,
this is a really good conversation.
Is there anyone else on the committee that would benefit from having these documents?
And I don't remember which aide it was, but one of them was like, yeah, you know, I get why that would make sense, and I want to say yes.
And I would say yes if we could go more than two weeks without having a staffer from the committee leave for Facebook.
Two weeks, right?
And that's a huge problem.
Did you see, this just happened, but there's a revolving door problem
across government. They've hired people from the FTC. There was a banker, no, bankers were
testifying before Congress, and one of the members thanked the bank for hiring one of their
staffers. And the bank was like, her father already works here. It's amazing. It's a problem. The
revolving door is a problem. But when it comes to banks, like, you know, you can get a finance policy
degree. Like, if you want to go get your Ph.D. and specialize in, like, how we should
regulate banks, you can go and get probably a free degree in public policy in that subject.
You can probably get grants for it. You cannot take a single class, a single class,
where you really understand, here's the spectrum of choices we have. You know, here's the
very wide, eclectic palette of colors we could paint with to build social networks. And
here's the consequences of these choices. You cannot take a class.
If you're a poli-sci person or even a CS person anywhere in the United States right now, anywhere in the world.
And that's unacceptable.
And I've heard you talk about the fact that you want to build a simulated social network so people can train on it.
You also want to train lawyers who are interested in bringing action on the ins and outs of these networks,
so they sound smart about it.
Is this what Beyond the Screen is going to be all about?
So Beyond the Screen is two core products or two core things we're working on.
One is a project that we got seed funding for, and we announced yesterday.
So the McCourt Institute is graciously giving us our first seed funding for something around duty of care.
So when it comes to any other physical product, any food you eat, any drug you take, we have certain expectations about what the floor is for safety.
Like, what are the questions you should have asked?
Like, you should have done a certain amount to, like, make sure this was a safe thing.
We don't have an expectation around that.
We don't have a consensus around what that would be yet when it comes to social platforms.
And I say social platforms because I would also include things like gaming
or Discord or things like that in there.
And so we're doing a project that's around,
I think Jonathan Haidt has done really good work
around helping us begin to have a public conversation
about the harms around kids' mental health.
We want to mirror that process
across a wider set of harms.
Let's talk about human trafficking.
Let's talk about cartels.
In addition to the kids, let's talk about elder issues.
You know, let's do a much broader set
and reproduce that process.
Let's kind of do the truth and reconciliation phase.
And because I'm a product manager,
like, the way we solve problems
is we get really clear
about articulating what the problem is.
And then let's talk about levers
that would prevent or mitigate that problem
because often right now
we confuse a lever for a strategy
for pulling that lever.
I'll give you a concrete example.
A lot of the harms for kids,
a common lever is let's keep under 13-year-olds
or let's keep under 16-year-olds
off the platforms.
Right now, because the people who understand social problems often don't understand tech,
they converge on, let's check IDs.
And I don't think checking IDs works.
Like, you can go, like, we can have a conversation on it.
I also think there's huge collateral damage.
I don't think any of you really want to have to have your ID checked.
There's, like, a variety of issues with it.
But if you had asked a technologist, hey, I have a problem.
I need to find under 13-year-olds.
How could I do that?
They would come back and say, here's 10 or 15 ways to find an under 13-year-old.
We don't have to check IDs.
If we could all look at the same list and say, hey, here's the menu of what's possible.
Right now the platforms are only doing one or two things.
They're like asking you how old you are.
And if you say you're 13, they're like, cool.
You know, I'm not saying do all 15 things.
Like, I don't want to be prescriptive.
But I do want us to sit down and say, hey, maybe there's six or seven things that we all feel comfortable about that we think are the expectation.
And if you're not willing to do that amount of work, then you're negligent.
So that's the goal of duty of care.
And then because a lot of the logic in duty of care is actually dependent on having an intuition on the physics of social networks,
instead of saying, just trust me, you know, like we talked about deep reshares versus shallow reshares,
or like, how long should you have to wait before you reshare, all these things.
Imagine if we had, you know, we spent hundreds of millions of dollars every year on flight simulators
to train jet pilots to keep us safe for the Air Force
or keep our jets safe for Delta Airlines.
We spend $0 on flight simulators for social networks.
We have to train all of our safety people in-house
because we don't have a lab bench for training them in academia.
And so those are the two things that we want to work on over the next few years.
Sounds great.
I read that you're trying to raise,
or you were trying to raise, $5 million for this.
I have a couple questions about that.
A, I'm curious if you've raised it,
B, if you have, where's it come from?
And I would say most importantly, there's, I think, a plague in this world of big tech advocacy, where there's so much dark money that gets spent all over the place and no one ever discloses where that money's coming from. And so we don't know where the agendas lie.
So can you, I don't know if you have or not, but are you going to commit to share the list of your funders moving forward when it comes to running this nonprofit?
You know, I am totally willing to entertain that idea. I want to put a quick caveat, which is I would like to check in.
So we've only taken two checks ever. With McCourt, we're in the process of getting the check from them, but they made a public announcement of it yesterday. I would want to check with those two other funders on whether or not they would want their names public.
But for context, prior to McCourt, we'd only raised a very low six figures.
Like, I've gotten by on a shockingly small amount of money, and part of my book advance is all that has funded me since I came out.
The people who tell me, oh, it's easy, they'll just give you a check for $300,000.
I don't know how that actually works because it has never happened for me.
I'm not going to complain, though.
But I like that idea, and I want to double check first.
But we've been public about our funding, at least after those first two checks.
I feel like you should make it a prerequisite.
Yeah, but I'm saying that going forward, I can talk to them and be like, hey, like, we need to have your name out in the open.
I'm totally fine with that.
Oh, we're running out of time.
Do you want 10 more minutes, or are they going to get angry at you?
We're the last ones, so you never know.
They're mad.
Let's do five more, and then we can bail. It'll be shameless. We'll make them pull us off the stage.
Sounds good.
This is just a curiosity of mine.
Yeah. In the 60 Minutes interview you did after you revealed who you were, maybe it was there, maybe it was the Journal, you mentioned that you had basically been living off of Bitcoin winnings or something like that. I don't know, correct the record if I'm wrong.
I don't think there was anything about that in 60 Minutes.
Oh, but is that true? Because I heard you were living in Puerto Rico with...
No, I think people conflate the two.
Yeah, they conflate the two.
So I do hold some crypto.
It is not life-changing amounts of crypto.
It is part of a diversified portfolio.
All right.
But I lived in San Francisco, so, like, I have lots of people who talk about crypto all the time.
I moved to Puerto Rico because I was so ill that I was paralyzed beneath my knees.
So I went into the hospital because my leg turned purple, because I had a foot-long clot that had been in my leg for between 18 months and two years.
They don't know how long.
And when I went to the doctor and complained about it,
I'd be like, I'm in a lot of pain.
I'm really exhausted.
They'd be like, oh, it sounds like you have depression.
You know, there's nothing gendered about women's pain, clearly.
But it turned out that I was also starving to death.
So I had celiac disease, and even though I had gained weight, I was so malnourished my protein synthesis had stopped.
So I showed up in the hospital. And for context, my father-in-law died last week, and he had been suffering from cancer for a long time, maybe seven months.
And he went through a period of time where he didn't really eat anything for four months.
He ate very, very, very little.
And he lost 75 pounds or something insane.
And my father was reassuring me as he was looking through his lab results.
He's like, these look real bad, but to be fair, your numbers looked worse.
And so I was really malnourished.
And I got paralyzed beneath my knees because my nerves died.
And I'm much better now.
I can walk on my toes.
I hike for hours.
But I have constant pain.
I have constant, very severe pain, to the point where in addition to my neuropathy drugs, I have to numb my feet. Like, that level of pain. And I can't take more because my heart beats too slowly, for context.
And I was really unhappy in San Francisco during COVID, because if you've ever been, you know how balmy and warm San Francisco is.
I used to live there.
Yeah, yeah.
And the buildings aren't insulated. So, like, I couldn't even keep the inside warm.
And I was really unhappy, and a bunch of people were like, we're all going to move to Puerto Rico.
And I was like, well, we have to only socialize outdoors because it's COVID.
You know, I have nothing to lose.
And I went down there for, like, a three-week vacation.
And I was like, wow, I've been complaining about being cold for years. Why did I live in San Francisco? Um, and it's just been a game changer. I actually credit Puerto Rico with my ability to have been as successful as I was with the whistleblowing, because I had so much more energy after I moved there. Right. I was able to lower my pain meds a bunch. Um, and it just gave me many more hours a day where I was effective.
Okay. So we got to the bottom of the crypto narrative. We're out of time. Do you have like 60 seconds? Is there anything you're
optimistic about? Oh, sure. Can I, can I do my soapbox now? Yeah, but they're going to make
us wrap. But yeah, definitely. Soapbox away. I'll hold the floor. Okay. Okay. So we talked
about some heavy stuff today, right? So like genocide, like, you know, business incentives that are
dystopian, kill democracy, whatever. The reality is every single time we have invented a new
communication technology. It has been really, really disruptive. People like to say the printing
press was disruptive. The printing press was not disruptive because no one could read, right? It's like
60 years between the invention of the printing press and the printing press causing harm.
The real thing that caused harm was when Martin Luther was like, the Catholic Church has been lying to you, you're not going to hell, just go be nice to people, love each other. Let's teach everyone to read so they can read the New Testament too. And it turned out when you taught people to read, Germany went from being 3% literate to being 30% literate in 20 years. And people started
publishing pamphlets on things like, you know, how do you know if your neighbor's a witch?
You know, I hate to tell you, you might have to burn them. It's going to be hard, but you can do it.
We invented cheap printing presses. Those are newspapers. And we literally had wars over misinformation.
Historians have looked at the telegraph. The telegraph did not cause the Civil War, but they believe the telegraph influenced when the Civil War happened, because for the first time people could actually see that the opinions in D.C. and the opinions in other parts of the country were very different from the ones at home, because the news all converged in real time.
Radio, we've had whole genocides over radio, things like Rwanda. Or like World War II, where for the first time people had personal relationships with their leaders, and literally Hitler was the best person in the world at radio.
What we are experiencing right now is not novel, but the reason we feel overwhelmed, why this feels different, why this is so scary, is because it's actually our problem to solve, right?
Like, we feel overwhelmed because we can see that we're in the driver's seat, that we have to figure out the way forward.
And I want to be honest, it might get a lot worse before it gets better. But the reality is, we have figured out how to do this every single time before, and I have 100% confidence that if we bring 100,000, if we bring a million more people to the table,
we are going to figure out a path forward where we get to live with social media,
where we get to feel good about it, and it's good for us.
So I'm excited about the future.
Frances Haugen, everyone.
Thank you.
My pleasure.
Thank you, Frances, for the chat.
Thank you.
What an amazing audience.
You guys are amazing.
This is going to be on the Big Technology Podcast feed, so if you go on your phone, subscribe to Big Technology Podcast. It's available on all podcast apps. We'd love to have you. We do these every single Wednesday. We've interviewed Frances's lawyer, Lawrence Lessig.
Oh, Lawrence Lessig. Yeah. Yeah, he's wonderful.
And a bunch of other people that agree and disagree with her. So thanks again. Thank you to Unfinished Live for having me here, having this conversation with Frances. It's been awesome. And, well, we hope to see you again on the internet.