On with Kara Swisher - Signal’s Meredith Whittaker on Surveillance Capitalism, Text Privacy and AI
Episode Date: October 17, 2024
What do cybersecurity experts, journalists in foreign conflicts, indicted New York City Mayor Eric Adams and Drake have in common? They all use the Signal messaging app. Signal's protocol has been the gold standard in end-to-end encryption, used by WhatsApp, Google and more, for more than a decade. But it's been under fire from both authoritarian governments and well-meaning democracies who see the privacy locks as a threat. Since 2022, former Google rabble-rouser and AI Now Institute co-founder Meredith Whittaker has been president of the Signal Foundation, the nonprofit that runs the app. Kara talks with Meredith about her fight to protect text privacy, the consolidation of power and money in AI and how nonprofits can survive in a world built on the surveillance economy. Questions? Comments? Email us at on@voxmedia.com or find Kara on Threads/Instagram @karaswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for the show comes from Into the Mix, a Ben and Jerry's podcast about joy and justice produced with Vox Creative.
Former President Donald Trump has made voter fraud a key talking point in his 2024 presidential campaign, despite having no evidence of widespread fraud.
Historically, claims like this can have a tangible effect on ordinary voters.
In a new three-part series,
host Ashley C. Ford talks to Olivia Coley Pearson,
a civil servant in Douglas, Georgia,
who was arrested for voter fraud
because she showed a first-time voter
how the voting machines worked.
Hear how she fought back
on the latest episode of Into the Mix.
Subscribe now.
Support for On with Kara Swisher
comes from Anthropic.
A lot of AI systems out there feel like they're designed for specific tasks that are only performed by a select few.
So where do you start?
Well, you could start with Claude by Anthropic.
Claude is AI for everyone.
The latest model, Claude 3.5 Sonnet, offers groundbreaking intelligence at an everyday price.
Claude's Sonnet can generate code, help with writing,
and reason through hard problems better than any model before.
You can discover how Claude can transform your business at anthropic.com slash Claude.
Do you feel like your leads never lead anywhere?
And you're making content that no one sees, and it takes forever to build a campaign?
Well, that's why we built HubSpot.
It's an AI-powered customer platform that builds campaigns for you, tells you which leads are worth knowing, and makes writing blogs, creating videos, and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.
Hi, everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher, and I'm Kara Swisher.
My guest today is Meredith Whittaker, president of the Signal Foundation,
the nonprofit that runs the Signal messaging app,
one that I use all the time because it's pretty much the only one I trust.
Signal has been around for a decade and only has 70 to 100 million users a month,
which is peanuts compared to WhatsApp,
just under 3 billion.
But Signal is not a lightweight in the tech world.
Its Signal protocol is considered the gold standard in end-to-end encryption.
In fact, it's the tech that WhatsApp
and Facebook Messenger and Google use.
The difference is that Signal users
actually keep all their metadata locked up too,
which is why it's become the messaging app of choice
for people who are really concerned about privacy, cybersecurity experts, NGO workers, indicted New
York City Mayor Eric Adams, Drake, and me too, as I said. Meredith Whittaker has been leading the
Signal Foundation since 2022, and she's kind of a perfect person to do the job. More than one
reporter has called her Silicon Valley's gadfly, which is also my title, by the way.
After starting at Google in 2006, Whittaker quickly moved up the ladder, founding Google's Open Research group and M-Lab.
In 2017, while she was still at Google, she also co-founded the AI Now Institute.
This was very early at NYU with Kate Crawford to research the social implications of AI.
So basically, Whittaker was a rising star. But then in 2018, after she helped organize walkouts to protest sexual misconduct at the company,
citizen surveillance and Google's military contracts, the company told her to cool it,
and she left. Whittaker has been a no-holds-barred advocate for data privacy in a world increasingly
run by what people call the surveillance economy. I'm excited to talk to her again about that,
the increasing consolidation of power and money in tech,
especially in AI, where she sees this privacy fight going
and how a nonprofit or even for-profit startups can survive.
She's still a firebrand,
and our expert question comes from another of those people,
Renee DiResta, so this should be good.
One more thing, I've been nominated for the best host
in the Signal Listener's Choice Awards, which has nothing to do with the Signal messaging app.
We're not trying to butter up judges here.
But if you agree that I am the best host, and, you know, you really should, you can follow the link in the show notes.
Thank you so much.
All right, let's get to it.
It is on. Meredith, welcome. Thanks for being on ON.
So happy to be here, Kara, and nice to see you again.
I know. It's been a while since we did an interview. It was 2019. You were still at Google building up the AI Now Institute on the side. Now you're at the Signal Foundation,
a nonprofit. Talk a little bit about that shift and what happened there and why you decided to
do this. Yeah, it feels like centuries in tech and in my life. I mean, for me, this is all part of a
single project, really. Like, how do we build technology that is actually beneficial,
actually rights-preserving? How do we stop the bad stuff, start the good stuff? And of course,
technology is vast, and these companies are huge, and there are many angles to take. So,
you know, I was also at the FTC trying to help it there, AI Now trying to produce better research,
and now Signal to me is just the most gratifying dream job in a
sense, because we are actually building and shipping high availability tech that is rejecting
the pathology and the toxic business model that I've been pushing at, prodding, fighting for
almost 20 years in my career. Talk about why you left Google.
It had gotten, I don't think hostile is the right word,
but not what you wanted, as I recall.
Yeah, I mean, it was a combination of hostility from some in management who didn't like
honest conversation about ethical business decisions,
let's say.
Oh, that, oh, that.
Yeah, that old thing.
Not evil, but, you know, adjacent. Not evil, just, you know, like in polite company, don't mention the evil.
Yeah. So I, you know, I had raised a lot of alarms. I'd participated in organizing against
some really troubling business decisions, some of the surveillance and defense contracting that were
just being made based on bottom line, not based on kind of ethics and duty of care.
And I kept that up for a number of years, but at some point, I felt that I had hit the end of the
road. The pressure from Google to stop some of the retaliation that I and a lot of my colleagues
were facing, meant that we were just spending more and more time strategizing how to, you know,
keep our toehold rather than, you know, building, say, the AI Now Institute, which has gone on to do incredible things, or thinking about positive projects in the world. And, you know,
my adrenals needed a rest. I needed
a change of pace. And I'd also been at Google for over 13 years at that point.
Yeah. When you say they didn't want, there was pressure, explain that for people who don't
understand within these companies. Google's always been a place where things are debated
since the beginning, or it was, even if two people really did run the place or controlled the place.
Yeah, look, I mean, I joined in 2006, which was a wild and free time at Google. And they really did nurture a culture of open conversation, communication. You know, there was the sort
of Usenet message board vibe on our internal mailing list. You just go on and on debating
the nuances of any given point. So that
was sort of the nervous system of Google when I jumped in there. And of course, there was a huge
amount of money. So there was a lot of room to play around, to fail, to learn things, to start
new initiatives. Now, it doesn't mean that decisions were made by consensus, right? But it
means that was the environment that was nurtured and that attracted a lot of people. Yeah, on every
topic. It wasn't just very serious ones.
I remember a huge argument over kombucha there at one point.
Yeah, yeah, yeah.
Yelling at the founders about the shitty kombucha.
There was a famous thread on Goji berry pie that went on for like 3,000 posts, right?
So, you know, I really, I learned my poster skills pretty early.
But, you know, that muscle still remained. And a lot of people
were there because they believed the rhetoric, right? Like, don't be evil is a bit trite. And
it's certainly, you know, far down the line, there's a lot to the left of it. You could do a lot of bad things to the left, right? Yeah. And evil to whom, I mean, come on. But nonetheless,
it was, you know, in a socially awkward discipline of kind of nerds who do computer science, don't be evil was often invoked just to say, like,
yo, I'm uncomfortable, right? So there was this reflex in the company. And as they moved,
you know, let's say they, they moved closer and closer to the red lines, they were able to swear
off in the beginning, because they were so far away because
the money was coming in and we'll solve the problem of what happens when we have to choose between
a billion dollars and hanging on to our ethics right that you know that seemed like a fantasy
and of course they started hitting up against these these red lines in 2009 you know kind of
requests from the Chinese government um you know they they held firm there. And then we went into the
mid-2010s, and they're signing up to be defense contractors, you know, building AI drone targeting
and surveillance software. So you had started the AI Now Institute on the side. Explain for
people what that is. And then we're going to get to Signal, because it's how you got here is an
interesting journey, I think. Yeah. no, my path is wild and winding.
So I had founded a research group at Google, and that was a research group that the nucleus of
which was a measurement lab, this large scale internet performance measurement platform with
a consortium of folks in Google and outside of Google at the Open Tech Institute. So then I, you know, I hear about
AI, it's machine learning back then around like 2012, 2013, 2014. And I'm like, oh, what is this?
This seems cool. It's like a statistical technique that does some stuff. Oh, wait, you're taking
really flaky data, like way more flaky than mine, and moving way higher up the stack, you know, up the stack to making decisions about human beings.
Yeah. So the garbage in, garbage out idea. That's the issue, right? Like, you're calling this intelligence, but actually you've just sort of masked its provenance, masked its flaws, and are using it to imply that these massive systems are intelligent. It's not that data and patterns aren't useful for decision making. Obviously they are,
right? You know, the issue is that there is a toxic business model at the heart of this,
that those patterns aren't always responsible, and that we forget that data isn't a synonym for facts
at our peril. Yeah, I just interviewed Yuval Harari, and he made a salient point, a very
simple one, that there's a lot of information out there, but not a lot of facts. And that's hard to discern. And of course, this higher intelligence isn't going to know the
difference because you put it in there, right? Because it's only going to know what it knows.
So you got worried, you leave. Explain how you got to Signal and what your thought was on why
it was important. Well, I've actually been a big fan of Signal, involved in the world of Signal since around the beginning.
When you work at the network layer, at the kind of low layer,
you're privileged to begin to learn pretty quickly
that everything is kind of broken, right?
There are security flaws, there are privacy issues,
like duct tape and sticky tape,
and like a handful of core libraries maintained by open source contributors who live on a
boat and won't answer emails.
Like you're like, oh, this is the internet.
Wow.
And I think like I began to be animated by questions of privacy and security pretty early
because of that exposure and because it was the most interesting place, frankly. It was where the fresh air of politics met the theory of the internet. And so I had been a fan. I'd known
Moxie for a number of years.
Explain who that is. That's Moxie.
Moxie Marlinspike is the founder of Signal, co-author of the Signal Protocol, and really
carried Signal as a project on his back, putting huge amounts of time and energy into it
to do what is almost impossible, which is create this virtuous, non-profit, open,
high-availability communications tech that is not participating in surveillance, that is not
participating in targeting or algorithmic tuning or content curation
or any of the other things that we've seen
go real, real south with the others.
Right.
Now, let's talk about Signal Messenger,
which is the core product.
For a while, a lot of the big concerns
around messaging apps were green versus blue bubble barrier
or stupid things like that.
But there's more important things.
What does it do differently and what doesn't it do
so people can understand the difference
between all the different messaging apps? Signal's big difference is that we are truly
private. We collect as close to no data as possible, and we develop open source so that
our claims, our code, our privacy guarantees don't have to be taken on trust. You can actually
verify them. And because of the reputation we've built up in the community,
because the Signal protocol was a massive leap forward
in applied cryptography and actually was the thing
that enabled large-scale private messaging on mobile devices,
we get a lot of scrutiny.
And that promise of many eyes making better, more secure code has really
delivered for us. Right. People know what you're doing. And so it's like a messaging app like any
other in terms of you can message back and forth. But what does it do differently and what doesn't
it do? Yeah. So it protects your privacy, let's say, up and down the stack. We use our own gold
standard cryptography that actually others license from us.
WhatsApp licenses it.
Google licenses it.
It is the gold standard.
Yes, this is end-to-end encryption.
End-to-end encryption.
And we created the kind of the gold standard there.
And it protects what you say.
So you and I are texting, Kara.
Signal doesn't know what we're saying on Signal.
Like, you can put a gun to my head.
I can't turn that over.
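To make that claim concrete, here is a toy end-to-end encryption sketch using the PyNaCl library. This is not Signal's actual protocol, which layers X3DH/PQXDH key agreement and the double ratchet on top; the names and message are purely illustrative.

```python
# Toy end-to-end encryption sketch using the PyNaCl library. This is NOT
# Signal's actual protocol (which adds X3DH/PQXDH key agreement and the
# double ratchet); it just shows the core property: only the endpoints
# hold private keys, so a relaying server sees nothing but ciphertext.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()  # never leaves Alice's device
bob = PrivateKey.generate()    # never leaves Bob's device

# Alice encrypts with her private key plus Bob's *public* key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The server relays `ciphertext` as opaque bytes. Holding no private key,
# it has nothing meaningful to turn over.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"
```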
But we go well beyond that too, because of course, metadata, this fancy little word for data about you and your friends, is also incredibly revealing.
So we don't collect data about your contacts, your groups, your profile photo, when you
text someone, who's texting whom.
So all of that required research and actually design, like building new things, solving problems, because the ecosystem we're in
has been built with the assumption, everyone wants to collect all the data all the time
and keep it to do whatever. So we actually have to go in and be like, well, we can't use that
common library for development. Because if we use that, it would collect data. Let's give a concrete example. When we added GIF search to Signal,
because everyone likes a reaction GIF, right? Or at least boomers do.
And we couldn't just use the Giphy library. That would have taken a couple hours. We would
have tested it, and the engineers go home and go to sleep. No, we had to rewrite things from scratch. This was actually a significant
architectural change. It took a number of months. And when we implemented it, it meant that we
weren't giving any data to Giphy. They have no idea, whatever. So when they're acquired by Meta,
we don't have to worry. You don't have to worry. Right. Exactly.
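A hedged sketch of the shape of that fix: tunnel the GIF request through a relay, so the relay operator can't read the search and Giphy can't see who is asking. The hostnames below are invented, Giphy's real API also requires a key, and error handling is omitted.

```python
# Hedged sketch of privacy-preserving GIF search via an oblivious relay.
# Hostnames are invented; Giphy's real API also requires an API key.
import socket
import ssl

PROXY = ("gif-relay.example.org", 443)  # hypothetical Signal-run relay
TARGET = "api.giphy.com"

def search_gifs(query: str) -> bytes:
    # 1. Connect to the relay: it learns our IP address and nothing else.
    raw = socket.create_connection(PROXY)
    # 2. Ask the relay to blindly tunnel bytes to Giphy (HTTP CONNECT).
    raw.sendall(f"CONNECT {TARGET}:443 HTTP/1.1\r\nHost: {TARGET}\r\n\r\n".encode())
    raw.recv(4096)  # consume the relay's "200 Connection established"
    # 3. Run TLS *through* the tunnel, terminating at Giphy. The relay now
    #    sees only ciphertext (it can't read the search term); Giphy sees
    #    only the relay's IP (it can't tell who is searching).
    tls = ssl.create_default_context().wrap_socket(raw, server_hostname=TARGET)
    tls.sendall(f"GET /v1/gifs/search?q={query} HTTP/1.1\r\nHost: {TARGET}\r\n\r\n".encode())
    return tls.recv(65536)
```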
So this is end-to-end encryption. Who's using it now, and where are you seeing growth right now?
Yeah.
I mean, our user growth has been steady. And I think, again, this just, you know, the bloom is off the big tech rose, right?
People do not want to be surveilled.
There is a giant demand for privacy.
And so, you know, Signal is global core infrastructure.
We're used by journalists everywhere, human rights workers.
We are the core infrastructure in Ukraine for safe communications, for sensitive information, you know, government, military. We are, you know, core communications in governments across the world, right? Just for, you know, we don't want a data breach to expose sensitive information. I think, you know, every time there is a,
what we call a big tech screw up or a massive data breach, we see spikes in signal growth.
We also see spikes when there are, there's geopolitical volatility. So, you know, you see
when there was the uprising in Iran around women's rights, we saw a massive spike in use, and then we saw the government or the ISP try to block it, and then we stood up proxies to try to get people access anyway.
So it's really, you know, it's when people suddenly recognize the power that personal
sensitive information can give those who might want to oppress or harm or,
you know, otherwise hurt their lives. Or just sell you things.
Exactly. Or sell you things and, you know, and then like decide what news you get,
decide if you get an ad for a good rate on a home loan or a bad rate, right? Like these things that
are subtle, but also really meaningful. WhatsApp is peddling privacy in the form of encryption as a selling point, but still collects metadata. Talk about this business model for a huge slice
of the tech sector at this point, data collection, surveillance, capitalism, profits, etc.
I mean, I think of this as really the original sin, right? Like the Clinton administration knew there were privacy concerns with the Internet business model. They had reports from the NTIA. They had advocacy from human and civil rights organizations. This was a debate that played out over the 90s as the rules of the road for the Internet were established. And I, you know, this is why I just like, I get itchy when people are like,
they could never have known. And I'm like, literally, there were reports before any of
this were done, laying out exactly how this would go down. And it went down that way and slightly
worse. This wasn't a matter of guileless innocence leading to innovation that got out of control.
This was a business model choice where, you know, the Clinton administration said absolutely no restrictions on commercial surveillance.
And they also endorsed advertising as the business model of this internet.
And like, of course, what is advertising if not know your customer?
We got to get more and more data, right?
So it's an incentive.
It's like a flywheel incentive for surveillance.
We want as much data about our customers as possible so we can target them with ads.
And what does that incentivize?
That incentivizes huge clusters of compute and data storage so that you can keep this data. That incentivizes things like MapReduce, which is sort of the precursor to a lot of the AI models.
Now that incentivizes, you know, social media that calibrates for virality and sort of like
upset and cortisol and like, you know,
it's like amygdala activation, basically. I always say enragement equals engagement.
Yeah, there you go. Exactly. And why does it equal engagement? Not because like we like
engagement, but because that means you see more ads, you click on more ads, you contribute more
data, you know, the cycle continues and this business model is super profitable. So that's the norm. So let's talk about finances. You said there isn't a business
model for privacy on the internet. Now, Signal is not just opposed to surveillance capitalists.
As we said, it's a nonprofit funded by donations, including a big chunk from WhatsApp founder,
Brian Acton, who is also a co-founder and board member at Signal. You don't take investments,
you don't have advertising. The app is free,
but you still need money to pay your engineers
and keep your servers running.
Talk about how you do that.
Yeah, well, our costs are about $50 million a year.
And every time I say that,
I get a couple of tech founders,
a couple of tech execs come up to me and say like,
congratulations on keeping it lean, right?
So we're, you know,
we're doing really well, but what we're doing is big and requires resources because tech is
capital intensive. So right now we are funded by donations. That's our nonprofit status. And again,
as we just sort of touched on, that nonprofit status is not a nice to have. It's not like,
oh, we like, you know, charitable giving. No, it's a prophylactic
against the pressures of a business model that are opposed to our mission, which is private
rights preserving communication. So we are looking at different models right now for how we grow
this. How do we sustain Signal? And how do we make sure that Signal isn't just a lonely pine tree
growing in a desert, right? We need an ecosystem
around us. We can't be just the, you know, the sole example of one that got away from that business
model. And I think, you know, things like how do we set up endowments that can sustain Signal
long-term? How do we think about, you know, tandem structures or hybrid structures where things that would otherwise be polluted and
toxified by exposure to a toxic business model are kept cordoned off. There's some vision in
there that we could inject, but the flat fact is that's the cost. Yeah. And there's nothing you
want from your users except use, right? It's like a free park or something like that. So protecting privacy is also a moving target. There are new systems on the horizon, quantum
computing comes to mind, which require a complete overhaul of encryption systems, which you
depend on. You're already preparing for Q-Day, as the Wall Street Journal recently called it,
very dramatic over at the Wall Street Journal, but explain what Q-Day is and what you've been
doing to deal with that. Some people have a vague knowledge of quantum computing, but it can
unencrypt everything very quickly, basically. Yeah, it's very, very powerful computing that
basically can factor the products of large primes, which is what we depend on in cryptography, very quickly,
right? And so this would break the premise of, you know, kind of unbreakable math is the guarantor of,
you know, current crypto systems, and a future in which we have sufficiently powerful quantum
computing, which I guess is what Q-Day is, although I would have thought it was like a QAnon thing.
Yeah, Q is a letter we have to stop using.
Yeah, I'm like, oh, cool.
X and Q, yeah, X and Q.
I know, we're reducing our literacy as we speak.
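For a sense of the math at stake, a toy sketch: an RSA-style public key is the product of two secret primes, and its security rests on factoring that product being infeasible for classical machines. Shor's algorithm on a sufficiently large quantum computer removes that barrier. Sizes below are illustrative, not advice.

```python
# Toy illustration of the math Q-Day threatens. An RSA-style modulus is
# the product of two secret primes; it is safe to publish only while
# factoring it remains infeasible.
from sympy import randprime

p = randprime(2**511, 2**512)  # secret prime (illustrative size)
q = randprime(2**511, 2**512)  # secret prime
n = p * q                      # public modulus

# Classically, recovering p and q from n takes super-polynomial time.
# Shor's algorithm on a large enough quantum computer factors n in
# polynomial time, which is exactly the "harvest now, decrypt later"
# bet: hoard ciphertext today, decrypt once such a machine exists.
```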
But, you know, quantum computing is developing
and there's no clear answer
to when we will have a quantum computer
that can actually do that.
But it's catastrophic
enough that we can't rest on hope or postpone it. So Signal was the first private messenger,
the first messenger to implement post-quantum resistant encryption for our Signal protocol.
And the resistance we added protects against the kind of attack
we can be worried about now, which is called harvest now, decrypt later.
And that just means you collect all the encrypted data.
It's garbled bits.
It means nothing, but you save it and you save it and you save it.
And at a time when these sufficiently powerful quantum computers exist,
you then apply them to decrypt it.
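Mechanically, the post-quantum protection Signal added (PQXDH) is a hybrid: the session key is derived from both a classical exchange and a post-quantum KEM, so breaking one alone recovers nothing. A rough sketch, where the X25519 and HKDF calls come from the real cryptography package and pq_kem is a stand-in for an ML-KEM/Kyber library, not a real import:

```python
# Rough sketch of hybrid key agreement in the spirit of Signal's PQXDH.
# X25519 and HKDF come from the "cryptography" package; pq_kem is a
# stand-in object for an ML-KEM/Kyber library (an assumption, not a
# real import). The session key depends on BOTH secrets, so a future
# quantum computer that breaks X25519 alone still learns nothing.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hybrid_session_key(my_ec_key: X25519PrivateKey, their_ec_pub,
                       their_kem_pub, pq_kem):
    ec_secret = my_ec_key.exchange(their_ec_pub)            # classical half
    kem_ct, kem_secret = pq_kem.encapsulate(their_kem_pub)  # post-quantum half
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"hybrid-session").derive(ec_secret + kem_secret)
    return key, kem_ct  # kem_ct travels alongside the message for the peer
```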
The harvest now thing is really interesting for people who don't understand. It's like
stealing all the safes and putting them in a room, and then someday you'll be able to figure
out how to open them, essentially. Yeah, that's a perfect...
Yeah, yeah. So one of the things, obviously, is reputation. So in 2022, federal investigators
said they had gotten access to Signal messages that helped them charge leaders of the Oath Keepers
in the January 6th plot. It wasn't clear how they got those. And I'm sorry to say this,
because I think he's the biggest information sieve on the planet. Elon Musk questioned
the app on X, saying something about known vulnerabilities not being addressed.
Any idea what he meant? I mean, I'm not going to ignore everything that imbecile says, but
what kind of impact do reports like that have? I know, I know, large sigh.
Yeah, I mean, I don't know what he was talking about. I think, you know, getting to the Oath Keepers point, look, the way that the cops usually get data is someone snitches, or someone's device gets seized. But it is a good hook if you want to scare people about security claims, particularly because 99.999% of people can't themselves validate
these claims, which makes this kind of weaponized information environment really dangerous and
really perturbing to us, which is why we're so careful about this. When Elon tweeted that,
I don't, you know, what I can say for sure, and this is what I posted on Twitter,
we have no credible reports of a known vulnerability in Signal. We have a security
mailing list that we monitor assiduously, where we haven't heard anything. There are no open,
critical vulnerabilities that we have heard of.
So, you know, it's kind of,
you put us in a position of proving a negative.
And so, you know, so it was this off the cuff tweet.
It caused a lot of confusion.
I was sort of dealing with that for a number of days,
you know, not because it was serious,
but because it seriously freaked people out.
We had human rights groups.
We had people calling us just saying like,
look, this is life or death for us, right?
If Signal is broken, you know, we're going to lose people.
We need to know for sure.
And what I can say is we have no evidence that this is true.
I will say since then, Elon has tweeted screenshots of messages on his phone that are Signal screenshots.
So, you know, you can put that together.
Of course he uses Signal.
Everyone uses Signal.
He's got a lot of secrets to keep, I think. Well, I mean, anyone who has anything they're
dealing with that is confidential or has any stakes generally ends up using signal.
Yep. We'll be back in a minute. Mint Mobile's latest deal is kind of like that. Because when you're paying just $15 a month when you purchase a three-month plan, you want to spread the word.
At Mint Mobile, plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.
I've actually tried out Mint Mobile.
I have an extra phone, and it was super easy to set up.
And I didn't really have to do anything.
It just was plug and play.
It's been great as a Mint customer and very easy to do all online.
To get this new customer offer and your new three-month premium wireless plan for just $15 a month, you can go to mintmobile.com slash Cara.
That's mintmobile.com slash Cara.
You can cut your wireless bill to $15 a month at mintmobile.com slash Cara.
$45 upfront payment required, equivalent to $15 a month.
New customers on the first three-month plan only.
Speeds slower above 40 gigabytes on unlimited plan.
Additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
Support for this podcast comes from Severalnines.
If your company operates in the cloud, you know that out-of-control costs,
disruptive outages, damaging security breaches, and unacceptable vendor concentration can feel like they're destroying your cloud services.
It's time for a new paradigm, one that not only provides you with more control over where and how you run your application and data workloads, but also one that provides the operational efficiency and reliability at scale via automation.
Pioneered by Severalnines,
Sovereign DBaaS is a new way forward for corporations with multiple business requirements who need to deploy their workloads in mixed environments. It's grounded in end-user
independence. It gives IT ops teams the ability to deploy and orchestrate databases in public,
private, and hybrid environments, removing lock-in risks and giving organizations the orchestration benefits at scale that they used to get from traditional
DBaaS, but now without its portability and access trade-offs. So whether you're in a DevOps or
platform engineering team, Sovereign DBaaS can provide you true optionality, resulting in a
healthier, more robust, and competitive tech ecosystem. Learn more at severalnines.com slash Cara.
Support for On with Kara Swisher comes from Quince.
No matter how you feel about football and pumpkin spice lattes,
one thing I think most people can enjoy about the fall is a good warm sweater.
And if you're looking for a high-quality one for that upcoming cold weather,
then look no further than Quince.
They're known for their Mongolian cashmere sweaters
that are warm, chic, and best of all, affordable.
In fact, all of Quince's luxury items are priced at 50% to 80% less than similar brands.
I got myself actually a piece of luggage that you could put such sweaters in,
and I really like it.
It's a hard shell.
It moves incredibly well, and it fits in the overhead compartment, which is a critical thing, and at the same time can expand if I had to add more stuff in it. I plan to take it with me everywhere I'm going during the holidays and shoving it full of cashmere sweaters. For quality wardrobe essentials, go to quince.com slash Cara for free shipping on your order and 365-day returns. That's Q-U-I-N-C-E dot com slash Cara to get free shipping and 365-day returns.
quince.com slash Cara.
It's fair to say you've faced a lot of headwinds in this battle to maintain tough standards on
encryption, and everybody does remember Apple's battle with James Comey, of all people.
If people don't remember, it was James Comey.
But last year, the U.K. passed the U.K. Online Safety Act.
The E.U. has been debating a child sexual abuse regulation known as the chat control bill.
Basically, they're all touted as efforts to protect users, especially children online, which seems like a good thing, right?
But you and other security experts have been pushing back.
Talk about these two bills, what they do and would do to your model.
Yeah, I'll just kind of characterize them in one brush,
because ultimately they are aiming for the same thing.
TLDR, in the name of protecting children from abuse, harmful content, they would mandate or would give a body or agency the power to mandate scanning everyone's private messages and comparing them against some database of prohibited speech or content. And this isn't possible while preserving end-to-end encryption.
Like that's the mathematical truth. A backdoor, you implement a backdoor, you have broken the
mathematical guarantee of encryption. And we have only to point to the fact that the Wall Street
Journal just reported that, you know, apparently the Chinese government, no surprise to anyone, has been hacking interception points, so backdoors, in US systems, right? So this is not a game. This
is not a hypothetical. This isn't the technical community raising, you know, large hyperbolic
flags. No, this is the reality. And any backdoor in a network compromises the whole network. So
you backdoor the, you know, you mandate scanning
of Signal in the UK. Of course, communications cross borders and jurisdictions all the time.
And then that means Amnesty International, housed in the UK, when they're talking to human rights
defenders in Uganda, where being gay is punishable by death, working to get people's information,
to get asylum cases going, to get people out and to safety, that conversation is then compromised. So I just, in fact, spoke to
Hillary Clinton, and she was talking about how they use Signal and WhatsApp to help women get
out of Afghanistan after the U.S. military withdrawal, and they needed that secrecy
to protect them. And you noticed, you see a surge in downloads when conflicts arise,
but these back doors, everybody gets in then. Everyone's able to get in.
Yeah. A back door is not something you can control. Once there's a door, anyone can walk through it, right? So this is the magical thinking that we talked a lot about when we were pushing
back on this bill, right? You want a golden key, I think, as James Comey said with the Apple showdown, you want a magical wand, you want a secret portal that only you have
the spell to open. Well, that doesn't exist. That's a fairy tale. What does exist is a critical
vulnerability in the only core systems we have to guarantee confidentiality and cybersecurity of communications. And if you
undermine those, if you open that door for everyone, it means that the technology we have
for that type of security no longer exists, no longer matters, right? So it's serious.
How much does the recent arrest of Telegram founder and CEO Pavel Durov impact the debate?
Clearly, Telegram's known as a cesspool. You know, my son was like,
it's for sex and drugs, mom, just so you know. It's often named in one breath with Signal because
the company talks about privacy and encryption in the larger sense. But for those who don't remember,
Durov was arrested in connection with distributing child sex abuse material and drugs, money
laundering, working with organized crime, is accused of failing to allow authorized law
enforcement interception. Basically, he didn't
give investigators access on the app. You're not a social media company, but talk about the
difference and how that has affected you all, because you're adjacent to him, of course.
Yeah. I mean, I think the discourse is exactly the right way to frame this. The impact of the arrests, the talk and the kind of information,
hyperbole, the way this sort of became a martyr story, and the lack of really concrete information,
which never helps, meant that there were a lot of questions, right? And I remember being like,
wait, what happened? What are the charges? Sort of sorting through the French legal system.
But ultimately, you know, it doesn't affect us, right? We're not
a social media app. You can't go viral on Signal. You can't broadcast to millions of people. It
doesn't have sort of encounter features. It's a very different thing. And for people who don't
understand, these are groups that they create on WhatsApp or on Pavel Durov's platform, Telegram.
And millions and millions of people.
And then they have a kind of "what's happening near me" feature, with geolocation.
So there's all sorts of things happening there that mean that the legal and regulatory
thresholds and duties they face are wildly different from Signal.
They're a social media company.
They broadcast things to millions of people.
They are constitutively not private and not very secure. We've designed Signal
so we are an interpersonal communications app. We intentionally do not add channels or features
where you can go viral. We intentionally steer clear of those kind of duties because you cannot do those duties. You can't meet those obligations
while being robust in our privacy and security guarantees. That's just full stop.
Right. But there is that idea, the concern that total encryption doesn't help the good guys,
it aids and abets bad actors. I think that's the bigger worry about CSAM. This is child
sexual abuse materials. Every week, we get a question
from an outside expert. Let's listen to this one, and I'd love you to answer it. Hi, Meredith. This
is Renee DiResta, author of Invisible Rulers and previously the technical research manager at the
Stanford Internet Observatory. My question for you is, as AI makes it easier to generate synthetic,
non-consensual intimate imagery and CSAM, how specifically should
platforms and governments respond to the production and dissemination of this harmful content?
Is it possible to implement effective measures against these abuses without infringing on
privacy and free expression? So what are your thoughts on this? One of the things that's
important is there are significant and justifiable concerns in this area, right? In certain areas, drugs,
child sexual abuse, et cetera. Then how do you protect against it?
Yeah. I mean, absolutely. This is a very serious area, right? And that's one of the reasons it has
been so, let's say, effective in floating a lot of these anti-encryption proposals because it takes the air
out of the room, frankly. Like a lot of people have experience with this, sadly, and it is
extremely difficult and extremely emotionally engaging. So, you know, I think we need to take
the issue itself seriously first, right? How do we protect children full stop? And then begin to look at what are the
slate of options that we have? Where is the problem coming from? Are we funding social services?
Are we ensuring that there are infrastructures in place where when a child reports that something
bad is happening, that a priest or a teacher or their uncle is involved in something horrifying, how do we take care of that child
and protect them? Why is Prince Andrew walking around in a country that is fixated on encryption
as the culprit here, right? And this doesn't, this is not saying that platforms don't have
responsibility here, but it is saying that I think when you look at the facts here, when you look at the number of people in different countries' law enforcements who are actually dedicated to reviewing this material and tracing it down, I can't say those numbers publicly because they were given to me privately, but we're talking about tens of people.
In one case, we're talking about
two people total. So a lot of times we're talking about, you know, the issue is a haystack full of
needles and not enough people to categorize the needles. We're talking about resources there. We
don't have basic trust and safety tooling available to startups. So there are many places to invest
in actually tackling this, both online, right? You know, go after payment processors. A lot of
what's happening is sort of sextortion, and that's a node there. You know, there are reporting
infrastructures for public platforms and social media. There are all sorts of research on this.
Attack the business model, right?
All of those are options on the table. To me, what leads to my distrust of a lot of the remedy space is that with all of that being obvious, with a lack of investment in social
services, with the culture we have where children are often not believed here,
still encryption is manufactured as the cause of this problem when there's very, very little evidence that private communication
has any causal role in this issue.
Right.
But of course, I think Durov flouted not helping.
Yeah, but he also doesn't run a private communications app, right?
Like none of that was private.
It was just a flex of like, yeah, we're not going to help. Right. So that's a social media platform just saying no. And I think there was a, you know, how do they say it? A fuck around and find out moment. But that had nothing to do with encryption, and it's weird how there's sort of a transitive property by which encryption
becomes the problem to be solved in every case, even when the evidence doesn't support that.
Well, it's sort of a brew. It's like throwing a hammer at a piano to make music or something.
So in the U.S., we're seeing a lot of states passing bills requiring age verification, one of the proposed solutions, and restricting social media apps and access for minors. Florida, South Dakota, Oklahoma, to name a few. There's an argument that a lot of these bills are being used as a smokescreen.
You've called it surveillance wine in accountability bottles.
Talk a little bit about these ideas of restricting young people and then what you meant by surveillance wine in accountability bottles.
Yeah.
Well, I mean, look, I don't think restricting young people ever works. As a young person, I always figured it out faster than my parents. This is a
paradigm where I am, I have very low hopes and I have even lower hopes when I see the folks at the
tip of the sphere of this movement, which are frankly often large biometric and age verification
companies like Yoti who are selling the remedy, right? So if we pass a bill that requires age verification to get
into websites, none of these platforms are going to do that themselves. They're going to contract
with a vendor or a third party who will bolt on age verification software and run that for them
because, you know, that's a liability shield, and you don't want to build what you can lease or borrow.
And then we get into, you know, a situation where age verification is a mass surveillance
regime that is, you know, similar to tracking people's content and habits online, right?
You can't know that someone's a child without knowing who is also an adult to be, you know,
clear about that. And so we begin to
legislate a tracking and monitoring system that, one, won't really work based on all the evidence
to date. And two is attacking the problem at the level of restriction, not at the level of
platform business models, right? And this is where we get into surveillance wine in accountability bottles,
which is really like you and I lived through this.
We recognize that there is something really wrong
with the big tech business model, right?
That accountability is needed.
And we saw in the mid 2010s
that there was a real call for this.
And what came out of that were some good ideas. And
then like this, I think some bad ideas wrapped in accountability, right? So instead of, you know,
going after the surveillance supported, you know, advertising, social media, business model,
cutting off the streams of data, perhaps, you know, implementing a very strong federal privacy law in the US that would undermine that model, take a bunch of money
off the table, but, you know, clean up a lot of the global harms, we're looking at bolting more
surveillance and monitoring onto the side of it. So it's giving the government and NGOs and whoever
else a piece of that monitoring, instead of reducing the monitoring itself.
And so I think it's, you know, how do we tune these regulations and how do we, you know, how do we find the political boldness to actually go up against these business models and those who are profiting from them instead of sort of, you know, try to make our name as someone who did go up against them, but actually propose remedies that don't go up against them. And I guess that's the age old question of, you know, how do we find real
bold political leaders? Are you worried about the impact of the outcome of these laws? Say here in
the United States, we have this election, a potential autocrat who would love surveillance,
although I don't think he'd understand it at this point. I am. I am. I, you know, I would say I am, I'm a bit of a political outlier, in that I'm concerned with centralized power wherever it
exists, whether that's in large tech corporations or in governments. I don't think handing more
tools to governments and then imagining we live in a counterfactual world in which those
governments will always be run by wise and benevolent adults is correct. And I think a lot
of people, you know, who have been pushing for accountability often live in that world.
And I also am, you know, frankly, I think a lot about what the collateral consequences could be
of a very bad law in the U.S. that affects the big tech companies that control the world's
information and infrastructural resources, right? You have three companies based in the U.S. with 70, 7-0% of the global cloud
market. You have five information platforms, social media platforms, four of which, the biggest
four are jurisdictioned in the U.S., which at this moment control most of the world's information environment. So that's a lot of power to be homed in one jurisdiction, particularly given the kind
of volatility we're seeing and the way that just people in general, as we move through
generation after generation who are kind of native to tech and kind of understand these
things, are beginning to recognize just how much power and control is housed in these
companies.
I think that recognition is seeping into the bedrock of popular consciousness.
And, you know, I want to reduce this toxic business model.
I want to create an ecosystem that is way less concentrated before someone with malicious intent gets their hands on that throttle.
We'll be back in a minute.
Fox Creative.
This is advertiser content from Virgin Atlantic.
Hey, Kara. It's Scott. Remember me, the guy, the Tina Fey to your Alec Baldwin, who sort of rejuvenated your career. Anyways, I'm in the lounge at Heathrow.
I'm at Heathrow, the Virgin Lounge,
the Virgin Atlantic Clubhouse Lounge, and I'm about
to have the chicken tikka
masala. I love it here.
You should check it out. It's where the cool kids
hang out. Anyways, hope you're all safe travels.
Scott, frankly, it's a miracle
that Virgin Atlantic let you into the clubhouse and their incredible business class, but I guess they did.
Tell me how it was. So, Kara, I'm an original gangster when it comes to Virgin. I've been
flying Virgin for 20 plus years, and I do the same thing, and they get it right every time.
They always have the Financial Times for me, and I order the chicken tikka masala.
And that is my virgin experience.
If it ain't broke, don't fix it.
And your drink was?
What is your drink?
Well, I used to drink a Bloody Mary
or a beer in the clubhouse.
I don't drink alcohol when I travel anymore,
so I just do mineral water.
But they have this kind of cool cocktail
that's like a lemongrass or some sort of cool margarita thing
and I get a virgin one.
What is your pre-flight routine?
What is your actual, besides your chicken tikka masala,
the Virgin Clubhouse?
My pre-flight routine is,
well, I always do the same thing in the morning
when I travel, I try and work out.
I take the dogs for a walk
and I always make time for the clubhouse
because I do enjoy the Virgin Clubhouse at Heathrow.
So check out virginatlantic.com for your next trip and see the world differently.
Certain amenities are only available in selected cabins and aircraft.
Support for this show comes from Miro.
While most CEOs believe innovation is the lifeblood of the future, only a few feel their teams excel at making innovative ideas actually happen. The problem is once teams move from discovery to ideation to
product development, outdated process management tools, context switching, team alignment, and
constant updates massively slow the process. Now you can take a big step to solving these problems
with the innovation workspace from Miro. Miro is a visual collaboration platform that can make sure
your team's members' voices are heard. You can make use of a variety of helpful features that
let your team share issues, express ideas, and solve problems together. And you can save a ton
of time summarizing everything by using their AI tools, which synthesize key themes in just seconds.
With Miro, you can innovate faster and feel stronger as a team. Whether you work in innovation, product design, engineering, UX, agile, or IT,
bring your teams to Miro's revolutionary innovation workspace and be faster from idea to outcome.
Go to Miro.com to find out how.
That's M-I-R-O dot com.
Fox Creative.
This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more
like crime syndicates than individual con artists. And they're making bank. Last year,
scammers made off with more than $10 billion. It's mind-blowing to see the kind of infrastructure
that's been built to facilitate scamming at scale.
There are hundreds, if not thousands, of scam centers all around the world.
These are very savvy business people. These are organized criminal rings. And so once we
understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them.
But Ian says one of our best defenses is simple.
We need to talk to each other.
We need to have those awkward conversations around what do you do if you have text messages you don't recognize?
What do you do if you start getting asked to send information that's more sensitive?
Even my own father fell victim to a, thank goodness, a smaller dollar scam, but he fell victim.
And we have these conversations all the time.
So we are all at risk and we all need to work together to protect each other.
Learn more about how to protect yourself at vox.com slash zelle.
And when using digital payment platforms, remember to only send money to people you know and trust.
Last time we spoke, AI was all we talked about. Things have changed dramatically, but you were warning back then about the surveillance economy and power consolidation.
A Cassandra, I would say. I think you inspired me a lot to start really talking about it and pointing it out over and over and over again.
So how are you feeling about AI now
and how it's related to this new AI economy,
this idea of surveillance capitalism?
Because these systems are going to get,
you're saying we should stop it now.
Is it even possible
given the consolidation of power in tech?
Yeah.
Well, I am, you know, look,
I am a consummate optimist.
I wouldn't be able to get up and do this if I didn't believe that change was always possible.
I think we are, we're in a frothy, hypey moment.
And I do see the AI market wobbling.
I see the definition of what AI is sort of wafty right now.
And I see a real struggle by the incumbents to try to keep that market going
and maintain their advantage. And so I can explain a little bit why I see that, right?
And maybe we'll just start with what AI is. This deep learning paradigm, which is, you know,
all the transformer models, chat GPT, all of this is still deep learning, right? We haven't moved
into some sort of escape velocity for a new form. It's actually pretty old. The algorithms are from the late 1980s. Now there's been sort of
moves that improve them, but nonetheless, it's pretty old. What is new is the massive amounts
of data available. And this is data that's collected, created, that people are enticed
to deposit by these large surveillance platforms, and the massive amounts of
computational infrastructure, which was basically created in order to sort of, you know, support
this business model. And then in the early 2010s, they were like, oh, you know what, these old
algorithms, machine learning algorithms, but we're going to call them AI because it's flashier,
do new and interesting things, you know, improve their performance when we match them with our
massive amounts of data, when we match them with the computational power. Yeah. So you,
right now, for people that don't know, most of AI technology is held or financed by one
of the big names, Microsoft, Google, Amazon, Apple, or Meta, X less so. Most of the AI chips,
the GPUs are controlled by NVIDIA, which you've called a chip
monopoly. So what you're essentially saying is they've assembled all the parts, right? That's
really what's happened. They've got the data they didn't have before. They've got the compute power
and it's all in the hands of people that can afford it. There's also the idea they've been
pushing for a while that bigger is better. You know, they're always like, we need to be, I've
heard it from Mark. I've heard it from all of them,
is we need to be this big in order to fight China.
That's usually the boogeyman they use,
which is a boogeyman, let's be clear.
So we spoke with Mustafa Suleyman a few weeks ago
who said that even $1.3 billion from Microsoft
isn't enough to make inflection AI successful.
So he took the whole ship to his funders.
We're seeing valuations in AI that are insane, $157 billion
for a startup, OpenAI, and the money coming from just a few sources. You said the market is wobbly,
but it doesn't feel wobbly. It feels like it's the consolidation of power again. So I'd love
you to talk about what that means. I know, I think what is wobbly here
is that there isn't a clear market fit for this, right? We have this round trip investment. So
it's, you know, the big companies, you know, I think it's 70% of series A in AI startups were
coming from the big infrastructure providers, and it was often in the form of credits to use their
infrastructure. So we're, you know, we're talking about something, something really muddy there,
but it's not an organic startup ecosystem, right? And the path to market. So if you want to monetize
this, it still goes through these companies, you're either selling access to a cloud API by
Azure, Google Cloud, whatever, or you are monetizing it by integrating it into your
massive surveillance platform, you know, a la meta, and, you know, kind of using it to sell ads.
Let me rewrite this email for you.
Exactly. Which, you know, no, thank you. The email was one word, and it was fine.
Yeah, that's what I always say.
And so I think it's, you know, I think there still hasn't, if we're, you know, we're talking
about billions, hundreds of, you know, trillions of dollars.
We're talking about capital to the moon.
We're talking about the capital no one else can reach.
And then we have like a bot that messes up our email or Target spending a huge amount
of money for their company to develop this chat bot to help employees that was immediately
roasted by everyone because it was so wrong.
It was so bad. It was so annoying. Or, you know, at Upwork Research just published a survey that
said 77% of the people from, you know, executives through rank and file employees who they interviewed
said AI made their work messier, not better. Right. So like when the rubber meets the road
on the actual business model, we're still struggling to figure out like, what does this do that's worth hundreds of billions of dollars?
Let me push back on that. You could say, I heard that early internet, what do I need this for?
You couldn't have imagined an Uber when apps happened. You just couldn't have, like nobody
could. And eventually they're making money now, like a ton of money, but still a lot of these
companies you wouldn't have imagined then. So I think we're sort of in the stupid assistant phase, but it's not going to stay there necessarily. Maybe you think differently. I
don't. I think it will improve and become better and show what it's used for. I think we're going
to see a vast culling of the market because we simply don't need many, many big, big bloated models.
That are the same, LLMs.
That are the same and that are very resource intensive. I think we also need to be super
careful about how we're measuring better. And this gets into benchmarking and evaluation.
I just published a paper with a couple of co-authors looking at this bigger-is-better paradigm, and actually, you know, you see that smaller, more purpose-built models, with better-curated, domain-specific data, often perform better in real life. Radiology, or detecting certain cancer cells, like a lot of health applications. So I think, you know, I'm not saying throw the baby out with the bathwater, that data isn't useful for anything. But I think this particular type of like massively
bloated model that, you know, it's not going to stop hallucinating, we are bolting kind of
relational databases onto the side of these, you know, probabilistic systems trying to kind of
stabilize them so that they're not as embarrassingly
wrong on main like they are in search in Google right now. But nonetheless, that's not a solution
to the core problem, that they don't have information augured in facts.
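For readers who want the mechanics: what she's describing is the retrieval-augmented pattern, where a system fetches stored facts and stuffs them into the model's prompt before generation. Here is a minimal sketch of that idea, assuming a toy in-memory corpus, a naive keyword-overlap retriever, and a hypothetical call_llm() stub standing in for any real model API.

```python
# Minimal sketch of retrieval-augmented generation, the "bolt a database
# onto the probabilistic system" move described above. The corpus, the
# retriever, and call_llm() are illustrative stand-ins, not any vendor API.

FACTS = [
    "Signal provides end-to-end encryption for messages and calls.",
    "The Signal protocol is also used by WhatsApp and Facebook Messenger.",
    "Signal is run by a nonprofit, the Signal Foundation.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical model call; a real system would hit an LLM API here."""
    return f"[answer generated from a {len(prompt)}-character grounded prompt]"

def answer(query: str) -> str:
    # Stuff the retrieved facts into the prompt so the model's output is
    # anchored to something checkable before it generates.
    context = "\n".join(retrieve(query, FACTS))
    return call_llm(f"Answer using only these facts:\n{context}\n\nQ: {query}")

print(answer("Who runs Signal, and what encryption does it use?"))
```

The caveat she raises survives the sketch: retrieval narrows what the model sees, but the generation step stays probabilistic, which is why she calls it stabilization rather than a fix.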
So you're saying not useful, like the same candy bar with different wrappers, the same shitty candy bar with different wrappers. What do you imagine would be useful? These smaller models, as you noted in your paper, these LLMs that are really specifically useful, for example.
I mean, that's a bit of a tricky question, because it's hard to answer when the claims being made around AI's utility by the marketing are that it's useful for everything, right? I think we need to really break it down: what would be useful in education, right? And this is where I'll point to some of the AI Now work looking at industrial policy. It's like, is AI even the thing that's useful there? Or what are the climate costs, the opportunity costs? Do we need school lunches or a chatbot? And I want the freedom to answer that question before I have to take AI as a given and ask how to make it more useful. Because there are places where data analysis is super, super important and useful. But this is not a general everything tool.
This is a product being produced by big tech, which is sort of making more use of the derivatives of this already toxic business model, creating more data that is often faulty or, you know, harmful, but nonetheless powerfully affects our lives. And that is being sold as kind of a skeleton key for everything when it isn't actually proven as useful.
Just more for a surveillance economy. As I said, I spoke with historian and philosopher Yuval Harari recently, and that's his nightmare scenario. He's calling for more regulation in AI and everything to slow it down. Now, you were a senior advisor on AI to the FTC. What needs to happen from your perspective?
Well, the market could slow down
or the business model could slow down.
I think, you know, things like the CrowdStrike outage,
which is when, you know,
Microsoft effectively cut corners on quality assurance,
on testing, on monitoring for a very, very critical update
that affected core infrastructure,
like healthcare systems, traffic lights, banks.
And cost, cost money.
Yeah, it cost money for them to do this right.
So they didn't do it right.
And global infrastructure was offline for days.
So the evidence that this sort of concentrated business model is bad is no longer deniable.
So I do think that, combined with the danger of this sort of concentration in one jurisdiction, the concerns about sovereignty that you're seeing across the globe will, I believe,
impel real investment and real focus on building healthier alternatives, you know, like Signal,
as a sort of model for the thing we need. I also think the climate costs are just undeniable.
Undeniable. Well, you know, they're building nuclear plants for that. Three Mile Island is so reopening. I mean, did no one Google Three Mile Island, or did they hallucinate?
Well, to be fair,
fossil fuels are worse than nuclear, just by stats. I mean, fossil fuels are terrible. But
these companies will claim that they are carbon-neutral, but you scratch the surface there. No, they're not. It's nonsense. And you see that they're buying kind of carbon-neutrality certificates and using weird accounting.
Right, right. Yeah.
But let me finish up by asking: the biggest company here, obviously, the one getting the most money and everything else, is OpenAI, which is sort of the quarterback right now of this thing.
Microsoft, as we call them.
Yes, yes, that's right. Well, they don't like that. But you've written about how the term OpenAI was always misleading. Obviously, they're ditching their nonprofit status. I had talked about this several times with many people who really did believe in the mission, I have to say. And I kept saying, there's too much money now. I'm sorry, I don't know who you are, but you're naive at best to think that this amount of money is going to keep this a
nonprofit. You obviously probably took a salary hit when you joined Signal. It's a real nonprofit.
Talk about: is it possible to unwind that mindset? You just said we need many more organizations like Signal, but this is an enormous amount of money. How do you disassemble the mindset there? And how do you get to a mindset more like yours? Because I just don't see anything with this amount of money going any other direction.
Quickly, one aside, Signal pays very well.
So if you're looking for a job, check out our jobs page.
We do try to retain and attract the best talent.
And I think in the OpenAI case, it was never about just the models, right? Like, you need massive compute; you know, it can cost a billion dollars for a training run. You need massive amounts of data. And so you're going to have to either convince a big company to give that to you, or convince someone with billions and billions of dollars to burn it on that. And once they've burned it on that, what do you do with a model? Like, I want to let that hang in the air. What do you do with a model? You can't use it at scale, again, without a path through to market that goes through these
companies. So this is when I talk about AI being a product of
big tech, like that's very, very literal, right? They have the data, they have the compute,
they have the access to market. Either Meta is applying it within Facebook, you know, for services, this email-prompt thing; or OpenAI is advertising it as a kind of dongle for Azure customers who sign up to that; or you have something like Mistral, which is a national champion in France, building open-source large models. But how do they actually get to market? How do they make good to their investors and their business plans? They license it to Microsoft, who then licenses it out through their cloud. So when I talk about 70% of the cloud market being controlled by three companies, we also have to fold that into the AI conversation, and recognize that, you know, AI has not introduced any new software libraries or new hardware. This is all stuff we've had in the past, that we know about, that exists. This is
not novel. What is novel is this massive amount of data and the way that it's being used to sort of
train these statistical models that can then be applied in one way or another.
So what happens then, from your perspective? What happens? I mean, obviously, it was never going to be a nonprofit after the money-raising happened. And they need the money, FYI; they can't not have the money to grow. And they've got everybody on their tail, too, at the same time. What is their fate? What happens to a company like that? I mean, it seems like there's a lot of, like, interpersonal things happening. Like, you know, it's every-company court drama as well. So predicting...
You were around at Google.
I was there even before you. It was like, come on.
It's true, come on. Touché, touché.
Kara, Amazon, come on.
Yeah. The hot mess of Twitter, come on.
Oh, God. Yeah.
So, hot mess aside, I would say they just kind of slowly become an arm of Microsoft. You know, maybe there's an Alphabet-ization of Microsoft, the same way Google kind of spun out different entities as an antitrust prophylactic. But again, there was never a model to be a nonprofit long-term, given their vision, in my view. And I think what you've seen is just a series of whittling away at that until it no longer exists. So, you know, Microsoft, Amazon, Google, those are the three big clouds. That's the gravitational pull into which model creators are going to ultimately get sucked. NVIDIA, maybe.
Is there a model for a nonprofit in AI?
I think so. I mean, I think so. But again, we've got to take this term AI back a little bit. It does not simply mean massive, massive, massive scale, you know, bigger-is-better AI. There are many forms of AI that are smaller, more nimble, more actually useful. And I think there could be a model for AI research that is asking questions that are less useful to the big players, to this bigger-is-better paradigm, and perhaps more useful for smaller uses: training models for things that aren't profitable, like environmental monitoring or civic something-or-other. I think there's a model there. I think, again, though, what I'm seeing is a misunderstanding of that fact, and a misunderstanding of just how capital-intensive this bigger-is-better AI is.
That has governments around the world, who are anxious about sovereignty concerns and, like, want their own, basically throwing money at AI without understanding that that's going back into the same companies. That's not going to ensure sovereignty. So it's like, oh great, you have a $500 million European AI fund.
Well, let me break it to you slowly: that's half a training run.
That's right. That's right.
So what are you doing?
You can't afford it.
Yeah, you can't.
Just regulate them. Just regulate them, the same way with privacy and everything else. So, last question. We spoke five years ago, which seems like light years ago, a million years ago, a trillion years ago. Looking down the road, what do you think the next five years will bring, the most important things for your work, for Signal, for tech in general, if you had to prognosticate?
Yeah, well, what I'm working on, what I'm kind
of obsessed with right now, in addition to just, you know, building and sustaining Signal, which I love,
is how do we find new models to sustain better tech, right? Like, once we've cleared the weeds
of this toxic model, once we've prepared the ground, how do we grow things that are actually
beneficial? How do we create a teeming ecosystem? How do we encourage open tech and, like,
democratic governance, which I
think is a thing we don't talk about enough, frankly, but like, how do we have a part in
deciding what tech is built, who it serves, how we assess it, you know, some of the scrutiny that
Signal receives from its, you know, loving and sometimes belligerent community of, you know,
security researchers and hackers is part of our strength, right? How do we expand that to more people? How do we shift from a monolithic "five platforms control our news" to a much more heterogeneous ecosystem that's a little warmer, a little fuzzier, a little RSS-feed-ready,
so to speak? I think those are problems that aren't new, but there is a real new appetite to actually tackle them, because it's getting too obvious.
When you have Andreessen Horowitz and Y Combinator coming out and saying, like, we're the champion of little tech, we know that the death knell has rung for big tech.
And what we need to do is then, like, define what comes after.
Yeah, absolutely.
All right, Meredith, thank you so much.
I love talking to you.
I should talk to you more often.
I love talking to you, Kara.
Not every five years. And I really appreciate it, because I think people don't understand it. Please, everyone, use Signal. I use it all the time. Thank you. It's free. It's not stealing my stuff. And it's really, it's another moment where I'm like, where is the Signal-of-AI thing?
Amen. Well, I'm Team Kara on that.
Okay. All right.
On with Kara Swisher is produced by Christian Castro-Russell, Kateri Yochum, Jolie Myers, and Megan Burney.
Special thanks to Sheena Ozaki, Kate Gallagher, and Kaylin Lynch.
Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics.
If you're already following the show, you're not drinking the surveillance wine. By the way, it tastes terrible. If not, back into the accountability bottle for you. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.
We'll be back on Monday with more.
Support for this podcast comes from Stripe.
Stripe is a payments and billing platform supporting millions of businesses around the world,
including companies like Uber, BMW, and DoorDash.
Stripe has helped countless startups and established companies alike reach their growth targets,
make progress on their missions, and reach more customers globally.
The platform offers a suite of specialized features and tools to fast-track growth,
like Stripe Billing, which makes it easy to handle subscription-based charges,
invoicing, and all recurring revenue management needs.
You can learn how Stripe helps companies of all sizes make progress at Stripe.com.
That's Stripe.com to learn more.
Stripe. Make progress.