The Prof G Pod with Scott Galloway - Meredith Whittaker on Who Controls Your Data in the Age of AI
Episode Date: March 5, 2026. Meredith Whittaker, president of the Signal Foundation and a leading voice on AI and privacy, joins Scott Galloway to examine the growing tension between artificial intelligence and personal freedom. ... They discuss how Signal actually works, why most messaging apps aren’t as private as they claim, and whether AI agents embedded in operating systems pose new security risks. Algebra of Happiness: a hack for dads. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for the show comes from David Protein.
Who doesn't enjoy a protein bar after a good workout?
Here's a tip, David Protein bars.
All David Protein bars are designed to maximize protein while minimizing calories,
and they say that their bars deliver the highest protein per calorie ratio of any leading bar on the market.
Their David Gold Bar, for instance, delivers 75% calories from protein,
and the David Bronze Bar delivers 53% calories from protein.
Head to Davidprotein.com slash PropG,
where they're offering a special deal for our listeners: buy
four cartons and get your fifth free. You can also use their store locator to find David in stores at a retailer near you.
Support for the show comes from VCX, the public ticker for private tech. The U.S. stock market started history's greatest wave of wealth creation.
From factory workers in Detroit to farmers in Omaha, anyone could own a piece of the great American companies.
But today, our most innovative companies are staying private longer, which means everyday Americans are missing out, until now.
introducing VCX, a public ticker for private tech.
Visit getvcx.com for more info.
That's getvcx.com.
Carefully consider the investment materials before investing,
including objectives, risks, charges, and expenses.
This and other information can be found in the fund's prospectus at getvcx.com.
This is a paid sponsorship.
In communities across Canada,
hourly Amazon employees can grow their skills and their paycheck
by enrolling in free skills training programs
for in-demand fields. Learn more at aboutamazon.ca.
Episode 386 is the area code serving north-central parts of Florida.
1986, Top Gun hit theaters. True story: Tom Cruise is starring in a romantic comedy about body positivity.
He and his co-star both gained 300 pounds for the roles. The name of the film?
Missionary Impossible. Just give it a second. Welcome to the 386th episode of the
Prop G Pod. What's happening? In today's episode, we speak
with Meredith Whittaker, the president of the Signal Foundation and a leading voice on AI policy.
I first came across Meredith at South by Southwest. She was on a panel. I never, I was bored,
and I walked in. I almost never listened to panels. And I thought, who is this? Who is this
strange, dark-haired woman speaking all sorts of truth and logic about AI? And it ends up,
she runs the app, and I had lunch with her, and she struck me as really intelligent. And I have
been much more concerned about, for the first time, I don't know if I'm getting older,
I'm much more concerned about my own privacy, worried that at some point all of my AI queries will be made public, right?
Is that my prostate?
Question mark, expecting AI to answer.
I'm pretty sure every ailment I have is because of an enlarged prostate.
I'm convinced everything starts with the prostate.
Anyways, I don't know how I got here.
Anyway, she's an incredibly insightful, intelligent person, and I would argue probably the most well-liked CEO
in tech right now, which isn't saying a lot.
Very impressive, very intelligent, and Signal is sort of trying to, or is, I think, carving
itself out as sort of the clean, well-lit part of the Internet.
And I'm fascinated with the trade-off between privacy and utility, and we'll speak more
about that.
Anyways, here's our conversation with Meredith Whittaker.
Meredith, where does this podcast find you?
I'm in New York City.
In New York.
I thought you were, I thought you lived in Europe.
I'm in Europe a lot. I go between Paris and New York. We're a small org. We're spread across a lot of jurisdictions.
There you go. So let's bust right into it. I want to start with the basics. Signal has been in the news a lot this year. And we'll get to that in a moment. We know it's widely used by journalists, public officials, and people are especially concerned about privacy. But on a practical level, how does Signal actually work and what makes it different from other messaging apps?
On a practical level, Signal is the most widely used actually private communications platform.
We go out of our way to collect as close to no data as possible.
And that's really what sets us apart because we exist in an ecosystem where, for better or for worse, in one way or another, most of the time you make money in tech by collecting and monetizing data.
So you collect data about the users of your platform and then you sell access to different
types of users based on that data to advertisers or you collect data and you train your AI model
with it, et cetera, et cetera, et cetera.
That's kind of the economic engine of tech since the 90s and, you know, maybe before.
Signal is obsessed with maintaining the human right to communicate privately.
And we have built an alternative communications platform that does just that.
We end up rewriting core pieces of the proverbial stack to enable us to do what is normal,
to provide a basic and easily usable messaging platform in a way that does not collect your data
and thus does not put us in a position of being forced to turn it over if we get a subpoena,
of having a breach expose your most intimate information, of violating the compacts that we make
with the people who rely on us. So that's it, in a nutshell. We're also open source. And open source
matters here because that means you don't have to trust me. You don't have to like me.
You can actually verify that, yeah, the thing that she or anyone says it does is what it does
because we can scrutinize the code. We can prove it.
Well, I think you're the most likable CEO in tech, which isn't saying a lot.
But, yeah, well, the bar, the bar is where it is. I'll take it.
So the term, I meant that, the term encrypted is a loaded term.
Can you talk about the biggest misconception about encryption and messaging apps?
I mean, I think it's a little bit like, you know, the way skin care ingredients or, like, I don't know, gold or something gets invoked, right?
We can say, you know, both of these have encryption in them, but one has 10% encryption, or encryption is
only applied to 10% of the data, whereas another is fully encrypted. And so if you look at, say,
WhatsApp and Signal, WhatsApp uses Signal's encryption protocol. And this is the gold standard for
encrypted messaging. It was released in 2013, has stood the test of time, really advanced the field of
privacy preserving technology when it was introduced. That's licensed by WhatsApp, but WhatsApp only
applies it to one layer of the WhatsApp layer cake, so to speak. They use it to encrypt the content
of your messages. So if I'm texting you, like, you know, hey Scott, where are we going to meet at
South by Southwest? WhatsApp would not be able to see that. But WhatsApp does not encrypt
intimate metadata. And metadata is a fussy little term, but it's, you know, it's actually pretty
revealing data. It's who you text. It's who's in your contact list. It's your profile photo.
It's when you started texting someone, your therapist, your oncologist, your FBI cutout,
whoever it is, that's very revealing data.
And then, of course, we're not owned by meta, which means that, you know, there isn't a bunch
of Facebook and Instagram data.
You could then join that intimate metadata with to make profiles, et cetera, et cetera, et cetera.
So Signal is, you know, encrypted up and down the stack.
We encrypt the contents of your messages, but we also encrypt your profile
photo, your contact list, who is texting whom, who is messaging with whom, who's in your groups.
So you can look at our website, signal.org slash big brother, and we work to unseal any subpoena
that we are forced to comply with. And what you see there is a long list of requests for data.
That's normal. That's what governments assume an average messenger is able to give up,
and then you see what we're actually able to give up, which is very close to nothing.
We can confirm that a phone number has an account.
We can confirm a handful of other things.
But we have gone out of our way to be unalloyed, you know, 100% encrypted to use that
slightly metaphorically.
But, you know, you get the gist.
We really take that extremely seriously.
We're not just sprinkling encryption dust on top of a, you know, ultimately non-private
infrastructure.
So I wanted to talk about something that gets no news. I don't know if you've heard of AI, but it's in the news recently.
AI.
AI, right.
There you go.
What is that?
What is that?
It's a movie by Steven Spielberg.
So, AI agents specifically. You've been pretty vocal about the dangers that agentic AI poses to our privacy and security.
Can you elaborate on the risks here and what are most people not aware of?
Yeah.
The risks are the flip side
of the promises, really. And we actually started talking about this about a year ago when we were
seeing things like Microsoft Recall creep into the product updates, in this case, for Windows,
and really recognizing that Signal exists at the application layer, right, which means that we have
to trust the operating system. We build on top of iOS or Android or Windows, and we have to trust
that the operating system will be a reliable set of tools that we as developers can leverage to
ensure that signal works for the people who rely on us and that, you know, the users of the device
can rely on. And our primary concern is that as agents get integrated into the operating systems
by these, you know, AI companies, the people who maintain the operating system, and as they get
leverage beyond that in ways that are giving them very pervasive access to your life,
it undermines our ability at Signal to guarantee the type of privacy that we guarantee at the
application layer. And I'll give, you know, that may sound a little bit arcane to people who don't,
you know, live in these waters with me. But just a quick example, you know, if you have an agent
running on your operating system or sort of given deep access to your file system and other
sort of data on your device in order to do something like, you know, plan a work dinner.
Well, the agent will need access to your calendar. It will need access to your browser,
perhaps to look for a restaurant, maybe your credit card or your EA's credit card in order to
book that work dinner. And in a scenario where you are, as you all should be, using Signal,
it will also need access to your Signal and your Signal contacts to text them and coordinate
dates and times. All of that becomes a pretty frightening set of data access points and
ultimately a security vulnerability because instead of having to break our gold standard encryption
algorithm, which has been tested and mathematically proven to be secure, you just have to
leverage the type of access that these pervasive agents are being given into your applications,
into your intimate data in ways that are, you know, are just from a security architecture
perspective, very, very insecure. And I'll note that right now, almost every agent that we're seeing
kind of in the mainstream is relying on large LLMs, models that are too big to run on your
device, which means that, you know, ultimately most of this data would need to be sent off your
device to a cloud server to be processed for inference, you know,
creating another security issue and potentially, you know, placing data in the hands of whatever
company is running that agent. So that's, you know, our concern is really coming from a
privacy integrity standpoint and from a concern for the people who rely on Signal, raised by the
introduction of these tools, which can be useful for some things but, you know, also pose this
pretty significant risk that isn't getting the kind of attention I believe it should.
We'll be right back after a quick break.
Support for the show comes from BetterHelp.
This International Women's Day,
BetterHelp wants to remind all the mothers,
grandmothers, aunts, and sisters of the world
that you deserve to take care of yourself
as much as you take care of people around you.
If you want help getting connected with a therapist,
you could try BetterHelp.
Better Help does the initial matching work
so you can focus on your therapy goals.
All you need to do is fill out a short questionnaire
that helps identify your needs and preferences
and BetterHelp matches you with a licensed therapist
operating under a strict code of conduct.
With 12-plus years of experience,
BetterHelp says they have an industry leading match fulfillment rate.
And if you aren't happy with your match,
you can switch to a different therapist
at any time from their tailored recommendations.
With over 30,000 therapists,
BetterHelp is the world's largest online therapy platform,
having served over 6 million people globally,
and out of over 1.7 million client reviews,
BetterHelp's average rating is a 4.5 out of five for a live session.
Your emotional well-being matters. Find support and feel lighter in therapy. Sign up and get 10% off at betterhelp.com slash prop G. That's betterhelp.com slash prop G.
Support for the show comes from LinkedIn. It's a shame when the best B2B marketing gets wasted on the wrong audience. Like, imagine running an ad for cataract surgery on Saturday morning cartoons or running a promo for this show on a video about Roblox or something. No offense to our Gen Alpha
listeners, but that would be a waste of anyone's ad budget. So when you want to reach the right
professionals, you can use LinkedIn ads. LinkedIn has grown to a network of over one billion
professionals and 130 million decision makers according to their data. That's where it stands
apart from other ad buys. You can target your buyers by job title, industry, company, role, seniority,
skills, company revenue, all so you can stop wasting budget on the wrong audience. That's why LinkedIn
Ads boasts one of the highest B2B returns on ad spend of all online ad networks. Seriously, all of them.
Spend $250 on your first campaign on LinkedIn ads and get a free $250 credit for the next one.
Just go to LinkedIn.com slash Scott. That's LinkedIn.com slash Scott. Terms and conditions apply.
Support for the show comes from Square. Think about your favorite small business, that coffee shop on your
block, or the salon you've been going to for years, or that dog walker you always pass who seems
to be having the time of her life. Square makes it simple
to run a small business no matter what it is. Whether it's one brick and mortar,
a pop-up, mobile service, or franchises, Square can help track sales, manage inventory, and access
reports in real time. Square even has built-in tools like loyalty and marketing to help you connect
with customers and reward them for showing up again. Square supports every major payment method,
including tap-to-pay and offers instant access to your earnings through Square checking.
A lot of the local businesses I go to seem to be using Square, which makes me feel good about the
brand. With Square, you get all the tools to run your business.
with none of the contracts or complexity.
And why wait?
Right now, you can get up to $200 off Square hardware
at square.com slash go slash prop G.
That's S-Q-U-A-R-E dot com slash go slash prop G.
Run your business smarter with Square.
Get started today.
What do you think the risks are,
if you're using Claude or ChatGPT,
what do you think, realistically,
the risks are over the next five or ten years
that your data's compromised by some bad actor, or the LLMs themselves will have access to your private information and be able to link it to your identity, I mean, the John Oliver segment on finding people's data on the dark web, including their search history.
Should people be really, should people be cognizant of what they query these LLMs?
I mean, I think they absolutely should be cognizant. Any query to an LLM that isn't sort of a specialized private inference setup, you know, kind of what Moxie, who founded Signal, is doing with Confer, or other similar setups. But, you know, any general query to ChatGPT is sending that data to servers that are controlled by OpenAI, Microsoft servers. They retain that data,
they could leak that data. We know that when presented with a valid subpoena, they will turn
that data over in a world in which norms and laws and definitions of criminality shifts from,
you know, one year to the next perhaps. It's good to be cognizant of where that data could go and
what it could do in terms of, you know, marking you as one or another type of person. Not to mention,
I think, you know, with the introduction of advertising and, you know, increased targeting,
at least a plan to introduce advertising in ChatGPT, I think there are also issues about what that can
reveal about you, you know, in more mundane context as a consumer or as a job seeker and, you know,
the kind of advantages or disadvantages that might accrue, given the power to define you
based on data that is, you know, in the context of ChatGPT, often extremely intimate.
You've actually referred to AI as a marketing term. What did you mean by that?
Yeah. I mean, I think it's, I'm being flatly literal, although I think that's sometimes taken to
mean that I'm saying AI doesn't exist or it's not serious, which is, you know, marketing is, in fact,
very serious. You know, what I'm talking about there is just sort of denaturalizing AI as a technical term of art.
If you look back at the term AI, you know, it was created in, you know,
1956, 1957 by John McCarthy, who hosted the Dartmouth conference.
Those of us in, you know, in this world will be familiar with that as kind of an iconic
conference where a number of the quote unquote fathers of AI gathered to try to create
intelligent life, you know, in the form of a machine over the course of a summer.
and John McCarthy created the term, in his own words in subsequent interviews, because he wanted to exclude Norbert Wiener from the convening. They didn't get along. Norbert Wiener had created the term cybernetics and the field of cybernetics, and McCarthy classically did not want to be a disciple. He wanted to be the father of his own thing, a very common academic urge. And he also wanted grant money. And he thought artificial intelligence was a kind of flashy term with a cool valence that would get some
of that Cold War era ARPA money flowing to his lab, which it did. It funded the conference.
But over the history of the term, it's nearly 70 years now, we've seen it applied to
very disparate technical modalities. So McCarthy was invested in symbolic systems, which would
look much more like decision trees, and was actually deeply skeptical of the neural approach,
which predated the term by about 10 years and was, you know, McCulloch and Pitts;
neural networks stem from that.
So what we see is a term that was invented primarily to describe an approach that's out of favor
today has now been applied, you know, because of the specific resources available and
the recognition that, you know, neural networks can do interesting things with data and compute
and the type of business models we have.
The term AI is now applied to an approach that was not actually under its umbrella when
McCarthy invented it. And why is any of this important, beyond it just being very interesting
if you're a nerd? I think it's important because it allows us to step back and actually
recognize that this is not a term of art, and what we are describing are very particular
approaches that have their own historical and political economic formulations, and that we can
actually sort of have a bit more agency to define what we mean by intelligence to choose the
technologies that we are leveraging to produce intelligent seeming outputs and to be a bit more
critical and actually regain a bit more of our own agency in relationship to mythologies that
kind of naturalize these systems as just a sort of linear arc of technological and human progress.
There's been a lot of, I don't know if it's warnings or catastrophizing from AI executives who
said, I'm scared of what I've built and I need to retreat to,
you know, the Cotswolds and write poetry.
I'm curious what you think the threat level is of AI.
And if it's been overstated,
understated, and where you see the biggest threats
and how we as a populace respond to it.
I think there are threats,
particularly if we integrate these probabilistic,
generative and decision-making systems
into high-stakes domains, you know, nuclear,
defense, energy, and put them to tasks that they are ultimately not secure or suited for.
So you can have reward hacking, you can have emergent behavior. All of those things are real.
Those aren't things that are simply going to sort of spring out of nowhere or, you know,
Athena from Zeus's head. And suddenly we have ephemeral technologies running around without our
control or delegation in some sense, right? Those would need to be choices that are made by people
and decision makers. And I do think, you know, in some sense, some of the fear has a bit of
escape velocity from material reality and almost sounds a bit like a religious fervor rather than
kind of a, you know, technically grounded concern about the rush to integrate technologies
that are not fit for purpose and could have collateral consequences, which is where I land on it.
My primary fear, however, is the combination of the mythology of artificial intelligence, which is really framing these technologies as superior to human judgment, superior to human capabilities, which on some axis measured in some ways, yeah, sure, they do math much quicker.
So does a calculator, they can, you know, produce things more efficiently, et cetera, et cetera.
Yes, but ultimately these are very centralized technologies that rely on huge amounts of data,
data that is captured by an industry invested in what I'd call the surveillance business model,
which is effectively, you know, collect all the data you can via your platforms and then,
you know, train an AI model, sell it to advertisers, et cetera.
And, you know, so it requires huge amounts of data.
It requires huge amounts of infrastructure.
and I don't have to go into the wild CAPEX spending,
the kind of, you know,
NVIDIA's picks and shovels,
the monopoly on chips, and
the build out of data centers.
And it requires huge distribution networks,
which often get left out of that calculus.
But basically, if you're going to make money
or you're going to integrate this,
you need either a large social media
or marketplace platform,
or you need a cloud business model,
or you need to latch onto
one somehow. So all of that redounds to an industry that is highly concentrated in the hands of
effectively the winners of the last tech boom, the platforms who were able to establish, you know,
data pipelines and massive amounts of data, large platforms, cloud infrastructures, global reach
that were sort of cemented via network effects and economies of scale, à la, you know, classic communications
network monopolies. And so my concern,
with all of that is that what we're looking at is a significant concentration of power
over infrastructure and decision-making that is then rebranded as a kind of godhead
intelligence in ways that are making us less critical than we need to be about how that power
is being leveraged. Well, let's drill down to specifics. What do you think, and nobody knows,
but what is your best guess with respect to AI and employment? And let's call
it the West, in Europe and the U.S., over the short and the medium term. I've seen TikToks of
economists and AI executives saying or AI thought leaders saying employment, you know, we're going to
see a massive destruction in the labor force. But the flip side is so far it hasn't really
manifested. There's some, there's, you could potentially interpret that the job market is
softening, but youth unemployment is about where it has been historically at average.
AI and the labor force, what is your best guess?
Yeah, and I got to be careful here.
This isn't really my lane, and I'm seeing a lot of competing headlines.
It does seem clear to me from some conversations that, at least in part, AI has been a handy pretext for job cuts.
Boards and media and shareholders will accept that, you know, hey, we cut X number of people
because this is part of our AI strategy. That doesn't look like weakening demand. That looks like
innovation. And so I do think there's some AI wrapping of downsizing that is happening. And I've
heard that firsthand from some folks. I do think, you know, we are seeing at least a sort of degradation
of work and, you know, degradation, meaning, you know, there are people who maybe used to have a job as a
copywriter or a translator. And, you know, we've seen this with translation, who are now just kind of
editing AI output, right? And it's a less secure, maybe less fun, less rewarding job. But you,
it's not removing the human. It's sort of removing the agency and power that a human would have
in that job under different circumstances.
I am really impressed with what I've seen. You know, the kind of new round of
coding agents are very, very capable. And, you know, I'm definitely seeing a lot of
excitement across my industry there. It's, you know, you can't deny that these are very useful and
produce output that is, you know, pretty commensurate with like a junior programmer.
But again, you still need a senior programmer. You still need somebody
who understands how it works to review the code and maintain it.
And so even though you're seeing advances in capabilities,
one thing that isn't being talked about enough is, you know,
there are few things that many engineers I've worked with hate more
than having to maintain someone else's shitty code.
So you still need somebody who has an understanding of the systems level,
who's bumped their head up against problems and understands them, you know,
and can fix them, who understands how one,
you know, pull request or kind of tranche of code might interact with another. And that's the
place where I'm not only concerned that the kind of rapid outsourcing of some of the development
work to agents, you know, I think some of that could backfire in, you know, a kind of technical
debt that is very difficult to pay down if what we're looking at is systems that are sort of, you know,
built by agents or coding AI and not fully understood by the people who, you know, the kind of
skeleton crew who are left to maintain them. So those are, you know, those, those are some reflections.
I don't think I have a clear answer because I think this is not just a question of AI.
It's also, you know, where is their market will? You know, how is AI going to be used as a pretext?
And then what happens when we do have the first significant
issue with the reliance on these AI systems? And I, you know, I say that as I recognize that,
you know, Amazon went down apparently because of an error made by an AI agent that they integrated.
So, you know, we have already seen a kind of, you know, first wave of critical issues that are caused
by a kind of dependence without human oversight.
We'll be right back.
Support for the show comes from VCX, the public ticker for private tech.
For generations, American companies have moved the world forward through their ingenuity and determination.
And for generations, everyday Americans could be part of that journey through perhaps the greatest innovation of all, the U.S. stock market.
It didn't matter whether you were a factory worker in Detroit or a farmer in Omaha.
Anyone could own a piece of the great American companies.
But now, that's changed.
Today, our most innovative companies are staying private rather than going public.
The result is that everyday Americans are excluded from investing and getting left further behind,
while a select few reap all the benefits.
Until now.
Introducing VCX, the public ticker for private tech.
VCX by Fundrise gives everyone the opportunity to invest in the next generation of innovation,
including the companies leading the AI revolution, space exploration, defense tech, and more.
Visit getvcx.com for more info.
That's getvcx.com.
Carefully consider the investment materials before investing, including objectives,
risks, charges, and expenses.
This and other information can be found in the funds prospectus at getvcx.com.
This is a paid sponsorship.
Hey guys, it's me, Tuffy, the host of Tuffy Talks.
On this week's episode, we're doing a state of the union, but more state of pop culture,
2026, from Ozempic to tradwives.
Spooky.
And why the center of pop culture is in Utah now.
We do a deep dive on Khloe and Lamar.
We talk Hilary Duff.
You know what, find us everywhere at Tuffy Talks.
Subscribe on YouTube and all the podcast platforms and Instagram and TikTok so you can share
with your other work bestie and hopefully everyone you've ever met.
We're back with more from Meredith Whittaker.
So there's a tension between privacy and encryption and I think the potential weaponization
of encryption and privacy by bad actors.
And I would imagine, by virtue of your position,
I think I understand where you would land on this, or at least have a bias or a view on it.
In London and New York, they say you can't go more than 12 or 15 feet outside without being on camera somewhere.
And to a certain extent, I like that.
I think I like it more in Britain because I'm less worried about it being weaponized by the administration here.
But if you look at the decline in crime rates, I think some of it is because of technology and then court-ordered, mandated,
if you will, violations of privacy, if there's enough evidence that this person is a bad actor,
and then we need to violate people's privacy to understand, you know, if something bad is about to happen.
You must be given this question all the time.
That tension, where do you land on that tension?
And is there, is there ever a reason for why people's privacy should be violated in the context of larger safety concerns?
I want to back this up to the fundamentals of encryption.
And when we're talking about, you know, Signal,
what we are talking about when we talk about end-to-end encryption
and the way that it works is a technology that either works for everyone or it works for no one.
If you undermine the math of encryption, if you put a backdoor in there,
you have a, you know, not actually random, random number generator, that means you could
basically perturb the encryption, decrypt it.
That's not just a backdoor.
That's not just an error that only the good guys can avail themselves of.
That is effectively breaking encryption for everyone.
So it really is a scenario where the people you hate the most have to be able to use it to exercise that right, so to speak,
if the people you love the most are going to have access to it as well.
It's all eggs in one basket, and that's at the level of math.
I'm not answering the question, is it ever good or appropriate to undermine, you know,
that is not actually what I'm talking about.
What I'm talking about is a world in which over the last 30 years,
we are surveilled within an inch of our lives.
You said every 12 feet we're recorded, great.
And then you made the comment,
you know, I'm more comfortable with that under one regime than I am under another regime.
Well, that becomes the issue. You're not really in control of, you know, how the sands of that regime
shift. I mean, maybe you vote or, you know, whatever it is. But that data is indelible. Those systems
are pervasive. Meta is adding facial recognition to their Ray-Ban glasses, right? Where is that data going?
Which governments will access that? And so it is interesting to me that in a golden age of surveillance, when, unprecedented in human history,
our actions, our preferences, our communities, you know, who we date, who we talk to, what we do
for a living, how we spend our money, are surveilled and logged at a level of detail,
unimaginable to the Stasi, that we are still pinpointing a tiny refuge where the fundamental
right to private communication that is recognized as such that is necessary for a full and
joyful and intellectually rigorous life that has intimacy and the ability to exercise our opinions
and dissent and blow the whistle and do journalism and all of that, that one right is presented as problematic and as the barrier between stopping crime and allowing it to run rampant in a world
as a problematic and as the barrier between stopping crime and allowing it to run rampant in a world
where, you know, the issue is more often than not, you know, finding the needle in the haystack
of noise and the haystack of data, not getting access to an encrypted channel. So my stance on that
is very, very clear, but I also think the framing of the problem needs to be shifted a little bit.
Yeah, my Pivot co-host said something that really struck me. She said that people have the right to have secrets, and it really struck me. And the smartest people I know that also understand tech all use Signal. And I realize how promiscuous and careless I've been with my own
data. And I thought, what I do is just not that interesting. And most recently, when I hear
the Trump administration talking about assembling lists of people who are vocal, you know, pretty
outspoken against the Trump administration, I'm like, wow, I spoke too soon. And you have advised the government, I know you worked with Lina Khan. If you were to advise the administration or the FTC, maybe it's under a different administration,
on what would be the most thoughtful regulation as it relates to privacy, encryption, or AI,
kind of magic wand time.
What do you think is most needed from our governments right now?
Yeah.
This is a bit of a tricky question for me because I've been not in the policy bubble for a while.
I do think, you know, something as simple as meaningful consent, and by which I do not mean just a bunch of click-wrap and cookie banners, around whether or not a given company or institution gets to create data about us at all, not what they do with our data, but whether they have the right to tell my story, to know about me, would go a long way. Of course, that would wreck the entire logic of the tech business model. But I do think the fundamental thing that needs to be done,
however the regulatory paintbrush would paint this, is to question and then take back the authority
to define who we are from a handful of companies that have kind of naturalized their right
to sort us and order us and tell us our place in the world. That's a bit of a philosophical answer, but I do think that's the core issue: the authority
we've given tech companies who create data for advertisers to sort and order our world and tell
our stories for us. I'm just curious what you thought of the Ring Super Bowl ad.
My God. My God. I mean, I didn't expect it to become so flagrant so quickly, I guess.
And seeing that, you know, I was like, who are they selling this to?
Is it people who would install this, or is it the government contracts who recognize exactly
what this is selling and want to sign up for data access?
Like, I certainly wasn't the core demographic it was aimed at, but it also felt like there was a, you know, a tertiary market that was actually being addressed that wasn't, you know, eager doorbell owners. When you look at the landscape, well, I'm going to ask a market question. And my guess is you're
going to tell me it's not your lane, but I just want to remind you that's never stopped us from
opining on all manner of topics. We have no domain expertise. And in the markets,
there's been a meltdown around SaaS companies from a valuation standpoint. You essentially work for or run a software company, that's the way I would think of it. I don't know if you call it that, but at the end of the day, I would imagine it's code. I work for a software company.
There you go. So there's been an enormous destruction of value among SaaS companies believing that AI is going to come in and kick the crap out of these guys or make them, you know, obsolete. When you see that happening, do you have any initial thoughts on the viability of some of these software companies who are, you know, some of them lost 40, 60, 70 percent of their value?
I think ultimately when you're providing enterprise software, particularly to highly regulated industry,
it needs to be interoperable with legacy equipment.
Even if you don't like that legacy equipment,
there's a superstructure there, there's a foundation.
It needs to work with the data that you have,
even if that data isn't great,
that data needs to be clean and fungible.
You need to be able to account
for the different determinations that are made,
depending on what kind of model you hook in there.
That might not be possible,
particularly in financial services
and other industries with high compliance burdens.
You need to have often human oversight that is personally liable or accountable for different decisions.
So I do anticipate that AI in some form or fashion will be integrated, will have impacts here.
But fundamentally, this is not a magic wand, right?
And there's a lot of legacy infrastructure, regulatory burdens, and labor processes and modes of work that need to be accounted for.
And I don't see SaaS software going away anytime soon.
And I don't see AI doing anything to really erase those other considerations, right?
I think predictions of its demise are a bit self-interested and far premature.
And last question, Meredith.
It strikes me, at least when I see young people's actual behavior, that there's some consumer dissonance in that people talk a big game about privacy,
and then I see people basically telling the world where they are, what they are doing,
and who they're doing it with.
And it strikes me that even if you put a thin layer, if Uber would ever get hacked,
a thin layer of AI on top could basically connote who's having affairs,
terminating pregnancies, HIV status.
It just wouldn't be that difficult to just know everything.
about someone with just their Uber data. Do you see the same dissonance I see in that as consumers
have just decided to trade off massive privacy for utility? And do you have a message for them?
I do see some of that. I would shift it a little bit. I think ultimately humans want to be loved
and they want to be included. They don't, you know, even when we talk about signal and privacy,
we're not talking about a vacuum. It's not Meredith by myself with none of my thoughts escaping the anechoic chamber of my, you know, meaning-making. I am using Signal to share
what I think with other people because I am a human and communication maps to human relationships
and the desire to be connected and to be included, et cetera, et cetera. So I think we're in a
world where, you know, ultimately we will opt as human beings. I use these services too because I want to
go to the party. I want to see what people are doing. You know, I got to get somewhere. I want to
participate in life while I'm living, as do I think most people, right? So the ways to do that are things
we're going to do. And I don't think they represent actual choices about where we feel comfortable or
uncomfortable with our data, whatever our data might be, right? We don't really have access to it. We know we
don't want someone to share our mean DMs with our friend. We know we don't want, you know, our health data leaked to our insurance in ways that would harm us. But that's also a place where we don't
have that much control. And in the meantime, we've got to get to work. We want to see what our friend
posted. We want to be part of the popular people. And the ways of doing that have been slowly,
you know, we can say colonized or sort of, you know, instrumented by
these tech services that advertise convenience, advertise connection, advertise ease,
and then below the surface have sort of hollowed out our privacy and our ability to, you know, define ourselves and our place in the world. So I would say what we're seeing is a natural human
inclination. You know, we use what we can to be together to connect with each other to participate
in life. Those services have themselves kind of, I think in some sense, betrayed us structurally.
and that doesn't mean we don't care about privacy.
That means a meaningful choice around what it would take to care about privacy has not really been given to us.
We do see the number of people using Signal going up and up and up.
We do see people's understanding of why privacy is so important, I think, becoming more acute and more, you know, felt at a personal level when they see people's social media posts being used at the border, when they see these collateral consequences that are coming home. I think the issue then
is, okay, what do we do about it? And you can't say, well, the choice is never to communicate
with your friends, because that's simply unrealistic and anti-human. But you should use Signal.
I'm not exaggerating, and this is my final plug. The smartest people I know, and the people who understand technology the most, who have domain expertise around technology, that Venn overlap, they all use Signal. It's almost like a badge of, like, I get it. Anyways, my favorite quote from this is people want to be loved and included. Meredith Whittaker is the
president of the Signal Foundation and a leading voice on AI policy. She co-founded the AI Now Institute
at NYU, advised FTC Chair Lina Khan, and was named one of Time's 100 most influential people
in AI. She joins us from New York. Meredith, very much appreciate your time and your good work.
I meant what I said. You're the bright, well-lit, clean part of the AI technology bookstore.
Let's put Meredith Whittaker in charge. Let's just, let's consolidate all of it. I'll go raise $11 trillion, buy all of these companies, and put you in charge. Deal?
It's a deal, Scott. It's a deal. Yeah, looking forward to working with you. And thank you
for having me on. Algebra of Happiness. A hack for young dads. It is striking to me how selfish kids can be. I mean, I feel like I'm essentially a credit card that occasionally gets to watch a football match with them. And let me just give you a
hack. If you're a dad like me who thinks that you're going to have all these hallmark moments with your
child, you'll have some of those. But for the most part, it's going to be mostly a one-way relationship.
And I'm not saying it's not amazing, but the hack I have implemented, and it has helped me a lot, is that my favorite title, I've been a founder, you know, all these cool titles, CEO or whatever, my favorite title in the world is dad.
And that is every time my kids call me or say, oh, hi, Dad, or they call out Dad, or, you know, I love you, dad.
Every time I hear the word dad, I'm like one of those dogs that hears the word walk. And I've trained myself to just love that term. It's the most important term in my life. And nothing is more dope for me than when these two things that kind of look, smell, and feel like me call me dad.
And what I've decided, and I started believing and training myself to believe five years ago,
is that when my kids are awful, you know, they give me a little hard time or they come home and expectorate their emotions or they're unreasonable or they slam their
door. My kids, and what you'll find is generally speaking, your kids don't behave that way outside of
the house. If you're like 90% of us, you're going to find that outside of the house, your kids are
pretty reasonable, pretty good citizens, pretty polite. And at home, they're fucking terrorists,
assessing the household for vulnerabilities so they can strike when you're at your weakest. Now,
why do they do that? Because they're processing, they're emoting, and they know what they can do
with you because they know you are there unconditionally. They know you love them unconditionally. Why?
Because you're their dad. And so what I have done, and it's been a real unlock for me,
is that when my kids say something inconsiderate or even mean to me or aren't respectful or aren't
kind, I'm not saying I let them roll right over me, but I assume they're saying one thing to me. They're saying, Dad. This episode was produced by Jennifer Sanchez and Laura Jenaire.
Cammy Rieke is our social producer, Bianca Rosario Ramirez, is our video editor. And Drew Burroughs is
our technical director. Thank you for listening to the PropG pod from PropG Media.
