The Journal. - What Happens to Privacy in the Age of AI?

Episode Date: January 18, 2024

The AI industry is controlled by only a few powerful companies. Is that concentration of power dangerous? WSJ's Sam Schechner interviews Meredith Whittaker, president of the encrypted messaging app Signal, at a live event at the World Economic Forum in Davos, Switzerland.

Further Reading and Watching:
- The Importance of Privacy in the Age of AI
- Altman and Nadella Talk AI at Davos

Further Listening:
- Artificial: The OpenAI Story
- Why an AI Pioneer Is Worried

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 This week, some of the most powerful people from around the globe are in Davos, Switzerland, for the World Economic Forum. It's an annual gathering where political leaders and business moguls discuss the biggest issues in government, culture, and industry, like artificial intelligence. One of the tech leaders featured at this year's conference was Meredith Whittaker. She's the president of the non-profit encrypted messaging app Signal. And she spoke with our colleague Sam Schechner at a live Wall Street Journal event at Davos. Thank you for being here, Meredith, president of Signal. And I would be remiss if I didn't thank you also for making the app that woke me up with multiple alerts this morning. Happy to do it. Whittaker has spent decades in the tech trenches.
Starting point is 00:00:52 She worked at Google for 13 years. And she co-founded a research center called the AI Now Institute. In both those roles, she's come to be known as an outspoken critic of the tech industry. A lot of what you're saying is that there is a lot of power being concentrated in a few companies. We're here at the kind of apex of the concentration of power up in the Swiss mountains. How is that message being received? I don't know. Everyone's so polite.
Starting point is 00:01:30 But Whittaker had plenty to say to her fellow tech leaders about privacy, profit motives, and the future of AI. Welcome to The Journal, our show about money, business, and power. I'm Jessica Mendoza. It's Thursday, January 18th. Coming up on the show, the president of Signal on the dark side of AI. Meredith Whittaker has for a long time been a big advocate of data privacy. The concept is at the center of the Signal app, which uses end-to-end encryption.
Starting point is 00:02:40 That kind of encryption prevents anyone besides the sender and recipient from reading their messages. And Signal says that it doesn't collect or keep any sensitive information. I mean, I'm so proud to work at Signal. It's like the only tech job I can imagine. Loving, at this point, you know, we're competing alongside the bigs, but we do it differently. And one of the ways we do it differently is we go out of our way to collect no data. We have created new approaches to building messaging so that we can know as little about you, who you're talking to, who your friends are, et cetera, as possible. We think of ourselves as sort of extending the norm of private communication that has existed for hundreds of thousands of years into the digital sphere. So, you know, offering an actual option where we're not bolted onto a surveillance company.
Starting point is 00:03:36 We are a non-profit, so we're not tempted by eroding a little privacy for, you know, a handful of bottom lines, right? We are really laser focused on the mission of maintaining a meaningful way to communicate intimately and privately with the people we care about in a world where that is decreasingly possible. Is that, I mean, there is debate about the role that encryption can play. Encryption is used by bad actors too. It's used to trade, you know, child sexual abuse material. There's calls to say that, well, maybe there should be some data. The UK has been pushing that. How do you square the mission with the reality of the uses of this technology? Well, I'm going to give a kind of analogy. So imagine you and I were investigators and we're
Starting point is 00:04:21 investigating a crime and we go into a criminal's office and we find a box of pens and I pick up a pen and we go to the pen company and we say, y'all, I need to know everything that has been written with this pen. Right now, this is really important. Criminals use this pen. I need you to tell me everything that is written in this pen. This is really serious. And of course, the person at the pen company would be like, are you high? This is not how a pen works. You know, what are you doing? You know, and of course, we could make a pen that works that way. We could put a gyroscope in it. You could make, you know, have to charge it. It, you know, connects to some cloud. It's like a security vulnerability nightmare. But we don't make pens that way. And we don't expect pens to work that way. So I kind of want to flip that. It's like, why do we expect every mechanism
Starting point is 00:05:06 for communication, every experience in our lives to suddenly be a window into, you know, a window that allows, you know, surveillance actors to gain access to our communications, our activities, our preferences. Whitaker says that big advertising-supported tech platforms rely heavily on surveillance. They make money by showing targeted ads, which are based on user data that they've gathered over time. And Whitaker's worried about how that business model and framework may affect the development of AI.
Starting point is 00:05:40 And so what are the ingredients that are new to AI? You know, why is a term that is over 70 years old suddenly everywhere on Main Street when it wasn't there, you know, two years ago, right? Like, why now is a term that has been applied to many different types of technology over many years the newest thing that is going to save the world? It's bigger than the printing press. It's probably bigger than oil. Like, perhaps it's the new wheel, right? I think part of that is because there's a recognition that these large tech giants can use the marketing around AI. The idea that massive amounts of data and the capital to purchase and run, you know, rare and expensive computational systems is the same thing as intelligence. That you can create
Starting point is 00:06:26 a system that can be effectively inoculated throughout our social and economic institutions to make decisions and predictions that will shape and inform and optimize our lives. And so I think we really need to look at the root of what we're talking about when we talk about AI, because we are talking about what I've called a surveillance derivative. It is based on the back of this business model, and it is exacerbating this business model, and with it, entrenching significantly concentrated power among a handful of companies, primarily based in the US. So what we're looking at is an AI industry that is concentrated around computational resources and the sort of ability and permission to collect and create as much data
Starting point is 00:07:13 as possible, because we're in a kind of bigger is better paradigm. And the bigger it gets, the fewer actors can actually develop and use this. One could say, I mean, that's capitalism. You know, these companies are successful. What's wrong with, you know, several large companies controlling this kind of power? I guess that is a question we need to answer here. Yeah, I mean, that's why I'm asking it. I mean, you know, we are talking about a corporate form
Starting point is 00:07:40 in a sort of, you know, whatever stage of market capitalism we're in that has become the infrastructure for everything, right? There's an extraordinary amount of power that we are handing over to these companies that are governed by quarterly reports that they need to make to their board that always show growth or profit or the perspective of both based on one or another
Starting point is 00:08:07 strategy, right? Those are the objective functions. Now, the objective function is not the social good. Coming up, why Whitaker thinks it'll be hard to dismantle the concentration of power in tech, including when it comes to AI. friends a world away? You can use your travel credit. Squeezing every drop out of the last day? How about a 4 p.m. late checkout? Just need a nice place to settle in? Enjoy your room upgrade. Wherever you go, we'll go together. That's the powerful backing of American Express. Visit amex.ca slash yamx. Benefits vary by card. Terms apply. Your teen requested a ride, but this time, not from you. It's through their Uber Teen account.
Starting point is 00:09:08 It's an Uber account that allows your teen to request a ride under your supervision with live trip tracking and highly rated drivers. Add your teen to your Uber account today. Meredith Whitaker says there are a few big barriers to taking power away from a handful of big companies, especially when it comes to AI. For one thing, training AI models often relies on human feedback. And what that means is human labor. feedback. And what that means is human labor. You can think of like human buffer zones around toxic, offensive, or, you know, potentially harmful content. And these machines are fed examples of like, here is a vile racist screed. Here is one of the most misogynistic things you've seen. You know, these are the worst things on the internet and the internet is pretty bad. And then humans have to sort of, you know, look at the prompt
Starting point is 00:10:07 and sort of effectively calibrate these machines into some shape so that they kind of mimic polite liberal business discourse such that they can be commercialized. And that's a very, very expensive and labor-intensive process. Even though that process is done by very low-paid workers generally in the majority world, it nonetheless is very costly because it requires so many workers and so much time.
Starting point is 00:10:32 You mentioned reinforcement learning with human feedback, the people in AI, jobs. I mean, my job is largely to do research, do writing. People sit in front of computers all day generating copy, some of which is maybe loosely related to any actual facts. What is the impact, you think, on white-collar work from these tools? I don't know, and I don't trust the estimates.
Starting point is 00:11:02 I think there's a real incentive for those who could you know those in the c-suite if we have a story that is believable enough that ai will replace you that's a really good way to suppress sort of worker wages or unionization or simply degrade working conditions and i would point to the writers guild Guild of America as sort of a front line of some of this, right? So you have, you know, the threat of introducing AI into the writer's room in Hollywood. You have a strike that kind of codified around that set of issues and who gets to determine where AI fits in this and kind of rigidly structured, you know, long-time unionized industry. And what role do writers have in making that decision? And, you know, long-time unionized industry, and what role do writers have in making
Starting point is 00:11:47 that decision? And, you know, what sort of played out through that was an understanding that it doesn't necessarily matter if AI can replace your work, right? What it can do is serve as a pretext to degrade your work. So you're no longer a writer with health insurance and a full-time job and sort of a writer's room or whatever it is. In the Hollywood case, you are hypothetically an AI editor and you're hired as a contractor to fix a script at the end. And of course, the chat GPT can't write a compelling script, but you do something to it, it gets on production line, and suddenly you've created another category that is much less expensive, even though there's a huge amount of labor involved, right? The labor is just displaced into something that is then sort of justified as
Starting point is 00:12:36 less valuable. Several big tech leaders have recently raised alarms about AI. Early last year, a number of them signed an open letter calling for a pause in the training of AI systems. They said, quote, powerful AI systems should be developed only once we're confident that their effects will be positive and their risks will be manageable. Despite her own concerns, Whitaker did not participate.
Starting point is 00:13:02 She told The Guardian that the letter was disingenuous because many of the people who signed it have the power to pause that training if they wanted to. She added, quote, they could unplug the data centers. They could whistleblow. These are some of the most powerful people when it comes to having believers to actually change this.
Starting point is 00:13:20 Okay, I mean, but there is like enormous promise here, right? I mean, there's, what positives do you see? Is it just we're all clouds, or is there something that we're going to get from this technology that's going to make the world a little bit better? I mean, look, there is a lot of potential, right? We can create, like, halcyon visions of hypothetical futures that look really good. But I'm not actually looking at the technology, if you notice the through line here. Like, I'm looking at who gets to control it, who gets to decide how it's used and for whom, and what sort of incentives are actually driving that. And that's where I want change.
Starting point is 00:13:55 Towards the end of the interview, Whitaker took a question from an audience member, Sir Tim Berners-Lee, the inventor of the World Wide Web. He pointed out that there is a bright side to this technology and that consumers still have the ability to choose apps that meet their needs. You know, I'm the president of Signal, so I definitely know that there are great options out there that people can choose. But I do need to push back a little gently on the idea that we can just make more righteous choices, and that is an anecdote or a panacea to the kind of issues we are facing because I think a lot of this is actually not at this stage a factor of individual choice.
Starting point is 00:14:40 When I walk down the street in Davos it's very likely that there's a CCTV camera that is scanning my face, right? When I apply for a job, my data is put through a background check. When I, you know, go to school, I will be using likely some sort of G Suite apparatus or maybe meta classroom apps or what have you. So I think a lot of the connective tissue of our social and economic lives are structured through a necessity of using or participating or being subject to these systems that fall well outside the range of personal choice. So while there are choices and everyone should be using signal and everyone should be making the choices they can, I don't think we can assume that that is a sort of solution
Starting point is 00:15:26 to much bigger issues that we are facing. But I absolutely appreciate the ray of sunshine. Well, I have a lot more questions I'd love to ask you, but we are out of time. So everyone, please thank Meredith Whitaker for being here. Thank you. here. That's all for today, Thursday, January 18th.
Starting point is 00:16:07 The Journal is a co-production of Spotify and The Wall Street Journal. If you like our show, follow us wherever you get your podcasts. We're out every weekday afternoon. Thanks for listening. See you tomorrow.
