On with Kara Swisher - Why Fei-Fei Li is Still Hopeful About AI (… and Elon)

Episode Date: October 16, 2023

What are the most immediate, and potentially catastrophic, risks posed by AI? According to pioneering AI researcher Dr. Fei-Fei Li, they include disinformation, polarization, biases, a loss of privacy and job losses that could lead to unrest.  The Stanford computer scientist is a fierce advocate for the humane development of artificial intelligence and for increased diversity in the field. She and Kara discuss AI’s problems and possibilities, the need for increased public sector investment and her brief stint on Twitter’s board.  Questions or comments? Email us at on@voxmedia.com or find us on social media. We’re on Instagram/Threads as @karaswisher and @nayeemaraza Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 Support for this show comes from Constant Contact. If you struggle just to get your customers to notice you, Constant Contact has what you need to grab their attention. Constant Contact's award-winning marketing platform offers all the automation, integration, and reporting tools that get your marketing running seamlessly, all backed by their expert live customer support. It's time to get going and growing with Constant Contact today.
Starting point is 00:00:28 Ready, set, grow. Go to ConstantContact.ca and start your free trial today. Go to ConstantContact.ca for your free trial. ConstantContact.ca Support for this podcast comes from Anthropic. It's not always easy to harness the power and potential of AI. For all the talk around its revolutionary potential, a lot of AI systems feel like they're designed for specific tasks,
Starting point is 00:00:57 performed by a select few. Well, Clawed by Anthropic is AI for everyone. The latest model, Clawude 3.5 Sonnet, offers groundbreaking intelligence at an everyday price. Claude Sonnet can generate code, help with writing, and reason through hard problems better than any model before. You can discover how Claude can transform your business at anthropic.com slash plod.
Starting point is 00:01:38 Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher, and I'm Kara Swisher. And I'm Naima Raza. Today, we're talking about artificial intelligence, and our guest is Fei-Fei Li, the famed Stanford AI professor and co-director of the Human-Centered AI Lab. She's also the author of a new book called The Worlds I See, which is kind of part personal memoir and part AI history, and it'll be out in November. Yeah, she's one of the top people in AI and one of the earliest to work on it. She worked at Google, she worked at Stanford. She has a huge reputation in the sector and one of the few
Starting point is 00:02:12 women in it, actually, at the very top. AI has been a cornerstone of our coverage on this podcast. We've had on Sam Altman from OpenAI, Reid Hoffman and Mustafa Silliman from Inflection AI, Yusuf Mehdi from Microsoft, and also Tristan Harris, who's not on the business side anymore, an ex-Googler who's now turned into a researcher who's kind of ringing the bell on AI. How would you stack rank them from bullishness to bearishness? I think they just have different opinions. I think most of them are, you know, obviously Tristan is on the more worried side and extremely worried, and others are more positive. I'd say Sam is probably the most positive of those. But they're all aware of the problems, and I think probably
Starting point is 00:02:54 Dr. Fei-Fei Li is in the middle, and I think that's the place to be. She also has been a contributor early to this technology. She created ImageNet, which is a data set that played a groundbreaking role in AI's development. That's right. She was trying to recognize photos and trying to figure out how photos are recognized by AI. It was one of the very early times because it was, you know, people can notice, even a three-year-old knows what a cat is. And it took a while to train a computer to understand that. But in some ways, she bears some similarities to Jeffrey Hinton. They both worked at Google. They overlapped early in their career. Jeffrey Hinton, of course, is the AI scientist who's raised huge alarm about AI earlier this year, stepped down from Google, and is very bearish on the technology.
Starting point is 00:03:36 And she is not quite there. Well, it's not the same kind of bears as Jeffrey is doing more of the end of humanity, as many people do. It's sort of doom-scrolling idea. And she is more, we're not focusing on the real problems, which are things like justice, you know, not applied correctly using computer vision and machine learning and things that will affect more people. I don't think she thinks the end of the world is nigh, but it is for a lot of people if AI goes the wrong way in all kinds of small areas. And I think she's right. Yeah, she has a humanistic vision, aspiration for artificial intelligence, and that can have
Starting point is 00:04:10 great outcomes in things like medicine. She holds space for all of that. Since our last AI interview, which is actually in June, so it's been a while since we've covered this. Since then, the EU is in the late stages of their AI regulation, the AI Act, which will be the first major regulation of generative AI. In the U.S., there's a lot of screaming and hand-waving, but baby steps in terms of actual changes. Gavin Newsom signed an executive order in AI in California. The White House has gotten AI companies and researchers together. And then there was just this Politico article, which examined kind of a billionaire-backed nexus of political influence in Washington. Yeah, that one's more in the end
Starting point is 00:04:51 of Humanity Group. It's backed by Dustin Moskovich and his wife, Carrie Tuna. He was a Facebook executive. He made all his money there. And they do a lot of founder. Yeah. And so it's, you know, they're trying very much to get people worried about things. And they have undue influence because they put fellows at a lot of the offices. so they're going to have an influence. That was the tenor of the article. You know, I just think who these legislators are hearing from matters a great deal, and what they should do is cast a very wide net. They tend to cast a very not such a wide net and talk to the companies more than they talk to all kinds of people affected. And so, but, you know, Dr. Lee has, but, you know,
Starting point is 00:05:25 Dr. Lee has been at the White House and there might be an executive order on this issue. Certainly, you know, waiting for Congress to act is going to be more difficult. The Politico article you shared, it was interesting. There were kind of two major concerns in it. One was that the potential bias that comes from having Moskowitz backing these Horizon Fellows that are all kinds of departments and regulatory arms of government. And Moskowitz, of course, is tied up in open AI and anthropic. So one concern was, hey, there might be some unfair bias towards certain companies. And the other concern was that this is distracting. This focus on artificial intelligence,
Starting point is 00:06:01 given the media attention, given the money around it, is distracting from more pressing tech regulation, things like antitrust or social media algorithms. I guess they can do all of it at once. You think Washington can do all of it? Well, they should. That's their job. You're always saying they have done nothing at all, and you think they could do everything. They have done nothing, but I don't think, I think they can. I just think they aren't. That's different. I think this is not, these are all linked together,
Starting point is 00:06:25 it's systemic, and they need to just get together and figure out a whole range of things. And, you know, you don't want to do this by executive order, obviously, this has to be congressional, it has to be pushed through to the various departments. I mean, every single department has to be part of this. But her real worry is that, is the concentration of power. In this book, she articulates the worry around the concentration of power in this book. She articulates the worry around the concentration of power in the hands of the private sector versus public sector, universities, etc. And she also articulates a worry about the lack of diversity because she is, we just named a bunch of people we've interviewed on AI and she is the first woman we've spoken to.
Starting point is 00:07:00 Yeah, it's the same problem in all of tech. So I don't know what to say. It's just the same thing. And this is really important. She's obviously a big name, and she's someone who's been very influential, but it's dominated by a certain type of person. Homogeneity is a real problem in tech. And she's very different to that. She was born in China. She immigrated to the United States in her teens. And she went to Princeton undergrad, but while most kids were at dining clubs, she was helping her parents at their dry cleaner. She would go back home on the weekends and work in the dry cleaning business that they ran. And so she's had a very different lived experience and was a voice that we were really excited to hear from. Yeah, I'm surprised I
Starting point is 00:07:37 haven't interviewed her over the years. I sort of kick myself because I think she's really someone I have admired for a long time. I went to hear her speak a couple times when I was at grad school and she's fantastic. Yep. Anyways, let's take a quick break and we'll be back with Dr. Fei-Fei Li. Fox Creative. This is advertiser content from Zelle. When you picture an online scammer, what do you see? For the longest time, we have these images of somebody sitting crouched over their computer with a hoodie on, just kind of typing away in the middle of the night. And honestly, that's not what it is anymore.
Starting point is 00:08:28 That's Ian Mitchell, a banker turned fraud fighter. These days, online scams look more like crime syndicates than individual con artists. And they're making bank. Last year, scammers made off with more than $10 billion. It's mind-blowing to see the kind of infrastructure that's been built to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers all around the world. These are very savvy business people. These are organized criminal rings. And so once we understand the magnitude of this problem,
Starting point is 00:09:01 we can protect people better. One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed to discuss what happened to them. But Ian says one of our best defenses is simple. We need to talk to each other. We need to have those awkward conversations around what do you do if you have text messages
Starting point is 00:09:22 you don't recognize? What do you do if you start getting asked to send information that's more sensitive? Even my own father fell victim to a, thank goodness, a smaller dollar scam, but he fell victim and we have these conversations all the time. So we are all at risk and we all need to work together to protect each other. Learn more about how to protect yourself at vox.com slash zelle. And when using digital payment platforms, remember to only send money to people you know and trust. Support for this podcast comes from Anthropic. You already know that AI is transforming the world around us, but lost in all the enthusiasm and excitement is a really important question.
Starting point is 00:10:03 How can AI actually work for you? And where should you even start? Claude from Anthropic may be the answer. Claude is a next generation AI assistant built to help you work more efficiently without sacrificing safety or reliability. Anthropic's latest model, Claude 3.5 Sonnet, can help you organize thoughts, solve tricky problems, analyze data, and more. Whether you're brainstorming alone or working on a team with thousands of people, all at a price that works for just about any use case. If you're trying to crack a problem involving advanced reasoning, need to distill the essence of complex images or graphs, or generate
Starting point is 00:10:41 heaps of secure code, Claude is a great way to save time and money. Plus, you can rest assured Thank you. anthropic.com slash Claude. That's anthropic.com slash Claude. It is on. Thank you for coming. I want to start by contextualizing the moment we're in with AI, which you've been writing about for a long time and working on for a long time. It's a field that's been developing for decades, which people do not realize. It's just speeded up and had more attention lately. Talk about what the elements of it right now, the landscape we're in at this moment. Yeah, first of all, Cara, very happy to be here. It's quite an honor. Even for someone who's been working in this field for decades, this feels like an inflection moment. And I think part of this inflection is really a convergence of public awakening, policy awakening,
Starting point is 00:11:49 and the power of the technology and the business that is impacting. From Hollywood to sci-fi to, it's been in the ether of public consciousness. But I think this time is different, it's real. When anyone, when young kids to elderlies can just type in a window and have a legitimate conversation with a piece of machine,
Starting point is 00:12:20 and that conversation can be almost any topic. Some topics can be deeper or deep, right? Like details of biology or chemistry or, you know, geopolitics or whatever. It really is the entire world recognizing we passed the Turing test. And explain what that is for people who don't know. Of course. A Turing test was proposed by computer scientist Alan Turing where he uses that as a test to symbolize that computers can think like humans to the extent it can also make you believe it is a human behind the curtain if you don't know
Starting point is 00:13:06 it's a piece of a machine. But interestingly, in 2018, you wrote, I worry, however, the enthusiasm for AI is preventing us from reckoning with its looming effects on society. You're one of the first. I paid attention when you wrote this piece. Despite its name, there's nothing artificial about this technology. It was made by humans intended to behave like humans and affect humans. So we want to play a positive role in tomorrow's world. It must be guided by human concerns. And you called it human-centered AI.
Starting point is 00:13:31 Why did you want to call it human-centered AI? I still feel this is going to be a continued uphill battle, but it's a battle worth fighting. It's, like I said, at the center of every technology. Its creation to its application is people.
Starting point is 00:13:47 And because of the name, artificial intelligence, by the way, it's better than its original name, which would be cybernetics. Cybernetics, that's scary. Cyberdyne, that's from Terminator, right? That was the company. But artificial intelligence really, as a term, it gives you a sense of artificialness. So I wanted to actually explicitly call out the humanness of this technology, especially given it's touching on intelligence. It's going to be our collaborator at the most intelligent level. And we cannot afford to lose sight of its impact. That's why I want to call it human-centered. Right. One of the things I do a lot is,
Starting point is 00:14:30 and I wish if you could say for the record, it is not sentient. It is not. I get tired of that. No, it's not sentient. So the idea of us, that it is us and it is not, it is not self-aware is different than sentient,
Starting point is 00:14:42 that it's a human. Explain that for average people because they do begin to think that these are sentient. They're no evidence of it being sentient. If we somehow vaguely define sentientness with awareness and intention and all that, right? Of course, this is pushing philosophy. Even humans in our world don't have a precise definition of consciousness and sentient. don't have a precise definition of consciousness and sentient. But what this technology right now is,
Starting point is 00:15:32 is it's ingesting a huge amount of data that is mostly human-generated, especially in the form of internet language, as well as other digital books and journals. And then it has an ability to learn the patterns of this language and what we call the sequential model to predict when you say the word Kara. It's very likely to predict the next word is Swisher. So it has that capability. I do think some of the thinkers of our time are starting to see the power that is worth mentioning, which is that because it's machines ingesting data, it has far more capability than a human.
Starting point is 00:16:19 Yes, the bigger brain. Yes, it is the bigger brain. And when you have the bigger brain and faster chips to compute in the bigger brain, you are able to predict patterns in a more powerful way. This is why your bigger brain can say chemistry, can say history, can do a lot of things. And that is scary, and it legitimately has its consequences. Certainly, because it can then start to make patterns, as you said, which is the thing you started with, with ImageNet, the idea of patterns.
Starting point is 00:16:48 Talk about ImageNet and what it was and why you went into it. Yeah, well, ImageNet was a project that I began working on with my students back in 2007. It became public in 2009. And the ImageNet moment that the AI history book tend to write about is 2012. international competition of AI algorithms to recognize pictures of objects was won by a group of Canadian computer scientists led by Professor Jeff Hinton. And that was the first time neural network algorithm demonstrated to the world how powerful it is when there is big data and programmed on two GPUs. So that was the moment people call the ImageNet, AlexNet moment or the beginning of deep learning. ImageNet just told you a cat was a cat, a dog was a dog, and it was difficult. You had to tag these. People had to
Starting point is 00:17:58 tag them and you had to make sure people were being honest about tagging them, which also brought in biases of people, which we're going to get to later. But that's what it essentially did, is it just said, cat, cat, cat. People would say that's what it was. Right. I think the meaning of ImageNet is really, it symbolizes now in hindsight, the beginning of big data era. Because before ImageNet, AI research was very logic-based. Because before ImageNet, AI research was very logic-based. It was very, this is one jargon word, Bayesian net. You know, it was intricate math, but it wasn't really working at any scale.
Starting point is 00:18:36 And it definitely was not working at human scale nor internet scale. Yeah, a child could get images faster than a computer. Yeah, a child could get images faster than a computer. when chat GPT transformer technology made the next inflection point last year, 2022, I had to take a deep breath and reflect, my God, big data is still playing a role that even went beyond my dream back then. Yeah, absolutely. But let's talk about Jeffrey Hinton. You have similar roles in moving AI forward,
Starting point is 00:19:26 though you have different contributions and timelines. Were you surprised when he came forward after he left Google to ring the alarm bell on AI? And do you fear his fears were overblown? So I don't know if you're aware of this. I was just at my first public talk with Jeff last week. We've been friends for more than 20 years since I was a graduate student, but it was wonderful to see Jeff and be on stage with him. First of all, I really want to give credit to Jeff because he is not only known as the most accomplished AI research scientist in the world,
Starting point is 00:20:08 he's always intellectually so honest and just really that kind of scientist who says what he believes. And when he sees the power, my understanding is when he sees the power of the transformer language models, he is concerned about human consequences. Of course, he has his own view about whether this is closer to sentient.
Starting point is 00:20:32 And I respect his view, even though I respectfully disagree. Because like we said at the beginning, the definition of consciousness and sentient being is vague. And I happen to have the benefit of half of my PhD was with Dr. Christoph Koch, who is still to this day a forefront researcher in the research of consciousness. So I learned, I didn't learn that much because I was focusing on AI, but the things I've learned from Christoph is that this is a messy word.
Starting point is 00:21:06 This is messy definition. Right, and his focus was on things like killer robots. You have focused more on the small things. And I think you notice a gender difference. Some of the leading people, they are men, and it's all killer robots. And the bigger end of civilization, Elon Musk, end of civilization. But you think the smaller things are much more important and impactful to humans. Well, okay.
Starting point is 00:21:31 So I actually think there are some immediate catastrophic risks. Okay, catastrophic. All right, go ahead. Yes, but it's not existential in the sense that terminators are coming to kill all of us or machine overload. Right. That's next week. Catastrophic in the sense of disinformation for democracy. In the sense of if we don't have good economic policy,
Starting point is 00:21:57 the job changes will impact certain groups of people more than others and might even lead to unrest in our society. Catastrophic in the sense of polarization, and you and I know this very well, whether we're talking about gender or other, you know, the races, and catastrophic in the sense of, you know, bias and privacy and all that. So yes, I do believe there are risks. Right. When you look at some of the risks where they're, and Hinton and others have talked about it, this idea of terminator, that's what they do paint out. I've been at dinner parties where it's always men come up with this scheme, and it's always women who say, actually, it's the justice system. Yeah, actually, it's this,
Starting point is 00:22:42 it's actually this. Talk a little bit about that in that way, because it's harder to raise an alarm that you're raising. I think you're raising an alarm about the technology when it's smaller, even though it's just as devastating in some ways. Yeah, well, Carol, we've both lived this life for many decades now. So I wouldn't call this smaller, honestly.
Starting point is 00:23:04 I just... Yes. I mean, in people's minds, it's easier to think Terminator than it is someone's going to go to jail that shouldn't go to jail, for example. Yes. And that's why I'm calling out for human-centeredness, because if we put human individual well-being as well as community well-being and dignity at the center of this, suddenly it's not smaller. I don't want to diminish that down the road, the technology has even bigger impact, but I think the immediate things are really immediately important. I do think though, Cara, I don't know if you're noticing this, I think policymakers are starting to pay attention. Yes, we're going to get to that in a minute. We are, they are.
Starting point is 00:23:51 But do you yourself feel some responsibility? You know, Hinton has said that, and of course, you've probably seen Oppenheimer, he's like, look what I made. How do you, because you were one of the early people around this. Do you feel responsibility? How does that manifest itself? It manifests from returning from Google to Stanford to start this Human-Centered AI Institute. As you know, I can stay in big tech and probably have more personal gain. But it was a very conscious, soul-searching resulted decision in 2018 that when I see the potential human impact, I feel that our generation, my generation who made this technology, who unleashed this technology,
Starting point is 00:24:41 has a part of responsibility and also probably even a leading responsibility in calling out the human impact. So this is why I started Stanford HAI and has been in the eyes of the public and policy world calling out these important measures. Okay, let's talk about the impact then. The doom scrolling scenario that I just referenced, we've heard a lot about the fears from jobs and misinformation, existential threat to humanity. What are you most concerned with of the immediate ones? One of my current biggest concern is the extreme imbalance asymmetry of lack of public sector investment in this technology. So I don't know if you have heard me saying not a single America university today can train a chat GPT
Starting point is 00:25:34 model. I actually wonder if you combine all the compute resources of all universities in America today, can we train a chat GPT model? Which is where it used to be. This is where it used to be. Exactly. When I was a graduate student, I never drooled over going to a company to do my work. But I think there should be healthy exchange between academia and industry. But right now, the asymmetry is so bad. So now you might ask, so what?
Starting point is 00:26:10 You know, well, so we're going to have a harder time to cure cancer. We're going to have a harder time to understand climate changes. We're going to have a harder time to forecast the societal impact, harder time to forecast the societal impact, whether it's economics or law or gender, race, political situations. All this is happening in think tanks like public sector, universities, and nonprofits. If the resource is really diminished, we're also going to have a harder time to assess what's going on. It's so interesting. I had a conversation with my son who's at University of Michigan last night,
Starting point is 00:26:52 and he's studying this. This is he's in computer science, but he added philosophy to his major, which he thought was critical. But one of the things he said to me, he goes, Mom, what's AI but optimization and efficiency via automation and a way to leverage human goals? It's essentially a more efficient calculator. And he said, so it's just a spin scooter over walking. And what he was saying is it's all being applied to stupid things versus bigger things.
Starting point is 00:27:16 You know what I mean? Like, so if a private company is doing it, it will not do the larger concerns. You've got a smart kid. He is. He goes, it helps you get places faster, but it's an expression of lazy capitalism. It's an expression of lazy capitalism. And I was like, huh, well, I feel good about my money I'm spending there at college. I think there are legitimate commercial use of AI. So whether it's healthcare or-
Starting point is 00:27:41 Yes. No, but he said that will be the goal. That will be the goal versus larger societal problems. Yeah. So even larger societal goal that gets piloted in academia, hopefully some of them will get commercialized. For example, new drugs that's been discovered and climate change solutions. But if we don't invest in public sector, when are we going to have that? And also, on top of that, we need a check and balances. Who's going to assess in an unbiased way what this technology is? Who's going to open the hood and understand what it is? Let's even assume for a second, Cara, that sentient being is what we're creating. Well, you need trusted public sector to understand what this is. There's a scene in your book where you describe running into some founders of OpenAI soon after its launch in 2015. One of them says, everyone doing research in AI should seriously question their role in academia going forward.
Starting point is 00:28:39 Which founder said that? I don't remember. I actually can't tell Larry and Sergey apart anymore. Anyway, but you write that you don't disagree with that quote, the future of AI would be written by those with corporate resources. And you, of course, were at Google for an amount of time. In 2015, OpenAI was still a nonprofit. What do you think of their decision to move to a capped profit model?
Starting point is 00:29:01 And does it, as Elon Musk complained, feel like bait and switch or doesn't really matter? I'm not in the heads of the founders, but I think it didn't surprise me. Part of it is, how do you sustain a capital intensive operation where you're going after this kind of models? I don't know how philanthropy can carry that. So it didn't surprise me. So what is the role then of the researchers in a corporate-led world that they can do just as well? You're not in a corporation, for example.
Starting point is 00:29:35 Right now I'm not. But I was in Google, and I'm still part of Silicon Valley ecosystem. First of all, innovation is great. I do believe in innovation. But no matter where you are, corporate, startup, universities, you're also humans first. There's also human responsibilities. Everything we do in life needs to have a framework. And I do believe we need to have ethical, human-centered framework. If you're in the corporate world building social media,
Starting point is 00:30:05 what is the framework you believe in ensuring the health and mental health of our children? But Dr. Li, you and I both know they weren't that concerned, or it was their last, it's on the stack ranking list. It was quite down low. And Google, of course, has been accused of censorship of several of its AI ethics researchers. This is why we need a healthy public sector to be watchful, right? Who's going to assess and evaluate this technology in an unbiased way? If it's left to just one player, it's guaranteed going to be biased. Right, and you did work on this at Google, but it hardly matters
Starting point is 00:30:41 because it's what they decide the rules are, and they are unaccountable and unelected and on anything. They just decide what they want. And some of them may be, I had an encounter with the founders of Google about their search dominance and they said, well, we're nice people. And I said, I'm not worried about you. I'm worried about the next person. And then, you know what I mean? I just don't know why you should have this much power. You were also at the center of another Google controversy regarding AI ethics, referring to Project Maven, Google's contract with the Department of Defense using AI to analyze video that could be used in targeting drone strikes. You were running that department.
Starting point is 00:31:12 What did you learn from this backlash at Google? Yeah, so I learned that was around 2018, right? I wasn't part of any of the business decision-making. But I learned that was the beginning of AI's coming of age to the society, and it's messy. A technology this powerful is messy. We cannot just purely look at it from a technology point of view, nor can we just look at it from another one angle. Around that time, there were self-driving car deaths. There were horrific face recognition algorithm bias issues, privacy issues. So it's part of that early awakening, at least for some of us, that we've entered an age that this technology is messy and human values are complex. People come from all walks of life and they see this technology in different lights.
Starting point is 00:32:18 And how do we, especially as technologists, not pretend we know it all. So in that vein, let's get to job displacement. One of Hinton's top fears about AA, he said, it takes away the drudge work, it might take away more than that. You also had written there examples of a trend toward automating the elements of jobs that are repetitive, error-prone, even dangerous. What's left are the creative, intellectual, emotional roles. How do you see this playing out?
Starting point is 00:32:42 What's your worry about job displacement? First of all, I'm not an economist. I want to give credit to my colleagues at the Stanford Digital Economy Lab who are under Stanford HAI and studying this. But here's the thing. This is a big issue. Humanity throughout our civilization has always faced this. As technology disrupts the current ways of jobs and tasks,
Starting point is 00:33:08 you know, a labor market gets disrupted. Sometimes it's for the better. Many times it becomes bloody. And I think the jurors are still out there. What are the sectors most affected, would you say, off the top of your head? What are the sectors most affected, would you say, off the top of your head? Right now, given the latest advances in AI technology, especially built upon language, believe it or not, it's knowledge sectors, knowledge workers, software engineers, which is one of the most coveted jobs in the 21st century is suddenly looking at, you know, a co-pilot, assistants, office assistants, paralegals. Some of this will be empowering.
Starting point is 00:33:55 It's not taking away jobs per se. I've actually, believe it or not, talked to writer friends and artists who are so excited to have this tool. But in the meantime, you know, for example, contact centers is a global job. You know, it is definitely going to face changes. So we need to be very careful. Sort of like what happened with farming or manufacturing. So misinformation. Another recent report found in the past year, at least 16 countries have used generative AI to sow doubt, smear opponents, influence public debate, which they were using with the old internet.
Starting point is 00:34:28 Now they just have it on steroids. How do we deal with that? Because this is another thing that destabilizes societies. That's why I'm worried, Cara. So we learned that the Ukrainian-Russian war now is the first information war. And even there, we've seen quite a bit of disinformation. Like you said, disinformation is an old thing in human society, but this one is being empowered by technology
Starting point is 00:34:54 and it lowers the entry point of anyone using it. So I'm very worried about that. I think we need to look at this from a multi-dimensional way. There's technological solutions, for example, digital authentication. We cannot do this fast enough. I know I've got colleagues at Stanford doing this, but whether you're a company who cares about the contents you produce, as well as academia, we need to get on this as fast as possible. But there has to be laws. There has to be international partnership.
Starting point is 00:35:28 There has to be also general public education, right? About where things come from. Yeah, exactly. And laws cannot do everything. Awareness and education is so important. Yeah, one of the things that someone told me that there's more information on the provenance of a pack of Oreos than there is on information because of the barcodes. Like they can trace where it came from, what it was. And they were like, this is a pack of cookies.
Starting point is 00:35:54 We should be able to do this here. We'll be back in a minute. Do you feel like your leads never lead anywhere? And you're making content that no one sees. And it takes forever to build a campaign? Well, that's why we built HubSpot. It's an AI-powered customer platform that builds campaigns for you. Tells you which leads are worth knowing. And makes writing blogs, creating videos, and posting on social a breeze.
Starting point is 00:36:29 So now, it's easier than ever to be a marketer. Get started at HubSpot.com slash marketers. Do you feel like your leads never lead anywhere, and you're making content that no one sees, and it takes forever to build a campaign? Well, that's why we built HubSpot. It's an AI-powered customer platform that builds campaigns for you, tells you which leads are worth knowing, and makes writing blogs, creating videos, and posting on social a breeze. So now it's easier than ever to be a marketer. Get started at HubSpot.com slash marketers. Let's turn to the positives, starting with healthcare.
Starting point is 00:37:10 You mentioned drug development is obvious, but explain your focus on ambient intelligence and the practical applications of that. So I don't know if you had time to read my book. Yes, I did. One of the thread of my life is taking care of an ailing parent. And because of that, I have, especially as a new immigrant, I have firsthand experience in healthcare from a very non-privileged point of view. We've been lacking health insurance, healthcare insurance for many years. We've
Starting point is 00:37:39 gone through ER, ambulatory, surgical settings, ICUs. And what I learned in healthcare is a couple of things. One, human dignity is the most fragile thing. It doesn't even matter what medicine you're using, what technology you're using. Medical space is so vulnerable for human dignity. And for me, giving back or preserving as much human dignity as possible is one of the most important roles. This is your mom you're talking about. It's my mom's experience. But second thing I learned is labor. America is not having excessive doctors or nurses or caretakers. It's the opposite. And on top of that, they're overworked, fatigued, and there are many situations in healthcare context that the patients are not being taken care of, not because of ill intentions, it's just lack of labor, lack of resource. So ambient intelligence is a way to use smart cameras to serve as extra pairs of eyes for our patients and caretakers.
Starting point is 00:38:56 So to watch over if a patient has fallen out of bed, to have early detection of changes of conditions, to possibly help physical rehab at home for patients, to manage chronic conditions. These ambient sensors, whether it's cameras or microphones or wearables, can continuously discern information and package them into insight and being sent to doctors and nurses. So they know when they have to intercede. There's been a lot of that. People wear them themselves, but there are privacy concerns about ambient intelligence watching us all the time. I mean, every time Amazon wants to put a drone in your home, people go, hmm, maybe not so much.
Starting point is 00:39:44 Absolutely. We have to confront this. In fact, our research team includes ethicists and legal scholars. Here's the thing. First of all, this is going to be a multi-stakeholder approach, right? There are moments patients want it. There are moments that the situation is so dire, we need to weigh the different sides. There are also technological solutions to really put privacy computing front and center. And this is something that my lab has been doing, whether it's on the machine learning end or on the securing the devices end or the network end and all this.
Starting point is 00:40:21 So it's going to be multidimensional. You know, you are facing also people who don't even believe in vaccines, right? They think that's a surveillance vehicle. So that's a difficult thing. Another positive, education. Ignoring the dumb media frenzy about essay generation, which I'm tired of those stories.
Starting point is 00:40:37 What's the argument for tech actually closing gap, AI especially closing gaps in education and deepening learning? So Cara, when TechGPT came, I was testing it myself. My first knee-jerk reaction is, my God, this should be the biggest moment in education sector. Not because of fake essays, it's because of what kind of children are we making? kind of children are we making? Because very soon, or maybe already, AI algorithm can pass tests that are focusing on memorization and regurgitation. Yet human intelligence, human capital is creativity, innovation, compassion, and all those much more complex things humans are capable of. I really hope the entire education sector,
Starting point is 00:41:27 especially K-12, also college level, is taking this moment and realizing we need to reassess education and think, again, human first. Put the children first and think about how we can use this tool to superpower learning, superpower teaching. In the meantime, rethink about education so that our children become much more creative. Is there an area you're not thinking about, I would say autonomous vehicles, climate change, that you're thinking, when you're thinking about the applications of advanced AI, what you think would be the most groundbreaking? To me, a scientific discovery. I think everything from new materials to curing diseases to even as remote as piecing together
Starting point is 00:42:19 old relics of, you know, archaeological sites. Right now, it's all done by PhD students and their professors. All this will be aided by AI, and that's exciting. Yeah, all right. So, but the problem is the state of the industry and who holds power in AI right now. There's a number of things. One is the importance of diversity in the AI workforce. It's something you work on with your organization, AI for All. This has been tried all over technology, this idea of diversity and who's working on it. You have your own interesting introduction to this world, your personal story that influenced your work in Viewpoint and advocating for more diversity, but it doesn't happen. Talk a little bit about the need for this, and at the same time, I think you have to acknowledge it just hasn't happened.
Starting point is 00:43:08 It's an uphill battle, Cara. You experience this as much as I do. To this day, I'm often the only one or very few women in the room. And oftentimes, I'm not in the room. And look at who holds the megaphone. But compared to Ada Loveless, it's been better. It's a low bar there. I know, but it's a battle we cannot give up because this technology is too important not to include everyone. When I say everyone, I mean both
Starting point is 00:43:42 the creation of the technology, as well as you use the word decision-making, who is the decision-maker, as well as those who hold this technology accountable. So I do think we have to continue to chip away at this. The awareness is really important, and the empowerment of people from different backgrounds is so important. You're more confident than I. I feel like over and over again, they continue to— Cara, I don't know if I'm confident. I just don't give up. That's where I am.
Starting point is 00:44:16 Because, look, I'm sure you face the same thing. If you and I give up, where is the next young teenage girl going to look up to? I get that. I understand that. I am lucky because among reporters, I suppose I would be one of the most powerful people in the room. So that makes it easier. But at the same time, it's so clear that the leadership is so homogeneous and still has not, the needle has not moved. It's gotten worse.
Starting point is 00:44:50 Actually, I would say in AI, the needle has probably moved to the worse because of the rapid concentration of power. But overall, again, Ada Lovelace is a low bar, but Grace Hopper, you know, we just have to keep working. I guess. You saw what happened at the Grace Hopper conference. A bunch of men invaded it this year to get jobs. I mean, really, of course they did. It's a women's conference, and therefore they should be there at the front of the line. But does there have to be a USTA technology, not Tennis Association,
Starting point is 00:45:21 or a government agency that plays a role in reconciling the public versus private debate. There's been talk about agencies around AI, around technology. There still isn't one. Okay, so this is an interesting topic. We can talk about this. I don't know yet I feel there should be an agency for the following reasons. This technology is extremely horizontal. Many parts of it should be, in my opinion, in the existing framework where the rubber meets the road, like the FDA, like the SEC.
Starting point is 00:45:57 So in every agency. Right. So every agency should be very vigilant and have a sense of urgency to update what they have for this technology. Now, it's possible that even with all existing framework, we cannot cover it all. And when that happens, I think it's possible we should look at what agencies we should create in addition. should look at what agencies we should create in addition. But I guess if you're asking me my point of view, right now, I think there's more urgency to figure out the existing frameworks. Because, you know, I don't know about you. Actually, you're in journalism. I really wake up worrying to hear the news. We'll see the first death of self-diagnosis using
Starting point is 00:46:46 chat GPT. I don't know if it has happened yet or it's been reported yet. Although people were still on the internet doing that themselves already. Yeah, but again, we're talking about lowering the bar, right? Right. Just like disinformation. It's just easier now. Right.
Starting point is 00:47:01 So some of these rubber meets the road scenario, we've got agencies and they just, they need to move faster. Well, that's what they say. Talk about the idea of advancing. This summer, you're part of a group that met with President Biden to discuss AI. You told him to take a moonshot mentality, which is a Google word, by the way. That's their favorite word over there. It's a JFK word. Yes, I get it. It's a JFK. I know it is, but they love to say the word moonshot every five minutes. But what does that mean to you? I'll give it to JFK, but what does it mean to you? At least I was a physics major, and I think about reaching the moon and beyond. I think it means to me the kind of public sector investment that is so important to usher in the next generation of technology for scientific discovery.
Starting point is 00:47:54 Such as Kennedy with the space program. Yeah, as well as the kind of public participation and evaluation of this technology. So it's both for innovation as well as evaluation and framework in this technology. Sure. What have you gleaned from your interactions with the White House on AI so far? Okay. So I've got, like you say, you began this talk with the 2018 op-ed. Like you say, you began this talk with the 2018 op-ed. At that time, I don't think many of them care, dare I say, or it's still only a few people are thinking about it.
Starting point is 00:48:38 Fast forward to 2023, I've been to D.C. a few times. There's so much more talks about this. So it has reached to the level of consciousness. I still think we urgently need to help our policymaker to understand what this is. And to that end, Stanford HAI is hosting as much as we can educational programs. We're creating policy briefs. We're doing boot camps.
Starting point is 00:49:00 We're having closed-door training sessions for the executive branch, because we have to begin with giving them proper information of what this is. So let's talk about legislation. Your Stanford Institute supports the Create AI Act. First of all, explain the bill and what's notable. Why are you supporting it? This bill is, if passed, will create a public sector AI research cloud and data repository. What that means is that universities across the country, think tanks, nonprofits, will be able to access more compute and possibly more data to do AI research. We just
Starting point is 00:49:47 talked about bottom-up research, right? If I'm a cancer researcher who's looking at a rare disease, not your, you know, big cancers, it's very hard for me to get funding, to get industry support, to get philanthropy. But then I can hopefully get to this cloud and use the compute and some of the data coming from NIH or whatever, CMS, to do my research. This bill is around $2.6 billion over six years for the public sector. Microsoft gave OpenAI $10 billion, I think, of compute. Amazon just gave Anthropic four. Four, yeah. Small.
Starting point is 00:50:33 This is small, but it can move the needle. All right, before we leave, I need to ask you about Twitter. Sorry. You served on this board from May 2020 to October. I know there's not as much as you could say, I get it. They were tumultuous years for the company, but even before Elon's bid, you know I've talked about this, that it was a troubled company for a long time. What was your impact on the board? What did you hope to accomplish by joining that board?
Starting point is 00:50:59 And how do you rate your success? I don't think it was a high rate for the same reason you talked about. So here's the real story, right? Parag invited me. This is the former CEO. Well, he was a CTO. CTO, right. Right. Well, Parag, before he was a CEO, he was actually CTO.
Starting point is 00:51:20 He's a Stanford CS alumni, PhD alumni. So he and I talked about big data. He talked about different aspects of using machine learning techniques. I mean, really as mundane as advertisement optimization to other aspects. And it's under that technological premise I joined. I was very happy to be on the board when Twitter established its ethical team, Ruman and her colleagues. I was participating in more the technological side of the discussion. But also, you know, I did see my role as someone with human-centered belief of technology.
Starting point is 00:52:11 But there are far greater forces that dominated the company in the last couple of years. Right. Well, yeah, the new owner. The new owner. Can you talk a little bit about how the board went from a poison-filled tactic to ward him off to accepting his price? I think you had no choice from a shareholder point of view. It was such a high price. Yeah. So that, Cara, truly, I'm sure it's public knowledge. There is a within-board committee that led this effort. And not surprisingly, I was not on this committee.
Starting point is 00:52:47 I did not have the expertise. And frankly, my understanding was that's my fiduciary duty to look at, you know, from a legal... We can argue if it should be that way, but it was that way, it is that way, and that's what it is. But really, the juicy details, I was not part of. You were not part of, but did you feel regret in handing over the company? I know at the time, I felt someone's got to do something here, because it was bumping along as a company. I thought he could possibly do a good job.
Starting point is 00:53:27 I thought he overpaid. And especially when he started with his antics of not buying it, then you started to see the trouble. How do you feel now afterwards? So I like the word public square. And I think part of the reason they also picked me as a board member is I actually use Twitter, right? So I did use and I still do use it. Yeah, most of the board didn't for people. Much of the board did not. Use it as a public square. But it's actually a very philosophical
Starting point is 00:54:01 word. I know it came from the ancient Greece and where, you know, debates and discussion. But public square is public. But then if you really look at the ownership of even public square, who owns a public square? Governments tend to own public square. So what does it mean a private company is a public square? I agree. Where are the boundaries? Especially now a private company is really a private citizen, right?
Starting point is 00:54:30 So it's actually a very philosophical issue. Do I regret? I don't know. I'm still using it. I want to use it as a public square, but I don't know whether it is or not. I really, it's, I don't have a, I don't feel I have a strong sense. How do you assess his stewardship so far? He has cut those ethical people.
Starting point is 00:54:57 He has cut the trust and safety people, said they're not necessary because it's a public square. said they're not necessary because it's a public square. I think every company, every organization, every country needs frameworks, needs norms. And these frameworks should be maximizing multi-stakeholder well-being. Well-being includes many things, financial freedoms of speech, dignity, and all that. And I think we have a long way to go. Yeah. Do you have hopes for it under him? Early AI investor thinks big thoughts, for sure. Absolutely. Absolutely.
Starting point is 00:55:40 Let me say I'm going to keep my hope up and observe. And observe. Yeah. Any assessment so far? I'm not an avid user. Never I had been. So I don't think I'm the best person to assess. We'll see. I agree with you. There's a need of a public place, although I don't think you can have a public square owned by a private company, and especially when it's run by one person. I think that's called a dictatorship. That's what I used to call it in any case. Anyway, one last question. I'm curious about what your outlook on the future is. You have
Starting point is 00:56:17 young children, so do I. Do you feel hopeful or hopeless for them, and why? Hopeful or hopeless for them and why? So people ask me this question a lot. And my answer is always the following. I'm only hopeful when I feel I'm part of participating in the change. Or if many people like me, like you, were not not powerful, we're not extremely rich by any measure. If people like us feel we have no longer any say, any way to participate, then the hope will be gone. But right now, I still feel as a researcher, as a technology leader, I'm working. I'm working with students who always make me feel hopeful. I'm working with civil society. I'm working with policymakers. I'm
Starting point is 00:57:14 working with industry. And right now, I'm still holding the hope. But I feel it's a, maybe it's just my personality. I'm not letting go of the work. Therefore, I'm hopeful. But if I feel there's no place for people like me to participate, then that's the beginning of trouble. This is why I wanted to write the book. I wanted to encourage all the young people, especially from different backgrounds, to join us and feel the hope through their own doing. Well, you have been a pioneer and an inspiration for a lot of people, I think, more than you realize. Dr. Lee, thank you so much. I'm excited to see more of your work. It's a great book.
Starting point is 00:57:58 Thank you, Cara. Thank you so much. She still has hopes for Elon. What do you think of that? Well, all tech people, they can't help themselves. I mean, she has to. Like, what is she going to do? I don't think she has any admiration for him, for sure. She's a very kind person in general and very caring about the human race. I think Elon cares about saving the human race, but individually humans are more problematic for him.
Starting point is 00:58:29 Her point about public squares was really interesting in that conversation about Twitter. What is a public square? What defines it? And is it a public square if it's owned by a private company? Which is a similar articulation to what Naomi Klein in some ways. No, we've said it for years. I've been writing that for years. It's not a public square. It's owned by private companies. They can make their own rules over and over and over again. It's hard to, I know people think it is. It just isn't. It's just isn't. It's owned by giant private corporations. It's a city owned by giant corporations. And certain voices get amplified, others get drowned out. So the idea that it's a public square of equal opportunity is certainly not the case, certainly not in some places these days, Kara, as you've been. Right, yeah. I mean, I think what's
Starting point is 00:59:08 important is that the government get re-engaged in certain things that are critical for our country, that some things aren't solved by tech, and they're important that government is there because it represents the people. Even if they do it badly, they represent the people. The sums of money that the government is putting into regulating AI or investing in AI versus the sums of money that these private companies can put in. And that was her. I love that she said, there are some immediate catastrophic risks. I don't like when they drag out the word catastrophic. Just some immediate catastrophic risk.
Starting point is 00:59:41 Don't worry. And the main one was this kind of squeezing out of the public sector, and specifically universities. What I appreciated about her, she talks a big game about accountability and about the people, the humanity of this. And then we've asked a lot of people the question, do you feel some accountability? Do you feel responsible? I mean, that's a common journalistic question.
Starting point is 01:00:03 A lot of people beat around that. And her answer was, yes, I do. Yeah, she does. She does. Well, it's hard. I think a lot of people who create things don't understand what later happens to them. She's at least thoughtful about it. And you can't deny it. There's pluses and minuses. And I think she's just, she's an adult. That's what she is. That's all. Adults know how to do that. She has also worked hard to increase diversity in tech and to get more people like her, you know, empathetic to the point, into the room. She's often the only woman in the room. I remember in the prep for this reading a 2018, that Wired article, which she is the only woman in the room.
Starting point is 01:00:38 She is. And are you hopeful for that to change? I know you're not hopeful about Elon, but are you hopeful for that to change? No, I'm not. Are you hopeful for that to change? I know you're not hopeful about Elon, but are you hopeful for that to change? No, I'm not. I've written this story for 20 years about the problems and the numbers, and it hasn't changed. What do you think gets it to change? Nothing. I don't think it does. Do you think over the course of generations more—
Starting point is 01:00:56 I don't think this industry is committed to diversity in a significant way, no. I don't know why they would be. They like themselves. As I've said hundreds of times over the past two decades, it's a meritocracy, not a meritocracy. And that's what they like, and that's the, advancing beyond the United States in terms of the number of people studying new technologies and starting companies. I think there could be a real change in the room because you don't need, you have to get to a point where you don't need to rely on someone else to give you a role at their company and give you a promotion. You can start your own thing. You can be as hopeful as you want. The numbers are declining. From many, many years ago, absolutely, there were more women all over tech.
Starting point is 01:01:48 They're declining. CEOs, there's Lisa Su now, I think, or Dr. Su. It's gotten worse in terms of diversity. And as new things come in, AI is dominated by the same people, and robotics dominated by the same people. All the areas of the future are, and dominated by big companies.
Starting point is 01:02:06 It has not changed. I think that's the case in a lot of, a lot of the world, not just in tech. Nope. But that sounds like Vinod Khosla to me. That's his argument. Everyone's terrible. That's not really a good thing. Well, I'm not saying it as an excuse. I'm saying it as a, you know, if we can change it,
Starting point is 01:02:22 we should try to change it everywhere. Anyways, read us out, Cara. Today's show was produced by Naima Raza, Christian Castro-Rossell, and Megan Burney. Special thanks to Kate Gallagher and Claire Teague. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademics. If you're already following the show, Fei-Fei Li is right about AI. If not, get ready for Cyberdyne Systems' Hasta La Vista, baby.
Starting point is 01:02:44 Go wherever you listen to podcasts, search for On with Cara Swisher, and hit follow. If not, get ready for Cyberdyne Systems' Hasta La Vista, baby. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more. Support for this podcast comes from Stripe. Stripe is a payments and billing platform supporting millions of businesses around the world. Thank you. customers globally. The platform offers a suite of specialized features and tools to fast-track growth, like Stripe Billing, which makes it easy to handle subscription-based charges, invoicing, and all recurring revenue management needs. You can learn how Stripe helps companies of all sizes make progress at Stripe.com. That's Stripe.com to learn more. Stripe. Make progress. Support for this podcast comes from Klaviyo. You know that feeling when your favorite brand really gets you.
Starting point is 01:03:50 Deliver that feeling to your customers every time. Klaviyo turns your customer data into real-time connections across AI-powered email, SMS, and more, making every moment count. Over 100,000 brands trust Klaviyo's unified data and marketing platform to build smarter digital relationships with their customers during Black Friday, Cyber Monday, and beyond. Make every moment count with Klaviyo. Learn more at klaviyo.com slash BFCM.
