Factually! with Adam Conover - A.I. and Stochastic Parrots with Emily Bender and Timnit Gebru

Episode Date: April 26, 2023

So-called “artificial intelligence” is one of the most divisive topics of the year, with even those who understand it in total disagreement about its potential impacts. This week, A.I. researchers and authors of the famous paper “On the Dangers of Stochastic Parrots”, Emily Bender and Timnit Gebru, join Adam to discuss what everyone gets wrong about A.I.

Transcript
You know, I got to confess, I have always been a sucker for Japanese treats. I love going down to Little Tokyo, heading to a convenience store, and grabbing all those brightly colored, fun-packaged boxes off of the shelf. But you know what? I don't get the chance to go down there as often as I would like to. And that is why I am so thrilled that Bokksu, a Japanese snack subscription box, chose to sponsor this episode. What's gotten me so excited about Bokksu is that these aren't just your run-of-the-mill grocery store finds. Each box comes packed with 20 unique snacks that you can only find in Japan itself.
Starting point is 00:00:29 Plus, they throw in a handy guide filled with info about each snack and about Japanese culture. And let me tell you something, you are going to need that guide because this box comes with a lot of snacks. I just got this one today, direct from Bokksu, and look at all of these things. We got some sort of seaweed snack here. We've got a buttercream cookie. We've got a dolce. I don't, I'm going to have to read the guide to figure out what this one is. It looks like some sort of sponge cake. Oh my gosh. This one is, I think it's some kind of maybe fried banana chip. Let's try it out and see. Is that what it is? Nope, it's not banana. Maybe it's a cassava potato chip. I should have read the guide. Ah, here they are. Iburigako smoky chips. Potato
chips made with rice flour, providing a lighter texture and satisfying crunch. Oh my gosh, this is so much fun. You've got to get one of these for yourself. And get this: for the month of March, Bokksu has a limited edition cherry blossom box, and 12-month subscribers get a free kimono-style robe. And while you're wearing your new duds, learning fascinating things about your tasty snacks, you can also rest assured that you have helped to support small family-run businesses in Japan, because Bokksu works with 200-plus small makers to get their snacks delivered straight to your door.
Starting point is 00:01:45 So if all of that sounds good, if you want a big box of delicious snacks like this for yourself, use the code factually for $15 off your first order at Bokksu.com. That's code factually for $15 off your first order on Bokksu.com. I don't know the way. I don't know what to think. I don't know what to say. Yeah, but that's alright. Yeah, that's okay. I don't know anything. Hello and welcome to Factually. I'm Adam Conover. Thank you so much for joining me once again as I talk to an incredible expert about all the amazing things that they know that I don't know and that you might not know. Both of our minds are going to get blown together and we're going to have so much fun doing it. I want to remind you that if you are watching this podcast on YouTube, go subscribe to the podcast in your favorite podcast player if you want to hear it every week. If you're listening on your podcast player, go check out the video episode on YouTube.
Starting point is 00:02:47 Now this week, we're talking about AI. Just a few weeks ago, I released a YouTube video called AI is BS, in which I argued that AI has become a marketing term that tech companies are using to hype up software that cannot do what they claim it does, and in which in many cases could be dangerous if they jam it into mainstream products that it is not ready for. In fact these companies are hyping up AI to such an extent that they're trying to convince people that software like ChatGPT is a step on the road to some kind of godlike artificial
Starting point is 00:03:21 general intelligence when in reality what they actually made is a text generator that can write some pretty cool fanfic and help you program. And you know, if they had marketed it that way in the first place, they said, hey, we made a tool that'll output a recipe that tastes bad if you try to cook it. I mean, that would be pretty neat. We could think of a lot of uses for a text generator like that. But a step on the road to true artificial intelligence, it is not, and it is kind of fucked up to tell people that it is.
Starting point is 00:03:49 Now that video got a somewhat, let's say, divisive response, because a lot of people on the internet have drunk the AI hype Kool-Aid. These tech companies have succeeded in confusing the issue of AI so much that a lot of the time when we say AI, most of us don't even know what we're referring to. We don't understand how the software works and we don't understand how that's connected to the science fiction fantasies that the companies are peddling us. And because of that confusion, a lot of weird shit is happening. For instance, while I was in the process of editing my video, 1600 major AI researchers
Starting point is 00:04:25 as well as people like Elon Musk and Steve Wozniak came out and asked for a 6 month pause on AI research. But they didn't ask for that pause because of, you know, AI giving out misinformation or the fact that it just recycles the copyrighted work of artists like myself and others. No, they asked for that pause because they were worried that it could create an AI superintelligence. Again, the bullshit hype science fiction claim. So a bunch of other AI researchers came out against this letter saying that we do not have a problem with AI because of the superintelligence thing. We have a problem because you are exploiting the work of real people and making the world a worse place right now. So I don't blame the people in my
Starting point is 00:05:05 comments for being confused. Even AI researchers themselves do not agree entirely on what the problems are. So for that reason, we are going to spend a couple episodes of this podcast talking to some of those AI researchers about what the problems are and how we might go about fixing them. And on the show today, we have two incredible guests. Their names are Emily Bender, who's a professor at the University of Washington, and Timnit Gebru, who's the executive director of the Distributed Artificial Intelligence Research Institute.
Starting point is 00:05:34 And you might recognize the name Timnit Gebru because she is the researcher who was famously fired by Google for raising AI ethics concerns in that famous paper. I am so excited to talk to them because they are two of the sharpest minds on AI and how the problems with it are not what the tech companies have been telling you. But before we get to that interview,
Starting point is 00:05:54 I want to remind you that if you want to support this show, please head to patreon.com slash adamconover. You can get every episode of this podcast ad-free and get a bunch of other goodies. And even more importantly, please come see me on tour this summer. I'm taking my brand new hour of stand-up to San Francisco, San Antonio, Tempe, Arizona, Batavia, Illinois, just outside Chicago, Baltimore, Maryland, and St. Louis, Missouri. Head to adamconover.net for tickets. Come see me.
I'd love to give you a hug in the meet and greet line after the show. And now without further ado, let's get to my interview with Emily Bender and Timnit Gebru. Timnit and Emily, thank you so much for being on the show. Super happy to be here. Thank you for having me. It's an honor to have both of you considering, you know, I've read your work, I've talked about it in my last YouTube video, all about AI. You're some of the foremost researchers on the topic, some of the foremost critics of how the tech industry has been employing AI. So I'd love to hear from you. First of all, Emily, we last talked, it might have been close to a year ago, back when AI was very
Starting point is 00:07:02 much an active research subject. We were hearing a lot about it, but it wasn't something that the average person was using. In the years since, AI has become radically mainstreamed. A lot of these companies have just shoved it into consumer products without any concern for what the results are. As two people who follow the field extremely closely, what has your reaction been to the last six months or so of rapid development in the industry? Blank faces here. It's been a lot of, oh, come on, again? More? Seriously?
Starting point is 00:07:39 Yeah, that's how I feel. I mean, I just, I am dumbfounded by the number of people I thought were more reasonable than this, kind of jumping on a bandwagon of what seems to be mania. So I don't know. That's how I feel. What do you think the greatest potential harms of this are? In your paper, your very famous paper on the dangers of stochastic parrots, you talked about many dangers that these would reify discriminatory materials in the training data by repeating it out to people,
Starting point is 00:08:15 that people would take it too literally, would take the pronouncements of a large language model as fact. A lot of those have seemed to come directly true. Do you feel validated that your criticisms have come to pass? No, no, because those were predictions. Those were warnings. Like, don't do it. You know, we don't want to get there. And then we got there and then some, right? Like, I just, I definitely don't feel validated. I feel upset and sad about it. Yeah. How do you feel, Emily? You know, I think that some of the biggest problems that maybe I didn't understand, because it's sort of half an economic problem,
Starting point is 00:08:56 is the way in which people would say, hey, this looks like it could be a robo lawyer, this looks like it could be a robo therapist. And look at all those people who can't afford real lawyers and real therapists. So let's give them this instead. And like that jump from you've identified a real problem in the world. It's a problem that mental health resources are inaccessible. And it's a problem that legal representation is inaccessible. But then try to fill that hole with something that is just a joke and can directly cause harm when deployed in those cases. I think even when we were writing the paper and saying, you know, it would be bad if this was set up in such a place where people might believe it or believe it knew what it was talking about. I don't think I was in a position to predict that that's
Starting point is 00:09:40 a direction that it would go in. At the time we wrote the paper, I was just, you know, seeing this whole mine is bigger than yours kind of race and just being very confused. Why is this the thing that anybody, everybody just wants to be the biggest one. And now, and now you have not just a text to text models, but text to image, text to video, video or whatever, you know, and so I didn't, I didn't imagine that in such a short time that
kind of explosion of synthetic media into the world would happen. And I also didn't think about, I would say, what the content moderation demands and issues would be with that much of an explosion of synthetic media. You know, like Clarkesworld shutting down submissions because they got the equivalent of a DDoS. Yeah, stuff like that is something I didn't predict. Emily, you were just talking about the sort of thing that happens a lot with technology. Once it's released, people come up with new uses for it that nobody predicted. And one of the uses that people have started,
Starting point is 00:10:45 you know, you're talking about robo-lawyers. I've seen people say that they're using chatbots as therapists or as relationship surrogates. And what are the dangers of those types of uses? Because I'm certainly seeing those promoted all over. There's a lot of folks saying, hey, if you don't have access to XYZ, an AI can do that for you in all sorts of fields. And you said that these are a joke. What makes
Starting point is 00:11:11 them a joke for that purpose? So they're a joke because they're all form and no content. So what these systems are really good at is mimicking the form or the style of something. So it absolutely can write something that looks like a legal contract for you. But if your purpose in drafting up a legal contract is anything other than intimidating the other party with legalese, then the specific content and the way that it maps into your situation really matters. And it might be that there's some sort of template type situations where it's like, okay, yeah, this is a contract for, you know, the rights to use a piece of music. And I want this right assigned and that one not, and it's gonna be paid for this much. And here's like, you could answer a few questions and get something out from a template
Starting point is 00:11:56 that would work reasonably well, but that's not what they're doing, right? They're saying, what's a plausible next word? What's a plausible next word given this context? And, you know, who knows where that's going to be? So for the legal case, you know, you're asking for that because you are not a lawyer and you can't afford a lawyer. You're not going to be in a position to tell if it's good or not, but it'll look impressive. Right. Right. It it sort of will it will create a convincing imitation of a piece of text that will most readily convince someone who knows nothing about the field. Like a lot of, you know, I work in television writing and there's a lot of talk in, you know, oh, can studios use chat GPT to write scripts? And when I use one of these services to write, you know, to output text, I'm like, yes, this superficially looks
Starting point is 00:12:45 like a script, but it's missing so many of the things that you would need to film a script. And someone might say, well, what if the technology gets better? And it's not a matter of aping something even more correctly. It's a matter of to successfully write a piece of screenwriting, you need information about the rest of the world that no algorithm or AI program could ever have. You need to understand what is physically possible to produce. You need to talk to a department full of people who say, a very good example I use of this is I didn't realize until I started writing television that you can never have someone
Starting point is 00:13:23 jump into a pool on television. If you watch never have someone jump into a pool on television. If you watch TV and someone jumps into a pool, it'll always happen off camera. And because I had a scene where someone jumped into a pool, my line producer told me, you need to remove that. And I said, why?
Starting point is 00:13:35 It seems not that hard. They said, because we need to film every take five times. So that means we need five pairs of wardrobe because we need to film them one after another and we need to dry the person off and do their makeup and it's going to take all day. And so as a result, people never get wet on television or when they do it's very expensive. Okay? Interesting.
Starting point is 00:13:53 And you'd have no way of knowing that without real life experience. And even if an AI could eventually figure that out, there's also a million other things like that that are specific to the particular production. Hey, it's going to be cloudy on Wednesday. We need you to rewrite the scene. You know, there's, there's so many details that are, that are fundamentally about humans communicating with each other. And that's the same thing with, with a lawyer doesn't just output text. A lawyer talks to like, knows what the other side might do in response, knows how aggressive they'd be. If you're trying to sue. Yeah, exactly.
Starting point is 00:14:23 If you're trying to evict a tenant, they're going to have a much different response than if you're trying to sue the Church of Scientology, right, who are very aggressive. And knowing what the laws are, too. So, to me, this is... It's very obvious when I actually look at how they're used,
Starting point is 00:14:40 but it's... Is it a problem with the technology, or is it a problem with humans not understanding how our own society works to not realize that these tools are going to be effective? The hype, too. The hype, yeah. So it's a problem with the task technology fit. So what is it that we need and how does the technology fit into it? And Timnit, I want to bring up your wonderful line about how these things are unscoped technologies. And then maybe you could elaborate on that a little bit.
Starting point is 00:15:08 Yeah. I mean, I was going to bring up all your work on hype, but which I think, and I really, Adam, I mean, it just like when I'm talking to you, I'm like, yeah, we live in the same planet and we're having the same conversation and the same language, that's not the language that we're speaking with the other researchers in AI or machine learning or whatever it is. I am, I'm so confused what's going on, but because, you know, scoping systems is a very basic engineering concept, right? When you're building something, you want to know what you're building it for and then see if what you're building it for is actually being fulfilled. Whereas in this case, the way they're advertising their systems is that they're building it to accomplish anything for everybody, anywhere, write code, speak whatever language, you know, write scripts, movie scripts, protein folding, whatever it is. And that's a fundamentally unscoped system that
Starting point is 00:16:05 I don't even know how we can make sure can be safe or work. To add to the notion of being an unscoped system, right, which to me is a basic engineering concept. When you're trying to build something, you ask, what am I building to accomplish? What are the tasks that I want to accomplish under which scenarios, under which conditions? And in this case, when you see the kinds of things that they are advertising, all of these companies, Meta, talked about this large language model-based system
they had called Galactica. They said, oh, it's going to write code, do protein folding stuff, and write scientific papers and more, you know. And with OpenAI's ChatGPT, it's like, write movies, replace artists, do this, do that and more, you know. And so already you have built a system that we don't even know what it's supposed to be used for. And how do we even test whether it is actually accomplishing its task, the task that it's
supposed to be built for. And one problem here is that OpenAI is not at all open about how these things are trained. And so not only is it not tested in specific contexts where you can say, okay, here are the safety parameters of the system, here's how well it has been tested to work in these contexts. We don't have that information. We also don't know what its training data or training regimen was. And according to OpenAI, this is somehow for safety, which makes no sense at all. Because one of the very first things that was worked out about responsible development of these kinds of systems is to provide documentation of the underlying data set and of the parameters of sort of safe use of the model.
Starting point is 00:17:48 And that's kind of the first place that Timnit and I got to know each other. Independently, right? We were both working on this separately. Yeah. And then we connected through that sort of related work. And that was really, really fortunate. It was 2017, I think. There was just something in the air where a whole bunch of groups said, we got to document these things so that we could figure out how we could use them. And OpenAI, while claiming to be doing this for safety, is flat out refusing to do that.
And what's the danger of that if they are refusing to release or make the model transparent? So you can't make any decisions about whether it would be good to use the model or not if it's not transparent. Like, let's say I want to use it for writing computer code. Well, I can try it a few times and see if it seems to work well and then maybe get some confidence. But I don't know what its training data looks like. And for programming languages, from what I've heard, again, they're not open about this, but part of the training data is literally sort of English descriptions together with executable code. There's a lot of paired stuff in there that helps it do well a lot of the time. The other thing about programming languages is that they are specifically designed to be unambiguous, which is in stark contrast to natural languages where
Starting point is 00:18:59 ambiguity is sort of a fundamental design feature. Everything is ambiguous. And so the fact that it does well with this more constrained universe of programming languages kind of makes sense. But again, we can't really know because we don't know how it's trained. But imagine like you're happy with this performance in helping you generate code. And you've even got some like computer security buffs
on your team. And they look at that and say, yeah, I don't see anything frightening coming out here. But they keep changing the system. And then all of a sudden it's maybe putting out insecure code, but there's no information about what version you're using. You can't say, no, I want to keep using the version from December of 2022, because that's gone. All you have is the OpenAI API, that is to say, the thing that allows you to connect with whatever they've put up for
Starting point is 00:19:45 you to connect with. And a lot of tools are being built on their API right now. If you open the app store, any kind of app store, you'll find countless tools that are AI XYZ, AI help you write code, AI help you write a movie script, AI therapist. And they're really just hooking into OpenAI's model and paying them a couple pennies per however many requests. And people are now starting to use those tools to do real things without knowing what is, where the output is coming from or what the model is. It does kind of remind me a little bit, you're talking about models, like I've talked to plenty of climatologists on the show and like, you know, climate models are a huge part of our understanding of how the climate works, but we also know how those models work and we
can compare them. And we have a lot of information about them so that we know when they predict something, you can go back to what the source was. But in this case, it's both by design, but also by corporate structure, a black box, because they're not telling us anything about it. I was just thinking about something even more basic, like how do we know they're not stealing people's work to profit off of it? So, you know, there were all these lawsuits by artists against DeviantArt, Stability AI, and Midjourney, right? But not OpenAI, not DALL-E, because we don't know what training data they used. So we don't know if there were copyright violations or not, if they compensated anybody versus not. But they can do that, right? There's nothing that is preventing them from doing that right now.
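To make concrete what it means for third-party apps to be, as described a moment ago, really just hooking into a hosted model, here is a rough, hypothetical sketch in Python. The endpoint URL, request fields, and environment variable are invented for illustration and do not correspond to any particular vendor's real API; the structural point is that the app forwards a prompt over HTTP, gets text back, and has no way to inspect the training data or pin the model to the version it was tested against.

```python
# Hypothetical wrapper an "AI lawyer" or "AI therapist" app might ship.
# The URL, payload fields, and response shape are made up for illustration;
# they are not any specific vendor's actual API.
import os
import requests

API_URL = "https://api.example-llm-vendor.com/v1/generate"  # placeholder endpoint
API_KEY = os.environ.get("LLM_API_KEY", "demo-key")         # placeholder credential

def complete(prompt: str) -> str:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        # "latest" stands for whatever the vendor happens to serve today;
        # the app cannot pin, inspect, or audit the model behind the endpoint.
        json={"model": "latest", "prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

if __name__ == "__main__":
    # The app then presents this output to users as if it were vetted drafting.
    print(complete("Draft a clause terminating a month-to-month lease."))
```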
Starting point is 00:21:12 if they compensated anybody versus not. But they can do that, right? There's nothing that is preventing them from doing that right now. Yeah. And it makes me wonder. So look, I can go to into chat GPT. I talked about this in my YouTube video. I can go to chat GPT and say, write an episode of Adam ruins everything about X, Y, Z. And it'll output something that looks like a prose version of Adam ruins everything. Adam walks into the room and says, blah, blah, blah about dogs. And I'm like, where is this coming from? Because I don't believe it has access to my shooting scripts because those are not public anywhere so I'm trying to figure out where is it getting the information from and again this is based on my own copyrighted work that I spent a lot of time
Starting point is 00:21:56 putting together is the character I created and I would think that you know I would if it's being used for profit I would like to be paid for it in some respect. I think that's a pretty fundamental feature of how capitalism is currently arranged, and I would like it to follow those rules. But it's difficult for me to tell. It could just be getting it all from fan fiction. I feel like it's really scraped archive of our own and all these big fan fiction sites. But it's really, really difficult to tell. Let's spend a second, though, and talk about OpenAI as an organization and the sort of ideology behind it
Starting point is 00:22:29 because this is the organization that is most in the news pushing AI forward and it was incorporated originally as a nonprofit, right, and made a lot of noise about how the point is to make sure they're going to do it responsibly. It's research. It's not about profit. But it has recently, I believe, changed its incorporation status to be a for-profit company, completely changed its tune.
going on all sorts of podcasts, talking to the news about how he has to do all this because AI is really, really dangerous. But then the dangers they're always talking about are always the science fiction kind, where the robot takes over the spaceship or, you know, the Isaac Asimov kind of science fiction where a super powerful intelligence, you know, takes over the universe kind of thing. They're never talking about the harms that we're talking about, that people might use it to write a legal document it shouldn't write or that it might rip off somebody's copyrighted work or anything like that.
So what is your view of this organization and its supposed altruism? Is this just a feint to sort of trick us all into thinking that they have our best interests at heart, or has a corruption happened, or what? So this is the TESCREAL question, and I'm going to let Timnit define TESCREAL. But just my take on this very quickly is, I think they believe they are being altruistic and working in the best interests of people. But their view of who counts as a person is very narrow, and sort of leaves out of view all of the people who are being harmed now, or just sees those harms as inconsequential compared to what they're worrying about, which is in this science fiction
Starting point is 00:24:10 universe. It's hard to say the phrase science fiction fantasy, because to me, those are two genres of wonderful speculative fiction. And you don't want to bucket this into that. Yeah. They're just so, I, you know, I can say so much about them. I've been on them for a long time. So in 2015, when they were announced, I wrote a open letter that I didn't end up sending to anybody. I just kept it to myself. Because I was a PhD student back then. And people were like, people will know that it's you because I was so angry by the tone. So I don't think it's a corruption or they've changed their tone or whatever. To me, they've like stayed exactly the same.
Starting point is 00:24:53 And initially they said exactly they talked, they wrote, you know, they talked about it as if they were going to save humanity. Peter Thiel and Elon Musk always, as usual, just on the ball to save humanity. Of course, that's always what they've been doing in the world. And all the whole media was talking about it like, oh, this nonprofit is starting. They're going to save humanity from AI. Because back then what happened is that they had invested in DeepMind, that they also wanted to create, you know, AGI, artificial general intelligence, which is a system that none of us know what it even is supposed to do. This is the acronym for a, for a super intelligent, uh, uh, AI. Sounds like a god.
Sounds like a god to me. Um, and so they were all very much trying to develop this thing, which I don't even know what it is. And, you know, they had invested in DeepMind; DeepMind got bought by Google. In 2015, the Future of Life Institute had a letter similar to the pause letter that we see now. And then they, you know, founded OpenAI, put, you know, hundreds of millions of dollars into it, because they say they're going to save humanity and all of that and create this AGI thing. Fast forward, right? They realize they need a lot more money.
They're now essentially bought by Microsoft, and now they have competition. So they need to be closed and all of that. So to me, really, it wasn't like a pivot or anything. I never believed that they were going to, you know, save humanity or anything like that. And in terms of Emily's cue about the TESCREAL bundle. Yeah, what does this mean? So, you know, I have been really so irritated by the whole crew because I've been around them for a long time. I went to school with some of them, been around this, you know, AGI community for a while. So recently I teamed up with a collaborator of ours whose name is Émile Torres, who used to be a longtermist.
Starting point is 00:26:54 And so long termism is this weird, you know, the Future of Life Institute behind the pause letter is a long-termist institute. And so they literally think that our job as humans is to maximize the number of future humans who colonize space and digitally upload their minds and live in the matrix kind of thing, right? This is a real thing. It's not an exaggeration. That's what they want. Yeah. I've read a lot of that philosophy that, you know, the idea that we need to be thinking about how do we maximize the future, uh, happiness and wellbeing of humans 10,000 years from now, if you could, why save one life today when you could save 10 million lives, uh, 10,000 years from now. Now I'd say, how the fuck do you know that what you're going to do is going to have any effect on people that far in the future? It's the height of hubris to think that you can project that far into the future at all.
Well, they give you some random numbers. They pull some numbers out of their asses like, oh my God, we didn't know Sam Bankman-Fried was going to be doing this, but we will know what's going to happen 10,000 years from now, with 0.001 probability, you know what I mean? It's absolutely ridiculous. But anyhow, so the TESCREAL bundle is a bunch of ideologies that are all sort of descendants of the first-wave eugenics movement. And, you know, when you hear this word about human flourishing, maximizing our potential through both positive and negative eugenics: positive would be the ones who are desirable. You want them to breed, you want them to, you know, multiply, right? And the ones who are negative, the ones who are undesirable, you
Starting point is 00:28:30 want to kind of get rid of them because they don't help you with this human flourishing thing. So the transhumanists, you know, were very much, that ideology was very much developed by 20th century eugenicists. And Nick Bostrom, who is also a long-termist, you know, he's also a very famous- Very prominent philosopher. Very prominent transhumanist, right? And so we trace how these ideologies, transhumanism, extropianism,
the singularity people who say the singularity is coming because of AI, the cosmists, who are actually the people who wrote the first book on AGI, Artificial General Intelligence, in 2007, the effective altruists, and the longtermists, and how they're all in this circle, kind of learning from each other, networking with each other, lots of money going into them. And they are all sort of either selling AGI utopia or AGI apocalypse, right? If we do it right, it's going to bring us utopia. It is about human flourishing. We need to do it. If we do it wrong, we're going to have an apocalypse because it's going to take over the world, or China is going to do the devil kind of AGI. And you need to let us do this utopian kind because we're vanguards of humanity. So it's obviously a very kind of convenient ideology for the billionaires because, you know, they're saying, give us all the money. We'll do the utopian kind. But we're super worried about it because it might be super powerful, but we're careful, so you can trust us. Right. And so Sam Altman is kind of doing that thing. Right. And to me, he's in the same sort of camp as the Future of Life people,
Starting point is 00:30:11 because that's the same thing they're selling. Yeah, the connection to eugenics is not theoretical. I've seen it myself. If you look at Nick Bostrom's writing and the writing of a lot of folks who write extensively about AI or AGI, the future, you know, super AI that could control the world. They also write overtly about eugenics. They have charts and tables about if we, what if we started a human breeding program and only allowed people in the top percentage of intelligence to breed, and then they would have super babies
and the babies would be super smart. And it's like, this was tried in the 40s in a country in Europe. You know, this is very, these are very old ideas. By the way, you can just look at the interview I did a couple weeks ago about intelligence to learn about whether intelligence is actually heritable in that way. It's not. But so the proximity of these ideas to each other is not theoretical. These are the same folks promoting neo-eugenics and promoting AI catastrophism. I want to refer, though, to this pause letter that you mentioned a couple times
Starting point is 00:31:11 so that folks know what it is. A couple of weeks ago, actually, as I was editing my AI video, a whole bunch of AI researchers from many, many different organizations signed a letter suggesting a six-month pause on AI after the release of GPT-4. And they said, well, this is very dangerous. We need to evaluate it, and et cetera, et cetera. And some folks who I have read and enjoyed as AI researchers are signed to the letter. And it sounds on the face of it that that might rhyme with some of what you folks are saying. But you took objection to the letter. And so I'd love a little bit of explanation from you about exactly what your issue with that is. Like, what did that letter get wrong? Do you feel we need to pause or
do we need to pause for a different reason, or what? So I think pause is unrealistic. I think six months is unrealistic. I think the letter makes it sound like these researchers are just now noticing that this might be harmful, despite, you know, years and years and years of work of people saying, hey, there's harms here. And the letter itself is basically saying, oh, no, we've built something too powerful. Better be careful. So it's what Lee Vinsel calls criti-hype. We're too good. Yeah, we're too good. Gotta stop. So it's basically helping to sell the technology. I have to say that I found out about the letter a little bit before it dropped because there was a journalist who contacted me asking if I was going to sign it
and would I comment. And I'm like, haven't seen it, not going to comment on what I haven't seen. And then I think later that day it came out and I was busy. And then finally, in the evening, I sat down to read it and I thought, oh my God, I have to, I have to react to this. So I put out a tweet thread. Um, and then, you know, media, you know, craziness about it. And so I said to Timnit and the other two listed authors of the Stochastic Parrots paper, let's put together a statement coming from us so that we can point the media at that for one thing, but also to have sort of a joint statement here. And Timnit sort of took everybody's remarks, including my relatively snarky Twitter thread, and pulled together a first draft that we then worked on. And where we start with that is with the observation that they cite us in their first
footnote. Number one. Yeah. They say your Stochastic Parrots paper. Number one. And number two is Nick Bostrom. Yeah. Oh, wow. OK. They went all the way from alpha to omega there. They're citing everybody. But they didn't ask you to sign the paper. That's interesting. Oh, they would know I would never. Yeah. Okay. So their sentence is: AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research, footnote one, and acknowledged by top AI labs, footnote two. And that footnote one has us and Bostrom and some other people, but the Stochastic Parrots paper was not a paper about AI systems with human-competitive intelligence. It was a paper about large language models, which are not AI systems with human-competitive intelligence.
Starting point is 00:34:14 As we note so many times in the paper, like it was just infuriating. And there's like one or two things in here that I think do rhyme, as you say. So, you know, we need regulation and that regulation should involve things like watermarking so that we can tell when we're encountering synthetic media. And, you know, liability for AI caused harm. That sounds good. That liability should sit with the companies that are creating and deploying the AI. They don't say that. But then there's a bunch of really weird stuff in here. AI research and development should be focused on making today's powerful state-of-the-art systems more accurate, safe, interpretable, transparent, robust. Okay, I'm all right with all those. Aligned is a key word for this weird.
Starting point is 00:35:04 Yeah. Trustworthy. Yeah, it's like, eh, eh, eh. Trustworthy, yeah. But then the last one, the last one is loyal. Loyal. Loyal. To who? Oh, wait, no, no, sorry.
Starting point is 00:35:17 They started doing Boy Scouts. A scout is trustworthy, brave, reverent, kind, obedient, or whatever. I quit Boy Scouts when I was like 10. But there's like a whole list of things, and one of them is reverent. So I'm surprised they didn't include reverence. Shouldn't it go to church, the AI? Yeah. But here's the thing, an AI, I mean, so there's the first question that Timnit raises of, okay, loyal to whom? Whose interest is it serving? But also, these are not the kinds of things that can be loyal. To be loyal is to experience certain feelings,
Starting point is 00:35:45 to have certain commitments. And large language models are just text synthesis machines. Yeah. They predict what comes next. Or a really great description I heard of them is to think of them as word calculators. They do a good job of you give it a bunch of words and it can turn them into other words
Starting point is 00:36:04 that are derived from the first words. And that can be a useful thing to do sometimes, particularly if you're a computer programmer or someone else who like is manipulating text on that sort of level. But that's not a it's it's not a thing that has an ethical drive such as loyalty. Yeah, yeah, exactly. So so this this letter, you know, got a lot of attention partially because of who signed it. And, you know, we've, we've had, so we, we pushed back pretty quickly and then we were getting reactions like, oh, you're squandering the opportunity. This is Gary Marcus complaining about us. Going out for blood, coming out. What is, what did he say he say coming out they went for blood we went for blood or something like that it's just like okay and basically it's like they supposedly created an opportunity for regulation that would maybe get this six month pause whatever that means
It's all completely unfounded, right? A pause on systems more powerful than GPT-4? Well, we don't have the specs on GPT-4, so that's an unmeasurable, undefined thing anyway. And as someone was pointing out, and I'm sorry I don't have the source for this, a lot of the work in creating these systems is actually in the data preparation and gathering these enormous amounts of data. And a six-month pause on training the systems wouldn't prevent anybody from going and collecting more data. We need to prevent that in other ways to prevent data theft, but that's a separate question, right? And then you get people out there saying, well, why can't the so-called AI safety and AI ethics people get along? So the AI safety people are the long-termists who want to prevent the AGI from taking over the world. And AI ethics is sometimes used to refer to the people who are concerned with the problems in the here and now, in the ways that you... Yeah, they basically created the term AI ethics, I mean AI safety, to
separate themselves from us, is how I feel about it. Because, like, we have the same technical expertise, we have other expertise also, but, like, it doesn't mean that, you know... So I feel like they named that field or whatever it is to explicitly separate themselves from kind of our crew, right? Yeah. So my answer to why can't we get along is like, well, why can't we find common cause? If the AI safety people wanted to find common cause with those of us working in ethics, they would cite us. They would go to Timnit's work. They would go to the work of Safiya Noble and Ruha Benjamin and Cathy O'Neil.
Starting point is 00:38:25 Two past guests on the show, by the way, just want to ding, ding, ding, ding. Excellent. And build on that and lend some of their money and resources to making that happen. But of course they don't want to because they're aligned with corporate interests and to really push back
and to really reduce the harms here, we need regulation that reins in the corporations. Like, why would Elon Musk sign? Like, everybody has to ask, why does Elon Musk, an advisor and funder of the Future of Life Institute, someone who pumped hundreds of millions of dollars into OpenAI and DeepMind and whatever and whatever,
Starting point is 00:39:01 why is he so interested in like caution and whatever? As long as it doesn't touch him? Sure. You know, if we're talking about regulating Tesla or looking at the racial, the largest racial discrimination lawsuit in history in California, that's not what he wants us to talk about, right? Like he doesn't want us to talk about any of those things and whatever he's doing with Twitter. We have to think about, oh, my God, like this super powerful science fiction thing that's going on. And it's just so disappointing to see the number of people who went along with it.
I think for us, we wanted to make it clear that we are not aligned with this vision of AI safety. The whole eugenics roots. Emily has a thing she always says: always read the footnotes. We read the footnotes, and they have a footnote that says, you know, if we don't do X, Y, and Z, AI systems might be potentially catastrophic, like other potentially catastrophic things like eugenics. And we wanted to say, eugenics is not just potentially catastrophic. It has been catastrophic. You know what I mean? So we just want to make sure that they should not be able to launder people's reputations to
make themselves mainstream and appear reasonable. I also think that there's a huge number of unexamined assumptions in that letter, which they are using to promote to the public what are essentially myths about AI. And I want to get into some of those and ask you to react to them and maybe debunk them. But we have to take a really quick break. We'll be right back with Emily Bender and Timnit Gebru. Okay, we're back with Emily Bender and Timnit Gebru. So we were talking about the AI pause letter, and I was starting to talk about how it seems to have a lot of assumptions built into it about how AI works and how it's going to progress,
Starting point is 00:40:58 that the people who wrote it and the people who founded OpenAI and the people in the tech industry have really pushed onto the public. And I see those assumptions actually in my YouTube comments. people who founded OpenAI and the people in the tech industry have really pushed onto the public. And I see those assumptions actually in my YouTube comments. I'll see them in the comments to this video when we post it on YouTube and in the comments to my last one. People say, well, AI is progressing so quickly. It's unstoppable. It's progressing every single day. And so this idea of the pause seems to like build, you know, connect to that idea where, oh my God, this is a runaway train. And all we can do is try to steer it in a direction when, you know, we could be questioning, like, these are just humans, like making these things,
Starting point is 00:41:36 like they can do whatever they want at any time. Um, and A and B, is it maybe not a foregone conclusion that it's going to progress in the direction that they say it will? Like, it seems to me that the large language models are designed to make you think, oh, this is a step on the road to general intelligence, to a literal thinking computer. But, and if you play with it for five minutes, you might think that. If you play with it for, you know, tens of hours, as I have, you stop thinking that and you realize it's just mashing text up. I'm curious if, you know, if we could dig into some of that. Is AI something that is constantly going to keep improving no matter what we do? And we just need to, like, control it and make sure it's not going to destroy us.
Starting point is 00:42:19 So I have a lot to say on this. Good. a lot to say on this. So there's a wonderful explanation that comes from Beth Singler about how combination of like looking back in what's happened in science and technology to date, combined with science fiction and imaginings of the future, makes us think that there is a path that we are just racing along. And it's only a question of how fast do we get there? Who's going to get there first? And that's not how science happens, right? Science is exploration. It's communication. It's choosing things to work on or not. I think there's some interesting stuff in the history of nuclear power and how the interests of building nuclear weapons shaped the decisions we made about what
Starting point is 00:43:00 kind of nuclear power to work on, for example. And it's all, as you're saying, choices that we can make. And we don't really know what's possible in the future. But because of this idea of AI that's given to us from science fiction, and to say, I'm a huge fan of speculative fiction, but I'm in it for... Yeah, it's cool. But I'm largely in it for the exploration of what happens to the human condition given these different settings. Like, that's what I see the point of science fiction to be. And a lot of this seems to come from this idea of, no, the point of science fiction is the cool spaceships and the teleport devices and the robots. And yeah, those are cool, but that doesn't mean that it's going to exist.
So this notion of a path that we're just racing along as fast as we can is false, and we don't have to buy it. And another part of it is when they say, and you repeat, AI is just progressing, that makes it sound like AI is doing it on its own. And no, what's happened is a lot of corporations and individual billionaires have put a lot of money into gathering big piles of data and doing some clever engineering about how to manage that data and then build these learning systems that compress it into something that can do the word calculator thing. And that happened quickly, way more quickly than we thought it would. Like, Timnit and I are both quite surprised by how fast this happened, not because the tech got incredibly cool, incredibly quickly, but because it sort of got out into the world that quickly.
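As a way of picturing the "word calculator thing" just described, here is a deliberately tiny, hypothetical sketch in Python: a bigram model that only ever asks which word tended to follow this one in the text it was fed. Real large language models are neural networks trained over subword tokens on vastly more data, but the basic loop has the same shape: score plausible next tokens given the context, pick one, append it, repeat. Nothing in that loop involves understanding, commitments, or loyalty.

```python
# A toy "word calculator": it picks each next word purely from counts of
# which word followed which in a small sample text. Illustrative only;
# real large language models are far larger, but the generate-next-token
# loop is analogous.
import random
from collections import Counter, defaultdict

sample_text = (
    "the model predicts the next word the model predicts the next token "
    "and the parrot repeats the next word it has seen before"
)

words = sample_text.split()
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1  # how often `nxt` followed `current`

def generate(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # no observed continuation, stop
            break
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the next word the model predicts the next token and the"
```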
Starting point is 00:44:34 And what I see there isn't rapid scientific progress. I see a lot of money and a lot of hype. And all of a sudden someone's got the money to set up this thing so that anybody can access it, apparently for free. Although every time you do that, you're doing some work for OpenAI, just by the way. But ChatGPT is less a technological advance and more of a product that was created. and opening it up to people and prompting it in a way to maximize the sort of public shock of it and to make people think that it was extremely capable and to sort of further this narrative
Starting point is 00:45:11 that things are progressing so quickly. But there's a little bit of a comparison to, you know, Steve Jobs invented the iPhone. Well, Steve Jobs didn't invent any technology. He combined a lot of technologies, some of which were invented by the federal government 30 years earlier, and like put them into a very well marketed product with a with a shiny wrapper and like a really nice clean store you could buy it in. There's maybe a little bit of a comparison there. The big thing with ChatGPT, the reason it just exploded all over the media was anybody could play with it, which was a brilliant PR move on OpenAI's part because it meant that everybody was doing their hype for them. Right. I mean, I would say you only need to try to play with it in any other language for like two minutes.
Like Tigrinya. It doesn't even, it's complete gibberish. You know what I mean? So I'm like, well, I guess the AGI speaks English. We've already assumed that, you know what I mean? But it's, you know, it's so crazy how many people I've been talking to, who are engineers and researchers and stuff, who parrot exactly the talking points of OpenAI and, you know, Anthropic and similar organizations that are making this point that everything is going to be built on top of it. And it's going to trump like any other kind of development.
Starting point is 00:46:37 So most of GDP is going to be dependent on that. And so whoever is not, you know, on top of, whoever does not have that technology by like 2025 or something like that, they're not going to be able to catch up because it's just going to be accelerating so fast. You know what I mean? This is the kind of stuff a lot of people are saying. You hear this argument that in response to the pause letter,
Starting point is 00:47:00 you heard people say, well, if we pause, China's going to keep making the AI and they're going to use it to kill us. I'm like, what are they going to use GPT for? Are they going to write more shitty fan fiction with it? Are they going to output more bad recipes? It's unclear what it's being presented as some threat to national security when the actual capabilities of these large language models, they're cool. They're very cool tech. There's some cool stuff you can do with them. But this is not launching nuclear warheads or waging national security battles. But I'm sorry, please continue
your point. No, I was just, that's basically it. That's all I was going to say. And the really surprising thing to me, and what's been a huge lesson, I guess, in history or current affairs, whatever you want to call it, is how few people can drive this. That is really what is unbelievable to me. A few billionaires and a few people in the space of deep learning together can just drive this entire thing, the whole media echo chamber, the whole research direction, the entire Silicon Valley ecosystem.
Starting point is 00:48:15 And it's been extremely surprising to see that. Yeah. And disappointing to see Microsoft and Google. And I've got criticisms of these corporations, but they were pretty staid and stodgy, especially Microsoft, like jumping on this. We should maybe talk about the sparks of AGI. Do you want to talk about the sparks? Yeah, I was going to say, I wanted to cue you to talk about the sparks of AGI paper. What's the sparks of AGI anyway? What is the sparks of AGI paper? This sounds fascinating. I'm the host. Emily,
Starting point is 00:48:45 of AGI paper. This sounds fascinating. I'm the host. Emily, what is the sparks of AGI paper? Something that takes the form of a research paper. It's not peer reviewed. It was just thrown up on archive, which is this place that was initially developed, I think by physicists to help disseminate research faster. And what's happened in machine learning and computer science more generally is that it's become this place to like put things up as if they were research papers and just bypass peer review entirely. And so there's a whole problem over there. So Sparks of AGI is one of those papers published by Microsoft Research, a big group of people there. Including the head of research was an author too, Eric Horvitz. Did not notice that. And they took an intermediate version of GPT-4 because Microsoft is in bed with OpenAI on this. This is – Microsoft can't be above the fray here.
Starting point is 00:49:32 They are part of this. Their funding was like $10 billion to OpenAI. And then GPT-4 is now driving the Bing chat thing. And remember that they have a search engine and it's called Bing? Yeah. Right? Yeah. Right. Yeah. And now it's, and now it's got a super clippy embedded in it where it talks back to you. Yeah. Right. Yeah.
Starting point is 00:49:51 Super clippy. And so they have, as researchers at Microsoft access to a sort of an interim version of GPT-4 and they use a whole bunch of these benchmarks on it that were developed by different people trying to test natural language processing systems, generally without really good construct validity. That is, what is this thing supposed to be testing and how do we know it's actually testing that, especially given a large language model as the thing taking the test? And it's a 154-page thing where they try GPT-4 and all these things and they say, yeah, it looks like we have the first sparks of artificial general intelligence here.
Starting point is 00:50:28 And that's that's what the paper is. But it gets worse. All right. So my first comment on seeing this was remember when you used to go to Microsoft for stodgy but basically functional software and the bookstore for science fiction. Well, now we've got this like, maybe it's like a fan fiction to GPT-4 that's been published as if it were a research paper out of Microsoft. Well, so what is so ludicrous about the idea that large language models are a step on the way to AGI or the sparks of AGI. Because as you point out in the Stochastic Parrots paper, it looks like AGI to us, right?
Starting point is 00:51:12 It passes if you want to loosely interpret the Turing test, right? Can it fool a human into thinking they're talking to another human? Yes, it can do that. You could trick somebody using its output. And so for a lot of people, that's what they were taught to believe is a step on the way to AGI. That's like the version you learn in college. And so, and it certainly seems that way to people. So what, what is, you know, what are the barriers
Starting point is 00:51:38 that stop it from being that? Can you start with the first page? The first, the first sentence? Yeah, exactly. Of the paper? The first sentence of the paper? I have to get the paper up so that I can do that for you. But the first problem with it being the first steps to AGI is that AGI is undefined. It's what Timnit was describing before as an unscoped technology. So that's first steps to nowhere, number one. Number two, we know what a language model is. It's a word calculator, as you say it, right? So the fact that it seems to be giving us something coherent, that's all us gladly interpreting it as if it were coherent and nothing on the side of the actual system. I was trying.
Starting point is 00:52:24 I'm really trying hard to get you to talk about what they cite. I'm getting to it. I'm getting to it. It is so atrocious. I have to get my, I love how much fun you guys have with this roasting these papers. I love it. I love it when academics get spicy and you guys are delivering the goods. That's right. You know, we just read the footnotes. That's how we get spicy. No, but yeah, it's always read the footnotes. So the pause letter, by the way, cites the sparks of AGI. This is one of its academic sources for the danger that's coming, right? But it's not peer-reviewed and it's fan
Starting point is 00:53:05 fiction to a machine, right? All right. So sentence one, intelligence is a multifaceted and elusive concept that has long challenged psychologists, philosophers, and computer scientists. Sentence two is where it is to me. An attempt to capture its essence was made in 1994 by a group of 52 psychologists who signed on to a broad definition published in an editorial about the science of intelligence. So I thought, hmm, let's go look at what this definition is and where this came from. That editorial was published in reaction to the public outcry and discussion about the book called The Bell Curve. Do you remember this book? By Charles Murray.
Starting point is 00:53:47 Uh-huh. Yeah, I remember this book. So we've got a bunch of psychologists. Yeah. No, please go ahead. No. Yeah. A bunch of psychologists who are saying, okay, we've got to wait in here because this discussion has gotten out of hand. And I'm like, okay, okay. What are they saying needs to be established? What they say in this terrible editorial is no, no, no. IQ is real. These measures of it are good. They are not racist. And yes, there are group level differences in IQ where Jews and Asians are the smartest, but we don't know exactly how much. And then you've got the white people centered around 100.
Starting point is 00:54:23 And then they say, but the black people are centered around 85. And this is flat out what is in that editorial that this group of researchers at Microsoft decided to use as the basis for their definition of intelligence. So they can say, yes, GPT-4 is the first steps on the way to artificial intelligence using this definition. So it's totally foundational. And it is shocking to me that nobody in that group of authors thought, maybe we shouldn't be pointing to race science and just like flat out racism posted in the Wall Street Journal as the basis of what we're doing. And the more charitable interpretation here is none of them
Starting point is 00:55:05 actually read what they were citing like that would be better than reading it going yeah this seems okay but you know it's eugenics all the way down i mean so yeah how do they know that chat gpt is or that gpt4 is a is intelligent then are they checking to see if it's Jewish or what are they? Like if that's what they're citing and they're citing a paper that says that, you know, Asian people and Jews are more intelligent, then that's a pretty easy thing to test. They could just test if the AI is circumcised. I'm sorry. I'm a comedian. I apologize.
Starting point is 00:55:41 The chatbot circumcised. Yeah. But so, I mean, in addition to the shoddy research, though, like what is it about these language models that fails so profoundly? Like one thing to me is that I keep coming back to, and I even wish I had put more clearly in my own YouTube video on this subject, is that no AI that we have has any kind of like understanding that other minds exist, you know, like, and that's like a foundational part of intelligence is you and I are talking to each other, us three, and we each have our own minds and they're interacting and they're communicating. And when you are communicating with chat GPT, you are imputing a mind to it. It's almost impossible to use it without imagining that there's a mind in there, even though there isn't.
Starting point is 00:56:28 But it is not imagining a mind talking back to it. It's just chopping up words and phrases. And, you know, like a self-driving car. What is the foundational problem with self-driving cars? They can't communicate. They can't make an eye contact with another person and go. You know, I had a whole interaction with a car the other day where I was like, oh, this car needs to pass. I was walking in the street where there's no sidewalk. This car needs to pass me. I'm going to go stand where in the parking part, you know,
Starting point is 00:56:53 where the other cars are parking. And then I look back at the car. It hasn't gone past. I look back and the lady points and she goes, no, actually, I wanted to park there. And I said, oh, now I'm in your way. I need to go get in the street because you were trying to use the parking lot. You're trying to use the shoulder. There's no way for an AI to have a communication with a person like that, to know that there's a person with intent who I need to deal with in order to decide what the machine should do. And that seems to me to be like a extremely fundamental part of intelligence that no level of, hey, let's make the language model better is ever going to accommodate. Because it's it's all it is, is a thing that you put words in one end and more come out the other end. I imagine you might have more examples, though, of like what would actually constitute intelligence that these fail at or maybe not.
Starting point is 00:57:41 Like Octopus, Octopi. She has a whole paper on arts and good morals. Yeah. So I have a paper from 2020 coauthored with Alexander Kohler, which has the octopus thought experiment, which is why I'm wearing my octopus earrings here. Purchased from an artist on Etsy, by the way. And what we were talking about there is basically showing it doesn't matter how intelligent the thing is. It's not going to learn to understand if all it has access to is the form of the language. So to make that point, we put together this thought experiment with a hyper intelligent deep sea octopus. And credit for it being an octopus goes to my co-author, Alexander.
Starting point is 00:58:20 I was thinking dolphin. And he's like, no, octopuses are inherently funnier. And also that makes the environment more distinct from where the humans are. So hyper-intelligent deep sea octopus, two humans stranded on two separate desert islands that happen to be connected by a telegraph cable. The humans figure this out and they start doing Morse code to each other. English as encoded in Morse code. The octopus, remember hyper-intelligent, we're not doubting its intelligence, taps into that cable and starts listening to the patterns of the dots and the dashes. And then after a while, it cuts the cable and it starts sending dots and dashes back based on the patterns that it's seeing.
Starting point is 00:58:54 And it can get away with this because a lot of the communication is, you know, just sort of keeping each other company. And so if something comes back, that's good enough. Right. But then we have this point where one of the people on the island says, oh, no, I'm being chased by a bear because the thought experiment, right? Spherical cows and all that bear shows up on the desert island. All I have are these two sticks. What am I going to do? And of course, the octopus can't provide anything useful because the octopus hasn't understood, has no model of the people's world, even though we've posited it to be hyper-intelligent, right? So flipping that around to what people are seeing in the language models, our primary evidence, such as it is that these things are intelligent, is their apparent ability to understand and create coherent text. That's the only evidence that they're intelligent.
Starting point is 00:59:51 Yeah. But in fact, we know because of the octopus thought experiment that it can't be that it's just coming up with plausible next words, something that looks like an answer. And so there's no evidence for intelligence there at all. Now I frequently get asked by people, okay, Emily, so what's the test that would convince you that one of these things is actually intelligent? To which my answer is, that's not my job. I'm not trying to build one of these things. Why do we want to build those things? That's what I don't understand. Who wants to do that?
Starting point is 01:00:19 And what is it supposed to accomplish, right? AGI and then what? Like, no more climate change? Like, clean water? What? I don't, I don't understand the connection at all. That's the weird thing, because they say it's coming, we got to get ready, we need to be ready for it when it comes. But it's like, wait, why? You're making it. People are making it. If anyone's going to make it, it's the people who are telling you to be worried about it. If anyone's going to create an AGI. And why are they? What's the purpose? And so actually, this leads me to a good question to end on. Because it occurs to me a lot that,
Starting point is 01:00:57 look, I love new technology. I think ChatGPT is super cool. I played with it a ton. It's like the kind of technology I love to play with. I love to play with it, see what kind of output I can get if I mess around with it. If they had advertised it as, this is a word calculator. Here's what it does. You put words in one end and it'll make the most, it'll make a plausible sounding answer to any question you ask it. Or you can, you know, it'll imitate any, you know, you give it input and it'll imitate a plausible output. That would have been really cool. And they could have come up with a lot of very plausible, narrow uses for that, such as computer programming or other things of that nature. But instead, the industry made a marketing decision to say that this was a step on the
Starting point is 01:01:41 road to a super intelligent AI that we have to protect ourselves from. And so the question I keep wrestling with is why? What was the purpose of misleading people about what the technology can do? What were they trying to accomplish? Do you have any idea? That's the paper I just – so that's what we've been. You just wrote a paper on this? Oh, my God. You're the perfect person to ask that question. That's why I've been trying to figure out, like, why?
Starting point is 01:02:10 When did people decide, like, we have to do AGI? Because when we're thinking about large language models, for instance, I was telling Emily, I didn't have a problem with, like, BERT. That was a large language model. And I didn't really have a problem with you know them being used in components to do various things whatever it was when opening I came in the scene and started talking about these things like they are this huge super you know intelligent thing and we're gonna do AGI we're gonna you know it's gonna be like uh either amazing or utopia or apocalypse and we have to focus on the future stuff. That's what
Starting point is 01:02:46 really started driving this whole thing. And our paper with Emil, which is like the one I was talking about, was, you know, tracing back these test real ideologies back to first wave eugenicists and all of that. It basically talks about how this whole movement came about because you know so when we were talking about the connection to eugenics it definitely was not theoretical it was for instance you know the chair of the british eugenics society talking about transhumanism right transcending humanity and the cause the people who first they called it they say christened the term AGI in 2007 in a whole book that they wrote, the way they described it was transhuman AGI. You know, they think that the AGI will help humanity transcend being human and become post-human, colonize space, you know, and live in like digital mind. So when you see Sam Altman's writings, if you just read his blogs right now, he says, we're going to have unlimited energy and intelligence
Starting point is 01:03:50 before the decade is out. He writes in his blog post that we have something about human flourishing and the cosmos, the universe, you know? And so that's, that was why I had to write the paper. The final conclusion was precisely what you were saying. We were saying that there's been AI winters and such. At some point, people doing various things like natural language processing, computer vision, etc. didn't call themselves AI whatever. Just like, oh, I'm doing NLP. I don't want to be associated with these AI weirdos who always talk about building a god
Starting point is 01:04:29 because they over-promise like that. And then, you know, people see through it and then it crashes and then it comes back, right? And so now it's back and there's this whole AGI thing. And I really would love to play this game where I ask people, is this from like 1962 or 2022? Right. Like we're like, oh, my God, you will be astonished to see what we have built, you know, and in the next 20 years, people will not have to work. Right. And so this is what's
Starting point is 01:04:58 going on. And the thing is, as Emily was saying earlier, that it is super aligned with this, this super you mentioned Scientology. I want them to build the Church is super aligned with this, this super, you mentioned Scientology. I want them to build the church of test grill, you know, ideologies or something like that. It's because it really is like that. Um, they have very much like religious characteristics of like the end of times kind of things, you know, apocalyptic and utopian, but it also is super aligned with corporate interests, right? right you know if you build this like one model that can do anything for anyone and everybody just pays you and you can steal everybody's data and say that you're actually like saving humanity and creating a god oh and
Starting point is 01:05:35 you shouldn't be regulated because otherwise china's going to do the devil kind and you don't want that you want us to do the good kind like it is super in line with like corporate interest so yeah that's the conclusion that we've come uh to with our um paper that we're hoping to publish at some point after peer review so it is corporate interest but it also these the people who are doing this actually have a definite ideology of 100% of eugenics of the yeah of they come from this sort of weird world. They write this stuff. Yeah. Yeah. And unfortunately, the money's behind them. And so it's becoming everybody's problem instead of this niche little research community that could just go be weirdos on their own. Well, how do you suggest that for the public who's watching this and being flooded with misinformation, with hype about AI, how can they gird themselves against it?
Starting point is 01:06:39 And what's the best way to resist it and to think a around pushing for appropriate legislation, not the AI pause, but something that actually is governance of collection of data and synthetic media that is built through consultation with the people who are bearing the brunt of this right now. So the people who are being exploited in developing the systems, the people whose data is being stolen, the people who are getting misinformation said about them and all of this. So developing regulation collectively, but also resisting misinformation and the non-information. So with ChatGPT being set up, it looks to me that it's sort of like the oil spill in the Gulf of Mexico when the oil rig was broken and there was just oil going and going and going, right? And BP was eventually saying, look at all the birds we cleaned, right? ChatGPT is polluting our information ecosystem in the same way with non-information. And I've had people say to me, well, hasn't the horse left the barn on that? It's out there. And my answer to that is we used to have lead in gasoline and we
Starting point is 01:07:45 discovered that was bad news. So we made some regulation and now we don't. Like we don't just have to live with this. We can regulate. And one thing that I would love to see regulation-wise, like my wishlist item on this is corporations are accountable for the actual literal output of their tax synthesis machines. It's libeling someone, you get sued for libel. It's putting out bad medical information, people get hurt, you're liable. I would love to see it set up that way. Don't know if that's something that works policy-wise, but that's an idea. In terms of just on an individual level, how do you resist the AI hype? I think the questions to ask are, okay, what's the actual
Starting point is 01:08:23 task here? What's the evidence that this machine is well matched to that task? How is it evaluated? Can I see the data that it was evaluated on? And who's really benefiting by using this system? And who gets hurt when it's wrong? Who gets hurt when it's right, but people are sort of using it as a shortcut and so on? That's a wonderful answer. And I think, I think that last question is who is benefiting and who is actually getting
Starting point is 01:08:52 hurt right now is one of the most important questions we can always ask ourselves about the world, but especially in this case. I can't thank both of you enough for coming on and, and you're, I mean, you're just the perfect people to speak to this topic. And it was an honor to have you. Where can people follow your work? And what's the most important thing of yours you think they should read? Is it the On the Dangers of Stochastic Parrots, your favorite, your famous paper? That's worth a read. We are in the process of creating an audio paper of that. We've recorded it and I have to edit it together.
Starting point is 01:09:25 I'm sorry I haven't done that. I have to release a recording of our event, Stochastic Parents Day. Yeah. So probably the best way to find me is probably my faculty webpage at the University of Washington. And from there, you can see links to everything I do in the media and my papers and stuff like that. I am for the moment still on Twitter and also on Mastodon and that's available through my webpage. So if you search Emily Bender, University of Washington, you'll find me.
Starting point is 01:09:52 And Timnit, how about you? Yeah. So DARE, DARE Institute website, which is not that much information right now, but in a couple of days revamped, we will see much more information there with a lot of our work and other things. I'm also on Twitter. So if you want to hear me rant, I'm there, but also a mess it on.
Starting point is 01:10:11 I'm a huge fan of the Fediverse these days because I, I don't know. I'm not worried that some random billionaire is going to take over that anytime soon. I ventured into LinkedIn and I'm trying to stay there, but it's kind of difficult, but I'm there too. Awesome. Well, Timnit and Emily, thank you so much for coming on. It's been a true honor to have you. Thank you for having us and for raising these issues. Well, thank you once again to Emily
Starting point is 01:10:39 Bender and Timnit Gebru for coming on the show. I hope you loved that conversation as much as I did. And if you did, I hope you will consider supporting the show on Patreon. Head to patreon.com slash adamconover and join our wonderful community, including folks who back this show at the $15 a month level. And I'd love to read a couple of your names.
Starting point is 01:10:57 We got Hydra Cloric, Victor Densmore, Francis Amadar, Kill Me Inc., Christina Mendez, Akash Thakkar, Frank F. Kling, Robin Dumlap, Jeffrey McConnell, Nississy Pods, Brian Taboney, Leslie Koch, Sean Garrison, Raghav Kaushik, Always Sunny, and Ashley Molina Diaz. Thank you folks so much for your support. And if you want to join them, head to patreon.com slash adamconover to get every episode of the show ad-free and a bunch of other goodies as well. We even do a live book club. Would love to see you there. I want to thank our producer, Sam Rodman, our engineer, Kyle McGraw, and the fine
Starting point is 01:11:30 folks at Falcon Northwest for building me a wonderful custom gaming PC that I record every episode of the show on. You can find me online at adamconover.net. You can find my tour dates at adamconover.net slash tour dates. Come see me do stand up all across the country. And of course, you can find me on social media at Adam Conover, wherever you get your social media. Thank you so much for listening. And we'll see you next time on Factually.
