It Could Happen Here - How Scared Should You Be About AI?

Episode Date: April 4, 2023

Robert sits down with Noah Giansiracusa, math professor at Bentley University, to talk about the reasonable and unreasonable fears people have over AI. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:49 Hey, welcome to It Could Happen Here, a podcast about things falling apart.
Starting point is 00:01:47 And today it's kind of going to be a conversation about, is shit falling apart? Are we all about to be devoured by a rogue AI? Is your job about to be devoured by a rogue AI? These are the questions that we're going to, you know, talk around and about and stuff today. And with us today is Noah Giansiracusa, a math professor at Bentley University. Noah, welcome to the show. Thanks for having me. And we're talking right now because there's an article that was put
Starting point is 00:02:17 up in the New York Times on March 24th, 2023, titled, You Can Have the Blue Pill or the Red Pill and We're Out of Blue Pills, which is a fun title, by Yuval Harari, Tristan Harris, and Aza Raskin. And it's an article that is kind of about the pitfalls and dangers of AI research, of which there definitely are some. I enjoyed your thread on the matter. I thought it was a lucid breakdown of the things the article gets right and the areas in which I think it's a bit fearmongery. So yeah, I think that's probably a good place to start. Unless you wanted to start by just kind of generally talking about where you kind of are on AI and what you kind of think, you know, the technology is advancing towards right now?
Starting point is 00:03:08 Yeah, I mean, I think I can probably answer both those questions at once, because part of why I enjoyed writing that thread dissecting the article is, I just had the strangest feeling reading it that I agreed with it so much in principle, and yet somehow objected to it so much in detail. Yeah. And thinking about that article helped me think about my own feelings on AI, which, you know, every day of the week is slightly different because so much news happens. Yeah, I found myself overall deeply frustrated that I agree with the central conclusion, which is that maybe we shouldn't be just like plowing headlong into this and should be more careful when we screw around with technology like this,
Starting point is 00:03:43 which I agree with. And I feel like that should have been the thing we did with, like, I don't know, Facebook, Twitter, all of these things. It's less, my obsession is less with like the specific dangers of AI and more with,
Starting point is 00:03:55 well, we keep letting these guys who are fundamentally like gamblers with venture capital money, um, really put our society through the wringer without ever asking, should we do any research on, maybe, how social media affects children, and all of these different things. And it's right that, yeah, we should be concerned about what these people are going to do with
Starting point is 00:04:15 AI, but also, why now? Why just now? Yeah, and that raises a really good point, which is what's different now versus what we've been experiencing with social media. And just to give your listeners some context, one of the three authors on this New York Times article is famous for writing this book, Sapiens. That's a sweeping history of humanity. And the other two are actually most famous for the Netflix documentary, The Social Dilemma. So they really are in this camp of warning people about social media algorithms.
Starting point is 00:04:45 And that's exactly as you're saying, that's sort of this thing that we've been dealing with probably quite poorly. And now we're kind of moving on to the next societal risk, which is AI. So that is a really important question of what's different now. And I think that's one of the things the article tries to address, which is many of the problems that we already have with algorithms, data-driven algorithms, and even AI as it's used in social media are still happening now. But somehow things feel like they're spiraling out of control. Yeah. And I think, I mean, honestly, I think a lot of this just has to do with culturally what our touchstones for AI were going into this, you know, which are Skynet, you know, like it's that sort of thing.
Starting point is 00:05:25 And you do see, I feel like the uncredited fourth author on this particular article is James Cameron, because there's pieces of it throughout this where like there's, it opens actually pretty provocatively. Imagine that you are boarding an airplane. Half the engineers who built it tell you there is a 10% chance the plane will crash, killing you and everyone else on it. Would you still board? In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future AI risk. Half of those surveyed stated there was a 10% or greater chance of human extinction from future AI systems. Let's zoom in on that.
Starting point is 00:06:04 Yeah, let's talk about that. Because what I tried to do in my thread was go through all the claims and assertions and really pause and say, hold on. But that's a great one to start with because there's a lot to dig into right there. Yeah. So first of all, there's a huge difference in that airplanes are based on science and physics and things that we understand pretty well. There's a lot to it. And there's been millions of flights.
Starting point is 00:06:28 So you have a lot of data. You know how many planes crash and how many don't. Maybe one engine goes out. You can do the statistics and see, oh, you know, whatever percent of planes without that engine still land safely. The problem with AI is we're just guessing. Yeah. Right?
Starting point is 00:06:43 There's no way to know 100 years from now or 10 years from now what it's going to do, what the real risks are. So we speculate. And that's not uncharted territory: when nuclear weapons were first introduced, people had to guess and speculate too. But the danger, I think, is putting it in that same category as things like airplanes, or climate change, which I like to think about. Climate change, when you see these, you know, IPCC, I forget the acronym, these reports, that's based on thousands of scientists digging into thousands of published papers and
Starting point is 00:07:15 all this data, really modeling the environment. There's a lot of meat and substance to it. The problem with the AI is it's mostly people, I hate to say it, but like me or like you, just kind of guessing and thinking, maybe this will happen. Maybe that'll happen.
Starting point is 00:10:13 The reasonable thing to say if you're in AI research is like, yeah, I have concerns that AI could cause serious negative externalities for the human race. Perfectly reasonable statement. It is physically impossible to say there's a 10% chance.
Starting point is 00:10:51 Exactly. Because it's never done that before. You can't know. I'm a math professor and I'm the first to say numbers don't have some intrinsic meaning, right? If I just say something has maybe a 15% chance, I'm just making it up. I'm pulling it out of my ass. Yeah, saying it doesn't make it true. So it's a general pet peeve I have of sort of giving a false sense of precision by using numbers that you don't really know where they came from or
Starting point is 00:11:17 they're just made up. So that's one issue: these numbers are made up, and asking a thousand people to make up numbers isn't necessarily any better than asking one or two. If the number is made up, it's made up. So that's one issue. Yeah. I also do think, and I'm not the first, I saw someone make a note, I think it was Ben Collins, who writes for NBC, on Twitter, that the fact that all of these statements about how dangerous they are, about human extinction, are coming out of people in the AI industry has started to kind of feel like marketing. That's right. Yeah, exactly. There's a little bit of buzz marketing going on here.
Starting point is 00:11:52 And I think you mentioned social media and the authors of this article mentioned social media, and we have to look to the past, right, to understand the future. I think that's the only way to do it. So what was one of the biggest scandals in social media was Cambridge Analytica. And as we probably remember, this was this data privacy scandal where a bunch of data was collected from Facebook users that shouldn't have been, you know, people didn't realize that their data was being collected, they didn't approve it. And it was used for this election company, or this political company that was trying to profile people and influence campaigns towards Donald Trump, towards Brexit. So this was a huge scandal. And, you know, Facebook was fined $5 billion or something very justifiably.
Starting point is 00:12:36 But I would say what it was in retrospect was a data privacy issue. People's personal data was leaked when it shouldn't have been. The problem was there was so much fear and fearmongering over it that people felt this data was used by these sort of algorithmic mind lasers to kind of know us in such great detail and trick us into voting for Donald Trump and targeting us. And the jury's still kind of out, but most of the evidence looks like Cambridge Analytica just wasn't that effective. They just couldn't do it. And it turns out you can know a lot about a person, a lot about their data, and it's really hard to influence them, to change them. So what happened, I think, was there was a lot of alarm spread, rightly so, about the tech companies.
Starting point is 00:13:21 They have too much power, too much data. They know too much about us, and this horrible thing happened. The problem was a lot of the alarmism then actually reinforced this aura of power, of godlike power, that the tech companies have.
Starting point is 00:13:33 People criticizing them actually gave them more potency than they deserved. And then suddenly Google and Facebook and all, they had the, well, it wasn't sudden, it kind of built up, they had this aura that our algorithms are so insanely powerful and we have to make sure they stay in the right hands and we can do so much.
Starting point is 00:13:51 And that's unfortunately what I see happening now a lot. And that is kind of the setting for critiquing this article. I absolutely agree that this stuff, AI, is risky. I absolutely agree that we could go down a dangerous path. But once we start leaving firm ground and speculating wildly and using the Terminator stuff that you described, even if you think you're
Starting point is 00:14:11 criticizing the tech companies, you know what you're doing? Giving them the biggest compliment in the world. Saying that you guys are godlike and you've created these mighty machines. You've created a deity, which is very similar to the language this article has at the end. And I think it's kind of worth, like, as you're bringing up, there are real threats.
Starting point is 00:14:31 There are real threats that are immediately obvious. The threat that a lot of writers are going to lose their jobs because companies like BuzzFeed decide to replace them with, you know, ChatGPT or whatever. The fact that a lot of artists are going to lose out on work because their work's been hoovered up and it's being used to generate new work. Like, these are very real and very immediate concerns. They're not hypothetical. We don't have to theorize about the AI becoming intelligent for this to be a problem. These are things we have to immediately deal with because it puts people at risk. It's the same thing with, like, you know, there's a lot that gets talked about with Cambridge Analytica, with kind of like the different Russian disinformation efforts. But when I think about the stuff that was happening in the same period that worries me more,
Starting point is 00:15:15 one of the things that occurred is, because there was so much money to be made if you could get certain things to go viral on YouTube, companies used tools that weren't wildly dissimilar from some of these to basically generate CGI videos based on kind of random terms that they knew were likely to trick the algorithm into trending. And God knows how many children were parked in front of these like very unhinged videos for hours at a time, where like they would start watching some normal kid musical video or something, and then they're watching like the disembodied head of Krusty the Clown bounce around while some sort of nonsense song gets sung. And it's like, what is that actually going to do to kids? We don't know.
Starting point is 00:15:52 That's unsettling, though. And it's deeply unsettling. Yeah. Obviously, one of the things that this article is not wrong about is that if we kind of leap forward into this technology with the kind of abandon that we're used to giving the tech companies, there will be unforeseen externalities that we can't predict right now that will be very concerning. I just don't think it's Skynet. That's what was so challenging, not just with that article, but with, I think, the moment we're having, is I do agree very much in spirit. I agree with the recommendations. We need to slow down. We need to be more judicious and cautious. We need to really consider these. But again, if we overhype the technology, we may be doing ourselves a disservice by empowering the very entities that we're trying to take power from. And as an example of that, can I read a quick quote from the article for you?
Starting point is 00:16:50 AI's new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, AI is seizing the master key to civilization, from bank vaults to holy sepulchers. That's right. And that, I mean, that is funny, and you're right to laugh. But let's actually zoom in a second. And I think this is such a tempting trap, that AI is super intelligent in some respects, right?
Starting point is 00:17:15 Yeah. It's done amazing at chess, amazing at Jeopardy, amazing at various things. ChatGPT is amazing at these conversations. So what happens is it's so tempting to think AI just equals super smart. And because it can do those things, and now look, it can converse, that it must be the super intelligent conversational entity. And it's really good at taking text that's on the web that it's already looked at and kind of spinning it around and processing. It can come up with poems and weird forms, but that doesn't mean it is super intelligent in all respects. For instance,
Starting point is 00:17:50 one of the main issues is, to hack civilization, to manipulate us with language, it has to kind of know what impact its words have on us. And it doesn't really have that. It just has a little conversation, a text box, and I can give it a thumbs up or thumbs down. So the only data that it's collecting from me when it talks to me, any of these chatbots, is did I like the response or not? That's pretty weak data to try to manipulate me. You know, it's so basic. That's not that different than when I watch YouTube videos. YouTube knows what videos I like and what I don't like. Would you say that YouTube has hacked civilization? No, it's addicted a lot of us, but it's not hacked us.
Starting point is 00:18:28 Yeah, people have hacked YouTube, and that has done some damage to other people. But the thing is, and that's part of why, while I have many concerns about this technology, it's not that it's going to hack civilization, because we're really good at doing that to each other. There's always huge numbers of people hacking bits of the populace and manipulating each other, and there always have been. That's why we figured out how to paint. Like, it's... I do think that there's an interesting conversation to be had about the part of why people are kind of willing to believe
Starting point is 00:19:07 anything is possible with this stuff is that for folks who were just kind of living their lives with a normal amount of attention paid to the tech industry, it seems like these tools popped out of nowhere a couple of months ago, right? It feels like, oh, there's just suddenly been this massive breakthrough. And the reality is that all of the stuff that people, you know, ChatGPT, these different AIs that everybody's talking about, this is technology that people have been pouring resources into for years and years and years and years and years. And that's why it's able to do some of these amazing things that we've seen. But I don't think it means that in a month it's going to be 1,000 times smarter. It's good. It's a process of labor, and it was finally ready to be unveiled to the extent that it has been, maybe.
Starting point is 00:19:51 That's right. And a good example is GPT-4, which recently came out. There was GPT-3 before and ChatGPT. And there was so much speculation that GPT-4 is going to be, again, this godlike thing that just brings us to the singularity. And honestly, it's done better at tests. I forget the numbers, but maybe one of them got a 20% grade on some tests, and this one got an 80%. So that is a significant improvement, right?
Starting point is 00:20:16 If you're a teacher and your students improve that much, you should be happy. You're right. But as you said, is that a thousand times better? No, even though the machine is much bigger, with much more data. And it just shows that, yeah, the reality is this is incremental progress going at a very fast rate, which is very unsettling. Even for those of us following the field closely, we're experiencing that kind of vertigo that you're describing, that, whoa, where did this come from? So even within the field. And you're absolutely right. If you're just at home, you know,
Starting point is 00:20:44 not paying attention for a week or a month or a year, suddenly this stuff pops up. It is disorienting. But one thing I think that's helped me at least kind of clarify what, not even answering what the risks are, but just understanding the different camps of why certain people are reacting differently and why even the people afraid of AI seem to be now fighting amongst each other and why it's getting fractured is, are you more afraid of this AI used as a tool by people or are you more afraid of it kind of taking on its own autonomy and kind of going rogue and doing its own things?
Starting point is 00:21:18 And I'm very much afraid of people using it. I think big companies are going to use it and there's going to be a lot of problems just like we saw with social media. People will get addicted. Democracies will be flooded with misinformation. It'll be weaponized by various actors. There'll be bot accounts. So I am very concerned about it being used. Basically, it performing the job it was told to do, but it'll be told to do dangerous jobs, either making money or making discord. There's another group of people that are more worried about the AI somehow deciding on its own to do things, to take over. And that's where, you know, I can't rule it out, but that's where I
Starting point is 00:21:56 kind of am skeptical. Let's focus on how people are using it for now, for the foreseeable future. I don't think we need to worry yet, at least, about the AI somehow having a life of its own and stabbing us in the back and enslaving us. Because there's just so much that can go wrong before we even get to that point. Yeah. And it's not, that's exactly like, it's a threat triage kind of thing, where like, is it theoretically possible that one day human beings could create an artificial intelligence that is capable of having its own agency that is malicious? Yeah, sure, I guess. I mean, maybe. But man, there's a lot of us that are very malicious right now that are actively trying to harm other people at scale. I'm concerned about how they will use AI to do that. I think botnets are a really good example. One of the things that this newest generation of AI tools allows is more realistic and intelligent bots than I think have been accessible at scale before. And that's a very real concern. Sorry, when I kind of war game this back and forth with myself, one thing that is oddly comforting is like, well, the shared commons that we all inhabit of, like, ontological truth is already so shattered that there's only so much damage
Starting point is 00:23:25 I feel like adding additional bots and additional disinformation can really do. Like, I don't... One thought on that, though, because I've been digging into that too. I've been, you know, trying to ponder how I feel about that. Because a lot of this, I don't know, you know, I'm just kind of guessing. Yeah, nobody does. But I do think if you go back to like 2016, to earlier versions of the internet, you know, leading up to Donald Trump's election,
Starting point is 00:23:43 I think there was a lot of wild west to Google, to social media, to all these things, right? Fake news was just like piling up to the top of Google search results. That election was so monumental, and sent such a seismic shockwave through tech that fake news and misinformation might've played a role, that they really had to do something. And I think some companies were more effective than others. I think Google put a lot of effort into making sure authoritative sources rise to the top. So what that means is when you now go online and you Google for medical information, the top results you get are WebMD or some official CDC, your government thing. They're pretty decent, reliable. It's not to say there isn't all that crap on the internet,
Starting point is 00:24:23 but Google has done a pretty good job of having the good stuff float to the top. And that's the information that people see. So what I'm worried about is now we might be kind of resetting ourselves back to 2016, where when you're talking to these chatbots that are trained on all the internet, I don't know if the WebMDs and the CDC type of information is necessarily going to float to the top. Maybe they'll work that out. But I'm also worried that OpenAI or Google or Microsoft or wherever, they'll have ones that are pretty reasonable and kind of, you know, tuned to appeal to a lot of people.
Starting point is 00:24:57 But Elon Musk might build his own competitor, one that might be really tuned to elevate the right-wing sites. It'll live in your car. Yeah, who knows?
Starting point is 00:28:15 So I have been messing around, I mean, you have been doing so in a much more rigorous manner, I'm sure, but I've screwed around with a couple of different AI chat and search engines. I use Phind, P-H-I-N-D, sometimes. I've been playing around with Bing. And one of the things I've noticed is that if you ask it like, hey, summarize for me why the Battle of Hastings mattered, you'll get a reasonably decent answer. But if I ask it, like, I don't know, specific questions about myself, I noticed at first when I did it, I would get some really weirdly colloquial vernacular from it explaining things. And I realized it was just pulling answers directly that fans had asked about me on the subreddit that this show has. And so when I think about like ways in which to game the system, well, you make a bunch of bots, you have them post questions and answers that are supportive of this specific product line or whatever on a subreddit, and hope that it gets picked up, like scanned by an AI.
Starting point is 00:29:10 And that becomes part of its, like, answer for, you know, what happens if, you know, I can't stop itching or whatever. I don't know. But, like, obviously you can see, using them, ways in which these can and will be gamed to some extent. You know, it's always kind of a Red Queen sort of situation, where you have disinformation, people fighting disinfo, and you're always running as fast as you can just to stay in place. That's right. And that is, that brings up another issue, which I do feel like this is possibly really tipping the balance, in that it takes a certain amount of resources to create misinformation. It takes a certain amount of resources to debunk it, right? A journalist has to sit down, Snopes has to write
Starting point is 00:29:50 a little piece about it. And the problem is with this AI, it's suddenly just dropping the price of creation down to essentially zero. Anyone can create essentially a limitless supply of quasi-information that may or may not be true. But the problem is, is the price of journalism, of debunking, also going down? Maybe by 50 percent, right? Maybe it takes you half as much time to write an article. It's not going to zero. No. So that's the balance: creating stuff has gotten a lot cheaper; detecting, debunking, doing proper journalism has gotten a little bit cheaper. So I'm worried that, that's, journalists are already stretched thin, and this is going to stretch that attention.
Starting point is 00:30:34 That's by far my biggest concern, because it's not just, and that's obviously a significant factor in it, there will be more disinformation. There will not be more journalists, in part because I think AI is going to take jobs from particularly low-level writers. It's not going to replace prize-winning columnists at the New York Times. It's not going to replace, like, guys like me who have a very long and established, you know, career of doing the specific thing that we do. But I think back to when I got started as a journalist, as a writer, it was as a tech blogger, and I had an X number of articles that I had to get out per day. And obviously, like, my boss was essentially trusting that with that many articles, I'd have a few that did well on Google.
Starting point is 00:31:10 And that brings in traffic and that brought in money. And there's a degree to which you're just kind of doing SEO shit. But it's also, I conducted my first interviews for that job. I went to trade shows for the first time. I did my first on-the-ground journalism for that job. It taught me how to write quickly and in a polished manner. And I was not writing anything that was crucial to the development of humankind, but it made me into the kind of person who was later able to write things that were read by people all over the world and that had an influence on people. And I worry about
Starting point is 00:31:43 the brain drain, not just among journalists, but among writers and among artists, you know, people who do illustrations and stuff, eventually musicians, at least some kinds of musicians, will probably also run up against this, where the stuff that made it easy for people breaking in to get a little bit of work that would hone their skills and allow them to, you know, live doing the thing that they're interested in is going to disappear. And more and more of the stuff that we kind of casually, low-level consume, not our high art, not our favorite movies, not our favorite books, but the stuff that we encounter when we stumble upon a webpage or like in a commercial or whatever, will be increasingly
Starting point is 00:32:21 made by AIs. And that AI will be pulling from an increasingly narrow set of things that humans made, because fewer humans will get that entry-level work. And that is, there's something concerning there. That is something that worries me about the future of just creativity. Yeah. And I think, I mean, two points. One is just to kind of be devil's advocate a little bit, because I do sympathize and I think you're right, but a little bit of devil's advocate is, it might be, on the flip side of the coin, that there are people that feel like they have artistic imagination and desires,
Starting point is 00:32:53 but lack the technical ability. And suddenly they can paint, so to speak, by using these AI image generators. Maybe someone has some form of dyslexia, or English is their second language, or they're even a native speaker without any of these obstructions who just finds the writing process difficult. And maybe AI enables them to be a writer, to contribute. So I could see, you know, there's going to be the pros and the negatives, and I don't know how that balances.
Starting point is 00:33:21 But I think you're right. That's sort of like the passion project view. From a professional view, I do see the profession narrowing. If journalists are expected to work twice as quickly because they're all using chatbots, there's probably going to be half as many of them. Right? I mean, that's the economics. But this brings up a bigger issue, which is, I do think what you're hitting on is there are these long-term risks, that maybe AI is going to fuel this rebellion of robots and, you know, maybe. But again, we have a social, political, economic world we live in. And I just think let's really focus on the issues we have now. That's not discounting the future. It's not like, let's burn a bunch of carbon-emitting fuels
Starting point is 00:34:05 because who cares about climate change? That's our grandkids' problems. Yeah. This is different. It's like, let's think about the jobs, the world. I mean, another way to put this is if we mess up our economy
Starting point is 00:34:16 and mess up our democracy by people losing jobs and mass protests and losing trust in the government and there's just an erosion of truth, we're not going to be able to handle climate change or any of these big AI, you know, the singularity type of risks. So what I feel like is, let's focus on what keeps our economy and our sanity and our humanity. Let's keep this fabric of society together now so that we're more equipped in the future to handle
Starting point is 00:34:45 all the risks, AI and otherwise. But this goes back to what you're saying, which is these are real issues in the short term. And if we don't address them, if we get distracted by the long term, we're not going to be ready to address the long term. Even if we think about it now, we'll be so distracted and so dismayed. Yeah. So I think we have to be practical here. I agree. And I am also, I think it's a valid point that you make about the fact that while these are tools that will reduce options for some people, they're also tools that create options that can be used for the creation of art, of culture.
Starting point is 00:35:20 I do think some people I know have brought up Photoshop when I talk about my concerns with AI and are like, you know, there were a lot of people, draftsmen and whatnot, who were concerned when Photoshop hit because it was a threat to some of the things that they did for money. And Photoshop effectively has created whole forms of art that didn't exist, or didn't exist in the same fashion, before it did as a tool, and tools like it. And I think it's kind of worth, I don't want to be kind of just on the edge of tragedy here, you know. There's a lot of different ways this could go, and they're not all bad. I think we're all used to calamity right now, so much so that we potentially expect it in situations where it's not the inevitable outcome. Well, I mean, I think one way to kind of boil a lot of that down is we can adapt, we just need time to do so, to many things. And what's really challenging and
Starting point is 00:36:19 frustrating now is the pace is so fast. It's not just an illusion. It's not just, oh, if you don't pay attention to AI, it really is fast. It's very, an illusion. It's not just, oh, if you don't pay attention to AI, it really is fast. It's very, very hard for us to adapt. So just thinking of the internet, we got a lot, like individuals as users and tech companies got a lot better at dealing with clickbait, right?
Starting point is 00:36:36 YouTube was tons of clickbait and they figured out ways to demote that to some extent. We got a lot better at keeping fake news out of the high search rankings in Google, like I mentioned. A lot of these problems that came up were not perfectly addressed, not even close, but there was significant progress. And that's often understated. But if these problems are coming so fast and so intense, it's a lot to adapt to. And that's what's really the challenge is the pace. And I think we're seeing a very, very breakneck pace that's
Starting point is 00:37:03 really hard. Now, does that mean you're on the side of, like, Elon Musk and some of those folks who just signed that letter being like, maybe we should put a pause on AI research? Because, you know, I'm not 100% against it. Again, I kind of am like, man, I wish we'd been having this conversation when Facebook dropped or YouTube dropped. But I don't think that's a realistic thing, I'll say that. But I do think, yeah. Yeah. So I would say, no, I'm not in favor of that. For one thing, I mean, in a very practical sense,
Starting point is 00:37:39 you think all these companies that are putting billions of dollars into these investments in AI are all going to sit around saying, you know what, let's just not do this for a few months? No, of course not. So here's what I think. They're not going to slow down. What's going to happen is going to happen. Even if some players decide to be responsible and slow down, guess what? That means the only people plunging ahead are going to be the irresponsible ones. So what I think we need to do is, I don't think we can really slow that down, so what about the flip side? I think we need to accelerate public education on artificial intelligence. I think we need to accelerate government legislation, regulation, international
Starting point is 00:38:16 cooperation. I don't think we can solve this by slowing AI down. I do think we need to find a way to speed up our democratic processes. It's taken us how many years to pass basically nothing about social media in the US, and get some mixed results in Europe. That's the problem, right? If we could work faster, then I think we could keep up. And I think that that's actually the long-term practical survival thing I hope we get from this. Like, yeah, we've always needed to be more careful about the things that we suddenly expose billions of people to. It should have happened before now, but I hope the fact that AI, because of James Cameron, is coded into our brains to be something that triggers a little bit of panic in people, I hope that rather than reacting with panic, it leads to a more
Starting point is 00:39:09 intelligent and considered state of affairs when potentially embracing technologies that are going to change life for huge numbers of people. That's right. And that is, I think we have an opportunity here to experience that and explore that and try. And that is kind of what I was aiming for in that thread is, again, I love that article that you mentioned in the beginning, but if we start going down this road of hype,
Starting point is 00:39:32 there is a danger that we're going to fall into these traps. And I think let's stay grounded. Let's stay practical. Let's really identify the risks. Not that I'm some guru and know what they are, but it's almost easier to see what's not true than what is true. And that's, I think, let's all try to police each other and make sure we're focusing on practical things
Starting point is 00:39:50 that really are manageable, that really are genuine risks, that are impacting people, that are impacting people today, and especially ones that are impacting marginalized populations. Yes. So I think, let's hope we learn these lessons.
Starting point is 00:40:04 And I am not optimistic, but I'm not as cynical. I think there's a lot of important discussions happening now that let's just say there's a lot more discussion now than we had with social media and maybe that's a good thing. Yeah. Well, I think that's a good note to end on.
Starting point is 00:40:20 Noah, did you have anything you kind of wanted to plug before we roll out here? No, I just, I think it's a great topic that everyone can be involved in. And my plug is just, don't be intimidated. Don't be afraid. I am writing a book, one that's not going to come out for a couple of years, that's trying to help empower people to kind of be part of these conversations, but that's far off. I just want to say broadly, don't be intimidated, and don't fall for this narrative that sometimes happens in tech communities that, oh, you know, I'm not a
Starting point is 00:40:50 tech person. I don't have a chance to understand it. This stuff affects all of us and how it affects you matters and your opinion matters and your voice matters. And we're all part of social media. We're all very soon going to be part of AI and chatbots. So don't be afraid to join the conversation. You don't need any technical background because I think the subject is just as much sociological as technical. It's about people. I think that's a great point to end on. Thank you so much, Noah. Really appreciate your time. And everybody else, have a nice day. I mean, you have a nice day too, also.
Starting point is 00:41:24 Thanks, you too. It was lots of fun. It Could Happen Here is a production of Cool Zone Media. For more podcasts from Cool Zone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. You can find sources for It Could Happen Here updated monthly at coolzonemedia.com.
Starting point is 00:41:45 Thanks for listening.
