The Daily Zeitgeist - Better Video Games, Better Voter Suppression: The Future of AI 09.19.23

Episode Date: September 19, 2023

In episode 1549, Jack and Miles are joined by Assistant Professor of Technology Operations and Statistics, Dr. João Sedoc, to discuss… Ways The Future Might Look Different Big and Small, Philosophical Questions About These Language Models, Will ChatGPT Language Models Lead to Skynet? (It Won't) and more! LISTEN: Si Chomphu by Salin. See omnystudio.com/listener for privacy information.

Transcript
Starting point is 00:00:00 Hey, fam, I'm Simone Boyce. I'm Danielle Robay. And we're the hosts of The Bright Side, the podcast from Hello Sunshine that's guaranteed to light up your day. Check out our recent episode with Grammy Award-winning rapper Eve on motherhood and the music industry.
Starting point is 00:00:16 No, it's a great, amazing, beautiful thing. There's moms in all industries, very high-stress industries that have kids all across this world. Why can't it be music as well? Listen to The Bright Side from Hello Sunshine on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Kay hasn't heard from her sister in seven years. I have a proposal for you.
Starting point is 00:00:40 Come up here and document my project. All you need to do is record everything like you always do. What was that? That was live audio of a woman's nightmare. Can Kay trust her sister, or is history repeating itself? There's nothing dangerous about what you're doing. They're just dreams. Dream Sequence is a new horror thriller from Blumhouse Television, iHeartRadio, and Realm.
Starting point is 00:01:01 Listen to Dream Sequence on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Curious about queer sexuality, cruising, and expanding your horizons? Hit play on the sex-positive and deeply entertaining podcast, Sniffy's Cruising Confessions. Join hosts Gabe Gonzalez and Chris Patterson Rosso as they explore queer sex, cruising, relationships, and culture in the new iHeart Podcast, Sniffy's Cruising Confessions. Sniffy's Cruising Confessions will broaden minds and help you pursue your true goals. You can listen to Sniffy's Cruising Confessions, sponsored by Gilead, now on the iHeartRadio app or wherever you get your podcasts. New episodes every Thursday. Hello, the internet, and welcome to season 305, episode two of Der Daily Zeitgeist, a production of iHeartRadio. This is a podcast where we take a deep dive into America's shared consciousness.
Starting point is 00:01:49 And it is Tuesday, September 19th, 2023. Oh, yeah. Oh, yeah. National IT Professionals Day. It's also National Voter Registration Day. Hey, make sure you register to vote, because a lot of people would probably hope you aren't doing that. And also National Butterscotch Pudding Day. Shout out to all of my lovers of the butterscotch flavor, myself included. Yeah, I think that's one of the better pudding flavors.
Starting point is 00:02:17 it's also talked like a pirate day but i just i didn't want to you know you don't want to do that to everybody with children out there? No. Is that a thing you would do as a parent? Would you be like, hey, guys, it's top of the pirate day. We ready to annoy ourselves to death? In this household, we look down on international days that are made up for no reason. We scoff at them, Miles.
Starting point is 00:02:40 Right, right, right, right, right. We're out here living in the real world. There's real consequences to our actions. And when we scoff at them, we say har har har, because it's always International Talk Like a Pirate Day. My name is Jack O'Brien, a.k.a. El Jacko. That's courtesy of Maxer1216, because they think I look like El Chapo's son. And, oh, okay. Yeah, there was a new picture of El Chapo's son being extradited to the U.S. on board a plane in handcuffs, and people... Once again, any time a picture of that fellow hits the news, I am reminded... Oh, okay, you? Anyways, it is not. I'm thrilled to be joined, as always, by my co-host, Mr. Miles Gray!
Starting point is 00:03:31 Miles Gray, a.k.a. I'm far out of range. One bar is how I feel. My phone is searching hard, lying naked on my thigh. Irradiation changed my guts to something real. They're wide awake and i can see a quaddo has been born shout out to scouty on discord because we're talking about how that iphone 12 had higher levels of radiation yeah than the than the french government would allow uh and i thought hey maybe
Starting point is 00:04:04 that would just create a little Total Recall Quado type character. And that's a fun one for you to have, a fun news story for you to have a laugh with. Because what I've heard you have. Turned out I had an 11. And I looked mine up. I got the 12, baby.
Starting point is 00:04:20 Hey, but no fair. You have to name your Quado after me. Give me that Quado. Miles. Yes. We are thrilled, fortunate no fair. You have to name your name, your quaddo after me. Give me that quaddo. Miles. Yes. We are thrilled, fortunate, blessed to be joined in our third seat by an assistant professor of technology operations and statistics at NYU, where his research focuses on the intersection of machine learning and natural language processing, of machine learning and natural language processing, which of course means, you know, his interests include conversational agents,
Starting point is 00:04:52 hierarchical models, deep learning, spectral clustering. You guys are all probably finishing this sentence. Yeah, of course. Spectral estimation of hidden Markov models. Yes, obviously. And of course, time series analysis. Please welcome Dr. Joao Sadak. Dr. Joao!
Starting point is 00:05:09 Welcome, Joao. Hi, everyone. Hi. Thank you. Yeah. Thank you, Jack and Miles, for having me. It's a pleasure to be here. Pleasure to have you. We're glad you're here because we got a lot of questions about a lot of stuff, mostly to do with what your area of expertise is.
Starting point is 00:05:25 Some things I just need a second opinion on. Just around the career of Allen Iverson. Yeah. Is it a bad joke if it's been used in, like, a cable commercial? I believe a DirecTV commercial is using that. They're like, and we have our very own AI that helps you select shows. And then Allen Iverson's sitting on the couch with the person. Wait,
does that make it off limits? Yeah. Yeah. That's a, that's a real commercial. Wow. And I'm going to be running with that joke all day today, because I'm not any better than the comedy writers at DirecTV.
I don't know what the company is. Yeah. But Dr. Sedoc, is that a good way to address you? Yeah, sure. I mean, I'm very easy about things. Okay. So, yeah.
Starting point is 00:06:15 How about J Money? Dr. Sadak, Jow. Well, I've never been called that. I had some students in my class call call me j wow j wow probably yes yeah is it jowl so so the pronunciation is um so it's brazilian portuguese so that's what i'm named after but i go by i go by uh joao joao okay so yeah, very easygoing about it. Dr. J, even? Dr. J. Oh, no, no.
Starting point is 00:06:47 No. Rock the cradle, Dr. J. Amazing. All right. Well, yeah, I mean, the main thing, we've brought you here today to ask the big questions around AI, such as, what is that what is ai and we'll follow from there but you know questions that reveal that we're we're smart people obviously no i actually do think that
Starting point is 00:07:17 the fact that ai is used as the phrase to describe a bunch of different types of technology probably has something to do with where people's fear comes from and where people's like kind of confusion, where my confusion up to this point has come from. But yeah, so we're going to talk to you, what is something from your search history that is revealing about who you are or where you are in life? for which of Pfizer or Moderna had the highest amount of mRNA in the vaccine, which took me forever to figure out. And that was a wasted 30 minutes of my day where I could have just slacked someone and had the answer. I could have just slacked someone and had the answer. And wait, why were you trying to discern which had a higher concentration of mRNA? Yeah, I have a feeling that, like, you know,
Starting point is 00:08:34 so that was kind of like, okay, the one with the higher dosage might still have, be slightly more effective. The new vaccines have come or been approved just a couple of days ago. Right. Trying to. It seemed like on a spectrum, like Moderna was having the better results, right?
Starting point is 00:08:53 Like on a continuum. I felt like repeatedly. Or just in general, because I remember when the vaccine first came out, it was like, oh, you got Johnson and Johnson. Oh, you got that J&J. And then I was kind of like, I had Moderna, but as an American consumer, I'm like, but I know the brand Pfizer more. And then I ended up being like, okay, good.
Starting point is 00:09:14 I got that good one, the Moderna, it turned out. Or at least it was slightly more. Oh, Moderna was better than Pfizer? Slightly more effective there, from what I understand. I mean, both are extremely effective. Yeah, right, right. I mean, both are extremely effective, but that was my, that was probably the thing in my today's browsing history that told me the most about me, which is that I'm still after this many years paranoid about something. I mean, everybody in the rest of the world is sort of moved on about. Yeah, for better or worse.
Starting point is 00:09:43 But yeah, it does seem like that yeah i'm still i'm with you on that because like i still somehow have an ego like that attached to what brand of fucking vaccine i got like well you know i got moderna so you know i'll sit over there with the elevated folks and so it's not you're not googling to see which has the more mRNA or what's it called? mRNA. mRNA. Because you are worried that it's going to kill you or cause you to start shaking uncontrollably like the people in those TikTok videos, just to be clear. vaccines have adverse side effects, but no, I'm in it for, Hey, I'm, I'm already, you know, old enough that, you know, that's not going to really matter. I'm more worried about, you know, adverse effects of actually getting the disease. So yeah. And, you know, but, um,
Starting point is 00:10:39 very scientific answer, you know, I was thinking about it and, you know, it was funny because I was asking the chatbots. And so one thing as I work on this stuff, that's both funny and somewhat annoying to my wife is that like, I'll do a search and then I'll play around with like three different chatbots. And then like half an hour later, she's like, have you not been listening to me? Right. And I haven't. I've been going and trying to figure out where we're going to order from yet. I don't know. But being AI thinks that this chain is still in New York.
Starting point is 00:11:15 It's not. Foolish. Foolishness one. Exactly. Yeah. And so it's, you know, yeah, I do tend to get distracted while playing with, you know, all the cool new things. And I, I mean, I'm both, I mean, every once in a while you learn something by anecdotally testing something or, you know, somebody making a statement like, oh, you know, did you notice like, you know, this kind of behavior? And I'm like, oh, that's really cool. Okay. You know. And then off you go. Go down the path of, like, trying to understand it. Right.
Starting point is 00:11:50 Cool. What's something you think is overrated? I think that in terms of things that are really overrated, like, you know, probably artificial general intelligence, also known as AGI. Occasionally people think about it like the singularity where you know we start to assume that you know the machines are going to take over and we're going to be in a term terminator world i think you know the likelihood of that risk is super overstated. Okay, good.
Starting point is 00:12:27 Okay, yeah, that's what I think, too. Well, you actually just answered the only question we had on this episode, so I think we're good here. Sorry. No, no, that's... Yeah, so I think coming in, and we'll get to this a little bit, but coming in to this week's episode, I had read a couple articles on AI, but I think artificial general intelligence, the idea of some chatbot or language model or some sort of artificial intelligence, deep brain model, whatever it is, like gains sentience.
Starting point is 00:13:08 And then it like takes over weapon systems immediately was like the threat that I had in my mind. And after like doing some research, there's no longer something that is it? Well, it doesn't seem to be the one that most people are are worried about at least and then there are the people like the what we'll get into it but i do think yeah that's an important distinction it kind of gets it that first question of like what what is ai
Starting point is 00:13:39 because yeah people use it to stand in for a bunch of different things. What's something you think is underrated? This is maybe mildly underrated, but I think that the amount of upheaval that AI, this generation of technologies is going to have in the next couple of years, this means that certain jobs, a lot of technology in the industrial age has gone from our lower jobs, like jobs that don't require
Starting point is 00:14:18 having skill sets that require a large amount of education. I think Chachi PT already has surfaced evidence of being able to either replace or facilitate somebody to be much more productive and that this will actually cause a reasonably large amount of change in the workforce globally that I think will have real impact,
Starting point is 00:14:48 right, and could cause problems, you know, with more erosion of middle class jobs and, you know, needing to have like sort of deeper retraining. Interesting. What are the, yeah, I guess let's dig into it. Like, what are the yeah i guess let's let's dig into it like what are the jobs that you see being replaced because like the ones that we've heard most about specifically and i've seen sort of poor results for are the writing jobs like when people are like now we're just gonna have ai like write these articles for us now and it they they seem like they do kind of a bad job and they have a tough time with like fact checking what what where are you going where do you expect to see the kind of changes in the workforce first because i get i guess i i will caveat that by saying that
Starting point is 00:15:46 it doesn't seem like a lot of companies care if if the technology does a bad job they're just like yeah but it's it's good less money put butts in the seats and it's less money and the entire model of capitalism currently because of private equity and you know just that being how the incentive system is set up is like to cut costs so maybe it won't matter and it'll just like make journalism shittier and have all journalists jobs replaced by these these ais but what where is? Is that kind of where you see it? Or where do you see the AI replacing jobs? Well, so, okay. So I think true creative writing, right? Like things that writing from journalists or let's say creative writing, songwriting, poetry, these require such deep understanding of people and theory about how people interpret the words and the language and the reactions and emotions that are involved
Starting point is 00:16:57 that I don't think AI is going to help very much with that. I think, sorry, replace. going to help very much with that. I think, sorry, replace, right? I think it can help, right? And make you more productive, but I don't think it's really going to be, let's say, a replacement for a journalist, but some stuff that people do that are pro forma, like a pro forma report, right? That's, you know, almost template based, but not quite. So somebody's got to do some filing. Well, okay, ChatGPT can do this well enough for me that, okay, here's the filing. Very few people are probably going to look at, okay, let me use ChatGPT,
Starting point is 00:17:38 and maybe not need to hire someone to do that. I think the other thing where we're going to see a lot of change is also in coding. Here, I think we're seeing already the impact of it where there's this tool called Copilot, which is driven by similar technology as ChatGPT, which is able to create massive amounts of code for people. So it's making a certain subset of coders
Starting point is 00:18:08 just way more efficient. You know, I see that in, even with my students, so I teach an undergraduate data science class. Last semester, the projects that my students were able to do was actually much more impressive than the previous semester. And part of that was just them being able to use chat GPT to be able to make better code to make their projects better. Is like, I was just reading like an op ed from a former IBM
Starting point is 00:18:39 employee that he's like describing himself as like a tech evangelist and had always been, you know, that he's like describing himself as like a tech evangelist and had always been, you know, you know, say like cheering on like the advent or the arrival of chat GPT and things like that. And he said, and I only it only ended up actually taking a lot of work from me, like because some companies who were less interested in good writing were like, we're actually going to use a lot of this for like content generation. So we don't need, like your services as much he said it hasn't completely replaced him but it's a significant he has notices a more than significant amount of like businesses sort of be like you know what we're kind of gonna just sort of lean into this thing that costs like way less money and are these kind of like the stories that are like canaries in the coal mine but i feel like sort of like in the industrial age, we've always seen like there was just automation, right? Where people who had jobs at a bank to do specific things like file things or put things in a ledger ended up like those jobs ended up going away because of automation or just you think of customer service. Now, a lot of that is automated. And I was also reading about how a lot of local governments are now adopting or really
Starting point is 00:19:45 interested in chatbots as a way to like sort of replace bureaucracies at certain levels. Like, is that kind of is that sort of where you see it ending or what? Like, what's the sort of 50 year outlook? Because I think to Jack's point, too, we always look at this as like, well, when we're trying to reconcile that with like capitalism and a corporation's need to like always look at this as like well when we're trying to reconcile that with like capitalism and the corporations need to like always look at their bottom line is it a thing that there we can strike a balance or it's like yeah we have these human employees who use this as a tool obviously because it makes them better or or is it like the cynical version is like they're going to get as
Starting point is 00:20:22 much as they can done with it and only up until the point that they probably need human workers to kind of really fine tune the processes. Yeah, I think the latter. I mean, I think, you know, the company is their interest is to optimize. So there I mean, the truth of the matter is that the tool right now is just not good enough for most, you know, really for most tasks. I think, you know, 50 years from now is a really hard thing for me to predict.
Starting point is 00:20:54 I know, but you have to. You have to. You have to take wild speculation, but sound extremely confident. Because this is going to go on TikTok and we're going to scare the heck out of a lot of young people. So go ahead. You were saying how Skynet is imminent. Yeah. Skynet is imminent. I'm kidding. So, so I think that, you know, looking forward,
Starting point is 00:21:17 like looking far forward into where I think we'll be in, you know in maybe 10 or 15 years. This technology is just really just going to improve. And what we're going to see is that we will have certain jobs, which were done by people for a very long time, go away. That will probably mean the new jobs will open up and people will just be more productive. That's the sort of optimist in me. Right.
Starting point is 00:21:49 That's also how everything has happened to this point, right? Is that like people are at first scared of new technology, like airplanes are going to be really scary and like make it so that, you know, nobody has to ever take a train and it's like well then there's like the entire aerospace jet propulsion industry and you know i guess that one's a little easier to foresee but like it does feel like there's always fear of new technology and like belief that it's going to end the economy as we know it. And it does, but it's always replaced by a new version, right?
Starting point is 00:22:32 Yep, exactly. And so I think we're going to see something similar in this. Some questions of factuality, sometimes people call this hallucinations in our models. How difficult is that going to be to really fix those sort of corner cases? But given the amount of money and people that are working on this, I think that within the next 10 to 15 years, we're going to see that a lot of that gets solved, or at least the likelihood of it concurring is so small that it's, you know, more that the likelihood of it, you know, lying to you or saying something that is untrue is going to be, you know, way, way, way less than any person. So that's kind of where I think we're going.
Starting point is 00:23:24 So that's kind of where I think we're going. And so we'll see some jobs like entry-level coders may go away, certain jobs like certain types of business analysts, even some form of middle management, I think, is at risk. Lots of various amounts of places, think are at risk i do want to say one other thing which i think is like going to be amazing and and we're seeing this in higher education but in education in general is that probably the place that's been most impacted by chat gpt has been education and i think what we'll see is in the next 10 to 15 years, education is just going to dramatically reform. Right. Hopefully for the better, but like we're going to see
Starting point is 00:24:11 like major changes in how we teach students, how we assess students. Hopefully this will lead to, you know, just better quality of education for everyone. Right, because is it sort of like the main i guess i feel like the point of our traditional educational system is like almost like memory recall it's like how good is your fact recall memory recall and things like that or can you sit through reading this thousand page historical textbook to glean like these eight
Starting point is 00:24:42 points really that we want you to come away with it come away with and i see like how much that distillation of information how quickly that occurs with like you know chat gpt and things like that so i guess it does become more like okay granted if this is the information we want people to learn then how are we now taking that next step to make sure it's sort of ingested properly for people to know that they are making sense of it rather than like being like, yeah, here are the 18 things you need to say to pass like a like a historical course on the Columbian Exchange and more like, OK, we know you know how to get to those answers, but how can you demonstrate that knowledge? And I guess is
Starting point is 00:25:19 that that's sort of where the I guess excitement is in academia, or at least that's the challenge, That's sort of where the, I guess, excitement is in academia, or at least that's the challenge, right? Yeah, yeah. It's the challenge, the worry, the, you know, and I guess excitement goes in both ways. Positive and negative, depending on where you are and just be able to interactively, one-on-one, teach certain concepts. Like with an AI teacher, or you're saying it would help teachers teach help teachers help yeah so so it'd be like a tutor right so you know the teacher's still going to teach but the tutor is going to be able to like the ai tutor is going to be able to you know help the student with okay i, I didn't understand this. Right.
Starting point is 00:26:31 Like, I mean, and one thing that we do know about chatbots is that in general, you know, people have this impression that chatbots have less judgment. Right. And so people are willing to ask, you know, in air quotes, stupid questions. Thank you. Yeah. Because no questions are stupid. Yeah.
Starting point is 00:26:44 We're in a classical classroom. They wouldn't be willing to ask that to a person, and they're willing to ask it to the AI. Yeah. I don't know. As a parent, I would be very worried just based on the work I've seen from chatbots up to this point. What, this point? Yeah, up to this point like like at this point we've covered yeah up to this point like we've covered the columbus dispatch like recap of like a football game like high school football game
Starting point is 00:27:12 and it's just like such absolute shit it's like one of the worst articles i've ever read you know the sports the star wars one that gizmodo put up is terrible so i don't like i i believe that like the chatbots will continue to get better but i guess i have questions about what they're going to get better at whether they're going to get better at like fooling us into thinking that they have like human intelligence or whether they're going to get like more more accurate because it seems like within the world of ai like there's still people who are like... I think it was somebody who worked at OpenAI. It was like Wikipedia level accuracy is years off at this point.
Starting point is 00:27:57 Wikipedia is suddenly being used as this phrase for something that is the gold standard. Whereas outside of the context of AI training, it's something that is the gold standard. Immutable truth. Outside of the context of AI training, it's something that we joke about being easily editable and stuff. So, yeah. It worries me a little bit that I think people's faith is being misplaced in the language models
Starting point is 00:28:21 because language is inherently an abstractive system that is designed to lie like essentially is i mean it's to tell you a story that gives you meaning but it feels like from a philosophical level like that i i see that i see where people who are skeptical that this is the path to like really useful stuff are coming from but let's take a quick break and we'll come back and i just i do just want to like kind of nail down two different things that are being called ai that seem vastly different to me so let's let's take a quick break we'll be right back i've been thinking about you.
Starting point is 00:29:05 I want you back in my life. It's too late for that. I have a proposal for you. Come up here and document my project. All you need to do is record everything like you always do. One session. 24 hours. BPM 110.
Starting point is 00:29:22 120. She's terrified. Should we wake her up? Absolutely not. What was that? You didn't figure it out? I think I need to hear you say it. That was live audio of a woman's nightmare.
Starting point is 00:29:37 This machine is approved and everything? You're allowed to be doing this? We passed the review board a year ago. We're not hurting people. There's nothing dangerous about what you're doing. They're just dreams. Dream Sequence is a new horror thriller from Blumhouse Television, iHeartRadio, and Realm. Listen to Dream Sequence on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Hi everyone, it's me, Katie Couric. If you follow me on social media, you know I love to cook, or wherever you get your podcasts. So I started a free newsletter called Good Taste that comes out every Thursday, and it's serving up recipes that will make your mouth water.
Starting point is 00:30:29 Think a candied bacon Bloody Mary, tacos with cabbage slaw, curry cauliflower with almonds and mint, and cherry slab pie with vanilla ice cream to top it all off. I mean, yum. I'm getting hungry. But if you're not sold yet, we also have kitchen tips like a foolproof way to grill the perfect burger and must-have products like the best cast iron skillet to feel like a chef in your own kitchen. All you need to do is sign up at katiecouric.com slash goodtaste. That's K-A-T-I-E-C-O-U-R-I-C dot com slash goodtaste. I promise your taste buds will be happy you did. How do you feel about biscuits? Hi, I'm Akilah Hughes, and I'm so excited about my new podcast, Rebel Spirit, where I head back to my hometown in Kentucky and try to convince my high school to change their racist mascot, the Rebels, into something everyone in the South loves, the biscuits.
Starting point is 00:31:23 I was a lady rebel. Like, what does that even mean? The Boone County rebels will stay the Boone County rebels with the image of the biscuits. It's right here in black and white in print. A lion. An individual that came to the school saying that God sent him to talk to me about the mascot switch is a leader. You choose hills that you want to die on.
Starting point is 00:31:43 Why would we want to be the losing team? I'd just take all the other stuff out of it. Segregation academies. When civil rights said that we need to integrate public schools, these charter schools were exempt from that. Bigger than a flag or mascot. You have to be ready for serious backlash.
Starting point is 00:32:01 Listen to Rebel Spirit on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts and we're back so yeah i i guess i think chad gbt seems to be the thing that everyone has been that everyone has been taken by like very like interested in right like the it seems to have reached a new level where people can do things and have conversations that are remarkably like human seeming and can do write really a good maybe b minus term papers and then use that to kind of scrap together something or depending you know maybe maybe it's a higher level of term paper i i don't grade term papers so i i wouldn't know but just based on the journalistic work i've seen it do i haven't been
Starting point is 00:33:01 impressed but so that's the main thing that I'm hearing referred to as AI. And they're like, there's a new future upon us. Look at chat GPT. And then there's like, when you look into some of the things that are being done with machine learning, like accurately predicting how proteins fold, which is like a problem that has been just too complex for humans to solve on their own, just like too slow for humans to know the atomic shape of proteins. Like they just kind of fold in ways that is like too small, too complicated for humans to predict. And like through machine learning and like this deep brain system at Google, they were able to like knock it out within a couple of years. And like they gave it to the public. They were like, here, here is the shape of the proteins. And so if you're using
Starting point is 00:34:01 AI to describe both of those things, I think that is that's where the creepiness comes in because you're like okay it has this like godlike science ability and then it also has this like child personality where it like you can like talk to you and flirt with you a little bit and it's like that that mismatch seems uncanny and weird to me like and i think that that is what is happening in people's minds when they're like oh yeah ai is scary it's this like massively powerful thing and it is massively powerful i think it's just more massively powerful when it has a task and like a specific thing that it's seeking which is not what you can really have with the language models which are like predictive right yeah i think i think it's
Starting point is 00:34:53 kind of unfair, the way that people characterize the language models, like, oh, they just predict the next word. Right. I mean, in some sense, what we know the language model is trying to do is predict some sequence of text. And in doing so, it's learning a whole bunch of the knowledge in the world and how the world is connected. It's learning some seemingly probabilistic logic rules. However, yeah, I mean, you oftentimes see this weird sort
Starting point is 00:35:39 of disconnect between ChatGPT, or what we call large language models in general, being so smart and so stupid at the same time. Right. That's kind of our specialty on the show. Well, and I think as society interacts with the technology more and more, we're going to get a better mental model of the technology, right? And in some sense, right now, a lot of people, you know, I ask my students, okay, how do you reason about ChatGPT? How do you reason about the quality of the technology
Starting point is 00:36:30 and how it is understanding the input and giving you output? And some people think about it kind of like a person and reason about it like a person. Other people see it like Google Search, but a really, really interactive version of Google Search. Sure. And somehow it's somewhere in between, which I think is the weird sort of place that we're at. And, you know, I think the fact that the same type of deep learning and machine learning models can predict protein folding, and find new ways of inverting matrices that are even more efficient, and do all this very, very incredible, intelligent stuff, at the same time as making incredibly dumb statements, is something that people are going to have to deal with, right? As in, okay, you know, there's this
Starting point is 00:37:34 weird disconnect. You know, if you think about it like a person, it's just not a person, right? It doesn't map on. In general, we like to take technologies, or different types of constructs, and map them onto what we already know, and in some sense it's not mapping well onto what we already know. And so we're going to have to figure out how to properly map it onto its own new sort of construct. Yeah. I get, like, the one thing that, you know,
Starting point is 00:38:09 in talking about it as, well, it's just predicting the next word, or, you know, when you describe the very specific task it's being programmed to do versus the emergent abilities that are coming out of that task, the analogy that I heard used that made sense is, I guess, a quote from Darwin, from so simple a beginning, endless forms most beautiful. That quote being applied to
Starting point is 00:38:42 basically the argument that that's all the human mind is doing also. In some ways, the human mind can be seen as being programmed to look for lions in the bushes and move toward food and procreation. And then, from all of these complicated synapses firing, you get this amazing thing called consciousness. And, you know, just by describing AI in a reductive way, you're ignoring the fact that it could be these very basic things and also be moving toward, you know, what we describe as, what we think of as, consciousness. So yeah, I think it's a super interesting conversation that I was scared to read into, to research, because it's just so massive. And when I was
Starting point is 00:39:42 a philosophy major in college, when I was studying philosophy back in the early 2000s, the question of AI and what was going to happen was a major question in philosophy all the way back then. And all of the ways that philosophers have been thinking about AI, all the questions they've been asking about it since, you know, the sixties, are now coming to the fore.
Starting point is 00:40:11 Like, as we've gotten closer to the moment, we still don't really have any clearer an idea of what exactly is going to happen. So it's a little scary. I do want to take a break, and then come back and talk about some specific ways, just take some guesses as to specific ways the future might look different that, going into researching this, I at least hadn't taken into account as a possibility.
Starting point is 00:40:46 So we'll be right back. I've been thinking about you. I want you back in my life. It's too late for that. I have a proposal for you. Come up here and document my project. All you need to do is record everything like you always do. One session.
Starting point is 00:43:41 This is Lucha Libre Behind the Mask. Listen to Lucha Libre Behind the Mask as part of My Cultura Podcast Network on the iHeartRadio app, Apple Podcasts, or wherever you stream podcasts. And we're back. And Miles, I mean, we talk a lot about how we we what we think this future is going to look like right yeah i mean i share the name with miles dyson ever heard of him the guy started a little thing called the sky net you don't think that keeps me up all night well but in seriousness i hey he went out of hero miles you know i did i did um look it's all about altruism you know at the end um but like i think for me personally right so much of my so much of my understanding of science is derived from film and television because I'm American. And I think with like with AI, I think whenever I would think of like, oh, like, it's like, you don't know where this thing's going go the place i think it's gonna go is sky net from
Starting point is 00:44:45 Terminator. I just see the shot from the sky of all the missiles being launched, just coming down. I'm trying to have a nice day at the park with my kids, I'm watching through a chain link fence, and next thing some lady's over there shaking the chain link fence for some reason, skeletons rattling, the kids over there, get her out of here! But first of all, it's sort of a twofold question. How far off base is the idea, how many jumps am I making in my brain, to be like, ChatGPT, next stop Skynet? And then also, because of that, what are all of the ways that people like me are not considering what those actual, real, tangible effects will be? And I don't want to be the most cynical version, but, you know, if we aren't careful, and we have very individualistic or profit-minded motivations to develop this kind of technology, what does that worst case look like? Or not worst case, but what
Starting point is 00:45:45 are the ways I'm actually not thinking of, because I'm too busy thinking about T-1000s? Right. And just to add on to that, when I think about the computer revolution: for decades it was these people in San Francisco talking about how the future was going to be completely different and done on computers, or something called the internet. And we were just down here looking at a computer that was, like, green dots, not that good of a screen display. And by the time it filtered down to us, the world does look totally different.
Starting point is 00:46:21 It's like, well, shit, that turns out that was a big deal. So, yeah, I feel like I want to be constantly, on our show, just asking the question: where is this actually taking us? Because in the past, I feel like it's been hard to predict, and when it did come, it wasn't exactly what the tech oracles said it was going to be.
Starting point is 00:46:47 And it was unexpected and weird and beautiful and banal in weird ways. And so I'm curious to hear your thoughts on, once this technology reaches the level of consumers, of just your average, ordinary person who's not teaching machine learning at NYU, what is it going to look like? But first, obviously first, Skynet. Skynet, but first. I think, uh, Skynet. Okay, so with the idea of worrying about some version of this technology going out of control, right, there are so many checks and balances, and so many ways in which we are thinking about how the technology is moving forward, that Skynet, you know, the idea of something emergent, and not only emergent but that it's going to take over and try to destroy civilization, right? And also send cybernetic organisms into the past to ensure that certain people do not grow up to take up arms against Skynet, like John Connor and his... Yeah, yeah.
Starting point is 00:48:11 So I'm a firm believer in being able to send people to the future. Thank you. I still don't think that we're going to be able to send people into the past. Okay. So that's maybe one problem. That's just, like, your opinion, man. All right. Back to the Future is going to happen, for me. And by the way, I'm the Doc Brown character. I'm going to befriend a young child. Yeah, right, exactly. How long have they, Marty and Doc, been friends?
Starting point is 00:48:45 Probably, like, it seemed like eight years. So when Marty was, like, nine years old, they became best friends. Anyway, sorry about that. I'm getting sidetracked. So, well, I think, you know, what I'm thinking about, when we think about this kind of technology and how the technology itself could go wrong: I don't think that we have a good ability to understand consciousness. And I also don't think we have a really good understanding of, okay, why would these large language models start trying to harm us? So there are all those steps and jumps and leaps, and all the intermediate pieces, that just
Starting point is 00:49:48 seem so incredibly unlikely. Now, I know that a lot of the AGI people who worry, their worry is, oh, well, we've got to worry about this tail risk, right? And a human-level extinction event is worth worrying about. And some people worrying about that is probably not a bad thing. But I just think it's very unlikely, because of the whole chain of things that would need to happen. And we're not very good at robots, either. Robots are actually much further away. That is one, okay, we'll talk about it, but let's put a pin in that, because I want to come back to that. To bad robots. To bad robots, and how easy, I just want to brag about how easy it would be for me to beat one up. Those Boston
Starting point is 00:50:39 Dynamics ones, yeah. Yeah, I've got money on you in that fight. But the thing that I do want to point out is that, you know, the concept of starting to build some of these rules into the large language models, to try to say, okay, well, try to be benign, don't do harm, I think that's a really super good idea, right? Yeah. And I think that, even if you don't believe in Skynet, actually trying to incorporate responsibility into the large language models,
Starting point is 00:51:14 I think that is something that's very important. Moving into some of the dangers, though: I think, actually, with large language models, OpenAI was actually worried about this, in one of the first iterations of GPT, the ability of large language models to create misinformation and disinformation. And, you know, I think that that's a really bad use of the technology, and very, very potentially harmful. And it already seems to be what it's being used for. Like the
Starting point is 00:51:50 the tech, not like the, the ways that like companies are trying to replace journalism with it or, you know, clickbait article, like just generate tons of clickbait articles that are like targeted at people is like, it feels like it's already training
Starting point is 00:52:05 in that direction in a lot of ways. Yeah, unfortunately, I think you're right, that there is this bad use of the technology, where it's like, oh, let's make this as persuasive as possible. You know, there's obviously been a ton of research in marketing on figuring out, okay, how do we position this product. You know, am I going to make this tailored advertisement just for him, to make him buy this particular product that I'm selling, or to make him not vote? Right. Yeah. You know, those kinds of things. I mean, I think that this is scary, and I think that this is in the now.
Starting point is 00:52:59 Right. Yeah. Which, you know, in some sense is going to be hard to really stop, people using this technology in that direction, without things like legislation. There are just some components that need more legislation. And I think that's going to be hard to do, but probably necessary. Right. Yeah. I could even see, just in politics, being like, okay, I need to figure out the best campaign plan for this very specific demographic that lives in this part of this state. For sure. And then just imagining what happens. But I guess, is that also part of the slippery slope? That the reliance sort of gives way to this thing where it's like, whatever this says is the solution to whatever problem we have.
Starting point is 00:53:54 And, like, just kind of throwing our hands up and all becoming totally reliant. I mean, I'd imagine that also seems like a place where we could easily slip into a problem, where it's like, yeah, the chatbot may give us this answer. Well, or that people start to make very homogeneous decisions and choices, right? Where you would have made many different choices, but because you already have a template that's been given to you by this AI, you're like, oh, okay, well, I'm just going to follow this choice or this decision that it's going to make for me, and not do one of the thousand different alternatives. Right. Where all of a sudden we would have made vastly different choices, but now we have this weird centering effect, where all of us are actually making much less varied choices in our decision-making. Which does pose real risk. I mean,
Starting point is 00:54:59 imagine applying this to resume screening, right? You could imagine the same type of problem there. There are just so many scenarios you can think of where, you know, we need to take care, right? And this is a good time to think about that. Right. Yeah, we're turning our free will over to the care of algorithms, and, you know, the phones, the Skinner boxes in our hands that we're carrying around. That feels like a thing that's already happening. AI might just make it a little bit more effective and quicker to respond. But yeah, it
Starting point is 00:55:38 feels like a lot of the concerns over AI that make the most sense to me are the ones that are already happening. And, just to tip my hand a little bit: in doing a lot of research on the kind of overall general intelligence concerns that we've been talking about, the ones where it takes over and evades human control because it wants to defeat humans, I was surprised how full of shit those seem to be. For instance, there's one story that got passed around a lot last year where, during a military exercise, an AI was being told not to take out human targets by its human operator.
Starting point is 00:56:29 And so the AI made the decision to kill the operator, so that it could then just go rogue and start killing whatever it wanted to. And that story got passed around a lot. I think we've even referenced it on this show. And it's not true. First of all, I think when it first got passed around, people were like, it actually killed someone, man. And it was just a hypothetical exercise, first of all, just in the story as it was being passed around. And second of all, it was then debunked. The person who said that it happened in the exercise later came out and said that it didn't happen. But it seems like there is a real interest and
Starting point is 00:57:13 real incentive, on behalf of the people who are set up to make money off of these AIs, to make them seem like they have this godlike reasoning power. There's this other story where they were testing, I think it was GPT-4, during the alignment testing. Alignment testing is trying to make sure that the AI's goals are aligned with humanity's, like making humans more happy. And they ran a test where GPT-4 was basically like, I can't solve a CAPTCHA, but what I'm going to do is reach out to a TaskRabbit and hire a TaskRabbit to solve the CAPTCHA for me. And GPT-4 made up a lie and said that it was visually impaired, and that's why it needed the TaskRabbit to do that.
Starting point is 00:58:08 And again, it's one of those stories that feels creepy and gives you goosebumps. And again, it's not true. The GPT was being prompted by a human. It's very similar to the self-driving car myth, that self-driving car viral video that Elon Musk put out, where it was being prompted by a human and pre-programmed. The prompts behind the kind of clever task that we're worried about this thing having done were just taken out, just edited out, so that they could tell a story where it seemed like GPT-4, which is what powers ChatGPT, was doing something evil. Right. And it's like,
Starting point is 00:59:00 so I get how these stories get out there, because they're the version of AI that we've been preparing for from watching Terminator. And the head of OpenAI just leans into that shit. In an interview with The New Yorker, he was like, yeah, I keep a cyanide capsule on me, and all these weapons, and a gas mask from the Israeli military, in case an AI takeover happens. And it's like, what? You of all people should know that that is complete bullshit. And it totally takes the eye off the ball of the actual dangers that AI poses, which is that it's going to flood the zone with shit. It's going to keep making the internet and phones more and more unusable, but also more and more difficult to tear ourselves away from. Like, I feel like we've
Starting point is 01:00:12 already lost the alignment battle, in the sense that we've already lost any way in which our technologies, which are supposed to serve us and make our lives better and enrich our happiness, are doing that. They stopped doing that a long time ago. That's why I always say that the more interesting question, the questions that were being asked in philosophy classes in the early
Starting point is 01:00:37 2000s about, like, the singularity, where suddenly this thing spins off and we don't realize we're no longer being served, and we're twenty steps behind: that happened with capitalism a long time ago. Capitalism is so far beyond that. You know, we're no longer serving ourselves and serving fellow humanity. We're serving this idea of a market that is just repeatedly making decisions to make itself more efficient, to take the friction out of our consumption behavior. And yeah, I think AI is
Starting point is 01:01:14 definitely going to be a tool in that, but that is the ultimate battle that we're already losing. Yeah, I mean, I completely agree with you on the front of the phones. I mean, the companies are trying to maximize their profits, which means, you know, possibly you being on your phone to the disservice of doing something else, like going outdoors and doing exercise, right? Right. Or doing something social. I do see, on the other hand, that AI can maybe have positive effects, right? Take tutoring, for example, or public health, where we take the phone and the technology and use it for benefit: providing information about public health, or helping students who don't have the ability to have a tutor, have a tutor. All of these kinds of things where, let's take advantage of the fact that the majority of the world has phones, and try to use that technology for a good, positive societal benefit. But like any tool, it has usages that are both good and bad. And I do think, you know,
Starting point is 01:02:46 there are going to be a lot of uses of this technology that are not necessarily in the best interest of humanity or of individual people. Right. Totally. Really, the X factor is how the technology is being deployed and for what purpose, not that the technology in and of itself is this runaway train that we're trying to rein in.
Starting point is 01:03:09 It's that, yeah, if you do phone scams, you might come up with better voice models, and rather than doing one call per hour, you can do a thousand calls in one hour, running the same script and same scam on people.
Starting point is 01:03:23 Or, to your point about how you persuade Jack O'Brien to not vote, or to buy X product, it's all going in that direction, versus things like, how can we optimize the ability of a person to learn if they're in an environment that typically isn't one where people have access to information, or the public health use. And that's why, I think, in the end, because we always see, we're like, yeah, this thing could be used for good, and then almost every example is like, yeah, and it just made two guys three billion dollars in two seconds. Right. That's what happened with that, and now we all don't know if videos we see on Instagram are real anymore. Thanks. Yeah. Yeah, it's not inherently a bad
Starting point is 01:04:06 technology. It's that the system, as currently constituted, favors scams. That's why we have a scam artist as our most recent president before this one, and probably future president again, because that is what our current system is designed for, and it is just scamming people for money. That's why, when blockchain technology took its infant baby steps, it immediately became a thing that people use to scam one another, because that is the software of our current society. So yeah, there are tons of amazing possibilities with AI. I don't think we find them in the United States by applying the AI tools to our current software. I would love for there to be a version of the future where AI is so smart and efficient that it changes that paradigm somehow, so that the software our society runs on is no longer scams. You know, right, right. I mean, there is a future where AI starts to do
Starting point is 01:05:28 a lot of tasks for us. I think that there's starting to be, mostly in Europe now but also here, talk about universal basic income. I don't think that that's going to be the case. I don't think that in the US anybody has an appetite for that, maybe in 20 or 30 years. There are certain things about my job, right, as a professor, about writing and certain other components, where, you know, here are some things where AI can help. And today I use it as a tool to make me more efficient. Like, okay, I want to have an editor look at this. Okay, well, it's not going to do as good a job as a real editor, but maybe as a, you know, bad copy editor. Yeah, it's good, you
Starting point is 01:06:36 know. And, I mean, I've been writing papers for a while. You know, when the undergraduates write a paper, it does an even better job, right? Because they have more room for growth. And so I think that, yeah, on a micro level, these tools are now being used. I think that, you know, Russian spam bots have most likely been using large language model technology since before ChatGPT. And so, you know, I mean, there are some ways that I think we're going in a positive and good direction. Right.
Starting point is 01:07:15 Yeah. Now that you say that, I'm thinking: we thought Cambridge Analytica was bad, and what happens now when we turn it up to a thousand with this kind of thing? Where it's like, here are all these voter files, now really figure out how to suppress a vote or sway people. And yeah, again, it does feel like one of these things where, in the right hands, we can create a world where people don't have to toil, because we're able to automate those things. But then the next question becomes, how do we off-ramp our culture of greed and wealth hoarding and concentrating all of our
Starting point is 01:07:54 wealth, in a way that says, well, we actually need to spread this out, that way everyone can have their needs met materially, because our economy kind of runs on its own in certain sectors. Obviously, other ones need real human labor and things like that. But yeah, that's where you begin to see, okay, that's the fork in the road. Can we make that? Do we take the right path, or does it turn into, you know, some very half-assed version, where only a fraction of the people that need universal
Starting point is 01:08:25 basic income are receiving it? Well, you know, while we read more articles about why we don't see Van Vat in Hollywood anymore, as Jack pointed out. Yeah, that terribly worded thing about Vince Vaughn that AI could not get right, couldn't get his name right, so they called him Van Vat. So the sci-fi future that seems most close at hand, not necessarily a dystopia, but, you know, having done a weekend's worth of research on where this is and where all the top thinkers think we're headed, the thing that seems the closest to reality to me is the movie Her. Her is kind of already here. There are already these applications that use ChatGPT and, you know, GPT-4 to, again, and I don't necessarily mean this in a dismissive way, because this is probably a good description of most jobs, do
Starting point is 01:09:27 the thing that David Fincher talks about. I was listening to a podcast where they talked about how David Fincher says words are only ever spoken in his movies by characters in order to lie, because that's how humans actually use language, just finding different ways to lie about who they are, what their approaches to things are. And I think with these language models, the thing that they're really good at, whether it be up to this point, where people are like, holy shit, this thing's alive, that's talking to me. Well, it's not, it's just doing a really good approximation of that and kind of fooling you a little bit. Take that to the ultimate extreme, and it makes you think that you're in a loving relationship with a
Starting point is 01:10:12 partner. And that's already how it's being used, in some instances, to great effect. So that seems to be one way I could see it becoming more and more of a thing, where people are like, yeah, I don't have human relationships anymore, I get my emotional needs met by, like, Her technology, essentially. What would you say about that? And is there a different fictional future that you see close at hand? So, I think Her is great. I mean, it's really funny, because I remember back in 2017, 2018, I was like, oh, Her is great, this is where AI is going. And, you know, for those of you who haven't seen the movie, it's a funny movie where they
Starting point is 01:11:06 have Scarlett Johansson as the voice of an AI agent on the phone. And I think that this is actually a pretty accurate description of what we're going to have in the future, not necessarily the relationship part, but the fact that we'll all have this personal assistant. And the personal assistant will see so many aspects of our lives, right? Our calendars, our meetings, our phone calls, everything. And so it'll be, you know, this assistant that we all have that's helping us to make things more productive. And as a function of the assistant, to be effective, the assistant will probably want to have some connection with you. And that connection will likely
Starting point is 01:11:56 allow you to trust it. And here's where the slippery slope comes from, you know, where you, where you're like, oh, this understands me. And you start getting into a deeper relationship where, you know, a lot of the fulfillment of a one-on-one connection can come with your smart assistant. And I do think that there's a little bit of a danger here. Like, you know, again, I'm not a a psychologist i'm someone who studies ai machine learning but i do actually you know study how well machines can make sense of emotion and empathy and really you know gpt4 which is the current state-of-the-art technology, is actually already really good in understanding. I use that phrase with a quote, the phrase understanding, but really what you are feeling, right?
Starting point is 01:12:59 How you're feeling under this scenario. And you can imagine that feeling heard is one of the most important parts of a relationship, right? And if you're not feeling heard by somebody else and you're feeling heard by, you know, your personal assistant, that could shift relationships to the AI, which, you know, could be fundamentally dangerous because it's an AI, not a real person. Could be fundamentally dangerous because it's an AI, not a real person. Right. Yeah.
Starting point is 01:13:32 The one I was talking about is Replika, that app, R-E-P-L-I-K-A, where they are designing these things explicitly to fill in as a romantic partner. And at least with some people, and it's always hard to tell did they find like the three people who are using this to fill an actual hole in their lives or is it actually taking off as a technology but yeah the the personal assistant thing seems closer at hand than maybe people realized any other like kind of concrete changes that you think are coming to people's lives that they aren't ready for or haven't really thought about or seen in another sci-fi movie?
Starting point is 01:14:16 Sci-fi movies are remarkably good at predicting the future. Yeah, or maybe they have seen in a sci-fi movie. No, but I think, you know, another thing that one potential that I think not enough people are talking about is how much better video games are going to become. So people are already integrating GPT-4 into video games. And I think our video games are just going to be so much better. to video games and i think our video games are just going to be so much better because you know you have this ability to like interact now with a character ai that's not just like very boring and chatting with you but they can actually be truly entertaining and fully interactive
Starting point is 01:15:02 interesting i think you know computer games are going to be much, much better and possibly also more addictive as a function of being much better. Perfect. And then that will dull our appetite for revolution when our jobs are all taken by the, yes, this is great.
Starting point is 01:15:18 This is great. All right. I feel much better after having this conversation. Dude, Grand Theft Auto is way better. The kinds of conversations I have with people I would normally bludgeon on the street with my character. It's really something else. Well, Dr. Sadak, such a pleasure having you on The Daily Zeitgeist.
Starting point is 01:15:37 I feel like we could keep talking about this for another two hours, but we've got to let you go. But thank you so much for joining us. We'll have to have you back. Where can people find you, follow you, all that good stuff? Yeah, thank you for having me. I'm on what is now called X, at J-O-A-O-S-E-D-O-C.
Starting point is 01:16:01 We still call it Twitter. Okay, well, Twitter twitter fine and uh you know yeah that's my that's my currently still my main venue but yeah and is there a work of uh media you've been enjoying oh good question uh modern now so a fall for dance is starting in new york city so um i've been uh you know moving into to watching some modern dance which is our uh kind of fun thing to do oh fun modern dance okay okay there you go i'm gonna have uh bing chat recommend some modern dance shows for me in new york so let's see what they say yeah through work we get access to the uh bing chatbot so pretty cool all i do is mess around i'm i'm like a child with it i'm like pitch me a movie with seth rogan where he's a stoner biker did a pretty good job pretty good
Starting point is 01:16:59 job feels a little derivative but it makes sense because it's only deriving its ideas from existing things out there. But yeah. I asked it to summarize the plot of Moby Dick in the format of a Wu-Tang rap, and it was not great. Not great. Not great. It was kind of like more early 80s rap where it was like, hey, my name's Ahab and i'm here to say i hit this whale in a major way miles where can people find you what is the work of media you've been enjoying now you really got me thinking now you've really done it wow jack sorry I'm asking it to summarize of Mice and Men in the form of a ghost face
Starting point is 01:17:45 rap verse. You can find me at Miles of Grey on Twitter. I don't even... You know, formerly X. I'm calling it TwitterFormerlyX.com. Okay, we're inverting the form here. So check me out there, pretty much anywhere, Instagram, all that.
Starting point is 01:18:02 And obviously find us on our basketball podcast, Miles and Jack got mad boosties where I promise my takes are not written by AI or are they and also find me on my true crime podcast the good thief where we're in search of the Greek Robin Hood and also for 20 day fiance where I talk about 90 day
Starting point is 01:18:18 fiance whoa do you know what you know what you know they just said when I said give me of mice and men in the form of ghost face yo let me tell you about a story so true of two migrant workers george and lenny who we're on a mission to own their food and land and live off the fat of the land that was the plan george was smallish come on you got to give me like a random Italian dish mixed with like Gore-Tex or North Face. Anyway, it's got some learning to do. It's got some work.
Starting point is 01:18:51 Let's see. Any tweets or works of media I'm liking? No, not really. Not really. I thought I had something that I saw recently. Oh, new season of Top Boy. I was watching the new season of Top Boy on Netflix. New season of Top Boy. Yeah. Is Top Boy
Starting point is 01:19:10 a reality show? Top Boy, no. It does sound like it would be. It's like the best Cabana Boy. The best boy. Yeah, no. It's like a London gangster show. Okay. Yeah. About road men, isn't it? Road men? Yeah, road men.
Starting point is 01:19:26 I was a highwayman. All right. You can find me on Twitter at Jack underscore O'Brien. Tweet I enjoyed was from Jeff
Starting point is 01:19:37 at Used Wigs tweeted, the weekend ending trauma of seeing the 60 minute stopwatch as a kid lives forever. Oof. I feel that. minute stopwatch as a kid lives forever. I feel that.
Starting point is 01:19:47 And not just as a kid. Not just as a kid. You can find us on Twitter at Daily Zeitgeist. We're at The Daily Zeitgeist on Instagram. We have a Facebook fan page and our website DailyZeitgeist.com where we post our episodes and our footnotes where we link off to the information that we talked
Starting point is 01:20:03 about in today's episode. As well as a song that we think you might enjoy miles what song do you think people might enjoy uh this is from a thai artist she's like a drummer producer dope drummer her name's salin s-a-l-i-n i'm sorry if i i botched the uh the pronunci And I'm going to do it again with the track. It's called Si Chomphu. The first word, S-I. The second word, C-H-O-M-P-H-U. And I believe that is a region of, like, northeast Thailand. But she's, like, this, like, it's, like, a really funky drum track. If you like krungbin and kind of sort of, like, funky, like, southeast Asian funk kind of music,
Starting point is 01:20:43 this is kind of what that track sounds like, except for drumming. She is so tight on the kit. Anyway, so check this one out. It's Salim with C Champu. All right. We will link off to that in the footnotes. Today's Zeitgeist is a production of iHeartRadio. For more podcasts from iHeartRadio,
Starting point is 01:20:57 visit the iHeartRadio app, Apple Podcasts, wherever you listen to your favorite shows. That is going to do it for us this morning, back this afternoon, to tell you what is trending. And we'll talk to you then. Bye. Bye.
Starting point is 01:21:13 Hey, fam. I'm Simone Boyce. I'm Danielle Robay. And we're the hosts of The Bright Side, the podcast from Hello Sunshine that's guaranteed to light up your day. Check out our recent episode with Grammy award-winning rapper Eve on motherhood and the music industry. No, it's a great, amazing, beautiful thing. There's moms in all industries, very high-stress industries
Starting point is 01:21:36 that have kids all across this world. Why can't it be music as well? Listen to The Bright Side from Hello Sunshine on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Kay hasn't heard from her sister in seven years. I have a proposal for you. Come up here and document my project. All you need to do is record everything like you always do. What was that? That was live audio of a woman's nightmare.
Starting point is 01:22:01 Can Kay trust her sister, or is history repeating itself? There's nothing dangerous about what you're doing. They're just dreams. Audio of a woman's nightmare. Can Kay trust her sister or is history repeating itself? There's nothing dangerous about what you're doing. They're just dreams. Dream Sequence is a new horror thriller from Blumhouse Television, iHeartRadio, and Realm. Listen to Dream Sequence on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Curious about queer sexuality, cruising, and expanding your horizons? Hit play on the sex-positive and deeply entertaining podcast, Sniffy's Cruising Confessions. Join hosts Gabe Gonzalez and Chris Patterson Rosso. We'll see you next time.

There aren't comments yet for this episode. Click on any sentence in the transcript to leave a comment.