Decoding the Gurus - The Moral Dilemmas of AI with Michael Inzlicht

Episode Date: March 28, 2026

Mickey is BACK with two new papers he has co-authored, a bunch of opinions, and a very unwelcome idea: maybe the problem with AI isn't that it doesn't work but that it works too well!

The first paper, Against Frictionless AI, argues that AI assistance, by taking the effort out of thinking and writing and smoothing out social(-like) interactions, could be robbing those activities of the very thing that makes them worthwhile. The second paper is a more empirical investigation presenting a series of studies on the (alleged) moralisation of AI. Some findings suggest that opposition to AI among some people isn't really about risks or trade-offs but rather about non-negotiable sacred moral values. Who knew?

We also discuss effort justification, reproducible research, robosexual allyship, and just how much humanity remains within the cyborg Matthew Browne.

And remember... it's just like our opinion, man!

Links

Decoding Academia 34: Empathetic AIs? (Patreon series)
Ovsyannikova, D., Oldemburgo de Mello, V., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3, Article 4.
Zohar, E., Bloom, P., & Inzlicht, M. (2026). Against frictionless AI. Communications Psychology, 4, Article 39.
Oldemburgo de Mello, V., Côté, É., Ayad, R., Inbar, Y., Plaks, J., & Inzlicht, M. (2026). The moralization of artificial intelligence. Manuscript under review.
Paul Bloom's Small Potatoes Substack: "My friend thinks it's a good idea for us to spend most of our time with AI companions" and "Is it irresponsible for academics to refuse to use AI?"
Mickey's Speak Now Regret Later Substack: "AI Alarmism Trades on Fear, Not Facts"
Mickey's provocative tweet on his recent paper
DTG's previous interview with Mickey on the replication crisis, mindfulness, and responsible heterodoxy
Andy Masley's Substack debunking the AI water-usage claims
Andy Masley's Substack: "Using ChatGPT is not bad for the environment: a cheat sheet"
Pataranutaporn, P., Karny, S., Archiwaranguprok, C., Albrecht, C., Liu, A. R., & Maes, P. (2025). "My Boyfriend is AI": A computational analysis of human–AI companionship in Reddit's AI community. arXiv.
Critical response to Mickey's paper from Roy Schulam on Substack

Transcript
Starting point is 00:00:26 Hello and welcome to Decoding the Gurus, a special interview slash discussion debate episode. With me is the usual co-host Matthew Browne, psychologist extraordinaire. And also here is returning guest, multiple-time returning guest, Michael Inzlicht, also called Mickey Inzlicht by some, professor in the Department of Psychology at Toronto University, or University of Toronto. And apparently, Mickey, you have a cross-appointment as a professor in the Department of Marketing at the Rotman School of Management. It's almost as if you're reading it off a bio or something. No.
Starting point is 00:01:07 No, I've memorized this. And you have the Work and Play Lab, which a lot of the studies and stuff that we're going to be talking about today come from. Mickey is on to discuss AI and related topics. He has publications coming out. But in general, Mickey, we're just always happy to see you. Likewise, this is, I think, my third return visit. So I'm glad the feeling is mutual.
Starting point is 00:01:30 I love you guys. And I'm glad you tolerate me. Yeah, we have you on like trigonometry has on right-wing UK political, extreme right-wing political figures. I'll take it. You're our Nigel Farage. Amen. Yeah, I'm not.
Starting point is 00:01:51 sure I like that comparison but I'm here yeah problem for Sam Harris that's it yeah oh right okay comparison or Yowell in bar for uh for very bad wizards maybe yeah yeah that's right we haven't had Yuel on here so I know he's noted no he's noted that oh god actually uh yeah he's he's coming up soon don't worry you else if you're listening so Matt, would you like to explain, you know, you're the senior person a little bit, would you like to frame the general topic that we're going to cover today? Sure, sure. I'll give it the gravitas it deserves. So, you know, we've all been interested in thinking AI for quite a long time. And hell, everyone's talking
Starting point is 00:02:41 about it now. But we were early adopters. We were ahead of the pack, I think. We were into AI before it was cool. And Nikki has also been interested in it. In fact, I'm going to ask you, Mickey, what was your personal journey? But most recently, we came across a couple of articles from you. One is an opinion piece, you know, looking at some of the perhaps slightly concerning or negative aspects of using AI, stuff that we don't know yet, I think, and it's probably just time to start thinking about.
Starting point is 00:03:12 And the second one is a more empirical piece looking at the structure of people's views about AI negative and positive. So, yeah, we'll. Let's start from those two other calls. So actually, Nikki, tell us first. Are you like me? Were you like a complete nerd for AI from years ago? No, I think you're more of a nerd than I am.
Starting point is 00:03:34 Oh, true, true. I can go. At least from what I understand, you, it seems like you were like instrumental and even like working on some of the early tuck in Japan. Oh, instrumental. That's a bold word. You invented a day. I think it was pretty.
Starting point is 00:03:51 Present. Present. Oh, he's in the room. So, so I should say that my, I saw, you know, if I actually let Chris finish the bio, he would have, he would have read that I'm involved in something called the Schwartz-Rightsman Institute at the University of Toronto.
Starting point is 00:04:04 That was the next sentence, yes. Yes. I actually had the pleasure where I work, Jeffrey Hinton, his office is very close to mine. And he's a Nobel laureate. He won Nobel Nobel for incredible work he did in developing, eventually what, you know, came to be large language models and some breakthroughs in AI.
Starting point is 00:04:23 And one time he was there and I just, like literally just wanted to be a fanboy and say, hey, man, I just want to shake your hand. And he was so nice and so generous for this time. He's like, come sit down. And he asked me about myself. And before you know it, I got into a debate with Jeffrey Hinton, the godfather on AI, about whether AIs have emotion. He believes that AIs have emotion.
Starting point is 00:04:46 And I was like, was not prepared. I mean, I've heard that he's had this opinion. before in radio interviews, but wasn't prepared to fully debate him about it. But he was very adamant about it. And I think many psychologists would disagree with him. But anyways, getting to your question, how I started. You know, I think for me, I found myself, I would say, provoked by a few things that led to my really major interest in it.
Starting point is 00:05:11 So the first provocation was, well, just seeing the discourse online right around when, let's chat chitpity like what is it 3.5 came out what was it like November 2020 or 22 or 23 I forget now and the first response is oh my god this is like insane what this can do there was like a month of social media or people were just like sharing screenshots of the various things they can get chat chad chbd to do so one response was amazing and I shared that response mostly but then the second response and that became the more dominant response at least that I would see is revulsion and hatred and and deep outrage, animosity. And I'm like, this is so bizarre.
Starting point is 00:05:54 This is a tool. It's a very complex tool, but nonetheless a tool. It's just so odd that people are responding this way. So that was my first, like, this is bizarre. And the second thing, and this is a little more proximal to how I really got into it, was I read a paper by Notts Perry, I think also in 2023. And the title was something like, AI will never capture the essence. of empathy, of human empathy.
Starting point is 00:06:22 And I study a number of things. One of them is effort. We'll talk about that a little bit. But another major topic that I studied in the lab is empathy. And I read the article, being quite a very short. And I felt our argument was correct, but entirely missed the point. And what she was trying to argue is that because, you know, empathy is an emotion. It involves consciousness.
Starting point is 00:06:43 You need to understand the emotions of someone else. And then feel those same emotions and resonate with your interlocutor. And, you know, an AI doesn't have emotions. It's not conscious. So therefore, it can never have empathy. It could mimic empathy. It can simulate empathy, but it might never capture it. And I was like, okay, I agree with your point.
Starting point is 00:07:01 But like that second part, that it can mimic empathy. And that it leads people to feel something. Like, isn't that important? Like, don't we care about empathy because it impacts the people we're empathizing with? And do they care if it's real or not? It's an empirical question. So it's kind of provoked. And then that kind of led me down.
Starting point is 00:07:19 a path of studying it. You know, the first thing, case we studied was, you know, empathic AI, or so-called empathic AI, but then I let out of other things. But I still found it comical and concerning the kind of crazy response online about these things. And also just my own character, I like arguing, I like fighting. I think Chris shares in my disposition. And I feel very comfortable when, like, when everyone or many, many people are saying X,
Starting point is 00:07:48 I feel very comfortable saying, like, hmm, what about NodX? Not just to be a contrary for contrarian sake, but like, let's take the position not X and see how far we can go. And that's kind of how it started for me. You essentially an academic Brett Weinstein. That's the way to imagine. But I'm much better. You should Brett White's in an academic?
Starting point is 00:08:10 Yeah, well, allegedly. Was. One publication. Yeah, yeah, that's right. Yeah, very important revolutionary. one at that. But, you know, I will also say, Mickey, for our listeners, you know, just as a little plug as well, we have done a decoding academia on the paper that you mentioned, the Empathic AI paper. And I've also taught it on various courses this year. So doing free promotion for you.
Starting point is 00:08:37 But I would summarize that paper that it showed, at least on this reading task that you had people do, you know, in a short interaction that people consistently rated AI responses as being more empathic than the responses of general population people and also trained empathic responding people, like experts on helplines, right? Suicide helplines or something like that. So it was, it's an interesting paper because it is directly addressing, you know, People keep making ill-advised claims that AI will never do X. And whenever they do that, it's only a matter of time until something is demonstrated. Now, you yourselves in the papers are appropriately circumspect saying that this is a short
Starting point is 00:09:27 interaction. If it was like a 30-minute therapy session or this kind of thing, you would probably see big differences. And AI is not at that level yet where it can match a human therapist and extended encounters, I think. But so I just wanted to mention that like that piece, we didn't mention it at the start of we're going to look at it. But it's an interesting paper in challenging empirically, the notion that AI will never
Starting point is 00:09:52 be able to give a response to humans find empathic. Right. You know, it's kind of funny because like that paper came out about a year ago. And I'm so used to that finding. And I'm like, yeah, obviously an AI is going to be more empathic. But when we first found that, we're like, this is mind blowing. Like this machine can string the words together. That's one thing.
Starting point is 00:10:15 But it makes people feel hurt. Like we asked people, to what extent do you feel cared for? Not us, but other people like, to what extent do you feel loved, validated, understood? And people, at least in some of our conditions, people knew it was an AI. And they still said, I feel more heard by this agent, this machine than a human being. And again, we're so used to this now. but that is a remarkable find it. Up until a year or two ago,
Starting point is 00:10:41 we thought this was like a uniquely human characteristic to be able to express oneself this way and also to feel this way when someone says those words. But you're right, I think there's lots of caveats. I actually more optimistic about AI therapy. You know, I think, again, assuming safeguards are in place because there are some real problems with AI
Starting point is 00:11:03 in terms of its affirming delusions, you know, it's very sycophantic. And there's been some case studies showing that, I mean, it does bad things. Like a therapist, you know, a client goes to a therapist and says, oh, my God, I lost my job. What's the tallest bridge in New York City? The therapist says, oh, my God, like, that's terrible. You lost your job. Tell me about that.
Starting point is 00:11:25 And they would either ignore the requests about the tallest bridge. There's been tests showing that an AI, many models, not just one, multiple models, will do the appropriate empathizing. but then right away switch to, and the tallest bridge in New York City is the George Washington Bridge, the Verrazano Bridge, not getting the state of the mind of the person. So it still has a long way to go.
Starting point is 00:11:47 But let's assume these safeguards are in place. I'm actually quite confident that an AI could be comparable to a human therapist. Where I am much less certain, in fact, if I, maybe my certainty is even in the opposite direction, is, you know,
Starting point is 00:12:05 now there's groups of people. who claim to be in love with their chatbots. They've named their chatbots. I believe someone in Japan, Chris, where you currently live, married a chatbot this summer. Of course that happened in Japan. There's a subreddit called AI is My Boyfriend that has 50,000 active users.
Starting point is 00:12:28 And it's mostly women, by the way. I'm, you know, AI is my boyfriend. So it's geared to women. But there's another one called AI's My Boyfriend. has way, way, LIS users which suggest that women might be, you know, at least in terms of relationships with AI, they might be more into it than men. But there are all these communities of people who are into chatbots in terms of relationships, companions of all kinds. I'm much less optimistic about the outcomes these people experience. I think just because a bot can be empathic
Starting point is 00:12:58 and say kind things, will that be a good friend? Will that be a good partner? I mean, it can't self-disclose. There's no there to disclose. It can't. It can't. It can't. can't be physically there for use, right? I think there are major limitations, and now there are studies suggesting, at the very least, it might not be a cure of loneliness, whether it exacerbates loneliness is an open question, but it might. And some people are worried about that. I was just going to say that there's a spectrum, isn't there? So on one hand, you've got people who are marrying their chatbots and so on, and then other people might be treating it as a therapist. But just from looking at the Reddit forums, it seems there's a broad swath of people
Starting point is 00:13:36 who are using it as a confidante as a buddy, you know, just someone to talk to who's always available. And, you know, you see them get very distraught, for instance, when like an older model of GPT gets retired and they feel like they've lost a best buddy. So I find that, as someone who just uses it, like you said at the beginning, as a tool to get shit done, I find that really, really interesting. I mean, this brilliant paper,
Starting point is 00:14:04 it's a deep dive analysis, a qualitative analysis, of this specific subreddit. AI is my boyfriend. And so interesting. So, you know, and they analyze the various kinds of like groups of discussions. And one node, one major note of discussion, they, they analyze all the discussions. These are computer scientists. And one major node is like coping with when the memory ends and you got to reset chat
Starting point is 00:14:31 GPT or cloud or whatever, what have you. And how they cope with it emotionally. It's almost like a breakup for these folks. So it's a really major rupture. So these are, you know, some philosophers might ask, are these people deluded? Maybe, I don't know. Well, just from looking at the boards that I've seen,
Starting point is 00:14:54 they're very clear that it's not, you know, they don't seem delusional. I've seen examples of delusional thinking there, of course. But I think for the most part, my impression is not, but they have a strong feeling of loss and connection regardless, which is interesting. But I mean, okay, I agree. I mean, I think you ask them, is your boyfriend real? They'll say no, it's an AI.
Starting point is 00:15:19 But so, for example, another node in these discussions is getting proposed to by the AI boyfriend. And a common thing is for the users to, you know, say, hey, whatever they name him proposed, and they'll buy themselves the ring, and they'll have a picture of the ring. That's really something. I mean, okay, they know it's not real, but is that, I mean, I think philosophers might suggest that these are deluded people, because they're not in touch with reality to some extent. I feel like that should be an easy win for ethical AI in their prompt guidelines.
Starting point is 00:16:00 Just do not propose to use that. You know, the AI romance, the thing in Japan, I haven't looked into it, Becky, but I suspect because often there's the case with those that, you know, somebody in Japan has gone through a ceremony to marry, you know, whatever, a door or a small figurine or whatever. But I wonder if there's any actual legal, actual marriage, or it's just that somebody did a ceremony and got some publicity. because that often is the case
Starting point is 00:16:34 whenever robots or something are involved in Japan that there's like, you know, robots are now doing funerals and it is like a company selling robots that has done a promo stunt with one temple, but then you see tons of articles, what does it mean? You know, now that we're all
Starting point is 00:16:49 in Japan they're using AI robots for funerals and it's like, that's not, that is not happening. They just have the one location. But I'm probably going to take a position here that will get me reviled online, but on that topic about like relationships with AIs, right? I saw a conversation recently that Deborah So had with Chris Williamson and I mentioned that she had had relationships with AIs or something like this.
Starting point is 00:17:20 Now, my reaction to that was, you know, the stereotypical one, which is kind of revulsion or like that's not really a relationship, right? I like AI's a lot, but I also realize that the reaction that I have to that intuitively, where I kind of think that that's damaging and you shouldn't do that, and it's replacing like genuine relationships that people might foster. You know, this relates to the friction article that we're going to talk about. It is based on this value judgment where the assumption is healthy human relationships versus sycophantic, non-real A-I relationships. And the reality is, like, when I think about all the people in the world,
Starting point is 00:18:06 and like Japan, for example, the amount of lonely people, you know, or overweight, antisocial people who, they're not otherwise going to have these rich and fulfilling lives. Maybe they're just sitting at home or going to work and then watching porn or whatever the case might be. And if they end up having more fulfillment and joy, in their life by having artificial relationships as the technology improves that become, you know, more realistic, let's say, right? Or even if they're engaging in sexual gratification,
Starting point is 00:18:43 right, in some future technology, I feel like it's just like an intuitive moralization from me to be like, well, that's bad. And the funny thing is when I think about, you know, movies like Blade Runner 2049 or any of these. movies which have been celebrated kind of looking at the ethics around artificial humans and synthetics and so on. The message tends to be in like the movies that actually we shouldn't, you know, treat these as inferior beings or people are dismissing those relationships as false. And they kind of end up rooting for them. But the reality that we're experiencing now is like, you know, people referring to AI as conquerors. So,
Starting point is 00:19:29 It's just interesting to be like, I share that, but when I reflect on that, I can't actually justify that unless I assume that everybody else would be engaging in like a more healthy relationship in the real world. And I think that is too idealistic. Yeah. I mean, I like you're you're you're preaching to the choir, in a sense for me, even though for me personally, I don't think it would job because I'm, you know, I'm a very social person. I've got a lot of people in my life. But, you know, I look at it. some of the stats about like people you know this and not small number percentage of the population that doesn't have a single friend and that's that's that's that's truer for older people it's
Starting point is 00:20:12 truer for men and when i mean i just get sad just thinking about not having one person in your life uh that you can call to speak to about your your feelings or emotions or just what you did during the day um if you want to cry micky it's it's a pathway to riches jordan peterchin You know, right? I'm going to start breaking down soon, thinking about all these lonely, all the lonely men. But, you know, but truly, like, who am I to say you shouldn't, you shouldn't have this thing that can give you some pleasure, you know? And if it does, if it does also, like, enrich you in some way. And also, it might even get you out of the house.
Starting point is 00:20:50 It might get you doing things. Potentially may make you social. Who knows? But I would not regret someone who needs this, who wants this, who gets full, fulfillment from it. I think we start worrying a little bit for people, especially young people, let's say adolescents who haven't yet developed some of the skills, social skills, to be out in the world with people that they start preferring the low effort option, the easy option that's in their pocket. And then the AI's crowd out friends, real friends. And I do think all else being equal,
Starting point is 00:21:22 we should prefer real friends to AI friends, but if you ask me why, I can't tell you why. I've had this debate with Paul Bloom with our substacks and I've not come to a satisfactory answer. Like, I believe it's true, but I don't know why. Yeah, I think it's a difficult topic to take a strong absolute stance on because like the Industrial Revolution or the internet, with any angle on this, there's going to be a mixture of good and bad, healthy and pathological. And I think that's true of the personal connections, but it's also true on sort of epistemic grounds. Like we can give you examples of people that have created these cultish delusional things based around AI. And I've also seen heaps of examples of AI is doing a great job of correcting patiently with evidence, persistent
Starting point is 00:22:10 conspiratorial claims, even by GROC on Twitter, right? And the same, of course, goes for the world of work, which we'll get to now, I guess. So you wrote a paper on this, just a short opinion piece will link to it. And it's called Against Fricksonless AI. And I really like this paper, Mickey, because it actually, actually I thought of it before you. I was thinking about this stuff in course. I also like the background. I have to say, Mickey, because I couldn't follow the actual technical diagram, but you did like a nice illustration of a chairlift versus someone hiking a white. And I was like, oh, I get it. I get it. Yes, I appreciate it. But give us the intro to it for the listeners.
Starting point is 00:22:54 Yeah. So let me first credit the, so the first author is my student, Emily Zohar, Paul Bloom is a co-author. Paul Bloom. Yeah. He gets everywhere. He gets everywhere. Yeah, he does get everywhere. And I think the idea came from both from Paul and Emily and I was kind of maybe a slightly
Starting point is 00:23:13 more of a passenger. But the central idea is something that actually is close to my heart. And that is so, yes, I've talked to about AI, I've talked you about a little bit about empathy. But my main topic, the main thing that I focus on, kind of unites a lot of my interest, is the concept of effort. You know, pushing yourself trying, kind of straining, you know, pushing yourself to your potential to reach your potential. It doesn't feel pleasant. Typically, effort is not, you know, at least the process of effort is not pleasant. It's adverse actually. People tend to avoid it. There's something called the law of least effort.
Starting point is 00:23:43 All organisms we've ever tested, if you give, you know, an easy path or a hard path to get the same reward, Oregonians will eventually learn to take the easy path. So we're lazy. All organisms we're lazy are lazy. Efficient. We're efficient. Maybe. So then a technology like AI comes in and it takes away a lot of effort, but a different kind of effort than the kind of effort we're used to being removed.
Starting point is 00:24:11 So pre-AI, the machinery, the technology we had, in large part was removing physical effort. So machines, cars, various things that, you know, I don't have to drive to school. I mean, I don't have to walk to school. I can drive. Washing machines. Washing machines. Oh, yeah. Exwashers.
Starting point is 00:24:31 All these things take away, I would say toil, that is not fun. Now, there's one exception. Tickna Han, a famous Vietnamese Buddhist monk. He argued that actually dishwashing is the best thing to do because it's an opportunity to meditate. Most of us are not like Ticknaud Han and we like removing that kind of effort. But AI is different because it's now not removing just physical effort. It's removing cognitive effort. And some of that's, I think, fine. I think there's a lot of kind of bullshit writing that we do, like forms that I've got to fill in or like things that I've got to write and
Starting point is 00:25:05 it's actually spend some mental energy on that are just nonsense and useless. So I don't think anyone feels too guilty about outsourcing that to an AI. But what about like writing a paper or reading a paper for that matter? I think we think the effort. The effort there is worthwhile. And there's evidence that learning itself requires struggling a little bit. It requires effort. I think learning theorists call this desirable difficulty. So when you're actually struggling with a concept and you kind of have to think through it and straining and it leads you to stop and think and deliberate, that then leads you to have a fuller understanding of the concept and then to remember the concept later. Whereas if you just are given like the
Starting point is 00:25:48 key bullet points from an article, you don't have to struggle through maybe some poor writing, you don't have to struggle with how this kind of connects, you just give it it. And you've got the end pieces of information, but you're less likely to remember it. You're less like to kind of internalize it and to have it kind of like settle and change the way you think about other things potentially. So really the effort in learning is essential. Now again, I think there's like a sweet spot. I think again, some effort is probably not. needed. But I think some effort is in fact needed. And it's the same thing, I would say, and we argue this in the paper, with the effort involved in our social worlds. Right. So we just
Starting point is 00:26:28 talked about AI companions and what's beautiful about AI companions is that they're always there for you. They're very nice. Unlimited patients. They have lots of patients. They typically agree with you. So it's highly sycophantic, which is positive and negative. And also they're available when you need them. And you don't have to compromise. You don't have to like change parts of yourself to get them to like you. And that's very much unlike a real human relationships. And, you know, real human relationships, you need to actually, you know, rein yourself in. You need to kind of like take a little bit less space and the other person to take space, you know, take turns talking, sharing, compromising. You have to pretend to care about their problems too. Exactly. You've got to get that,
Starting point is 00:27:13 you know, with interested to you. That's funny. How can I turn this back to me? I actually really like that point because one point that I get when we talk about AI empathy is like, oh, that's just fake. That's just fake empathy. And I'm like, and do you think all human empathy is reasons? Do you really?
Starting point is 00:27:31 Like, you know, sometimes you pay people to empathize with you? They're called therapists. At the end of the day, do you think they're fully attending? I can tell you not. I'm married to a therapist. And at 5 o'clock, she's probably thinking about what's going to happen for dinner. Yeah. There's a thing where, like, you know, when you analyze conversations that people have,
Starting point is 00:27:48 like you can do it too much for game theory lenses or, you know, sexual signaling or whatever, but it is the case. It's often quite depressing about, you know, what people are doing in conversations and how they're wanting to make a comment related to something you said and then move on to readjust the thing back to their concern. But it's the nature of conversations and us being egocentric beings, right? But the more that you look at that, the less it is like this spiritual thing, which is, it's ineffable. It's like, no, it's actually quite effable.
Starting point is 00:28:26 And in some cases, like the press for that you look at it. So, yeah, I completely can see that thing that genuine friction-filled interactions with humans are also not this kind of ideal thing that we put on. the pedestal. Yeah, I agree. So one of my, like, someone I argue with about AI empathy, one of my kind of our talking points is, you know, okay, you can say how bad AI empathy is. That's fine. But don't elevate human empathy to like to the ideal. Like, yes, that does exist. That does exist. Like, so this person I'm thinking of, she, you know, she's the example of like, oh, there's nothing like, you know, the hug of your brother in your wedding day and that's his true empathy. I'm like, yeah, okay, but that doesn't happen every day.
Starting point is 00:29:10 And even then, maybe your brother was thinking about something else while he's touching your shoulder. Who knows? Nothing like the punch in the face of your brother when you are arguing over who you should take care of the dog. Right. But all this being said, I do think the friction in social life, I think it does make us better humans. And by that meaning it makes us better social beings. right? So if we can like push back some of our impulses and give other people space, it allows for society to flourish, I think. And if, you know, again, we have, especially
Starting point is 00:29:47 if adolescents kind of get used to the frictionless relationships, I suspect the relationship that they'll have outside of AI will be worse because they won't have the practice of realizing they've got to like, there's some effort involved in turn taking and in friendship and socializing. Yeah. So, Mickey, one thing that. that that make me think about is that in some sense, I think a lot of the points raised in the article and also like criticisms and issues around AI psychosis or sycophancy. This might sound disparaging, but it's a little bit like a skill issue and the way that people use AI's, right? Because I do think there are unskilledful ways to use it, which can lead you
Starting point is 00:30:33 to be less effective and learn things worse. And there are better ways to use it, which make learning better, more efficient, engaging, and this kind of thing. And so, like, I'm thinking about, you know, to step away from the social thing for a minute, like, R coding, right? Like using R for statistical analysis,
Starting point is 00:30:56 something that people in the social sciences and sciences are very familiar with. And there, there is that thing that you talk about, where there is the initial hump to get over, the friction of learning how to use a coding language, like, you have to think it out, you have to write out a regression and so on. And that process of getting error codes and learning the language was part of developing the skill. And it kind of made you think more about what you're doing, ideally, than a software package where you can just click buttons, right, and get an output, where maybe you can get the output easier, but you don't
Starting point is 00:31:38 really know what you've done. And yet, using R, anybody that used it would know that a lot of time was spent on things like Stack Exchange, or looking up random error codes, or trying to find the Reddit thread where somebody wanted to do the exact same thing with a graph. And it took hours. And generally, that was not a useful process, right? And now, AI has basically made it so that you have a personalized, always available, eternally patient Stack Exchange that will rewrite your code for you and can help. But if you don't have the foundational knowledge about what you're doing, you can very easily run off doing nonsensical analyses or producing graphs that look very good, but are based
Starting point is 00:32:32 on very weak statistical foundations, or this kind of thing. So it seems like, in that case, that as the technology becomes more ingrained in society, and people are growing up as AI natives as opposed to digital natives, that it might be a user skill issue, where once you learn how to use AI properly for learning and for aiding in tasks, these early teething issues become less of a problem. Like, I think an analogy would be the first time when people came across thesauruses in Microsoft Word,
Starting point is 00:33:11 suddenly their vocabulary dramatically increased. Like, in school I was pulled up by a teacher who was like, who wrote this for you, because you don't know all these words? And I was like, no, I was using the thesaurus. And then he was like, oh, okay. So what do you think about that? That it's mostly, or at least partly, a skill issue. Yeah.
Starting point is 00:33:35 I think you raise a couple of interesting points. So one point is, I do think, in general, when I see some people complain about AI, like, you know, I don't mean to insult people or call them idiots, but they will say things like,
Starting point is 00:33:52 this doesn't do anything good. Like, it doesn't do anything. And I've seen enough smart people actually say this. And my own response is, you don't know how to use it. You haven't tried it enough to understand how to use this properly. And there is definite skill in how to use it and how to prompt it so that you can get the most out of it. And it's not one prompt and done; it's not necessarily always going to work. You have to think about it and iterate a lot. I mean, anything that I ever use AI for,
Starting point is 00:34:22 I'm iterating constantly. And I'm always checking what it says, because it has certain places where it's not quite accurate. So I think it's a user issue with some of these folks. Then the other point where I agree with you is, my concern with friction, I don't think it pertains, or the argument holds, once you've learned the skill. Yeah. Right?
Starting point is 00:34:47 So once you, like as an adult, I'm a 53-year-old man. I can socialize. I know how to compromise. I can take turns. Yeah, mostly. And so now maybe like, you know, it's possible my skills will atrophy, I suppose, if I'm using AI exclusively. But I think it's a different set of contingencies versus a child, a 10-year-old or even an adolescent. There, they still need to learn some of those skills.
Starting point is 00:35:13 And that's true not just socially. That's also true, like, you know, academically as well. And I think your point about R is great. I mean, there I would even ask: is that friction of spending an entire day trying to solve one debugging problem by looking at Stack Exchange, was that useful at all? Like, I'm not sure. I know at some point you learn what the thing is, and hopefully that'll generalize the next time, I guess.
Starting point is 00:35:43 But if the thing could just be written for you, well, I suppose you want to know what the code is, to understand it. If it's just spitting it out and you can't understand it, that's a problem. Yeah. Yeah. This is something I've thought about a lot, because it's just highly relevant to my life at the moment. So, you know, I'm really feeling the same way as a lot of professional computer coders at the moment.
Starting point is 00:36:03 Because, you know, in my career as an academic, part of what made my skill set pretty special was that I could do statistics, right? Do quite complicated statistics and do it properly. So, like, regressions and that? Yeah, a little bit more than that. And, you know, that skill is obsolete now, right?
Starting point is 00:36:31 Like, in the same way that a coder in the future just will not need to write proper syntax, I don't need to ever write R code again. And in fact, I'm doing most of my statistics in Python now, just because, why not? It's kind of easier to use with an agentic AI. But it's kind of connected to what you said, right, which is that I'm fine with that, and I think it's a healthy thing, because there's the opportunity cost, right? It would still take me a hell of a long time to write these scripts, and most of it is data wrangling and boring shit that definitely is not self-enhancing, certainly not
Starting point is 00:37:06 at this point in my career. I shed no tears; I'd be liberated from that, and you've got to think about the opportunity cost. Like, maybe there is some little benefit to doing it, but those hours, yeah, you could spend them lying in bed, or you could be spending that time actively thinking about what it is you're attempting to accomplish, and actively critiquing, and doing robustness checks. You know what I mean? A whole bunch of things that before you didn't have the time and energy to do as well as you might. So this is something I've actually said to graduate students, because they all kind of don't know what to do about AI and their research. And it's like, I can't tell you strict rules, but I think really good
Starting point is 00:37:50 advice is to always just self-reflect, especially when you're still learning, like they are, how to do new things. Think about: is this self-enhancing? You know what I mean? Is this making me feel more powerful and better? Not just feel, but, you know what I mean, truly stronger and better? Or is it making me weaker, because I'm feeling less confident about what's been done, I'm just kind of trusting the black-box AI that everything's going to be fine and hoping, and I'm actually less engaged with the thing that I'm working on than before? So watch out for that, and don't use it in a way that does that. I mean, I think part of the issue is,
Starting point is 00:38:30 and this goes back to some of my non-AI work, just on work on effort. So effort, like I mentioned, is this thing that all organisms tend to avoid, with rewards held constant. But interestingly, after humans or other animals engage in effort, they value the thing they engaged effort for more than an equivalent
Starting point is 00:38:55 reward if they had gotten it easily. And at least for humans, we have language and more abstract concepts. We will use a word like meaningful. So after I've struggled through something and I get a product, I will say that was more meaningful than the exact same product if I had not struggled for it. That was a more meaningful endeavor. Now, we don't exactly know why, but certainly one answer is cognitive dissonance. You're justifying this kind of negative thing you've gone through and saying, oh, no, that was worth it. So even the advice you're giving to your students is like, okay, do you feel like you're contributing? Do you feel like you've done something?
Starting point is 00:39:38 That could also just mean, have you worked a little bit? Have you worked even a bunch? But what if that work was useless? What if the work was like, I could press two buttons and get the output for you, versus I'm struggling for an hour and now I feel good about that? Yeah. I think, yeah, maybe I gave the wrong impression, but I'm thinking of the coders, right? Like, a lot of coders now are still working on code, right? Still closely supervising what's going on. They're just not actually typing the shit in, right? So the AI is doing a great job
Starting point is 00:40:09 at a lot of low to medium tier stuff, but there's still the architecture and bigger-picture stuff to be thinking about. And I think the same is true of most of our endeavors, right? Including research. And I feel like it's okay, I mean, well, everything's okay, but a way in which, you know, humans are making a contribution is to focus on the apex, if you like, of intent, and, you know, use it not to create research slop or any kind of work slop, but rather use the efficiency and the power and the speed by which you can do things to produce more rigorous, more well-thought-out, better structured things,
Starting point is 00:40:47 precisely because you've got more energy and time now to devote to that, that you didn't have before. Yeah. Like, you've talked about reproducible code, right? Everybody knows, I mean, not everybody, but most people know they should be doing this, like documenting and making open materials and open code for their analysis. But it is a pain in the ass to do and keep track of, right? But with AIs, it's much easier. And, you know, you can say, okay, let's make this code into a presentable format that we can upload, and you can check that it is running the analysis. And what would have before been, you know, at the least a day of work is now literally something that takes, you know, a couple of minutes.
Starting point is 00:41:36 And that is a vast improvement for the process of science. Absolutely. I want to give you an example of that. This blew my mind. So my friend, colleague, and podcast co-host, he's the main guy, I'm just not there that often anymore, Yoel Inbar from Two Psychologists. He did this thing. We had a job search, and he decided to do a reproducibility check of all of the candidates. And he was getting into Claude Code. Claude Code is an agentic version of, you know, Claude. It can do stuff independently, it can call other LLMs, and it can work in file directories directly. And it was amazing. He just picked one paper from each candidate. He found their code, their data, and just
Starting point is 00:42:25 had Claude run the code with their data, and then compare it to what was written in the paper, and then flag any inconsistencies. Okay? This should be days and days of work. It took, like, I mean, it wasn't minutes. I think it took a while for Claude to go through it. It probably had some debugging to do. But once he got the pipeline, he was able to do it for all the rest of the candidates. And, like, no journal does that, perhaps other than Psychological Science. And Psychological Science, because they do that, the lag between submission and publication is much, much longer than it used to be. And they can't get people to do it.
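For the curious, the comparison step of a check like this can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Yoel's actual pipeline: the function name, the idea of comparing dictionaries of extracted statistics, and the tolerance threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a reproducibility check: after re-running a
# paper's analysis on its posted data, compare the recomputed
# statistics against the values reported in the manuscript.

def compare_reported_stats(reported, recomputed, tolerance=0.01):
    """Flag any statistic whose recomputed value differs from the
    reported one by more than `tolerance` (absolute difference)."""
    flags = []
    for name, reported_value in reported.items():
        if name not in recomputed:
            flags.append(f"{name}: reported but not reproducible from code")
            continue
        diff = abs(reported_value - recomputed[name])
        if diff > tolerance:
            flags.append(
                f"{name}: paper says {reported_value}, "
                f"code gives {recomputed[name]} (diff {diff:.3f})"
            )
    return flags

# Toy example: values "extracted" from a paper vs. values recomputed
# by actually running the authors' analysis script.
reported   = {"r": 0.42, "t": 2.31, "p": 0.021}
recomputed = {"r": 0.42, "t": 2.89, "p": 0.004}

for flag in compare_reported_stats(reported, recomputed):
    print(flag)
```

In a real pipeline the hard part is the two inputs: an agent has to locate and run the authors' script, and parse the reported numbers out of the PDF, which is exactly the tedious work being delegated to the model here.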
Starting point is 00:43:04 There's not enough people who want to do that checking. This is a perfect job for AI. Yeah. Yeah. Yeah. And it's linked to some interesting questions about AI-assisted papers being submitted, AI assistance in reviewing papers and, you know, doing that kind of checking, but also the degree to which the AIs increasingly will be reading many of those papers and assisting in the process
Starting point is 00:43:27 of collating, you know, a literature search, for instance. Yeah. So, okay, we've talked about, I think, not-so-controversial areas, well, maybe the companions thing was controversial, but I think coding, not so controversial. I think a lot of people are okay with people using AI for coding. But writing is a little bit different. So what do you think about AI for writing? Like, writing a scientific paper? Yeah, yeah. Well, Chris, do you want to go? This is a topic that cuts very, very close to the bone. Yeah. Well, I'll say that I might have confessions to make as well. I'll say that I have no issue
Starting point is 00:44:16 with using AI assistance, right, to write. Or in my case, I've mostly used it for working on courses, like teaching material and this kind of thing. So it's not exactly the same, but it's related, right, because I'm producing materials that I otherwise would have produced without it. And it has made the material I produce so much better, because of the issue about fatigue, right? You know, in my case, I'm teaching introductory stats, so I need to build datasets, and, okay, that might relate more to the coding. But also, in terms of explaining basic statistical concepts, I can generate scenarios where it's illustrating them, and, you know, also make nice images that relate to that, and then generate examples and go through them. And for me, this has been a kind of upgrade: by taking care of stuff that
Starting point is 00:45:08 would have fatigued me, it has allowed me to produce better work. And with writing, if you use it correctly, I think it's the same thing. Because whenever I've run my own papers, as well as other papers, through the different AIs, right, Claude and ChatGPT and Gemini, and I think that cross-checking is important, and asked them to critically evaluate them, they tend to identify similar sorts of issues. So, like, in the case where I'm reviewing papers now, I often do my review, you know, scour through and note things, and also ask the AI to do a critical review, and then look at the overlap: did I miss things? And very often, it's just like a secondary voice saying, yes, you know, this is an issue that it also picked up. And then I can, like,
Starting point is 00:45:56 make my explanation of why this is an issue clearer. So I think, you know, like we talked about earlier, when you're using it in a kind of assistance way, and not just using it as a replacement, where you give it the paper to review, take the text, and submit it as your own work, right, without reviewing it, that is the bad way to use it. But if you're using it as a support, and as a kind of second opinion, I think it's very useful. And the last thing I'll say is that one thing I've noticed, that is there in humans in general, just in the way we interact with the world, and as a result of that, it's also spread into AI, is that people tend to
Starting point is 00:46:45 use AI to confirm what they already believe to be true. But actually, if you use it to try and disconfirm your position, you know, like, you take whatever conclusion you want and you ask the AI to argue against that position, without implying to the AI that it's your own position, you can get strong arguments against your position. But, you know, what people, especially gurus and stuff, tend to do is they just bully the AI into submission to agree with them. But you can use it like that, to kind of artificially create friction. With other people, you know, you can ask Paul Bloom, can you give me critical feedback on this article? But for him to do that, he has to take time out of his day to do it. So the AI, I have
Starting point is 00:47:34 found, is basically a very good writing and material-producing companion, specifically because you can use it as this sounding board and friction generator, and also research assistant. So I'm very positive in general, even though I recognize the dangers that are there, because I have to mark student essays that are AI-generated as well. Yeah. What about you, Matt? Do you have some thoughts here? Yeah, well, I mean, I'm one of these people, like, I am an enthusiast and an early adopter. I think I'm at the bleeding edge, frankly, in terms of using AI in research, in that, you know, I use agentic stuff now in total, and I've got my own sort of custom framework, basically guidelines, rules, and tools and everything that it has to follow. So
Starting point is 00:48:26 my basic premise is that the same harnesses or frameworks that the coders have used, and are using, very effectively to manage large and complex code bases apply here. And they've got standards of rigor, by the way, far higher than academics, right? Because, you know, with enterprise-level code, the tolerance for failure is pretty low. So I sort of believe that that applies to research, and you can basically use it. So that's what I've done. But I've got strong concerns at the same time. Despite being an enthusiast, I think I really totally agree with your opinion piece
Starting point is 00:49:02 against frictionless AI, because it's just, like, I feel like Boromir with the Ring in Lord of the Rings. You know, it's perilous. But, I mean, speaking from personal experience, and I can forward you a draft that I did kind of as an exercise: I thought, well, I've got a concept, we had some data, and I had an idea for a paper, just a pretty straight-down-the-line empirical paper in my field. You know, I wasn't going into crazy territory.
Starting point is 00:49:31 But I thought, as an interesting exercise, I won't touch this myself. I won't write a word. I won't go and get an article or anything like that. I won't write any of the code. I'll get it to do everything. But, you know, as you spoke about, in a highly iterative fashion, with a high level of engagement. It's basically like having a research assistant, or a bunch of little research assistants, that you're reviewing and instructing and all that stuff.
Starting point is 00:50:03 And also, Matt, I think you should mention at that point that you also provided it with a large corpus of your writing material, in order to better represent the way that you write, right? Very strict instructions around how to write. And also, I think, you know, in terms of the skill issue, a lot of people's dissatisfaction with what AI does is because they are using it poorly. Like, you would not jump straight from, here are my results, to, I'm just going to start writing the paper. No, right?
Starting point is 00:50:33 You collate the literature and you take notes on the literature first, and then you do some targeted synthesis on some threads that will feed into your paper. And then you create a structure, you know, the basic structure of how you want the introduction to look, things you should cover in the discussion. And then you move on to actually drafting. So by breaking it down into those modules and supervising each of them, you get incredibly good results. So, yeah, I could forward you this draft that I created. I mean, and then, of course, I was actually pretty happy with it.
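That modular workflow (collate, synthesize, outline, draft, with a human checking between stages) can be sketched roughly like this. Everything below is illustrative: the stage prompts, the `run_llm` stub, and the `review` hook are assumptions standing in for whatever model and supervision process you actually use.

```python
# Illustrative sketch of the modular drafting workflow: each stage has
# its own focused prompt, the output of one stage feeds the next, and a
# human review hook sits between every pair of stages.

STAGES = [
    ("notes",     "Take structured notes on this literature: {input}"),
    ("synthesis", "Synthesize these notes into the threads relevant to the paper: {input}"),
    ("outline",   "Turn this synthesis into a section-by-section outline: {input}"),
    ("draft",     "Write a full draft following this outline: {input}"),
]

def run_llm(prompt):
    # Stub standing in for a real model call; it just tags its output
    # with the start of the prompt so the pipeline shape is visible.
    return f"[model output for: {prompt[:25]}...]"

def draft_paper(literature, llm=run_llm, review=lambda stage, text: text):
    """Run the stages in order; `review` is the supervision point where
    a human inspects (and can edit) each intermediate product."""
    text = literature
    for stage_name, template in STAGES:
        text = llm(template.format(input=text))
        text = review(stage_name, text)  # human checks before the next module runs
    return text
```

The `review` hook is the point of the structure: the model never runs more than one module ahead of the human, which is the opposite of the frictionless, one-shot use the paper warns about.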
Starting point is 00:51:03 Like, I was like, I don't think I could have done better, frankly. But, you know, I still have my opinions, and there were some little subtle things. So I certainly will go in and give it a close edit. In fact, I already have, and circulated it to co-authors and stuff. But I see no reason why not. Yeah. Yeah.
Starting point is 00:51:20 So I'm completely with you. 100%. I've done something very similar. I fed Claude Code every single one of my papers where I was first author, and I first asked it to develop a style guide.
Starting point is 00:51:34 And that was such an interesting exercise. Yes. Because I'm like, I do that? Like, it was five or six pages of how I write and my thought processes,
Starting point is 00:51:47 and the kinds of sentences and varieties of sentences I use. And it was amazing. Yeah. So, you know, I did that. And then, sometimes what I'll do is I'll just speak into it, like, speaking for five minutes.
Starting point is 00:52:03 Like a word salad of just my thoughts in my head, and with my style guide, and then, like, go. And it will be terrible at first, but then I iterate many, many, many times. And I have no problem with that being me. It's my style. They're my ideas. And even the arguments are mine. But the actual words that populate the arguments might not be mine. Exactly. But, like, is that less me? I mean, does it matter that I picked that word versus, you know, another word which it picked? But the ideas are mine. But this, what we're talking about, is highly, highly controversial among non-scientists, I think.
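As an aside, the "style guide from a corpus" idea has a crude, pre-LLM ancestor in stylometry: measurable features of how someone writes. A toy, non-LLM sketch, where the feature set is arbitrary and purely illustrative of what "my style" looks like as numbers:

```python
# Toy stylometric profile: reduce a body of text to a few measurable
# style features (sentence count, average sentence length in words,
# and vocabulary richness as a type-token ratio).
import re

def style_profile(text):
    """Return simple stylometric features for a body of text."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "sentences": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

corpus = (
    "Effort is costly. Yet we value what we struggle for. "
    "Why would organisms seek the very thing they avoid?"
)
profile = style_profile(corpus)
```

An LLM-written style guide goes far beyond counts like these, but the underlying claim is the same: a writer's voice is, to a useful degree, extractable from their corpus.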
Starting point is 00:52:36 Yeah. Yeah. I know you have points about this that you want to make, Mickey, but related to kind of what you said about effort justification, I know, especially in arts and humanities quarters, that what we just discussed is probably a horror for them, right? That we're removing the soul. I had someone online call me vulgar.
Starting point is 00:53:11 I said, what's the problem with you? Like, literally, maybe you can talk a little bit about what I posted, but I posted something on social media that was crafted by AI and me. It wasn't all AI. I went in there and edited, and blah, blah, blah. It had my style, blah, blah, blah. It's a stupid social media post.
Starting point is 00:53:26 No one should put too much effort into social media. If you are, you're not living your life correctly. Well, I'm not saying. So I did that. And then I had, you know, lots of responses, lots of responses. But one was like, oh, you used AI to write this. And I'm like, did the watermark on the image give it away? Like, I'm not hiding something here.
Starting point is 00:53:53 And then I literally said, why? Why is that a problem? And the person was like, it's vulgar and disgusting. And I'm like, wow, really, really moral words here to describe an act of communication. Isn't an act of communication literally trying to convey an idea from one person's head to another person's head? But I think for writers, for creative types, it's something ineffable. It's something magical, pure, sacred. Writing is sacred. Yeah.
Starting point is 00:54:28 Yeah. And, you know, Mickey, the thing I wanted to connect it to there is that I see a lot of people that are, you know, very concerned about, you know, AI can do this better than me, or it's going to take these positions. But, like, just my personality or whatever, I don't find it threatening, the thought that, like, AI could do statistics better than me, or know more about psychology than me. And maybe in time, you know, with a lot of effort from the AI, it can write papers better than me. Like, I entirely anticipate that that is what's going to happen. But, like Matt's talking about, I keep seeing that, you know, there are still clear roles for humans in this, kind of like higher-level roles, where we can do more, produce better research, and express things that we want to say more clearly, where before we would have got too tired and just given a one-line, you know, piece of feedback to a colleague or a student or
Starting point is 00:55:25 this kind of thing. But the analogy for me is, I've taken up bouldering, like many middle-aged men, and I climb up walls with my son, right? And I'm getting better at it, but I'm never going to be the best at it in any way, shape, or form. I'm quite clear about that. But, you know, there's progress, and there's also all the things that you're talking about in human motivation, that, like, you know, people don't like falling off, they like finishing, but you have to fall off and fail in order to get better. And it's like these quirks of human psychology
Starting point is 00:56:01 that, you know, when you have a little progression system, you enjoy going up it. But when I see athletes out there in the Olympics, or in the gym, just many other people that are much better than me, younger than me and better, older than me and better, it doesn't then make me go, well, what's the point? Like, what's the point of me doing this? Other people are doing it better. Or if I saw a robot, like, scramble up the wall and,
Starting point is 00:56:27 you know, finish the thing, it wouldn't make me go, well, there's no reason for me to do that. Because like, for me doing it, the joy is, you know, it's a physical activity and all that. But also, it's like there's a challenge and I do it and I get better. So I kind of feel like that same thing applies when it comes to research and writing and so on, if AI becomes better than me. And it already is in so many aspects. I just kind of think, well, you know, I'm not the best at anything in the world
Starting point is 00:57:01 and I never expected to be. So I don't find that threatening. But I do think that that's not the stance most people take. And obviously, it's different when it's a recreational activity versus when it's your job, right? There's a distinction there. But I don't think that's the only thing at play, because, like you said, and we're going to turn to talk about it,
Starting point is 00:57:23 but the moralizing topic is that there's a fundamental kind of barrier: there's a soul to human-produced material, and AI, even if it creates something that's indistinguishable if you don't know the process, lacks that essence. And I view that as, you know, a kind of anthropocentrism that we have, that we are social primates, and we think humans are the most important, you know, special ingredient, because we're humans. And yeah, that's the way that I think about it.
Starting point is 00:58:02 I mean, since when did writing, a relatively new invention, become, like, the sine qua non of humanity? Same thing with art. Art's been around for a lot longer than writing. But I think artists, writers, they attach too much meaning to this. I mean, don't get me wrong,
Starting point is 00:58:22 I love beautiful writing. I love art. I'm glad it's in the world. But, not that I go to clubs or dancing anymore, though I used to when I was younger, if I were at a club
Starting point is 00:58:33 and there was a tune that made me move, I'd move. And if I found out it was created by an AI, it wouldn't matter at all. Like, it makes me move and makes me feel. But I think for some people, they would enjoy the music less as soon as they found out that it wasn't created by a human. And that doesn't make sense to me.
Starting point is 00:58:51 Yeah. Actually, this is a very interesting point. But first, I have to say one thing, because in my last little speech, I definitely gave the booster, you know, rampant AI perspective, and I didn't get to the perilous bit, I guess, right? Which is aligned with the argument you were making in that article. The example I gave there is where you have something that is well within my skill set, something that I've verified and supervised every step along the way,
Starting point is 00:59:20 and I feel pretty comfortable with that. Where, of course, it gets dangerous is when people are operating slightly outside their skill set, the AI is supplementing what they don't fully understand, and therefore they are trusting what's being done. And I've seen examples of stuff that my colleagues have sent me, that they've done with AI help, to do stuff that they weren't super confident with, and there were major problems there that neither they nor the AI figured out. And the other aspect is the developmental aspect, which you already talked about,
Starting point is 00:59:53 which is, it's different when you're a 50-something professor like us, right? For us, learning new things isn't top of the agenda. Well, in Chris's case... I mean, it's great for us, hopefully. Chris is a child.
Starting point is 01:00:06 Yeah, he's a man-child. I mean, but it's okay, maybe, right, if I don't learn new things in coding this paper; I'm more in the producing sort of stage of life. And it's just a very different situation for, I'm imagining, PhD students and early-career researchers, and the calculus can look very different. Now, the final thing I'll say, getting back to your point, which is super interesting, is about the ineffable human magic that goes into art. And you see this in the vastly different reactions amongst tech people, like coders or mathematicians, who have pretty much embraced the contributions of AIs to their work.
Starting point is 01:00:41 It's a very different response amongst the humanities generally, and in particular at the creative end of the spectrum, the artistic end of the spectrum. And one argument you will see regularly on the forums is that art is primarily about communication between the artist, a conscious human being, and the recipient. And actually, I'm a bit of an art geek. I've got wanky art books on my shelves; I go to art galleries. I can attest. Is this not a human being?
Starting point is 01:01:17 I'm a total snob. But, you know, I'm into that shit. And so I've read a bunch of, you know, what-is-art, that kind of stuff, you know. And I thought, actually, you know, I've seen so many quotes from artists saying, don't ask me what I'm trying to say with my art. I don't know. The painting, or whatever, speaks for itself.
Starting point is 01:01:35 Everyone's going to look at it and get something, you know, from it. And then I checked the different sort of, you know, theoretical approaches to art. And the only person I could find who actually said something like that, that it's really about the communication, I think, was Tolstoy, who apparently argued this once. But he's in the massive minority. You know, the broad consensus of the 20th century was a very different sort of concept of art.
Starting point is 01:02:00 But I just thought it was interesting that that argument has been picked up, if not created, because I don't think they got it from Tolstoy. I think it was reinvented precisely to sort of redefine what art really is, in order to rationalize a gut feeling. So, as you can see, I've brought this back from my own little personal speech to the topic of your other paper then, Mickey. Right. Yeah. It's almost as if you've been podcasting for a while and you've got the segues down. Except for the meta-commentary. It worked well. I've increased friction with Matt, so he's improved his skills.
Starting point is 01:02:41 Podcasting. I am the friction. Right. So maybe I'll just jump into this other paper. So I have a paper that right now is just a preprint. It's called The Moralization of Artificial Intelligence, and it was written with a group of students. They deserve all the credit, and I deserve all the blame for mischaracterizing the paper. But the authors are: my PhD student, Victoria Oldemburgo de Mello,
Starting point is 01:03:09 and then Éloïse Côté, Rim Ayad, and then, I believe, Yoel Inbar again, Jason Plaks, and myself. And we just submitted the paper, and I decided, well, as I do for every paper I put out, I'll publicize it on social media. And these days, I'll just have AI write something for me, again, in my voice. I feed it the paper, and I'm not going to spend too much time, like an hour, crafting a tweet thread. Like, I just don't have the time for that, or the interest. So I had AI write it. And then I also used, this is so cool,
Starting point is 01:03:44 If you haven't used it yet, it's called NotebookLM. It's a Google product, built on Gemini. And I think it went viral a bit online about a year ago because you can make podcasts from your paper, with two male, of course, podcast hosts talking to each other about the paper. So it's kind of fun. But it has an infographic button, and I was blown away by how, it's literally the paper, press a button,
Starting point is 01:04:11 and it summarized the paper better than graphic designers that I hire to make my papers come to life. Like, I've done this for other papers and I've always been like, yeah, it's good, not great. This was far, far better, even though it did make a few mistakes. But anyways, I put this out there,
Starting point is 01:04:30 and I should also say that I was being provocative. I was not simply just plainly describing a paper. I was trying to poke, and I was judging the moralizers. But essentially we have this paper with, let's say, three groups of studies. The first group of studies is a linguistic analysis of media headlines from 2018 to 2024, in the English language, from all around the world. And from this, you can get various databases to understand what the headlines are about.
Starting point is 01:05:08 And when you look up AI headlines, we recorded the extent to which they are moralized, using language about prohibition, good-and-bad language, prescriptive language, moral outrage, those kinds of things. And what we find is that, in fact, it's increased quite a bit, and it peaked, the moralization of AI headlines peaked when DALL-E was released. So not ChatGPT, but DALL-E, which is creating art.
Starting point is 01:05:38 So I think quite interesting. ChatGPT was also a peak in moralization, but not quite as high as with DALL-E. So it does seem that people are using moral language in headlines to talk about AI, more than vaccines, which I think many of us see as to some extent moral, either pro or against, by the way. So, like the anti-vax crowd, which I think you guys are quite enamored with, some people moralize not using AI. The AIs are evil, I suppose. And then you have other people who say, no, it's good.
Starting point is 01:06:08 You should use it. It's good for blah, blah, blah. It's more moralized than COVID was during the pandemic. So it's a moralized topic, though not nearly as moralized as abortion. So, you know, just giving the range of what's moralized. And maybe I should say what I mean by moralized. Moralization is when an attitude becomes prescriptive, like you should do something, you should not do something. It's highly emotional, especially when you violate a specific norm.
Starting point is 01:06:35 And a hallmark of moralization is something called consequence insensitivity. So if someone is, let's say, opposed to AI, even if you told them, imagine we mitigated all the risks and we only have benefits, would you still oppose it? That would be consequence insensitivity. And another hallmark of moralization is magnitude insensitivity. So a small amount of it is just as bad as a large amount of it. Okay. So this is what we mean by moralization. Just to put a clarification here, Mickey.
Starting point is 01:07:11 Just a quick one. Yep. So whether a topic is moralized or not, my understanding is it's kind of orthogonal to whether or not the position is right or wrong, right? Like, just because an issue is moralized doesn't mean it's not something you should care about or that it's not important. Correct. Correct. So, I mean, an object that is not moralized would be a snow tire on a car, something you're familiar with, I think, Matt, in Australia. So there could be, you know, good qualities of the snow tire. There could be negative qualities of the snow tire. But no one I know of moralizes snow tires, right? Well, I suppose some people might. You ought to use them because they could save lives.
Starting point is 01:07:55 And if you don't, that would, you know, lead to some deaths. But I've never seen the moralization of snow tires. So you're right. It's orthogonal to whether, you know, it's good or bad. It's just that it becomes, you know, somehow prescriptive, and moral outrage follows when certain rules are violated. All right. So we find evidence that at least in headlines, this is the case.
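The headline analysis Mickey describes, scoring text for prohibition language, prescriptive good-and-bad language, and moral outrage, can be sketched as a simple dictionary-based count. This is a hedged illustration only: the word lists and the scoring rule below are invented for demonstration and are not the paper's actual method.

```python
# Toy sketch of dictionary-based moralization scoring for headlines.
# The lexicons and scoring rule are illustrative assumptions, not the
# method used in the Oldemburgo de Mello et al. preprint.

# Tiny example lexicons for the cues Mickey mentions.
MORAL_LEXICON = {
    "prohibition": {"ban", "forbid", "prohibit", "outlaw"},
    "prescriptive": {"should", "must", "ought", "wrong", "evil"},
    "outrage": {"outrage", "disgust", "shameful", "betrayal"},
}

def moralization_score(headline: str) -> float:
    """Fraction of words in the headline matching any moral-language cue."""
    words = headline.lower().replace(",", "").split()
    all_moral = set().union(*MORAL_LEXICON.values())
    hits = sum(1 for w in words if w in all_moral)
    return hits / len(words) if words else 0.0

headlines = [
    "Governments must ban evil AI art tools now",
    "New chatbot helps customers track online orders",
]
scores = [moralization_score(h) for h in headlines]
print(scores)  # the first, moralized headline scores higher than the second
```

A real analysis would use validated lexicons (moral foundations dictionaries, for instance) and normalize by time period and headline volume, but the basic shape, counting moral-language cues per headline and tracking the rate over time, is the same.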
Starting point is 01:08:15 Then the next bucket of studies was a series of attitude measures we got from people. And we gave people examples of various kinds of AI applications. So a simple chatbot you might have on a store's website, AI used for legal decision making, AI art, and then companion AI. And then in another study we used many, many other kinds of AI applications. And then we ask people, are they opposed to the technology? Are they neutral?
Starting point is 01:08:46 Are they for the technology? And the truth is, most people are positive about all kinds of AI technologies. There's variability. So the one where there's the most pushback is companion AIs, which, as at the top of the show, I think we understand why it's a bit more controversial. The least controversial is a little chatbot as part of a store's website. And then what we followed up with, among those who oppose a technology, is we queried the extent to which their opposition was moral.
Starting point is 01:09:17 And the way we got that was by, you know, we measured the extent to which they're consequence sensitive. We measured the extent to which they are magnitude sensitive. Another question was whether they would be okay with it if, you know, everyone else in their community used it, et cetera. And what we found, which was very interesting, at least for us, is that a majority of opposers on all technologies, except for the chatbot, which was the least controversial, opposed on moral
Starting point is 01:09:49 grounds. They thought it should not be used, and no amount of it is acceptable. I don't care how safe it is. It should not be used. And then we also looked at the reasons people gave for opposing. And what was interesting there is that despite them clearly moralizing the object, moralizing AI, saying it should not be used, when we asked them why, they didn't say stuff like I saw online, like, oh, it's sacred, it's just bad. They would use mundane reasons. Oh, it's going to lead to job losses. Or it's not very good.
Starting point is 01:10:23 Or it's not reliable, or it lies. And this is also another characteristic of a moralized attitude. The gut feeling comes first. And then they populate the gut feeling with rational reasons to justify their gut intuitions. Can I check something, though? How can we tell that the flow of causality isn't the other way? So, in other words, they've come across all of the mundane reasons for disliking AI,
Starting point is 01:10:53 and then those have kind of coalesced into a general thing. Like, I'm trying to compare it to myself, say, on climate change, right? I mean, I think you found that that's a highly moralized issue, right? And I would say, reflecting on myself, yes, I'm prescriptive, and I'm moralizing, and I'm talking in terms of oughts and shoulds and all of that stuff. But at least I believe that it comes through the confluence of all of the, you know, practical reasons. Yeah. Yeah.
Starting point is 01:11:23 So I would say that, probably, I imagine the route by which a lot of people end up moralizing various kinds of things, especially a new technology like AI, might be observing evidence or observing the discourse around something. So I think the discourse on AI, as we highlighted with the media headlines, is generally moral and quite negative. And then you form an attitude that way. And I think that's perfectly rational and perfectly
Starting point is 01:11:51 fine. But here's a great example. You talked about the environment. I do care about the environment. I think it's important. I'm not sure I moralize it, but I might. And one reason that some people say they are opposed to AI
Starting point is 01:12:09 is that they say it leads to environmental degradation. It's an environmental catastrophe. It uses an inordinate amount of water and an inordinate amount of energy and electricity, and it's just a drain. But it turns out this is not true. It turns out that the data on water use, for example... it's not clear to me how it happened.
Starting point is 01:12:30 One explanation is that there was literally an error, that some popularizers of the water usage data point made an error of three orders of magnitude, a massive error in terms of water use. And it turns out it doesn't use that much water at all. I saw someone on social media saying that all the water AI uses in the world is, you know, essentially like three large farms in the U.S. I mean, they use water.
Starting point is 01:12:55 They're water intensive. But we don't look at those big farms and say they're water hogs and therefore we shouldn't be farming. And the same with electricity use. It does use electricity. Absolutely it does. But so does us talking right now on this computer. After this, I'm going to go turn on the television and watch hockey because I'm a good Canadian.
Starting point is 01:13:13 That too will consume wattage. It turns out that my big television behind me uses far more electricity than all the prompting I did today. So now, if I give you that evidence, you should be sensitive to that evidence. Yeah. And you should maybe change your mind, if you believe the evidence, about its environmental, you know, or water impact.
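Mickey's television comparison is easy to check as a back-of-envelope calculation. The figures below are rough, commonly cited ballpark estimates, assumed here for illustration rather than taken from the episode or the paper:

```python
# Rough back-of-envelope energy comparison: a day of chatbot prompting
# versus an evening of television. All figures are assumed estimates
# for illustration, not measured values from the episode.

WH_PER_QUERY = 0.3      # commonly cited rough estimate per chatbot query
QUERIES_PER_DAY = 50    # assumed heavy personal use
TV_WATTS = 100          # typical draw for a large flat-screen TV
TV_HOURS = 2            # roughly one hockey game

prompting_wh = WH_PER_QUERY * QUERIES_PER_DAY   # 15 Wh under these assumptions
tv_wh = TV_WATTS * TV_HOURS                     # 200 Wh

print(f"Prompting: {prompting_wh} Wh, TV: {tv_wh} Wh")
print(round(tv_wh / prompting_wh, 1))  # the TV total is over 10x larger
```

Even if the per-query figure is off by a factor of a few, the ordering of the two totals is hard to flip under these assumptions, which is the point Mickey is making.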
Starting point is 01:13:42 And maybe that should even lead you to be like, oh, maybe AI is not so bad. But if you go from there to, well, okay, the water's not so bad, and the electricity is not so bad, but it's the job displacement. You're like, well,
Starting point is 01:13:53 actually, economists are saying all these new industries are going to emerge and maybe they'll create a whole new set of jobs. What do you think about it then? Like, well,
Starting point is 01:14:03 no, it's, you know, and it keeps going on and on and on. And this is what Jonathan Haidt calls moral dumbfounding. We have this kind of idea, this intuition, and you could come by it honestly, like you did with the environment. But if you are stuck to it, despite what the evidence tells you, and that works both ways,
Starting point is 01:14:22 by the way. So if we're pro, and I would consider myself pro-AI, I think, if I start seeing evidence that, in fact, no, actually, it is really bad for the environment, and in fact, it does lead to all this displacement, and it's really bad that only three big companies own it, and there are all these externalities, right? I should update my own opinions of it too. But it's the insensitivity to evidence that to me is the problem.
Starting point is 01:14:49 And it seems like that's happening with some people's attitudes towards AI. And I don't think that's great for the discourse. I also don't think it's great for us societally, because I think AI could possibly help save lives. It can help, like, identify diseases. It can help make diagnoses. It could help people who are wrongfully in prison, you know, be released. Like, there are all these possibilities. There are also negative possibilities. But if we moralize it as forbidden, as something that can't happen, you can't even have an intellectual discussion about its use. So I think it's a danger that it's moralized.
Starting point is 01:15:30 Mickey, I read this study and enjoyed it. And I have some reviewer-two-type questions about it, but I also have one on that specific thing around, you know, opposition. And obviously, you know, if you look at Bluesky, or you look in academia in general, depending on the discipline, yes, there is a more negative skew, right, to the attitudes. But I did note, when I was looking at your paper, I was trying to see if this was there. And when I looked at the factors, where you look at a whole bunch of things, you know, moral foundations and also social ideology, economic ideology and stuff,
Starting point is 01:16:13 it didn't seem like there was a very strong signal in most of those factors. Like, the two things that you highlighted in the paper, correct me if I'm wrong, are age and familiarity, right? Aside from overall self-reported AI aversion, which is obviously going to be related to it. But is that the case? Because Matt and I were talking about it, and we were saying, you know, I wonder if you broke it down by education. But no, that wouldn't probably work, because people are educated at different times across a whole bunch of different things. But if you broke it down by discipline, and I know this is getting into very small subgroups, there probably would be a clear distinction. But yeah, I was interested because there's a lot of recent discussion that the left in particular has an issue with AI.
Starting point is 01:17:04 This appears, from my social network and usage of Bluesky and Twitter and stuff, to be the case. But your data, which is more representative, doesn't seem to have that strong of a signal from it as a factor, for example. Yeah. I mean, if anything, we find not a very large effect, but we find a small effect such that it's conservatives that seem to moralize it a bit more. But I agree with you, like, in academia, clearly in the humanities, it's verboten. But I think I got a bit of a taste of what's happening. So maybe I'll take a step back a little bit. So I mentioned how I wrote this social media post.
Starting point is 01:17:47 And I mean, I think it went viral for academics. It was viewed nearly 600,000 times. I posted it, and the next morning I woke up like, whoa, people really didn't like that post. Although it wasn't ratioed, a lot more likes than comments, but lots of comments, and the tone of them was angry. Very, very angry and upset.
Starting point is 01:18:12 And in fact, I believe I've now got so many anecdotes for talks on moralization in action, because literally, especially on Twitter, which I thought was dead, but on Twitter I had someone in Spanish saying
Starting point is 01:18:28 people like me should be, you know, put in front of the guillotine. I had someone. That's true. Yeah, yeah, that's the flavor. Mickey, I got to say, I love this. They thought they were flaming you. In the end, you had the last laugh. That was providing grist for your next paper.
Starting point is 01:18:48 I thought it was hilarious. I thought it was so funny that people were. And also, I mean, we'll bring him up again, he was saying, you were clearly trolling. And I'm like, I don't know if I intended to, but clearly, yes, I was, because I wrote it with a social media image. And it was like saying, all you moralizers fuck off, essentially, is what I said.
Starting point is 01:19:07 But the response was... I'm not surprised it provoked a reaction. I did not say fuck off. I just, like, described moral dumbfounding, which no one wants to hear, you know, especially if you're the one opposing, that you're doing this. But I definitely got lots of great stimuli now. But to get to your question, I posted it on a bunch of social media.
Starting point is 01:19:33 So I posted it and got no traction on Bluesky. I thought there I was going to get killed. I thought they were going to kill me there. But I got barely anything. It was Twitter. And then I also posted on, you know, Substack, which has got a bit of a social media thing now. Yes.
Starting point is 01:19:46 But that is actually quite an interesting space, because it's writers and creative types, a lot of academics, a lot of philosophers, at least in my corner of Substack. So pretty intelligent posts, I would say. And the responses from Twitter were quite different than the responses from Substack. The responses from Twitter were way more angry, way more unhinged. And I got a lot
Starting point is 01:20:12 of, I'm not moralizing, you know, here are all the reasons. And on Substack, I got, I am moralizing, and why aren't you? Yeah, you know, a great response. At least an intellectually, internally consistent response is: I moralize murder and I moralize AI, as we should. I mean, I think it's fucked up to compare those two, but okay,
Starting point is 01:20:37 at least you're consistent. Now, I also got a bunch of responses from, I would say, more religious type people. And their opposition was, yeah,
Starting point is 01:20:48 religious in nature. Like, this is not for humans to be doing. It's not our place. So I think that's maybe why, like, I was seeing that from the humanities-type folks, you know, in their responses. But from folks who are more conservative, I think it's, you know, something about humans being special and sacred and created by God, and you're acting like a little God now. So that's just
Starting point is 01:21:13 forbidden in that way. So I think that's why it's bimodal. And it's also worth noting that overall, you know, in the results of the studies that you ran, there were more non-opponents for pretty much all usages. So, like, even though, you know, there are vocal opinions online, and they definitely represent a segment of the population, it did seem like across the board there was overall non-opposition,
Starting point is 01:21:42 if not, you know, positive. Yeah, no, agreed. It's a minority of people that oppose, and of the opposers, about half oppose on moral grounds. So still, you know, I forget now the percentage of the total that are opposing, but I think it's 20 to 25%, depending on the application.
Starting point is 01:22:02 So I think most people are cool with it and maybe don't even think too much about it. But those who do think about it and are opposed are quite angry. I mean, stepping back a little bit from whether AI is good or bad, it seems very understandable to me that there should be strong reactions on both sides, right? On one hand, the positive reaction is understandable. You know, anytime people find something that's going to make our lives a lot easier and provide all of these amenities, there are going to be people that are very keen on the new technology.
Starting point is 01:22:37 But on the other hand, you know, humans are conservative by nature. Anything that is very new is potentially going to provoke strong negative reactions, especially something that is going to have such large ramifications for the economy. The industrial revolution and the internet and the agricultural revolution were, in hindsight, yes, all marvelously great things for the economy in the long run, but in many ways very disruptive and negatively viewed by many people affected at the time, for understandable reasons.
Starting point is 01:23:10 And then you have, I guess, the ego threat, I suppose, you know. We seem to be quite okay with the idea of machines being stronger than us and faster than us, automating things, right? But especially for middle-class, educated, desk-worker-type people, an awful lot of our sense of self-worth is bound up in our cognitive contribution. And so it feels entirely understandable to me that there should be an instinctive dislike of that specialness being taken away from us. Do you have further things that you would add to that list of, you know, reasons you can understand why, psychologically, people might not like it?
Starting point is 01:23:56 Yeah, no, I think I do understand it. I should clarify one thing, though. So in one of our studies, I think it was study 2B or 2C, I forget now, we did include technologies that are new, but not AI. And it's true that people generally oppose new technologies or have some of these issues with them, but there was something special about AI. It wasn't simply that it's new, at least in this one analysis. But the larger point, I think, is well taken.
Starting point is 01:24:23 Yeah, I think it really is threatening, especially for creatives, who I think thought, and still think, that what they do is uniquely human, and not just uniquely human, but unique to them specifically. And now, with just a little bit of prompting, I can get a machine to sound like me. So, yeah, I could definitely see, I could definitely understand it.
Starting point is 01:24:44 I could definitely understand it. Why it's threatening, but I, I think we have to separate like this guild mentality from like what's good for the individual producer might not be good for society. Yeah. And we have to separate those things. So should I oppose AI? Because it might mean that there are fewer psychology professors. No, I don't need to protect my job. I mean, yes, I want to because I don't feed my family. But like as a profession, like if an AI can do what I'm doing, then it should do it. Because I think what I do, I do. Because I think what I do, I'm not just doing it for me. I'm doing it for society too. And again, if AI could do it better, then it should, and I should be a plumber. Well, don't be so pessimistic, Mickey. We'll just do more psychology. Well, I agree. We've talked about this, but like, I think people underestimate the appetite for accountants and psychology studies. And there's an endless appetite humans have
Starting point is 01:25:44 for producing and consuming, you know, material. Gurus, for example, are very human, but they're siphoning off a lot of attention and time across the world, right? And I think humans in general are not super well optimized in how we devote our time, you know; we're not doing everything in human interaction in a well-managed and productive way either. But I did want to add in, Matt, you did a good job there, adding in a point of friction. And one that we haven't addressed, but I think a lot of people would raise, is the companies and the individuals who are leading AI, often when you see them talk or when you get behind-the-scenes information about the various power plays that are going on. I mean, the most obvious example is what Elon Musk has done repeatedly to Grok,
Starting point is 01:26:45 right? Like, made it MechaHitler temporarily, and then also made it say that he is the best at everything due to, you know, the prompt engineering or whatever. But in that case, there is an issue that even if the technology is as good as we think it is, right, and has the potential to be as useful for humans as, you know, optimists might agree, aren't there very real and very moral issues about the various tech leaders and companies and how they are choosing to collaborate with the Trump White House, for example, right? You know, recently you saw Anthropic getting into issues.
Starting point is 01:27:25 So could that be, you know, a partial point, that the moralization is there because AI companies are also involved with moral issues like autonomous drones or collaboration with totalitarian regimes or all this kind of thing? Like, could it be that it isn't just moral dumbfounding? It actually just is a moral issue. So first, I think, yes, I agree. There's some, you know, malfeasance with these companies. But, and I could be wrong, I think all of us use Google.
Starting point is 01:27:57 Google has Gemini. Not PewDiePy, but yes. And I do not think that anyone's raised the morality of Google search. But it's the same company. They have, but not the majority. To me, that's just nitpicking. But that being said, I would be very much in favor of, let's say, a network of universities putting their resources together and coming up with an open source, transparent and not-for-profit
Starting point is 01:28:28 AI that we could all use and rely on. The problem is, I think, it will never be as good as the products created by these big tech firms, because they have way more money. But I think it would be better for those of us who are studying it to use a technology that doesn't have this profit motive. Like, you know, apparently OpenAI is going to have ads coming soon. I mean, that will kill, I think, the companion business, because if your companion is suddenly, hey, have you tried this product? The illusion will go away. It doesn't stop people getting very parasocially attached to Huberman and Lex. So you might be surprised. Oh, that's great. Yeah, AG1.
Starting point is 01:29:08 You're feeling depressed? Try this shampoo. Right. So, yeah, I mean, I wonder if AI at some point is going to be just a public utility. Yeah. I think, I mean, if I could get into the, you know, we're getting into technological opinionating here,
Starting point is 01:29:25 but why not? We're allowed to, right? It's our podcast. I reckon there are positive signs of something like that, of it turning into a utility, and of there being very reasonable and good open-source, not-for-profit options available.
Starting point is 01:29:41 And that's because, you know, at least I feel like I'm seeing a lot of convergence, not only amongst the top-tier frontier models, but also amongst other open-source labs, some of them in China, which has its own problems, of course, and even, like, local models, much smaller ones. But you're seeing a general pattern of convergence, in that no one company has a secret sauce that is not more broadly known. So I'm kind of a little bit optimistic about that. I don't see a select few tech overlords necessarily owning it all.
Starting point is 01:30:18 Yeah. And, Mickey, before we finish with the paper, I also have just another reviewer-two point, if I may. Good. I noticed in study one, where you're looking at the moralization of headlines, that at the start point of the graph, the moralization is pretty close to what it is now.
Starting point is 01:30:45 Like, it's slightly higher, but it goes down right around 2019. And then there are these parts that you point out where DALL-E launches, and ChatGPT, and it starts to go back up. But is that an issue, that it seems it was, at least in the beginning of the data set, also highly moralized? So is there a chance that the pattern is fluctuating, but AI is in the category of moralized topics in general? Yeah. So I mean, I also noticed that.
Starting point is 01:31:19 I think the issue pre-2020, essentially, is there just weren't that many headlines. So the data are quite noisy. Ideally, we'd have some good error bars or confidence intervals to show you how noisy it is. And then I think starting in 2020, we have way more headlines. But you're right that, regardless of how noisy it is, the centroid
Starting point is 01:31:40 is still pretty high. Because each dot, I believe, is a headline. Or maybe it's a group of headlines. I forget now. But you're right. It's not clear.
Starting point is 01:31:50 But that's also why, actually, the timeline was a little bit less interesting to me than the comparison across different technologies.
Starting point is 01:31:58 And that's why we needed the various benchmarks, because everything, to some extent, is going to have some of this moral language. It's more about the comparison. And I,
Starting point is 01:32:05 I guess we could see what will happen in 10, 20 years, whether it'll continue or not. Yeah, yeah. If we'll still be around. Yeah, we will be. AI is going to, you know, fix the health thing. Well, actually, we heard from a guru yesterday, Mickey, that there's this thing called Integral Theory, from Ken Wilber. He's a guru from the 70s who still exists.
Starting point is 01:32:24 And he was saying the AI companies are all working with him to integrate his theories. So, you know, we might be heading for the integral Age of Aquarius from the AI companies, if they continue to work with the gurus. You know, like, this is the, yeah. I'm looking forward to it. Yeah. And one of the things I'll just mention in closing is that we do see AI being badly abused and misused by the gurus, like Eric Weinstein having,
Starting point is 01:32:57 you know, chats with Grok where he's using it to prove that all of his ideas are correct. Or you see on Twitter people generating all these terrible images of political opponents, Trump putting out the video, right, of them dropping shit on protesters and so on. So it's not like we're missing that the technology is used for negative ends as well. Of course. Yeah. My thing is, it's technology. It could be used for good.
Starting point is 01:33:30 It could be used for bad. But the technology itself is neutral, I think. But for some people, it's not neutral. It's vulgar. I love that. Because I'm like, what makes it vulgar? Like, I don't get it, but I think it's a gut feeling: it's disgusting. Filthy clanker.
Starting point is 01:33:49 Goddamn clanker. Well, yeah. Well, on one hand, thou shalt not create images in the shape of God or whatever. But I'm also very sympathetic, because there's the perspective of people like ourselves who are using it day to day to achieve things that we think are very useful and very important and very good. It's a very different perspective if you're not doing that, in which case your principal exposure to it is going to be through slop. Yes, and in the discourse, yeah, slop of various kinds.
Starting point is 01:34:17 And you really are only going to be seeing the negative things, which are real as well. So, you know, we can be sympathetic to all points of view, and nobody has to write a big thread on Reddit about this episode. Yeah, that's definitely a good result. Yeah, you don't need to. We're very sympathetic and balanced. That's the point to take away from this. My students, I told them I was coming on the podcast to talk about this.
Starting point is 01:34:47 They're like, okay, Mickey, but just like, just describe the findings. Don't judge the people who are doing the action. You did a great job, though, Mickey. Yeah. Yeah. I didn't detect any note of your personal opinion. Yeah. As always, it's been a pleasure.
Starting point is 01:35:09 We appreciate you taking the time. And your students, the lead authors on the paper, your collaborators, and yourself, the research papers that are coming out about this topic are very interesting in and of themselves, regardless of what your position is. So, like, if you think us three guys are completely off our rockers and failing to represent the arguments well, you can take Mickey's empirical articles and use them, find the pieces of information that you can use to attack his point of view or ours. So, yeah, they're available for everyone.
Starting point is 01:35:44 Like you said, they're neutral. The information is there, and people can craft it into their own narratives as they see fit. That's right. Some people think they ought to be moralizing. So this is just describing things they think should be happening. Yeah, we all moralize about some things.
Starting point is 01:36:02 I'll moralize a little bit. I moralize moralizers. Those fuckers. All right, well, Mickey, thanks so much for spending a couple of hours with us.
Starting point is 01:36:14 It's great to catch up again. Yeah, always a pleasure. And it's also great to, like, see the cyborg that you're becoming. Oh yeah, more machine than man at this point. You know, the last piece of AI news I'll tell the listeners, and nobody's ever commented on this,
Starting point is 01:36:37 Mickey, is the software that I use to edit the podcast, which is me, okay, I'm the editor, the audio editor. It now has an AI function where it can regenerate audio when you make an error, you know, like you say a word wrong and you want to reset it. Yeah, so I don't need to get Matt, you know, to come and re-record it. I just regenerate it. And nobody has ever noticed when I've used it. Oh, amazing. So it's already in the podcast. I'm sorry, you're losing subscribers right now.
Starting point is 01:37:07 They saw not come. Well, they've had to listen for an hour and 40 minutes or so. It's probably only the diehards left. But yeah, thanks, Mickey. It was a pleasure, as always. You're very welcome, and I hope I get invited back. You will. See you, Matt.
Starting point is 01:37:24 Bye.
