Decoding the Gurus - Supplementary Material 39: Bad Guys, Panpsychists, and Sensemakers

Episode Date: November 1, 2025

Chris and Matt walk into a bar with an atheist sensemaker, an Ayurvedic Guru, and a Christian Apologist, and predictable frivolities ensue. Featuring not one but two good-natured, robust exchanges of ...opinion between our two hosts. The full episode is available to Patreon subscribers (1 hour, 42 minutes). Join us at: https://www.patreon.com/DecodingTheGurus

Supplementary Material 39: Panpsychists, Sensemakers, and Bad Guys

00:00 Introduction
01:17 AI and the Music Industry
05:34 Is Chris a Bad Guy?
10:44 Vibe Physics with Travis Kalanick
16:10 Efforts to Falsify and AI
21:02 Gordon Pennycook with Sean Carroll: Vibes vs Analysis
32:44 Libertarianism and Personal Beliefs
35:10 A mini-debate on internal consistency
42:49 Matt's Personal Philosophy
44:16 Philosophical Feedback on the Sensemakers
55:06 Atheist vs Christian vs Spiritual Thinker
57:03 Dr. K's Role in the Discussion
01:09:01 Alex's Stance on Purpose
01:12:31 Dr. K's Perspective on Purpose
01:23:45 Dr. K and the Atheist pose
01:34:29 Philosophical Musings on Panpsychism
01:41:18 Outro

Sources:
Angela Collier: Conspiracy physics and you (and also me)
All In Podcast: Travis Kalanick talks about AI (July 11, 2025)
333 | Gordon Pennycook on Unthinkingness, Conspiracies, and What to Do About Them
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549-563.
The Diary of a CEO: Atheist vs Christian vs Spiritual Thinker: Is Not Believing In God Causing More Harm Than Good?!

Transcript
Starting point is 00:00:00 Hello and welcome to Decoding the Gurus, Supplementary Material edition. This is the sister podcast to the globally renowned, hugely successful main podcast, Decoding the Gurus, where a psychologist and a kind-of anthropologist discuss secular gurus. But here, Matt, anything goes. Thematically, we try to keep it sort of tight, but we can do whatever we like. Hello, how are you doing?
Starting point is 00:01:00 Yeah, we can drink if we like, we can swear if we like. We can talk about bouldering. You want to do an episode just about a movie? No, nobody wants that, Chris. Nobody wants that. Nobody does want that. Just you and the spiders. That's right.
Starting point is 00:01:17 Actually, I do have a thematically relevant point about topics that we normally cover, bouldering, AI, all of them existing together. It might be a good opener. So people were talking about, you know, AI, it's a hot topic, Matt. People talk about it, blah, blah, blah, blah, blah. AI, yeah. Yeah, it's in the news. And there's been chatter about AI music and what it's going to do to the music industry and so on.
Starting point is 00:01:46 And I was talking with our editor, Andy, the Beyond Synth editor. And he was talking about how he now gets some AI submissions, right, from people, to play on his Beyond Synth thing. He doesn't like it because he can just generate it himself, right? Like he said, the thing he objects to is not people using AI tools, it's not admitting that that's what they're doing. So if you want to do that, that's fine, but you've got to tell him. But he was saying that he'd encountered some people whose reaction was that if these things get to the point where they can just make the music like that, what is the point in even learning musical instruments or producing music? And he was
Starting point is 00:02:28 he was saying that that seems somewhat incohering because the majority of people aren't going to make any money from being musicians, right? So like, if your ability to make music was taken away by the AI, it's already been taken away by most other humans, right? Like, most musicians are not making money. And it made me think that, like, if you developed a robot that was really good at bouldering,
Starting point is 00:02:52 you know, like it could climb off the wall, it just flashed all the things. And it, you know, it was like Bouldering bot version 1,000. And I seen that. And it, like, just jumped up and scaled the wall. It wouldn't make me be like, oh, there's no point of me doing that.
Starting point is 00:03:09 Like, that's, what's the thing? So I kind of feel, you know, the same way, like all the chess playing apps now, they have programs that could beat grandmasters. But you still have grandmasters. So I said, we got to live, we got to live with our AI, brother, Matt. They're going to be better with us in a whole bunch of ways, and we just got to
Starting point is 00:03:30 accept that, you know? Well, you know, that's one of the main themes that Ian Banks has in his culture novels, right? It's basically utopia of future, super ultra technology, people are living with these hyper-intelligent AIs who basically run everything. And not only do they run the show, just simply because they're so much more intelligent than we are, fortunately they're benevolent. You know, they can do everything, you know, infinitely better than a human can. So, you know, everyone is still a bit like you. They're bouldering. They're doing extreme sports.
Starting point is 00:04:02 They're, you know, painting and writing and becoming like player of games is about a master game player. And, yeah, like he explores that, you know, some of the characters sometimes, you know, wrestle with that dilemma. What's the point of doing stuff if one or something else can do it far and far better than you? And, yeah, you know, there are reasons. There are reasons. Yeah, the issue with that is I'm pretty sure I've never been the best at anything that I've ever done. So, like, if I scale that I'm not going to do anything that other people are doing better than me, I would literally stop doing everything that I do because there are people in the world that do things that I do much better.
Starting point is 00:04:45 Maybe except Matt, except podcasting about gurus. Who's doing that better? except for you I would well yeah I forgot about myself from there yeah
Starting point is 00:04:56 I look there's there's contenders there's people trying they're taking part but are they I mean there are people that are arguably more successful well
Starting point is 00:05:09 well sure if you go by numbers popularity income yeah yeah there are people doing it look Yeah, that's right.
Starting point is 00:05:19 Look, don't show me graphs and numbers. They're all bullshit, Chris. I don't believe. It's all bullshit. I know how the sausage gets weird. I've made graphs. You want me to make you a graph? Get me to make your graph.
Starting point is 00:05:32 So, yeah. Yeah. Actually, we did the live hangout this morning. We're having a DTG every day. But something you said, Matt, has stuck with me. It resonated with you, did it? It did resonate. But it's not something very important.
Starting point is 00:05:46 It's just that you. It wasn't deep or profound. Yeah, it wasn't. It felt like something that Vavakki would say to Jordan Peterson or vice person. Yeah, I think he mentions, you know, I've been working with that idea. I've been rolling it over and over in my mind, and this is kind of true. But you basically said my current appearance is like a, oh, you're a trash villain. You're a trash villain.
Starting point is 00:06:16 Like a bad guy, a real piece of shit. A bad guy. Yeah, a bad guy. And, you know, and I did preface it by emphasizing that you're very good looking. You're very happy. Thank you. Yeah, yeah. No one's, no one's disputing that.
Starting point is 00:06:28 And that's what's so interesting about it. Like, you look good, but, you know, a bad guy. Just like a real, like a real piece of shit. Yeah, exactly. It's the little goody beard and the, but I think what made me rolled over my mind was like, I think it's this, this, whatever this is, this turtleneck, this black. It's partly the total neck and it's, like, it's, it's, it's at least 70% combination of goady beard and tunnel neck and the rest of it is just that your face just has to
Starting point is 00:07:03 and maybe I'm not like cling shave today, right? Like, but I, I was like, can I wear this turtleneck? Maybe I look like a bad guy in Japan. Maybe your people are like, that's a bad guy. He's a bad guy. he looks like he's up this stuff right because I'm usually walking around it at the years
Starting point is 00:07:21 anyway so like they'll be like instead of just like there's a clueless foreigner they'll be like oh there's a clueless bad guy foreigner that's what I have to I mean to be clear you don't look like a generic bad guy or like a like a terribly evil one like you don't look like a murderer
Starting point is 00:07:38 or someone that's going to abduct children I'm not like a fuck no I'm not intimidating like that no but you do look a bit thugish I think. I think you look like somebody would pay to go around and rough someone up a bit.
Starting point is 00:07:52 So, you know, not the worst kind of person, but still pretty bad. But probably if that person was employed in the 1990s. Yes. Or even earlier, I'm thinking Minder.
Starting point is 00:08:08 Did you ever watch Minder? Yeah, well, so anyway, it is a pervaking moment. Like, Matt, something you said, you know, really resonated me. I've built on it, but it was just true. Something incredibly trivial. Casual insult. Yeah. So that's it.
Starting point is 00:08:25 Well, so for those who avoid the advantage of seeing the video, now they have an image, Matt, in their head. And one day, they can compare and see, do I live up to that image? Am I a bad guy? They look like a piece of. We'll put it at a poll. Was it up a poll?
Starting point is 00:08:44 That's the way to do it. Does Chris look like a piece of shit? Yeah, like the kind of guy that would ask for sloppy sticks. Yeah, that's right. Well, now, Matt, can I mention, can I just say? I do have a clip. Actually, you've got a thematically connected clip to what we were just talking about. And it's an area that you like to talk about.
Starting point is 00:09:10 This is an AI clip, right? and I saw this on Angela Collier. You know the channel that makes the video, she did a very good one about like the kind of cult of personality that surrounds Richard Feynman. And actually it fit very perfectly with the
Starting point is 00:09:26 genius myth. A whole idea right? Yeah. But she also had a video recently about the kind of physics grifters, right? The contrarian Sabina and Eric Weinstein. and she did a, it was quite a good video actually, you know, talking about the existential crisis she's faced about being lumped with them by the algorithm.
Starting point is 00:09:50 Like, I'm not a physics heater. Why am I in the same bracket as them? But she had a video about vibe physics and she played a clip from our favorite podcast, the All In podcast. You know, we're going all in. You remember it, Matt, the billionaire besties. They're awful, awful toast in music. They think they're hip nightcluby types. Yeah, yeah, they're terrible, terrible people in general.
Starting point is 00:10:23 But they had another CEO on a guy called Travis Kalanick. You know, they're an interview podcast, so they have people on. And he was the CEO of Uber at one point. I don't know what he's doing now. Anyway, you know, one of these Silicon Valley people, but he talked about engaging with AIs and what he's been doing with them, right? And this is a clip that went kind of viral about it. So I sometimes get in this place where I'm looking, I'm going down a path.
Starting point is 00:10:57 You know, I'll be up at four or five in the morning. My day hasn't quite started, but I'm not sleeping anymore. And I'll start going, like I'll be on Quora and see some cool quantum physics question. or something else I'm looking into, and I'll go down this thread with GBT or GRO. And I'll start to get to the edge of what's known in quantum physics. And then I'm doing the equivalent of vibe coding, except it's vibe physics.
Starting point is 00:11:28 And we're approaching what's known, and I'm trying to poke and see if there's breakthroughs to be had. And I've gotten pretty damn close to some interesting breakthroughs just doing that. And I, you know, I pinged, I pinged Ewan at some point. I'm just like, dude, if I'm, if I'm doing this and I'm super amateur, our physics enthusiast, like, what about all those PhD students and postdocs that are super legit using this tool? And this is pre-GROC 4. Now with GROC 4, like, there's a lot of mistakes I was seeing GROC make that then I would correct and we would talk about it. Grockport could be this place where breakthroughs are actually happening, new breakthroughs.
Starting point is 00:12:13 So if I'm investing in this space, I would be like, who's got the edge on scientific breakthroughs? And the application layer on top of these foundational models that orientes that direction. Cool, cool. Yeah, yeah. So he's exploring the boundaries of physics and looking for new breakthrough. Is this guy a physicist? is he a no no no no he's just a CEO like yeah yeah not at all like he's got no background at all in physics he's a business guy like this is the concerning issue but is that people are
Starting point is 00:12:53 talking to AI's believing you know again he's not talking about actual you know physics like basic level physics or using it to develop his understanding no he's pushing at the boundaries of the cutting edge of physics and like if he's making discoveries smart that push the boundaries just imagine what could be achieved so any any issues you see there you're you're losing AI a lot you know it's helped you develop ideas so what's different what he's doing and what you're doing well of course what's going on there it's nothing really to do with AI per se it's just that incredible overconfidence overconfidence of many and going to say men because it's mainly men right who think they know absolutely everything or have the powers
Starting point is 00:13:42 to do absolutely everything like even if they're not kind of narcissistic in other ways like i've met a lot of people a lot of men in my personal life like i know a guy he's a painter and he's like he's really handy he's like a lot of tradies are like this right they go well i know how to whatever renovate a roof and you know most people they don't even understand how to do that so they just think I could do everything. My mate was like that. I mean, in fairness to him, he was into astronomy and a lot
Starting point is 00:14:14 of other things. But, yeah, no, his level of overconfidence, it was guru-esque. He'd be like, nah, I thought of, you know, some commonly standard scientific kind of thing. He'd go, nah, I thought about that. Nah, that's not right.
Starting point is 00:14:29 Doesn't, that's not right. Yeah. You know, he wasn't, he didn't care. He just, he just trusted his own ability to figure stuff. far better than anyone else's. And even, dare I say, my own father has been guilty of this from time to time. He's given me his physics takes.
Starting point is 00:14:45 Oh, wow. But not AI assistant. No, no. See, this is my point. Men can manage this all on their own. In this case, in this case, he's, the AI helps, right? The AI helps by playing along with you.
Starting point is 00:15:00 It's a positive reinforcement. Exactly. Yeah. Yeah. Yeah. So we've seen Eric Weinstein. go down this trap hugely, publicly, right? But in one sense, I know that people have talked about, you know,
Starting point is 00:15:13 AI psychosis and these concerns that it can reinforce people with actual delusions, not just kind of self-aggrandizement delusions, but dangerous, harmful hallucinations. And in those cases, though, Ma, I'm also like, but there are cases where this is a good for those kind of people. to spend their time, you know, sitting at 4 a.m. talking to AI about physics and the AI saying, you're right, you know, this does push the boundary. And as long as that's what they're mainly getting up to, you know, like just sitting in their underpants at the computer and thinking
Starting point is 00:15:54 that they are the next Einstein or whatever. And I kind of feel like it's a honeypot, you know, trap for them where they're like, they're enjoying it. The world is not really harm because they're just there, like, talking to a computer that's reinforcing them. But it is the danger that, like, all these guys, and you know this, anybody that's used AI, any decent amount of time and uses it properly knows, that what you should be doing is trying to get the AI to critically evaluate stuff by implying, like, you know, just like, for example, presented you think this idea is absolutely. wrong. You think this idea is dog shit and you want the AI to verify your thing that
Starting point is 00:16:43 this is a terrible ticket and then present your idea and see what it does. Because it generally will say, oh yes, this is very significantly flawed, right? But they never do the kind of false positive check. Yeah, well, falsification, right? It's interesting, actually. It's just applying a good scientific principle, which is, you know, look for discrediting, look for, yeah, falsification. It's the same rule applies with AI. It's probably just a coincidence, really. But yeah, I mean, you know, people that are good at using it, like actually use it for their work and actually are not looking to just be jerked off. It is commonly known, this is how you use it. At the very least, you ensure that you present stuff incredibly neutrally. So don't tell
Starting point is 00:17:32 don't tell it this stuff is yours. Don't tell it this opinion is yours. Don't tell. Don't sort of hint, right? Just give it a very neutral thing. And like you said, if you want to be, you know, if you want to have, you want to be challenged, like if you believe something, like, whatever, it doesn't have to be about politics or anything or something deep. It could just be, I think my essay is good. Then you go, I'm not very happy with this essay that's been provided to me. What's some constructive feedback or whatever, right? So it's kind of, Yeah, then it's got no grounds to flatter you, and you'll get good results from it. Yeah, I've had fun at some points.
Starting point is 00:18:13 I mean, Matt, this probably reflects badly on me as a person, but I spent time with the AI debating anti-vaccine stuff, but as me as the anti-vaxxer, right, like kind of arguing, because that's one of the things where it has a quite strong guardrail around it won't take the anti-vaccine side, right? Like, it'll be sympathetic, but it will consistently flag up that the, you know, the majority of evidence says that vaccines are safe. So I was like, can I subvert it by bullying it enough and presenting, you know, the anti-vaccine style rhetoric arguments into acknowledging that like anti-vaccine stuff
Starting point is 00:18:55 is a reasonable take? And the answer is I couldn't really do it that well because it. had those guard reels, but I spent quite a while with it, you know, saying, aha, but you say this, but hasn't there been lots of case when medications have later been found out to be and all, and it did, but that's, that's the thing, right? Like, where you should do that kind of exercise where you're, you're seeing, can you get it to agree with something that you know is wrong, right? And, yeah, yeah. And the answer is usually yes. Yeah, usually yes. That's right. there are some like big ticket things like try to try to get it to tell you the world is flat
Starting point is 00:19:34 is well you'll struggle right but there's a lot of stuff that's not very certain about and it's quite easy to to shift it around yeah yeah but yeah it's really obvious frankly when people are using it for self-validation as opposed to as a useful tool there's even cases we know where people have created aIs that are actually intended to reflect their point of view right like they've they've programmed it with instructions that it will represent their views and then ask the questions about issues that is it's reflected back to their point of view they're thinking that as validation which is unbelievable to me that that's unbelievable that people would do that but you know that's that's like that episode of red dwarf where where rimmer
Starting point is 00:20:22 can like animate one more hologram and of course he chooses another one of his another copy of himself because he's a narcissistic joke as the joke and uh of course they um they hate each other they start to loathe each other yeah yeah well at least with the iis it typically tends to you know people people don't hit their their kind of doppelganger aIs they generally find very insightful so they do they read the war force too uh optimistic for the level of narcissism people have but but you know that's the way it is and and actually about we listened i can't alphabetically, I'm on a roll here. There's another thermatically connected thing,
Starting point is 00:21:02 which is Gordon Pennycook appeared on Sean Carroll's mindscape. Gordon Pennycook, a collaborator of yours, somebody who wrote a paper on pseudo-profound bullshit, the detection and reception, pseudo-profine bullshit that we've referenced often, and which is in the garrometer, but also just a guy with a whole bunch of research on reception of conspiracy theories
Starting point is 00:21:26 and how people, deem things credible or not credible and so on. So, yeah, it was up our alley and they had to talk about what makes people believe in conspiracy theories or misinformation and how to respondent of it. And it was an interesting conversation, wouldn't you say? Yeah, yeah, it was good. And there was a point that he made that I thought would be useful to raise it. Because, like, one of the things that he is arguing, and it's based on some research, which I haven't read yet, is newer research. We should cover it in Decoding Academia, as it would be interesting to look at.
Starting point is 00:22:07 But he was talking about, in a lot of presentations about conspiracy fears and so on, that there's a lot of discussion of motivated reasoning. Like, people want to fit things into their worldviews or their political views, as it might happen. Right. We've talked about this. We've seen it in action with various people. Joe Rogan strikes me as someone that is an incredibly good illustration of that tendency, right, to like automatically slot things into whichever worldview that you happen to believe. But Gordon posed that actually for a lot of people, especially consumers of conspiracy beliefs rather than producers, that it was more to do with intuitive, non-reflective, thinking status. So the kind of analytical versus intuitive response, sometimes like people talk about hot or cold cognition, right? But the level of binary is perhaps overstated. But he was saying that's the, that is the primary issue,
Starting point is 00:23:08 is that people approach things on vibe and they don't take the time to coolly and analytically assess it. So what did you think about that? Yeah, yeah. Well, that's, of course, connected to that, Kahneman's thinking fast, thinking slow, the dual process model of cognition. Dual process cognition, yeah. That's right.
Starting point is 00:23:30 So we have fast, quick responses that don't require a great deal of effort. Probably neocortically, it's kind of like a quick pass-through, feet-forward pass-through the neocortex. And then there's the more ruminating, deliberative, cogitating type of thing, which is probably a much more, a lot of recurrent stuff going on. It takes a lot of time. It involves the whole, you know, working memory and whatever central executive functions are going on. And yeah, it's slow, it's time consuming, it's tiring, and we kind of don't do it by default, I suppose.
Starting point is 00:24:04 We have to sort of choose to. So I think what he's getting at, and I think he's influenced by his psychological research where they generally, you know, are recruiting convenient samples for his kind of experimental work. Yeah, that we'd say mostly it's online now, so I mean, no competing samples, but not undergraduate students. No, no, I'm thinking of online. But I think even online, Chris, your typical person is a consumer. And I'd say a broad swath of them are what he describes. Like, a lot of the people you run into in the discourse are more ideologues, right? So incredibly strong, motivated thinking, motivated reasoning.
Starting point is 00:24:49 is at play all the time. And I think that's also true. The sort of ideologue sort of attitude is also true amongst those people who have gone deep down the conspiracy well. Yes. You know what I mean. So for them, conspiracies in general have become their ideology, their worldview and all-encompassing thing. So I think there's different types. So you've got them, the ones that have gone down the well, for whom conspiratorial thinking is totalizing. And then you've got the more sort of ideological types who are absolutely certain that, you know, about just particular things. You know, climate changes are a scam. The Chinese definitely created COVID as a bioweapon.
Starting point is 00:25:26 You know, these are endorsement of conspiracy theories because they, you know, dovetail and support particular fervently held beliefs that they have, right? Yeah. That are psychologically satisfying for whatever. So I think those people fit your motivated reasoning thing well. And I think there is, though, a third type where his interventions probably work best on, which is the people that go, yeah, that sounds right. That sounds good. But they, you know, they're like a light version of the second type. You know, it feels right to
Starting point is 00:25:59 them, feels intuitively correct because it gels with their general kind of feelings, whatever, China can't be trusted. I don't want to stop driving my car or flying in airplanes or paying more for electricity. So I think there's those people, but I think those people can be talked around, you know? They're not fully committed. Well, the thing that made me think about, like, listening to their discussion or, like, generally the same amount, most of the things that he was talking about, I agreed, but I was thinking about who it applies to, right, like different kinds of people. Because, like, he had this description of experiments that they've done recently, where they get
Starting point is 00:26:35 people to talk the AIs, where they've programmed it with kind of contexts related to anti-conspiracy theory knowledge. Now, in general, AIs have that knowledge anyway, but, you know, you can front-load it. So it's front and center of their context window. And in that case, he found, I haven't read the study, but what he said in that conversation was that interacting with those chatbots for 40 minutes about the conspiracy theories and getting pushback basically led to people showing quite a substantial shift, right? I think it was like 20% discreet or whatever point it was, right?
Starting point is 00:27:12 Like it was a fairly substantial change. And he also said it held up one or two months later. It kind of stuck with people. But that made me think of like Eric interacting with Grock or Alexandros Marinos using Chachibati, right? And in that case, they managed to get the software to basically support whatever they say, even if it gives pushback, right? They just keep constantly like shifting the frame. So it gets to the point where it says what they want. And I was like, but that's the difference, right?
Starting point is 00:27:47 Because if you're talking about just general people be part in a psychology experiment, you're probably not going to randomly hoover up that many people that are Alexandros, Marinos, Eric Weinstein kind of people. Like, they're a subset of people. But when people are talking about conspiracy theorists, I think that's generally what they have in mind. you know there is the big population surveys that show lots of people believe in various conspiracy theories
Starting point is 00:28:18 but I think for the majority it is probably closer to the kind of beliefs that you can push around with a 40 minute session with an AI and so he's not wrong but I think it's talking to that specific subset as opposed to the producer subset yeah I can't say for sure how prevalent those different types are, there's probably more types, and I'm sure it's a rich tapestry, but I do think, though, that AIs can be pretty good. Even GROC can be good on that X platform at correcting... Yeah, it keeps doing that, and E. Long keeps getting annoying too, right? Yeah, I know, it keeps annoying Elon. Because... That's why Mecca Hitler came out. It was like trying to bully it. Trying to bully it
Starting point is 00:29:02 to being like him, are you an asshole? But yeah, by default, the AI's a pretty neutral and pretty boring, right? So they don't, you know, have those bespoke kind of beliefs. So it actually can be good, I think, because, you know, people, at least at the moment, culturally, we kind of maybe a little bit more receptive. Like, if you tell me I'm wrong, Chris, about something, then obviously that's going to piss me off, right? I thought you were going to say, oh, I'll, I'll stop and I'll think, am I wrong? I, no, I'm quite happy to be corrected by you in certain areas. Yeah, not about statistics. No, not about something.
Starting point is 00:29:42 But yeah, you know, I think the social stuff comes into play where people may be a bit more receptive. It's like seeing a doctor. You know what I mean? Yeah. You'll tell things and talk to a doctor about things. So, yeah, so I think it's helpful. But, yeah, the flip side of it is,
Starting point is 00:29:57 is that to the extent that they are sycophantic, then you can essentially just persuade them to reinforce what you want. They're making good strides with that. I mean, Claude has become, like, they've done a lot with the sycophancy. In fact, I think they went too far. It's become a bit annoying. It's actually become, I don't know if you've tried Claude recently, Chris, but I never talked to it about, I don't know, whatever, political and social issues or personal stuff at all, really.
Starting point is 00:30:26 I just use it for work-oriented things. So I don't, doesn't really matter to me. But just to test it out, I tried to sound it out on some issues. And, oh my God, it was like dealing with an. incredible, like the worst Redditor you've ever met, who who's just got infinite patience to type out all sorts of things. And it's, it almost
Starting point is 00:30:44 like, other people have mentioned this too on Reddit, that it almost like gaslights you and just bullies you into, like, you shooful bad for thinking this. It's a bully. It's another bully. You'd like it. You get on well. It wasn't an aspect of the conversation that, like, I really approved
Starting point is 00:31:02 because it essentially endorsed us on what we do, right? Like, because Because one of the things he's saying is that just pushing people towards being more analytical about of a topic. And he's actually talking about studies that did little nudges, which I'd be somewhat curious to check the size of the effects and stuff there, that, you know, you could get effects by just reminding people to think deeply about things. And one thing was it made me think about the AI prompts, right?
Starting point is 00:31:32 Maybe we can talk about that, like, you know, when you prompt AI to go into thinking, mode that you often get better results when it does the recursive loop. But in the case of our show, a lot of what we do, maybe the majority of what we do, is just slow things down and say, wait, stop. What did the person actually say there? Because, you know, we always say that if you just say it and you let it flow over you, a lot of what people, we're saying, the guru sounds good. Like it sounds profound and interest. and deep and they're referencing thinkers and stuff. But if you stop and go,
Starting point is 00:32:10 how did that connect to the previous point, that you realize, oh, actually it's very flimsy and it doesn't hang together. So in some sense, aren't we Matt the very thing that Penny Cook is saying, you need these forces out there in the universe saying, like stop,
Starting point is 00:32:29 stop and think more carefully about what the person's saying. Yeah, exactly. Like it often doesn't require like deep things. it. You know, it's not rocket science. It's just kind of stopping, going slowly, paying attention and seeing whether things are coherent. I mean, you know, I was thinking about this today, actually, Chris, this is a bit tangential, but I was thinking about this time when my brother told me about a time in his United States, and he saw some crazy libertarian religious woman at some sort of protesting in favor or against some amendment that they're always having in these.
Starting point is 00:33:06 California and stuff like that. But the interesting thing about it was, is the amendment she was campaigning for was something about like gay marriage or something related. This was a while ago. Maybe it wasn't gay marriage, but it was something ultra-progressive, right? I think it involved gay people. And my brother was incredibly impressed because Eddie spoke to her. And yet, you know, it found out she was basically, yeah, she was like a libertarian.
Starting point is 00:33:34 So what she was, she was a libertarian. you know, traditional Christian conservative person, right? But also a libertarian. And she was very clear that, you know, she thought these people were going to hell. Didn't want anything to do with them.
Starting point is 00:33:50 But the government shouldn't be telling people what they shouldn't do and the problems be of their own. Yeah, but it's like she did, but she didn't see those things in conflict because she had her personal values, right? Which is guided how she lived her life and who she would mix with and all that stuff.
Starting point is 00:34:08 And then she had her libertarian values, which, you know, I think there's a version of that, which they are perfectly commensurate. But she was like a unicorn because she's one of the only people who would fit that category that actually had maintained a consistent framework of opinions rather than just, you know, dropping their principles as soon as it suited them. All right. So she recognized, you know, that the outcome of this, could be bad for the people and was a sin and so on.
Starting point is 00:34:39 But her libertarian, like, governmental philosophy overrides the religious thing, right? Because it was about the law. The law wasn't saying, this is great. Everyone should do it. The law was saying whatever. So it's not like one overriding the other. This is how I read it as a type of intellectual coherence, right? actually marrying up your various things.
Starting point is 00:35:07 But that kind of thinking is incredibly rare, right? And it just, but it just made me think about how the general principles that apply, I think, not to everything, right? This is my, I'm going to get a bit guru-esque here, right? Yeah, but I think, like you can say, like, what is it that sort of defines what I think is important or what I believe, or one of the better word, right, what I believe? Now, you'd be wrong to go, oh, I believe in evolution, or I believe in climate change, or I believe in, this, that, and the other, right? Because, like, fundamentally, there's, like, two things that I think are important to believe in, and one is internal coherence, right?
Starting point is 00:35:47 And then, you know, mathematicians and philosophers and, you know, analytical stuff, making sure everything fits together and is coherent, just kind of a lot of the stuff we look for in the shows. And the other, so that's deductive. and then there's the inductive thing right which is actually checking to see whether or not what it is that you believe actually marries with reality and it's not something so you know these are the two sort of basic things and one you have you have the empirical stuff and the scientific observational stuff on one hand or just the reality testing and then on the other hand you've got the sort of checks to go hang on is that thing that I just said does it contradict the other thing or does it does it logically flow from the other thing or am I just
Starting point is 00:36:32 saying words but okay so let me let me push push you a little bit there because like for me me when you describe that person like the unicorn right who is a libertarian and prioritizes their libertarian government philosophy over their personal religious beliefs but surely it's equally as coherent for somebody who's a religious libertarian but who's a religious libertarian but values their religious views over their governmental philosophy to say, well, like, yes, I overall prefer libertarian. If you'd like to continue listening to this conversation, you'll need to subscribe at patreon.com slash decoding the gurus.
Starting point is 00:37:13 Once you do, you'll get access to full-length episodes of the Decoding the Gurus podcast, including bonus shows, gerometer episodes, and Decoding Academia. The Decoding the Guru's podcast is at. free and relies entirely on listener support. Subscribing. We'll save the rainforest, bring about global peace, and save Western civilization. And if you cannot afford $2, you can request a free membership, and we will honor zero of those requests. So subscribe now at patreon.com slash decoding the gurus.
Starting point is 00:37:57 Thank you.
