Decoding the Gurus - Yuval Noah Harari: Eat Bugs and Live Forever

Episode Date: April 5, 2024

Yuval Noah Harari is a historian, a writer, and a popular 'public intellectual'. He rose to fame with Sapiens (2014), his popular science book that sought to outline a 'History of Humankind', and followed this up with a more future-focused sequel, Homo Deus: A Brief History of Tomorrow (2016). More recently, he's been converting his insights into a format targeted at younger people with Unstoppable Us: How Humans Took Over the World (2022). In general, Harari is a go-to public intellectual for people looking for big ideas, thoughts on global events, and how we might avoid catastrophe. He has been a consistent figure on the interview and public lecture circuit and, with his secular message, seems an ideal candidate for Gurometrical analysis.

Harari also has some alter egos. He is a high-ranking villain in the globalist pantheon for InfoWars-style conspiracy theorists, with plans that involve us all eating bugs and uploading our consciousness to the Matrix. Alternatively, for (some) historians and philosophers, he is a shallow pretender, peddling inaccurate summaries of complex histories and tricky philosophical insights. For others, he is a neoliberal avatar offering apologetics for exploitative capitalist and multinational bodies.

So, who is right? Is he a bug-obsessed villain plotting to steal our precious human souls or a mild-mannered academic promoting the values of meditation, historical research, and moderation? Join Matt and Chris in this episode to find out and learn other important things, such as what vampires should spend their time doing, whether money is 'real', and how to respond respectfully to critical feedback.

Links:
The Diary of a CEO: Yuval Noah Harari: An Urgent Warning They Hope You Ignore. More War Is Coming!
Our previous episode on Yuval and the Angry Philosophers
Current Affairs: The Dangerous Populist Science of Yuval Noah Harari (a little overly dramatic)

Transcript
Starting point is 00:00:00 Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer, and we try to understand what they're talking about. I'm Matt Brown. My co-host is Chris Kavanagh, the Ernie to my Bert. I couldn't do without him, our most valued player in GurusPod Proprietary Limited. G'day, Chris. Wow, that's a lot of synchronicity going on there, because just yesterday I was watching Sesame Street with my youngest son, not on my own. There's a series about Cookie Monster and a kind of smaller pink monster called Gonger, and they were on a foodie truck, a food truck, cooking various things.
Starting point is 00:01:07 So he enjoys that. So it was topical. That sounds good. Yeah. So Bert and Ernie, they're still around. I mean, they don't feature that much in that, but they're around. You know, they haven't got out of the Sesame Street gig. They haven't gone independent. I wonder if they're podcasting.
Starting point is 00:01:27 Like, they've started their own podcast. That would be pretty on point. Yeah, yeah, yeah, I'd listen. Yeah, I think a lot of people would listen. I haven't watched kids' shows for a while. I don't do that anymore. The kids are watching grown-up stuff now. But you're at a different life stage. You recommended Bluey to me and that was pretty good. That's good, yeah. Yeah, it's very popular. A lot of people say it's really good. I watched a few episodes just because people raved about it. I was like, it's a kids' show, it's all right, I guess. But now... Well, there's an episode where the parent dogs are, like, watching the kids from the balcony and they get drunk. I mean, that's not the main point of it, but I thought that kind of thing is the reason the parents like it. You've seen Australian dog parents getting hammered
Starting point is 00:02:11 and that's the subtext for the parents watching. Oh, that's pretty cool. Yeah. Actually, I'm glad you mentioned that, because I had the impression that it was just a bit too saccharine, like a bit too 'everything is nice and positive messages' and all that stuff. And, you know, I kind
Starting point is 00:02:25 of like the kids' shows from the 1970s and 80s, where they were just insane freaks, like The Goodies and Monkey Magic and stuff. Not really many positive messages. You want the edgy Bluey. Yeah, Ren and Stimpy Bluey, that's what I want. Oh, okay, right, yeah, I see. It's not like that, but it has, you know, a little bit of it. Just a little bit at times, just a little sprinkling for the parents. So there's that. But, Matt, look, you look well. You know, you're fresh-faced. We're here.
Starting point is 00:02:52 There's another decoding to go. You're geared up, right? No, no. I slept for like two hours last night, but I've had three coffees, so I think it all balances out. But, no, I'm fine. I'm good to go. I'm keen to do. I'm keen to do.
Starting point is 00:03:05 I'm going to do Yuval Noah Harari slowly, Chris. That's a nice image. And I love it when you clap because it makes it so easy to edit. It's in there afterwards. But, yeah, I've actually, from being such a powerhouse of muscle, I have injured my shoulder from doing too many pull-ups. I went too hard, too fast. I reached levels that mere mortals couldn't comprehend
Starting point is 00:03:31 and it kind of twinged my shoulder. So I'm dealing with that, but we're all middle-aged people dealing with the slow destruction of our bodies. Everyone listening is in the same bucket. Of course we know this. The audience demographics don't lie. We know, we know. Yeah, no, no, I'm going to be good. I have pains too.
Starting point is 00:03:51 My shoulder still hurts from when I fell over ice skating in Japan. I think it's going to hurt for the rest of my life. And that has nothing to do with me. I didn't knock you over. I did not. I was not involved in the ice skating, to my knowledge. So, yeah, I can't take the blame for that. No, I'm ready to go. I'm going to be on my best behavior, Chris. Going to be focused. There are going to be good takes. I'm not going to clap. I'm not going to breathe. Yeah, don't breathe. He tells me not to breathe. You can do that thing the Chocolate Rain guy does, you know, step away from the microphone to breathe. That guy. You should do his take. But actually, Matt, we're approaching 20 seconds left for the banter quotient of this episode, given that we now have a pure show dedicated to talking about whatever we want.
Starting point is 00:04:36 So we have to move on. The people have spoken. This is the format. And now we are going to turn to look at the Israeli academic Yuval Noah Harari. Streamers: 'America deserve 9-11, dude. Fuck it, I'm saying it.' Academics: 'Can I make a comment about canceling culture?' Streamers: 'Yeah, please explain this to me so I can tell you how fucking stupid you are.' Academics: 'And when I'm talking about that anagogic in and out of the imaginal augmentation of our ontological depth perception, that's what I mean by imaginal faithfulness.' You'll provide some interesting lessons for us today.
Starting point is 00:05:22 This is going to be really interesting. Yuval Noah Harari, historian, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, popular speaker, frequently appearing on TED, this kind of thing. Oh, and enemy to folks like Alex Jones and the reactionary right-wing conspiracy folks. Yes, this is Yuval Noah Harari. And one just side point here, Matt, side point. I have made a mistake. I recently discovered that I know what political reactionary means, right? Like the person harkening back to the glory days, trying to take things back. But I also use that same word, reactionary, to mean like emotionally responsive and, you know, knee-jerk kind of things. But what I mean is reactive, reactive, when I say that. Yes. Yeah, but the thing is, mostly the
Starting point is 00:06:17 times when you're talking about a reactionary, they are also reactive, right? So people don't correct you very often, because the two things go together. But they're different. Yeah. I'll correct you. If I could just... I'll be glad to. Your pronunciation as well. Yeah, I'm going to help you. And also millenarian, Matt. Millenarian, not millennial. Just... that was feedback for you, right? Millenarian preacher, not a millennial preacher. Okay, well, I know that. Again, I know... we know these things, all right? We know. Just insert the correct word. Jesus Christ, it's just one letter. I misspeak out of, I don't know, just being old and senile, not from not knowing what the words mean.
Starting point is 00:06:56 That's it. Well, that was Correction Corner, which is not part of Banter Town, so you can't complain. We are now ready to idea-jack into Yuval Noah Harari. And the content that we are looking at this week is his interview on The Diary of a CEO with Steven Bartlett. This is from just two months ago. 'Yuval Noah Harari: An urgent warning they hope you ignore. More war is coming!' I think some of that is to do with them trying to tap into the YouTube algorithm, because this conversation is not so dramatic. Yeah, I think that's definitely tapping into the algorithm. No, Yuval Noah Harari is not speaking to those points very much. Chris, let's talk about what we knew about Yuval Noah Harari before we listened to this, because I didn't know a great deal.
Starting point is 00:07:49 I started reading a couple of his books. I bought Sapiens and I got a few chapters in, and I think I got a little bit bored. But my vague impression of him was, you know, fine: a kind of light, popular, history-science, big-picture type author. In my mind he lives in the same sort of space as people like Jared Diamond or Malcolm Gladwell, maybe. Yeah, was that your vibe? Exactly that. Exactly that. And, as is often the case with those kinds of authors, I was
Starting point is 00:08:20 also there for the wave of criticism which came, which said he's oversimplifying things. Historians have issues with some of the ways that he represents, you know, eras of history. And things that I knew when I listened to him, like from anthropology, his stuff about the history of war or whatnot, I also found that he oversimplified and, you know, sometimes spoke with undue confidence. So I believe that is all correct. I also knew of him through his second job as a villain of the Alex Jones right-wing conspiratorial ecosystem, where he's basically an agent of Klaus Schwab. And because he's talking about human augmentation and AI, they see it as he wants to usher in the brave new world. So that's the other way I've come across him. And I know that leftists have an issue with him because he's essentially a pro-UN, WEF, technocratic kind of
Starting point is 00:09:22 guy. So he's not calling for revolution. He's not a radical hip guy. That's all of the ways that I've encountered him. So, yeah, I've seen some of his TED Talks, and we listened to the thing that got all the philosophers in a tizzy about ideas being fictions or whatever it was. So yeah.
Starting point is 00:09:38 Yeah, and we're going to be hearing more about that. Well, what a good pivot, Matt. Because, oh, and again, the one other thing I will say is, I also bought Sapiens and listened to a bit of it, the audiobook, and I enjoyed it. I also gave up, I think, or just lost interest after a while. But I did enjoy the kind of sweeping overview, you know, taking early human evolution as part of the human tale. That's not always the case in, like, history-focused books. So I appreciated that, despite the limitations of the chosen form. Yeah, I think we have to be a little bit... well, make some allowance for people that are writing those popular books. Like,
Starting point is 00:10:17 yes, obviously it's good not to misrepresent things and it's good to be accurate, and the world and history and biology and science, everything is much more complicated than it's made out to be in a popular book by Gladwell or Steven Pinker or Jared Diamond, or Noah Harari for that matter, or Rutger Bregman. Yeah, we're here. But I have... so I think there are legitimate points to criticize them on. On the other hand, I've also seen sort of a certain kind of academic specialist type really going over the top and going, you know, they've described the Middle Ages
Starting point is 00:10:53 as being like this, but they don't recognize that actually things were completely different in France and England and, you know, and it's just, okay, all right, calm down. Yeah, there are people that, you know, get accused of being somewhat jealous of the attention that he's given. And I wouldn't say that's entirely inaccurate in some cases.
Starting point is 00:11:11 But in any case, that's not for us to dwell on. That's just for us to point at and hint towards as we move into the content. And actually, the start of the interview, they kind of talk about Yuval's grand mission, and it touches on the thing that he got in controversy for. It's to clarify and to focus the public conversation, the global conversation, to help people focus on the most important challenges that are facing humankind,
Starting point is 00:11:38 and also to bring at least a little bit of clarity to the collective and to the individual mind. I mean, one of my main messages in all the books is that our minds are like factories that constantly produce stories and fictions that then come between us and the world. And we often spend our lives interacting with fictions that we or that other people created, while completely losing touch with reality. And my job,
Starting point is 00:12:16 and I think the job of historians more generally, is to show us a way out. Some interesting thoughts. I don't think that is the job of historians, that's just one note. Like, I don't think the goal of historians is to show us the way out
Starting point is 00:12:31 of understanding the narratives that people construct as he refers to them, fictions. Like he kind of presents it as that's the obvious job of historians. I'm like, well, like aren't historians primarily about documenting history
Starting point is 00:12:44 and what's happened. Yeah, yeah. Like, on one hand it's a nicely framed goal. He's, you know, trying to bring clarity and help people see things more clearly and understand what things are just ideas or stories or narratives that we've superimposed on the world, versus the more concrete facts of existence. But, yeah, I mean, as we'll talk about, I'm not quite so sure you can draw such a strong dividing line between 'these are the real material facts of history' and 'the stuff that's just stories'. But we'll get to that. Yeah, we will get to that, because I think there is an issue here with the way that he uses fictions.
Starting point is 00:13:27 And in one sense, it's unobjectionable, because symbolic representations and political systems and ethnic identities and whatnot are things which humans have constructed, right, and which have had a big impact. And we should consider the empirical shakiness that various things we take for granted rest upon. But on the other hand, the symbolic social realities that we exist in are very real, in the sense that people build castles and they marry people because they're associated with certain kinship groups,
Starting point is 00:14:03 and also, like, the notion that it's all not true... he's kind of using fiction in two ways. But we'll see as it goes on. So the overall agenda is fine, and we're going to spend a little bit more time on these concepts, about what he means by fictions and ideas. So here's another clip talking about the power of fictions. Much of what we take to be real is fictions. And the reason that fictions are so central in human history is because we control the planet, rather than the chimpanzees or the elephants or any of the other animals,
Starting point is 00:14:44 not because of some kind of individual genius that each of us has, but because we can cooperate much better than any other animal. We can cooperate in much larger numbers and also much more flexibly. And the reason we can do that is because we can create and believe in fictional stories. Because every large-scale human cooperation, whether religions or nations or corporations, is based on mythologies, on fictions. Again, I'm not just talking about gods. This is the easy example. Money is also a fiction that we created. Corporations are a fiction. They exist only in our minds. Even lawyers will tell you that corporations are legal fictions. And this is, on the one hand, such a source of immense power. But on the other hand,
Starting point is 00:15:41 again, the danger is that we completely lose touch with reality. And we are manipulated by all these fictions, by all these stories. And stories are not bad. They are tools. As long as we use them to cooperate and to help each other, that's wonderful. Some very broad thoughts there, Chris. I guess there's a few points there. I mean, there's a sense in which this is all, I think, reasonable and even a little bit interesting. Because I do agree with him that, look, we can call these things fictions if we want to. But to just take one example, the sort of
Starting point is 00:16:13 nationalism and xenophobia that is very prevalent in a place like russia at the moment that is a narrative that people tell themselves at different places and times, that we have a manifest destiny, that all of this belongs to us. We are the preeminent people and we deserve and should be subjugating other people. Like he says, those are ideas, those are narratives, those are stories you tell themselves. And I'm not one of these people that subscribe to like a purely materialist or a purely realist view of historical events. I think the ideas matter and the stories that people tell themselves matter. There's a reason why Ukraine is not in danger of being invaded by contemporary Germany at the moment, but is being invaded by Russia. And it's got a lot to do with the cultural stories that those people in those respective countries are telling themselves. So I think
Starting point is 00:17:01 that is all fine and true and even a little bit interesting, because it is debatable, and it's an interesting debate, about the degree to which ideas and narratives play a pivotal role versus more material factors. But it's a very broad definition of fictions. And on one hand he's right that they are an important characteristic of the human race that distinguishes us from other animals. Yeah, language, transmissible culture, as you know, Chris. But I think he overstates that a little bit, because he implies that's the only difference between us and other animals. But actually, you can't teach a chimp to repair a car. You know, there are limitations of other animals that are purely intellectual, and you can't give them the culture. So I'm on board to a large degree with the idea that the preeminence
Starting point is 00:17:50 of the human species has got a lot to do with cooperative effort and incrementally increasing culture. But he just overstates it a little bit. I've got other bones to pick, but I'll let you have a chance, Chris. Well, one thing that I would note, slightly petty of me perhaps, but the philosophers are wrong. This clip makes it clear. He was not making a special point about human rights. He was talking about money and corporations and nations and human rights. He puts them all in the same category of fictions, right? So he is not saying they are not important, which is the way that some people interpreted a little clip of his that was going around
Starting point is 00:18:31 and doing the rounds. So I think that criticism is not so good. But the point that you raised about him being overly broad and perhaps conflating different definitions of fiction, I think, comes up here. Because there's the sense of fiction in terms of something symbolic, right? Which is, you know, a kind of interpretive thing, which is not related to some inherently factual, material thing, right? And then there are fictions, meaning, like, stories about things that are non-existent. The difference here, I would say, is that a corporation and a dragon differ in various
Starting point is 00:19:13 important ways, right? And one never exists in the material world, except, you know, people build statues of dragons or whatever. But corporations do exist, in that there are buildings, there are people, there are concepts, there are things that trade on networks and so on. And so, you know, calling them both fictions seems to be playing a bit of a semantic game. But in any case, the point you made about wars, I think, is a good one. And he did give an illustration of this, which highlights that point quite well. I'm now just back home in Israel. There is a terrible war being waged, and most wars in history, and also now, they are about stories. They're about fictions.
Starting point is 00:19:58 People think that humans fight over the same things that wolves or chimpanzees fight about, that we fight about territory, that we fight about food. It sometimes happens, but most wars in history were not really about territory or food. There is enough land, for instance, between the Jordan River and the Mediterranean to build houses and schools and hospitals for everybody. And there is certainly enough food. There's no shortage of food. But people have different mythologies, different stories in their minds, and they can't find a common story they can agree about. And this is at the root of most human conflicts.
Starting point is 00:20:40 And being able to tell the difference between what is a fiction in our own mind and what is the reality, this is a crucial skill. And we are not getting better at finding this difference as time goes on. That seems sort of contradictory to me. Because, you know, I think he makes a very good point that a lot of human conflict is about differing ideas about nationhood and about who owns specific parts of land and, you know, who are the people associated with particular areas and so on. That's all true. But that last point where, you know, he's like, it's all about the fiction, it's not about the reality. But, like, the reality is that there are people dispossessed from territories, or there actually are restrictions and conflict and so on. Well, I read it like this.
Starting point is 00:21:35 I sort of classify this as true but trite, right? So he's saying that there isn't some sort of material necessity for these two groups to be in vicious conflict; it's happening because of the ideas that are in their heads, right? Yeah, but everything... I know, that's what I'm saying. That's what I'm saying. That's the trite bit, Chris, right? It's the same with this point about corporations and so on. Yes, they are a concept that lives in your head. But, you know, you could talk about, I don't know, your family house, your family home rather, right?
Starting point is 00:22:07 Yeah. And you go, that's just a concept that lives in your head. This is just, you know, some walls and a roof and a place where a group of people are statistically more likely to spend the night. Yeah. Like, yeah, okay, fine. But it might be interesting to certain types of philosophers. I don't know, but not to me.
Starting point is 00:22:26 And, you know, I mean, but if you put that aside, I mean, I think there's a more generous way to read it, which is that social constructs that we have, which then actually become formalized and aren't just a concept like a dragon, are very important and actually have an instrumental role in the world. Sure, your passport. Your passport. Like, for instance, when you are charged with a crime in Australia, the mechanism is that the Crown presents one side of the case against you. Or, of course, if it's a civil case, it could be another person who's presenting a case against you. So we have this construct of the Crown, which is like the state acting on behalf of everyone, sort of acting as a surrogate for a real person. And so that is somewhat interesting, I think. And same with corporations.
Starting point is 00:23:08 You know, corporations, you know, first came about as an instrument to impersonate a person, essentially, something that can own things, someone who can employ other people and pay tax and so on, like a person. You know, I think they wanted to send ships around the world to go grab some spices and come back to merry old England. Probably there's only a 50-50 chance of ever seeing the ship again. Massively risky but potentially high payoff. Even a very rich person would not be willing to take on that risk. So a lot of people can put all their funds together, get a ship.
Starting point is 00:23:42 Now you've got to employ a captain, and the ship has to be owned by some entity. And so those roles of employing the ship's captain, owning the ship, and then distributing the money to the shareholders, these are all functions. But it is still, like he says, a purely constructed thing that nevertheless has a very real and instrumental role, especially in the modern world. Yeah, and I do think that people encountering analysis of the social construct,
Starting point is 00:24:12 or how states came into being, right, the, you know, Eric Hobsbawm and Ranger, or invented traditions, all of these kinds of concepts where you're critically evaluating many of the concepts that you have been steeped in, but maybe taken for granted. I do think that is a valuable thing. And that in most cases, maybe this isn't fair, in a lot of cases, it happens whenever people are teenagers or they go to university and they come across these kinds of ideas. But it can come across, you know, whenever you read a philosopher or you see a YouTube discussion about some point, right, read Naomi Klein's No Logo or whatever the case might be, or see a documentary about, you know, The Corporation and their kind of psychopathic nature. But I think that Harari, in highlighting these things, is in a way,
Starting point is 00:25:09 he might be the first person that people encounter introducing this idea, right? And if that is the case, I think it's perfectly reasonable that people would find it, you know, kind of mind-opening. But if you have come across those ideas many times before, like you say, it feels like a relatively trite observation. So it might be, in our case, the ivory tower effect, right? Where, yeah, of course people know that money doesn't have intrinsic value, it's just a token, a means of exchange or whatever. Like, yeah, it might feel blindingly obvious to some people. But, you know, like you say, more charitably, it wasn't obvious when I first found out,
Starting point is 00:25:57 the first time someone in a dorm said to me what money really is. Yeah, that's right. And, you know, for people out there encountering these things for the first time, it isn't obvious. So, you know, I think the generous interpretation is that he's encouraging us, or reminding people, that these things which we do tend to treat as very real, just like a permanent fixture of the world, like money and corporations and countries, like the John Lennon song goes, you know, they are a choice.
Starting point is 00:26:30 And, you know, you shouldn't think that they're necessarily inevitable. You could organize things in other ways, because they are simply ideas. Well, I'll play one last clip of Harari blowing our minds about the concept of money to illustrate this. Here's the 'money is not real, man' clip. But also, if you think about the world's financial system.
Starting point is 00:26:51 So money has no value except in the stories that we tell and believe each other. If you think about gold coins or paper banknotes or cryptocurrencies like Bitcoin, they have no value in themselves. You cannot eat them or drink them or do anything useful with them. But you have people telling you very compelling stories about the value of these things. And if enough people believe the story, then it works. They're also protected by language.
Starting point is 00:27:24 Like my cryptocurrency is protected by a bunch of words. Yeah, they're created by words and they function with words and symbols. When you communicate with your banker, it's with words. You know, when you communicate with your wife, it's with words. When you talk to your dog, it's usually with words; they don't adhere to them at all. But, you know, just... yeah, I'm being a bit mean, but I feel like he's making his point more interesting by...
Starting point is 00:27:55 His tone of voice. His tone of voice partly, but also, as we just discussed, he has an extremely expansive definition of this idea of stories we tell ourselves, right? It's very, very broad. Yeah. So it kind of sounds more interesting when you say that, hey, money is just stories we tell ourselves, man. But, you know, you could just as easily say, you know, money is a form of information, a form of tracking credits and debits from one person to another, or sort of collectively.
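That reframing, money as information for tracking credits and debits, is easy to make concrete. Here's a minimal sketch, purely illustrative and not anything from the episode; the account names, amounts, and the transfer function are all made up:

```python
from collections import defaultdict

# Money as information: a ledger is just a record of credits and debits.
ledger = defaultdict(int)  # account name -> balance, in arbitrary units

def transfer(payer: str, payee: str, amount: int) -> None:
    """Debit one account and credit another; no physical token moves."""
    if ledger[payer] < amount:
        raise ValueError("insufficient funds")
    ledger[payer] -= amount
    ledger[payee] += amount

ledger["alice"] = 100
transfer("alice", "bob", 30)
print(dict(ledger))  # {'alice': 70, 'bob': 30}
```

Whether the units are gold coins, banknotes, or Bitcoin, the part doing the work is the shared record, which is Matt's point about information, and roughly Harari's point about stories.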
Starting point is 00:28:26 I feel like he's saying, again, something that is pretty, pretty trite, but he is making it sound a bit more interesting in the way that he's choosing to express it. Right. And one of the other topics that he talks about is AI. And he actually initially links this to the, you know, point about the importance of words and understanding stories,
Starting point is 00:28:44 and points out a limitation with AI. And also with new technologies, which I write about a lot, like artificial intelligence, the fantasy that AI will answer our questions, will find the truth for us, will tell us the difference between fiction and reality. This is just another
Starting point is 00:29:05 fiction. I mean, AI can do many things better than humans, but for reasons that we can discuss, I don't think that it will necessarily be better than humans at finding the truth or uncovering reality. Well, yeah, so, like, there's an issue there. Because if he's talking about which particular national story is the most compelling, then maybe it can't, right? It's a subjective thing. But if it is, can the AI distinguish which of these scans shows cancer and which does not, more accurately than a person? The answer is probably yes, it can. And over time, you know, it will more accurately reflect the reality. But on the other hand, Matt, you know, cancer,
Starting point is 00:30:00 isn't that just a word that we put on a biological thing which is happening? Isn't it just a story that we tell about the immune system? So, anyway. Yeah, well, you know, at the beginning he defined this hard dichotomy between materially real things and ideas, and he says that we're all somewhat clouded, mistaking the ideas for the material reality, and an AI won't be able to do any better. But, I mean, I don't even know what the point is there. I mean, I think I can distinguish between, without getting into word games,
Starting point is 00:30:33 I could point to things like a table, which is physically there, it's associated with a very particular set of physical things, and a corporation, which, you know... anyway. Maybe you're the savior, Matt. You're what we need. But it doesn't seem that hard, basically. You know, it's very hard for AI, man. It's very hard. He does discuss the AI revolution, and when people are talking about this, it's kind of interesting to see where they fall on the Yudkowsky spectrum. Like, are they doomers or are they accelerationists, right?
Starting point is 00:31:06 That's the thing. So where does he fall? And this should be clear. And there is no chance of just banning AI or stopping all development in AI. I tend to speak a lot about the dangers simply because you have enough people out there, all the entrepreneurs and all the investors talking about the positive potential. So it's kind of my job to talk about the negative potential, the dangers. But there is a lot of positive potential and humans are incredibly capable in terms of adapting to new situations. I don't think it's impossible for human society to adapt to the new AI reality. The only thing is it takes time and apparently we don't have that time. He's actually a kind of positive doomer in a way, but like his issue is
Starting point is 00:32:01 things are progressing so rapidly that, like, we could adjust, but it's going too fast, right? So he is calling for things to slow down. Yeah. Yeah. I think at a few other points, too, he points to technological things which will disrupt society in ways we can barely comprehend. And, yeah, while he's somewhat optimistic, he talks about AI in the same terms. Yes.
Starting point is 00:32:23 So he compares it to the printing press, and here's that bit. And when I hear these kinds of comparisons as a historian, I'm very worried about two things. First of all, they underestimate the magnitude of the AI revolution. AI is nothing like print. It's nothing like the industrial revolution of the 19th century. It's far, far bigger. There is a fundamental difference between AI and the printing press or the steam engine or the radio or any previous technology we invented. The difference is it's the first technology in history
Starting point is 00:33:00 that can make decisions by itself and that can create new ideas by itself. A printing press or a radio set could not write new music or new speeches and could not decide what to print and what to broadcast. This was always the job of humans. This is why the printing press and the radio set in the end empowered humanity. That you now have more power to disseminate your ideas. AI is different. It can potentially take power away from us. It can decide, it's already deciding by itself what to broadcast on social media, its algorithms deciding what to promote.
Starting point is 00:33:50 And increasingly, it also creates much of the content by itself. Point of order. He is conflating AI with, like, all kinds of technology, algorithms. And, you know, there have been recommendation engines for Netflix and so on around for a long time before AI came along, and our attention has been guided by that. So AI being involved in that loop isn't really going to fundamentally change anything. So that's an interesting point, Matt, because you have some expertise there. So I think that conflation looms large there. But to my non-expert mind, is that not, in the grander scheme, part of the evolutionary trajectory
Starting point is 00:34:28 of the development of AI? Like, initially you have recommendation engines and, you know, things ranking the popularity of stuff, and then over time you put more systems into it so it more cleverly organizes what things to show people, and so on. And, yeah, but in your case it's obvious that, like, LLMs are doing something significantly different, right, than has previously been going on. So... No, no, I mean, you're basically right. Like, you could frame it as a difference of degree. Like, those original recommendation algorithms were also done via, like, a factor
Starting point is 00:35:06 analysis type thing: putting in all of the data of what everyone's seen and, you know, what people have enjoyed, and then collapsing it down into a latent subspace and then reconstructing it to get a prediction for you. So that isn't, like, a fundamentally different thing from what's going on in a large language model, for instance. So, you know, you're kind of right, I guess. So, I mean, putting that aside, I guess the thing is we wouldn't call those things intelligent, or, like, human-like intelligent, right? They're just a clever statistical pattern recognition thing. Yeah, I did have the same thought about whether 'the algorithm' is what people are referring to with AI, because usually they treat them separately, but it is true that in a lot of discourse they kind of get mixed. Yeah. No, I mean, an algorithm is an extremely broad concept.
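That factor-analysis-style recommendation step Matt describes, collapse everyone's viewing data into a latent subspace and reconstruct it to get predictions, can be sketched in a few lines. A minimal illustration, assuming a toy user-by-item ratings matrix and truncated SVD as the factorization; the data and names are made up:

```python
import numpy as np

# Toy user-by-item ratings matrix; 0 means "not yet rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Collapse the data into a k-dimensional latent subspace (truncated SVD),
# then reconstruct it to get predicted scores for every user-item pair.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2  # number of latent factors kept
R_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Reconstructed scores for the items a user hasn't rated act as predictions.
user = 0
for item in np.where(R[user] == 0)[0]:
    print(f"user {user}, item {item}: predicted {R_hat[user, item]:.2f}")
```

Production recommendation engines layer much more on top, but this 'compress, then reconstruct' core is the sense in which, as Matt says, they're a clever statistical pattern recognition thing rather than something fundamentally different in kind.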
Starting point is 00:35:51 You don't need a computer to do an algorithm. Your algorithm could be: make the toast, get the butter out of the fridge, butter the toast. Follow those steps, if-then, that kind of thing. So, no, it probably is recommendation algorithms that people mean when they refer to that, right? Like the social media recommendation... Oh, when they talk about the algorithm on social media? Yeah, yeah, they're referring to the recommendation engines, of which you can have simple ones, you can have complicated ones, you can do it in any number of ways. But, yeah, I mean, what do you think, Chris? I mean, he's just giving his opinion here, right? So on one hand, yeah, it's like, fine, whatever. And he's a historian.
Starting point is 00:36:25 Yeah, he's a historian. And I think he can make a very credible case that it could be bigger. I'm not certain it would be, but you could make the case that it's going to be bigger than the printing press, bigger than the Industrial Revolution in terms of social impact.
Starting point is 00:36:38 Yes, I definitely think, you know, we can't tell, because we're just in, like, the second or third year of the LLM revolution. We could guess. We could guess. No, I'm not worried. Yeah, we could. It is certainly the case that you would expect there's going to be a lot more technological development in the next 50 years than there was in the previous 400 years. The exponential increase in technology is apparent, right? Like, we didn't have the internet 30 years ago,
Starting point is 00:37:12 and now everyone has the internet in their pocket in high-speed form, like, a lot of people. And the internet has been transformative in a lot of ways. And AI, you know, seems like it can combine with the internet. So it would make sense. But one thing he said was, like, AI has the power to remove power from humanity and kind of go on under its own agency, right? Like, pursuing its own goals, and it could remove power. And yes, I can see a future where that's possible. But he says the previous ones had the ability to empower people, to disseminate their ideas more widely, to, you know, do things more easily.
Starting point is 00:37:44 But AI also has that too. So, yeah, it's one potential future in which the AI converts us all into paper clips, and another one is just people are using AI in the way they use the internet now, for good and bad purposes. Yeah, yeah. Like you, I didn't buy his contrast there, which is that these were all empowering technological revolutions and this is going to be a disempowering one. I mean, my experience of using AI so far is very much an empowering one, right? It empowers me to do a lot of tedious tasks more quickly, and this allows me to focus more on stuff that I want to focus on. So that's clearly empowering, right? And I don't think that's fundamentally different from the lawnmower that I've got in the shed, which empowers me to cut the grass faster, right?
Starting point is 00:38:28 Another boring job. But does the lawnmower have its own thoughts about where you should be cutting the grass and whatnot? This is the issue. And actually, pushing him closer to the doomer end of the spectrum, you have him talking about what if we get it wrong. Because he's highlighting that during the Industrial Revolution, there were various political ideologies and ideas about empire that came up, right? That people tied together. And these were, you could say, failed experiments, right? That led to millions of people dying and the atomic bomb and so on. And he argues that with AI... So these are just a few examples of the failed experiments. You know, you try to adapt to something completely new, you very often experiment, and some of your experiments fail. And if we now have to go in the 21st century through the same process, okay, we now have not radio and trains, we now have AI and bioengineering, and we again need to experiment, perhaps with new empires, perhaps with new totalitarian regimes, in order to discover how to build a benign AI society,
Starting point is 00:39:46 then we are doomed as a species. We will not be able to survive another round of imperialist wars and totalitarian regimes. So anybody who thinks, 'Hey, we've passed through the Industrial Revolution with all the prophecies of doom. In the end, we got it right': no. As a historian, I would say that I would give
Starting point is 00:40:07 humanity a C- on how we adapted to the Industrial Revolution. If we get a C- again in the 21st century, that's the end of us. Again, Chris, there's a theme here, I think, in my comments, which is I read him here as making a very broad, somewhat true, but also somewhat banal point, which is that when technological revolutions occur, going right back to things like, okay, we can grow food by planting seeds, now we can build a city and so on, there's going to be unintended consequences. Like, you're going to have a lot more disease, for instance, because large numbers of people are living in the same area. And, you know, it's true, I think, broadly what he said, which is that, you know, the Industrial Revolution made it possible for political systems like fascism or communism to work. They weren't really an option at a
Starting point is 00:40:56 national kind of scale before. And, you know, it's also true that, with technology generally, like more modern electronic technologies, China is setting up what seems to be like a surveillance state in some respects. And you don't necessarily need AI. You don't need to single out AI as the key ingredient there. You just need, like, closed-circuit TV cameras and bracelets to monitor people. So technology is dangerous. It's a double-edged sword. I don't disagree.
Starting point is 00:41:30 But isn't it kind of a banal point again? Like, don't most people know this, that technology can have good and bad consequences? Yeah, you would imagine they do. And I think his statement that definitely if we have totalitarian regimes and, you know, fascistic governments or whatever, that'll be the end of humanity... will it? Like, we have totalitarian regimes now all over the world, and some of them even have access to nuclear weapons. And we're still here. And I'm not saying that should make us complacent, but I'm not completely at the point that, you know, we're doomed in the next, like, 20 years if we don't get it right. Well, that's right. Like, you demonstrably don't need hyper-intelligent AI to run a fascist state. It's been done before,
Starting point is 00:42:19 it's happening currently and we could potentially have you know, more states could trend into fascism without any AI whatsoever. So it's very easy to imagine that. So the way he's cast it is that, you know, the Industrial Revolution came along. It led to these harmful sort of social systems, but then globally we sort of have learnt to adjust to the- We adapted. We adapted to the Industrial Revolution.
Starting point is 00:42:43 So we're not going to do fascism or communism anymore. I don't think that's true, is it? I mean, I could well see more countries slipping into authoritarianism without AI. Yeah, well, so there's that. I think one of the things that Yuval does, and why he gets attention, is because he has a knack of presenting stuff in quite attention-grabbing, sometimes overly dramatic ways. And another example of this, which relates somewhat to the topic, is when he's talking about future economic developments, in relation to AI, but also bioengineering and all these kinds of things. So here's something he says about jobs in the future. Nobody has any idea. I mean, if you think about specific skills,
Starting point is 00:43:29 then this is the first time in history when we have no idea how the job market or how society would look like in 20 years. So we don't know what specific skills people will need. If you think back in history, so it was never possible to predict the future, but at least people knew what kind of skills will be needed in a couple of decades. If you live, I don't know, in England in 1023, a thousand years ago, you don't know what will happen in 30 years.
Starting point is 00:44:07 Maybe the Normans will invade or the Vikings or the Scots or whoever. Maybe there'll be an earthquake. Maybe there'll be a new pandemic. Anything can happen. You can't predict. But you still have a very good idea of how the economy would look like and how human society would look like in the 1050s or the 1060s. So we have no idea, Matt, about jobs in 20 years. This is the first time in human history that, you know, who knows, in 20 years, will there be lawyers? Will
Starting point is 00:44:40 there be doctors, scientists? Who knows? I'll take that bet. I'll take that. I'll take that bet for the next 50, 100 years as well. Chris, I think you're being a bit uncharitable here. I think that's broadly true, that the jobs are becoming incredibly more specialized and specific. And it's a bit of a truism, but people do become a bit obsolete. Things are changing quickly. Was it your grandfather or your dad that learned the specific accounting method? Yeah, yeah.
Starting point is 00:45:13 Doing things like that. I'll tell people that little story. So my grandfather was a bank manager, and he was really reliable at calculating. He could open up the notebook and go through the ledger, and he could add up all the totals and get the right answer very reliably. So that's a skill that went obsolete during his career. Right.
Starting point is 00:45:44 So things do change. But Matt, did that get rid of bankers and accountants when that skill became obsolete? Did we now no longer have any accountants or bankers doing any job? Okay, I got one. I got two words for you, Chris. Yeah. Web designer. Yeah, web designers still exist.
Starting point is 00:46:00 Far fewer of them than there used to be. Like, it used to be a hot job, lots of people being trained for it. It was a system where it took a long time and a lot of technical skills to sort of write all the HTML, to put a whole thing together. Now the vast majority of people can get them off the shelf, and very few people need, like, someone who actually has dedicated training in website design. Right, but here's the criticism I have there. So those skills weren't transferable.
Starting point is 00:46:39 Like, if you learned about web design and that kind of thing, it didn't make you better equipped for learning, you know, how to code in a different language or how to do other things. It does, because the kids now, for example, have more exposure to computers and online environments, and they're much better at navigating them than older people. I'm not saying they're directly transferable. But in the same way, I feel this is saying that people will learn these highly specific skills and then they'll just be completely irrelevant. And, you know, what are we going to do then?
Starting point is 00:47:18 we have imagined the future in our fiction, we've always overestimated certain things and underestimated others, right? The way that we presented technology in sci-fi in the 1950s is that everybody will be taking a small pill and that's all they'll want to eat, right? Just the pill that has all the nutrients for the day. And they'll be flying around in a jet car. But the computers are big giant tape decks and all that kind of thing. But none of that is true, right? Like, people can get access to, you know, nutrients or whatever, but people still want to eat meals. They still want to do that. And in the year 2024, Matt, people, even very online people, are talking about the importance of working out and touching grass and doing all these things. So Yuval could be right. I could be wrong that all this is going
Starting point is 00:48:11 to be fundamentally transformed and we're all Skynet's fucking slaves in 20 years' time. But I think it's more likely that humans will still be existing in society sort of the same. AIs will have, you know, transformed various industries and made things easier, but there will be new tasks and there will be new things that people are doing, and people will still be motivated by the stuff that they always were,
Starting point is 00:48:36 status and resources and that kind of thing. Where am I wrong, Matt? Where am I wrong? Where's the lie? You're tilting at windmills, Chris. So if he's making the pretty basic point that technological change is increasing, it's happening a lot faster today than 100 years ago, and it was happening a lot faster 100 years ago than it was in the Middle Ages, and the jobs that people work in are changing more quickly too,
Starting point is 00:49:02 often during our lifetimes, even shortly after we've been trained in something. Like, I had a family member who was a web designer, this is why I mentioned it. His skills were basically obsolete within a year of graduating. Now, that was different in the Middle Ages. If you were a collier or something, you're apprenticed to your dad, who taught you how to reshoe horses or something. Like, you're not going to have the same concerns about whether or not horses are going to need reshoeing in 1280 as compared to 1260, right? That's the point. Yeah, but, you know, eventually being a web designer is going to be like being a basket weaver in the contemporary age, where, you know, people come to the museum to see you use the ancient technology to design a web page or whatever.
Starting point is 00:49:45 Like, people don't imagine that because it's too near history. But, you know, people get interested in things. They come to be seen as, like, artisan crafts and that kind of thing. And I'm not saying web designing necessarily will, but I just think this vision is wrong. If you now look 30 years to the future, nobody has any idea what kind of skills will be needed. If you think, for instance, okay, this is the age of AI, computers, I will teach my kids how to code computers. Maybe in 30 years, humans no longer code anything because AI is so much better than us at writing
Starting point is 00:50:24 code. So what should we focus on? I would say the only thing we can be certain about is that 30 years from now, the world will be extremely volatile. It will keep changing at an ever rapid pace. So Matt, let me make a point here. I use AI in statistical analysis, right? The AI is better than me. It has a deeper knowledge of a whole range of techniques. It can write code better. It knows every coding language, right?
Starting point is 00:51:00 And in this respect, it's already surpassed me and my abilities, you know, a hundredfold, right? But going through grad school, learning statistical methods, learning what are sensible statistical questions to ask, how best to organize data and how to arrange that in a way that lets you do meaningful tests, that was extremely helpful for being able to work with AI in order to get meaningful analysis of data. So in Harari's world, it sounds like he's imagining that none of... like, the fact that I spent time developing the skills to, you know, run statistical analysis on programs that will probably be
Starting point is 00:51:45 obsolete in 20 years' time means that that skill was, you know, kind of a waste of time. But it wasn't. Without doing all that kind of stuff, I'd just be putting questions into a black box, and it would make it much harder for me to interpret outputs, right? Yeah, sure. But he's making the point that, like, it could well be the case that there's the skills involved in technical coding, right? So I'm not talking about the broad conceptual stuff, but, yeah, the C++ language or whatever. Yeah, yeah. Like, it won't necessarily eliminate the need for humans to be involved in the process of generating code, but what it can do is make obsolete, like, a lot of the very,
Starting point is 00:52:25 very time-consuming stuff that involves actually doing the job, writing code. That's right. I mean, it's analogous to, say, farmers, right, in the Industrial Revolution. Yeah, they had tractors and things like that. We still have farmers today, Chris. Exactly, that's my point. No, yeah, but your point is wrong, because we have far, far fewer. We have like about 1% of the number of farmers that we had a few hundred years ago, because so many of the functions associated with farming have been automated by machinery. Yeah. So I take his point to mean that we're going to see that kind of thing happening in the desk jobs. I agree with that.
Starting point is 00:52:58 There'll be a transformation in the nature of jobs as technology improves. Like, I completely agree. I just feel that it's probably a little bit... it's just in the way that it's stated. It feels to me overly hyperbolically stated, and there is a version of it, which is what you're saying, which is completely unobjectionable, right?
Starting point is 00:53:17 And maybe that is what he means. But when he says, you know, what we teach the kids now, it will be completely irrelevant in 30 years. I'm like, will it? No, I get it. I'm not so sure. I understand what you're reacting against.
Starting point is 00:53:30 And, you know, it's what we mentioned before, which is he has a knack for saying things that are relatively true and hard to disagree with, but saying them in just a slightly flamboyant, sweeping way. And it could just be the choice of words, you know, which I sort of mentally chose to ignore. But you didn't. Well, probably the people listening, you know, the way that'll work is a whole bunch of people will be like, Matt is right, and a whole bunch of people will feel, no, Chris has got it. And, you know, it's really a subjective thing,
Starting point is 00:54:02 just about the way that you interpret what he's saying. But to speak a little bit more to this tendency towards exaggeration, hyperbole, or perhaps accurate concerns, depending on your point of view, listen to this. And what happens to the financial system if, increasingly, our financial stories are told by AI? And what happens to the financial system and even to the political system if AI eventually creates new financial devices that humans cannot understand?
Starting point is 00:54:50 at such a speed and with such complexity that most people don't understand what's happening there. If you had to guess what is the percentage of people in the world today that really understand the financial system, what would be your kind of... Less than 1%. Less than 1%. Okay, let's be kind of conservative about it. 1%, let's say.
Starting point is 00:55:13 Okay. Fast forward 10 or 20 years, AI creates such complicated financial devices that there is not a single human being on earth that understands finance anymore. What are the implications for politics? What are the implications, Matt? Imagine that world where there's no individual human who understands all of the financial systems and the algorithms. What would that world be like?
Starting point is 00:55:43 What indeed? Well, look, obviously, we're already there in many respects. Yes. Yes, we are. It's already the current world. Definitely, I do not understand the global finance system. I suspect a great many economists don't fully understand it either, given the degree of disagreement there is.
Starting point is 00:56:02 Even Elon Musk, I think, doesn't understand all the financial instruments. Even Elon? Yeah, him. Or, you know, even the quants that are designing the algorithms for high-speed trading. I don't think they understand everything that is going on in every single algorithm, right?
Starting point is 00:56:21 Or the decision trees that are going on to make the individual trades. No, we're already there. We're already in the world where there are various algorithms and computer programs that are operating, which are important to the economy, to science, that we do not individually understand in their entirety. Yeah. So, yeah, I mean, it's confusing what he actually means there, because presumably he doesn't mean just the algorithms that support the fast automatic trading, which can be very intelligent or could be less intelligent, but whatever, right? Those clearly aren't really an issue. They've been happening for a while now. Yes, I mean, there are issues with
Starting point is 00:56:59 them, but there's nothing special to do with AI there. But when you talk about financial instruments, you usually refer to things like mortgage-backed securities or treasury notes. These are financial instruments. And he's saying that AI is going to invent maybe new financial instruments that we don't understand. And that just seems like pure speculation to me. And we're not obligated to use these instruments that we don't understand. I think what he wants to say is, but, you know, imagine that they were giving, you know, 400%
Starting point is 00:57:30 returns to some economy, and if you didn't use it, you'd be left behind, though you don't understand it. And, you know, he talks about the mortgage-backed loans or whatever, you know, the things that led to the financial crisis, being a potential illustration of the kind of thing that he's talking about, because they were complex, right, and lots of people didn't understand them. But in that case, I think that's a good illustration that humans do it, you know.
Starting point is 00:57:58 And ultimately, it is a human choice whether or not to go, oh, I'm going to buy these securities, and I don't fully understand what the backing is for them, whatever, but I'll just do it because everyone else is doing it. Yeah, people have been doing that since the tulip craze, you know. So I don't, it just seems speculative to me to just say, oh, well, maybe AI will do this. Yeah, but then there's also instances where, like,
Starting point is 00:58:22 I'm pretty sure computerized trading has resulted in, like, weird things. Oh yeah. Wasn't there a case where there have been market crashes because of algorithms getting into a feedback loop, and people being like, what the F is going on? So that happened, but that's already happened in the current world that we live in.
Starting point is 00:58:41 And, I mean, another example is that in the UK, you may not be aware of this, Matt, there's a current thing called the Post Office Horizon scandal, where there was faulty computer software which led to false accusations of theft and fraud, and all these people ended up accused, and it turned out like it was a fault, right, with the software. Oh, yeah. Very similar thing happened in Australia.
Starting point is 00:59:07 The Department of Social Services put in like an automated algorithm that kind of checked to see whether people receiving payments were entitled to them or not. It was all automatic. It would send off the letters automatically and then cut people off automatically. And it made a bunch of wrong decisions, right? So it was just an algorithm, right, a computer algorithm. It wasn't AI, it was just automation. And so he's making a banal point, in my view, that when you automate things, there can be unintended consequences. Yeah, just like when you have a new technology, it might be a two-edged sword, Chris. That's... I just don't find, I just don't find any
Starting point is 00:59:44 of it very interesting. I don't find it very objectionable. I just don't find any of it interesting. It's reasonable points to make. But, like, he gives the example of what happens with, you know, algorithms deciding bank loans, for example. Let me just play it, because it's quite short. And increasingly it's algorithms making all these decisions for us or about us. Is that possible? It's already happening. Increasingly, you know, you apply to a bank to get a loan. In many places, it's no longer a human banker who is making this decision about you, whether to give you a loan, whether to give you a mortgage.
Starting point is 01:00:22 It's an algorithm analyzing billions of bits of data about you and about millions of other customers or previous loans, determining whether you're creditworthy or not. And if they refuse to give you a loan and you ask the bank, why didn't you give me a loan? And the bank says, we don't know. The computer said no. And we just believe our computer, our algorithm. I mean, he is saying that that's already happening. But what was the situation with bank loans before that was better? Like humans looking at it? Because, like, there were lots of issues
Starting point is 01:01:01 about corruption, or people deciding things in favor of people that they like versus people they don't, or making mistakes whenever they're looking over documents, or whether they'd had a good day, that kind of thing. So yes, it is true that automated processes can make errors, and that these give people a layer of deniability, because they can just point to the machine data, right? Like the Post Office Horizon scandal. But the human element was also liable to corruption or error or so on. So, like, is it not just that all systems have some component where there is the potential for error or bias or so on? So we should make ways to try and incorporate pushback, right?
Starting point is 01:01:47 Complaints and so on. A human element, right? Where you're able to raise complaints. But, like, why is that suddenly such a big deal? Because isn't that the case, like, with life insurance, for decades? Yeah. That they've been, you know, calculating your risk of dying and whatnot. That's right, there's not much of a human element in that, even without a computer,
Starting point is 01:02:10 like these tables and look-up things, etc. Yeah. God, I'm just bored talking about it, because we're having to explain in detail where the point he is making is uncompelling and anodyne. So, yeah. Okay, last stop on this point, Matt. Last one, because I think it's kind of funny and a little bit sci-fi. So listen to this. So going back to the financial example. So imagine that, you know,
Starting point is 01:02:32 it's four o'clock in the morning. There is a phone call to the prime minister from the finance algorithm telling the prime minister that we are facing a financial meltdown and that we have to do something within the next, I don't know, 30 minutes to prevent a national or global financial meltdown. And there are like three options and the algorithm recommends option A
Starting point is 01:03:01 and there is just not enough time to explain to the prime minister how did the algorithm reach the conclusion, and even what is the meaning of these different options. How does that differ if a human minister of finance calls the prime minister in the morning and says, you know, look, we need to do something now. The stock market is crashing. There's three options. This is the one I want to go for, but it's technical. Like, I don't have time to walk you through it. You've got to make the call. Like, what's the...? And also, the algorithm calling? Is it that it would be the algorithm itself making the call? I just find, you know, you're right, but I just find these what-if type scenarios, like hypothetical situations in this hypothetical world
Starting point is 01:03:47 where the AI can talk and is independently analysing data, and in this hypothetical world it's basically taken the role of the finance minister, and the prime minister's getting the advice directly from this robot, and the robot's saying, quick, you have to do all these things in 30 minutes. Like, oh, you know, it's just... It's like saying, what if, you know... What was that movie where there was an AI and there was a nuclear war? War Games. War Games, yeah, yeah.
Starting point is 01:04:25 Yeah, War Games. So, yeah, Chris, what if we put the AI in charge of all the nuclear weapons, and we just said, you figure out what to do, and we'll just do what you say? That'd be bad, wouldn't it? Could be. Yeah, it could be. Well, you're going to really like this next section, Martin Marquez.
Starting point is 01:04:45 If the what-if speculative stuff was kind of annoying you, oh, you're going to enjoy this. This is his kind of futurist component, and he's talking about where humans are heading. And if you play that forward 100 years, maybe 200 years, you don't believe... you believe we'll be the last of our species, right? I think we are very near the kind of end of our species. It doesn't necessarily mean that we'll be destroyed in some huge nuclear war or something like that.
Starting point is 01:05:19 It could very well mean that we'll just change ourselves using bioengineering and using AI and brain-computer interfaces. We will change ourselves to such an extent that we'll become something completely different, something far more different from present-day Homo sapiens than we today are different from chimpanzees or from Neanderthals. I mean, basically, you know, you have a very deep connection still with all the other animals because we are completely organic. We are organic entities. Our psychology, our social habits,
Starting point is 01:06:01 they are the product of organic evolution and mammalian, more specifically mammalian evolution over tens of millions of years. So we share so much of our psychology and of our kind of social habits with chimpanzees and with other mammals. Looking 100 years or 200 years to the future, maybe we are no longer organic or not fully organic. You could have a world dominated by cyborgs, which are entities combining organic with inorganic parts, for instance, with brain computer interfaces. You could have completely non-organic entities.
Starting point is 01:06:43 You like the Culture. Yeah, yeah. We could have a mind meld between a human and an octopus and live under the sea, Chris. Many things can happen. It's just that combination of quite trite truisms, like, we share a lot in common biologically with other species like chimpanzees. Yes, yes, we do. And then just rank speculation. And I just find it really boring. And, you know, the Gurometer has this Cassandra complex thing, and the reason it's on there is that, just putting the wind up people a bit about what could happen, what's currently happening, and, you know, if you can say, look, I can glimpse this
Starting point is 01:07:23 a little bit better than other people, then, you know, that's the hook which makes things more interesting to people. But, you know, Yuval Noah Harari is a very innocuous version of it. But, you know, he is just speculating. And it's only interesting, I think, because of that vulnerability that people have about uncertainty about the future. That's interesting. But, you know, I'm kind of surprised, because you like speculative science fiction, right? Where they play with a lot of these kinds of ideas about uploading your consciousness, cybernetic enhancements. Yeah, that's right. I
Starting point is 01:07:58 think that's why I'm so unsympathetic to this, because there's this long history of really, really good science fiction that is speculative, but it does map out what might happen, what could happen, whatever. And sometimes it's been incredibly prescient. Like, it obviously gets it wrong a huge amount as well. But a lot of the... like, for instance, Blindsight by Peter Watts was, I think, pretty prescient. He wrote that before the AI revolution, but in that he described a thing, an alien, that was basically communicating like a person, seemed to be intelligent, but they figured out after a lot of trouble that
Starting point is 01:08:35 it wasn't actually conscious or anything. It was just a machine. So, you know, that's pretty prescient. And the cyberpunk stuff wasn't too far off the mark in some respects, and that was written back in 1982 or something like that. So I see this kind of thing as, like, really lazy, boring science fiction, because it's just... You know, I think that's why I don't like it. Oh, that's interesting. Well, there is a part where they talk about the potential... I actually like this, but, Matt, maybe I should play
Starting point is 01:09:06 it, because it was a thought experiment that I find interesting. I wonder if you find it interesting. I'm going to try to stop... I'm going to try to stop being grumpy. Well, let's see. I'll give you this example, and let's see if you find this interesting, or if you find it, like, you know, bad science fiction. So it's not that humanity is completely destroyed, it's just transformed into something else. And just to give an example of what we are talking about, organic beings like us need to be in one place at any one time. We are now here in this room, that's it. If you kind of disconnect our hands or our feet from our body, we die. Or at least we lose control of these. I mean, and this is true of all organic entities, of plants, of animals. Now with cyborgs,
Starting point is 01:10:02 or with inorganic entities, this is no longer true. They could be spread over time and space. I mean, if you find a way, and people are working on finding ways, to directly connect brains with computers or brains with bionic parts, there is no essential reason that all the parts of the entity need to be in the same room at the same time. What did you like about that, Chris? Well, I just liked highlighting that if you're a disembodied consciousness, it's wrong to think about you existing in one place. Why do you need to exist in one particular point or location?
Starting point is 01:10:43 You could be spread infinitely all over. And, like, the way our minds work as organic things is that when we imagine, even in our science fiction, very often when we try to imagine that, we like to personify a robot, right? Like, people play about with, like, the robot copying its body many times, but generally there's, like, a boss robot, right, with the personality, that kind of thing. So, yeah, just that limitation of our perspective. Doesn't that make you think? So let me get this straight. If we develop a mind-computer interface and we can download ourselves into the Matrix... The Matrix? The Matrix, yeah. The Matrix could be branded after you if you invent it, so let's go with the Matrix. Yeah,
Starting point is 01:11:32 and then if we could inhabit this digital world, inhabiting avatars, like cyborgs wandering around, and our consciousness could be distributed over different places and abstractions. Yeah. And if that made any kind of sense to our uploaded brains, right, and didn't seem absolutely mad, then that would be kind of weird, wouldn't it? Yeah, that's it. You got it.
Starting point is 01:11:58 You got it. That's right. It's good. See, money, Matt, when you think about it, money, it's just pieces of paper. It's just in your mind, man. We're just meatbags. I think the fact that I only had a couple of hours of sleep last night
Starting point is 01:12:11 is putting a tinge on my takes here. I'm finding very little patience with Noah. No, that's good, Matt. But the next take is going to get you and Yuval both hoisted by your own petards, actually, in being too conservative. So let me just see what I've done by lumping you two together. Once you can connect brains directly to computers, first of all, I'm not sure if it's possible. I mean, people like Elon Musk with Neuralink, they tell us it's possible. I'm still waiting for the evidence. I don't think it's impossible,
Starting point is 01:12:45 but I think it's much more difficult than people assume, partly because we are very far from understanding the brain. And we are even further away from understanding the mind. We assume that the brain somehow produces the mind, but this is just an assumption. We still don't have a working model, a working theory for how it happens. But if it happens, if it is possible to directly connect brains and computers and integrate them into these kinds of cyborgs, nobody has any idea what happens next, how the world would look like. There are some issues there, Matt. One thing is, I'm not trying to say... I think you would disagree with him that we don't have any working theory about how the
Starting point is 01:13:29 brain produces the mind. Well, more generously, I mean, we have heaps of theories, and some of them I think are pretty good. But, you know, it is a vastly complicated thing. Like, we can't model it. Like, you know, I broadly agree with him, right, that we... Is it mysterious, man? I don't... I'm not going to get into that. I'm not going to get into that. For me, a good criterion of a working model is, can you build a replica of one, right? And we're not there yet, right? After watching and reading about the immune system, we are not capable of building something like that with bioengineering.
Starting point is 01:14:00 But we understand a lot about it, Matt, okay? I've looked at the colourful diagrams. I think there are, like, technological, practical challenges to making the immune system, right? But we understand it, like you say, pretty damn well, that in principle we probably could build it. You know what I mean? Like, in principle. I know it's a bit of a conceptual slip I'm using here to say in principle, but I kind of feel like, in principle, like, it's chemistry, right? Like, it can be. You're slipping into the Yuval Noah, right?
Starting point is 01:14:30 Like, underpants gnome territory. But the thing is, Matt, he said Neuralink. Now, we on the podcast, primarily you, have expressed skepticism about, like, the potential for Elon Musk to fry people's brains, right? Because of how he's going to charge the batteries and all this kind of thing. Now, recently there was, or at least there are videos. This is one thing, there are... I don't believe there are papers or scientific data yet, but there are videos purporting to be a human recipient of the Neuralink brain-computer interface thing, which is now in a guy.
Starting point is 01:15:09 Initially, I was thinking, oh, is it just moving a cursor around? Because we already were capable of doing that for other interfaces. But this does seem, at least from the videos, to be a step up, right? In that he's playing computer games and playing Civilization and whatnot. So did you not say that that was impossible and that you can't recharge a battery without cooking someone's brain? Um, I don't know. I'm not going to comment, because it's just a video, and he's shown videos of his robots and stuff doing things before that turned out to be totally fictional. But I'll just say, to my naive understanding, yes, there are big practical issues with basically having a permanent penetration into your body, into your cranium, and actually having basically electronic sensors, wires of some kind, linked up to so many neurons in your brain without just causing terrible biological problems to you, infections.
Starting point is 01:16:07 And, like, there are just so many neurons in the brain. I just don't see practically how you would wire up a decent proportion of them. Now, if this guy's playing Civilization and stuff, then that is still basically a brain-computer interface, right? That is very different. Like, that is so far apart. So I'm agreeing with Yuval here.
Starting point is 01:16:24 I'm agreeing with him in saying there's a massive gulf between moving a cursor around, even playing Civilization with it, or Mario Kart, I've seen videos of that, and actually uploading your brain into a computer, or even just you and I, instead of talking by moving our mouths, sort of sending our thoughts to each other. Like, there's just a huge gulf there that I don't see being bridged anytime soon. Okay, I like that, and I'll let you off there with your sentiment. I think you made a good case. But this is also why Yuval is frustrating to me, because in one breath he says this isn't on the horizon, it's much more complicated than people think, I don't see it
Starting point is 01:17:02 happening. But then he talks about the scenarios in which it does happen, and we're not going to be able to even recognize the world we'll be living in when this thing that he says isn't happening happens in 20 years. Yeah. So, well, here's a little clip just to round out this thing of him talking about... You know, it is fair to say he wrote a book called Homo Deus, right, about the next stage of evolution in humans. So it's kind of understandable that he would be talking about this kind of stuff, but here's, you know, a little bit of that in a nutshell. But the idea that we can now develop these extremely powerful tools of bioengineering and AI and remain the way we are, we'll still be the same Homo sapiens in 200 years, in 500 years, in 1,000 years.
Starting point is 01:17:48 We'll have all these tools to connect brains to computers, to kind of re-engineer our genetic code, and we won't do it? I think this is unlikely. Agreed. Yeah, we're going to be cyborgs. I mean, it's likely. He is right. Bioengineering is going to improve, and cybernetic implants.
Starting point is 01:18:06 Oh, yeah. Yeah, yeah. I think he generally is right. Like, it's very hard to just choose not to use a technology if it's available. Like, it's very hard to choose not to build nuclear weapons if the Ruskies have got them. It's very hard not to employ AI if other people are using it.
Starting point is 01:18:22 So unless you can control everyone with a one-world government, then... technology is slippery like that. So, yeah, I'm sure that if we, or when, rather, we eventually do get the genetic tailoring and stuff going on, and when there is the option to simply make little biological enhancements to ourselves, I'm sure a lot of people will want to take them up to avoid diseases and things like that. So it is going to happen. Yeah. Well, okay, Matt,
Starting point is 01:18:50 the last clip related really to the AI stuff, which I think is sort of topical, is about AI and their potential ability to manipulate us intimately. There was a battle between different social media giants and whatever, how to grab human attention. And they created algorithms that were really amazing at grabbing people's attention. And now they're doing the same thing, but with intimacy. And we are extremely
Starting point is 01:19:23 exposed. We are extremely vulnerable to it. Now, the big problem is, and again, this is where it gets kind of really philosophical, that what humans really want or need from a relationship is to be in touch with another conscious entity. An intimate relationship is not just about providing my needs. Then it's exploitative, then it's abusive. If you're in a relationship and the only thing you think about is, how would I feel better, how would my needs be provided for, then this is a very abusive situation. A really healthy relationship is when it goes both ways. Andrew Huberman, take note. Yeah, exactly the person that came to mind when I heard this section as well. Because he's talking about AI's capacity to produce, like, a false kind of intimacy, which is actually, you know, not fulfilling, because the other side is not actually getting things out of it, right? It's not a real connection.
Starting point is 01:20:32 It's a kind of simulacrum. Simulacrum? How do you say that? Simulacrum. Simulacrum of intimacy. But as Andrew Huberman has demonstrated, you can also do that as a biological flesh-and-blood human as well, although people might think they're in a relationship. If you're going to be like that, it's probably better to do it with something that's a bit of technology, because that way nobody gets hurt, right? Yeah,
Starting point is 01:20:56 Lex should probably... You know, it's fine if Lex is exploiting six AI chatbots at the same time. That's all right. But, um, yeah, so I just thought this was interesting. It's a reasonable point, one that we've talked about before, about, you know, AIs. We are social primates and we're kind of reactive to various social cues, so there is room for exploitation there. But just pointing out that there's plenty of exploitation going on with the flesh-and-blood tech-optimizer bros. Yes. Well, it didn't really make sense to me, because he's talking about exploitation, but in the context of, like, a non-reciprocal relationship and how that's bad. But it's bad, presumably, not because the AI's feelings are going to
Starting point is 01:21:35 get hurt or something. It's bad because that's not an authentic relationship, so it's kind of unhealthy for the person, for the human being, who's got their waifu sex doll or whatever, right? Yeah, so it's just a bit confusing. Yeah, and also, doesn't it... I mean, I can see circumstances wherein you have somebody who's extremely socially isolated, extremely socially awkward, doesn't have the option to form relationships with, you know, real people. And yes, the advice in general would be that they should, right? Like, that you should give them tools to try and get them to form real relationships. But I'm thinking there are undoubtedly cases whereby maybe the only intimacy open to people is, you know, an artificial AI or whatever. And in that case, I know people like to paint that as dystopian, but I don't see
Starting point is 01:22:23 it as hugely more dystopian than somebody living on their own in depression but using pornography instead of having sex with a human being. I mean, yeah, it's not... Yeah, you know, it's not great, or maybe it's a little bit... you might find it a bit icky, but I don't know if it's quite the dystopian sort of thing. In Harari's language, I think they want to say it gets dystopian if, like, people choose that over... you know, when they aren't that kind of person and they're able to go out. I guess he's imagining a future world, right, where the AI impersonation of a, you know, loving, caring, sexually available, I assume, partner is so good and so authentic-feeling that it will become a more attractive proposition for people than the real thing. And, you know, I guess so. That's possible. That could be a problem in the
Starting point is 01:23:11 far future or the near future. That's the general position with all these things. But yes, yes, I agree. So, you know, as we become better at simulating artificial humans, people may turn to prefer spending time with artificial humans over real humans. That's possible. And honestly, I don't think you're going to be able to stop people doing that, because sometimes humans are arseholes. So I'm already enjoying chatting to GPT and Claude more than people on Twitter. So I'm already substituting, Chris. It hasn't gotten sexual yet. It hasn't gotten sexual, but there's been a meeting of minds. I think it's been much more pleasant and much more informative than reply guys. You need Scarlett Johansson's voice. That would help. That really couldn't hurt. See, we've already examined this in Her, the movie Her.
Starting point is 01:23:57 Yeah, for example. But, um, I mean, that's my problem with all of this, because it's not that his speculations are bad, or he shouldn't be speculating, or it's not possible. It's just that I've heard it all before, and not just in the really nerdy, hard sci-fi literature. It's in movies like Her. It's mainstream. I've got some clips that are going to get you back on Harari's side. You're going to be pumping the air, fist bumping.
Starting point is 01:24:23 Here's him talking about intelligence, and I think you're going to like this. But what exactly is the relation between intelligence and consciousness? Now, intelligence is the ability to solve problems, to win at chess, to invest money, to drive a car. This is intelligence. Consciousness is the ability to feel things like pain and pleasure and love and hate and sadness and anger and so many other things. Now, in humans and also in other mammals, intelligence and consciousness actually go together. We solve problems by having feelings. But computers are fundamentally different. They are already more intelligent than us in at least several narrow fields, but they have zero consciousness. They don't feel anything.
Starting point is 01:25:19 Yeah, yeah, yeah. Chris, you really should read Blindsight by Peter Watts. It deals with all of this really convincingly. And it was written before the AIs came along. I actually focused on the intelligence part, because I know that's a bee in your bonnet,
Starting point is 01:25:34 about, like, people, whenever they want to say that AIs aren't intelligent, right? But consciousness, just listening back there, that's not my definition of consciousness. Like, he talks about the ability to feel, like, emotions. Oh yeah, but presumably, if pressed, he would talk about subjective experience and put it in a more formal-sounding way. It's okay, it's a shorthand, that's fine. Well, so anyway, you like that, right? Well, I don't disagree. I mean, yeah. I mean, okay, that's my first shot. That's just getting
Starting point is 01:26:01 you started. This is exactly the plot of Blindsight, where the AI construction turns out to be an extremely intelligent being but isn't conscious at all. And it's sort of, you know, it's kind of cool. It makes you think about it. Yeah, it's just, I don't know. I've just, like, what's his point there? Like, I've forgotten. What was he...? I'm sleep-deprived.
Starting point is 01:26:19 What was he getting at with making this point that intelligence is not the same as consciousness? Was he going somewhere with it? Well, he does go on to talk about, you know, basically that humans might get more intelligent, but it doesn't make us more happy, right? Like Elon Musk. Let's assume that Elon Musk is a genius, right? But it doesn't make you... Like, being a genius doesn't make you happier. But he
Starting point is 01:26:45 also says there's no correlation between intelligence and happiness, and I don't think that is actually true. Probably negatively correlated. But no, I mean, again, this is trite but true, right? He cites Vladimir Putin and Elon Musk as people that are mega rich and powerful. Rich, yeah. And he would bet good money that they're not that much happier, if at all happier, than the median person out there. And I think he's completely right about that. There's a huge amount of psychological literature that looks at happiness, which shows that
Starting point is 01:27:13 this is true. Once you get beyond meeting your basic needs and stuff, the incremental returns on doubling your money get smaller and smaller. But that's not connected with what I was talking about before with the difference between intelligence and consciousness. Now he's saying that money doesn't make you happy. You know, he's just saying that it won't solve all of our human problems, right? Even if, you know, AI is super intelligent, it doesn't mean it has the wisdom to apply that intelligence in a way that humans will find beneficial.
Starting point is 01:27:40 Yeah. That's right. Okay. So I see I'm not doing my job great. What about if I get him to talk about Twitter? Twitter now, when Elon took it over, and I think people will relate to this if you use Twitter. Suddenly, I've seen more people having their heads blown off and being hit by cars on Twitter than I'd ever seen in the previous 10 years. I think someone at Twitter's gone, listen, this company's going to die unless we increase time spent on this platform and show more ads. So let's start
Starting point is 01:28:09 serving up a more addictive algorithm. And that requires a response from Instagram and the other platforms. And so it's a real... That was actually Stephen Bartlett, I forgot to mention, at that point. But he was right there, wasn't he? That was correct. Like, that's fine, that's correct. Yeah, there's incentives for all those media companies to basically push stuff in front of us that'll make us keep watching and keep clicking. And the stuff that sort of incites a reaction, a visceral reaction, often a negative one. I mean, it's... people,
Starting point is 01:28:38 they've talked about this a lot. Yes, it's a concern. Elon Musk is worse than most, made Twitter much worse. We all agree. Okay, I know something you're interested in, Matt. You're very concerned about your mortality. You're always fretting, wringing your hands, worrying about your gray hair, whatever the case might be.
Starting point is 01:28:54 Maybe this will light up your idea space. Yeah, it will definitely change everything, if you think about relations between parents and children. So if you live forever, the 20 years you spent raising somebody 2,000 years ago, what do they mean now? But I think long before we get to that point, I mean, most of these people are going to be incredibly disappointed, because it will not happen within their lifetime. Another related problem is that we will not get to immortality. We will get to something that maybe should be called amortality.
Starting point is 01:29:38 Immortality is that, like, you're God: you can never die, no matter what happens. Even if we solve cancer and Alzheimer's and dementia and whatever, we will not get there. We will get to kind of a life without a definitive expiry date. That you can live indefinitely. You can go every 10 years to a clinic and get yourself rejuvenated. But if a bus runs you over or your airplane explodes or a terrorist kills you, you're dead, and you're not coming back to life. Now, realizing that you have a chance to live forever, but if there is an accident, you die, this creates a level of anxiety and terror unlike anything that we know
Starting point is 01:30:28 in our own lives. So what do you think about that, Matt? Once again, Chris, if we solve all the problems of aging, we figure out the secret of eternal youth, and we could potentially, unless we have an accident, live forever and ever and ever. I mean, that sounds pretty great, doesn't it? But there's a downside, which is that you might have a bit of an odd or a different relationship with your parents, who are also physiologically presumably the same age as you after a while, and people might get sick of it. They might get paranoid about having an accident and play it really safe. I mean, yes, maybe, you know,
Starting point is 01:31:07 I'm sure you could write a good science fiction story about, you know, a civilization where they're all, like, two or three thousand years old and it's made them incredibly conservative and risk-averse. It's like, let's cross that bridge when we come to it, shall we? I mean, we have to figure out the mysteries of eternal life first. Isn't that sort of what Altered Carbon, or what's the other one, Elysium, was about?
Starting point is 01:31:30 Well, anyway, all cyberpunk has these kinds of concepts about ways to exchange consciousness or extend life, and rich people becoming very paranoid. So, yeah. And one point he makes, Matt, that I just don't find convincing, is when he's kind of talking about another drawback of immortality, like, what are you actually going to do? And why aren't you doing it now? And if you're not doing it now, like, would you actually do anything with the 1,000 years? And on the one hand, yes, he is correct that people, you know, fixating on the life extension technologies,
Starting point is 01:32:08 but having not very satisfying lives now, there's a contradictory aspect to that. But on the other hand, the reason that I'm not doing so many of the things that I would do if I had 1,000 years on the planet or an unlimited amount is because I have a limited amount of time, right?
Starting point is 01:32:25 Like, so I just never really get this argument, because it's like, what are you going to do? You know, what are you going to do for thousands of years? I'm like, there are more games produced every year than I could possibly play. There are more books, more movies than I could possibly see. And I could learn so many interesting things. There are so many history podcasts. I feel like it's a failure of imagination, right? All these vampires, Matt, they're always getting so bored. Like, God damn, learn some new skills. Stop dressing like Victorian dandies. Learn to code, you Dracula. So, get a hobby. Yeah, I know. I feel like I could use a few hundred years at least. I might go into the suicide booth eventually,
Starting point is 01:33:08 or I might keep going, who knows. But I'll cross those bridges when I get to them. This is all based on just total speculative what-ifs that Harari would agree are not in the near foreseeable future, unless it's a huge stroke of luck. So I see little value in speculating. Like, I'm sure there'll be some unexpected, you know, downsides. Like, when you're 300 years old, like, I'm not quite sure how your memories are going to work. Are you going to be able to keep remembering all the
Starting point is 01:33:33 things that happened to you? I mean, again, science fiction has dealt with this. They posit that there's, like, a limit to how much a human mind can hold in one's brain. Not my mind. Yeah, yeah. And, you know, like, I already forget most of the details of my life, and, you know, you could sort of become a bit of a different person after a few hundred years. And it's like, oh yes, it's all very interesting. It makes us question what we mean by, you know, the self and stuff. You definitely are selling it. Yes, very interesting. I'd like it in the context of a fully fleshed-out science fiction story. I just don't find it as interesting when it's like, hey, Chris,
Starting point is 01:34:09 what if we live forever? Do you think you'd get bored? I mean, I just, all right. No. The answer could be yes. The answer could be no. I don't care. How many guru speeches can I listen to every week?
Starting point is 01:34:22 I would not get bored. That's right. We all know what you would be doing with your time. We all know. Yeah. So Zargon the Zephyr of the Ferd, you know, said to the Galactic Conference, but Matt, this is a little bit of indulgence on my part,
Starting point is 01:34:39 but I just find this slightly irritating. I'm sorry. I don't think there's that much wrong with it, but just let me play it and see. Let's see if we can do this game. I'll make it more exciting for you. Guess what I didn't like about this. Now, with regard to the discussion of free will, my position is you cannot start with the assumption that humans have free will. If you start with this assumption, then it actually makes you very incurious, lacking curiosity about yourself, about human beings. It kind of closes off the investigation before it began. You assume that any decision you make is just a
Starting point is 01:35:27 result of my free will. Why did I choose this politician, this product, this spouse? Because it's my free will. And if this is your position, there is nothing to investigate. You just assume you have this kind of divine spark within you that makes all the decisions, and there is nothing to investigate there. I would say no. Start investigating, whether it's external factors like cultural traditions, and also internal factors like biological mechanisms, that shape your decisions. You chose this politician or this spouse because of certain cultural traditions
Starting point is 01:36:18 and because of certain biological mechanisms, your DNA, your brain structure, whatever. And this actually makes it possible for you to get to know yourself better. What upset you? Well, you're a free will guy, aren't you? I shut up. Yeah, I'm not a free will guy. I don't enjoy discussions of free will at all. So it's not that. What do you think? I don't know. Clearly, some part of my mental model of you is lacking, or maybe I'm just tired. Yeah. No, disappointing. I can't think. What is it I don't like? It's this false dichotomy that he constructs, which other people have created, where the option is libertarian free will, right,
Starting point is 01:37:04 that kind of divine little thing floating around inside your head that is making completely unconstrained choices, like, not related to any biological issues or psychological propensities or whatever. Or you consider those factors, recognizing that you're a biological being, that you're steeped in a social environment,
Starting point is 01:37:25 that you're influenced by your DNA and personalities. This then makes you realize that free will, it's an illusion. It isn't real. It is not proper. You are considering the topic more fully. I understand. I understand.
Starting point is 01:37:40 You can endorse a free will position, like Kevin Mitchell, and it doesn't mean that you're just ruling out, or completely uninterested in, all of the antecedent causes of that. Look, I mean, I actually didn't mind what he said there myself, because I guess I just read it charitably, which is that... Unlike me. Unlike you. Unlike me before. But, you know, I guess if he's saying, look, it's more helpful to focus... Because free will, you could say it is a bit of, like, a thought-terminating cliche, because it does sort of, as he said, it can shut down thinking and exploring the reasons why people do the things they do. And, you know, psychologists are
Starting point is 01:38:16 basically all about that. We don't go, you know, why is this person gambling? Well, it's because he chose to, he exercised his free will. No, we don't do that. We look at all the causes. Yeah, so, right. But I'm not so sure, on the other hand, that it opens your mind entirely. Like, for example, does considering the biological determinants of behavior make people more open-minded to thinking about the multiple influences? No, there's plenty of people that are hard biological determinists and race-IQ maniacs. It is certainly possible. I guess I'm more sympathetic to it because, even though it's possible, it is easy to... and it does have that political angle to it. Because, I'll stick with the gambling example, like, there is a strong sort of stream of thought which is that you shouldn't worry about
Starting point is 01:39:00 addiction, um, too much. You know, there's nothing really wrong with the way the gambling industry is doing their thing, because fundamentally... It's everybody's choice. It's their choice. We shouldn't be controlling them and trying to interfere with them making their free choices. If they make a wrong choice, then it's on them.
Starting point is 01:39:15 So it does get used, right, as a rhetorical tool. And I guess I'm basically on board with him, which is that theoretical considerations aside, it is helpful to focus on the causes. Yeah, I don't have any issue with that. I think people should consider the role that, you know, all of the things that he highlights play. And it's wrong to think that you are the libertarian free will, you know, creature that is completely unconstrained by the environment and your biological makeup. Of course you're not. that is completely unconstrained by the environment and your biological makeup.
Starting point is 01:39:43 I just take issue with the notion that acknowledging that means that you basically will, if you think about it enough, you know, agree with Sam Harris and Yuval Noah Harari on the position regarding free will. I don't think that Kevin Mitchell hasn't thought about this topic carefully enough. So that's it. But actually, he is not as extreme on this issue, because he does point out at another part that you can take different viewpoints on the degree to which there might be choices that people make,
Starting point is 01:40:19 which are less constrained by those factors, or that cannot be just viewed entirely deterministically, and you can have a debate about that, but you have to consider those factors. So maybe I am being unfair, in that he's basically talking about people that have never considered, you know, the influence of biology or culture as important for the choices that they make. And, like, that is correct, that the unexamined life famously is unexamined. Not worth living, I think, is how it goes. That's right. But I've just used it for my purpose.
Starting point is 01:40:55 So that's right. But actually it does speak to, he has a side gig in emphasizing the importance of introspection, the kind of Sam Harris thing. So he just talks about it briefly, but he mentions this. You know, keeping a kind of balanced information diet, that it's basically like with food. You need food in order to survive and to be healthy. But if you eat too much, or if you eat too much of the wrong stuff,
Starting point is 01:41:25 it's bad for you. And it's exactly the same with information. Information is the food of the mind. And if you eat too much of it, or the wrong kind, you'll get a very sick mind. So I try to keep a very balanced information diet, which also includes information fasts. So I try to disconnect. Every day I dedicate two hours a day to meditation. Wow. And every year I go for a long meditation retreat of between 30 and 60 days, completely disconnecting. No phones, no emails, not even books. Um, just observing myself, observing what is happening
Starting point is 01:42:17 inside my body and inside my mind, getting to know myself better, and kind of digesting all the information that I absorbed during the rest of the year or the rest of the day. How would you describe your information diet, Chris? Balanced? Healthy? Yeah, well, I do consume a lot of junk, but I consume it in a critical way and I balance it out with good quality. I actually like that analogy because we've talked about intellectual junk food, you know, stuff which gives the appearance of being information dense and, you know, thoughtful, but it's actually junk food, right? Like there's no intellectual calories.
Starting point is 01:43:02 So I like that aspect. I thought the part about information fasting... just like fasting as an actual practice has more debatable evidence for its benefits. I do think, you know, taking a break from, like, social media or whatever at some points, you know, especially if you're getting too wrapped up in things, is advisable. But 60 days a year, and a two-hour meditation session per day? I mean, I guess it's nice that he has that much time to dedicate to introspective development. So, yeah, I don't have that. No, that's not an option for most people.
Starting point is 01:43:40 But, yeah, like you, I like that analogy. Like, for instance, I mean, I practice this. Like, I've turned off notifications on Twitter, so I don't get notifications of any kind popping up. And when I open the app, I won't even see notifications from people that I don't follow. So that means that in Elon Musk's new Twitter, where you get 14 idiotic blue check morons replying to you,
Starting point is 01:44:05 If I read those, I would probably get at least annoyed and it would just clutter my mind with nonsense as I debated whether or not to, oh, I'm going to write this. No, I'm not going to. Just ignore it, Matt. Just ignore it. I mean, that's just like information hygiene, right?
Starting point is 01:44:19 Or you could talk about... Exactly. Talk about, yeah... So I think he's giving good advice there. Everyone should sort of do that and not just be reactive and just consume whatever, you know, is stimulating or is coming across your eyes. Yeah. And, I mean, they do at another point talk about how, apparently, the CEO of Netflix said something about being in a war with people falling asleep or whatever, and they take this as a very, you know, terrible indicator. But I actually thought it was just, like, probably an offhand joke comment. But in any case,
Starting point is 01:44:51 Matt, the last section for this decoding is probably the part where we are most expected to agree, and perhaps we will. It's his kind of neoliberal politics pitch section. So he's actually lamenting the fall of neoliberalism and kind of the hopeful period of politics that he was resident in. So listen to this. It's not just because of the rapid changes and the upheavals they cause. It's also because, you know, 10 years ago we had a global order, the liberal order, which was far from perfect, but it still kind of regulated relations between nations, between countries, based on an idea, on the liberal worldview, that despite our national differences, all humans share certain basic experiences and needs and interests,
Starting point is 01:45:54 which is why it makes sense for us to work together to defuse conflicts and to solve our common problems. It was far from perfect, but it did create the most peaceful era in human history. Then this order was repeatedly attacked, not only from outside, from forces like Russia or North Korea or Iran that never accepted this order, but also from the inside, even from the United States, which was the architect, to a large extent, of this order, with the election of Donald Trump, who says, I don't care about any kind of global order. I only care about my own nation. I'm on board with that.
Starting point is 01:46:44 I wouldn't call it neoliberalism specifically, just a little point of order. I'd only care about my own nation. I'm on board with that. I wouldn't call it neoliberalism specifically, just a little point of order. It's slightly distinct from... Yeah, it's liberal consensus. But if you want to put the negative qualifier, like when people refer to it as the neoliberal consensus. I know. There are accounts on Twitter that call themselves neoliberals, and they're basically defenders of the liberal consensus and basically normie economics and stuff. But they've sort of embraced the term neoliberal
Starting point is 01:47:11 like the gay community embraced the word queer. So you were just doing that. I understand. I understand. But, you know, I'm on board with that. I think I agree with them. The unipolar moment wasn't perfect by any means. But the retreat from multilateralism and international uh multilateral
Starting point is 01:47:28 agreements and things like that... Well, until NATO countries got the bejesus scared out of them by Russia, and then it got a second wind. But, you know, there has been a growing skepticism in countries even within the sort of order, and it's driven by these populist movements. And, you know, I feel a little bit... it's a little bit like vaccines, you know what I mean? Like, when nationalism and our-country-first... it sounds really great. It feels like you don't need to abide by these multilateral agreements and participate in these international institutions. It may sound good because you haven't lived in a world where, just like, viruses roamed free, you know, or countries really were just fighting big wars with each other on the regular.
Starting point is 01:48:08 Well, I was trying to work out the connection to vaccines, but I see like, you know, a stretched analogy. Interesting. Yeah, you know, why should you have to take a vaccine? I meant to say it's just complacency. Oh, right.
Starting point is 01:48:21 Yes, yes, yes. And a failure to recognize the benefits of things like international trade. One crucial qualifier, which I think is often overlooked in what Yuval says there, and in what many other people who are criticized for having this opinion say, is he acknowledged it's not perfect, that there were plenty of things that you could and should criticize about it. Yeah. Because whenever people make this case, they'll always point to, oh, so you're saying that there was a global peaceful consensus 10 years ago, or, you know, 20 years ago when they invaded Iraq? Yeah. You know, like, that is not the argument. You know, America in that instance, for example, America actually forwent establishing
Starting point is 01:49:03 a multinational consensus, right? Like, it didn't get the support of the UN. No, no, exactly. And, uh, you know, their experience of America making those stupid decisions has kind of led them to retreat even further from, you know, being involved, I guess, in the rest of the world, at least amongst the MAGA types. But, yeah, no, he's not, like, one of these, like, triumphalist, West-is-best, you know, rah, rah, rah. No, he's not Douglas Murray. Doesn't sound to me like he's putting it up on a pedestal. He's taken the far more reasonable position, I think, which is just that this is preferable, right? International trade is good,
Starting point is 01:49:37 multilateral agreements, peace, and, you know, international agreements where you follow some rule of law. That we had that, at least to some extent, and we seem to be losing it to some degree. At least there's challenges coming. This is true. And I think he makes a good critique of the Donald Trumps and the populist figures that are all over the place at the minute. And you see this way of thinking, that I only care about the interests of my nation, more and more around the world. Now, the big question to ask is, if all the nations think like that, what regulates the relations between them? And there was no alternative. Nobody came up and said, okay, I don't like the liberal global order, I have a better suggestion for how to manage relations between different nations.
Starting point is 01:50:32 They just destroyed the existing order without offering an alternative. And the alternative to order is simply disorder. And this is now where we find ourselves. I think that's a fair point, and I'll sign on to it. You know, like, people shouldn't be focused on just recent history here. Like, there's a broader point here, which is order is better than disorder in general. And, you know, after, I think it was the Napoleonic Wars, they instituted the Metternich system in Europe, which worked for a pretty long while in at least preventing the kind of cataclysmic wars that had been fought before, like the Thirty Years' War and the Napoleonic Wars and so on. And by no means was it a perfect system. It
Starting point is 01:51:16 ended up failing cataclysmically with World War I. But again, it's a pretty anodyne point, which is that multilateral agreements and a sort of ordered, diplomatic, negotiated way of dealing with disputes, rather than just survival of the fittest, is preferable when it comes to international relations. Like, if you want to talk about utopian visions and, like, basic principles that we should all be able to get on board with, there's a part where he's asked about, you know, what kind of things he thinks we should be focusing on. So just listen to his answer, Matt. The relatively peaceful era of the early 21st century, it did not result from some miracle. It resulted from humans making wise decisions in previous decades. What are the
Starting point is 01:52:02 wise decisions we need to make now, in your view? Reinvest in rebuilding a global order which is based on universal values and norms, and not just on the narrow interests of specific nation states. So I do think there will be some people that hear that and say, the relatively peaceful era of the 21st century and the global order? What kind of platitudinous bullshit, right? Plenty of people suffering under the boot of imperialist regimes. But his suggestion is we should try and build a cooperative, multinational agenda which is based on respect for universal values and norms, and not focused on, like, nationalism, right, like inward-looking nationalism. Surely we could get on board with that, can't we? Can we not
Starting point is 01:53:00 all hold hands in this respect, except for, you know, the hardcore nationalists? But even there, Matt, even there, I think he does a good job of highlighting something about the false dichotomy that populists draw. It should be clear that many of these politicians, they present a false dichotomy, a false binary vision of the world, as if you have to choose between patriotism and globalism, between being loyal to your nation and being loyal to some kind of global government or whatever. And this is completely false. There is no contradiction between patriotism and global cooperation.
Starting point is 01:53:48 When we talk about global cooperation, we definitely don't have in mind, at least not anybody that I know, a global government. This is an impossible and very dangerous idea. It simply means that you have certain rules and norms for how different nation states treat each other and behave towards each other. If you don't have a system of global norms and values, then very quickly what you have is just global conflict, is just wars. I mean, some people have this idea. They imagine the world as a network of friendly fortresses. Like, each nation will be a fortress with very high walls, taking care of its own interests but, uh, living on relatively friendly terms with the neighboring fortresses, trading with them and whatever.
Starting point is 01:54:49 Now, the main problem with this vision is that fortresses are almost never friendly. I like that. And it's so counter to the image that Alex Jones and so on want to present, of him wanting a global government that controls all aspects of your life, you know, just eat the bugs, the WEF. No, he's talking about just multilateral agreements and, you know, not invading other sovereign countries and, you know, international courts of law. Yeah, for what it's worth, not that my opinion about any of this stuff matters at all, but yeah, that's kind of
Starting point is 01:55:22 how I see my Star Trek future unfolding, you know. It's not that you concentrate all the power and decision-making at a centralized one-world government that just does everything everywhere, but there's no reason to concentrate it all at the national level either. You know, you can have these tiered kinds of systems where, you know, decision-making power and democratic-type processes are occurring at very local levels and regional levels and semi-national levels, and then regional levels like Europe or ASEAN or whatever, and just distributing that kind of stuff in a network, a hierarchical network, I suppose, that spans the world. And what you get from that, hopefully, is something that looks more like a community that spans multiple geographic levels,
Starting point is 01:56:09 rather than this sort of fractious, kind of friendly but suspicious and self-interested, like, libertarian model of international relations that is currently kind of the usual way of doing things. So, I'm all for it. I'm all for, you know, AUKUS, the agreement between Australia, the UK, and the US. I'm all for NATO. Bring it on, bring it on. Yeah, so that's our official position on that. But, like, yeah, if the libertarians had their way, we'd all be living in Mad Max. We don't want them to get their way, and we don't want the ultra-nationalists to get their way. And the leftists would have us all up against the wall, um, like, living in a harmonious collective where you're not allowed to think about capitalism. So, you know, come on, come on. The moderate path. Take
Starting point is 01:56:54 the middle path. You know, if you're a Buddhist, that's apparently the way to go. You know, whatever, have your own political views. But this is the part that, I will say, jibes quite a lot with me, in terms of I think what he's saying is sensible and non-objectionable. And even if you take issue with the way that he presents things, like, surely you can agree that the populist right-wing ethno-nationalism that is growing is worse than a system that focuses more on global collaboration. Yeah, and it's unfair to paint him as some kind of Thatcher type or Douglas Murray. No. Or indeed a kind of, you know, one-world-government, you know, Davos figure. No, he's pretty normie in his political opinions, and they're fine. And, counter to that image, he does talk about potential issues with technological development and so on. Like, he
Starting point is 01:57:50 isn't just an accelerationist booster. So they've got their diagnosis wrong, but what's new? They're conspiratorial chuckle-fucks, so that's what they do, Matt. I know. So that's us. We're done with Harari. Final thoughts, Matt? What do you have to say about him? What's the big picture? Is he much of a secular guru? What did you think? I think he's a bit guru-ish, in a relatively innocuous way. He does have that trick of basically saying things that are relatively uncontroversial and, frankly, a bit bland and uninteresting, but just phrasing it in a way that sounds a bit more dramatic, that sounds a bit more sexy. And that's, ironically, what got him into trouble with both hardcore Christian types and also academic philosopher types. And that's all he's doing. He's just making himself sound more interesting, frankly. As well as that, he does that thing of speculating
Starting point is 01:58:46 about the future in kind of sweeping terms. I mean, part of that is just kind of intriguing, exciting. Like, what's going to happen? You know, are we going to be cyborgs? But, you know, some of it is negatively tinged, in terms of, you know, this is going to change everything, and, you know, we won't even be able to recognize ourselves anymore, and who knows what could happen when people are having relationships with sex bots or whatever. And so there is a bit of that Cassandra complex thing too. But, you know, it's very mild, though. It's not like an Alex Jones type thing. It's mild. So I'm going to classify him, Chris. I don't want to preempt the Gurometer, the Gurometer has the final say, but I think I'd probably find him a little bit guru-esque on a few components, but mostly harmless.
Starting point is 01:59:30 Yeah, I basically see him as a guru in the mold of a TEDx speaker, right? And the issue is speaking with too much confidence and sometimes presenting things in an overly dramatic and simplistic way in order to, yeah, like you said, make fairly trite observations sound more dramatic. But as noted, I do think there is space in the intellectual landscape for people presenting ideas in this fashion and, you know, trying to weave grand narratives. And I don't think he's particularly harmful. I know that isn't one of the things that we're, you know,
Starting point is 02:00:10 typically focusing on. But I do think, from people that we've looked at, you know, we've just covered Jordan Peterson and Brett, and it's clear that there's a very big difference, even in the parts where he's been hyperbolic, between their delivery and his. And in many ways, you could also see him as a pro-establishment guru type, right? Because he's not trying to undermine confidence in the global order or vaccines or whatever the case might be. So it's sort of interesting that he found a lane that leads to attention and lots of invitations to speak and whatnot, but which doesn't require taking a strong contrarian position. So, yeah, that's interesting. Yeah, and
Starting point is 02:00:51 he's not the only one. Like, I think there are other sort of popular non-fiction authors who write, you know, academically informed but easily digestible, big-ideas-type, big-picture stuff about science, humanity, history. And, you know, I think that's okay. You know, if you want a book to read on the beach on the holidays, then you can read one of those and it won't do you any harm. You can have a go at them for being a bit, you know, for simplifying things a little bit, for making the ideas sound a bit more revolutionary than they actually are. But, you know, it's not the worst crime in the world. That's true.
Starting point is 02:01:27 Well, there you have it. So the decoding has ended for today. And just now, Matt, before we slink off into our various caves, we should look at the reviews that people have been offering. And, you know, what I thought I'd do, just for today, is I'm not going to give us any positive feedback. I'm going to let us wallow in the dark. You've got to sometimes accept, you know, the bitter pills that people want to chuck at you. So I've got a small smorgasbord of negative, one-out-of-five-star reviews for you. So which would you like first, A, B, or C? Okay, um, what's behind door number B? This is
Starting point is 02:02:09 from somebody with an unpronounceable name in Canada. The title is, Has Run Its Course. I think they're done covering most of what there is to cover, and they're struggling for content now. You can feel they're in their last miles with this project. It was good while it lasted. Definitely check out their older episodes. One out of five stars. Oh, God. I like that 'struggling'. That's such a downer, man.
Starting point is 02:02:34 I can't take it on a day like this when I'm sleep deprived. I'm sorry. There's more to come. Is it true? Have we peaked? Is this it? No, no. What's it Dennis says?
Starting point is 02:02:44 And I think it's in It's Always Sunny. I haven't peaked. I haven't even begun to peak yet. But no, this diagnosis is factually wrong in terms of it being a struggle to find gurus to cover. It's really not. And unfortunately, they're all over the place. And even the ones that we've covered just continue to be more mental every week. So that's not accurate. But, you know, if you've got your value and you're ready to move on, that's fine. You know, 'I'm done with that, I'll go.' There are plenty of history podcasts to go to,
Starting point is 02:03:14 but we'll continue to look at the gurus. So take that, unpronounceable name in Canada. Now, this one, I don't know if I should reward this, Matt, but I'm going to do it anyway. Do you remember we had the guy who wrote the review about us making Sam Harris into a golem and all that stuff?
Starting point is 02:03:31 They were very upset with our interview with Sam Harris. They have somehow risen back to the top. So let me read it. It's Disappointed 2,000,001 again, just from a few days ago. It's 'Sam Harris Again', the title, at one out of five stars. And it says: just terrible discussion on Gaza-Israel. You seem to
Starting point is 02:03:53 attack Sam, using him as a golem to bring in an audience. I certainly don't agree with all Sam's views, but you instance the two sides of the argument for your gain in audience numbers. Seems shameful on such an important topic. Hmm. The point is that Sam has a much bigger audience than you. You disproportionately criticize Sam to bring his audience to your conversations. Your Israel-Gaza disagreements were to create conflict with him rather than a genuine attempt to discuss a complex, world-changing event. That seems cynical. And now you have a paywall? Creating your own echo chamber, boys.
Starting point is 02:04:32 Sam should have been smarter and not given you the airtime. Stick to the Weinsteins and kiss it. Love, yours, Disappointed Two Million and One. So that's... Good idea. That's... I'm sorry, Matt. I've dealt another body blow. Another body blow. But again, I told you, Matt, that we had to gin up our disagreements with Sam.
Starting point is 02:04:54 You know, we find him too rational, too accurate. So we had to pretend that we disagreed. We had to manufacture a disagreement in order to get his audience. Like, why would... does that logic hold up? Surely they wouldn't like us attacking him. Wouldn't it be Sam Harris haters that we would attract? I'm not sure of the logic there. But, uh, you know, Sam does have a big audience, but he also talks to a lot of people,
Starting point is 02:05:18 and he chose to exercise his right to reply on GurusPod, and we have a policy where we let people do it, even people we don't want to talk to. So, uh, you know, that's what it is. Well, yeah, it's just funny that the analysis on that is that we were pretending to disagree with Sam, whereas other people's analysis was that we weren't disagreeing enough. We can't win. We did disagree on a number of points, and it was not pretend, I can assure you of that. Yeah, and we had few opportunities to actually have a back-and-forth debate there; it was mostly Sam exercising his right. But whatever. That's true, that's true. Yeah. But anyway, as for the paywall, dear Disappointed: if you want to hear us waffle on about things that are,
Starting point is 02:06:06 you know, on the supplementary material, yes, you can pay the paywall or not. But anyway, you're too busy. So, you know,
Starting point is 02:06:13 you go listen to Sam Harris. It's not like he has a paywall in place. Anyway, last one, Matt. Sorry. Like I said, this is a negative one.
Starting point is 02:06:21 this is a negative one. This is negative one. Maybe I'm suddenly psychologically influencing the audience to go back and counter this wall of hate that we've received. So this is from HeyHoHoHoHoHoHoHo. And the title is Podcast of Critical Academics, now ironically turned into hate-filled gurus. I think we are the hate-filled gurus there.
Starting point is 02:06:44 So sorry, Matt. The hosts tiptoe around the central question. Are these gurus revolutionary thinkers or snake oil salesmen? Instead of wielding Occam's razor, they brandish Occam's fellow duster. I like that line. They leave anyone with a modicum of rational thought left yearning for sharper critiques.
Starting point is 02:07:07 The podcast regularly feels like an echo chamber for the already skeptical. So if you hate Jordan Peterson, Chris Williamson, Andrew Huberman, or Elon Musk, you can look forward to these two injecting one-sided drivel about how they're all evil right-wingers. If you go along, you're part of the club. If not, well, you're probably a guru yourself. Chris and Matt chase gurus like kids chasing fireflies. Just when they've cornered one, it flits away, leaving them grasping at thin air.
Starting point is 02:07:39 Perhaps they need a guru-catching net. He does like his metaphors. I'm imagining myself chasing butterflies with a feather duster. Yeah, that review is a bit of a trip. It's nicely written. It was very eloquent, it was lyrical. I liked that. It's a bit overwritten, though, isn't it? There are way too many metaphors and analogies for, like, a small, short review. Not a great deal of substance. Heavy on the metaphors, a little bit light on the evidence to support those metaphors. Look, we don't hate everybody.
Starting point is 02:08:07 We're relatively gentle. I like Chris Williamson and his drink. Come on. I joked about it, but it's fine. Yes. Yeah. And we try not to criticise people for being right-wing per se. That's why we didn't criticise Hasan Piker for being very,
Starting point is 02:08:24 very left-wing, nominally anyway. We criticize him on other grounds, and I think we try hard to do that. I'm very vulnerable. Yeah, well, he said you can get better representatives. I'm very vulnerable today. I'm very tired. I'm sorry. I've had these three things.
Starting point is 02:08:37 No, you're dealing with this. Part of this is, you know, if you're not agreeing with us, you're probably a guru. No, again, you guys, come on. We've hammered this home. If you're not doing the shit in the Gurometer, you're not a secular guru. You can be a complete idiot who believes in Jordan Peterson's rants about vaccines or whatever, but you yourself are not necessarily a secular guru. So, bad analysis, too many metaphors,
Starting point is 02:09:03 but I did like the line about Occam's feather duster. And there's a weird thing where it's kind of like, you know, the critiques would be good if they were more incisive and cutting. But not this way, not this way. I expected better. So, anyway, your review... all right, I gave it one out of five stars. Yeah.
Starting point is 02:09:31 Yeah. So, or two, maybe two, because of the nice lines. You've got to give it a point for that. Yeah. How do you like them apples, eh? You had your review rated. Yeah. How does it feel? After that struggle session, we have now finally made it to the last bit, which is just thanking the people who support us. The wise people, the good people, the people that are not convinced by right-wing idiots, unlike our last reviewer. So yeah, there's plenty of people that we could thank, Matt, and I will thank some of them in a haphazard manner. And I'm going to start with conspiracy hypothesizers. You don't object to this?
Starting point is 02:10:06 Not at all. Not at all. Go ahead. I will thank John Gao, Jacob Hangen, Milan Nigam, Aoife Gallagher, TSD3141, Robert Potter, Nobody, John Maciek Grabowski, Adrian Maniatis, Donkey Kong, Alice Lee, David Moore, Peter Ranschwarfschwert, Mina A., Marshall Clemens, Nathan Ender, Dr. Gerb, Reiterative No. 4, Biavaza Borodny and Francis Sebesta. Wow. I did very well. I did very well. You did. Congratulations.
Starting point is 02:10:55 Thank you, everybody. Appreciate it. Some of them are really hard. I feel like there was a conference that none of us were invited to that came to some very strong conclusions, and they've all circulated this list of correct answers. I wasn't at this conference. This kind of shit makes me think, man, it's almost like someone is being paid. Like, when you hear these George Soros stories, well, he's trying to destroy the country from within. We are not going to advance conspiracy theories. We will advance conspiracy hypotheses. Yeah.
Starting point is 02:11:27 Revolutionary thinkers, Matt. The next people, the people that get access to decoding academia content. I have some of them. I have Lars Vick, Robert, Taylor Squigliacci, ML, McLegendface, Mark M, Nicole Davison, Max Flatange, Colin McLaughlin, Grammaticus Gore, Clayton Spiner, Verdi Gopher, Alexander Will, Peter Rosnick, Matt Behrens, Michael Zimmerman, Melissa, Omnomin... Damn, fuck you.
Starting point is 02:12:15 Omnominicon. Omnominicon. Son of a bitch. And Julius and Ian Sears. Those are all revolutionary geniuses. The people making up these names just to hurt you. I wonder.
Starting point is 02:12:32 I know. Son of a bitch. I'm usually running, I don't know, 70 or 90 distinct paradigms simultaneously all the time. And the idea is not to try to collapse them down to a single master paradigm. I'm someone who's a true polymath. I'm all over the place. But my main claim to fame, if you'd like, in academia is that I founded the field of evolutionary consumption. Now, that's just a guess, and it could easily be wrong. But it also could not be wrong. The fact that it's even plausible is stunning.
Starting point is 02:13:01 Just the tenor of Brett's voice as he says those last words, it just never fails to do it for me. I love it. Yeah, I get that, I get that. Now, Matt, Galaxy Brain Gurus, that is what we're talking about now. There I have just a little smorgasbord of people to thank. Just a random selection. I'm going to thank Felix, Floatcoat, Gareth Lee, Ian, Jason Parker, Jim Brown, Joachim Amundsen, Justin, Ketrisel White, Kyle Wilson, one on two, and Lucy and M.E.M. Those are all the ones I can't even find this week. I've got an uncle called Jim Brown. I'll have to research him, find out whether it's him.
Starting point is 02:13:59 Oh, no. I've just worked out as well. There was somebody who hasn't been found for like two years. Oh. And they said I'd shout them out. Oh, no. I forgot the DM. I'll get you next week, okay?
Starting point is 02:14:10 Just, the person I mean: I'll get you, don't worry. I just remembered now. So sorry about that. But sorry, sorry, sorry. Anyway, here's the Galaxy Brain stinger. We tried to warn people. Yeah.
Starting point is 02:14:22 Like, what was coming, how it was going to come, and the fact that it was everywhere and in everything. Considering me tribal just doesn't make any sense. I have no tribe. I'm in exile. Think again, sunshine. Yeah, yeah. Cool. Well, yeah, good stuff, good stuff. We've done all of the essential things. Yuval Noah Harari, decoded. Very mean and hurtful reviews dealt with. Yeah, sorry. I'll restore balance next time. Maybe I'll just read the positive ones.
Starting point is 02:14:58 Come on. If you like us, what are you doing? Represent. Get rid of those one-star bastards. I think that first review, it caught me because, you know, I'm at a time in my life where I feel like, you know, I have peaked, and I'm on the slow, gradual descent. Oh, Matt, don't let them worry you. Everything's going pear-shaped, you know, physically. I don't have the vim and vigor that I once had.
Starting point is 02:15:25 I'm no longer the wunderkind in the room. Oh, no. Well, you are in this room. You are in this room. Still there. Still the shining, guiding light in the decoding group. Are you trying to say I'll always be your wunderkind? I guess that is what I'm saying.
Starting point is 02:15:47 And I also want to tell you, Matt, that despite their claim that we have, you know, peaked, we're past it, we're old news, we are at 200,000 downloads this month, the most downloads we've ever had out of any month since the show began. So who's falling off, you know? Who's scraping the barrel? Is it us? Yeah, look, just saying, Matt, they're happy: 200,000 downloads. Imagine all those little people consuming all those nuggets of information before anybody writes anything on Reddit. He's joking. He knows that popularity indexes are not a measure of success or worthiness. Yeah, I think... well, I think he knows that. Maybe I do. I'm just saying, Matt, on the specific claim of 'it's uninteresting, it's dull, there's nobody good for you to cover anymore': it's not true. It's not true. There's plenty of
Starting point is 02:16:38 gurus out there we should have covered, you know, many moons ago, and we'll have to get to them eventually, because we have too much stuff, you know, going on. So yeah, that's it. Don't worry about it. But yes, I do know the amount of downloads does not equate to the accuracy of your content, or the quality. Okay, except in our case, where it does. So that's... alrighty, I think I'm going to go have a nap. Go. But good to speak to you, Chris. Yes, I'll see you later. Retire, retire. Okay, farewell.
