The Tucker Carlson Show - Amjad Masad: The Cults of Silicon Valley, Woke AI, and Tech Billionaires Turning to Trump

Episode Date: August 1, 2024

Tech entrepreneur Amjad Masad joins Tucker for the deepest and most interesting explanation of AI you’ll ever see.  (00:00) Artificial Intelligence (10:00) Bitcoin (22:30) The Extropians Cult (31:15) Transhumanism (42:52) Are Machines Capable of Thinking? (47:38) The Difference Between Mind and Computer (1:33:00) Silicon Valley Turning to Donald Trump (1:45:10) Elon Musk and Free Speech Paid partnerships: Download the Hallow prayer app and get 3 months free at https://Hallow.com/Tucker ExpressVPN: Get 3 months free at https://ExpressVPN.com/TuckerX Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Welcome to Tucker Carlson Show. It's become pretty clear that the mainstream media are dying. They can't die quickly enough. And there's a reason they're dying. Because they lie. They lied so much, it killed them. We're not doing that. TuckerCarlson.com, we promise to bring you the most honest content,
Starting point is 00:00:22 the most honest interviews we can without fear or favor. Here's the latest. It does sound like you're directly connected to AI development. Yes. You're part of the ecosystem. Yes. And we benefit a lot from it. When it started happening, it was almost a surprise to a lot of people, but we saw it coming.
Starting point is 00:00:42 You saw AI coming? Saw it coming, yeah. So, you know, this recent AI wave, you know, it surprised a lot of people. When ChatGPT came out in November 2022, a lot of people just lost their minds. Like, suddenly a computer can talk to me. And that was like the holy grail. Yeah, I wasn't into it at all.
Starting point is 00:00:59 Really? It's terrifying. Paul Graham, one of my closest friends and sort of allies and mentors, he's a big Silicon Valley figure, he's a writer,
Starting point is 00:01:10 kind of like you, you know, he writes a lot of essays, and he hates it. He thinks it's like a midwit, right?
Starting point is 00:01:17 And it's just, like, making people write worse, making people think worse. Worse, or not think at all, right? Not think,
Starting point is 00:01:24 as the iPhone has done, as Wikipedia and Google have done. We were just talking about that. The iPhones, iPads, whatever, they made it so that anyone can use a computer, but they also made it so that no one has to learn to
Starting point is 00:01:38 program. The original vision of computing was that this is something that's going to give us superpowers, right? J.C.R. Licklider, who ran the computing research office at ARPA while the internet was being developed, wrote this essay called Man-Computer Symbiosis. And he talks about how computers can be an extension of ourselves, can help us grow, we can become, you know, there's this marriage between the type of intellect
Starting point is 00:02:07 that the computers can do, which is high speed, arithmetic, whatever, and the type of intellect that humans can do. It's more intuition. Yes. But, you know, since then, I think the sort of consensus has sort of changed around computing,
Starting point is 00:02:23 which is, and I'm sure we'll get into that, which is why people are afraid of AI as kind of replacing us. This idea of computers and computing are a threat because they're directly competitive with humans, which is not really the belief I hold.
Starting point is 00:02:38 They're extensions of us. And I think people learning to program, and this is really embedded at the heart of our mission at Replit, is what gives you superpowers. Whereas when you're just tapping, you're kind of a consumer. You're not a producer of software.
Starting point is 00:02:54 I don't want more people to be producers of software. There's a book by Douglas Rushkoff. It's called Program or Be Programmed. And the idea is, if you're not the one coding, someone is coding you.
Starting point is 00:03:08 Someone is programming you. These algorithms on social media, they're programming us, right? So. Too late for me
Starting point is 00:03:17 to learn to code though. I don't think so. I don't think so. I can't balance my checkbook assuming there are still checkbooks.
Starting point is 00:03:24 I don't think there are. But let me just go back to something you said a minute ago, that the idea was originally, as conceived by the DARPA guys who made this all possible, that machines would do the math, humans would do the intuition. I wonder, as machines become more embedded in every moment of our lives, if intuition isn't dying or people are less willing to trust theirs. I've seen that a lot in the last few years where something very obvious will happen and people are like, well, I could sort of acknowledge and obey what my eyes tell me and my instincts are screaming at me, but the data tell me something different. I feel like my
Starting point is 00:04:04 advantage is I'm very close to the animal kingdom. That's right. And I just believe in smell. Yeah. But I wonder if that's not a result of the advance of technology. Well, I don't think it's inherent to the advance of technology. I think it's a cultural thing, right? It's how to, again, this vision of computing as a replacement for humans versus an
Starting point is 00:04:27 extension machine for humans. And so, you know, you go back, you know, Bertrand Russell wrote a book about history of philosophy and history of mathematics and like, you know, going back to the ancients and Pythagoras and all these things. And you could tell in the writing, he was almost surprised by how much intuition played into science and math and in the sort of ancient era of advancements in logic and philosophy and all of that. Whereas I think the culture today is like, well, you got to check your intuition at the door.
Starting point is 00:05:04 Yes. Yeah, you're biased. Your intuition is racist or something, and this is bad. And you have to be this, like, you know, blank slate. And, like, you trust the data. But by the way, you can make the data say a lot of different things.
Starting point is 00:05:19 Oh, I've noticed. Wait, can I just ask a totally off-topic question that just occurred to me? How are you this well-educated? I mean, so you grew up in Jordan speaking Arabic in a displaced Palestinian family. You didn't come to the U.S. until pretty recently. You're not a native English speaker. How are you reading Bertrand Russell?
Starting point is 00:05:36 Yeah. And what was your education? Is every Palestinian family in Jordan this well-educated? Kind of. Yeah, the Palestinian diaspora is pretty well-educated. And you're starting to see this generation, our generation, who grew up, starting to become more prominent. I mean, in Silicon Valley, a lot of C-suite and VP-level executives,
Starting point is 00:06:02 a lot of them are Palestinian. A lot of them wouldn't say so, because there's still bias and discrimination and all of that. They wouldn't say they're Palestinian? They wouldn't say. And they go by names like Adam, and some of them, the Christian Palestinians especially, kind of blend in, right? But there's a lot of them out there. But how did you, so how do you wind up reading, I assume you read Bertrand Russell in English? Yes. How did you learn that? You didn't grow up
Starting point is 00:06:26 in an English-speaking country. Yeah, well, Jordan is kind of an English-speaking country. Well, it kind of is. That's true. Right. So, you know,
Starting point is 00:06:33 it was a British colony. I think, you know, independence happened in, like, the '50s or something like that, or maybe the '60s. So it was pretty late
Starting point is 00:06:43 in the British empire's history that Jordan stopped being a colony. So there was a lot of British influence. I went to, so my father is a government engineer. He didn't have a lot of money. So we lived a very modest
Starting point is 00:06:59 life, kind of like middle, lower middle class. But he really cared about education. He sent us to private schools. And in those private schools, we learned kind of using British diploma, right? So IGCSE, A-levels, you know, that's, are you familiar with? Not at all.
Starting point is 00:07:17 Yeah, so part of the sort of British, you know, colonialism or whatever is like, you know, education system became international. I think it's a good thing. Oh yeah, there are British schools everywhere. Yeah, British schools everywhere and there's a good education system. It gives students a good level of freedom and autonomy to kind of pick the kind of things they're interested in. So I, you know, went to a lot of math and physics, but also did like random things. I did child development, which I still remember. And now that I have kids,
Starting point is 00:07:45 I actually use. In high school you do that? In high school. And I learned. What does that have to do with the civil rights movement? What do you mean? That's the only topic
Starting point is 00:07:56 in American schools. Really? Oh yeah. You spend 16 years learning about the civil rights movement. So everyone can identify the Edmund Pettus Bridge, but no one knows anything else.
Starting point is 00:08:04 Oh God. I'm so nervous about that with my kids. No, opt out. Trust me. That's so interesting. So when did you come to the US? 2012. Damn.
Starting point is 00:08:16 And now you've got a billion dollar company. That's pretty good. Yeah, I mean, America is amazing. Like, I just love this country. It's given us a lot of opportunities. I just love the people, like everyday people. I like to just talk to people. I do too.
Starting point is 00:08:28 I was just talking to my driver, who was like, you know, I'm so embarrassed, I didn't know who Tucker Carlson was. Good, that's why I live here. Yeah. I was like, well, good for you. I think that means you're just living your life.
Starting point is 00:08:42 And she's like, yeah, I have my kids and my chickens and my whatever. I was like, that's great. That's awesome. It means you're happy. It means you're happy, yes. But-
Starting point is 00:08:50 So I'm sorry to digress. I'm sorry to digress. Please, please. You're referring to all these books. I'm like, you're not even from here. It's incredible. So, but back to AI and to this question of intuition,
Starting point is 00:09:03 you don't think that it's inherent. So in other words, if my life is, to some extent, governed by technology, by my phone, by my computer, by all the technology embedded in every electronic object, you don't think that makes me trust machines more than my own gut? You can choose to, and I think a lot of people are being guided to do that. But ultimately, you're giving away a lot of freedom. It's not just me saying that. There's a huge tradition of hackers and computer scientists that started ringing the alarm bell a really long time ago
Starting point is 00:09:48 about the way things were trending, which is more centralization, less diversity of competition in the market. And you have one global social network as opposed to many. Now it's actually getting a little better. But you had a lot of these people,
Starting point is 00:10:07 you know, start, you know, the crypto movement. I know you were at the Bitcoin conference recently and you told them CIA started Bitcoin.
Starting point is 00:10:14 They got really angry on Twitter. I don't know that. But until you can tell me who Satoshi was, I have some questions. What?
Starting point is 00:10:22 I actually have a feeling about who Satoshi was, but that's a separate conversation. No, let's just stop right now because I can't, I'll never forget to ask you again. Who is Satoshi? There's a guy,
Starting point is 00:10:32 his name is Paul Le Roux. By the way, for those watching who don't know who Satoshi was, Satoshi is the pseudonym that we use for the person who created Bitcoin, but we don't know who it is. It's amazing. You know, it's this thing that was created.
Starting point is 00:10:45 We don't know who created it. He never moved the money, I don't think. Maybe there was some activity here and there, but there's like billions, hundreds of billions of dollars locked in. So we don't know the person as they're not cashing out. And it's like pretty crazy story, right? That's amazing.
Starting point is 00:11:00 So Paul Le Roux. Yeah, Paul Le Roux. Yeah, Paul Le Roux was a crypto hacker in Rhodesia, before it became Zimbabwe. And he created something called Encryption for the Masses, E4M. And was one of the early... By the way, I think Snowden used E4M as part of his hack. So, he was one of the people that really made it so that cryptography is accessible to more people. However, he did become a criminal. He became a criminal mastermind in Manila.
Starting point is 00:11:36 He was really controlling the city almost. He paid off all the cops and everything. He was making so much money from so much criminal activity. His nickname was Solotshi, with an L. And so there's, like, a lot of, you know, circumstantial evidence. There's no clear-cut evidence, but I just have a feeling that
Starting point is 00:11:54 he generated so much cash. He didn't know what to do with it, where to store it. And on the side, he was building Bitcoin to be able to store all that cash. And around that same time that Satoshi disappeared, he went to jail. He got booked for all the crime he did.
Starting point is 00:12:13 He recently got sentenced to 25 years in prison. And I think the judge asked him, like, what would you do if you got out? And he's like, I would build an ASIC chip to mine Bitcoin. And so, look, this is a strong opinion loosely held, but it's just like there's... So he is currently in prison. He's currently in prison, yeah. In this country or the Philippines?
Starting point is 00:12:33 I think this country. Because he was doing all the crime here. He was selling drugs online, essentially. We should go see him in jail. Yeah, yeah. Check out his story. It's fascinating. I'm sorry.
Starting point is 00:12:44 I just had to get that out of you. So I keep digressing. So you see AI and, you know, you're part of the AI ecosystem, of course, but you don't see it as a threat. No. No. No, I don't see it as a threat at all.
Starting point is 00:13:00 And I think, and I, you know, I heard some of your, you know, podcasts with Joe Rogan or whatever, and you were like, oh, we should nuke the data centers. I'm excitable. Yeah. On the basis of very little information. Well, actually, yeah.
Starting point is 00:13:11 Well, actually, tell me, what is your theory about the threat of AI? I, you know, I always, I want to be the kind of man who admits up front his limitations and his ignorance. And on this topic, I'm legitimately ignorant, but I have read a lot about it and I've read most of the alarmist stuff about it and the idea is, as you well know, that the machines become so powerful that they achieve a kind of autonomy and they, though designed to serve you, wind up ruling you.
Starting point is 00:13:39 Yeah. And, you know, I'm really interested in Ted Kaczynski's writings, the two books that he wrote. Obviously, I should say, I'm
Starting point is 00:13:51 totally opposed to letter bombs or violence of any kind. But Ted Kaczynski had a lot of provocative
Starting point is 00:13:57 and thoughtful things to say about technology. It's almost like having live-in help, which, you know, people make a lot of money,
Starting point is 00:14:04 they all want to have live-in help, but the truth about live-in help is, you know, they're there to serve you, but you wind up serving them. It inverts. And AI is a kind of species of that. That's the fear. And I don't want to live, I don't want to be a slave to a machine
Starting point is 00:14:19 any more than I already am. So it's kind of that simple. And then there's all this other stuff. You know a lot more about this than I do, since you're in that world. But yeah, that's my concern. That's actually quite a valid concern. I would like to decouple the existential threat concern from the concern,
Starting point is 00:14:34 and we've been talking about this, of us being slaves to the machines. And I think Ted Kaczynski's critique of technology is actually one of the best. Yes, thank you. Yeah, I... I wish he hadn't killed people, of course,
Starting point is 00:14:50 because I'm against killing. But I also think it had the opposite of the intended effect. He did it in order to bring attention to his thesis and ended up obscuring it.
Starting point is 00:15:01 Yeah. But I really wish that every person in America would read, not just his manifesto, but the book that he wrote from prison, because they're just so, at the least, they're thought-provoking and really important. Yeah, yeah. I mean, briefly, and we'll get to existential risk in a second, but he talked about this thing called the power process, which is he thinks that it's intrinsic to human happiness to struggle for survival,
Starting point is 00:15:27 to go through life as a child, as an adult, build up yourself, get married, have kids, and then become the elder and then die, right? Exactly. And he thinks that modern technology kind of disrupts this process and it makes people miserable. How do you know that?
Starting point is 00:15:44 I read it. I'm very curious. I like, I read a lot of things and I just don't have mental censorship in a way. Like I can, I'm really curious. I'll read anything. Do you think being from another country has helped you in that way? Yeah. And I also, I think just my childhood, I was like always different. When I had hair, it was all red. It was bright red. I'm comfortable being different. I'll be different.
Starting point is 00:16:32 And that just commitment to not worrying about anything, about conforming, or like, it was forced on me that I'm not conforming just by virtue of being different and being curious and being good with computers and all that. I think that carried me through life. I get almost a disgust reaction to conformism and mob mentality. Oh, I couldn't agree more. I had a similar experience in childhood. I totally agree with you.
Starting point is 00:17:00 We've traveled to an awful lot of countries on this show, to some free countries, the dwindling number, and a lot of not very free countries, places famous for government censorship. And wherever we go, we use a virtual private network, a VPN, and we use ExpressVPN. We do it to access the free and open internet. But the interesting thing is when we come back here to the United States, we still use ExpressVPN. Why? Big tech surveillance. It's everywhere. It's not just North Korea that monitors every move its citizens make. No, that same thing happens right here in the United States and in Canada and Great Britain and around the world. Internet providers can see every website you visit. Did you know that they may even be required to keep your browsing history on file for years and
Starting point is 00:17:49 then turn it over to federal authorities if asked? In the United States, internet providers are legally allowed to, and regularly do, sell your browsing history. Everywhere you go online, there is no privacy. Did you know that? Well, we did, and that's why we use ExpressVPN. And because we do, our internet provider never knows where we're going on the internet. They never hear it in the first place. That's because 100% of our online activity is routed through ExpressVPN's secure encrypted servers. They hide our IP address, so data brokers cannot track us and sell our online activity on the black market.
Starting point is 00:18:26 We have privacy. ExpressVPN lets you connect to servers in 105 different countries. So basically, you can go online like you're anywhere in the world. No one can see you. This was the promise of the internet in the first place, privacy and freedom. Those didn't seem like they were achievable, but now they are, with ExpressVPN. We cannot recommend it enough. It's also really easy to use, whether or not you fully understand the technology behind it. You can use it on your phone, laptop, tablet, even your smart TVs. You press one button, just tap it, and you're protected. So if you want online privacy and the freedom it bestows, get it. You can go to our special link right here to get three extra months free of ExpressVPN. That's expressvpn.com slash tucker. Express, E-X-P-R-E-S-S, vpn.com slash Tucker for three extra months free.
Starting point is 00:19:26 Remember in 2020 when CNN told you the George Floyd riots were mostly peaceful? Even as flames rose in the background? It was ridiculous, but it was also a metaphor for the way our leaders run this country. They're constantly telling you, everything is fine. Everything is fine. Everything is fine. Don't worry. Everything's under control. Nothing to see here. Move along and obey. No one believes that. Crime is not going away. Supply chains remain fragile. It does feel like some kind of global conflict could break out at any time. So the question is, if things went south
Starting point is 00:20:03 tomorrow, would you be ready? Well, if you're not certain that you'd be ready, you need Ammo Squared. Ammo Squared is the only service that lets you build an ammunition stockpile automatically. You literally set it on autopilot. You pick the calibers you want, how much you want to save every month, then they'll ship it to you, or they'll store it for you and ship it when you say so. You get 24-7 access to manage the whole thing. So don't let the people in charge, don't let CNN lull you into a fake sense of safety. Take control of your life. Protect your family.
Starting point is 00:20:39 Be prepared. Go to AmmoSquared.com to learn more. It's one of the saddest things about this country. The country is getting sicker. Despite all of our wealth and technology, Americans aren't doing well overall. Obesity, heart disease, autoimmune conditions, all kinds of horrible chronic illnesses, weird cancers are all on the rise. Probably a lot of reasons for this, but one of them definitely is Americans don't eat very well anymore. They don't eat real food.
Starting point is 00:21:02 Instead, they eat industrial substitutes, and it's not good. It's time for something new, and that's where Masa chips come in. Masa decided to revive real food by creating snacks how they used to be made, how they're supposed to be made. A Masa chip has just three simple ingredients, not 117. Three. No seed oils, no artificial additives, just real delicious food, and I know this because we eat a ton of them in my house. And by the way, I feel great. So you can still continue to snack, but you can do it in a healthy way, with chips, without feeling guilty about it. Masa chips are delicious.
Starting point is 00:21:36 They taste how a tortilla chip is supposed to taste. But the thing is, you can hit them really, really hard, and I have, and not feel bloated or sluggish after. You feel like you've done something decent for your body. You don't feel like you got a head injury, and you don't feel filled with guilt. You feel light and energetic. It's the kind of snack your grandparents ate. Worth bringing back. So you can go to masachips.com, Masa is M-A-S-A, by the way, masachips.com slash Tucker to start snacking. Get 25% off. We enjoy them. You will too. So Kaczynski's thesis is that struggle is not only inherent to the human condition, but an essential part of your evolution as a man or as a person. And that technology disrupts that.
Starting point is 00:22:33 I mean, that seems right to me. Yeah. And I actually struggle to sort of dispute that despite being a technologist, right? Ultimately, again, like I said, it's like one of the best critiques. I think we can spend the whole podcast kind of really trying to tease it apart. I think ultimately
Starting point is 00:22:49 where I kind of differ, and again, it just goes back to a lot of what we're talking about, my views on technology as an extension of us. It's like, we just don't want technology
Starting point is 00:23:00 to be a thing that's just merely replacing us. We want it to be an empowering thing. And what we do at Replit is we empower people to learn to code, to build startups, to build companies, to become entrepreneurs. And I think you can, in this world, you have to create the power process. You have to struggle. And yes, you can.
Starting point is 00:23:29 This is why I'm also, a lot of technologists talk about UBI, universal basic income. Oh, I know. I think it's all wrong because it just goes against human nature. Thank you. So I think-
Starting point is 00:23:40 put them on the dole. Yes. Yes. So I don't think technology is inherently at odds with the power process. I'll leave it at that. And we can go to existential threat. Yeah, of course. Sorry.
Starting point is 00:23:55 Boy, am I just aggressive. I can't believe I interview people for a living. We had dinner last night. That was awesome. It was one of the best dinners. Oh, it was the best. But we hit about 400 different threads. Yes, that was amazing. So that's what's out there.
Starting point is 00:24:12 I know. I'm sort of convinced of it, or it makes sense to me, and I'm kind of threat-oriented anyway, so people with my kind of personality are, like, sort of always looking for, you know, the big bad thing that's coming, the asteroid or the nuclear war, the AI, slavery. But I know some pretty smart people who, very smart people who are much closer to the heart
Starting point is 00:24:37 of AI development who also have these concerns. And I think a lot of the public shares these concerns. Yeah. And the last thing I'll say before soliciting your view of it, much better informed view of it, is that there's been surprisingly and tellingly little conversation about the upside of AI. So instead, it's like, this is happening. And if we don't do it, China will. That may, I think it's probably true.
Starting point is 00:25:04 But like, why should I be psyched about it? Like, what's the upside for me? Right. You know what I mean? Normally when some new technology or huge change comes, the people who are profiting from like, you know what, it's going to be great. It's going to be great.
Starting point is 00:25:16 You're not going to ever have to do X again. You know, you just throw your clothes in a machine and press a button and they'll be clean. Yes. I'm not hearing any of that about AI. That's a very astute observation, and I'll tell you exactly why. And to tell you
Starting point is 00:25:30 why, it's like a little bit of a long story, because I think there is an organized effort to scare people about AI. Organized? Organized, yes. And so this starts with a mailing list in the 90s. It's a
Starting point is 00:25:44 transhumanist mailing list called the Extropians. And these Extropians, they, I might have got it wrong, Extropia or something like that, but they believe in the singularity. So the singularity is a moment of time where AI is progressing so fast or technology in general progressing so fast that you can't predict what happens. It's self-evolving and it just all bets are off. We're entering a new world where you just can't predict it.
Starting point is 00:26:17 Where technology can't be controlled. Technology can't be controlled. It's going to remake everything. And those people believe that's a good thing because the world now sucks so much and we are imperfect and unethical and all sorts of irrational and whatever. And so they really wanted for the singularity to happen.
Starting point is 00:26:37 And there's this young guy on this list, his name is Eliezer Yudkowsky. And he claims he can write this AI. And he would write really long essays about how to build this AI. Suspiciously, he never really publishes code. And it's all just prose about how he's going to be able to build AI. Anyways, he's able to fundraise. They started this thing called the Singularity Institute.
Starting point is 00:27:02 A lot of people were excited about the future, kind of invested in him, Peter Thiel most famously. And he spent a few years trying to build an AI. Again, never published code, never published any real progress. And then came out of it saying that not only can you not
Starting point is 00:27:20 build AI, but if you build it, it will kill everyone. So he kind of switched from being this optimist, singularity is great, to like actually AI will for sure kill everyone. And then he was like, okay, the reason I made this mistake is
Starting point is 00:27:36 because I was irrational. And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called Less Wrong. And Less Wrong walks you through steps to becoming more rational. Look at your biases, examine yourself,
Starting point is 00:27:52 sit down, meditate on all the irrational decisions you've made and try to correct them. And then they start this thing called the Center for Applied Rationality, or something like that, CFAR. And they're giving seminars about rationality. But the intention- What's a seminar about rationality?
Starting point is 00:28:09 What's that like? I've never been to one, but my guess would be they will talk about the biases or whatever. But they have also like weird things where they have this almost struggle session like thing called debugging. A lot of people wrote blog posts
Starting point is 00:28:23 about how that was demeaning and how it caused psychosis in some people. In 2017, in that community, there was, like, a collective psychosis. A lot of people were kind of going crazy, and it's all written about on the internet. Debugging, so that would be like kind of your classic cult technique
Starting point is 00:28:40 where you have to strip yourself bare, like auditing and Scientology. It's very common. Yes. It's a constant in cults. Yes. Is that what you're describing? Yeah, I mean, that's what I read on these accounts.
Starting point is 00:28:53 They will sit down and they will audit your mind and tell you where you're wrong and all of that. And it's caused people huge distress. Young guys all the time talk about how going into that community has caused them huge distress. And there were, like, offshoots of this community where there were suicides, there were murders. So now you're a group: we're all rational now, we learned the art of rationality, and we agree that AI is going to kill everyone. Therefore, everyone outside of this group is wrong, and we have to protect them.
Starting point is 00:29:36 AI is going to kill everyone. But also, they believe other things. They believe that polyamory is rational. Polyamory? Yeah, like, uh, you can have sex with multiple partners, essentially. But they think that's, I mean, I think it's, um, certainly a natural desire if you're a man to sleep with more and different women, for sure. But it's rational in the sense, how? Like, you've never met a happy polyamorous couple long-term. I've known a lot of them, not a single one.
Starting point is 00:30:10 It might be self-serving. You think? To recruit more impressionable people into... Yeah, and their hot girlfriends. Yes. Right. So that's rational? Yeah, supposedly.
Starting point is 00:30:24 And so they convince each other of all this cult-like behavior. And the crazy thing is this group ends up being super influential, because they recruit a lot of people that are interested in AI. And the AI labs and the people who are starting these companies were reading all this stuff. So Elon famously read a lot of Nick Bostrom, who's kind of an adjacent figure to the rationality community. He was part of the original mailing list. I think he would call himself a part of the rationalist community. But he wrote a book about AI, Superintelligence, about how AI is going to, you know, kill everyone, essentially. I think he's moderated his views more recently, but originally he was one of the people kind of sounding the alarm.
Starting point is 00:31:12 And, you know, the foundation of OpenAI was based on a lot of these fears. Like Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so they, you know, group of people.
Starting point is 00:31:26 I don't think everyone at OpenAI really believed that, but, you know, some of the original founding story was that. And they were recruiting from that community. So much so when, you know, Sam Altman got fired recently, he was fired by someone from that community. Someone who started with effective altruism, which is another offshoot from that community. Really?
Starting point is 00:31:51 And so the AI labs are intermarried in a lot of ways with this community. And so it ends up, they kind of borrowed a lot of their talking points. But by the way, a lot of these companies are great companies now, and I think they're cleaning up house. But there is, I mean, I'll just use the term.
Starting point is 00:32:09 It sounds like a cult to me. Yeah. I mean, it has the hallmarks of it in your description. Yeah. And can we just push a little deeper on what they believe? You say they are transhumanists. Yes.
Starting point is 00:32:20 What is that? Well, I think they're just unsatisfied with human nature, unsatisfied with the current ways we're constructed and that we're irrational, we're unethical. And so they long for the world where we can become more rational, more ethical by transforming ourselves,
Starting point is 00:32:44 either by merging with AI via chips or what have you, changing our bodies, and, like, fixing fundamental issues. A lot of those people I have known are not that smart, actually. Because the best things, I mean, reason is important, and it was, in my view, given to us by God, and it's really important, and being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational. I believe so too. There is no rational justification for giving something you need to another person. Yes. For spending an inordinate amount of time helping someone, for loving someone. Those are all irrational. Now, banging someone's hot girlfriend, I guess that's rational, but that's kind of the lowest impulse that we have, actually. Well, wait till you hear about effective altruism. So, they think our natural impulses that you just talked about
Starting point is 00:33:48 are indeed irrational. And there's a guy, his name is Peter Singer, a philosopher from Australia. The infanticide guy. He's so ethical, he's for killing children. Yeah, I mean, so their philosophy is utilitarianism, is that you can calculate ethics,
Starting point is 00:34:04 and you can start to apply it, and you get into really weird territory. Like, you know, there are all these problems, all these thought experiments. Like, you know, you have two people at the hospital requiring some organs of another,
Starting point is 00:34:20 third person, who came in for a regular checkup, or they will die. Ethically, you're supposed to kill that guy, take his organs, and put them into the other two. And so it gets, I don't think people believe that per se. I mean, but there are so many problems with that.
Starting point is 00:34:45 There's another belief that they have. Can I say that belief or that conclusion grows out of the core belief, which is that you're God. It's like a normal person realizes, sure, it would help more people if I killed that person and gave his organs to a number of people.
Starting point is 00:35:01 That's just a math question. True. But I'm not allowed to do that because I didn't create life. I don't have the power. I'm not allowed to make decisions like that because I'm just a silly human being who can't see the future and is not omnipotent because I'm not God. And I feel like all of these conclusions stem from the misconception that people are gods. Yes. I agree. Does that sound right? No, I agree. I mean, a lot of the, I think it's,
Starting point is 00:35:30 you know, they're at root, they're just fundamentally unsatisfied with humans and maybe perhaps hate humans. Well, they're deeply disappointed. Yes. I think that's such a, I've never heard anyone say that as well, that they're disappointed with human nature.
Starting point is 00:35:48 They're disappointed with the human condition. They're disappointed with people's flaws. And I feel like that's the, I mean, on one level, of course, I mean, you know, we should be better. And, but that we used to call that judgment, which we're not allowed to do, by the way. That's just super judgy, actually. What they're saying is, you know, you suck. And it's just a short hop from there to you should be killed, I think. I mean, that's a total lack of love.
Starting point is 00:36:14 Whereas a normal person, a loving person says, you kind of suck. I kind of suck too. Yes. But I love you anyway. And you love me anyway. And I'm grateful for your love, right? That's right. That's right.
Starting point is 00:36:25 Well, they'll say, you suck. Join our rationality community. Have sex with us. So, but. Can I just clarify? These aren't just, like, you know, support staff at these companies. Like, are there... So, you know, you've heard about SBF and FTX.
Starting point is 00:36:41 Of course, yeah. They had what's called a polycule. Yeah. Right? They were all having sex with each other. Now, I just want to be super catty and shallow, but given some of the people
Starting point is 00:36:50 they were having sex with, that was not rational. No rational person would do that. Come on now. Yeah, that's true. Yeah. Well, so, you know, what's even more disturbing, there's another ethical component to their philosophy called long-termism.
Starting point is 00:37:13 And this comes from the effective altruist sort of branch of rationality. Long-termism? Long-termism. And so what they think is, in the future, if we make the right steps, there are going to be a trillion humans, a trillion minds. They might not be humans, they might be AI, but there are going to be a trillion minds
Starting point is 00:37:32 who can experience utility, can experience good things, fun things, whatever. If you're utilitarian, you have to put a lot of weight on it. And maybe you discount that, sort of like discounted cash flows. But you still have to posit that if there are trillions, perhaps many more people in the future,
Starting point is 00:37:55 you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety because they think that AI, because they're rational, they arrived, and we can talk about their arguments in a second, they arrived at the conclusion that AI is going to kill everyone. Therefore, effective altruists and rational community, all these branches, they're all kind of focused on AI safety because that's the most important thing because we want a trillion people in the future to be great. But when you're assigning sort of value that high, it's sort of a form of Pascal's wager. It is sort of, you can justify anything, including terrorism, including doing really bad things. If you're really convinced that AI is going to kill everyone
Starting point is 00:38:48 and the future holds so much value, more value than any living human today has, you might justify really doing anything. And so built into that, it's a dangerous framework. But it's the same framework of every genocidal movement. Yes.
Starting point is 00:39:10 From, you know, at least the French Revolution to present. Yes. A glorious future justifies a bloody present. Yes.
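A note for readers: the expected-value logic described above can be made concrete with a back-of-the-envelope sketch. The population figure and discount factor below are illustrative assumptions, not anyone's published estimates.

```latex
% Illustrative long-termist arithmetic: N assumed future minds, heavily
% discounted, still dominate the roughly 8 x 10^9 people alive today.
\[
  \underbrace{10^{13}}_{\text{assumed future minds}}
  \times
  \underbrace{10^{-3}}_{\text{discount / probability}}
  \;=\; 10^{10}
  \;>\; 8 \times 10^{9}
\]
```

Even after a thousandfold discount, the imagined future outweighs everyone now living, which is why the guest compares the framework to Pascal's wager: on paper it can justify almost any present-day action.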
Starting point is 00:39:19 And look, I'm not accusing them of genocidal intent, by the way. I don't know them, but those ideas lead very quickly to the camps. I feel kind of weird just talking about people; just generally, I'd like to talk about ideas, about things.
Starting point is 00:39:25 But if they were just, like, you know, a silly Berkeley cult or whatever, and they didn't have any real impact in the world, I wouldn't care about them. But what's happening is that they were able to convince a lot of billionaires of these ideas. I think Elon maybe changed his mind, but at some point he was convinced of these ideas. I don't know if he gave them money. There was a story at some point in the Wall Street Journal that he was thinking about it. But a lot of other billionaires gave them money, and now they're organized, and they're in D.C. lobbying for AI regulation.
Starting point is 00:40:03 They're behind the AI regulation in California. And actually profiting from it; there was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks,
Starting point is 00:40:19 started a company at the same time that certifies the safety of AI. And as part of the bill, it says that you have to get certified by a third party. So there's aspects of it that are kind of, let's profit from it. By the way, this is all allegedly based on this article. I don't know for sure.
Starting point is 00:40:38 I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let's call them. And they're very well organized. And also, a lot of them still have connections to the big AI labs and some of the work there. And they would want to create a situation
Starting point is 00:41:02 where there's no competition in AI, regulatory capture, per se. And so, I'm not saying that these are, like, the direct motivations. All of them are true believers. But, you know, you might kind of infiltrate this group and kind of direct it in a way that benefits these corporations. Yeah. Well, I'm from DC, so I've seen a lot of instances where, you know, my bank account aligns with my beliefs. Thank heaven. This kind of happens. It winds up that way. It's funny. Climate is the perfect example. There's never one climate solution that makes the person who proposes it poorer or less powerful.
Starting point is 00:41:32 Exactly. Ever. Not one. We've told you before about Hallow. It is a great app that I am proud to say I use, my whole family uses. It's for daily prayer and Christian meditation. And it's transformative. As we head into the start of school and the height of election season, you need it. Trust me, we all do. Things are going to get crazier and crazier and crazier.
Starting point is 00:42:00 Sometimes it's hard to imagine even what is coming next. So with everything happening in the world right now, it's hard to imagine even what is coming next. So with everything happening in the world right now, it is essential to ground yourself. This is not some quack cure. This is the oldest and most reliable cure in history. It's prayer. Ground yourself in prayer and scripture every single day. That is a prerequisite for staying sane and healthy and maybe for doing better eternally. So if you're busy on the road, headed to kids sports, there's always time to pray and reflect alone or as a family, but it's hard to be organized about it. Building a foundation
Starting point is 00:42:36 of prayer is going to be absolutely critical as we head into November, praying that God's will is done in this country and that peace and healing come to us here in the United States and around the world. Christianity obviously is under attack everywhere. That's not an accident. Why is Christianity, the most peaceful of all religions, under attack globally? Did you see the opening of the Paris Olympics?
Starting point is 00:42:59 There's a reason. Because the battle is not temporal. It's taking place in the unseen world. It's a spiritual battle, obviously. So try Hallow. Get three months completely free at Hallow. That's hallow.com slash Tucker. If there's ever a time to get spiritually in tune and ground yourself in prayer, it's now.
Starting point is 00:43:19 Hallow will help. Personally and strongly and totally sincerely recommended. Hallow.com slash Tucker. So the people trying to wreck our civilization want you to be passive. They want you weak so they can control you. Weakness is their goal. No thanks. Our friends at Beam, a proud American company,
Starting point is 00:43:42 understand that our country can only be great if its people are strong. And that's why they've created a new creatine product to help listeners like you stay mentally sharp and physically fit. People like to mock creatine. CNN doesn't like creatine at all. But people buy it because it works. Beam's creatine can help you improve your strength, your brain health, your longevity. It's completely free of sugar and synthetic garbage that's in almost everything else that you eat. Of course, you don't hear about it too much because, again, a population that is strong, clear-minded, and physically capable is a threat to tyrants. That's why they want you playing video games. To celebrate American strength, actual American strength,
Starting point is 00:44:26 Beam is offering up to 30% off their best-selling creatine for the next 48 hours. Go to shopbeam.com slash tucker. Use the code tucker at checkout. That's shopbeam, B-E-A-M dot com slash tucker. Use the code tucker for up to 30% off. It's built on core values, integrity, results, no BS, beam. We strongly recommend it.
Starting point is 00:45:01 I wonder like about the core assumption, which I've had up until right now, that these machines are capable of thinking. Yeah. Is that true? So let's go through their chain of reasoning. I think the fact that it's a stupid cult-like thing or perhaps actually a cult does not automatically mean that their arguments are totally wrong. That's exactly right.
Starting point is 00:45:30 I think you do have to discount some of the arguments because it comes from crazy people. But the chain of reasoning is that humans are general intelligence. We have these things called brains. Brains are computers.
Starting point is 00:45:47 They're based on purely physical phenomena, and what they're doing is computing. And if you agree that humans are computing, and therefore we can build a general intelligence in a machine, and if you agree up till this point, that you're able to build a general intelligence in a machine, even if only at human level,
Starting point is 00:46:11 then you can create a billion copies of it. And then it becomes a lot more powerful than any one of us. And because it's a lot more powerful than any one of us, it would want to control us or it would not care about us because it's more powerful, kind of like, we don't care about ants, we'll step on ants, no problem. Right. Because these machines are so powerful, they're not going to care about us. And I sort of get
Starting point is 00:46:38 off the train at the first link in the chain of reasoning. But every one of those steps I have problems with. The first step is that the mind is a computer. And based on what? And the idea is, oh, well, if you don't believe that the mind is a computer, then you believe in some kind of spiritual thing.
Starting point is 00:47:00 Well, you have to convince me. You haven't presented an argument. But the idea that like- Speaking of rational, by the way, this is what reason looks like. Right. The idea that we have a complete description of the universe anyways is wrong, right? We don't have a universal physics.
Starting point is 00:47:22 We have physics of the small things. We have physics of the big things. We can't really cohere them or combine them. So even being a committed materialist is sort of incoherent, because we don't have a complete description of the world. That's one thing. That's a side argument. I'm not going to dwell on it. It's a very interesting argument, though.
Starting point is 00:47:38 So you're saying, as someone, I mean, you're effectively a scientist: just state, for viewers who don't follow this stuff, the limits of our knowledge of physics. Yeah. So, you know, we have essentially two conflicting theories of physics, quantum mechanics for the very small and general relativity for the very large. These systems can't really be married. They're not one universal system. You can't use them both at the same time.
Starting point is 00:48:01 Well, that suggests a profound limit to our understanding of what's happening around us in the natural world. Does it? Yes, it does. And I think this is, again, another error of the rationalist types: they just assume that we are so much more advanced in our science than we actually are.
Starting point is 00:48:20 So it sounds like they don't know that much about science. Yes. Okay. Thank you. Thank you. I'm sorry to ask you to pause. Yeah, that's not even the main crux of my argument. There is a philosopher slash mathematician slash scientist, wonderful.
Starting point is 00:48:36 His name is Sir Roger Penrose. I love how the British kind of give the Sir title when someone is accomplished. He wrote this book called The Emperor's New Mind. It's based on The Emperor's New Clothes, the idea that the emperor is kind of naked. And in his opinion, the argument that the mind is a computer is a sort of consensus argument that is wrong. The emperor is naked. It's not really an argument. It's an assertion.
Starting point is 00:49:08 Yes. It's an assertion that is fundamentally wrong. And the way he proves it is very interesting. In mathematics, there's something called Gödel's incompleteness theorem. And what that says is that there are statements that are true that can't be proved in mathematics. So Gödel constructs a number system where he can start to make statements about the number system itself. He creates a statement that says, this statement is unprovable in system F, where F is the whole system. Well, if you try to prove
Starting point is 00:49:51 it, then that statement becomes false. But you know it's true, because it's unprovable in the system. And Roger Penrose says, because we have this knowledge that it is true by looking at it, despite the fact that we can't prove it, I mean, the whole feature of the sentence is that it is unprovable.
Starting point is 00:50:14 Therefore, our knowledge is outside of any formal system. Therefore, the human brain, or, like, our mind, is understanding something that mathematics is not able to give us. To describe. To describe. And I thought, the first time I read it, you know, I'd read a lot of these things. What's the famous one, you were telling me last night, I'd never heard it, the Bertrand Russell self-canceling assertion? Yeah. It's like, this statement is false.
Starting point is 00:50:45 It's called the liar paradox. Explain why. That's just, that's going to float in my head forever. Why is that a paradox? So, take "this statement is false." If you look at the statement and agree with it, then it's true.
Starting point is 00:50:58 But if it's true, then what it says holds, so it's false. And you go through the circular thing and you never stop. Right. It broke logic in a way. Yes.
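For readers who want the two sentences side by side, here is a minimal formal sketch. The notation, F for the formal system and Prov for its provability predicate, is shorthand for what the speakers describe, not their wording.

```latex
% Liar sentence: asserts its own falsity; no consistent truth value.
\[ L \;\leftrightarrow\; \neg L \]
% Goedel sentence: asserts its own unprovability within system F.
% If F is consistent, G is true but cannot be proved inside F.
\[ G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner) \]
```

The key difference is that the Gödel sentence trades falsity for unprovability: instead of breaking logic, it marks out a true statement the system cannot reach, which is the gap Penrose's argument leans on.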
Starting point is 00:51:08 And Bertrand Russell spent, you know, a big part of his life writing this book, Principia Mathematica. And he wanted to really prove that mathematics is complete, consistent, you know,
Starting point is 00:51:22 decidable, computable, all of that. And then all these things happen: Gödel's incompleteness theorem; Turing, the inventor of the computer. Actually, this is the most ironic
Starting point is 00:51:34 piece of science history that nobody ever talks about, but Turing invented the computer to show its limitations. So he invented the Turing machine, which is the idealized representation of the computers we have today. All computers are Turing machines.
Starting point is 00:51:50 And he showed that there is no machine that, given a set of instructions, can always tell whether those instructions will ever stop, will run and complete to a stop, or will continue running forever.
Starting point is 00:52:06 It's called the halting problem. And this proves that mathematics has undecidability. It's not fully decidable or computable. So all of these things were happening as he was writing the book. And it was really depressing for him, because he had set out to prove that, you know, mathematics is complete and all of that. And, you know, this caused kind of a major panic at the time among mathematicians and all of that.
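The argument being summarized here can be sketched in a few lines of Python. Everything below is hypothetical and for illustration: halts is the oracle Turing proved cannot exist, and trouble is the program that defeats it.

```python
# Sketch of Turing's diagonalization argument. Assume, for contradiction,
# an oracle halts(program, argument) that returns True exactly when
# program(argument) would eventually stop.

def halts(program, argument):
    """Hypothetical halting oracle -- Turing showed this cannot exist."""
    raise NotImplementedError

def trouble(program):
    # Do the opposite of whatever the oracle predicts about the
    # program being run on its own source.
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "runs forever" -> stop at once

# Now consider trouble(trouble):
# if halts(trouble, trouble) returns True, trouble(trouble) loops forever;
# if it returns False, trouble(trouble) halts immediately.
# Either answer is wrong, so no general halts() can be written.
```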
Starting point is 00:52:36 It's like, oh my God, like our systems are not complete. So, it sounds like the deeper you go into science and the more honest you are about what you discover, the more questions you have, which kind of gets you back to where you should be in the first place, which is in a posture of humility. Yes. And yet I see science used certainly in the political sphere. I mean, those are all dumb people. So it's like, who cares actually? Kamala Harris lectured me about science. I don't even hear it.
Starting point is 00:53:02 But also some smart people believe the science. The assumption behind that demand is that it's complete and it's knowable and we know it. And if you're ignoring it, then you're ignorant, willfully or otherwise, right? Well, my view of science: it's a method. Ultimately, it's a method. Anyone can apply it. It's democratic. It's decentralized. Anyone can apply the scientific method, including people who are not trained. But in order to practice the method, you have to come from a position of humility, that I don't know. That's right. And I'm using this method to find out. And I cannot lie
Starting point is 00:53:34 about what I observe, right? That's right. And today, you know, capital-S Science is used to control, and it's used to propagandize and lie, of course. But, you know, that's in the hands of, you know, just really people who shouldn't have power, just dumb people with,
Starting point is 00:53:52 you know, pretty ugly agendas. But we're talking about the world that you live in, which is, like, unusually smart people who do this stuff for a living and are really trying to advance the ball in science. And I think what you're saying is that some of them, knowingly or not, just don't appreciate how little they know.
Starting point is 00:54:10 Yeah. And they go through this chain of reasoning for this argument. And none of those steps are, at minimum, complete. And they just take it for granted. If you even doubt that the mind is a computer, I'm sure a lot of people will call me a heretic and will call me all sorts of names, because
Starting point is 00:54:34 it's just dogma. That the mind is a computer? That the mind is a computer is dogma in technology, science, all that. That's so silly. Yes. Well, I mean, let me count the ways the mind is different from a computer. First of all, you're not assured of a faithful representation of the past. Memories change over time, right?
Starting point is 00:54:54 In a way that's misleading and who knows why, but that is a fact, right? That's not true of computers. That's right. I don't think. Yeah. But how are we explaining things like intuition and instinct? Those are not...
Starting point is 00:55:09 That is actually my question. Could those ever be features of a machine? You could argue that neural networks are sort of intuition machines, and that's what a lot of people say. But neural networks, you know, and maybe I will describe them just for the audience,
Starting point is 00:55:28 neural networks are inspired by the brain. And the idea is that you can connect a network of small little functions, just mathematical functions, and you can train it by giving it examples. Let's say this network has to say yes if it's a cat and no if it's not a cat. I give it a picture of a cat, and if its answer is no, then it's wrong. You adjust the weights based on the difference
Starting point is 00:55:59 between the network's answer and the right answer. And you do this, I don't know, a billion times. And then the network encodes features of the cat. And this is literally exactly how neural networks work. You tune all these small parameters until there is some embedded feature detection, especially in classifiers, right? And this is not intuition.
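A toy version of the loop he's describing looks something like this; the two input "features," the numbers, and the learning rate are all made up for the sketch, not a real vision model.

```python
# Train one tiny linear unit to say cat (1.0) or not-cat (0.0).
# Each "image" is reduced to two invented features for illustration.
data = [
    ((0.9, 0.8), 1.0),   # furry, whiskers -> cat
    ((0.8, 0.9), 1.0),
    ((0.1, 0.2), 0.0),   # neither -> not a cat
    ((0.2, 0.1), 0.0),
]

w1, w2, b = 0.0, 0.0, 0.0   # the small parameters to be tuned
lr = 0.1                    # learning rate

for _ in range(1000):                  # "you do this a billion times"
    for (x1, x2), label in data:
        guess = w1 * x1 + w2 * x2 + b  # the network's answer
        error = label - guess          # difference from the right answer
        w1 += lr * error * x1          # nudge each weight to shrink
        w2 += lr * error * x2          # the error next time
        b += lr * error

print(w1, w2, b)  # the tuned weights now encode the "cat" features
```

The weights end up encoding whatever in the inputs best predicts the label, which is exactly why the failure he describes next can happen.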
Starting point is 00:56:29 This is basically automatic programming the way I see it. Right. Of course. So we can write code manually. You can go to our website,
Starting point is 00:56:40 write code. But we can generate algorithms automatically via machine learning. Machine learning essentially discovers these algorithms. And sometimes it discovers
Starting point is 00:56:53 very crappy algorithms. For example, all the pictures that we gave it of a cat had grass in them. So it would learn that grass equals cat. The color green equals cat. Yes.
Starting point is 00:57:09 And then one day you give it a picture of a cat without grass, and it fails. And you're like, what happened? It turns out it learned the wrong thing. So because it's obscure what it's actually learning, people interpret that as intuition. Because the algorithms are not
Starting point is 00:57:28 explicated. And there's a lot of work now on trying to explicate these algorithms, which is great work by companies like Anthropic. But, you know, I don't think you can call it intuition just because it's obscure. So what is it?
Starting point is 00:57:47 How is intuition different? Human intuition. Well, for one, we don't require a trillion examples of a cat to learn what a cat is. Good point. You know,
Starting point is 00:58:03 a kid can learn a language with very few examples. Right now, when we're training these large language models like ChatGPT, you have to give it the entire internet for it to learn language. And that's not really how humans work. The way we learn is we combine intuition and some more explicit way of learning. And I don't think we've figured out
Starting point is 00:58:28 how to do it with machines just yet. Do you think that structurally it's possible for machines to get there? So, you know, this chain of reasoning, I can go through every point and present arguments to the contrary, or at least present doubt, but no one is really trying to deal with those doubts. And my view is that I'm not holding these doubts very, very strongly.
Starting point is 00:59:07 But my view is that we just don't have a complete understanding of the mind. And you at least can't use it to argue that a kind of machine that acts like a human but much more powerful can kill us all. But do I think that AI can get really powerful? Yes. I think AI can get really powerful, can get really useful. I think functionally it can feel like it's general.
Starting point is 00:59:34 AI is ultimately a function of data. Whatever functionality it has is based on the data we put into it. So we can get very little functionality
Starting point is 00:59:43 outside of that. Actually, we don't get any functionality outside of that data. It's actually been proven that these machines are just a function of their data. They're the sum total of what you put in. Exactly. Garbage in, garbage out. The cool thing about them is they can mix and match different
Starting point is 00:59:59 functionalities that they learn from the data, so it looks a little bit more general. But let's say we collected all data of the world, we collected everything that we care about, and we somehow fit it into a machine and now everyone's building these really large data centers. You will get a very highly capable machine that will kind of look general
Starting point is 01:00:21 because we collected a lot of economically useful data, and it'll start doing economically useful tasks. And from our perspective, it will start to look general. So I'll call it functionally AGI. I don't doubt we're sort of headed in some direction like that. But we haven't figured out
Starting point is 01:00:42 how these machines can actually generalize and learn and use things like intuition, so that when they see something fundamentally new, outside of their data distribution, they can actually react to it correctly and learn it efficiently. We don't have the science for that. Because we don't have the understanding of it. Yes. On the most fundamental level, you began that explanation by saying we don't really understand the human brain. So how can we compare it to something when we don't even really know what it is?
Starting point is 01:01:09 And there are a couple of... There's a machine learning scientist, François Chollet, I don't know how to pronounce French names, but I think that's his name. He created a sort of IQ-like test, you know, where you're rotating shapes and whatever. And an entrepreneur put up a
Starting point is 01:01:27 million dollars for anyone who's able to solve it using AI. And all the modern AIs that we think are super powerful couldn't do something that like a 10-year-old kid could do. And it showed that, again, those machines are just functions of the data. The moment you throw a problem that's novel at them, they really are not able to do it. Now, again, I'm not fundamentally discounting the fact that we'll get there, but just the reality of where we are today, you can't argue that we're just going to put more compute
Starting point is 01:01:59 and more data into this and suddenly it becomes God and kills us all. Because that's the argument, and they're going to DC, and they're going to all these places, and they're spinning up regulation. This regulation is going to hurt American industry. It's going to hurt startups. It's going to make it hard to compete.
Starting point is 01:02:15 It's going to give China a tremendous advantage. It's going to really hurt us, based on these flawed arguments, while they're not actually grappling with these real questions. It sounds like they're not. And what gives me pause is not so much the technology, it's the way that the people creating the technology understand people. So I think the wise and correct way to understand people is as not self-created beings. People did
Starting point is 01:02:41 not create themselves. People cannot create life. They are beings created by some higher power, who at their core have some kind of impossible-to-describe spark, a holy mystery. And for that reason, they cannot be enslaved or killed by other human beings. That's wrong.
Starting point is 01:03:00 There is right and wrong. That is wrong. I mean, lots of gray areas. That's not a gray area because they're not self-created. Yes. Right. I think that all humane action flows from that belief and that the most inhumane actions in history flow from the opposite belief, which is people are just objects that can and should be improved and I have full power over them. Like that's a totalitarian mindset
Starting point is 01:03:25 and it's the one thing that connects every genocidal movement is that belief. So it seems to me as an outsider that the people creating this technology have that belief. Yeah, and you don't even have to be spiritual to have that belief. Look, I- You certainly don't.
Starting point is 01:03:40 Yeah, yeah. So- I think that's actually a rational conclusion based on- I 100% agree. I'll give you one interesting anecdote, again, from science. If you believe in evolution and all that, we have had brains for half a billion years, right? And we've had kind of a human-like species for, you know, half a million years, perhaps more,
Starting point is 01:04:06 perhaps a million years. There's a moment in time, 40,000 years ago, it's called the Great Leap Forward, where we see culture, we see religion, we see drawings, we saw very little of that before that, tools and whatever. And suddenly we're seeing this Cambrian explosion of culture. Right.
Starting point is 01:04:31 And... Pointing to something larger than just daily needs or the world around them. Yeah. And we're still not able to explain it. David Reich wrote this book. It's called Who We Are and How We Got Here. In it, he talks about trying to look for the genetic mutation that happened, that potentially created this explosion. And they have some idea of what it could be and some candidates,
Starting point is 01:04:56 but they don't really have it right now. But you have to ask the question: what happened 30 or 40,000 years ago, right? Where it's clear, I mean, it's indisputable, that the people who lived during that period were suddenly grappling with metaphysics. Yes. They're worshiping things. There's a clear separation between, again, the animal brain and the human brain. And it's clearly not computation. Like, we didn't suddenly grow a computer in our brain. It's something else that happened. But what's so interesting is that the instinct of modern man is to look for
Starting point is 01:05:32 something inside the person that caused that. Whereas I think the very natural and more correct instinct is to look for something outside of man that caused that. I'm open to both. Yeah. I mean, I don't know the answer. I mean,
Starting point is 01:05:43 of course I do know the answer, but I'll just pretend I don't. But at very least, both are possible. So if like you confine yourself to looking for a genetic mutation or change,
Starting point is 01:05:55 genetic change, then, you know, you're sort of closing out. That's not an empiricist, a scientific way of looking at things, actually.
Starting point is 01:06:02 You don't foreclose any possibility, right? Yeah. Science? You can't. Right. Sorry. Yeah. Science? You can't. Right. Sorry.
Starting point is 01:06:07 Yeah. And that's very interesting. So, you know, with these machines, I'm betting my business on AI getting better and better and better, and it's going to make us all better. It's going to make us all more educated. Okay, so now's the time for you to tell me why I should be excited about something I've been hearing about.
Starting point is 01:06:32 Yeah. So this technology, large language models, where we kind of fed a neural network the entire internet, it has capabilities mostly around writing, around information lookup, around summarization, around coding. It does a lot of really useful things,
Starting point is 01:06:57 and you can program it to kind of pick and match between these different skills. You can program these skills using code. And so the kinds of products and services that you can build with this are amazing. So, one of the things I'm most excited about is this application of the technology: there's this problem called Bloom's two-sigma problem.
Starting point is 01:07:21 There's this scientist who was studying education, and he was looking at different interventions to try to get kids to learn better or faster, or have just better educational outcomes. And he found something kind of bad, which is that there's only one thing you could do to move kids, not in a marginal way, but by two standard deviations from the norm, like in a big way, better than 98% of the other kids: one-on-one tutoring, using a type of learning called mastery learning.
Starting point is 01:08:03 One-on-one tutoring is the key formula there. That's great. I mean, we discovered the solution to education. We can up-level everyone, all humans on earth. The problem is, we don't have enough teachers to do one-on-one tutoring. It's very expensive. No country in the world can afford that.
Starting point is 01:08:24 So now we have these machines that can talk, that can teach, that can present information, that you can interact with in a very human way. You can talk to it. It can talk back to you, right? And we can build AI applications to teach people one-on-one. And you can serve 7 billion people with that, and everyone can get smarter. I'm totally for that. I mean, that was the promise of the
Starting point is 01:08:56 internet; it didn't happen. So I hope this... I was going to save this for last, but I can't control myself. I just know, being from DC, that when the people in charge see new technology, the first thing they think of is, how can I use this to kill people? So what are the military applications
Starting point is 01:09:16 potentially of this technology? You know, that's one of the other reasons I'm sort of very skeptical of this lobbying effort to get government to regulate it. Because I think the biggest offender in abusing this technology would probably be government. You think? I watched your interview with Jeffrey Sachs, who's a Columbia professor, very mainstream. And I think he was assigned to a Lancet study of COVID origins or whatever.
Starting point is 01:09:49 And he arrived at a very, at the time, heterodox view: that it was created in a lab and was created by the US government. And so, you know, the government is supposed to protect us from these things. And now they're talking about pandemic readiness and whatever. Well, let's talk about how we watch what the government is doing. How do we actually have democratic processes to ensure that you're not the one abusing these technologies? Because they're going to regulate it. They're going to make it so that everyday people are not going to be able to use these things. And then they're going to have free rein on how to abuse these things.
Starting point is 01:10:27 Just like with encryption. Right. Encryption is another one. That's right. But they've been doing that for decades. Yes. Like we get privacy, but you're not allowed it because we don't trust you. Right. But by using your money and the moral authority that you gave us to lead you, we're going to hide from you everything we're doing
Starting point is 01:10:45 and there's nothing you can do about it. I mean, that's the state of America right now. Yeah. So how would they use AI to further oppress us? I mean, you can use it in all sorts of ways, like autonomous drones. We already have autonomous drones. They can get a lot worse. You know, there's a video on the internet where, like, a Chinese guard
Starting point is 01:11:06 or whatever was walking with a robotic dog, and the robotic dog had a gun mounted on it. And so you can have robotic dogs with guns mounted on them. A little sci-fi, but there it is. As a dog lover, that's so offensive to me. It is kind of offensive. Yeah. In a world increasingly defined by deception and the total rejection of human dignity, we decided to found the Tucker Carlson Network and we did it with one principle in mind. Tell the truth. You have a God-given right to think for yourself.
Starting point is 01:11:40 Our work is made possible by our members. So if you want to enjoy an ad-free experience and keep this going, join TCN at tuckercarlson.com slash podcast. There was this huge expose in this magazine called +972 about how Israel was using AI to target suspects but ended up killing huge numbers of civilians. It's called Lavender. A very interesting piece.
Starting point is 01:12:24 So the technology wound up killing people who were not even targeted? Yes. It's pretty dark. What about surveillance? I think this recent AI
Starting point is 01:12:41 boom, I think it could be used for surveillance. I'm not sure if it gives a special advantage. I think they can get the advantage, again, if these lobbying groups are successful. Part of their ideal outcome
Starting point is 01:12:58 is to make sure that no one is training large language models. And to do that, you would need to insert a surveillance apparatus at the compute level. And so perhaps that's very dangerous: our computers would, like, spy on us to make sure we're not training AIs. I think the kind of AI that's really good at surveillance is the vision AI, which China perfected.
Starting point is 01:13:26 That's been around for a while now. I'm sure there are ways to abuse language models for surveillance, but I can't think of one right now. What about manufacturing? It would help with manufacturing. Right now, people are figuring out, and I invested in a couple of companies doing this, how to apply this technology, foundation models, to robotics. It's still early science, but you might have a huge advancement in robotics if we're able to apply this technology to it. So the whole point of technology is to replace human labor, either physical or mental, I think. I mean, historically, that's what it did. You know, the steam engine replaced the arm, et cetera, et cetera. So if this is as transformative as it appears to be, you're going to have a lot of idle people. And that's, I think, the concern that led a lot of your friends and colleagues to support UBI, universal basic income.
Starting point is 01:14:28 Like, there's nothing for these people to do, so we just got to pay them to exist. You said you're opposed to that. I'm adamantly opposed to that. On the other hand, like, what's the answer? Yeah. So, you know, there's two ways to look at it. We can look at the individuals that are losing their jobs, which is tough and hard. I don't really have a good answer.
Starting point is 01:14:45 But we can look at it from a macro perspective. And when you look at it from that perspective, for the most part, technology created more jobs over time. Before alarm clocks, we had this job called the knocker-upper, who comes to your room, you kind of pay them,
Starting point is 01:15:02 it was like, come every day at like 5 a.m., and they knock on your window or ring the village bell, right? And, you know, that job disappeared, but we had,
Starting point is 01:15:11 you know, 10 times more jobs in manufacturing, or perhaps, you know, 100 or 1,000 more jobs in manufacturing. And so,
Starting point is 01:15:20 overall, I think the general trend is technology just creates more jobs. And so I'll give you a few examples of how AI can create more jobs. Actually, it can create more interesting jobs. Entrepreneurship is like a very American thing, right?
Starting point is 01:15:36 It's like America is the entrepreneurship country. But actually, new firm creation has been going down for a long time, for decades now. It's just been going down. Although we have all this excitement around startups or whatever, Silicon Valley is the only place that's still producing startups. In the rest of the country, there isn't as much startup or new firm creation, which is kind of sad, because again, the internet was supposed to be this great wealth creation engine that anyone has access to.
Starting point is 01:16:05 But the way it turned out is like it was concentrated in this one geographic area. Well, it looks, I mean, in retrospect, it looks like a monopoly generator, actually. Yeah. But again, it doesn't have to be that way. And the way I think AI would help is that it will give people the tools to start businesses. Because you have this easily programmable machine that can help you with programming. I'll give you a few examples.
Starting point is 01:16:29 There's a teacher in Denver who, during COVID, was a little bored and went to our website. We have a free course to learn how to code. And he learned a bit of coding. And he used his knowledge as a teacher to build an application that helps teachers use AI to teach.
Starting point is 01:16:49 And within a year, he built a business that's worth tens of millions of dollars, that's bringing in a huge amount of money. I think he raised $20 million. And that's a teacher who learned how to code and created this massive business
Starting point is 01:17:03 really quickly. We have stories of photographers doing millions of dollars in revenue. So, you know, AI will decentralize access to this technology. So there's a lot of ways in which you're right,
Starting point is 01:17:19 technology tends to centralize, but there's a lot of ways, which people don't really look at, in which technology can decentralize. Well, that promise makes sense to me.
Starting point is 01:17:28 I just firmly want it to become a reality. We have a mutual friend, I won't say his name, he's so smart, and a good,
Starting point is 01:17:34 humane person, who's very deep into the subject and participates in it. And he said to me, well, one of the promises of AI
Starting point is 01:17:44 is that it will allow people to have virtual friends or mates, that it will, you know, solve the loneliness problem that is clearly a massive problem in the United States. And I felt like, I don't want to say it because I like him so much, but that seemed really bad to me. Yeah, I'm not interested in those.
Starting point is 01:18:07 I think we have the same intuition about what's dark and dystopian versus what's cool. He's a wonderful person, but I just don't think he's thought about it or I don't know what, but we disagree. I don't even disagree. I don't have an argument. It's just an instinct,
Starting point is 01:18:21 but people should be having sex with people, not machines. That's right. I would go so far as to say some of these applications are a little unethical, like preying on lonely men with no
Starting point is 01:18:38 opportunities for a mate. It will make it so that they're actually not motivated to go out and date and get an actual girlfriend. Like porn, 10x.
Starting point is 01:18:50 Yes. Yes. And I think that's really bad. That's really bad for society. And so I think the application, look, you can apply this technology in a positive way
Starting point is 01:18:58 or you can apply it in a negative way. You know, I would love it if this doom cult were instead trying to make it so that AI is applied in a positive way.
Starting point is 01:19:09 If we had a cult that was like, oh, we're going to lobby, we're going to go out and make it so that AI is a positive technology, I'd be all for that. And by the way, there are times in history where the culture self-corrects, right? I think there's some self-correction on porn
Starting point is 01:19:32 that's happening right now. You know, fast food, right? I mean, you know, just generally junk. Right. You know, everyone is like, Whole Foods is like high status now. Like you eat Whole Foods, there's a place called Whole Foods you can go to.
Starting point is 01:19:46 That's right. And people are interested in eating healthy. Chemicals in the air and water. Another thing that was a very esoteric concern even 10 years ago was only the wackos. It was Bobby Kennedy cared about that. No one else did. Now that's like a feature of normal conversation. Yes.
Starting point is 01:20:00 Everyone's worried about microplastics in the testicles. That's right. Which is, I think, a legitimate concern. Absolutely. So what, I'm not surprised that there are cults in Silicon Valley. I don't think you named the only one. I think there are others. That's my sense.
Starting point is 01:20:13 And I'm not surprised because, of course, every person is born with the intuitive knowledge that there's a power beyond himself. That's why every single civilization has worshipped something. And if you don't acknowledge that, you just, it doesn't change. You just worship something even dumber. Yeah. But so my question to you
Starting point is 01:20:28 as someone who lives and works there is, what percentage of the people who are making decisions in Silicon Valley will say out loud, you know, not I'm a Christian, Jew, or Muslim, but that like, I'm not, you know, there is a power bigger than me in the universe. Do people think that?
Starting point is 01:20:42 Do they acknowledge that? You know, for the most part, no. I thought so. Yeah, I don't want to say most people, but, you know, the vast majority of the discussions tend to be more intellectual.
Starting point is 01:20:57 I think people just take for granted that everyone has like a secular, mostly secular point of view. Well, I think that, you know, the truly brilliant conclusion is that we don't know a lot and we don't have secular point of view. Well, I think that, you know, the truly brilliant conclusion is that we don't know a lot and we don't have a ton of power. That's my view.
Starting point is 01:21:10 Right, right. So the actual intellectual, over time, if he's honest, will reach it. But this is the view of many scientists and many people who really went deep. I mean, I don't know who said it. I'm trying to remember. But someone said the first gulp of science
Starting point is 01:21:24 makes you an atheist, but at the bottom of the cup, you'll find God waiting for you. Mattias Desmet wrote a book about this, supposedly about COVID. It was not about COVID. I just cannot recommend it more strongly. But the book is about the point you just made, which is: the deeper you go into science, the more you see some sort of order reflected that is not random at all. Yes. And a beauty exhibited even in math. And the less you know, the more you're certain that there's a design here, and that it's not human or, quote, natural; it's supernatural. That's his conclusion, and I affirm it.
Starting point is 01:22:12 But how many people do you know in your science world who think that? Yeah, I can count them on one hand, basically. How interesting. Yeah. That concerns me, because I feel like without that knowledge, hubris is inevitable. Yeah. And a lot of these conclusions are from hubris. The fact that there are so many people who believe that AI is an imminent existential threat. A lot of people believe that we're going to die.
Starting point is 01:22:36 We're all going to die in the next five years. It comes from that hubris. How interesting. Until I met you, I had never thought of that, that it is itself an expression of hubris. I never thought of that. Yeah, you can
Starting point is 01:22:52 go negative with hubris, you can go positive, and I think the positive thing is good. I think Elon is an embodiment of that. Just the self-belief that you can fly rockets and build electric cars is good, and maybe in some cases it's delusional, but
Starting point is 01:23:08 net net it will kind of put you on a good path for creation. I think it can go pathological if you're, for example, SBF, and again, he's kind of part of those groups. He just sort of
Starting point is 01:23:23 believed that he could do anything in service of his ethics, including steal and cheat and all of that. Yeah, I never really understood. Well, of course I understood too well, I think. But the obvious, observable fact is that effective altruism led people to become shittier toward each other,
Starting point is 01:23:47 not better. Yeah, I mean, it's such an irony, but I feel like it's in the name. If you call yourself such a grandiose thing, you're typically horrible. Like, the Islamic State is neither Islamic nor a state. The effective altruists are neither effective nor altruists. The United Nations is not united.
Starting point is 01:24:10 No, that's, boy, is that wise. So I don't think to your earlier point that any large language model or machine could ever arrive at what you just said. Because like the deepest level of truth is wrapped in irony always. And machines don't get irony, right? Not yet.
Starting point is 01:24:32 Could they? Maybe. I mean, I don't take as strong a stance as you do on the capabilities of the machines. I do believe that if you represent it... Well, I don't know. I mean, I'm asking. I really don't know what they're capable of.
Starting point is 01:24:46 Well, I think maybe they can't come up with really novel irony that is really insightful for us. But if you put a lot of irony in the data, they'll understand it. Right. They can ape human irony. They can ape. I mean, they're imitation machines. They're literally imitating. The way large language models are trained is that you give them a corpus of text, and they hide different words and try to guess them.
Starting point is 01:25:12 And then they adjust the weights of those neural networks. And then eventually they get really good at guessing what humans would say.
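A drastically simplified stand-in for that training process, just to make the mechanics concrete: this toy counts which word follows which instead of adjusting neural network weights, and the corpus is made up for illustration.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": absorb which word tends to follow which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    # Predict the word most often seen after `word` in the training data.
    return follows[word].most_common(1)[0][0]

print(guess_next("sat"))  # -> 'on'
print(guess_next("the"))  # -> 'cat' (a tie, broken by first occurrence)
```

Feed it a different corpus and you get different guesses; the model is nothing but a function of the text it was given, which is the point about being the sum total of what's put in.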
Starting point is 01:25:27 Well then, okay, so you're just kind of making the point unavoidable. If the machines, as you have said, are the sum total of what's put into them, and that would include the personalities and biases of the people putting the data in, then you want the best people, the morally best people, which is to say the most humble people, to be doing that. But it sounds like we have the least humble people doing that.
Starting point is 01:25:45 I think some people working in AI are really upstanding and good and want to do the right thing. But there are a lot of people with the wrong motivations coming at it from fear and things like that. This is the other point I will make, is that free markets are good because you're going to get all sorts of entrepreneurs
Starting point is 01:26:05 with different motivations. And I think what determines the winner is not always the ethics or whatever; it's the larger culture. What kind of product is the culture pulling out of you? Is it pulling the porn and the companion chatbots, whatever, versus pulling the education and the healthcare
Starting point is 01:26:29 and, I think, all the positive things that will make our lives better? I think that's really on the larger culture. I don't think we can regulate that with government or whatever. But if the culture creates demand for things that just make us worse as humans, then there are entrepreneurs that will spring up and serve that. That's totally right.
Starting point is 01:26:53 And it is a snake eating its tail at some point, because, of course, you serve the baser human desires and you create a culture that inspires those desires in a greater number of people. In other words, the more porn you have, the more porn people want, like actually. Yes. Yes.
Starting point is 01:27:27 I wonder about the pushback from existing industry, from the guilds.
Starting point is 01:28:13 So if you're the AMA, for example, you mentioned medical advances. That's something that makes sense to me for diagnoses, which really is just a matter of sorting the data, like what's most likely. That's right. And a machine can always do that more efficiently and more quickly than any hospital or individual doctor. So like, and diagnosis is like the biggest hurdle. Yes.
Starting point is 01:28:37 That's going to actually put people out of business, right? If I can just type my symptoms into a machine and get a much higher likelihood of a correct diagnosis than I would after three days at the Mayo Clinic, who needs the Mayo Clinic? I actually have a concrete story about that. I've dealt with a chronic issue for a couple of years. I spent hundreds of thousands of dollars on doctors out of pocket, got the world's experts and all that. Hundreds of thousands of dollars? Yes.
Starting point is 01:29:07 And they couldn't come up with the right diagnosis. And eventually, it took me writing a little bit of software to collect the data, but I ran the AI
Starting point is 01:29:15 once, and it gave me a diagnosis they hadn't looked at. And I went to them, and they were very skeptical of it. And then we ran the test, and it turns out it was the right diagnosis. Oh, that's incredible.
Starting point is 01:29:25 Yeah, it's amazing. It changed my life. That's incredible. But you had to write the software to get there. Yeah, a little bit of software. So we're not that far from having that publicly available. Right. And by the way, I think that anyone can write a little bit of software.
Starting point is 01:29:39 Right now at Replit, we are working on a way to generate most of the code for you. We have this program called 100 Days of Code. If you give it 20 minutes, doing a little bit of coding every day, in like three months you'll be a good enough coder to build a startup. I mean, eventually you'll get people working for you and you'll scale up and all of that, but you'll have enough skills. And in fact, I'll put a challenge out there: people listening to this,
Starting point is 01:30:06 if they go through this and they build something that they think could be a business or whatever, I'm willing to help them get it out there, promote it. We'll give them some credits and cloud services, whatever. Just tweet at me or something and mention this podcast and I'll help them.
Starting point is 01:30:20 What's your Twitter? Amasad, A-M-A-S-A-D. So, but there are a lot of entrenched interests. I mean, I don't want to get into the whole COVID
Starting point is 01:30:30 poison thing, but I'm revealing my biases. But, I mean, you saw it in action during COVID where, you know,
Starting point is 01:30:40 it's always a mixture of motives. Like, I do think there are high motives mixed with low motives, because that's how people are. You know, it's always a bouillabaisse of good and bad.
Starting point is 01:30:47 But to some extent, the profit motive prevailed over public health. Yes. That is, I think, fair to say. Yes. And so, if they're willing to hurt people to keep the stock price up, I mean, what's the resistance you're going to get to allowing people to come to a more accurate diagnosis with a machine for free? Yeah. In some sense, that's why I think open source AI, people learning how to do some of the stuff themselves, is probably good enough.
Starting point is 01:31:24 Of course, if there's a company that's building these services, it's going to do better. But just the fact that this AI exists and a lot of it is open source, you can download it on your machine and use it, is enough to potentially help a lot of people. By the way, you should always talk to your doctor.
Starting point is 01:31:39 I talk to my doctor. I'm not giving people advice to kind of figure out all this themselves, but I do think that it's already empowering. So that's sort of step one. But for someone like me, I'm not going to talk to a doctor until he apologizes to my face for lying for four years because I have no respect for doctors at all. I have no respect for anybody who lies, period. And I'm not taking life advice and particularly important life advice, like about my health from someone who's a liar. I'm just not doing that because I'm not insane. I don't take real estate advice
Starting point is 01:32:08 from homeless people. I don't take financial advice from people who are going to jail for fraud. So, I'm sure there's a doctor out there who would apologize, but I haven't met one yet. So for someone like me, who's just not going to a doctor until they apologize, this could be literally life-saving. Right. So to the question of whether there's going to be regulatory capture, I think that's why you see Silicon Valley getting into politics. Hmm. You know, Silicon Valley was never really that into politics. I remember I came in 2012. It was early on in my time. It was the Romney-Obama debate.
Starting point is 01:32:57 And I was... Can I just pause here? Imagine a debate between Romney and Obama who agree on everything. Yes. I didn't see a lot of daylight. And people were just like making fun of Romney. It was like he said something like
Starting point is 01:33:11 binders full of women and that kind of stuff, whatever. And I remember asking everyone around me, like, who are you with? And they were like, of course, Democrats. Like, of course. I was like, why isn't anyone here for Republicans?
Starting point is 01:33:26 And they're like, oh, because they're dumb. Only dumb people are going to vote for Republicans. And, you know, Silicon Valley was this one-party town, in a way. Actually, look, there's data on donations by company and by state. Netflix is like 99% to Democrats and 1% to Republicans. If you look up the diversity of parties in North Korea, it's actually a little better. Oh, of course it is. They have more choices there.
Starting point is 01:34:00 They have a more honest media, too. But anyways, I mean, you see now a lot of people are surprised that a lot of people in tech are going for Republicans, are going for Trump. In particular, Marc Andreessen and Ben Horowitz put out a two-hour podcast talking about- So they are the biggest venture capitalists in the United States, I think. I don't know on what metric you would judge, but they're certainly on their way to being the biggest. They're the most, I think the best, for sure. And
Starting point is 01:34:31 they put out a podcast, but I didn't, I should have watched it. I didn't. Yeah, so their reasoning for why they would vote for Trump. By the way, you know, they would have never done that in like 2018 or 19,
Starting point is 01:34:47 whatever. And so there's this vibe shift that's happening. How is it received? It's still mixed, but I think,
Starting point is 01:34:58 you know, way better than what would have happened 10 years ago. They would have been canceled and they would have, no one would ever like, no founder would take their money. But it's like,
Starting point is 01:35:06 I mean, again, I'm an outsider just watching, but Andreessen Horowitz is so big and so influential and they're considered smart and not at all crazy.
Starting point is 01:35:13 Yeah. That like, that's got to change minds if Andreessen Horowitz is doing it. Yeah, it would have certainly changed minds.
Starting point is 01:35:22 I think it at least gives people some courage to say, I'm for Trump as well. But I think it does change minds.
Starting point is 01:35:31 And the argument they put out, you know, they put out this agenda called Little Tech. You know, there's Big Tech, and they have their lobbying
Starting point is 01:35:37 and whatever. Who's lobbying for little tech? Like, smaller companies. Companies like ours, but much smaller too. Like, you know,
Starting point is 01:35:44 one-, two-person companies. And actually, no one is... Your company would be considered little? In Silicon Valley, yeah. I want a little company. Right. So, but you know,
Starting point is 01:35:59 let's call it really just startups that just started, right? Typically, no one is protecting them politically. No one's really thinking about it. And it's very easy to disadvantage startups, like you just talked about with healthcare regulation, where it's very easy to create regulatory capture
Starting point is 01:36:17 such that companies can't even get off the ground doing their thing. And so they came up with this agenda: we're going to be the firm that's going to be looking out for that little guy, little tech, right? Which I think is brilliant. And, you know, part of their argument for Trump is that, with AI, for example, the Democrats are really excited
Starting point is 01:36:44 about regulating AI. One of the most hilarious things that happened: I think Kamala Harris was invited to an AI safety conference. And they were talking about existential risk. And she was like, well, someone being denied healthcare, that's existential for them. Someone, whatever, that's existential. So she interprets existential risk as any risk being existential. And so, you know, that's just one anecdote. But there was also this anecdote where she was like, AI is a two-letter word.
Starting point is 01:37:17 And she clearly doesn't understand it very well. And they're moving very fast at regulating it. They put out an executive order that a lot of people think... I mean, the tweaks they've done so far from a user perspective to keep it safe are really just making sure it hates white people. Like, it's about pushing a dystopian,
Starting point is 01:37:39 totalitarian social agenda, racist social agenda on the country. Is that going to be embedded in it permanently? I think it's a function of the culture rather than the regulation. So I think the culture was sort of this woke culture broadly in America,
Starting point is 01:37:56 certainly in Silicon Valley. And now that the vibe shift is happening, I think Microsoft just fired their DEI team. Microsoft. Really? Yeah. I mean,
Starting point is 01:38:08 it's a huge vibe shift. Are they going to learn to code? Microsoft, perhaps. So, you know,
Starting point is 01:38:18 I wouldn't pin this on the government just yet, but it's very easy. Oh, no, no, no,
Starting point is 01:38:21 no. I just meant, Democratic members of Congress, I know for a fact, applied pressure to the labs. Like, no, no, no, no. I just spent, Democratic members of Congress, I know for a fact, applied pressure to the labs. Like, no, you can't. It has to reflect our values.
Starting point is 01:38:30 Okay. Yeah, yeah, yeah. So maybe that's where it's coming from. But is that permanent? Am I always going to get, when I type in, who is George Washington, you know,
Starting point is 01:38:36 a picture of Denzel Washington? You know, it's already changing is what I'm saying. It's already, a lot of these things are being reversed. It's not perfect, but it's already changing. And that's, I think it's just a function of the larger culture
Starting point is 01:38:48 change. I think Elon buying Twitter and letting people talk and debate moved the culture to, I think, a more moderate place. I think he's gone a little more, you know, a little further. But I think it was net positive on the culture, because it was so far left. It was so far left inside these companies, the way they were designing their products, such that, you know, George Washington will look like, there's like a black George Washington or what have you. That's just insane, right? It was verging on insanity. Well, it's lying.
Starting point is 01:39:26 And that's what freaked me out. I mean, it's like, I don't know, just tell the truth. There are lots of truths I don't want to hear that don't comport with my, you know, desires, but I don't want to be lied to. George Washington was not black. None of the framers were. They were all white Protestant men.
Starting point is 01:39:39 Sorry, all of them. Yeah. So like, that's a fact, deal with it. So if you're going to lie to me about that, you're my enemy, right? I think so. I mean, you're, and I would say it's a small element
Starting point is 01:39:50 of these companies that are doing that. Yes. But they tend to be, they were, the controlling element. Those sort of activist folks. And I was at Facebook in 2015. You worked at Facebook? I worked at Facebook, yeah.
Starting point is 01:40:04 I didn't know that. I worked on open source, mostly. I worked on React and React Native, one of the most popular ways of programming user interfaces. So I mostly worked on that. I didn't really work on the
Starting point is 01:40:15 kind of blue app and all of that. But I saw this sort of cultural change where like a small minority of activists were just like shaming anyone who is thinking independently.
Starting point is 01:40:29 And it sent Silicon Valley in this sheep-like direction where everyone is afraid of this activist class, because they can cancel you. I think one of the early shots fired there was Brendan Eich, the inventor of JavaScript,
Starting point is 01:40:47 the language that runs the browser, who, because of the way he votes or donates or whatever, was forced out of his position as CEO of Mozilla. And that was seen as a win or something.
Starting point is 01:41:03 And again, I was not really interested in politics in like 2012, 13, when I first came to this country. I just accepted it. It's like, oh, all these people are Democrats; liberal is what you are, whatever. But I just looked at that,
Starting point is 01:41:18 and I was like, that's awful. No matter what his political opinion is, you're taking from a man his ability to earn a living. Eventually he started another browser company, and it's good, right? But this sort of cancel culture created such a bubble of conformism.
Starting point is 01:41:39 And the leadership class at these companies were actually afraid of the employees. So that is the fact that bothers me most. Silicon Valley is defining our future. That is technology. We don't have much else in the United States anymore. Manufacturing, creativity, have obviously been extinguished everywhere: in the visual arts, you know, everywhere. Silicon Valley is the last place.
Starting point is 01:42:00 Yes, it's important. It's the most important. Yes. And so the number one requirement for leadership is courage. Number one. Yes, that's important. What's the most important. Yes. And so the number one requirement for leadership is courage. Number one. Yes. Number one. Nothing even comes close to bravery as a requirement for wise and effective leadership. So if the leaders of these companies were afraid of like 26-year-old unmarried screechy girls in the HR department, like, whoa, that's really cowardly. Like, shut up.
Starting point is 01:42:26 You're not leading this company. I am. Like, that's super easy. I don't know why that's so hard. Like, what? The reason I think it was hard, it was because these companies were competing for talent hand over fist.
Starting point is 01:42:42 And it was the sort of zero-interest-rate era in the US economy. And everyone was throwing cash at talent. And therefore, if you offend the sensibilities of the employees, even to the slightest degree, you're afraid that they're going to leave or something like that. I'm trying to make up an excuse for them.
Starting point is 01:43:04 Well, you could answer this question because you are the talent that, you know, you came all the way from Jordan to work in the Bay Area to be at the center of creativity in science. So, the people who do what you do, who can write code,
Starting point is 01:43:20 which is the basis of all of this, are they, I don't know, they seem much more like you or James Damore. They just, they don't seem like political activists to me. For the most part, yeah. There's still a segment of the programmer population.
Starting point is 01:43:36 Well, they have to be rational, because code is about reason, right? I mean, this is the whole thing. You know, I don't think so. I mean, a lot of these people that we talked about are into code and things like that. They're not rational. Really? Yeah.
Starting point is 01:43:47 Like, look, I think coding could help you become more rational, but you could very easily override that. Isn't that the basis of it? I thought. If this is true and that is true, then that must be true. I thought that was the point.
Starting point is 01:43:57 Yeah, but it's very easy for people to just, you know, compartmentalize, right? It's like, now I'm doing coding, now I'm doing emotions. Oh, so the brain is not a computer as well.
Starting point is 01:44:09 The brain is not a computer, exactly. Exactly, that's my point. I know. You know, so, you know,
Starting point is 01:44:15 I'm probably, you know, responsible for the most amount of people learning to code in America, because the reason I came to the US
Starting point is 01:44:24 is I built this piece of software that was the first to make it easy to code in the browser, and it went super viral, and a bunch of US companies started using it, including Codecademy, and I joined them as
Starting point is 01:44:40 a founding engineer. They had just started, two amazing guys, and I joined them, and we taught like 50 million people how to code. Many of them, many millions of them, are American.
Starting point is 01:44:50 And the sort of rhetoric at the time, what you would say is, coding is important because it'll teach you how to think, computational thinking and all of that. Maybe I've said it at some point, but I've never really believed it.
Starting point is 01:45:03 but I've never really believed it. I think coding is a tool you can use to build things, to automate things, to, it's a fun tool. You can do art with it. You can do a lot of things with it. But ultimately,
Starting point is 01:45:14 I don't think, you know, you can sit people down and sort of make them more rational. And you get into all these weird things if you try to do that. You know, people can become more rational by virtue of education, by virtue of seeing that,
Starting point is 01:45:32 taking a more rational approach to their life yields results, but you can't really teach it that way. Well, I agree with that completely. That's interesting. I just thought it was a certain, because I have to say without getting into
Starting point is 01:45:48 controversial territory, every person I've ever met who writes code is kind of similar in some ways to every other person I've ever met who writes code. Yeah, that's true. It's not a broad cross-section
Starting point is 01:45:58 of any population. No. At all. Well, people who make it a career, but I think anyone sort of can write a lot of code. I'm sure. I mean, people who get paid to do it. Right, people who make it a career, but I think anyone sort of can write a lot of code. I'm sure. I mean,
Starting point is 01:46:06 people who get paid to do it. Right. Right. Yeah. Interesting. So, bottom line, do you see,
Starting point is 01:46:12 and then we didn't even mention Elon Musk and David Sacks, who have also come out for Trump. So, do you think the vibe shift in Silicon Valley
Starting point is 01:46:23 is real? Yes. Actually, I would credit Sacks originally, perhaps more than Elon, because, look, it was a one-party state. Yeah. No one watches you, for example. No one ever watched anything.
Starting point is 01:46:37 I don't want to over-generalize, but most people didn't get any right-wing or center-right opinions, for the most part. They didn't seek it. It wasn't there. You're swimming in just, you know, liberal Democratic talking points. I would say Sacks on the All-In podcast
Starting point is 01:46:59 was sort of the first time a lot of people started, on a weekly basis, hearing a conservative talk, that being David Sacks. And I would start to hear, at parties and things like that, people describe their politics as Sacksism. I just started calling it that. They were like, I agree with you; most of the time I agree with Sacks' point of view on the All-In podcast. Like, yeah, you're kind of maybe moderate or center-right at this point.
Starting point is 01:47:29 Well, he's so reasonable. First of all, he's a wonderful person, in my opinion. But I didn't have any sense of the reach of that podcast until I did. I had no sense at all. And he's like, will you do my podcast? Sure, because I love David Sacks. I do the podcast, and everyone
Starting point is 01:47:45 I've ever met texts me: oh, you're on the All-In podcast! It's not my world, but I didn't realize that is the vector. If you want to reach sort of business-minded people who are not very political, but are probably going to send money to a buddy who's bundling for Kamala because, like, she's our candidate, that's the way to reach people like that. That's right.
Starting point is 01:48:13 So YouTube, you can argue YouTube is the centralized thing, they're pushing opinions on us, whatever. But now you have a platform on YouTube after you got fired from Fox, right? Sacks can have a platform and put these opinions out. And I think there was a moment during COVID when I felt like they're going to close everything down.
Starting point is 01:48:38 Yeah. For good reason, you felt that way. Yes. And maybe there's going to be some other event that will like allow them to close it down. But one of the things I really love about America is the First Amendment.
Starting point is 01:48:51 It's just, it's just the most important institutional innovation in the history of humanity. I agree with that completely. And we should really protect it. You grew up without it too. I mean, it must be.
Starting point is 01:49:01 We should really protect it. We should be so coveting of it, you know, like your wife or something. I totally agree. Hands off. Yeah.
Starting point is 01:49:14 Can you just repeat your description of its importance historically? I'm sorry, you put it so well. It's the most important institutional innovation in human history. The First Amendment is the most important institutional innovation in human history. The First Amendment is the most important institutional innovation in human history. Yes. I love that.
Starting point is 01:49:30 I think it's absolutely right. And as someone who grew up with it in a country that had had it for 200 years when I was born, you don't feel that way. It's just like, well, it's the First Amendment. It's just part of nature. It's like gravity. It just exists. But as someone who grew up in a country that does not
Starting point is 01:49:48 have it, which is true of every other country on the planet. It's the only country that has it. You see it that way. You see it as the thing that makes America, America. Well, the thing that makes it so that we can change course. Yes. Right? And the reason why
Starting point is 01:50:03 we had this, you know, conformist, mob-rule mentality that people call woke. The reason that we're now almost past that, you know, still kind of there, but we're on our way past it, is because of the First Amendment and free speech. And again, I would credit Elon a lot for buying Twitter and letting us talk and debate and push back on the craziness. Right. It's kind of,
Starting point is 01:50:38 it's, well, it's beautiful. I've been a direct beneficiary of it as I think everyone in the country has been. So I'm not, and I love Elon, but'm I mean it's a little weird that like a foreigner has to do that a foreigner foreign born person you Elon appreciates it in this way it's like it's a
Starting point is 01:50:57 little depressing like why didn't some American born person do that I guess because they don't we don't take it yeah you know take it for I wrote a thread. It was like 10 things I like about America. I expected it to do well, but it was like three or four years ago. It went super viral. The Wall Street Journal covered it. Peggy Noonan called me and was like, I want to write a story about it.
Starting point is 01:51:17 I was like, okay. It's like a Twitter thread. You can read it. I just talk about normal things. Free speech, one of them, but also like hard work, appreciation for talent and all of that. And it was starting to close up, right? I started to see, you know, meritocracy kind of like being less valued and that's part of the reason why I wrote that thread. And what I realized is like, you know, yeah, most Americans just don't think about that
Starting point is 01:51:48 and don't really value it as much. I agree. And so maybe you do need foreigners. Oh, I think that's absolutely right. But why do you think, I mean, I have seen, I hate to say this because my whole life I've always thought
Starting point is 01:51:59 that foreigners are great. You know, I like traveling to foreign countries. My best friend is foreign-born, actually, as opposed to mass immigration as I am, which I am. Arabs really like you, by the way. Oh, well, I really like Arabs. So we've thrown off the brainwashing. Just a sidebar,
Starting point is 01:52:19 I feel like we had a bad experience with Arabs 23 years ago, and what a lot of Americans didn't realize, but I knew from traveling a lot in the Middle East. Yeah, it was bad. It was bad. However, that's not representative of the people that I have dinner with in the Middle East at all. Someone once said to me, like, those are the worst people in our country. Right. And I totally agree with that strongly. I always defend the Arabs in a heartfelt way, but
Starting point is 01:52:50 I, I wonder if some of the, particularly the higher income immigrants recently I've noticed are like parroting the same kind of anti-American crap that they're learning from the Institute. You know, you come from Punjab and go to Stanford, and all of a sudden, you've got that same rotten, decadent attitudes of your native-born professors from Stanford.
Starting point is 01:53:16 Do you see that? No, I'm not sure what's the distribution like. I mean, speaking of Indians, I mean, on the right side of the spectrum, we have Vivek. Who's the best. Yeah. Who's a perfect example of what I'm saying.
Starting point is 01:53:28 Like, Vivek is thought through, not just like First Amendment good, but why it's good. Yeah. Well, you know, I'm not sure, you know, I'm not sure. I think it's, yeah, I think foreigners, for the most part, do appreciate it more but it's easy you know I talked about how I just you know try not to be you know this conformist kind of really absorb everything around me and act on it but it's very easy for people to go in
Starting point is 01:53:54 these one party state places and really get you know become part of this like mob mentality where everyone believes the same thing. Any deviation from that is considered cancelable offense. And you asked about the shift in Silicon Valley.
Starting point is 01:54:15 I mean, part of the shift is like, yeah, Silicon Valley still has a lot of people who are independent-minded. And they see this sort of conformist type of thinking in the Democratic Party and that's really repulsive for them where there's like a party line. It's like Biden's sharpest attack,
Starting point is 01:54:33 sharpest attack, everyone says that. And then the debates happen, oh, unfit, unfit, unfit. And then, oh, he's out, oh, Kamala, Kamala, Kamala. It's like lockstep and there's like no range, there's very little dissent within that party and maybe Republicans I think at some point were the same maybe now
Starting point is 01:54:52 it's sort of a little different but this is why people are attracted to the other side by the way this is advice for the Democrats if you want sort of Silicon Valley back maybe don't be so controlling of opinions and like be okay with more dissent. You have to relinquish a little bit of power to do that.
Starting point is 01:55:14 I mean, it's the same as raising teenagers. There's always a moment in the life of every parent of teenagers where a child is going in a direction you don't want. You know, it's a shooting heroin direction. You have to intervene with maximum force. But there are a lot of directions a kid can go that are deeply annoying to you. And you have to restrain yourself a little bit if you want to preserve the relationship.
Starting point is 01:55:37 Actually, if you want to preserve your power over the child, you have to pull back and be like, I'm not going to say anything. That's right. This child will come back. My gravitational pull is strong enough, I'm not going to say anything. That's right. This child will come back. My gravitational pull is strong enough. I'm not going to lose this child because she does something that offends me today. That's right.
Starting point is 01:55:53 You know what I mean? Yeah. You can't hold too tightly. And I feel like they don't understand. I feel like the Democratic Party, I'm not an intimate, of course, I'm not in the meetings, but I feel by their behavior that they feel very threatened that's what I see
Starting point is 01:56:08 these are people who feel like they're losing their power yes and so they have to control what you say on Facebook I mean what yes if you're worried about what people say on Facebook
Starting point is 01:56:17 you know you've lost confidence in yourself that's right that's right do you feel that yeah and I mean you know there's Matt Taibbi and Michael Schellenberger and a lot of folks did a lot of great work on censorship
Starting point is 01:56:30 and the government's kind of involvement in that and how they push social media companies. I don't know if you can put it just on the Democrats because I think part of it happened during the Trump administration as well. For sure. But I think they're more excitable about it. They really love
Starting point is 01:56:48 misinformation as a term, which I think is kind of a BS term. It's a meaningless term. It's a meaningless term. All that matters is whether it's true or not. Yeah.
Starting point is 01:56:57 And the term mis- and disinformation doesn't even address the veracity of the claim. That's right. It's like irrelevant to them whether it's true or not. In fact, if it's true,
Starting point is 01:57:04 it's more upsetting. Yeah, it's like everything what we talked's right. It's like irrelevant to them whether it's true or not. In fact, if it's true, it's more upsetting. Yeah, it's like everything what we talked about earlier. It's just making people stupid by taking their faculty of trying to discern truth. I think that's how you actually become rational
Starting point is 01:57:17 by trying to figure out whether something is true or not, and then being right or wrong, and that really trains you to have better judgment. You talked about judgment. That's how people build good judgment. You can't outsource your judgment to the group, which, again, feels like what's asked of
Starting point is 01:57:42 us, especially in liberal circles: no, Fauci knows better, two weeks to stop the spread, take the jab, stay home, wear the mask. It was just, like, talking down to us as children. You can't discuss certain things on YouTube, you'll get banned. At some point you couldn't mention the lab-leak theory, right? Which is now the mainstream theory.
Starting point is 01:58:04 Yes. And again, a lot of this self, right? Which is now the mainstream theory. Yes. And again, a lot of this self-corrected because of the First Amendment. Yeah, and Elon. Wow, that was as interesting as dinner was last night. A little less profanity, but I'm really grateful that you took the time to do this. Thank you. It's absolutely my pleasure. It was mine.
Starting point is 01:58:21 Thank you. Thanks. Thanks for listening to the Tucker Carlson Show. If you enjoyed it, you can go to TuckerCarlson.com to see everything that we have made, the complete library.
