Modern Wisdom - #877 - Marc Andreessen - Elon Musk, The Changing World Order & America’s Future

Episode Date: December 14, 2024

Marc Andreessen is a venture capitalist, entrepreneur, and co-founder of Andreessen Horowitz. America is entering a new chapter: with the recent election in the past, there are sweeping changes on the horizon and a lot of uncertainty. Just how optimistic should we be about the upheaval the world is about to face? Expect to learn how we ended up on our current timeline, just how big the civil war is within the Democratic party right now, Marc's thoughts on the motives and response to the Brian Thompson killing, how much government efficiency can be improved upon, Elon Musk's productivity secrets, how much of a political revolution is happening in Silicon Valley and much more...

Sponsors:
See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get the best bloodwork analysis in America and bypass Function's 400,000-person waitlist at https://functionhealth.com/modernwisdom
Get a 20% discount on Nomatic's amazing luggage at https://nomatic.com/modernwisdom
Get $150 discount on Plunge's amazing sauna or cold plunge at https://plunge.com (use code MW150)

Extra Stuff:
Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 What's happening people? Welcome back to the show. My guest today is Marc Andreessen. He's a venture capitalist, entrepreneur and co-founder of Andreessen Horowitz. America is entering a new chapter with the recent election in the past, sweeping changes on the horizon and a lot of uncertainty everywhere. Just how optimistic should we be about the upheaval that the world is about to face? Expect to learn how we ended up on our current timeline, just how big the civil war within the Democratic party is right now, Marc's thoughts on the motives and response to the Brian Thompson killing, how much government efficiency can actually be improved upon, Elon Musk's productivity secrets, how much of a political revolution is happening in Silicon Valley,
Starting point is 00:00:39 and much more. If you haven't been feeling as sharp or energized as you'd like, getting your blood work done is the best place to start, which is why I partnered with Function. They run lab tests twice a year for you that monitor over 100 biomarkers. The team of expert physicians then take that data and put it in a simple dashboard and give you actionable insights and recommendations to improve your health and lifespan. They track everything from your heart health and your hormone levels to your thyroid function and nutrient deficiencies. They even screen for 50 types of cancer at stage one,
Starting point is 00:01:11 which is five times more data than you get from your annual physical. Dr. Andrew Huberman is their scientific advisor and Dr. Mark Hyman is their chief medical officer. So you can trust that the data and insights you receive are scientifically sound and practical. Getting your blood work drawn and analyzed like this would usually cost thousands, but with Function, it is only $500.
Starting point is 00:01:29 They've got a 300,000 person waitlist, but every Monday they open a few spots for Modern Wisdom listeners. And right now you can get Function's expert blood work analysis and bypass that 300,000 person waitlist by going to the link in the description below or heading to functionhealth.com slash modern below or heading to functionhealth.com slash modern wisdom. That's functionhealth.com slash modern wisdom. This episode is brought to you by Nomadic.
Starting point is 00:01:52 Traveling should be about the journey, not the chaos of packing, which is why I'm such a huge fan of Nomadic. Their backpack and carry-on pro have genuinely made the travel process infinitely more enjoyable. They've got compartments for everything, your laptop, your shoes, your sunglasses. It's so well organized that even your toothbrush will feel important. It's like the Marie Kondo of luggage. Everything has its place.
Starting point is 00:02:13 Best of all, their products will last you literally a lifetime with their lifetime guarantee. So this is the final backpack that you will ever need to buy. And you can return or exchange any product within 30 days for any reason. So you can buy your new bag, try it for a month. And if you do not like it, they'll give you your money back. I've been loving my cold plunge and sauna from the team over at Plunge. I literally use them every single week because the benefits of hot and cold contrast therapy make me feel fantastic. I have more energy during the day, sleep better at night and recover faster after hard workouts. Plunge's
Starting point is 00:02:56 Evolve collection includes four brand new offerings made to fit your lifestyle, space and goals. The new Plunge Pure Pro Chiller uses state-of-the-art technology to filter water and chill it at the same time. The all- new Plunge Pure Pro Chiller uses state-of-the-art technology to filter water and chill it at the same time. The all-new Plunge Air is amazing if you're looking for a lightweight space efficient option. You can connect it to the Plunge pop-up for maximum portability and affordability or go with the XL Plunge Pro for their signature style. Best of all, they offer a 30-day return policy so you can buy it and try it and get cold for 29 days. And if you do not like it for any reason, you can send it back. Head to plunge.com slash modern wisdom
Starting point is 00:03:30 and use the code MW one fifty for one hundred and fifty dollars off any purchase that plunge.com slash modern wisdom and MW one fifty a checkout. But now, ladies and gentlemen, please welcome Mark Andreessen. How much of a new timeline are we on right now? We are on a very, very different new timeline. So yeah, so I think the timeline is split twice. It's split once in the second week of July, and then it's split again on November 6th. And I don't know, you tell me, I mean, can you feel it? It, the sense in the air, the ambience, uh, has certainly taken a hard pivot. Um, lots of people that were happy and now unhappy and lots of people that were
Starting point is 00:04:33 unhappy and now fucking ecstatic. So I think actually I've detected something interesting and maybe it's just my, my world or maybe business, but I think it's broader, which is, I actually think a fairly large number of people who didn't vote for Trump are actually feeling, people who run organizations who did not vote for Trump are feeling liberated. They're feeling like they can make changes that they haven't wanted to make for a long time.
Starting point is 00:04:55 And they can really dial down a lot of the things that have really been causing the problems. So the blast radius of the good vibes is wider than people might've anticipated. I think it's wider than people anticipated. And I've been at some discussions where people are like, yeah, like it really feels like the air is coming out of the, it really feels like the tension is draining out of the system
Starting point is 00:05:13 in an interesting way this time, which is of course the exact opposite of how it felt in 2016. And so it's, I don't know, I'm, you know, I'm cautiously optimistic that actually a fair, you know, look, I don't think there's any like overnight, you know, transformation and there's going to be continued, you know, drama and strife and so forth. But I think a lot of institutions, I think a lot of leaders at a lot of institutions, including ones that are
Starting point is 00:05:32 left leaning, I think they've just simply had it with a lot of the chaos of the last 10 years and a lot of the drama and a lot of the pressure and conflict. And I think they're just done with it. And they want their, if it's a company, they want to get back to business. If it's a university, they want to get back to teaching. So I's a university, they wanted to get back to teaching. Um, so I'm, you know, knock on wood, uh, cautiously optimistic. This pivot from, uh, saying good or appearing good to hang on, but what actually happened is stress testing the outcomes and, uh, people optimizing for actual effectiveness, as opposed to like, uh, optical, uh, slickness, I guess, or popularity, reputational stuff. Uh, I think that pivot seems to be, uh, one that was quite overdue.
Starting point is 00:06:13 Yeah. And, you know, part of it is, you know, there were some ideas that had to be stress tested right, apparently. And I think they were stress tested and maybe found wanting maybe backfired. Um, and then, you know, there's this thing in, uh, you know, there's this thing in philosophy, the, um, the, uh, they call it the paradox of tolerance, which is in order to maximize tolerance, we must not tolerate the intolerant, right? Right?
Starting point is 00:06:35 Right? Like you can only have tolerance if you don't tolerate the intolerant. And so therefore, if you want to maximize tolerance, which is what we've all been told that we need to do, you need to ostracize, cancel, nuke, and get rid of the intolerance. Of course, the definition of the intolerant very rapidly goes from a small set of people who are truly antisocial to basically everybody who doesn't agree with every single thing on the thousand item checklist of what makes a good person. Um, right. And so, so, you know, it's, it's, you know, put it this way. It's hard to win elections when your electoral strategy is to shrink your
Starting point is 00:07:08 coalition, um, as much as you possibly can by driving out as many people as possible by tagging them as racist or sexist or intolerant and like, it just doesn't work, it doesn't win elections. It doesn't lead to a happy company. It doesn't lead to a well-functioning university. It doesn't lead to a well-functioning organization of any kind. And I think, I think a lot of leaders have maybe finally figured that out. Yeah.
Starting point is 00:07:26 I think any group that's bound together over the mutual distaste of an out group, not the mutual love of an in-group. Uh, the only way that you can continue to bind that together is to scapegoat and shave off the people on the outside, insufficiently pure, continue to point the finger at them and say, well, at least we're not dot dot dot that thing over there. Uh, but yeah, I mean, it's the reason I've had a bunch of conversations with people on the left, Anna Kasparian, Cenk Uygur, uh, Dave Smith, you know, like people who varied political standings. Anna, the fact that you can only move from left to right, you can't go from right to left.
Starting point is 00:08:02 So it's a one way valve, you know, it's a one way street. And guess what happens? The street that allows you to move in one direction is going to be more welcoming and better populated. Yeah. And you, I don't know if you may know this either, or either because of age or nationality, you might not know this, but, you know, this, this happened before, actually in my, in my lifetime.
Starting point is 00:08:21 So, you know, sort of the late sixties, the 1970s were in many ways, you know, a lot like what we just went through in the last decade, you know, and sort of, you know, Trump is that was that, you know, the version of Trump at that time was Nixon and, you know, the version of things like, you know, the Iraq war was the Vietnam war and so forth. And so, you know, energy crisis, inflation, all these, all these, you know, hostages crisis, all these crazy comparisons. And basically, and what happened actually was, Jimmy Carter lost in 1980 and the Democrats didn't win another national election until 1992. It took actually 12 years for the Democrat party to get back on track. And really what had to
Starting point is 00:08:58 happen was the party and the movement behind it had to go back to the center. And to your point, had to become a party of actual inclusivity, of actual inclusivity, of actual tolerance. And it's sort of the Democratic party had to reconstruct its own big tent. Precisely for the reason you said, which is you have to be able to attract people back. You have to get people to be able to come back and feel like they're part of a broad-based coalition and not just part of a very narrow thing. Now, at that time it took 12 years, right? So, they lost in 80, they lost in 84, they lost in 88. And then Bill Clinton and then a guy named Al Fromm
Starting point is 00:09:31 who ran this thing called the Democratic Leadership Council and then Al Gore kind of put together this program to kind of bring the Democrats back to center. And then that led to a boom for the Democratic Party with a much more, you say, sensible centrist set of policies and a much more kind of say open-minded and optimistic attitude, by the way, pro-business, pro-patriotism,
Starting point is 00:09:52 kind of treating everybody well. And so I am cautiously optimistic that the current Democratic Party will be able to find its way back in less than 12 years. And I think that would be good for the country, right? I mean, I think even if you're a Republican, you should want there to be a healthy, vibrant, viable, centrist, sensible, responsible, kind of stable configuration of an opposition party as opposed to what we've seen recently.
Starting point is 00:10:24 And so I'm cautiously, there's a democratic party, there's a civil war already, kind of playing out the democratic party. And there's all these different arguments going back and forth already. But you're starting to see voices like Richie Torres and Ro Khanna and others who are standing up saying, it's kind of time to go back to the center.
Starting point is 00:10:40 And again, I'm cautiously optimistic that that will happen. How big of a civil war do you think is unfolding at the moment? So it's really big. So it's really quite something. So a couple of things. So one is there was just a political story that was really good that talked about the first big kind of summit meeting that the Democrats have had,
Starting point is 00:10:57 which was the DNC hosted a meeting of the state Democratic Party leaders. And it started with a full land acknowledgement, right? And then the step one, okay? And then the current chairman of the party gave the STEM-1E speech about how the party absolutely needs to double down on identity politics, right? Step two, and then this horrific tragedy in New York with the CEO getting, you know, shot and killed, you know, there's now, you know, loud prominent voices on the, on the left,
Starting point is 00:11:29 including in the press, you know, saying, you know, basically, you know, yay, murder, which I think is, you know, step three is maybe, maybe not. Like these are not vectoring in the direction of like, let's just say a successful majority electoral, electoral path. But, but again, like the, the contrast is clear. Like that, that is, that is a conceivable path. There are some people who want to go on that path. But again, the contrast is clear. That is a conceivable path. There are
Starting point is 00:11:47 some people who want to go on that path. I think there are a much larger number of completely sensible people in the party interested in the party and interested in rejoining the party who would like to see the exact opposite. But yeah, they are going to argue this and litigate this, I think, all the way out. I think they, you know, they really feel like they have to, um, they have to argue this and you mentioned the folks you've talked to, like, you know, there, there are now some, you know, I think some, you know, some voices, including some, you know, folks that are pretty far left who are kind of saying, all right, let's, let's pause for a moment and make sure we're not going off the cliff.
Starting point is 00:12:17 You work with a lot of founders, CEOs, tons of your friends will be in that position too, what do you make of the response to the Brian Thompson killing and the situation more broadly? Well, the killing itself was obviously an enormous shock. And I know people who knew him and had been with him recently. So, he was a well-known figure in the healthcare industry. So, I'm very respected among in the,
Starting point is 00:12:41 sort of the business circles. So, yeah, I mean, you know, more shock. And then of course, you know, at first, you're just like, well, these things are either random or, you know, you know, or most murders are, you know, committed by, you know, you know, some somebody in somebody's personal life. You know, the way this one seems to be unfolding, you know, is then I don't know anything that's not in the press, but it seems to have a, you know, ideological motivation behind it. And, and so, you know, and then it's like, okay, is that just a one-off or, you know, is this the
Starting point is 00:13:07 beginning of a pattern? Um, and, um, you know, I would say it, you know, a lot of people I talked to are very disconcerted by, I would say the enthusiasm that a lot of people in the press are showing to, yeah, there have been a tremendous, there've been a shockingly large number of stories of the line of, you know, of course murder is bad, but,
Starting point is 00:13:27 we shouldn't laugh. I shouldn't laugh. Right. Well, I mean, it is, I mean, you have to laugh or you cry, right? So, and then it's like 3000 words that come after the but, right? And it's like, okay, like, you know, which is basically a long, long, long-winded, you know, sort of justification for murder. And so, you know, that is disconcerting. There are some comedians now, comedians, you know, that have gotten in the game. And so, I would say this is fairly disconcerting. I mean, we'll see what happens. The scary scenario, I mentioned the 70s. Domestic terror actually got to be quite a thing in the 70s in the US. And there's this great book called Days of Rage that kind of chronicles basically this very
Starting point is 00:14:01 widespread pattern of domestic terror, ideologically motivated domestic terror in the 70s by many radical revolutionary groups, the Weather Underground and many others. What a lot of those domestic terrorists had in common was this very privileged backgrounds. A lot of these were like Ivy League, like the Weather Underground came out of Columbia University. And then domestic terrorists coming out of places like UC Berkeley and then often very privileged backgrounds, very kind of elite upper middle class people. And then ended up in some cases going on the lam for years being hunted by the FBI. And there were years in the 1970s when there were thousands of terror bombings a year. And then there was a run of anti- know, anti-corporate, you know, basically murder and terrorism in Germany, kind of through that period
Starting point is 00:14:48 into the 80s with this thing called the Bader-Meinhof group and so forth. And so, you know, there's always been this kind of violent edge to kind of call it the anti-corporate, anti-business, anti-industrial, you know, kind of, and by the way, you know, a little bit of that, you know, some of that's on the left, some of that's on the right, you know, there's different, you know, you can squint different ways. You know, Ted K some of that's on the right. You can squint different ways. Ted Kaczynski was on the right. This guy maybe appears to be a little bit more on the left. And so this pattern has unfolded before.
Starting point is 00:15:13 I don't know that it's ever been effective at causing any of the change that its proponents seem to want. These violent tactics usually backfire in our societies. But certainly hope that this doesn't become a copycat situation and I, and I do think it's striking. I mean, I think there are a lot of voices in American public life who are behaving quite irresponsibly right now.
Starting point is 00:15:33 What, what did they think that they're doing that sometimes people that run healthcare organizations don't pay out when people should be paid out, therefore it's okay to kill the person at the top. Yeah, it's the thing. Well, it's of course murder is bad, but, and then in the three, in the 3000 words is basically what you said. So, so the whole healthcare situation is, is very, it's obviously very complicated and very emotional.
Starting point is 00:15:57 The, the, that there's, there's, this can become a very long conversation, but the sort of key fact that all Western countries are trying to kind of grapple with, and this is true of the US, but also Canada and the UK and the rest of Europe, basically there's two key facts. So key fact number one is healthcare is at like a fifth of US GDP in terms of spending. It's like a fifth of all national production per year
Starting point is 00:16:20 goes to healthcare. And that number is rising. The healthcare is basically rising. And if you just chart this on a graph, it's just very clear what's happening is healthcare left unchecked is going to go from being a fifth economy to a fourth, to a third, to a half, and then somebody in the future effectively all of it. And then nobody wants to pay for it. And so, like, everybody wants somebody else to pay for it. And then just saying the government should pay for it, of course, doesn't answer the question because the government has to be funded by somebody. And so that means taxpayers have to pay for it. And so
Starting point is 00:16:51 you get in all these questions. And of course, we live in a progressive taxation system in which people who make more money pay a higher percentage of taxes. And there's big political fights over that. And I'm kind of neutral on that. I don't really care. It's not one of my issues, but it's part of this big fight over who pays for it. So that's issue number one. And that's just gonna bring out a lot of emotion in people. Cause of course, especially as you get closer to end of life or you get into these very bad chronic conditions
Starting point is 00:17:17 that require a lot of money to take care of people. It's this real challenge of how to pay for it. And then the other kind of key kind of fact is all these countries also are aging very rapidly, right? The demographics are headed in the direction of increased average age and the consequence of that is a sharp reduction in the percentage of working age people versus older people retirees. And the whole basis for every social welfare system in the world is that the current workers pay for the current retirees, right? That is the basis. The Social Security Trust Fund doesn't exist. The money that you pay in, you don't get later. Your money you pay in today gets paid out
Starting point is 00:17:54 to old people today. And then when you're old, there need to be young workers in the system to be able to generate the money to pay you. And that's true for Social Security. It's true for Medicare. It's true for Medicaid. It's true for all of these kinds of social, it's true of any socialized health system. It'd be true of government funded health system, it'd be the same thing. It just, you know, for taxpayers, taxpayers are people who work. And so if the demographics go upside down
Starting point is 00:18:16 and you have a country, you know, and you can see the future in places like Japan where you just have like way more old people than you have young people. And then you just have this fundamental mismatch and then you literally quite simply can't pay for it. And so these two things kind of put this whole issue in a vice.
Starting point is 00:18:32 Now, what we try to do, like in our line of work, which is tech and biotech and BC, what we try to do is figure out ways to kind of get out of that vice primarily through new technologies that at least, in some cases will provide new kinds of healthcare, new kinds of drugs and new kinds of medical devices, but also things that are able to break that price curve
Starting point is 00:18:50 and able to make healthcare much cheaper. That's the other way to solve it is you just, you make all this stuff much cheaper. And that's where we get excited about things like, for example, AI and healthcare, and maybe the prospect that instead of people having, only having human doctors, maybe a lot of routine medical interactions are with AI doctors and then you only consult with a human doctor when
Starting point is 00:19:09 something gets really sensitive. I think there's a reasonable chance in the next few years that we can figure out how to do that. Hopefully, we can solve this with technology. It is weird, I will say, and it's coming out in this current debate and controversy. It is weird that the same people who are the maddest about healthcare costs rising and not being able to pay for it are also the ones who hate technology the most. Right? Like, yeah, we are, we are nuclear and nuclear and people who care most about climate, the people who care most about climate and carbon emissions hate nuclear energy the most.
Starting point is 00:19:40 Right. And it's like, all right, we are genuinely trying to help solve these. Those of us in tech are genuinely trying to help solve these problems. What do you think that is? Is it just some sort of generalized skepticism, generalized scrutiny over rich people that they don't have the best interests at heart of the common folk? Yeah, well, there's these, part of it's just like simply, right, moral intuitions, right? And so you have these kind of moral intuitions that, and moral intuitions
Starting point is 00:20:07 presumably evolved to be the way they are according to kind of historical conditions of how people lived. But, you know, historically people lived in these very small, you know, kind of communities. You know, one way of describing it is this great book called The Ancient City that goes back and reconstructs basically like pre, basically prehistoric civilization, like what it was like. And, and, uh, basically And basically the conclusion of it is,
Starting point is 00:20:26 sort of pre, you know, go back 4,000, 5,000, 6,000 years and then further back, you know, basically we all lived in these very small tribes, you know, maxed out at a few hundred people. And then you could kind of describe the politics of the tribe being sort of a hybrid of like absolute fascism combined with absolute communism. Right?
Starting point is 00:20:43 Like at the same time, right? So it was like absolute fascism in that the father of the family or the leader of the tribe had the total power of life or death over all the members of the tribe and could kill them at any moment for whatever reason he wanted to. And then it was also pure communism in that there was no market economy.
Starting point is 00:20:58 Everybody shared everything. And if you went out and killed an animal and brought it back and didn't share the food, they would kill it. You know, you had to share the food. Like everything had to share the food, like everything had to be shared. Cause it was how you were,
Starting point is 00:21:07 both of those forms of governance were basically how the tribe would survive. It needed total discipline and total sharing, right? At the same time. And so we evolve these moral intuitions that basically says, you know, maybe we don't want the fascism part, but we kind of want everything to be shared. We want everything to be equal.
Starting point is 00:21:22 Having inequality is sort of intrinsically, morally offensive to us. And so we naturally drive towards, you know, who can possibly argue against, you know, wouldn't it be desirable if everybody, you know, if everybody had the same, you know, how it's default unfair for some people to have more than others. And then, of course, profit, you know, particularly drives people crazy, right? Because profit feel, you know, profit in the capitalist system, if you have this moral intuition, it feels like it's unfair because it feels like it's money that's been extracted that's not,'s been extracted that's not doing anything. I won't name names, but there was a congressman this morning who tweeted, two back-to-back tweets, and he
Starting point is 00:21:54 tweeted, he said, it's just completely absurd and outrageous that the American health insurance industry generates $1.4 trillion in profits per year. And he said, you know, that's just like so clearly unfair because that's, you know, presumptively that's $1.4 trillion being extracted, providing, you know, no value at all. And then like four hours later, there was a second tweet, which he said, he said, you know, actually, I learned that actually it's $1.4 trillion in revenue, not profit. And total profits are, you know, a very small fraction of what I thought it was. Right, you know, but still, but still Equally absurd and unfair, right going back to sort of where we're at at the moment
Starting point is 00:22:35 I know that you've been heavily involved in a lot of meetings over the last month or so how much having been on Every side of the fence and above it and inside of it, how much governmental efficiency is there to improve on in your opinion? Oh, like absolutely enormous. And it's actually interesting that this actually, historically this is actually a bipartisan question. And I actually think that Vivek and Ilan are looking at it in a very bipartisan way. And you may have already seen there already some Democrats signing up for the Doge Congressional Caucus and coming out.
Starting point is 00:23:10 Even Bernie Sanders, to his credit, came out and said he certainly agrees with Ilan and Vivek specifically on the topic of military spending, which is of course one of the big areas. So that's significant. So historically, it's a very partisan topic. When I was younger, I mentioned the sort of Clinton-Gore comeback in 92. This was actually a big theme of the Clinton-Gore campaign in 92. And actually, Al Gore, who I've known over the years, Al Gore actually had a whole program like the Doge actually at the time called REGO, R-E-G-O, called Reinventing Government. And he said many of the same things that actually Ilana Vivek are saying today. And he made it famous at the time.
Starting point is 00:23:47 You find it on YouTube. There's this famous thing he did to kind of visualize this, which is he went on the David Letterman show at the time and he took out, it was like the Department of Defense was like buying like $600 shatterproof ashtrays. And he took them onto the Letterman show with a chisel and safety glasses and gave it a big whack. And of course it fractures just like any other ashtray. So anyway, so there was a whole program around that. They made a little bit of progress.
Starting point is 00:24:16 So anyway, I think everybody effectively agrees that there's, who doesn't want tax money to be spent efficiently? Who doesn't want to get results? Who doesn't want to money to be spent efficiently? Who doesn't want to get results? Who doesn't want to eliminate waste and fraud? These are kind of, those are hard things to argue against. In terms of methods, the big change here is that we've never, in the past, these have been government officials
Starting point is 00:24:38 trying to figure this out or commissions or whatever. Some may say the blind leading the blind. Some may say, participants in the system attempting to reform the system from within, let's say. And, you know, and this, of course, what's starkly different here is to have, you know, people of the caliber of Elon and Vivek, you know, who are, you know, tremendously accomplished entrepreneurs obviously very smart on all this, but Elon is legendary for being able to think, as he says, think from first principles on things like costs. And if you read the books about him, there, you know, he's always focused on that. And so he's, he's going to bring a toolkit that has been, you know, very successfully deployed at some of the world's leading, you know, literally some
Starting point is 00:25:17 of the world's leading companies like Tesla and SpaceX, and he's going to bring that to the government for the first time. How is Elon so productive? Have you ever deconstructed this? So actually I've known government for the first time. How is Elon so productive? Have you ever deconstructed this? So actually I've known Elon for a long time. I didn't work with him for a very long time, just we did work on, you know, different, different, different things. But I've been working with him quite closely for the last couple of years,
Starting point is 00:25:37 starting with the X acquisition. And then we've also invested in xAI and then in SpaceX. And so we now work on three companies together to different degrees, and now the government stuff. Yeah, I mean, basically he has an operating method that he's developed that I would say is very unusual by modern standards. Actually, I'm not aware of
Starting point is 00:25:59 another current CEO who operates the way that he does. And I think probably the single biggest question in all of business right now is why don't more CEOs operate the way that he does? And it's a complicated question we could talk about. But if you go back in history, you find characters more like him, especially the industrialists
Starting point is 00:26:16 of the late 1800s, early 1900s, people like Henry Ford or Andrew Carnegie, or Thomas Watson, who built IBM. If you go back and read the biographies of people like that, Andrew Mellon, Cornelius Vanderbilt, those guys ran things very similarly to the way Elon does. The top-line thing is just this incredible devotion from the leader of the company to fully, deeply understand what the company does and to be completely knowledgeable
Starting point is 00:26:45 about every aspect of it, and to be in the trenches and talking directly to the people who do the work, deeply understanding the issues and being the lead problem solver in the organization. Basically, what Elon does is he shows up every week at each of his companies. He identifies the biggest problem that the company's having that week and he fixes it. Then he does that every week for 52 weeks in a row. Then each of his companies has solved its 52 biggest problems in that year. Most other large companies are still having the planning meeting for the pre-planning meeting, for the board meeting, for the presentation, for the compliance review and the legal review. It's this level of incredible intellectual capability coupled with
Starting point is 00:27:27 incredible force of personality, moral authority, execution capability, focus on fundamentals that is just like really amazing to watch. And then by the way, the side effect of it is he attracts many of the best people in the world to work with him. Because if you work with Elon, like the expectations are through the roof in terms of your level of performance and he is going to know who you are and he is going to know
Starting point is 00:27:49 what you've done and he's going to know what you've done this week and he's going to know if you're underperforming, and he may fire you in the meeting if you're not carrying your weight. But if you are as committed to the company as he is, and working hard and capable, many people who have worked for him say that they had the best experience of their lives when they were working for him. And so then there's this attraction thing. And that's why I think his companies compound the way they do, because of this, they just keep bringing-
Starting point is 00:28:14 Oh, they're a black hole that sucks in the best talent. If you're the best in the world, you want to work harder than anybody else, come here. That's right. And people look at this on the outside and they're like, how can people tolerate, you know, all the criticisms, how can they tolerate this guy and da da da da da. And inside the company, people are like, finally I get to work with somebody who's- Somebody gets it.
Starting point is 00:28:30 Yeah. Somebody gets it. There's a famous story: somebody who used to work at one of the other aerospace and defense companies went to work at SpaceX, and he said it's like being dropped into a shocking zone of competence.
Starting point is 00:28:43 It's just like, it's just like, exactly your point. Like everybody around me is like so absolutely competent and, and, and most, and look, most of, most people, most of us never have that experience. Most, you know, most people are never in an organization where like the bar is held that high and as a consequence, as a consequence, the competence level is so high and stays so high and even rises over time. Um, and he, and he's, and again, to his master credit, he has
Starting point is 00:29:06 been able to do that repeatedly. So detail-oriented, focusing on everything from A to Z in terms of the business, intimately familiar with it, and prepared to get his hands dirty from an "I will get in there and actually look at solving the problem myself" perspective. So that suggests to me, although this can't be the case because you couldn't run this many companies if you were doing it this way, there may be some challenges in learning when to delegate, when to
Starting point is 00:29:34 hand off tasks, because in order to get that level of resolution, to be able to see things with that much finitude, you also need to spend a lot of time on it. And then if you're solving the problems, how do you choose the problems that are yours to solve and the problems that aren't, so on and so forth? So that must be a challenge, delineating between what is my problem to solve and what is somebody else's. Yeah. So I would say most leaders, most CEOs we work with, have exactly that problem
Starting point is 00:30:00 for exactly the reasons you described. I think the Elon method is a little bit different and I don't know if you'd agree with this, but the way I think about it is he actually delegates almost everything. He's not involved in most of the things that his companies are doing. He's involved in the thing that is the biggest problem right now until that thing is fixed. And then he doesn't have to be involved anymore. And then he can go focus on the next thing that's the biggest problem for that company right now. So like for example, manufacturing,
Starting point is 00:30:27 there's this concept of the bottleneck, right? And so in any manufacturing chain, there's always some bottleneck. There's always something that is keeping the manufacturing line from running the way that it's supposed to. And sometimes the bottleneck is at the beginning of the process.
Starting point is 00:30:38 It's like, we can't get enough raw material. Sometimes the bottleneck is at the end of the process. We don't have enough warehouses for the finished product or the bottleneck might be somewhere in the middle. And if you run a manufacturing company, there's always a bottleneck, whatever the bottleneck is, is holding everything up. And the job number one is to remove that bottleneck and get everything flowing again. And I think he basically has universalized that concept. And he basically looks at every company like it's some sort of conceptual assembly line, sometimes a literal assembly line, making cars and rockets and basically,
Starting point is 00:31:06 any given week there's guaranteed to be a main bottleneck, one thing that's holding everybody back. So the answer to your question, the resolution of the paradox, is: I'm going to micromanage the solution of that. I don't need to manage everything else, because everything else, by definition, is running better than that. So I can go focus on that. The other part of it, and this is where I think a lot of especially non-technical CEOs would really struggle to
Starting point is 00:31:32 implement the method, is that when he identifies the bottleneck, he goes and talks to the line engineers who understand the technical nature of the bottleneck. If that's people on a manufacturing line, he's talking to people directly on the line, or if that's people in a software development group, he's talking to the people actually writing the code. Right, so he's not asking the VP of engineering to ask the director of engineering to ask the manager to ask the individual contributor to write a report
Starting point is 00:31:55 for, you know, it's to be reviewed in three weeks. He doesn't do that. He would like throw them all out of the window. There's just no way he would do that. What he does is he goes and personally finds the engineer who actually has the knowledge about the thing. And then he sits in the room with that engineer and fixes the problem with them.
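[Editor's note: the bottleneck idea Marc describes here is essentially the theory of constraints from manufacturing. A minimal sketch of the logic, with all stage names and throughput numbers purely hypothetical:]

```python
# Toy model of a production line as a chain of stages (all numbers hypothetical).
# The whole line can only run as fast as its slowest stage -- the bottleneck --
# so the highest-leverage fix each week is whichever stage is currently slowest.

stages = {
    "raw material intake": 120,   # units/hour each stage can handle
    "stamping": 95,
    "assembly": 60,               # current bottleneck
    "paint": 110,
    "warehouse dispatch": 80,
}

def bottleneck(stages):
    """Return (stage name, capacity) of the slowest stage."""
    return min(stages.items(), key=lambda kv: kv[1])

def line_throughput(stages):
    """Overall throughput is capped by the slowest stage."""
    return min(stages.values())

name, cap = bottleneck(stages)
print(f"Fix '{name}' first: line runs at {line_throughput(stages)} units/hour")

# After fixing assembly, a *different* stage becomes the binding constraint,
# which is why the process repeats week after week:
stages["assembly"] = 130
name, cap = bottleneck(stages)
print(f"Next week's bottleneck: '{name}' at {cap} units/hour")
```

Note the second half of the sketch: removing one bottleneck simply promotes the next-slowest stage, which matches the "do this every week for 52 weeks" framing in the conversation.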
Starting point is 00:32:11 Right? And again, this is why he inspires such incredible loyalty from especially the technical people who he works with, which is they're just like, wow, if I'm up against a problem I don't know how to solve, freaking Elon Musk is gonna show up. I can call on the guy that owns the company. Freaking Elon Musk is gonna show up in his Gulfstream
Starting point is 00:32:26 and he's going to sit with me overnight in front of the keyboard or in front of the manufacturing line and he's going to help me figure this out. Wow. Right? And so it's like, how do you possibly like, how can, yeah. And this is the thing. It's like, okay, you're like a normal CEO
Starting point is 00:32:38 running a normal company. Like how can you possibly compete with that? What are the other reasons that CEOs of similar companies or comparable companies don't do that? Technical understanding, I get that, that you need to be able to get your hands dirty from a specificity perspective, but what else? Yeah, so that is a lot of it and that's a topic that makes people really mad because non-technical people really hate being told that they are not qualified to do something
Starting point is 00:33:04 because they're not technical. But every now and then the technical details actually do matter. So there's a whole domain there. Most people in government are lawyers; most people at senior levels in business are MBAs. Most large companies are not run by engineers, right? They're run by trained business people, or increasingly also lawyers. And so there is a real challenge there.
Starting point is 00:33:29 And then I just think more generally, more generally it's just the way that management is taught. And, you know, most classically in the form of something like a Harvard Business School or Stanford Business School, it's basically taught, it's basically management as it was sort of developed and implemented, I would say in like the 1950s, 60s, 70s, it was sort of the so-called scientific school of management.
Starting point is 00:33:49 And so it's basically management as a generic skill that you can apply to any industry: you can manage a soup company or you can manage whatever kind of company, and they're kind of all the same and it doesn't really matter what they do. And there's a common set of management practices, and a lot of it is process: how do you manage the balance sheet? How do you set the review schedule for the meetings? How do you do compliance?
Starting point is 00:34:18 how do you hire and motivate executives? Um, you know, how do you resolve interpersonal conflicts? Like all these, like general business skills. And by the way, those general business skills are, you know, very do you resolve interpersonal conflicts, like all these like general business skills. And by the way, those general business skills are, you know, very useful in lots of contexts. It's just that that training doesn't give you any of the information, that training gives you none of what you need to go do what Elon does.
Starting point is 00:34:38 And then Elon, I would say, pushes it as far as he can in not doing all the stuff that you're classically trained to do, so that he can spend all of his time doing the things that only he can do, which it turns out has this incredible catalytic, multiplicative effect. Elon just said this thing at the most recent Tesla event that blew my mind. His companies famously don't actually have marketing departments. Tesla doesn't spend any money on sales and marketing. They don't sell the cars.
Starting point is 00:35:07 Every other car company in the world is running all these ad campaigns all the time, TV commercials and newspaper flyers and quarterly sales events and promotions; Tesla doesn't do any of that. It's the best car, and you just show up and buy it or not. It's up to you. You're a moron if you don't, right? It's a very straightforward thing. My dad went to go and test drive a Model S maybe a couple of years ago. He's sort of an old-school car guy, and he's used to the sort of banter back and forth, the forecourt, the discussion, the "please sit down, would you like a coffee?". And he said that he walked in and it felt like he got into an Apple store and there
Starting point is 00:35:41 was some, as he called it, teenager in a hoodie that came out, and my dad started to do the whole classic boomer parent negotiation thing. And this guy was like, the price is the price and I don't have any fucking coffee for you. So he left feeling like this is the new world, and now he has a Range Rover. So that's the thing. It's like, do you want the best car in the world or not?
Starting point is 00:36:09 Right? Like that's Elon's mentality, right? And it's working very well. And then he, at this event, he just, he took it to the next level and this broke my brain. I'm still thinking about it. He said, if you think about it, he said the best product in the world shouldn't even need a logo. Like you shouldn't even have to have your name on the product.
Starting point is 00:36:26 People can just identify it from how good it is. It's just obvious, right? Everybody knows because it's the best product in the world. Everybody has it. Everybody uses it.
Starting point is 00:36:38 And like, of course you don't need to put the name on it. Everybody knows, right? And so for a minute I was like, all right, is this a Zen thing to think about? And then I was like, no, as usual with Elon, he's actually serious. The best product in the world would not need your name on it. So anyway, it's just this completely
Starting point is 00:36:58 different method. And then on top of that, to have the guy who's on top of his game, as you said, working on the government challenge, and then, for whatever people think of the new president, pros and cons, to have him fully signed up, embracing Elon and encouraging him to do this. You have to go back: literally the last time something like this happened in the United States government was 1933. This is the closest
Starting point is 00:37:24 analogy to what, at least, has started to happen: it's literally what happened under FDR, which was a fundamental reinvention of how government works, and this massive influx of talent from the private sector that is able to do things that are unimagined. And so it's a once-in-80-years thing. I'm doing everything I can to help
Starting point is 00:37:41 and I hope it goes- Pouring some kerosene onto the fire as you are. I saw a clip of Elon speaking. I don't know whether it was a recent clip, but the clip only came up the other day. He's talking about his thoughts on being a CEO. A lot of times people think that creating a company is going to be fun.
Starting point is 00:37:56 I would say it's really not that fun. I mean, there are periods of fun and there are periods where it's just awful. Particularly if you're the CEO of the company, you actually have a distillation of all the worst problems in a company. There's no point in spending your time on things that are going right.
Starting point is 00:38:09 So you only spend your time on things that are going wrong. And there are things that are going wrong that other people can't take care of. So you have like the worst, you have a filter for the crappiest problems in the company, the most pernicious and painful problems. So I wouldn't say it's fun. You have to feel quite compelled to do it and have a fairly high pain threshold.
Starting point is 00:38:27 And there's a friend of mine who says starting companies is like staring into the abyss and eating glass. And there's some truth to that. The staring into the abyss part is that you're constantly facing extermination for the company because most startups fail 90%, 99% fail. That's the staring part. You're constantly staring. Okay.
Starting point is 00:38:44 And saying, if I don't get this right, the company will die. Quite stressful. And then the eating glass part is you've got to work on the problems that the company needs you to work on, not the problems you want to work on. So you end up working on that. You really wish you weren't working on it. That's the eating glass part. And it goes on for a long time.
Starting point is 00:39:01 What's your assessment of his post-mortem on being a CEO? So, number one, I agree with all of that, and I felt that myself. That's one of the reasons I'm happy to be an investor right now and not a CEO of a company. But the eating glass quote was actually originally Sean Parker, and the original form of it was: starting a company is like eating glass, eventually you start to like the taste of your own blood. Oh my God. And I love using that quote in front of an audience because it always gets the
Starting point is 00:39:36 exact reaction you just gave me. It always gets that moment of stunned silence followed by that look on the face. And everybody's like, Oh my God, that's horrifying. And I'm like, yes, now you understand. Go forth, run your company. Now you understand what it's going to feel like. Yeah, no, look, and the serious thing I would say is, you know, it's a little bit less now,
Starting point is 00:39:54 but there have been times when the idea of being a tech entrepreneur has been a romanticized concept. There used to be TV shows talking about how much fun it was. And I have people ask questions like, well, how do we encourage more people to be entrepreneurs? And my answer is always: no, we shouldn't do that.
Starting point is 00:40:13 Like, people shouldn't be encouraged to do something that painful. They should do it because they really wanna do it. In fact, they should do it because they can't not do it. Right? So I'm not an endurance athlete, but it would be a little bit like encouraging somebody to do a triathlon or an ultra marathon, right?
Starting point is 00:40:27 Like, yeah, why don't you go out there and run 120 miles in the heat, right? Like, no, like that's bad. No, most people should not do that, but the people driven to do that should do that. And by the way, the people driven to do that will do that, right, for their own reasons. And so it really is like that.
Starting point is 00:40:41 And then look, it's tremendously painful. Most of the experience of being in business in a competitive setting, and a startup is sort of the peak version of that, consists of basically being told no by everybody. You need all these things, right? You need employees and you need financing and you need customers. And you go around all day long asking people for those things, and they tell you no, or half the time they don't even respond, they
Starting point is 00:41:06 just go dark on you and ghost you. And then, just when you think you've got it working, a competitor emerges and starts punching you square in the face, or the minute you think your product's working, there's a problem in it and you have to do a recall. It's just this constant rolling horror story. Yeah.
Starting point is 00:41:23 You're the agony aunt now to the people who are eating glass and learning to try and enjoy the taste of their own blood. Do you feel more like a therapist than an investor sometimes? So it really depends. I think there are certain fields in which you really get to see somebody's core personality, their core attributes and their core virtues and their core vices and their core weaknesses. And it's really only with people under situations of extreme stress that you really see that. And so part of it is
Starting point is 00:41:56 like we see that. We see people when they're at their most stressed and when they're at their worst. And then there are, I would say, many cases where we're one of their only remaining allies at that point. So that's a not uncommon thing. And there is sort of a coaching component to it, and a therapeutic component, and just a general being-a-friend, being-an-ally component to it. But the other thing you see is,
Starting point is 00:42:17 you know, there are very specific personality types. And, you know, there are certain people where, oh, I'll just give you an example, Mark Zuckerberg, you know, a great superpower that Mark Zuckerberg has that probably is not well understood enough, which is he does not get emotionally upset in stressful situations. He is able to maintain an analytical frame of mind,
Starting point is 00:42:38 even when other people would be literally bursting into tears and hiding under the table. And I've seen that many, many times. The company's been through a tremendous amount of both good and bad things; I've been involved for almost 20 years, and we've been through almost everything, good and bad. And I've literally not once seen him raise his voice.
Starting point is 00:42:56 And he's very sympathetic. When somebody has a personal crisis, it's not that he's not emotional. He engages emotionally with people, he's very supportive of people when they have issues, and he feels things very deeply like everybody does. But he has a level of emotional self-control. Psychologists would say zero percent neuroticism.
Starting point is 00:43:15 He just doesn't respond emotionally, and as a consequence he can keep his head, right, when everybody around him is losing theirs. Having said that, we also work with a lot of people who are, let's say, higher in neuroticism. And in particular, people who are super creative are often high in neuroticism.
Starting point is 00:43:32 And so sometimes we get the artist type, where they're incredibly creative and they're a fountain of new ideas, and they just have a much more direct engagement with their emotions; they feel things more directly or in a more raw way. And so when things go bad,
Starting point is 00:43:47 it really comes down to them emotionally. I was frankly myself probably more on that side of things. And so yeah, there's, you know, a therapeutic aspect of that. The good news I would say is generally in business, also a couple of things. So one is it's not, in our world, it's not a 90% failure rate.
Starting point is 00:44:05 It's about 50%. And 50% is still high, but you have a reasonably good shot. And then I think most problems in business are fixable, as long as you can keep the team together. Usually when companies crack and go down, sometimes it's catalyzed by something that happened from the outside, but generally the thing that actually happens before a company goes down is the team itself cracks internally: the founders turn on each other or the management team dissolves. And so a big part of it with these things is
Starting point is 00:44:38 just like, can you keep the team together? If you can keep the team together and we're happy to be a part of it, our job is to be part of that team. Then, you know, most of these companies can battle through most things. And so, and so I've said, I've said, I've seen like, I've seen, you know, just as many like incredible last minute saves and rescues and turnarounds as I have, you know, screaming disasters. So, you know, there, there is reason for optimism going into even the dark times. Going back to the governmental efficiency thing, how much do you think the huge swaths, the platoons of useless middle management inside of the government are shitting themselves
Starting point is 00:45:12 at the moment? Yeah. So it's actually an interesting question. It's hard to say from the outside. The other thing is you have this, I would say very interesting thing. There's no homogenous, the government employees, whatever it is, millions of people, multiple millions of people, it's far from a homogenous group.
Starting point is 00:45:30 And I'll just give you some examples. There's a pretty large contingent, which is sort of the baby boom component of it, because the government went on a massive hiring boom in the 50s, 60s, 70s, 80s. There's a pretty big contingent that's close to retirement. And, you know, it was probably going to retire anyway in the near future. And so, you know, there's some prospect maybe there for, you know, kind of, let's say,
Starting point is 00:45:52 mutually agreeable accelerated retirement, you know, is one possible solution. Mutually agreeable accelerated retirement, the most diplomatic way to say, get the fuck out. No, no, this happens to come. So in fairness, this happens at companies. And I'm not saying the Doge is going to do this. I'm not in charge of this, but I'll just give you the scenario that happens in companies. It's something called buyouts. It's contrasted to layoffs.
Starting point is 00:46:12 And it sounds like it's a clever way to do the same thing, but it's actually different, which is you basically, you offer retirement packages, right? And you let people voluntarily sign up for the retirement package, and they don't have to sign up for it. And so it's a voluntary thing. But if they choose to, they can get accelerated retirement or an unusual level of retirement compensation.
Starting point is 00:46:28 Like a way to think about this is like the budget crisis that a business has when it gets in trouble or that the government has where we're $36 trillion in debt. The budget crisis is not like the money that we might have to pay to have people retire this year. Like you can afford almost anything to do that. It's the savings that you can have compounded over the next 20 years. If you get your fiscal house in order. Right. And so you can sometimes do things in the near term that cost more money in order to get the long-term savings.
Starting point is 00:46:55 And that often makes sense. And again, I'm not speaking for the Doge, so this is one option. Another option that they've already talked about in public is the remote work thing. Most federal workers are not actually at work. Most federal workplaces are empty today. There are agencies where there are formal collective bargaining agreements
Starting point is 00:47:14 where the employees are down to a day a month in the office. Right. And there's a huge controversy around remote work, how effective it is, and there are companies that make it work. But if hypothetically you have a government agency with 10,000 people and nobody's literally coming to work, you might as a taxpayer have some questions about exactly what's going
Starting point is 00:47:32 on. And then you have this situation where, as they've said, people have to come back to work. Like part of the thing of working for the government, working for taxpayers is people have to come to work. Well, there are people who have literally left. There are people who have moved to lower cost locations who are not going to come back to work.
Starting point is 00:47:46 And so that's another opportunity. And so there's a lot of this. And then I would say the third thing that they've talked about, which is I think probably the part of it that is not well appreciated, is that the government we live under today... remember that old cartoon? I don't know if you saw it,
Starting point is 00:48:04 you probably didn't because you weren't here at the time, but there's this cartoon that every American kid of a certain age saw in elementary school social studies, which was how a bill becomes a law. It's this very bouncy 1970s cartoon about the bill and the Congress: the White House proposes the bill, then the Congress agrees, and then the Senate, and they reconcile; it's a little cartoon bill wrapped in a little ribbon, and it bounces off the thing, and the president signs it, and everybody's happy, right? So it's legislation, right? It's a legislative process. Most of what
Starting point is 00:48:31 the government does and most of the rules that we all live under are not that, right? Most of it's not legislation. Most of it is what's called regulation, which is very different, and most of the rules that the government exists in order to process and enforce are regulatory, not legislative. The government we live under today is the result of basically 60 to 80 years of agencies effectively putting out regulations entirely on their own. So executive branch agencies just basically deciding
Starting point is 00:49:02 that there are gonna be hundreds of thousands of rules that we need to live under, and then deciding that they need to staff themselves to be able to process and handle and enforce all those rules. The Supreme Court, in a series of judgments a couple of years ago, ruled that regulations that were not authorized by legislation are not constitutional. And by the way, if you just read the US Constitution, which is quite short and to the point, it makes it very clear that the legislature, the Congress, exists to pass laws; the executive branch exists to enforce those laws; the executive branch does not get to issue its own laws.
Starting point is 00:49:35 And so we've lived in a system where the executive branch has been issuing its own laws for a very long time. The Supreme Court now says that all of those regulations are now not constitutional. And so basically any activity in the government that relates to the staffing and enforcement and all of the processes and all the work around those regulations is no longer constitutional. And so there's a lot of government activity that actually is no longer constitutional. And the previous administration was completely uninterested in doing anything, bringing the executive branch into alignment with what the Supreme Court had ruled. But again, if you read the constitution, the Supreme Court gets to make these decisions. They've decided this is not
Starting point is 00:50:11 constitutional. This new administration has an opportunity to bring the government more in compliance with the Supreme Court, which means the government will get smaller. Yeah. What happens if Trump can't deliver change? He's got all of the things, House, Senate, popular vote, Elon, unlimited funds, blah, blah, blah. He doesn't even need to worry about being reelected. What's the implication for America and the West if he can't make sizable change? Yeah, I don't know.
Starting point is 00:50:37 I mean, look, I was to start by saying I'm optimistic. Like I think he's determined. I think that Elon's determined. I think Vivek is determined. I think the team around the Doge. By the way, a lot of the key staffing positions, there's a lot of controversy in the press as usual about the cabinet officials and I think many of those are also very good, but a lot of the operational managers are being brought in and like the deputy secretary level
Starting point is 00:50:56 and the department heads are very, very strong people. And so I think this is going to be one of the strongest, if not strongest, execution competence capability administrations that we've had in a long time. Again, maybe arguably since the next, I think a reasonable case that this will be the best staffed administration since the 1930s. So I'm reasonably optimistic. And then look, the president is in this very interesting
Starting point is 00:51:19 position where he doesn't ever have to run again, right, because he can't. And so a lot of what happens in the second term, if presidents choose to take advantage of it is they can do things that caught, you know, that basically might even put their reelection at peril. They can do more forceful things cause they, you know, they can do the things that they actually think are
Starting point is 00:51:35 important as compared to the things that they think they might need to do to get reelected. And that's something you can only do when you're gonna get, when you're gonna get termed out. And so, and then of course there's just, yeah, there's sort of, you know, all the things kind of behind Trump's level of determination this time, which I think is very high. So, I'm quite optimistic.
Starting point is 00:51:50 I don't know. I mean, you could paint various cynical pictures of the whole thing being gummed up. My guess is if it disappoints, it's going to disappoint in the sense of some of these things are going to get done, but not all of them. And frankly, I think it'd be great to solve all of our problems at once. But if we can't do that, if we can just basically, if this team can sort of give us a roadmap on how to do things properly and can kind of show us through examples, let's say there's a thousand versions of this problem that have to get solved and this team only gets to the first hundred. Um, then we will have the recipe book right for how to solve the next 900 over the next, you know, two or three or four administrations that follows that.
Starting point is 00:52:35 And so that, that, that would be my optimistic failure case. And I think we'll, I think we will get at least that, if not a much more sweeping transformation. Talk to me about this, uh, this Willow Google quantum computer thing. How big of a deal is it? Yeah, so first of all, I don't know, you know, I just read the announcement. I haven't been face to face with the technology, so I don't know anything that's not public on that. Quantum computing historically has been one of those things that is theoretically very exciting
Starting point is 00:53:02 and then just extremely hard to actually get working. Quantum computers don't act like regular computers in any way. And so, and there's always this kind of thing where you in theory can kind of demonstrate something in a lab and that maybe it works and maybe it doesn't. It's often not completely clear even whether it worked. And then, you know, could you ever get that to repeat? Could you get that to repeat across, you know,
Starting point is 00:53:21 10 or a hundred different, you know, kind of locations? Could you ever then apply that to these different problems? There's just all these really fundamental questions. And so it's one of these really enticing things that everybody's hoping that we get to. I just, I'm not close enough to know whether they got to the full breakthrough or not. The thing that I think would be the full breakthrough
Starting point is 00:53:40 would be not only does it work in one lab, this is something where you could now stamp out 10 and then 100 and then a thousand of these machines and then you could do it repeatedly. The most enticing thing of what they said, which is one of these things where you think about it and then you decide you have to stop thinking about it because you'll never think about anything else again, is that in theory the lab demonstration that they have is doing a computation that the entire universe that we live in, if it was converted entirely
Starting point is 00:54:13 to just a giant computer, if every atom in our universe it was turned into a giant computer, that computer could not solve that problem before the universe ultimately dies if you don't. Right? And the implication of that is that the computation, therefore must be spread out across many other parallel universes,
Starting point is 00:54:35 which is therefore proof that there are many other parallel universes. And so- Oh, interesting. Right, in other words, if you weren't able to spread this computation out across many parallel universes, it could not have been done. Therefore, there other words, if you weren't able to spread this computation out across many parallel universes, it could not have been done.
Starting point is 00:54:46 Therefore, there must be many parallel universes. Therefore, we do actually live in a multiverse and there are actually billions or uncountable numbers of parallel realities included. The way using for our RAM. Yes, yes. And we're, yeah, we basically have outs, yeah, we're, yeah, we're outsourcing to these other universes.
Starting point is 00:55:02 I mean, you know, by the way, are we having any impact on them when we do that? I don't know. Yeah, there's some very glitchy 360P universe out there. God damn it. That guy from Google is trying to work out if two plus two equals four again. Yes, exactly. And then of course, are any of these other universes going to do it to us? So, you know, the quantum people really do believe it's a, you know, we live in many, you know, many, many, many different realities. So, um, you know, this may be one of those things from science fiction that we end up figuring out, but, um, yeah, this is, it's the kind of thing, you know, it's, it's, you
Starting point is 00:55:35 know, Einstein's famous reaction to quantum physics was, you know, this, this is impossible because the famous quote was God doesn't play dice with the universe. Right. And it, you know, this result is yet another evidence that no, actually God really likes playing dice with the universe. He is doing it between that. I mean, this, this last period is so much going on in tech, uh, Sora AI as well. What's your post-mortem?
Starting point is 00:55:56 I mean, they've closed now signups and stuff because it just went ballistic. For the people that don't know, this is the video equivalent of OpenAI's image creation service and now you can do video too, although so many people signed up, it's restricted to particular territories, et cetera, et cetera, et cetera. What do you make of the way that that went? Yeah. So this is, yeah, this is video. So this is text to video generation.
Starting point is 00:56:21 So you put in a text prompt, you get out a video, which sounds easy, turns out to be very hard. It's very magical. They're, they're, Sora's, you know, one of the more impressive ones at OpenAI. They're, you know, we, there are others, you know, we have, we have a bunch of others and there's a bunch of others underway. It's a competitive space, but you know, the, the, the class of technologies is very impressive and Sora's very impressive. I mean, one is just like, how amazing is it to live in a world in which you can literally, you know, say a text prompt of, you know, a hobbit living in a hobbit town with, you know, and the dragon shows up.
Starting point is 00:56:49 And then like, literally it will render that for you. Right. And if you want the dragon to be rendered out of used car parts, you just say, dragon made out of used car parts. And it renders that for you. Right. And so it's like, I mean, that's just amazing to start with. The even more amazing thing about that, and opening, I put out a paper about this, it's very interesting.
Starting point is 00:57:08 So the more amazing thing about that is, to generate video that passes the sniff test of the human eye looks at it and thinks that it's real, you can't just sort of copy from other videos, and that's part of what's happening is, this is your training, a system like Sora is trained on millions of hours of video from like all over the world
Starting point is 00:57:29 and all this open source stuff and everything out of old movies and all this stuff. And then there's copyright disputes over what else is in there and so forth. But, so it's trained on lots of video, but it's not sufficient to just train on lots of, cause video is all 2D, it's not sufficient to just train on 2D video
Starting point is 00:57:43 and then generate a new 2D video that actually looks to the human eye like it's a representation of the real 3D world. And if you look at these videos coming out from Sora, if you look at them carefully, basically what you see is like multiple sources of lighting in different parts in 3D space coming together. You see reflections coming off of reflective surfaces
Starting point is 00:58:04 that are actually correct. You see translucency that's correct. And then you get combinations of these factors. So like, for example, if you do something where like man walking through a puddle at night, you'll get the splashing effects of the water. The water has to splash in a way that is physically realistic. The light has to come through, refract through the water droplets in the correct way. The water droplets have to reflect the image of the man's shoe in the right way. So, and the AI term for this is world model. This is not only a model for video, this is a model that actually understands the real world.
Starting point is 00:58:38 It's a model that actually understands 3D reality, right? And it understands light, right? And it understands surfaces and textures and materials and motion and gravity and gravity and shapes. And right, exactly. All these things, right, exactly. You know, a stiff surface versus a spongy surface. Yeah, you know, definitely, you know, give me a close up. Give me a close up of a baseball bat hitting a baseball.
Starting point is 00:59:01 You know, like in slow motion, it has to deform the baseball in the way that in the real world it does, you know, like in slow motion, it has to deform the baseball in the way that, in the real world it does, you know, when it hits the bat. And like for the story, it has to do all those things. And in fact, it is able to do all those things. Like if you just look at the results, you can see all that's happening. And so what that means, backing up to that,
Starting point is 00:59:19 the implication is that model is not just a video model, it's what's called a world model, meaning it actually understands 3D physical reality. The implication of that is that we may have basically just solved the fundamental challenge of robotics. The fundamental challenge of robotics is how do you get a physical robot to navigate the real world without screwing everything up, right?
Starting point is 00:59:39 So how do you get a robot waiter to navigate through a busy restaurant without stepping on anybody's foot, right? Without tripping over anything, without spilling water on the table. To understand everything that's happening in real time to be able to adapt. Very similar to the self-driving car challenge,
Starting point is 00:59:53 like how do you do that? But how do you do that for basically everything? And how do you put machines up close with people in such a way that it's like completely safe? And it turns out one of the things you need to do to do that is you need a world model. You need, the robot needs to have a comprehensive understanding of physical reality so that it can understand
Starting point is 01:00:08 what's happening. And so when things change, because you know, the robot's seeing primarily visual, right? It's just like you're seeing visual. So you have to map the visual into an internal representation of the 3D world. Up until now, building a world model like that
Starting point is 01:00:20 has been difficult or impossible. And it now appears that that's actually starting to, that that's actually starting to work. So Westworld. It's happening. 2028, 2030. What do you think? I mean, you must be, you must be knee deep in, uh, projections for. Robotics for AI for where we're going to to get to, what's the meteorological artificial
Starting point is 01:00:47 intelligence forecast for the next few years? What do you think people can expect sort of 25, 26, 27, et cetera? Yeah, so I'll just start by saying it's really hard to forecast this. And I'll give you my favorite example of this, which was the world's leading AI researchers in the year 1956 got together and they got a grant from the government to spend 10 weeks in the summer of 1956 at Dartmouth University. And so they all got together and in that 10 weeks, they were going to finally get to artificial general intelligence. They were so close. Like they were almost, it was called AGI, they were almost an AI that can do everything that a person can do. They were only 10 weeks away. And you'll notice that was in 1956. It didn't happen.
Starting point is 01:01:26 We're sitting here in 2024 and we still don't have it. So this field is prone to, this field of all the fields in tech, AI is prone to utopianism. It's prone to apocalyptic nightmare scenarios. And it's prone to a very, very, it's been very hard to forecast progress, like extremely difficult to forecast progress.
Starting point is 01:01:43 Well, another great example of that is OpenAI was not formed to make chat GPT. OpenAI was not created to make large language models. OpenAI was created to make an entirely different kind of AI. Then it turns out if you trace back the origin of GPT, there was little one guy at OpenAI, his name Alec Radford, and he was sitting in a corner and he's like, I think all this other stuff is wrong.
Starting point is 01:02:02 I think we should be doing this other thing instead. So even that company that brought us, you know, chat GPT, you know, didn't, you know, a few years ago, didn't know that that's what they were going to do. So forecasting is very hard, especially about the future. Look, having said that, the progress is staggering. And, you know, one is just the observed progress is staggering. The plans that people have are incredibly exciting. One of the things you always wonder with any field like this is just how many more ideas are there, right?
Starting point is 01:02:30 Like arguably we've run out of ideas for what the smartphone can do. Like what's new with the smartphone now versus last year versus the year before. It's the smartphone companies kind of run out of ideas. With AI there's ideas all over the place. And so that's very optimistic. And then many of the smartest people in the world are being drawn into the field, right? And so every
Starting point is 01:02:49 smart college kid, you know, who's considering what to do, a lot of the very smartest are going into the field. And then a lot of people are like coming over from other fields like physics to work on this now that it's really started to work. So we're also getting this effect of kind of this, you know, reverse brain drain or something where we're pulling in all the smart people. That's another reason for optimism. And then obviously the commercial opportunity is very large. So that's another reason for optimism. The big focus right now is to get these things to what's called reasoning, general purpose
Starting point is 01:03:16 reasoning. So to get them to predictably be able to solve problems in a way that is fully coherent and leads to good results every time. And these things are very good at solving certain problems most of the time. It's a very unusual technology where if you ask a large language model the same question twice,
Starting point is 01:03:35 it actually gives you different answers. And if it doesn't know the answer to the question, it will sometimes make up an answer. And it is pretty amazing and wild that we have creative computers that will literally make things up. But like, we need versions of these that don't do that. We need versions of these that are able to reason their way through complicated logic
Starting point is 01:03:54 chains that are able to model physical reality like I just described, that are able to, you know, think longer and get better results. And so there's tremendous amounts of work happening on that right now. I'm pretty optimistic on that. I think by, yeah, I think by, I don't know, 2028 or something, I'm just off the top of my head, just to with a little bit of margin of safety by 2028, these systems, you'll be able to basically
Starting point is 01:04:16 give these systems problems and they'll, you know, and they'll solve the problems. If it's something that a human being can do, they'll be able to do it. And then robotics is basically that. And then we now have the interface method for robotics because we have large language models so we can now make robots talk and listen
Starting point is 01:04:30 and we have voices and human English comprehension and all that stuff. And then basically the rest of robotics is basically mechanical engineering and then some power, basically battery technology. And I think robotics are getting quite close now. And so that, and I bring that up cause like that, AI is going to be important if it's just disembodied
Starting point is 01:04:49 software, but it's going to be really important if it's around us all the time in the form of- If it exists. Yes, in the real world. In the real world. And then of course that's already happening, right? You know, autonomous, you know, it's now drones now fly themselves, cars now, you know,
Starting point is 01:05:01 cars now drive themselves, right? So, you know, for people who haven't tried either the Waymo cars in places like San Francisco or the new Tesla, the latest version of the Tesla software, they're both outstanding. And so cars now drive themselves, that's a big step forward. Drones now fly themselves,
Starting point is 01:05:17 there are now companies making autonomous submarines and all kinds of fancy, fancy things. Oddly enough, the autonomous submarine just feels like one that you could have done earlier, you know, there's far, there's far less traffic, there's far fewer things to crash into. Yeah, probably. Um, there's, uh, yeah, we have, we have a company that has a, has a, has a military submarine that is, uh, is, uh, it's literally, it's literally, it's an
Starting point is 01:05:40 unpressurized, uh, shell, uh, sort of, uh, uh, platform and you can basically customize it in many different ways, load many kinds of things on it. But it's like unpressurized form, it's able to go far deeper than a sub that needs to support human beings can go. It's able to go down and be able to do all kinds of things. So yeah, like the whole, as we're seeing on the military side,
Starting point is 01:06:02 there's a transformation of military, of military strategy, you know, that's underway right now, you know, kind of as these technologies hit that we're already seeing in Ukraine. Um, so that's going to matter, but you know, look, a lot of this stuff is just going to be stuff in our, in our personal lives, in our daily lives. What do you think of the adoption of self-driving sort of
Starting point is 01:06:18 on-mass self-driving? I don't know whether there is a more highly kinetic, high risk and heavily involved area of human life that is ready to be outsourced to AI. You know, there's other things where you can help you cook or it'll, you know, keep your food fresh or do whatever, but there doesn't seem to be the same sort of mortal consequences. Uh, you know, people have a very strange relationship between car safety when or food fresh or do whatever, but there doesn't seem to be the same sort of mortal consequences. People have very strange relationship between car safety when it's done manually and when
Starting point is 01:06:51 it's sort of pivoting to that. Is this just some conceptual inertia thing that we're moving through over time slowly as usual? Yeah. So to start with, cars today are a disaster from a safety standpoint, right? And so cars, the current run rate of road deaths worldwide is at least a million people a year. And so it's about 40,000 road deaths a year in the US and about a million worldwide. And by the way, that million may be a low number, it may be much higher
Starting point is 01:07:13 worldwide and we just don't know how to count it, but it's at least a million worldwide. And so it's sort of, again, you kind of think about this like, okay, a million people a year dying from cars. Okay, over the course of a decade, that's 10 million people. Like when 10 million people die from something in modern world, we use words for it, like apocalypse, right? Genocide, like this is like mass death at a very large scale. And so one is just like, to your point,
Starting point is 01:07:37 like there's a psychological thing here, which is we have gotten acclimated. And by the way, for every death, there's many injuries, right? Many people crippled and never recover. And so there's a psychological thing here, which is we have gotten used to a very large amount of carnage in return for the convenience
Starting point is 01:07:51 of modern transportation and logistics. By the way, in the fullness of time as a civilization, it seems to have been a good trade off. I don't think, very few of us not named Greta Thunberg want to go back to walking everywhere know, walking everywhere, right? And so, and you know, you could never go back to riding horses, right? Cause of course, you know, animal rights.
Starting point is 01:08:12 But, you know, so like it is amazing that we got there. By the way, implication of that being, if the car were invented today, I mean, imagine the conversation that we would have, right? About the rollout of the automobile, right? Which is like, okay, the plan, right? The plan is to strap people into 6,000 pounds of steel and glass. And then the plan is to shoot them down, you know, a road that may or may not, you know, be in good shape at 60 miles an hour.
Starting point is 01:08:33 And then they're going to shoot them at each other. And then we're going to have the safety measure. The safety measure is going to be we're going to paint a line down the center of the road. And that's going to keep them from crashing, right. And so like there's no way that you'd be able to like invent this today or launch this today, so maybe it's good that it happened in an earlier era. So, so anyway, to your point, we're used to that. Um, the self-driving both Waymo's and Tesla's today are far safer than that. Far safer.
Starting point is 01:08:57 And there's like edge questions about, you know, like, you know, this is the, the metric is like, you know, collisions per, you know, thousand or a hundred thousand miles or something, or desperate a hundred thousand miles. And like, it know, collisions per, you know, thousand or hundred thousand miles or something or deaths per hundred thousand miles. And like, it is just no question. They're just like tiny percentages relative. And by the way, not that they're necessarily zero, but they're tiny percentages of what we, of what we experience every day with them. You know, with road deaths. By the way, I think as humans, we have a bad intuition on this because we can't see the other drivers most of the time when we're driving. So I find the cure for people who need to think about this harder is to just go spend
Starting point is 01:09:25 a day sitting at the DMV and just watch the parade of humanity and watch the people who have to be physically steered by their relatives in front of the camera to get the new driver's license at age 90. So we just, and then get like an X-ray machine into cars that are driving past to see how many people are just texting, right? Or how many people have been hitting the vape for the last three hours, right? So like, there's no question
Starting point is 01:09:49 that the technology we have is much safer. There's a societal question as to whether we want the, where we want the trade-off to be, right? Do we want the trade-off to be to stay to the current level of carnage because we're used to it, right? Or do we want, you know,
Starting point is 01:10:02 and wait until the computers are perfect, or, you know, is it just, is it enough for the computers to be much better, which is what's actually happening. Interestingly and optimistically, we as a society have chosen to actually accept, we have chosen to roll out self-driving without the computers being perfect.
Starting point is 01:10:16 And so, you know, there's Teslas and Waymos on the road all over the place, and you know, and people are fine with it. And I actually think that's like, it's a very optimistic thing that we were, we were able to do that. If you talk to the people who make self-driving cars, what they tell you is the problem is not ever
Starting point is 01:10:30 a self-driving car colliding with a self-driving car. The problem is it's when you have other humans in the mix, right, as you'd expect, right? Because humans react in unpredictable ways. And so a lot of the engineering going into these cars is to accommodate the human drivers. There will be some future state years in the future where there are no more human drivers, at least on public roads. And when that happens, these things will be like basically completely
Starting point is 01:10:54 safe because then you won't have the human element in there. And so one of the things that we may want to drive, you know, we have to drive to ultimately is that there are a lot of questions around that, but that may be where we want to go to. I'm pretty optimistic about this. I think that these companies are making excellent progress. I think that the, this is pros and cons, but the Chinese auto industry is now coming online and I think they're also going to be quite good at this and I think they're coming now. And so I think that this whole space is going to develop, I think, quite quickly.
Starting point is 01:11:21 What about personal autonomous vehicles like drone? I remember I was in Leonardo da Vinci airport years ago, and there was one out for gyrocopter thing out front, this can fly itself, et cetera. That feels like a much bigger problem in that you're not retrofitting a new piece of technology onto an existing infrastructure, that this is basically an entirely new thing that you need to have where they can float around.
Starting point is 01:11:49 Yeah. So the, the term, the term, the most common in our world is they call it EVTOL, EVTOL. So electric vertical takeoff and landing. Okay. Right. Right. So VTOL, VTOL, everybody's seen VTOL seen VTOL, vertical takeoff and landing.
Starting point is 01:12:06 It's a Harrier jet. There's a military fly. The movie True Lies, they have a big scene on the Harrier jet. So it's the jet that has thrusters that point down, it rises and then it has thrusters that carry it forward. And so we've had these VTOL things. And by the way, we also have had helicopters which have the same property. So we've had these things in the past.
Starting point is 01:12:25 And then the electric part is basically, that they'll be electric. And then the other presumptive part of that is that they'll be autonomous. They'll be self-piloting. Because for those things to go mass market, you can't expect everybody to get a pilot's license, right? You're gonna wanna just get in,
Starting point is 01:12:39 wanna sit back and let the thing fly itself. So autonomous, electric, vertical, takeoff and landing. We have all of the elements to do that. We know how to do all of that. you're gonna wanna just get in, wanna sit back and let the thing fly itself. So autonomous, electric, vertical takeoff and landing. We have all of the elements to do that. We know how to do all of that. Like all those technologies exist now. And there are companies that have put this together and have these products working.
Starting point is 01:12:55 The big issue right now, I think there's a couple of big issues. One is just power, which is, it's just, it takes a lot of power to get something in the air and keep it in the air. And like I said, batteries aren't very good right now relative to what we need for things like that. So, we need some breakthroughs and batteries. And then the other is just cost and infrastructure and then safety regime and all that. Over the course of five or 10 years, I think you could
Starting point is 01:13:21 imagine that happening. There are people working on the Ironman suit. Um, you know, Elon keeps making references to, I don't know that he's working on it or not, but he keeps making references to it. And then, um, there are, there are, I, there have been, well, do you remember, I don't remember, there was, uh, was it the James Bond, one of the old James Bond movies, um, like Skullfinger, one of those movies, uh, actually had what looked like a stunt where James Bond actually escaped, uh, from a house by strapping on a jet pack. And actually did this thing and he did this thing, a big arc up in the air and then he landed like a half mile away.
Starting point is 01:13:52 And when I was a kid, I watched it like, you know, I just love those movies and I watched it and I was like, well, clearly they did that with like a green screen special effects. And it's like, no, that was an actual jet pack. No way. Yes, that actually existed. And actually the military has those today. I've seen someone boarding a, like a tanker type thing and they've got sort of the hands in gauntlets and that, but that's not electric. That's, that looks like it's jet, some sort of, sort of fuel jet powered type thing. Yeah, that's right. So today, today that would be, today that would be, today that would be jet powered and there are jet packs like that. Again, expensive, risky, not mainstream. You do not want to bullet in the wrong location if you've got one of those things strapped to you.
Starting point is 01:14:31 Yes, exactly. That's maybe the second most dangerous hobby after flying in squirrel suits. So those are the hobby videos where they end, the YouTube videos end two seconds before the final moment. But look, we know how to get things in the air, like we know how to do that. And so, you know, is there a kind of, you know, is there a kind of propulsions, is there a kind of, you know, either fuel system, propulsion system, is there a battery system, by the way, you know,
Starting point is 01:14:55 there are hobbyists that have literally put together like quadcopter drones, you know, where it's like a helicopter, but using lots of individual quadcopter rotors. And so that could be electric powered and could basically do the same thing. And so there's lots of hobbyist activity here. And so I think people are gonna be working on this a lot
Starting point is 01:15:11 over the next few years. Solving autonomy helps tremendously because the human doesn't need to be trained. You can make the thing super safe. Another interesting thing that's happened is, if you've seen the Waymo cars in the street, they've got this sensor on the top of the car
Starting point is 01:15:28 that kind of spins around. That's a system called LiDAR, and it's a light-based form of radar. It's a system that lets the car do basically 3D mapping of the environment. The Tesla, by contrast, is basically optical. The Tesla is using cameras to construct a 3D model of the environment and sort of interpolating distances based on having different
Starting point is 01:15:47 camera angles. The Waymo cars actually have these sensors called LiDAR that actually do depth sensing. The problem with the depth sensing is LiDAR sensors historically have been extremely expensive, which makes it hard to field these things in products, but they're coming down rapidly in price now. About three months ago, I bought my nine-year-old, who is my in-house prototype tester for all this stuff, a Chinese robot dog for I think $1,600, and the LiDAR is in the robot dog's snout. And it's a, it's a robot dog.
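The camera-only approach described here, interpolating distances from different camera angles, is at its core stereo triangulation: a point seen by two cameras a known distance apart shifts horizontally between the two images, and that shift (the disparity) determines depth. A toy sketch of that relationship, with illustrative numbers and a hypothetical helper name, not any carmaker's actual pipeline:

```python
# Toy stereo-depth sketch: two cameras separated by a known baseline see
# the same point at slightly different horizontal pixel positions (the
# "disparity"). Triangulation recovers depth via depth = f * B / d.
# Numbers below are illustrative only.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres from stereo disparity: depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        # Zero disparity would mean the point is at infinity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, cameras 30 cm apart, 21 px disparity.
print(stereo_depth(700.0, 0.3, 21.0))  # prints 10.0 (the point is 10 m away)
```

Note how depth precision degrades with distance: far-away points produce tiny disparities, so small pixel errors translate into large depth errors, which is one reason LiDAR's direct time-of-flight measurement is attractive despite its cost.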
Starting point is 01:16:18 Like the demos you've seen, like the Boston Dynamics robot dogs, it's like that, but you can actually buy one and have one of your own, and it does all the things. It's very impressive. And it actually has, in its snout, the spinning sensor. And so they've somehow gotten LiDAR down to a couple hundred dollars. And again, like that's very encouraging, because now that you've got that, now you could start to think about building all kinds of things that then have depth sensing. And so all these pieces are starting to fall into place, and with a little bit of luck
Starting point is 01:16:45 and some progress in batteries, um, the next, you know, five years, 10 years, more of the Ironman stuff hopefully will start to happen. It really is a different timeline. Speaking of which, I was in the UK, uh, last week, I was there for a couple of weeks. I went back home and had this live show thing. Give me your opinion on the sort of state of free speech and regulation
Starting point is 01:17:04 on that side of the world. We're talking about the excesses, the overreaches that hopefully we may be coming out of over here. What's your opinion on the other side of the Atlantic? Yeah, so I'd say there's like three kind of worlds of speech right now, three zones in the world. There's like the Chinese version, which is we're going to tell you explicitly what the rules are and you better not break them. But basically, you know, other than the things we tell you not to talk about, which we're very clear about, you know, you can go crazy and talk about whatever you want. So kind of the top-down authoritarian, you know, kind of model. We have the American model, which is we have free speech constitutionally guaranteed, but there are a thousand unwritten rules of society, like the world's worst version of Curb Your Enthusiasm,
Starting point is 01:17:51 where there's a thousand different ways to trip the PC police. And if you trip any one of them, your life gets vaporized. And so you might call that bottoms up authoritarianism. The government's not coming to get you, they're not going to jail you for it, but like, you know, boy, you know, just, it's a shame that you got fired and your family now hates you and your friends have all left you and you can never work again, right? So that's like the distributed bottoms up authoritarian, unwritten version of what China has, right? So top down, bottoms up. And then there's Europe, which has decided to do both. For reasons I have to say, I don't fully understand.
Starting point is 01:18:29 But at least from an American perspective, if you look at the speech laws in the UK or Canada, they are horrifying. I would like to think that if those speech laws came to the US, we would have a revolution. They're just completely unacceptable. Of course you can't send police to somebody's house because they said something wrong on Twitter. Of course, you can't do that. But yet it happens.
Starting point is 01:18:51 And then, yeah, the rest of Europe, they all have variations on that. Obviously, Germany has very explicit versions of that on a lot of topics, and other countries do as well. So yeah, but then also Europe and the Anglosphere have the top-down version of the actual laws, hate speech laws and so forth. But then they also have the bottoms up authoritarianism of all the implicit codes, which if you're a highly educated, you know,
Starting point is 01:19:19 Ivy League graduate equivalent, you know, your job is to track those codes as they evolve every day by faithfully reading the New York Times cover to cover every day. But if you don't and you fall out of step, or you're a working class person and you say the wrong thing, if you crack your knuckles in the wrong way outside of your truck cab, you get accused of being a white supremacist and you get fired, which is something that actually happened in the US during the 2020 craziness. So yeah, Europe right now is combining the worst of those worlds. And, you know, you may know more about it than I do. They seem hell-bent to just get much, much worse. I don't know what's going on.
Starting point is 01:19:50 I really, you know, I went back to the UK and I was quite disheartened by a lot of this stuff. I spent my time in Kensington. Um, you know, the diversity is our strength thing in the UK, which is a message that you're greeted by as you enter the country. Um, I don't know whether there's such a thing as too much diversity, but you know, it just didn't particularly feel that much like London. I'm aware that that just sounds like the most Brexit sentence that I could have ever said, but I was like, huh, this just doesn't feel very much like London.
Starting point is 01:20:22 Maybe it should be a cosmopolitan place, so on and so forth. So that was kind of a little bit strange. It was just something that I hadn't noticed previously, like that level of diversity. Um, what else did I notice? The level of service, like every Uber took 20 minutes to get to you. It's like, why is that happening? That seems odd, I was slap bang in the middle of the city.
Starting point is 01:20:40 Every bit of service in every restaurant was either a long wait, or there was some sort of a problem. Uber Eats seemed completely determined to just, like, destroy whatever food or coffee. We actually started ordering twice the number of coffees that we needed, in an attempt to maybe combine them together so the residual amounts added up to the right amount of coffee that we needed. And we still didn't get it. I don't know, you know, I mean, I know I can just whine on like the fucking opulent returning prodigal son or something, but it really was quite disappointing in many ways. And then on top of this, you know, this upside down world where Keir Starmer is saying that
Starting point is 01:21:20 the immigration is too high, and that this is a problem that we need to deal with. And at the same time, you've got these still unbelievably bizarre non-crime hate incidents that are being reported. It's odd and I feel bad about my old homeland. I don't know whether Europe's in the middle of a downfall or what at the moment, but you know, between all of the different parties that are going, and this vote of no confidence that just happened recently, it's like, whatever the timeline is, maybe this is actually what happens when we get used for the RAM of
Starting point is 01:21:49 another fucking universe's computation, and they just start to speed up all of the drama. That's the side effect. I don't know. Um, but yeah, there's, there's something happening and it's being expedited, whatever it is. Yeah. So let me say a couple of things.
Starting point is 01:22:02 So, a friend of mine in private equity: he grew up in Texas and worked in the US, and then went over and lived in London and did private equity in Europe for five years. And he came back and I said, you know, what was it like? And he said, in Europe, it's like there are five things that are more important than making money, and nobody will tell you what they are. Right? Like, there are these goals and objectives, and they're loosely around ideas of societal fairness, and they're loosely around ideas of not having to work very hard, and they're loosely around retirement
Starting point is 01:22:31 things and social services, and they're loosely around diversity, and they're loosely around immigration, but they don't seem crisply defined, and so maybe there's a bit of an identity crisis happening there. There's another thing, a famous line that fascism and communism are always looming over the US but always land in Europe. There's always this threat that the US is going to go communist in the 20s or is going to go fascist in the 30s or something. And then what actually happens is, France actually goes communist and Germany actually goes fascist. So there are actually still communist parties in Europe. Maybe Europe is downstream of American culture in a way that is maybe helpful in some ways,
Starting point is 01:23:14 but harmful in others. Well, let's say a great example of that with England is, it's like, okay, in the US, we were all trained from birth, of course, to feel extremely bad about the fate of indigenous peoples on the North American continent. Um, you know, Chris, pop quiz question: who are the indigenous people of England? Uh, I don't know, who are they? Whoever it was before the fucking Normans came in.
Starting point is 01:23:36 No, no, no. The answer is it's the English. Right. Okay. Like it's just the English. It's just the English. Right. Like the English didn't displace anyone. It's like the Normans, it's like the Saxons, like whatever, came together, but there was no displacement, right? It's like the indigenous people of England are just the English.
Starting point is 01:23:51 Yet somehow the English feel just as bad about indigenous peoples as we do, right? Like, right? Well, that's the other thing is, as you know, England has like no history of, you know, Africans, you know, anyway, I'm not going to go into that topic. But yeah, the spectacle of BLM in England is really quite something.
Starting point is 01:24:12 It's just, it's not very historically grounded, let's say. So, yeah, so, you know, maybe it's just America, maybe America's both functions and dysfunctions kind of ripple out to the rest of the world. Yeah, yeah. In a way that is helpful in some ways and harmful in others. And then, you know, I mean, from the outside, it seems like it's a continent and a series of countries that are having just a massive identity crisis. I think that's certainly true. Looking for a direction, looking for what your contribution is to the broader world. You know, a little bit of attention is being paid to Russia.
Starting point is 01:24:43 Most of the attention is being paid to the US, and some attention gets paid to China, and everybody else is playing second, third, fourth, fifth string behind that. And I wonder whether the UK is feeling like a jilted lover at the moment, where it's just like, we used to be important, we used to do a thing. I actually wanted to bring this up. Um, I've been sort of harping on about the, uh, differential number of entrepreneurs coming out of the UK versus coming out of the US, despite the fact that we have
Starting point is 01:25:09 similar higher education levels and the same number of universities in the top 10 worldwide, the UK has got the same number as the US does, but we put out only 20% of the number of entrepreneurs. Based on your assessment, culturally, commercially, what do you lay that at the feet of? Yeah, look, part of it is some of the great UK entrepreneurs come to the US. And, by the way, this is not just true of the UK, it's also true of France and Germany and Sweden and Norway and many other countries, right?
Starting point is 01:25:39 And so, just because the US has such a highly evolved, advanced entrepreneurial ecosystem, it may just be that we're drawing a lot of them. And so they just, they don't start companies in the UK because they've left. I think that's part of it. I think part of it is the identity crisis. Here, I just know what I hear from my friends who are English or French or German, which is just, they feel like their home countries and cultures are just not very supportive of the entire concept. The governments don't necessarily want it. The legal codes are not well set up for it.
Starting point is 01:26:11 Look, the UK, there's a great example of the dichotomy, just like the speech thing. So in the US, this AI thing is kind of the big exciting thing in tech and the US is forming it. And the Biden administration was threatening to do all these horrible regulatory things, but the new administration is certainly not, they're going to do, I think, really smart
Starting point is 01:26:28 things. And so there's going to be an AI tech boom in the US. It's going to be spectacular. And a lot of that is around startups. Europe, the EU, has chosen to basically make all of that illegal, right? And so they passed this thing called the EU AI Act. And, you know, this guy Thierry Breton, it's kind of his crowning achievement to basically
Starting point is 01:26:45 make AI startups in the EU illegal. So they've just decided they just don't want them. And then the UK- Cut us off at the knees just in case we were going to contribute to this new fledgling industry. Yeah, well, the EU, you know, the EU bureaucrats have this slogan, they just say, you know,
Starting point is 01:27:00 we're not going to lead the world in innovation, but we can lead it in regulation, right? Right. Right. Exactly. Your reaction is the appropriate reaction, which is like, no, that's not actually how it's going to work. Yeah, that doesn't make any sense. If it's not literally happening there,
Starting point is 01:27:16 then you're not going to be able to regulate it. And in fact, what's happening is new leading-edge AI products from American companies now are actually not being released in Europe. So like the new Apple AI products, and I actually think the new OpenAI products as well, are just not even being released in Europe as a result. The UK, again, has chosen this middle form, which is they sort of made it illegal.
Starting point is 01:27:40 And so the last administration did this disastrous AI safety push, which was basically a massive red light saying, don't even bother to try to start AI startups in the UK. So it wasn't like as overt a ban as the EU put on, but it was this massive signal that basically says this is not a safe place to do it. And I think the UK is kind of stuck halfway in the middle right now, and it kind of has to decide which way it wants to go. Yeah.
Starting point is 01:28:02 I, uh, I don't know, man. I really hope that we can bring ourselves back around, maybe not like old school colonialist glory. I'm not harkening back to the fucking East India Trading Company, but something, I don't know. You're right, the identity crisis and the direction is a big part. But then you layer on top,
Starting point is 01:28:21 I'm sure you've been seeing Dominic Cummings going full fucking scorched earth recently. He came on the show about six months ago, and everything that he said... Even Rory Stewart, someone that you think of as, you know, being way less of a firebrand than Dominic Cummings, has this really visceral description of the type of people that inhabit the halls of Whitehall and Westminster. He describes the way that they smell and the way that they talk and the things that concern them. And you realize, you know, the governmental efficiency thing needs to be scaled out beyond just the US. This is the sort of thing that really needs to be probably put
Starting point is 01:28:58 everywhere. Yeah. I'm glad you brought up Dominic. He's a good friend of mine. So Dominic has this great line. He says the people aren't running the system, the system is running the people, right? And he uses that to describe the UK government,
Starting point is 01:29:10 but you could also describe the US government, and you could also describe the EU, kind of the same way. Which is, in a sense, what happens when you get face to face with people in government is you realize there are actually a significant number of very public-spirited, very determined people who actually wanna do good things. And then there's people who are not like that, and they're just there
Starting point is 01:29:28 for the job or whatever. But like there are people who legitimately are working hard and trying to figure things out. And what happens generally, if you know them, is they get ground down, you know, they end up disappointed, and then they end up either becoming disillusioned or leaving and going into the private sector. And so I think Dominic's explanation is a good one, which is the system is running the people, not the people running the system. The bad news with that is, it would be easier if the solution was just to swap out the people; in a sense, that would be an easy answer to the question. Whether you could do that or not is an open question, but at least you would know what to do to reform the system, as Dominic talks about and others talk about.
Starting point is 01:30:03 And actually, Starmer's now talking about this too, right? But, okay, how do we actually redo the system is of course much harder. The other lens on this that I think about a lot is Curtis Yarvin, who's also a good friend of mine. And the way he describes the American system that's running the people,
Starting point is 01:30:22 the way he describes it is, we are living under FDR's personal monarchy, 80 years later, without FDR. And the reason he describes it that way, he says, look, before FDR, the federal government was actually very small. Tax rates were super low. The federal government didn't do very much. FDR dramatically, orders of magnitude, increased the size and scope of the federal government. He did that for two reasons. One was the New Deal, and then the other was World War II. And so the federal government that Franklin Roosevelt left behind in 1945 when he passed away was the government that he had built, which he
Starting point is 01:30:58 had run the entire time from 1933 to 1945 himself, which he had staffed himself and overseen himself and everything. And he built this, basically, this giant structure. And as Curtis basically says, as long as you had FDR running that, it could run really well. And you know, we won World War II and saved the free world, and it worked and pulled the US out of the Depression,
Starting point is 01:31:16 like the whole thing worked and it was great. But if you let an organization of that size and scope run without its founder CEO for 80 years, you end up with what we have now, which is just like basically an out of control bureaucracy, like an out of control system in which people can't even make positive change even if they want to.
Starting point is 01:31:31 And again, that's why you could have in the US, you could have reason for optimism, which is, okay, what do you need? Well, you need another FDR like figure, but in reverse, right? You need somebody and a team of people around them who's actually willing to come in and like take the thing by the throat and make the changes.
Starting point is 01:31:47 By the way, make the changes that FDR would probably make if he were here to make them, but he's not, right? And so somebody else has to step up and do that. It has to be a president because nobody else conceivably has the power to do that. But we will see how much this president can do. But like that's a lot of what this administration plans to do.
Starting point is 01:32:04 In the UK, look, the UK government maybe grew up in parallel with the US government. So maybe FDR is also partially responsible for it by inspiring, you know, the general modern Western style of governance, but also, of course, the English system grew up for many centuries before that. You know, it may be time for an FDR-style transformational leader to come in and like really get a grip on it. Yeah, the path dependence.
Starting point is 01:32:25 You know, we always talk about the wonderful heritage of the UK, but you don't necessarily think about the Einstein effect that has got you beholden to all of this bullshit that came before. I learned last year that the distance between the two front benches in the House of Commons is a broadsword held out at arm's length from either side, which just tells you kind of everything that you need to know about what our current government and political institutions in the UK are inheriting. Like, if you've got a country that's a thousand years old, a couple of thousand years old, you're like, bloody hell, there's a really lovely history there. And you say, yeah, but what else comes along for the ride? And sometimes it's stuff you want to get rid of. Mark, let's bring this one home, dude.
Starting point is 01:33:07 It's been a long time coming. You're so great. I'd love to bring you back on at some point soon. Where should people go if they want to keep up to date with all of the stuff that you're doing? Oh, go to X, go to X, uh, pmarca, P-M-A-R-C-A, on X.com. And also on Substack. Mark, I appreciate you.
Starting point is 01:33:22 Thank you, mate. Good. Fantastic. Thank you, mate. Good, fantastic. Thank you, Chris.
