a16z Podcast - Ben Horowitz: What Founders Must Know About AI and Crypto

Episode Date: July 11, 2025

This week on the a16z Podcast, we're sharing a feed drop from Impact Theory with Tom Bilyeu, featuring a wide-ranging conversation with a16z cofounder Ben Horowitz.

Artificial intelligence isn't just a tool — it's a tectonic shift. In this episode, Ben joins Tom to break down what AI really is (and isn't), where it's taking us, and why it matters. They dive into the historical parallels, the looming policy battles, and how innovation cycles have always created — not destroyed — opportunity. From the future of work and education to the global AI race and the role of blockchain in preserving trust, Ben shares hard-won insights from decades at the forefront of technological disruption. It's a masterclass in long-term thinking for anyone building, investing, or navigating what's coming next.

Resources:
Listen to more episodes of Impact Theory with Tom Bilyeu: https://link.chtbl.com/impacttheory
Watch full conversations on YouTube: youtube.com/tombilyeu
Follow Tom on Instagram: @tombilyeu
Learn more about Impact Theory: impacttheory.com

Timecodes:
00:00 Introduction to Impact Theory with Ben Horowitz
01:12 The Disruptive Power of AI
02:01 Understanding AI and Its Implications
04:19 The Future of Jobs in an AI-Driven World
06:52 Human Intelligence vs. Artificial Intelligence
10:31 The Role of AI in Society
21:41 AI and the Future of Work
35:07 The AI Race: US vs. China
41:25 The Importance of Blockchain in an AI World
44:26 Government Regulation and Blockchain
45:16 The Need for Stablecoins
45:45 Energy Challenges and AI
49:53 Market Structure Bill and Token Regulation
53:51 Blockchain's Trust and Adoption
01:04:17 Elon Musk's Government Involvement
01:12:03 Historical Figures and Modern Parallels
01:18:41 AI and Creativity in Business
01:21:29 Conclusion and Final Thoughts

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures

Transcript
Starting point is 00:00:00 Today, we're doing something a little different. We're dropping an episode from Impact Theory, a show hosted by Tom Bilyeu, featuring a conversation with Ben Horowitz, co-founder of Andreessen Horowitz. Ben rarely does interviews like this, and in this one, he goes deep on AI, on power, on the future of work, and what it really means to be a human in a world of intelligent machines. He breaks down why AI is not life, not consciousness, but something else entirely; why blockchain is critical to preserving trust and truth in the age of deepfakes; why distribution may matter more than code
Starting point is 00:00:32 and why history tells us this isn't the end of jobs but the beginning of something new. Let's get into it. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third party and may include paid promotional advertisements,
Starting point is 00:00:51 other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
Starting point is 00:01:11 Revolutions don't always come with banners and protests. Sometimes the only shots fired are snippets of code. This is one of those moments. AI is the most disruptive force in history, and it's no longer a distant possibility. It is here right now, and it's already changing the foundations of power and the economy.
Starting point is 00:01:33 Few people have been as influential in shaping the direction of AI as mega-investor Ben Horowitz. A pioneer in Silicon Valley, he spent decades at the center of every major technological disruption, including standing up to the Biden administration's attempts to limit and control AI. In today's episode, he lays out where AI is really taking us, the forces that will define the next decade, and how to position yourself before it's too late.
Starting point is 00:02:00 You are in an arena making investments, thinking about some of the things that I think are the most consequential in the world today as it relates to innovation. But along those lines, you and Marc Andreessen are all in on AI, but how do we make sure that it benefits everyone instead of making humans obsolete? To begin with, we have to just realize, you know, what AI is, because I think that, because we called it artificial intelligence... you know, our whole industry of technology has a naming problem, in that we started, you know, by calling computer science, computer science, which everybody thought, oh, that's just like computers. It's like the science of a machine, as opposed to information theory and what it really was. And then in Web3 world, which you're familiar with, we called it cryptocurrency, which to normal people means secret money. But that's not... That's a good point. And then I think with artificial intelligence, I think that's also, like, a bad name in a lot of ways, in that, you know, look, the people who work in the field
Starting point is 00:03:05 call what they're building models. And I think that's a really accurate description in the sense that what we're doing is we're kind of modeling something we've always done, which is we're trying to model the world in a way that enables us to predict it. And then, you know, we've built much more sophisticated models with this technology than we could. You know, in the old days we had E equals MC squared, which is like amazing, but a relatively simple model. Now we have models with like 600 billion variables and this kind of thing. And we can model like what's the next word that I should say and that kind of thing. So that's amazing and powerful. But I would say like we need to distinguish the fact that it's a model that is directed by us to tell us things
Starting point is 00:04:00 about the world and do things on our behalf. But it's not a... it's not life. It doesn't have a free will in these kinds of things. So I think by default, you know, we are the master and it is the servant, as opposed to vice versa. The question, though, that you're getting at is, okay, how do we not get obsoleted? Like, why do we need us if we've got these things that can do all the jobs that we currently do? And I think, you know, we've gone through that in the past, and it's been interesting, right? So in, I think, 1750, over 90% of the jobs in the country were agricultural. And, you know, there was a huge fight, and a group called the Luddites that fought the plow
Starting point is 00:04:48 and, you know, some of these newfangled inventions that eventually, by the way, eliminated 97% of the jobs that were there. But I think that most people would say, gee, the life I have now is better than the life that I would have had on the farm, where all I did was farm and nothing else in life. But, you know, if you want to farm still, you can. That is an option, but most people don't take that option. So the jobs that we have now, you know, a lot of them will go away, but we'll likely have new jobs. I mean, humans are pretty good at figuring out new things to do and new things to pursue and so forth. And, you know, like,
Starting point is 00:05:40 including, like, going to Mars and that kind of thing, which obviously isn't a thing today but could very well be a thing tomorrow. So I think that, you know, we have to stay creative and keep dreaming about, like, a better future and how to kind of improve things for people. But I think that, you know, particularly for kind of the people in the world that are on the struggle bus, who are living on a dollar a day or, you know, kind of subject to all kinds of diseases and so forth, life is going to get, like, radically better for them. When I look at the plow example and the Luddites fighting against it, I think you'll see the same thing with AI. You're going to get people that just completely reject it, refuse to engage in
Starting point is 00:06:20 anything for sure. But when I look at AI, what I worry about is that there will be no refuge to go to, meaning if you realize, oh, I can't plow as well as a plow or a tractor or now a combine, there's still a lot of other things that technology can't do better than me. Do you think there's an upper bound to artificial intelligence, bad name or not? Or do you think that it keeps going, and it literally becomes better than us once it's embodied in robotics at everything. We are kind of limited by the new ideas that we have. And artificial intelligence is really, by the way, artificial human intelligence, meaning, right, humans looked at the world. Humans figured out what it was, you know, described it, came up with these concept like trees
Starting point is 00:07:11 and, you know, air and all this stuff. It's not necessarily. necessarily real, just how we decided to structure the world. And AI has learned our structure. Like, they've learned language, human language, which is a version of the universe that is not an accurate version of the universe. It's just our version of the universe. So it's the artificial. But you're going to have to go deep on that. I know my audience is going to be like, air seems pretty real when you're underwater. What do you mean that trees and air are not necessarily real? It's a construction that we made. You know, we decided it is, it is, literally the way humans have interpreted the world in order for humans to navigate it.
Starting point is 00:07:52 And, you know, as is language, right? Language isn't the universe as it is. Like if completely objective, if you had an objective look at, you know, the atoms and so forth and how they were arranged and whatnot, you probably, you know, those descriptions are lacking in a lot of ways. They're not completely accurate. They don't, you know, they certainly don't kind of predict everything about how the world works. And so what machines have learned, or like what artificial intelligence is, is an understanding of our knowledge, of the human knowledge.
Starting point is 00:08:29 So it's taken in our knowledge of the universe and then it is kind of can refine that and it can work on that and it can derive things from our knowledge, our axiom set. but it isn't actually observing the world at this point and figuring out new stuff. So, you know, at the very least, you know, humans still have to discover new principles of the universe or kind of interpret it in a different way or the machines have to somehow observe directly the world, which they're not yet doing. And so that, you know, that's a pretty big role, I would say, but then, you know, in addition, you know, we direct the world. like Star Trek is actually a pretty good metaphor for that. Like the Star Trek computer was pretty badass, but, you know, the people in Star Trek were still
Starting point is 00:09:20 like flying around the universe, discovering new things about it. You know, they were still much to do. And I think that it's always a little kind of difficult to figure out what the new jobs that get created are. And we've had intelligence for a while, right? like we've had machines that could do math way better than us. And, you know, I mean, I can remember when I was in junior high school, our junior high put on a play about like how bad it was that there were calculators
Starting point is 00:09:52 because nobody would know how to do arithmetic and then all the calculators would break and then we'd be stuck. We'd be trying to like fly around the universe and rockets and then, but we wouldn't be able to do math and the calculators would be broke and we'd be screwed. So there is always that fear. and, you know, we've had computers that can play games better than us. We currently have computers that can drive better than us and so forth. So we have a lot of intelligence out there.
Starting point is 00:10:17 But it hasn't, you know, created like this super dystopia, you know, in any degree. It's actually made things better, you know, everywhere it's appeared. So I would expect that to continue. What do you think, though, is the limiting function? So when I look at AI, I always say, unless we run in, into an upper bound where the computation just can't allow the intelligence to keep progressing. It seems like it will become not only generalized human intelligence and thus they'll be able to do everything that we can do, it will become embodied as robotics. And if I ran the math
Starting point is 00:10:54 on this one, Einstein is roughly 2.4 times smarter than someone who is definitionally a moron. And the gap just between those two is so dramatic. The army won't even, even draft somebody that is a moron at, you know, whatever, 81 IQ or whatever it is, because it's, they create more problems than they solve even just by being, you know, bullet fodder. So do you think there is something that's going to cause that upper bound? Or you have a belief about the nature of intelligence that will keep AI subservient to us? the smartest people don't rule the world, you know, Einstein wasn't in charge. And, you know, many of us are, like, ruled by our cats. And so, like, power and intelligence don't necessarily go together, particularly when the intelligence has no free will. Or has no desire to do free will. It doesn't have will. You know, it is kind of a model that's computing things. I think also, you know, the whole general intelligence. intelligence thing is interesting in that, you know, Waymo's got a super smart AI that can drive a car,
Starting point is 00:12:12 but that AI that drives a car doesn't know English and, you know, isn't, you know, particularly good at other tasks, you know, currently. And, and then the chat GPT can't drive a car. And so that's, you know, how much things generally. particularly, and if you look at, well, why is that? A lot of it actually has to do with the long tail of human behavior, where humans, you know, the distribution of human behavior is, it's fractal, it's mantle broadian or whatever. It's not evenly distributed at all.
Starting point is 00:12:55 And so, you know, an AI that kind of captures all that turns out to be, you know, we're not so much on the track. We're more on the track for kind of really great reasoning over kind of a set of axioms that we came up with, you know, in say math or physics, but not so much, you know, kind of general human intelligence, which is, you know, being able to navigate other humans and the world in a way that is productive,
Starting point is 00:13:32 for us is kind of a, it's a little bit of a different dimension of things. You know, yeah, you can compare the math capabilities or the go-playing capabilities or the driving capabilities or the IQ test capabilities of a computer. But that's not really a human. I think a human is kind of different in a fairly fundamental way. So what we end up doing, I think it's going to be different than what we're doing today, just like what we're doing today is very different than what we did 100 years ago.
Starting point is 00:14:06 But, you know, the not having a need for us, I think that, you know, these AIs are tools for us to basically navigate the world and help us solve problems and do things like, you know, everything from prevent pandemics to deal with climate change to that sort of thing, to not kill each other driving cars. which we do a lot of, you know, hopefully it doesn't, you know,
Starting point is 00:14:36 create more wars, hopefully it creates less wars, but we'll see. What I know about the human brain maybe tricking me into painting a vision of the future that isn't going to come true, let me put words in your mouth and you tell me if they fit appropriately. What I hear you saying is something akin to the way that we're approaching artificial intelligence right now, let's round it to large language models. that is going to hit an upper bound where it's not able to have insights that a human won't already have, that they are trapped inside of the box that we have created,
Starting point is 00:15:12 what you're calling the axioms by which we navigate the world. They get trapped inside that box, and thusly, we'll never be able to look at the world and go, I'm not going to predict the next frame. I'm going to render the next frame based on what I know about physics. And so water reacts this way in an earthbound gravity system. And so it's going to splash like this, and it understands liquid dynamics, et cetera, et cetera. So is that accurate? Are you saying that it is trapped inside of our box and we'll never have?
Starting point is 00:15:39 It hasn't demonstrated that capability yet. So, like, you know, it hasn't like walked up to a rock and said, this is a rock, right? We labeled it a rock because that's our structure. But, you know, our rock isn't probably the more intelligent being, what I call it, something else. Or maybe the rock is irrelevant, you know, to how you. actually can navigate the world safely, and kind of figuring those things out or kind of adopting to them is just not something that, you know, it's trained on our rendition of the universe, in our kind of literally like the way we have described it using language that we
Starting point is 00:16:22 invented. And so it is constrained a bit in that way currently. You know, that doesn't mean it's not, like, a massively useful tool that can do things. And by the way, it can derive new rules from the old rules that we've given it, for sure. But, you know, like, I think it's a bit of a jump to go, you know, it's going to replace us entirely when the whole discovery process is something that we do that it doesn't do yet. Okay. The way that the human mind is architected is you have competing regions of the brain. Like, if you cut the corpus callosum, the part that connects the left and the right hemisphere, you can get two distinct personalities, one that is atheist, for instance, and one that believes deeply in God. And they'll argue back and forth. I mean, this is in the same human brain. So that tells me that what you have is basically regions of the brain that get good at a thing, and then they end up coming together to collaborate. And that is, sort of, human intelligence.
Starting point is 00:17:31 And I've heard you talk about there's something like 200 computers inside of a single car. So if we already know that you can daisy chain all of these, like it's a very deep knowledge about one thing. But as you daisy chain them, the intelligence gets what I'll call more generalized. You don't see that as a flywheel that is going to keep going. You know, what we can compute will get better and better and better. But having said that, you know, that doesn't say that like humans, one, you know, humans built the machines, plug them in, give them the batteries, all these kinds of things.
Starting point is 00:18:15 And, you know, and they've been created to fulfill our purposes. So, you know, what it means to be a human will probably will change, like it has been changing. and kind of how humans live their life will change. But humans still find things to do. I mean, it's kind of like, you know, like a Cheetah's been able to run faster than a human forever, but we never watched Cheetah's race. We only watch humans race each other.
Starting point is 00:18:42 You know, computers have played chess better than humans for a long time, but nobody watches computers play chess anymore. They watch humans play humans, and chess is more popular than it's ever been. And so I think we have like a keen interest in each other and how that's going to work. And these will be kind of tools to enhance that whole experience for us. But I think it's, you know, like a world of just machines seems like that seems like really unlikely.
Starting point is 00:19:11 So you've got people like Elon Musk, Sam Altman, who have both expressed deep concerns about how AI may in fact make us obsolete. Elon has likened, he's certainly become fatalistic, but he gave a rant that I absolutely love that is AI is a demon summoning circle and you're calling forward this demon that you were just convinced you're going to be able to control and he certainly is not so sure. And at one point, and again, I'm fully aware
Starting point is 00:19:42 that he's on his fatalist arc and he's just moving forward and he's building as fast as he can. It's interesting that both of them, despite saying these things, are building AI as fast as, like they're literally, literally in a race with each other to see who can build it faster.
Starting point is 00:19:58 Who can, who can send the demon faster that they're warning about? What do you take away from that? Is it just regulatory capture on both of their parts? Is Elon being sincere, not that I need you to mind read him, but like, what do you take away in the fact that they've both warned against it and they're both deploying it as fast as I can? Yeah, yeah, it seems fairly contradictory. Like, I think there is, like, I won't question either of their sincerity at some degree, but I do think there are many reasons to warn about it.
Starting point is 00:20:33 But, like, I also think that, you know, any kind of new super powerful technology, you know, in a way they're right to kind of warn about, like, okay, this thing, if we, you know, if we don't think about some of the implications of it could get dangerous. And I think that's a good thing. Like every technology we've ever had from fire to, you know, from fire to automobiles to nuclear to AI has got, the Internet has got downsides to it. They all have downsides. And the more powerful, the more kind of, you know, kind of intriguing the downside. And, you know, maybe like, you know, without the Internet, we probably would have never gotten to AI. and so maybe that was the downside of the internet that it led to AI or something like that, you could argue.
Starting point is 00:21:25 But I think generally we would take every technology we've invented and keep it because, you know, NetNet, they've been positive for humanity and for the world. And that's generally, and that's why I think they're building it so fast because I think they know that. All right, so anybody with a 17-year-old right now is thinking, oh, my, where do I point my kid? what do I tell them to go study that's future proof?
Starting point is 00:21:50 What can we learn about the way you guys are investing at Andreessen Horowitz that would give somebody an inclination of what you think a 17-year-old should be focused on now? Yeah, you know, it's really interesting. I think one of the things, what we're saying in the kind of smartest young people that come out is they spend a lot of time with AI learning everything they possibly can. So I think you want to get very good at, like, high curiosity and then learning, you know, you have available to you all of human knowledge in something that will talk to you. And that's, you know, that's an incredible opportunity. And I think that anything you want to do in the world to make the world better, you now have the tools as an individual to do that in a way that, you know, if you look at kind of what Thomas Edison had to do in creating GE.
Starting point is 00:22:45 and like what that took and so forth, you know, it was a way higher bar to have an impact. Whereas now, I think, you know, you can very quickly, you know, build something or do something that, you know, just pick a problem. It's not like, you know, sometimes in this AI conversation, the thing that we ignore is like, well, what are the problems that we have in the world? well, we still have cancer and diabetes and sickle cell and every disease, and we still have the threat of pandemics, and we still have climate change, and we still have, you know, lots of people who are starving to death, and we still have malaria. And so, like, pick a problem you want to solve, and now, you know,
Starting point is 00:23:34 you have a huge helping hand in doing that, that nobody in the history of the planet has ever had before. for us. So I think there's really great opportunities along those lines. So that would be my, you know, my best advice, I think, is to get really good with that. And look, I think, I think a lot of things that we've learned or that have been valuable skills traditionally are going to change. So you really, you know, again, want to be able to learn how to do anything. And I think that's probably going to be key. When I look at the things you were just talking about, that feels right for people that have
Starting point is 00:24:17 the inclination, that have the cognitive horsepower to go and say, okay, I'm going to leverage AI to extend my capabilities, to tackle the biggest problems in the world. Certainly right now in this moment, that is the thrilling reality that people should focus on. But then I contrast that with the deaths of despair among people. largely young men, we have this problem in, call it, middle America, where manufacturing jobs have gone away. So for that normal, just sort of everyday person, I want to have a trade, I want to go out into the world and get something done, is AI going to be useful to them,
Starting point is 00:24:56 or are they going to get replaced by robotics? The truth of it is, is there's only one robot supply chain in the world, and that's in China. And, you know, so like we all need a robot. supply chain, we need to manufacture that. So I think there's going to be like a real manufacturing opportunity coming up to, and it'll be a different kind of manufacturing. Certainly more will be automated and so forth, but there will be a lot of things to learn in that field that I think will be super interesting and, you know, likely very, very good job. So ironically, I would say like going into manufacturing. now as a, as a young man and trying to, you know, kind of figure out what that is and get engaged
Starting point is 00:25:44 in it will probably lead to, you know, quite a good career, you know, maybe in creating factories have become like insanely valuable and kind of national, and strategic to the national interests as well. That makes sense. So again, at the level of the guy smart enough to build the facility, yes. And I recently saw a video of the grocery store of the future where it is a. a huge grid inside of a giant facility and there's just like these bots that look kind of like small shopping carts and they're just grid patterning across all the items snatching up whatever you order. So you order online, these things grab all that stuff and then they send it off to you.
Starting point is 00:26:25 So for the person that's savvy enough to build that facility, yes, tremendous. But what I think I hear you saying and correct me if I'm wrong is that, okay, there are two opportunities here. The opportunity one is if you're the kind of person I can leverage AI to build that facility, massive opportunity. If you're the kind of person that would traditionally work at that factory, something new is coming. We know that because looking back at history, all these technologies unleash things that we can't yet see. And so I have faith in the, we can't yet see it, but it is coming. Yeah. No, for sure. Like, I mean, you know what like the biggest in-demand job in the world is right now? Data labelers. And like, data labeling was,
Starting point is 00:27:06 in a job not long ago. But if you talk to... I've even heard of this. What is data labeling? Yeah. So it's what Alexander's what scale AI does. You know, they pay armies and armies and armies of people to label data. So say, hey, this is a plant or this is, you know, a fig or whatever it is for the AI to then
Starting point is 00:27:27 understand it. And then, you know, now with the, you know, with the kind of reinforcement learning coming back into play, you know, labeling, you know, that kind of supervised learning is still like very, very, very important. And I think that, you know, right now, like, he's got unlimited hiring demand, which is, you know, ironic or scale AI to have unlimited need for humans. And I think, you know, in manufacturing, there are going to be jobs like that and there will be the kind of physical, Well, when you go into these robot, like the software companies that are doing robotics, they have people managing the robots, right?
Starting point is 00:28:15 Like they're training the robots. Humans train robots to do all kinds of things. And it turns out that like folding clothes doesn't necessarily generalize to making eggs. They're like super different for robots. And so you need, you know, these robots trained in all these kinds of fields and so forth. So I think there's, you know, there's a whole new class of jobs that are a little bit hard to anticipate, you know, in advance. But I think at least for the next 10 years, I think the number of new jobs related to making these machines smarter is going to increase a lot. And then after that, you know, like, I think there will be, there just tend to be, like, throughout history, so many needs for new things that we never anticipated.
Starting point is 00:29:11 Like, well, I mean, you know, one of my favorite examples is, okay, computers, computers are going to kill the typesetting business. And they did. Everybody knew that. Like, that was coming. nobody said oh and then there's going to be five million
Starting point is 00:29:28 graphic design jobs that come out of the PC like nobody not a person predicted that so it's really easy to figure out which jobs
Starting point is 00:29:37 are going to go away it's much more difficult to kind of figure out which jobs are going to come but like if you look at
Starting point is 00:29:43 the history of automation which is kind of automated away everything we did 100 years ago there's less unemployment now than there was
Starting point is 00:29:53 then And so you go, okay, and then, you know, like some of the employment will be much more, I think, enjoyable than the old employment as well, as it has been, you know, over time. And you always talk about manufacturing jobs going away, but the manufacturing jobs that have gone away have been the most mind-numbing. So I think, you know, things evolve in very, very unpredictable ways. And, you know, like I think the hope is that, you know, the world just gets much better. But I'm not so worried about kind of anticipating all the horror that's going to come in. I mean, I think the main reason we're making these things is, you know, the ways that they're making life better. And, you know, just like we finally figured out a way for everybody.
Starting point is 00:30:39 Like, we already have in our hands, everybody can get a great education. Like, that whole inequality of access to education is like literally gone. on right now, which is pretty amazing. I mean, it's certainly huge. Yeah, nothing that I ever thought I'd see. So hopefully things go well. The great irony, it's so crazy. I don't think anybody, anybody saw that coming.
Starting point is 00:31:09 It was always going to be, it's going to go for the drivers. It's going to go for all those hard, difficult, repetitive tasks. Yeah, it's been very fascinating to see what actually is in danger, like super creative jobs, very much in danger. But yeah, as you get down, I mean, look, I think I believe way more strongly than you do that robots are just going to get better and better and better and better. But that could be that I'm not as close to the problem as you are. Speaking of which, how the insights that you've had into AI, how are they informing the investments that you guys make? The theory is there's this one like super intelligent big brain that's going to do everything.
Starting point is 00:31:47 The reality on the ground is even with the state of the art models, They're all kind of good at slightly different things, right? Like, you know, Anthropic is, like, really good at code. And Grok is really good at, like, real-time data because they've got the Twitter stuff. And then, you know, Open AI has gotten, like, very, very good at reasoning. So with all of them, we're doing AGI,
Starting point is 00:32:11 but then they're all good at different stuff, which is, you know, from an investing standpoint, it's very good to know that. because of something like that's not winner take all, that becomes like super interesting. It also is interesting for what it means at the application layer because if the infrastructure products aren't winner take all, and then the other thing about the infrastructure products,
Starting point is 00:32:36 that's interesting is that they're not particularly sticky in the way that kind of Microsoft Windows was very sticky, right? It was sticky. you build an application on Windows, it doesn't run on other stuff. You've got to do a lot of work to move it to something else. So you get this network effect with developers.
Starting point is 00:32:56 Then you go, okay, well, how does that work with state-of-the-art models? Well, people build applications on these things, but guess what? Like, to move your application to Deepseek, you didn't have to change the line of code. They just literally took the opening AI, Python API, and like it runs on Deepseek now.
Starting point is 00:33:13 Tadda! So, you know, that kind of thing really impacts, you know, how you think about investing and, like, what is the value of having a lead in application, and then, you know, where's the moat going to come from? And, of course, AI is also getting, like, the one thing it is getting amazingly good at is writing code. And so then, you know, how much of a lead do you have in the code itself versus, you know, kind of the other traditional things? And when I started in the industry, the salespeople were in charge. They were kind of like the big, there's a great TV show called Halt and Catch Fire. And if you watch it, like the thing that's really stunning, if you're, you know, kind of coming from the 2010's 2020s world is, why are the salespeople so powerful? But they were the most powerful in those days.
Starting point is 00:34:10 and it was because, you know, distribution was the most difficult thing. And, you know, I think distribution is going to get very, very important again because maintaining a technological lead is a lot harder, you know, when the machine is writing the code and writing it very fast. Although it's not, you know, it's not all the way where it can, you know, build like super complex systems, but there's, you know, a bunch of things out now, you know, Replit's got a great product for it. There's a company called Loveable that's got one out of Sweden that just builds you an app.
Starting point is 00:34:47 Like if you need an app for something and just say, say, hey, build me this app. And there it is. Yeah, another thing in Cursor that you can select what model you want to use for whatever thing that you're about to generate. So the ability to go, oh, I want 10 of these things. I'm going to use this one for this kind of code, this one for that kind of code. It's really fascinating. Now, the Biden administration was super hostile towards tech. When you look at what's going on now with the changes in regulatory, what do you think about the race between us and China?
Starting point is 00:35:22 Were we headed down a dark path where if that administration had stayed with that, like we're going to have one or two companies, we're going to control them, that's going to be that. Is it possible we could have lost that race? Is that race a figment of my imagination? Is that real? I think that there's kind of multiple layers to the AI race with China. And then, you know, the Biden administration was kind of hostile in many ways, but all for kind of a central reason, I think. So, you know, an AI in particular, you know, when we met, and I should be very specific.
Starting point is 00:36:02 So it wasn't, we did meet with Jake Sullivan, but he was very good about it. We met with Gina Raimondo. she was very good about it. But we met with the kind of White House, and their, you know, their position was super kind of, I would say, ill-informed. So they basically were, they walked in with this idea that, like, we've got a three-year lead on China, and we have to protect that lead. and there's no and therefore we need to shut down open source and that doesn't matter to you guys and startups because startups can't participate in AI anyway
Starting point is 00:36:45 because they don't have enough money and the only companies that are going to do AI are going to be kind of ironically the two startups anthropic and open AI that are out and then the big companies, Google and so forth and Microsoft. And so we can put a huge regulatory barrier on them because they have the money and the people to deal with it. And then that'll be. And, you know, in their minds, I think they actually believe that that would be how we would win. But of course, you know, in retrospect,
Starting point is 00:37:19 that makes no sense. And it kind of, it damages, you know, if you look at China and what China's great at, then this goes to the next thing. So there's, how good is your AI and then how well is it integrated into your military and the way the government works and so forth. And I think that China being a top-down society, their strength is, you know, that whatever AI they have, they're going to integrate into, it's already all the companies are highly integrated into the government. So, you know, they're going to be able to deploy that and we're going to see it in action with their military very fast. I think that the advantage of the U.S. is like we're not a top-down society. We're like a wild, messy society, but it means that all of our smart people can participate in the field.
Starting point is 00:38:09 And look, there's more to AI than just the big models. As you said, like, you know, how important is cursor? It's really important if you're building stuff. So like, oh, you want to go build, you know, the next whatever thing that the CIA needs or the NSA needs or this and that, like you're building that with Cursor, you're using a state of the art model, but like if you had eliminated, if the Biden White House had gotten their way,
Starting point is 00:38:36 they'd eliminate things like Cursa, they'd eliminate startups being able to do anything in AI. And so the advantages that we have, that we don't just have a model. We've got all this other stuff that goes with it and we've got, you know, and then we have new ideas on, models, you know, with new algorithms and this and that. And that's what the U.S. is great at.
Starting point is 00:38:56 And I think, you know, what China is great at is, you know, by the way, they're very good at math. People are good and AI is math. So they're going to, their models are good. They also have a data advantage on us where they have access to the Chinese internet. They have access to copywritten material, which they do not have the same difference for it that we do in the U.S. And so they're able to kind of get to, you know, if you use Deepseek, you go, wow, deep seek really is a great writer compared to a lot of the U.S. models. Why is that? Well, they train on a bigger dataset than we do.
Starting point is 00:39:32 And that's amazing. So it really, you know, I think what we want is we want to have kind of world-class, first-class AI in the U.S. And I think of it less as, you know, is it ahead of China? Is it slightly ahead of China? And I think that model, you know, what we've seen with our own state-of-the-art models as a lead are very shallow. And I think that'll continue as long as we're able and allowed to build AI.
Starting point is 00:40:00 And then economically, what you'd like is you'd like, you know, to have a vibrant AI ecosystem coming out of the U.S. So other countries, you know, who aren't state-of-the-art with this stuff, adopt our technology. And, you know, we continue to be strong economically. as opposed to everything goes to China. And that was like a big, big risk with the Biden administration, I think. And, you know, which was, you know,
Starting point is 00:40:26 what they were doing on AI was, you know, tough. I would say what they were doing on kind of fintech and crypto was even tougher in that they were just trying to get rid of the industry and its entirety. You know, with AI, they were trying to, I would say they were extremely arrogant in what they thought their ability,
Starting point is 00:40:46 was to predict the future. You know, Mark and I were in there, you know, like our job is to predict it. Like, this is our job to invest in the future to predict the future. And they were saying things that, like, we're so arrogant that we would never even think to say them even if we thought them because we're like, we know that we don't know the future like that. You know, it's just unknowable. There's too many moving parts.
Starting point is 00:41:11 I mean, these things are really complicated. All right. Well, speaking of the future, fully accepting that it is very opaque and very difficult to see, what would you say is the most controversial view that you hold about the future? If we don't get to world class in crypto, we're going to be, you know, AI really has the potential to wreck society. And what I mean by that is if you think about what is obviously clearly going to happen in an AI world is one, We're not going to be able to tell the difference between a human and a robot.
Starting point is 00:41:48 Two, we're not going to know what's real or fake. Three, the level of security attacks on big central data repositories is going to get so good that everybody's data is going to be out there. And, you know, there is no safe haven for a consumer. And then finally, you know, for these agents and these bots actually be useful, they actually need to be able to use money and pay for stuff and get paid for stuff. Like so, and if you think about all those problems, those are problems that are by far best solved by kind of blockchain technology. So, one, we absolutely need a public key infrastructure such that every citizen has their own wallet with their own data, with their own information.
Starting point is 00:42:41 and if you need to get credit or prove you're a citizen or whatever, you can do that with the zero knowledge proof. You don't have to hand over your social security number of your bank account information and all this kind of thing because the AI will get it. So you really need your own keys and your own data and there can't be these gigantic, you know, massive honeypots of information that people can go after. I think that with deep fakes, if you think about, okay, we're going to have to be able to whitelist things.
Starting point is 00:43:17 We're going to have to be able to say what's real. But who keeps track of what's true then? Is it the government? Oh, please, Jesus now. Everybody trusts Trump now. You know, everybody trusts Biden. Is it going to be Google? We trust those guys?
Starting point is 00:43:31 Or is it going to be the game theoretic mathematical properties of the blockchain that can hold that? And so I think that, you know, it's essential that we regenerate our kind of blockchain crypto development in the U.S., and we get very serious about it. And, you know, like if the government were to do something, I think it should be to start to require these information distribution networks, these social networks, to have a way to, you know, verifiably prove your human, you know, prove where a piece of data came from and so forth. And I think that, you know, we have to, you know, have banks start accepting zero knowledge proofs and, you know, and that be just the way the world works. We need a network architecture that is up to the challenge of, you know, these super intelligent agents that are running around. We were talking before we started rolling that you guys have an office in D.C. And part of what you do is advise on that. Like what does the infrastructure changes?
Starting point is 00:44:35 what do they need to look like? What are a small handful of things that you guys are really pushing to see the government adopt to modernize the way that the whole bureaucracy works? Yeah, so there's a few things. And one of the things is because, you know, blockchain technology involves money. We do need, it's not like we don't need any regulation. We do need regulation. And there are kind of very specific things that we're working with the administration
Starting point is 00:45:04 to make sure are done in a way that kind of creates a great environment for everybody. What do you guys hoping will get blocked out, for instance? Is that what you're about to cover? Yeah, I mean, so like one of the first thing you need is, you know, we do need electronic money, you know, in the form of stable coin, so actual currency. But we need that to not, like it's very bad of one of those collapses. because then, like, the whole trust and the system breaks down and so forth.
Starting point is 00:45:38 Well, why do we need this kind of money, this kind of Internet native money? Well, I'll give you an example. So we have a company called Daylight Energy, and what they do is, so we're going to run into a big energy problem with AI that I think most people probably listening to the snow about where AI consumes a massive amount of energy, you know, much more than Bitcoin ever did, by the way, which everybody would pull up. arms about. And, you know, so much so that, like, you can't really even get it out of the power grid. And I think Trump has been smart about this saying, hey, you probably need to build
Starting point is 00:46:13 a kind of power next to your data center because we can't be giving it to you from the central thing. But beyond that, I think that, you know, kind of individuals, you know, now have Tesla kind of solar panels and power walls and these kinds of things. And when you have one of those, you sometimes have more energy than you need and sometimes have less. And wouldn't it be great if you could, you know, if there was a nice system that figured out who needed energy and who had energy and you could just trade. And there was some kind of contract that said,
Starting point is 00:46:51 okay, this is what you pay during peak. This is what you pay at different periods. And that contract, probably best done in a form of a smart contract, but a power wall is not a human, so it doesn't have a credit card. It can't get a credit card. doesn't have a bank account, I don't have a social security number, but it can trade crypto. It can trade stable coins.
Starting point is 00:47:10 And so we need that kind of currency to kind of facilitate all these kind of automated agreements and automated transfer of kind of wealth between entities in order to kind of solve these big problems that we have like energy. And so we need a stable coin bill that kind of. says, okay, look, we need these currencies to be backed one for one with U.S. dollars or whatever it is so that, you know, we can have a system that works and is trusted. Now, there's this really interesting side benefit to that, which is, if you look at Treasury auctions lightly, the demand for dollars is not good. You know, and a lot of that is, you know, the two biggest kind of lenders
Starting point is 00:48:02 is to the U.S. have been China and Japan, and, you know, China's backed off a lot, and Japan is backed off somewhat. And so the demand for dollars has gone down. We've done things to also dampen demand like, you know, when we sanctioned Russia and we seized the assets of the Russian Central Bank, you know, there were other countries that had, you know, other entities that had money there, and their money got frozen, and they can access it. And so that makes people more wary of holding everything in dollars. So we've done a lot to dampen that, which of course is, you know, fueled inflation in the same way that increasing supply fuels, inflation, killing demand, fuels inflation. So here we would have this new major source of demand for dollars. And then the dollars
Starting point is 00:48:48 would be much more useful because you can use them online as well. And machines can use them and so for us. So we really need... And sorry, really fast for people that are trying to track that, the reason that that would increase the demand for dollars is that they would, the stable coin would be backed one for one with debt. Is that the idea? Well, yeah, with, where you would basically have, you would have to have a dollar or write a, for every... You'd hold treasuries. You basically hold treasuries so that if somebody wanted to redeem their stable coins, they could. and then that way, you know, kind of the equivalent of the gold standard in the old days, you know, when dollars were trying to get credible, you know, we would need like dollars to be the gold
Starting point is 00:49:34 standard for the stable coin, you know, and probably we should never back off of that. Maybe we should never backed off of gold, but, you know, it's easier when it's dollars because we did kind of start to run out of gold a bit. So, yeah, so that's, you know, one thing. Then, secondly, there's a bill that went through the house known as the Market Structure Bill. It was technically called Fit 21. That's a very important, you know, whether it's exactly that or some form of that. Because, you know, when you talk about tokens, which are this kind of instrument that's very, very important in blockchain world because it's the way that this amazing kind of network of computers. gets paid for us. So, you know, who pays the people for running the computers? Well, that's
Starting point is 00:50:28 paid in the form of these tokens. But these tokens, which can be created on blockchain, have, they can be many things. So you can create a token that's a collectible. You know, you can create a token that is a digital property right, you know, that links to, you know, some piece of real estate or a piece of art or so forth. A token can be a, you know, a Pokemon card. A token could be a coupon. A token could be a security that represents a stock. It could be, you know, a dollar.
Starting point is 00:51:06 So which one is it? Is a very kind of important set of rules that doesn't exist. And this is one of the most insidious thing that the Biden administration did was basically say, well, everything is a security. Everything's a stock. Or, you know, like something with asymmetric information, which kind of basically undermines the whole power of the technology. And so it was basically a scheme for them to kind of get rid of the industry. But it was very, very dark, cynical way of legislating things.
Starting point is 00:51:46 And then they would make these, you know, fake claims about scams and so forth. But the market structure bill is very, very important in that way. And by the way, also, you know, another thing that was in the original market structure bill, which is important is, look, there are also scams. There are, you know, and we call it the casino. But, you know, like I can create some coin like the Haktua girl did, right? Like, and she creates a coin. She kind of lies about, you know, her holdings and says she's going to hold.
Starting point is 00:52:21 them, but then sells them, you know, after people buy it in a short time for it and so forth. And there's no, you know, part of the problem is there's no kind of rules around that. But in the kind of bill that passed the house, it said, like, you can create a token, but if you hold it, you can't trade it for four years. That kind of takes a lot of the ability to scam out of it and kind of forces people to do things that are their real utilities, or if it's a collectible, you know, if it is the Hock to a collectible, you know, it's got to be a real collectible where you don't just, you know, rug the users of it right away.
Starting point is 00:53:00 And so that, you know, that's right away. Yeah. It's okay to do it later, but. Well, but, you know, like in four years, it is what it is, right. Yeah, yeah, no, no, I'm just giving you a hard time. I just know how that's going to sound to people. Yeah. Yeah, no, thank you.
Starting point is 00:53:14 But, you know, so these kinds of things I think are going to be really important to to making the whole industry work. And so we're working, you know, on that, you know, trying to make it safe for everybody. But as I said, it's just such a critical technology in an AI world, you know, where if we don't have it, yes, it's just going to be like a very kind of problematic, you know, it's going to be, you know, it's cyberpunk. It's a, it's a not a, yeah, it's a high technology, uh, difficult society. Yeah.
Starting point is 00:53:51 Yeah. It was shocking to me the level of backlash that the blockchain Web3 community got. What do you think drives that? Is it just the perception that it was only scams and there's nothing real? Like, what was that all about? So there was multiple factors. So the first one is the one that hits all new technology where, oh, it's a toy. It doesn't do anything.
Starting point is 00:54:19 new, like the old way of doing things is better. And, you know, we saw that with social networking. We actually saw that with the Internet. I mean, I think Paul Krugman famously said, you know, never have more economic impact than a fax machine and so forth. So that's just kind of a normal thing that happens with new technologies as they start out looking not that important. And with crypto in particular, you know, one way to think about crypto blockchain is it's a new
Starting point is 00:54:46 kind of computer. and if you think about new kinds of computers, they're always kind of worse in every way, but maybe one, than the old computer. And so, you know, if you look at even like the iPhone, it was a bad phone, it had a horrible keyboard. You know, if you compared it to anything, it wasn't very powerful.
Starting point is 00:55:09 It had a little itty-bitty screen. But it had a feature that was pretty awesome, which you could put in your pocket and it had like a GPS in it and a camera in it. And so now you could build Instagram. You could build Uber, which you could not build with the PC. And you still can't build with the PC. And so that was, you know, enough. And then eventually, like, it started to add the other features
Starting point is 00:55:32 and it's an awfully powerful computer these days. If you look at blockchain, it's lower, it's more complicated to program. Like, there's a lot of issues with it. But it's got a new feature. which is trust, like, it can make promises. You can trust, like, when that code, you know, says there's only 21 million Bitcoin. You can absolutely count on that in a way that, like, you can't trust Google. You can't trust, you know, Facebook to say, like, oh, these are our privacy rules.
Starting point is 00:56:07 Like, you can't trust that at all. You can't trust the U.S. government to say they're not going to print any more money. Like, that's for sure. And so, you know, here. a computer that can make promises that you can absolutely count on. And you don't have to trust a company, you don't have to trust a country, you don't have to trust a lawyer, you just have to trust the game theoretic mathematical properties of the blockchain. And that's amazing. So now you can, you know, program property rights and money and law and all these kinds of things that you could never do before. And so I think that was, that's hard for normal people to understand. who aren't deep into technology. And so they get confused and they say, ah, it's nothing, blah, blah, blah.
Starting point is 00:56:51 And then, you know, I think the next wave was you had, look, you know, it was a very odd thing with the Biden administration because he wasn't really, I think it's come out now. He wasn't really the president. He wasn't really making any decisions. You couldn't even get a meeting with him if you were in his cabinet.
Starting point is 00:57:12 And in terms of domestic policy that was run by Elizabeth Warren. And then the second confusing thing is Elizabeth Warren is always calling, like, people fascists, her whole push with fintech and crypto was to make sure that she could kick people out of the banking system who are political enemies. And so in order to do that,
Starting point is 00:57:34 you have to outlaw new forms of financial technology because those would be kind of back doors or side doors or parallels to, the G-Sibs and the banking system, which she comprehensively, and I think this is coming out now, could kick people out of. And so when you use, when it's a full top-down hierarchy and you can use private companies to enforce your will, that is the way fascism works. And then the way she does it is she sells this fake story about, you know, it's funding terror. And it turns out like the USAID was funding the terrorist groups, but that's a different story. But, you know, it's doing all these
Starting point is 00:58:14 nefarious things, which, you know, was just a very unfair portrayal. And so then the whole industry got this reputation as scammy and this and that and the other. And then, of course, we had Sam Bank been freed, who didn't do us any favors by, you know, and this is another kind of, though, issue with what Elizabeth Warren did, is she blocked all legislation. And so the criminals were running free and the people doing things that should have been legal were getting terrorized by the government. When they should have been looking, they should have been looking at FTCS, they were looking at Coinbase, which was totally compliant, public company, you know, begging for
Starting point is 00:58:55 feedback, tell us what you want us to do. Yeah, yeah, that whole thing was crazy. So given that AI is putting us on a collision course with, I don't know who's real, I don't know what's fake, do you think that blockchain is about to have its day like in the next 12 to 24 months or is this still something that it's so embedded deep in the infrastructure it's going to take a long time to really have its I told you so moment yeah no I think it's I think it's within 24 months for sure I mean I think that there's enough you know there were like actual like if you like it kind of the last wave of blockchain there were real technological limitations that
Starting point is 00:59:36 made it you know I think we're slowing it down from getting order adoption. So, you know, very obvious usability challenges. The fees were really high. The blockchains were slow. So there were just a lot of use cases that you just couldn't do on them. I think that's changing very, very fast. Things are, you know, the chains are much faster. The Lager 2 stuff makes them, you know, very fast and cheap. You know, people are doing a lot on usability, you know, for wallets and these kinds of things. So I think we're getting pretty close and then I think the needs are very high. So if you think of something like WorldCoin, you know, to me the difference between that thing being very broadly adopted and where it is now where
Starting point is 01:00:26 it's, I think, half the people in Buenos Aires use it daily. So like it's very widely adopted where it's been legal. I think that, you know, if they are. able to get integrated into some of the big social platforms, then, you know, everybody needs proof of human. And, you know, like, it would make the experience online so much better if you knew who was human and who was not. And right now, like, you can't tell at all. And so, and that problem is going to get worse. And then the solution is really here. So I think it's going to start to take off. And you only need one or two big use cases to start getting the whole infrastructure deployed. And, you know, once the infrastructure is deployed, you know, I think we'll certainly rely on it.
Starting point is 01:01:19 And if you look at actually the curve of people who have active wallets and the curve of, like, internet adoption, they're pretty similar. You know, it's about, I think, blockchain's growing a little faster than the Internet did initially. And so I think, you know, we'll get to a place where it's certainly everybody in the U.S. will be on it, which, by the way, could be great from a government standpoint. You know, Elon has talked about putting, okay, all the government payments on the blockchain, which I think would be really, really good for transparency. Oh, my God. We'd never get into this weird situation now where, like, half the country wants to tear down all the government services and half of them wants to keep it because nobody knows what the hell the spending is. but that would be great. But, you know, beyond that, like, if you think about, well, why is there so much waste and fraud,
Starting point is 01:02:13 well, part of it is, you know, like, you know, I get taxed, I give my money to the IRS. The IRS gives it to Congress. They, you know, do whatever they do with it and so forth. And, well, how does it get to the people who need it? You know, that's a very lossy process, you know, and we don't even know who they are. And it's very, you know, one thing's we found out during COVID. but the government's not very good at sending people money. It's good at taking money.
Starting point is 01:02:39 It's not a good at sending of money, right? We lost like $400 billion trying to give people stimulus. Ridiculous. Yeah, crazy. Ridiculous. But, you know, like if everybody in the U.S. had an address on the blockchain, you could just tell me, okay, here's 10,000 people who need money. Please send them, you know, $5,000 each.
Starting point is 01:03:00 Well, probably that's too much money for me. but, you know, something like that, you know, whatever my tax bill is or whatever that portion of wealth redistribution is, that would be 100% zero loss. By the way, I'd feel a lot better about it because I know I'd be helping people. And, you know, look, maybe even somebody would go, hey, this is great, thank you. And we're like, maybe it would bring us, we wouldn't have this crazy class warfare because everybody would know, hey, we're all integrated, you know, like I'm helping you, you're helping me. And then, like, if you had that, then you'd fix the whole kind of democracy integrity problem because everybody could vote off that address. And by the way, everybody would have an address because everybody would want the money.
Starting point is 01:03:43 So what bigger incentive to register to vote than you, in order to get money, you have to have an address which registers you to vote? And so that kind of thing, I think, could get us to it. just like a much higher trust in our own institutions. Yeah, so, wow, speaking of that, I wanted to absolutely scream into the abyss when I heard that people couldn't retire from the government faster than the elevator would lower their records down into a mine.
Starting point is 01:04:15 I was like, what is happening? What do you take away from? Why does Elon want to do this? Why is he sleeping in hallways? why? Why is he doing this? Is it just to get government contracts and it's nefarious in the way that so many people think it is? Or is there something positive there?
Starting point is 01:04:34 What's the game? I think there's a couple of different things. So one is the strong thing is he truly believes that America is the best country in the world. He is an immigrant. And that it's not guaranteed to stay that way. and we have been in danger of losing it. And so the most important thing for him to do
Starting point is 01:05:01 in order for his companies to be relevant, in order for going to Mars to be relevant, in order for anything he wants to do in life to be relevant, is we've got to stabilize the U.S. government. I think that's the main thing driving him. So then you say, well, how did he get to the conclusion that, you know, the whole country is in jeopardy? And it was a pretty interesting thing to watch, because, right, in 2021, I think he was a Democrat.
Starting point is 01:05:28 And he was certainly pretty apolitical. And I was actually in a chat group with him when he got the idea, or posed the question, should he buy Twitter. And a lot of it stemmed from, you know, it started with the U.S. government just harassing him, which was a very odd thing, right? Like, I mean, I think you could very well argue he was our most productive citizen. He was our entire space program. He advanced the state of electric cars by 20 years. He's still something like 95% of the electric cars sold in the U.S. Jesus. You know, he's done the things with Neuralink to, you know, help people who have been paralyzed use their arms and legs,
Starting point is 01:06:21 and this kind of thing. So, you know, he's a really kind of remarkable person to want to pick on. But what happened was, because he got all this PR for being very wealthy, the Biden administration targeted him. You know, and again, they're fascists. So it's really a power struggle, always, with the fascists, aimed at anybody who looks like they're becoming powerful. And some of the things they did, and one of the ones that's talked about a lot, was they sued him.
Starting point is 01:06:54 The Biden Department of Justice sued him for discriminating against refugees. But he had a contract with the U.S. Department of Defense that required him to only hire U.S. citizens. So he was breaking the law either way. And they never dropped the lawsuit, even after it was pointed out, even after it was pointed out by Congress.
Starting point is 01:07:18 So it was clear harassment, and I think that his conclusion from that was, you know, this, we are, like, ironically, I think his conclusion was we're losing the democracy. You know, we're going into this very, very strange world where the incentives are all upside down. And, you know, the way Elon thinks is it's up to him to save it. And so he got, like, extremely involved. And then I think the more involved he got, the more he both realized, like, a lot of the things really were dangerous. And then secondly, that he personally would be somebody who would know how to fix it. And you go, like, well, why the hell would Elon must know how to fix the government and all this? And, you know, this is the thing that everybody's saying now.
Starting point is 01:08:09 And it's funny, because I told this to Andreessen years ago, because I'm a big fan of Isaac Newton. And we always talked about, like, who is Elon like? You know, what entrepreneur comes to mind? And it really wasn't, you know, maybe Thomas Edison, but not really. But Isaac Newton was really the one that I always thought he was most like,
Starting point is 01:08:38 because it's like, okay, who can build, like, rockets and cars and this and that and the other? But the reason I thought he was like Isaac Newton was what happened at the end of Isaac Newton's life, when I think he was in his late 60s, maybe like 67, 68. And for those of you who don't know, Isaac Newton, like, he figured out how the entire world works and wrote it down in a book called the Principia Mathematica, which is probably the most amazing work in the history of science. And he did it entirely by himself. Like, he didn't even talk to anybody at the time he wrote it. I think he was trying to figure out what God was or something like that.
Starting point is 01:09:19 As you do. But so he gets to be, you know, in his late 60s, and the Bank of England has a crisis, which is causing a huge crisis for the whole country, which is there's a giant counterfeiting problem. So the currency is going to be undermined, and England's going to basically go bankrupt. And they had no idea what to do about it. So they call Isaac Newton, because he's the smartest man in the world. Of course you're going to call him.
Starting point is 01:09:46 So Isaac Newton, 67-year-old, like, hermit physicist, goes in. And he says, okay, I can help with the problem. Make me CEO of the Mint. So they make him CEO of the Mint, you know, kind of head of DOGE, whatever. And he reorganizes the Mint in like a week, and then fixes the technology in a month, and completely makes it impossible to counterfeit. Then he becomes a private eye and goes into all the pubs where the counterfeiters are, and arrests all of them.
Starting point is 01:10:19 Then he learns the law, becomes the prosecutor, prosecutes all the counterfeiters, and has a 100% conviction record. And that, by the way, that's Elon. That's my saying. I didn't know that part of his story. Oh, yeah, yeah. So it's an amazing thing. And if you look at Elon and DOGE, to me, the most remarkable thing about DOGE is how he's done it. So if you or I were to say, okay, let's go in and kind of get the waste and fraud out of the government,
Starting point is 01:10:57 What do we do? We would like audit the departments or this and that, the other, and so forth. No, no, no, no, no. Like, that's not how he thinks. He's like, well, the first thing, like, how do the change? checks go out? Like, how is this system design? Like, when does the money lead the building? And then, oh, it all comes out of one system, let me have access to that system, and I'll look at all the payments. I'm not asking anybody what they're spending. I'm looking at what they're spending.
Starting point is 01:11:25 Like, I'm getting to ground truth, and then I'm going to work my way backwards from there. And so, you know, not only is he not unqualified, he's maybe the only person qualified to figure out how we're spending seven trillion dollars. So, you know, he's just a very unique individual. He's also a troll. He also likes upsetting people. I get all that. But what he brings to the table is pretty interesting, I would say. Very extraordinary. You have also written about another extraordinary historical figure from the Haitian Revolution, a guy named Toussaint. Louverture, yeah. Yeah, tell us about him, because there's something about this moment,
Starting point is 01:12:18 about being a master strategist, about using what you have, being creative that feels like it's very apropos to this moment. Yeah, so what made his story special? Yeah, so Toussaint was another one of these characters in history, Like there are certain, I call them like once in every 400 year type people where you just don't see them that often. But so it turns out like in the history of humanity, there's been one kind of successful slave revolt that like entered in an independent state, which, you know, if you think about the history of slavery, which goes back thousands of years, really kind of from the beginning of written history, like we. had slavery. So it's like a pretty old-time construct. And, you know, there's a lot of motivation to have a revolt if you're a slave. But why only one successful one? And it turns out it's really
Starting point is 01:13:20 hard, you know, to generate an effect of revolt if you're slaves because slave culture is difficult because you don't own, right, if you don't have any sense of, you know, owning anything, you don't own your own will, right? Like you are at the kind of pleasure of who's ever running things. So long-term thinking doesn't make sense. And what it, because like why plan for next week? It doesn't matter what you plan. Like, it's not yours.
Starting point is 01:13:57 So everything's going to be very short-term. And short-termism is a real problem in a military context, because in order to have an effective military, there needs to be trust, right? Like, you have to be able to trust people to execute the order. I give an order. It's kind of like the Byzantine generals problem, to go back to crypto, where, like, I have to trust that you're going to execute the order, and you have to trust that I'm giving the correct order. But trust is a long-term idea, because it comes from, okay, I'm going to do something for you today because I trust that
Starting point is 01:14:33 down the line you'll do something for me. That doesn't really exist in slave culture, because there is no long term. There is no tomorrow. So, like, how do you go from that to running a successful revolution? And then if you look at Haiti at the time,
Starting point is 01:14:51 you know, you had the French army, the British army, and the Spanish army all in there, kind of fighting for it. So, like, really well-developed, you know, kind of the strongest militaries of the era, all in that region, all very interested in the sugar, which was quite valuable at the time.
Starting point is 01:15:13 So like, how in the world would you ever get out of that? And it turned out, you know, he was probably the great cultural genius of the last, you know, maybe in history, but certainly the last, you know, several hundred years. And he was able, you know, because he was a person who, although he was born a slave, was very, very integrated into European culture because he was so smart. And so the person who ran the plantation kind of took him to all the diplomatic meetings around and so forth. And he got very involved and kind of mastered, you know, European culture, so to speak, and the different subtle. around it. And he started adopting those things and applying them to his leadership. And then,
Starting point is 01:16:05 you know, furthermore, he incorporated Europeans into the slave army. So he would defeat the Spanish, capture some guys rather than kill them, and incorporate the best leaders into his army. And he built this very, like, advanced hybrid fighting system, where they used a lot of the guerrilla techniques that he had brought over from Africa, and then he combined that with some of the kind of more regimented discipline strategies of the Europeans. And in building all that, he ended up building this massive army and, you know, defeated Napoleon and everyone else.
Starting point is 01:16:51 And it was like just quite a remarkable story. about how he just kind of figured everything out from his principles. And in a way, yeah, that was, you know, kind of very much like Elon in that sense. Yeah, one of the things I heard you talk about, yeah, yeah, wildly. But one of the things I heard you talk about
Starting point is 01:17:10 that I thought was so ingenious was he would basically use song and sound as like encrypted language. It's really really. Yeah, yeah. So that was like a very cool thing. So, right, remember that this is in the days before telephony or the internet
Starting point is 01:17:27 or any of these things. You know, it's pre-Alexander Graham Bell and all that kind of thing. And so the Europeans were literally on, like, you know, notes, carrier pigeons, guys running back and forth, and so forth. And so as a result,
Starting point is 01:17:45 you kind of needed the army together in one place just so you could communicate the order. To San, basically, you know, had these drummers and these songs, which he could put on top of, like, the hill, who could be very, very loud. And then he would separate his army, you know, into like six or seven groups.
Starting point is 01:18:08 But in the song would be embedded the order of when to attack and when to retreat and all these kinds of things. So he had this, like, super advanced wide-area communication system that nobody else had. And that's a big advantage for them. Yeah, the reason that comes up for me now is we have all these new technologies that are coming online. And the person that's going to be able to get outside that box and see something new and fresh is going to be able to use this in totally different ways. In the final analysis, I think you and I see it very differently in terms of AI's ability to ultimately gobble up what humans can do. But right now, AI is this incredible tool
Starting point is 01:18:49 But right now, AI is this incredible tool. that as an entrepreneur, for me, it has been ridiculously exciting to, one, see how much farther each of my employees can push their own abilities by using AI. And then it does not take much to prognosicate out, you know, 12, 18 months to understand where the tools are going to be and how much more they're going to let you do. Because we're largely an entertainment company. So for us to look at that and just the revolutionary changes, but you can't be trapped inside the old way of thinking. You've got to, like you said, build up from first principles. Yeah, it's a new creative canvas. I think that's like a really great way of thinking about it and that it's like, well,
Starting point is 01:19:34 is your creativity going to be used on, you know, kind of frame-by-frame editing of, like, a video, or will it be thinking of, like, incredible new things you can do in a video ad that you could never do before, and having the AI do that for you? And so it's a little bit of a readjustment of where you put your creative energy, and the things that are possible, and so forth. And I think we're really seeing that across the board. Like, in our firm, we're applying a lot of AI, and you'd be like, oh, well, is this going to mean, you know, you don't have human investors anymore? And it's actually been totally the opposite. Like, instead of, like, painstakingly collecting,
Starting point is 01:20:23 you know, all the data needed to put the investment memo together, like, yeah, it just does that for you. And then you're just thinking about, like, okay, what are the really compelling things about this? Or, rather than, you know, trying to track every entrepreneur and, like, great engineer in our database, the AI is just tracking all those people and letting you know, hey, that guy just updated his LinkedIn profile, or that guy just put out, like, an interesting tweet. Maybe you should call them, and that kind of thing, which is just, like, a much more kind of fun part of the game. And so, you know, look, I would say the best predictor of kind of how things are going to go is more like what's happening now than, like, the most dystopian view of it that we can possibly
Starting point is 01:21:14 think of, which I think is where a lot of people go to. And like I said, I think some of that's a name, you know, artificial intelligence. It just sounds. We hate everything artificial, so what do we name of it artificial? That's too true. Ben, I've enjoyed every minute of this. Where can people keep up with you? Yeah, well, I am B Horowitz on AX, and, you know, that's probably the best thing. We're A16Z.com, and I hope you enjoyed it. And that was great fun, good fun catching out.
Starting point is 01:21:48 It was indeed. And then you also have multiple books. that people can read that are extraordinarily well-respected in the field. So also thank you for those. Absolutely. Awesome. Thanks very much. All right. Well, thank you, brother.
Starting point is 01:22:01 I appreciate it. All right, everybody, if you have not already, be sure to subscribe. And until next time, my friends, be legendary. Take care. Peace. Thanks for listening to the A16Z podcast. If you enjoy the episode, let us know by leaving a review at rate thispodcast.com slash A16Z.
Starting point is 01:22:19 We've got more great conversations coming your way. See you next time.
