a16z Podcast - David Sacks: AI, Crypto, China, Dems, and SF

Episode Date: November 3, 2025

David Sacks, White House AI and Crypto Czar, joins Marc, Ben, and Erik to explore what's really happening inside the Trump administration's AI and crypto strategy. They expose the regulatory capture playbook being pushed by certain AI companies, explain why open source is America's secret weapon, and detail the infrastructure crisis that could determine who wins the global AI race.

Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Transcript
Starting point is 00:00:00 The Europeans, I mean, they have a really different mindset for all of this stuff. When they talk about AI leadership, what they mean is that they're taking the lead in defining the regulations. You know, they get together in Brussels and figure out what all the rules should be. And that's what they call leadership. It's almost like a game show or something. They do everything they can to strangle them in their crib. And then if they make it through like a decade of abuse as small companies, then they're
Starting point is 00:00:22 going to give them money to grow. Ronald Reagan had a line about this: if it moves, tax it; if it keeps moving, regulate it; if it stops moving, subsidize it. The Europeans are definitely at the subsidize stage. AI and crypto now sit at the center of the global race for technological and economic leadership. Today, you'll hear from David Sacks, Marc Andreessen, and Ben Horowitz on what it takes for America to stay ahead. We discussed the Trump administration's new approach to AI and crypto policy, the balance between innovation and regulation, and how the U.S. can lead on energy, chips, and open source while avoiding the mistakes of overregulation.
Starting point is 00:00:57 Let's get into it. David, welcome to the a16z podcast. Thanks for joining. Yeah, good to be here. So, David, you're the AI and Crypto Czar. Why don't you first talk about why it makes sense to have those as a portfolio? What do they have to do with each other? And then I'll have you lay out what's the Trump plan on those two categories and how we're doing. Well, there are two technologies that I guess are relatively new.
Starting point is 00:01:19 And so there's a lot of fear of them. And I think people don't really know that much about them. They don't really know what to make of them. I think that from a policy standpoint, I mean, we can talk about the similarities and differences. The approaches are a little different. I think with crypto, the main thing that's needed is regulatory certainty. All the entrepreneurs I've talked to over the years,
Starting point is 00:01:41 they all say the same thing, which is just tell us what the rules are. We're happy to comply, but Washington won't tell us what they are. And in fact, during the Biden years, you had an SEC chairman who took an approach, which I guess has been called regulation through enforcement, which basically means you just get prosecuted. They don't tell you what the rules are. You just basically get indicted.
Starting point is 00:02:01 And then everyone else is supposed to divine what the rules are as you get prosecuted and fined and imprisoned. So that was the approach for several years. And as a result of that, basically the whole crypto industry was in the process of moving offshore. And America, I think, was being deprived of this industry of the future. And so President Trump during his campaign
Starting point is 00:02:24 last year gave a now famous speech in Nashville in which he declared that he would make the United States the crypto capital of the planet and that he would fire Gensler. That was like the big applause line. I applauded. He's talked about how surprised he was at what a big ovation he got at that. So he said it again and the crowd erupted again. But in any event, he promised basically to provide this clarity so that the industry would understand what the rules are, be able to comply. In turn, that should provide greater protection for consumers
Starting point is 00:02:53 and businesses, everyone who is part of the ecosystem, and it makes America more competitive. So I think that's the mandate on crypto; in a way, it's pro-regulation. It's basically, we want to put in place regulations. In a way, AI is kind of the opposite, where I think the Biden administration was too heavy-handed.
Starting point is 00:03:15 They were starting to really regulate this area without even understanding what it was. No one had really taken the time to understand how AI was even being used, what the real dangers were. There was this intense fear-mongering, and as a result of that, the Biden administration was in the process of implementing very heavy-handed regulations on both the software and hardware side. And we can drill into that. I think that with the Trump administration, the approach has been that we want the United States to win the AI race. It's a global competition. Sometimes we mention the fact that China is probably our main competitor in this area. They're the only other country that has the technological capability, the talent, the know-how, the expertise to beat us in this area, and we want to make sure the United States wins. And of course, in the U.S., it's not really the government that's responsible for innovation. It's the private sector. So that means
Starting point is 00:04:10 that our companies have to win. And if you're imposing all sorts of crazy burdensome regulation on them, then that's going to hurt, not help. So the president gave, I think, a very important AI policy speech a couple months ago on July 23rd, where he declared, in no uncertain terms, that we had to win the AI race, and he laid out several pillars on how we do that. It was pro-innovation, pro-infrastructure, which also means pro-energy, and pro-export. And we can drill into all those things if you want, but that was the high level. And so I think that, again, with AI, the idea is kind of like, how do we unleash innovation? And I think with crypto, it's been more about how we create regulatory certainty.
Starting point is 00:04:48 But in terms of my role, like, why am I doing both? I mean, I think the common denominator is just, again, these are new technologies. They both obviously come from the tech industry, which has a very different culture than Washington does. And I kind of see it as my role to help be a bridge between what's happening in Silicon Valley and what's happening in Washington and helping Washington understand
Starting point is 00:05:09 not just the policy that's needed or the innovation that's happening, but also kind of culturally, what makes the tech industry different and special and how that needs to be protected from a government doing something excessively heavy-handed. So David, you know, we're going to talk a lot about AI today, but just on crypto, I've had this interesting experience
Starting point is 00:05:27 this year, kind of after the election, kind of people adjusted to the change of government. And I've had this discussion with a number of people who, let's say, in politics, who were previously anti-crypto, who have been trying to figure out how to kind of get to a more sensible position. And then also actually people in the financial services industry who kind of followed it from a distance and maybe participated in the various debanking things without really understanding what was happening. But the common denominator has been, they're like, Mark, I didn't really understand how bad it was. I basically thought you guys in tech were basically just whining a lot and pleading as a special interest and kind of doing the normal thing. And I figured the horror
Starting point is 00:06:01 stories were kind of made up, people getting prosecuted and entrepreneurs getting their houses raided by the FBI and like the whole panoply of things that happened. And I now, in retrospect, now that I go back and look, I'm like, oh my God, this was actually much worse than I thought. Do you have that experience? And as you're in there and kind of as you now have a complete view of everything that happens, you think people understand actually how bad it was? I mean, I think it's a great point. I mean, I didn't really know either. You kind of heard generally. I mean, we knew that there was debanking going on.
Starting point is 00:06:26 And by the way, it wasn't just crypto companies that were being debanked, but their founders were being debanked personally. So if you were the founder of a crypto company, you couldn't open a bank account. I mean, that's a huge problem. It's like, how do you transact? How do you make payments? How do you pay people?
Starting point is 00:06:39 I mean, it basically deprives you of a livelihood. It's a very extreme form of censorship. So that was definitely happening. And then, of course, you have all the prosecutions that the SEC was behind. So, yeah, it was really bad. And I remember back in, I think it was in March, we had a crypto summit at the White House. And one of the attendees said that a year ago, I would have thought it was more likely that I'd be in jail than that I'd be at the White House. And so it was a really big milestone for the industry.
Starting point is 00:07:05 They'd never ever received any kind of recognition like that, the idea that this was even an industry for which you would do an event at the White House. I mean, at a minimum, I think crypto was seen as very déclassé. But in any event, yeah, no, it's been a huge shift. I mean, we basically have stopped that. And it was very unfair because, again, these founders wanted to comply with the rules, but they weren't told what they were. And that was all part of a deliberate strategy,
Starting point is 00:07:33 I think, to drive crypto offshore. One of the things that is very different between crypto and AI that we've noticed is that on the crypto front, everybody just wanted rules. And the industry was relatively unified. Whereas in AI, we've seen, very, like, interesting kind of calls coming from inside the house
Starting point is 00:07:53 with certain companies really going for regulatory capture. People with early leads are saying, let's cut off all new companies from developing AI, and so forth. What do you make of that and where do you think that's going? I think it's a very big problem. I actually recently criticized one of our AI model companies for engaging in a regulatory capture strategy. Yes.
Starting point is 00:08:14 It's a very fair criticism, by the way. It is very fair. And actually, of course, they denied it. And then, should I tell the story? I mean, rarely do you get vindicated on X so thoroughly and completely as I did on this. The company was basically Anthropic. After they denied it, what basically happened is that Jack Clark, who's a co-founder and head of policy at Anthropic, gave a speech at a conference where
Starting point is 00:08:40 he compared fear of AI to a child seeing monsters in the dark, or thinking there were monsters in the dark, but then you turn the lights on and the monsters are there. I thought that was such a ridiculous analogy. I mean, it's basically puerile. I mean, it's so childish as to be almost self-indicting, because you're basically admitting the fear is made up, not real. In any event, so I said, well, this is like fear-mongering
Starting point is 00:09:04 and part of the regulatory capture strategy. And of course, they denied it. But then a lawyer who was in the crowd at his speech said, well, yeah, but Jack's not telling you what he said during the Q&A, in which he basically admitted that everything Anthropic was doing with things like SB 53, which is supposedly just implementing transparency, was just a stepping stone to their real goal, which was to get a system of pre-approvals in Washington before you can release new models.
Starting point is 00:09:33 And he admitted as part of the Q&A that making people very afraid was part of their strategy. So again, that's as much of a smoking gun as you could ever get in a spat on X. But the reason why I think that approach is so damaging is that the thing that's really made, I think, Silicon Valley special over the past several decades is permissionless innovation, right? It's that two guys in a garage can just pursue their idea. Maybe they raise some capital from angels or VCs first, basically people who are willing to lose all of their money. And these are people who are young founders. They could also be a future dropout in a dorm room.
Starting point is 00:10:11 And they're able just to pursue their idea. And the only reason I think that has happened in Silicon Valley, whereas in industries like, I don't know, pharma or healthcare or defense or banking, these highly regulated industries, you just don't see a lot of startups, is because they're all heavily regulated, which means you have to go to Washington to get permission to do things.
Starting point is 00:10:31 And the thing I've seen in Washington is just that, you know, the approvals get set up for reasons, but those reasons very quickly stop mattering. And it just matters like, how good your government affairs team is at navigating through the bureaucracy and figuring out how to get those approvals. And it's not something that your typical startup founders are going to be good at. It's something that big companies get good at because they've got the resources and that's exactly what regulatory capture means. So the whole basis of Silicon Valley success, the reason why
Starting point is 00:11:02 it's really the crown jewel of the American economy and the envy of the rest of the world. We see all these attempts by all these other countries to create their own Silicon Valley. The reason that's the case is because of permissionless innovation. And what is being contemplated and discussed and implemented with respect to AI is an approval system for both software and hardware. And this is not theoretical. This has already been happening. On the hardware side, one of the last things that the Biden administration did, the last week of the Biden administration, was impose the so-called Biden diffusion rule, which required
Starting point is 00:11:38 that every sale of a GPU on Earth be licensed by the government, which is to say pre-approved, unless it fits into some category of exception. Basically, the overall idea is that compute is now going to be a licensed and pre-approved category. We rescinded that. And then on the software side, like I said, I mean, the goal very clearly is to start with these reporting requirements
Starting point is 00:12:08 to the government, to the states, and then where that ramps up to is you have to go to Washington to get permission before you release a new model. And, you know, this would drastically slow down innovation and make America less competitive. I mean, you know, these approvals can take months. They can take years. When a new chip is released every year
Starting point is 00:12:31 and we have licenses that have been sitting in the hopper for two years, I mean, the requests are obsolete by the time they finally get approved. And that would be even more true with models, where the cycle time is, you know, like three or four months for a new model. And what exactly is a bureaucracy in Washington going to know about this technology that would put them in a good position to approve it, in any event? But this is what is being contemplated right now. And I think it would be a disaster for Silicon Valley and for innovation, and therefore for American competitiveness. And I think we will lose the AI race to countries like China if this is the set of rules that we have. Yeah, one of the really diabolical things about their argument is if they really believed there was a monster, then why are they buying GPUs at a rate faster than anybody? And then the other thing that we know from being in the industry is their reputation is they have literally the worst security practices in the entire industry with respect to their own
Starting point is 00:13:34 code. So if you were building this monster, the last thing you'd want to do is, like, leave a bunch of holes around for people to hack it. So they don't believe anything they're saying. It's completely made up to try and maintain their lead. Well, I think it's a heady drug to basically say that, you know, we're creating this new superintelligence that could destroy humanity, but we're the only ones who are virtuous enough to ensure that this is done correctly, right? And I think that, you know.
Starting point is 00:14:09 It's a good recruiting tool. Yeah. Join the virtuous team. Yes. I think that's right. But yeah, I think that is definitely, you know, of all the companies, that particular one has been the most aggressive in terms of the regulatory capture and pushing for these regulations.
Starting point is 00:14:29 And just, I mean, let's bring it up a level; that's enough about them. There are now something like 1,200 bills going through state legislatures right now to regulate AI. 25% of them are in the top four blue states, which are California, New York, Colorado, and Illinois. Over 100 measures have already passed. I think three of them just got signed in the last month
Starting point is 00:14:54 in California alone. Let me tell you what Colorado, actually Colorado, Illinois, and California, have all done: some version of a thing called algorithmic discrimination, which I think is really troubling in terms of where it's headed. What this concept means is that if the model
Starting point is 00:15:14 produces an output that has a disparate impact on a protected group, then that is algorithmic discrimination. And the list of protected groups is very long. It's more than just the usual ones. So, for example, in Colorado, they've defined people who may not have English language proficiency
Starting point is 00:15:37 as a protected group. So I guess if the model says something bad about, you know, illegal aliens, that would basically violate the law. I don't know exactly how model companies are even supposed to comply with this rule. I mean, presumably discrimination is already illegal.
Starting point is 00:15:56 So if you're a business and you violate the civil rights laws and you engage in discrimination, you're already liable for that. There's no reason, if you happen to make that mistake and you use any kind of tool in the process of doing it, that we need to go after the tool developer, because we can already go after the business that's made that decision. But the whole purpose of these laws is to get at the tool. They're making not just the business that is using AI liable. They're making the tool developer liable.
Starting point is 00:16:28 And I don't even know how the tool developer is supposed to anticipate this, because how do you know all the ways that your tool is going to be used? Especially if the output is 100% true and accurate and the model is doing its job, how are you supposed to know that output was used as part of a decision
Starting point is 00:16:50 that had a disparate impact? Nevertheless, you're liable. And the only way that I can see for model developers to even attempt to comply with this is to build a DEI layer into their models that tries to anticipate, could this answer have a disparate impact? And if it does, we either can't give you the answer, or we have to sanitize or distort the answer.
Starting point is 00:17:13 And, you know, you just take this to its logical conclusion, and we're back to, you know, woke AI, which, by the way, was a major objective of the Biden administration. That Biden executive order on AI that we rescinded as part of the Trump administration had something like 20 pages of DEI language in it. They were very much trying to promote DEI values, as they called it, in models. And then we saw what the results of that were. You know, we saw the whole Black George Washington thing, where history was being rewritten in real time because somebody built, you know, a DEI layer into the model.
Starting point is 00:17:47 And, you know, I almost feel like the term woke AI is insufficient to explain what's going on, because it somehow trivializes it. I mean, what we're really talking about is Orwellian AI. We're talking about AI that lies to you, that distorts an answer, that rewrites history in real time to serve a current political agenda of the people who are in power. I mean, it's very Orwellian, and we were definitely on that path before President Trump's election. It was part of the Biden EO. We saw it happen in the release of that first Gemini model. That was not an accident;
Starting point is 00:18:29 those distorted outputs came from somewhere. So, you know, just to me, this is actually the biggest risk of AI. It was not described by James Cameron;
Starting point is 00:18:45 it was described by George Orwell. In my view, it's not the Terminator, it's 1984: that as AI eats the internet and becomes the main way that we interact and get our information online, it'll be used by the people in power
Starting point is 00:19:03 to control the information we receive, that it'll contain an ideological bias, that essentially it'll censor us, all that trust and safety apparatus that was created for social media will be ported over to this new world of AI. Mark, I know that you've spoken about this quite a bit. I think you're absolutely right about that.
Starting point is 00:19:22 And then on top of that, you've got the surveillance issues, where AI is going to know everything about you. It's going to be your kind of personal assistant. And so it's kind of the perfect tool for the government to monitor and control you. And to me, that is by far the biggest risk of AI. And that's the thing we should be working towards preventing. And the problem is a lot of these regulations that are being whipped up by these fear-mongering techniques, they're actually empowering the government to engage in this type of control that I think we should all be very afraid of, actually. Sam Altman earlier this week said that in 2028, or by 2028, he expects
Starting point is 00:20:05 to have automated researchers. I'm curious just for your sort of state of AI, sort of model development or just progress in general, and what you think are the implications. Some people have been sort of, you know, saying that AGI is two years away, sort of the AI 2027 paper, or Leopold Aschenbrenner's Situational Awareness papers. I'm curious kind of what's your reading of the state of play in terms of AI development and what are the implications from that. So my sense is that people in Silicon Valley are kind of pulling back from the, let's call it, imminent AGI narrative. I saw Andrej Karpathy gave an interview where now all of a sudden he's re-underwritten this, and he says AGI is at least a decade away. He's basically saying
Starting point is 00:20:49 that reinforcement learning has its limits. I mean, it's very useful. It's the main paradigm right now that they're making a lot of progress with. But he says that actually the way that humans learn is not really through reinforcement; we do something a little different, which I think is a good thing, because it means that human and AI will be synergistic, right? I mean, the AI's understanding, if it's based on RL, will be a little different than the way that we intuit and reason. But in any event, I sense more of a pullback from this imminent AGI narrative, you know, the idea that AGI is two years away. Of course, it's like kind of unclear what people
Starting point is 00:21:25 mean by AGI, but it was kind of used in this scary way, that it's kind of this superintelligence that would grow beyond our control. I feel like people are pulling back from that and understanding that yes, we're still making a lot of progress and
Starting point is 00:21:40 the progress is amazing, but at the same time, you know, what we mean by intelligence is multifaceted and it's not like, you know, there's progress being made along some dimensions, but it's not along every dimension. And so, therefore, I think, again, I would just,
Starting point is 00:22:00 I mean, I've described actually the situation we're in right now as a little bit of a Goldilocks scenario, where, you know, the extremes would be, on one hand, you kind of have the scary Terminator situation, imminent superintelligence that'll grow beyond our control. On the other, the other narrative you hear in the press a lot is that we're in a big bubble. So, in other words, the whole thing is fake.
Starting point is 00:22:20 And the media is basically pushing both narratives at the same time. But in any event, I think that the truth is more in the middle. It's kind of a Goldilocks scenario where we're seeing a lot of innovation. I think the progress is impressive. I think we're going to see big productivity gains in the economy from this. But I like the observations that Balaji made recently, where a couple of things really struck me. One was, AI is polytheistic, not monotheistic, meaning what we're seeing,
Starting point is 00:22:52 instead of just one all-knowing, all-powerful God, is a bunch of smaller deities, more specialized models. You know, we're not on that kind of recursive self-improvement track just yet, but we're seeing many different kinds of models make progress in different areas. And then the other one was just his observation that AI is
Starting point is 00:23:20 middle to middle, whereas humans are end to end, and therefore the relationship is pretty synergistic. And I think that's right. I mean, all those observations resonate with me in terms of where we're at right now. And that's very consistent with what we're seeing as well, where, you know, ideas that we thought would for sure get subsumed by the big models are becoming amazingly differentiated businesses, just because the fat tail of the universe
Starting point is 00:23:52 is very fat, and you need really kind of specific understanding of certain scenarios to build an effective model. And that's just how it's going; no model has just
Starting point is 00:24:04 figured out how to do everything. Yeah. I mean, the models work best when they have context, you know. And, I mean, we've all seen this: the more general your prompt, the less likely it is that you're going to be able to
Starting point is 00:24:18 get a great response. I don't know, if you tell the AI, you know, something very general, like what business can I create to make a billion dollars, it's not going to give you something actionable, you know? You have to get very specific about what you're trying to do, and it has to have access to relevant data. Then it can give you some specific answers to a prompt. And I think this is, you know, partly Balaji's point, which is, you know, the AI does not
Starting point is 00:24:55 come up with its own objective. You know, it needs to be prompted. It needs to be told what to do. We've seen no evidence, at this stage, that that's changing. We're still at step zero in terms of AI, you know, somehow coming up with its own objective. And as a result of that, the model has to be prompted, and then it gives you an output, and that output has to be validated. You have to somehow make sure it's correct, because models can still be wrong. And more likely, you have to iterate a few times
Starting point is 00:25:22 because it doesn't give you exactly what you want, so now you kind of reprompt. And we've all had this experience, right? This is why, like, the chat interface is so necessary: it takes you a few times to kind of iterate to get to the output that actually has value for you. Again, you know, the humans are end to end and the AI is middle to middle. We just haven't seen any evidence that that fundamental dynamic is changing. I mean, I'd love to hear what you guys think about this,
Starting point is 00:25:43 and we're obviously at the outset of agents, and, you know, an agent you can give an objective to, and then it'll be able to take on tasks on your behalf. But I suspect that the agents will work better as well when they have a much more narrow context. They're much less likely to go off the rails and start going in weird directions. If you give it a very broad task, it's just not likely to completely figure it out before it needs human intervention. But if you give it something very narrow to do, then it's much more likely to
Starting point is 00:26:14 be successful. So, you know, I would just guess, like, okay, you just tell the AI, you know, sell my product. It's very unlikely that it's just going to figure out what that means and how to do that. But if you're a sales rep and you're using the AI to help you, there's probably very specific tasks that you can tell it to do. And it would be much more successful doing that. So I just tend to think, I mean, this also kind of speaks to the whole job loss
Starting point is 00:26:52 narrative. I just think that this is going to be a very synergistic tool for a long time. I don't think it's going to wipe out human jobs. I don't think the need for human cognition is going away. It's something that we'll all use to kind of get this big productivity boost, at least for the foreseeable future. I mean, I don't know if any of us can predict what's going to happen beyond five or ten years. But I mean, that's just what I'm seeing right now. I don't know. I'm curious, what are you guys seeing on
Starting point is 00:27:26 this front? Generally consistent with that, things are improving. So, like, on agents, the early agents, the longer the running task, the more they would go, like, completely bananas and off the rails. People are working on that.
Starting point is 00:27:37 I do think, like, everything's working better in a narrow context. At least from what we've seen, that will continue. And even, you know, to your point on, like, super smart models,
Starting point is 00:27:49 there's, like, a dozen video models out there, and there's not one that's the best at everything or even close to the best at everything. There's like literally a dozen that are all the best at one thing, which is a little surprising, at least to me, because you would think just the sheer size of the data would be an advantage.
Starting point is 00:28:13 But even that hasn't quite proven out. It all depends on what you want. Do you want a meme? Do you want a movie? Do you want an ad? It's all very, very different. And I think this gets to your main point. Mark Zuckerberg said something that I really liked.
Starting point is 00:28:36 He's like, intelligence is not life. And these things that we associate with life, like we have an objective, we have free will, we're sentient, those just aren't part of a mathematical model that is, you know, searching through a distribution and figuring out an answer, or even a model that, through a reinforcement learning technique, can kind
Starting point is 00:29:04 of improve its logic. So the comparison to humans, I think, just falls short in a lot of ways, is what we're saying. We're just different. And the models are very good at things; they're better than humans at many things already. The other thing I'd bring up related to this, which I think is a little orthogonal but also quite related, is: is the future of the world going to be one or a small number of companies, or for that matter governments or super AIs, that kind of own and control everything, with all the value rolling up into a handful of entities? And there you get into this. There's the hypercapitalist version of it where a few companies
Starting point is 00:29:43 make all the money, or there's the hyper-communist version of it where you have total state control or whatever. Or is this a technology that's going to diffuse out and be in everybody's hands, a tool of empowerment and creativity and individual effort and expressiveness, a tool for basically everybody to use? And I think one of the really striking things about this period of time, and you being in this role, is that scenario number two is very clearly playing out. AI is actually hyper-democratizing. It has spread to more individuals, both in the country and around the world,
Starting point is 00:30:20 in the shortest period of time of any new technology, I think, in history. You know, we're at something like 600 million users today, rapidly on the way to a billion, rapidly on the way to 5 billion, kind of across all the consumer products. And then the best AIs in the world are in the consumer products, right? And so if you use current-day ChatGPT or Grok or any of these things, I can't spend more money and get access to a better AI. It's in the consumer products. And so just in practice, what you have playing out in real time is this technology is
Starting point is 00:30:50 going to be in everybody's hands. And everybody is going to be able to use it to optimize the things that they do, have it be a thought partner, have it be, you know, somebody, you know, an assistant for building companies, you know, starting companies, or creating art or, you know, doing all the things that people want to do. You know, my wife was just using it this morning to design a new entrepreneurship curriculum for our 10-year-old, right? You know, like literally, it's like, oh, wow, that's like a really great idea. And it took her a couple hours and she has like a full curriculum for him to be able to start his first video game company and here's all the different skills that he needs to learn
Starting point is 00:31:23 and here's all the resources. And that's just a level of capability. I mean, to have done that without these modern consumer AI tools, you'd have to go hire an education specialist or something, which for that kind of thing is basically impossible. And, you know, everybody has these stories now in their lives among people they know.
Starting point is 00:31:40 So I think we have a lot of proof that the track this is on is that this is going to be in everybody's hands, and in fact that's going to be a really good thing. And David, I think you guys are really playing a key role in making that happen. I think it's so important that this technology remain decentralized, because the Orwellian concern is kind of the ultimate centralization. And fortunately, so far what we're seeing in the market is that it's hyper-competitive.
Starting point is 00:32:06 There are five major model companies all making huge investments. And the benchmarks, the model performance evaluations, are relatively clustered, and there's a lot of leapfrogging going on. So, you know, Grok releases a new model and it leapfrogs ChatGPT, but then ChatGPT releases something new and they leapfrog. So they're all very competitive and close to each other, and I think that's a good thing. And it's the opposite of what was predicted through this imminent-AGI story, where the storytelling was that one model would get a lead and then it would direct its own intelligence to making itself better,
Starting point is 00:32:50 and so its lead would get bigger and bigger, and you kind of get this recursive self-improvement, and pretty soon you're off to the singularity. And we haven't really seen that. You know, we haven't seen one model completely pull away in terms of capabilities. And I think
Starting point is 00:33:06 that's a good thing. And so, Erik, to your point about this narrative about the virtual AI researcher, that was one variant of this sort of imminent-AGI narrative: the steps would be, you know, models get smarter, the models create a virtual AI researcher,
Starting point is 00:33:23 and then you get a million virtual AI researchers, and then, you know, it's singularity. And I think the sleight of hand in that is: what is a virtual AI researcher, right? It's a very easy thing to say, but what does that really mean? And, you know, to the earlier point, AI is still middle to middle.
Starting point is 00:33:44 It's not end to end. But an AI researcher's job is end to end. There are things the person has to figure out. They've got to set their own objective. They've got to be able to pivot in ways that AI can't. So is it really feasible to create a virtual AI researcher? I think there are parts of the job that AI could get really good at, or even better than humans at, but probably that tool has to be used by a human AI researcher. So I guess the argument, I suspect, could be sort of teleological, in the sense that you might need AGI to create a virtual AI researcher
Starting point is 00:34:24 as opposed to the other way around. And if that's the case, you're not going to get, like, singularity. So I'm a little bit skeptical of that claim. You know, we'll see. Well, Sam said they could do it in 2028. I mean, I guess we'll see in two years. I think all those claims tend to be like recruiting ideas as opposed to actual predictions. He's not the first to mention that idea.
Starting point is 00:34:48 Other model companies have been promoting it. And, you know, Leopold mentioned that, too. We'll see. But I suspect what's wrong with that argument is that a virtual AI researcher requires AGI, and so the idea that you're going to get AGI through a virtual AI researcher is backwards. But we'll see.
Starting point is 00:35:11 You know, we'll see. David, you and the administration, I think, have also been very supportive of open-source
Starting point is 00:35:24 AI, which I think also dovetails into this in terms of the market being very competitive. Do you want to spend a moment on what you guys have been able to do on that and how you think about it? Yeah. I mean, open source is very important because I just think it's synonymous with, you know, freedom. I mean, software freedom. You can basically run your own models on your own hardware and retain control over your own information. And by the way, this is what enterprises
Starting point is 00:35:41 typically do all the time. You know, about half the global data center market is on-prem, meaning enterprises and governments create their own data centers; they don't go to the big clouds. By the way, I've got nothing against the hyperscalers, but people like to run their own data centers and maintain control over their own data and that kind of thing. And I think that will be true for consumers to some degree as well. So I do think it's an important area that we should want to encourage and promote. The irony right now in the market is that the best open-source models are Chinese,
Starting point is 00:36:21 and it's sort of a quirk, right? It's the opposite of what you'd expect. You'd expect the American system to promote open and somehow the Chinese system to promote closed. That has kind of ended up being a little backwards. I think there are good reasons for it.
Starting point is 00:36:40 It could just be kind of a historical accident, the fact that the DeepSeek founder was very committed to open source and that got things started that way. Or it could be part of a deliberate strategy. If you're trying to catch up,
Starting point is 00:36:55 open source is a really good way to do that, because you get all the non-aligned developers to want to help your project, which they can't do with a closed project. So it's a great strategy for catching up. And then also, if you think that your business model, as a company or as a country, is, let's say,
Starting point is 00:37:12 scale manufacturing of hardware, then you would want the software part to be free or cheap, because it's your complement, right? So you try to commoditize your complement. And I don't know whether it's by accident or by design, but that seems to be what the Chinese strategy has been. I think the right answer for the U.S. in this is to encourage our own open source.
Starting point is 00:37:36 I mean, I think it would be a great thing if we saw more open source initiatives get going. I guess there's one promising one called Reflection, which was founded by former engineers from Google DeepMind. So I hope we see more open source innovation in the West. But look, I think it's very important, it's critical. And like I said, in my view, it's synonymous with freedom. And it's definitely not something we want to suppress.
Starting point is 00:38:04 Now, just back to the closed ecosystem for a second. True, we have five major competitors there, and they're all spending a lot of money. I do worry a little bit that at some point in time the market consolidates and we end up with a monopoly or duopoly or something like that, as we've seen in other technology markets. We saw this with search, and so on down the line. And I just think it would be good if this market stayed more competitive than just one or two winners.
Starting point is 00:38:38 And I don't really know what to do about that; I'm just making that observation. I do think that having open source as an option ensures that even if the market does consolidate, you do have an alternative. And it's an alternative that's more fully within your control, as opposed to a large corporation or
Starting point is 00:39:00 the deep state, you know, working with that corporation. As we saw in the Twitter Files, the deep state was working with all these social media companies to implement much more widespread censorship than I think any of us thought possible. So we've seen evidence in the past, in the social networking space,
Starting point is 00:39:24 about how the government could get involved in nefarious ways, and it would be good to have alternatives to prevent, or at least make less likely, that scenario coming about with AI. Yeah. Well, as you know, we and others are very aggressively investing in new model companies of many kinds, including new foundation model companies.
Starting point is 00:39:47 And then also, you know, there are a whole bunch of new open-source efforts that are not yet public that hopefully will bear fruit over the next couple of years. So I think that's great. At least in the medium term, I think we're looking at an explosion of model development as opposed to consolidation, and, you know, we'll see what happens from there. Yeah. That's really good to hear. I mean, I think, you know, if we assess kind of the state of the AI race vis-à-vis China,
Starting point is 00:40:11 the only area where we appear to be behind is open-source models. Yeah. I think, you know, if you don't care whether it's open or closed, we have the lead. Our top model companies are ahead of the top Chinese companies, although they're quite good. But this narrow area of open source seems to be where they have an advantage. So it's great to hear that you guys are seeing a lot more efforts coming to market. Yeah, definitely more coming.
Starting point is 00:40:41 Peter Thiel quipped many years ago that he thought crypto would be libertarian or decentralizing and that AI would be communist or centralizing. And I think one thing we've perhaps learned is that technology isn't deterministic, and that there are a set of choices that determine whether these technologies are decentralizing or centralizing. Maybe we could use that as a segue to go deeper into the state of the race with China. Maybe, David, you could lay out what's most important to get right. You've already indicated open source is one example. You alluded earlier to our strategy as it relates to chips. Some people say that, yes, it's a good idea to do what we're doing
Starting point is 00:41:22 because it'll, you know, limit their domestic semiconductor production. Other people say, well, some of these companies say chips are their biggest limiting factor, so are we enabling them in some way? Why don't you talk about the state of play and then our strategy? Yeah. So, you know, when we talk about winning the AI race, sometimes we say we're in a race against China, and sometimes we just leave it a little bit more vague. Because I don't think we should become overly obsessed with our competitors or adversaries.
Starting point is 00:41:50 I think whether we win or not will mostly have to do with the decisions we make about our own technology ecosystem, not about, you know, what we do vis-a-vis them. And so the president in his July 23rd speech on AI policy, I think mentioned a few of the key pillars of how we win this AI race. And by the way, I'm not saying it ever ends. It might be an infinite game. But we want to be in the lead at least. And I do think that there could be a period of time where, like, you know,
Starting point is 00:42:23 take the Internet, where, I mean, the Internet's still going on, but we understand that who the winners are is kind of baked now. So there could be a period of time in which it's kind of baked who the winners in AI are. But in any event, in terms of how we win this race, I mentioned a few of the key pillars.
Starting point is 00:42:41 Number one is innovation. You know, it's very important to support the private sector because they're the ones who do the innovation. We're not going to regulate our way to beating our adversary. We just have to out-innovate them. I mentioned, I think right now the biggest obstacle is the frenzy of over-regulation happening at the states. I desperately think we need a single federal standard.
Starting point is 00:43:06 A patchwork of 50 different regulatory regimes can be incredibly burdensome to comply with. I think even the people who support a lot of this regulation are now acknowledging that we're going to need a federal standard. The problem is that when they talk about it, what they really want is to federalize the most onerous version of all the state laws. And that can't be allowed either.
Starting point is 00:43:26 So, you know, there's a battle to come. I think as the states become more and more unwieldy, as it becomes more of a trap for startups that they now have to report into 50 different states at 50 different times to 50 different agencies about 50 different things, people are going to realize this is crazy and they're going to try to federalize it. And then the question, I think, is whether we get preemption heavy or preemption light. I think everyone's going to ultimately be in favor of a single federal standard, because one of America's greatest advantages is that we have a large national market, right? Not 50 separate state markets. It's kind of like Europe, which wasn't competitive at all on the Internet
Starting point is 00:44:06 because it's 30 different regulatory regimes. And so if you're a European startup and even if you won your country, it didn't get you very far because you still had to like, you know, figure out how to compete in 30 other countries before you could even win Europe. And then meanwhile,
Starting point is 00:44:21 your American competitor won the entire American market and is ready to scale up globally. So the fact that we have a single national market is fundamental to our competitiveness; it's why winners in America then go on to win the whole world. So we have to preserve that. And I think we will eventually get some federal preemption. I think the question will just again be whether we preempt heavy or preempt light. The second big area is infrastructure and energy.
Starting point is 00:44:49 We want to help this amazing infrastructure boom that's happening. And the biggest limiting factor there, I think, is going to be around energy. I think President Trump's been incredibly far-sighted in this. I mean, he was talking about drill, baby, drill many years ago. He understood that energy is the basis for everything. It's definitely the basis for this AI boom. And we want to basically get all of these unnecessary regulations, the permitting restrictions, a lot of the NIMBYism, out of the way so that AI companies can build data centers and get power for them.
Starting point is 00:45:22 And we can talk about that more if you want. But I think that's a second really huge part of what it's going to take to win the AI race. And then the third area is around exports. And maybe this has been the most controversial one. It really speaks to the cultural divide between Silicon Valley and Washington. I think all of us in Silicon Valley understand that the way you win a technology race is by building the biggest ecosystem, right? You get the most developers building on your platform. You get the most apps in your app store. Everyone just uses you.
Starting point is 00:45:57 I mean, you know, the companies that typically win are the ones that get all the users, all the developers, and so on. And so we in Silicon Valley have a partnership mentality. We want to just publish the APIs and get everyone using them. Washington is a different mentality, right? It's much more command and control: we want you to get approved, we kind of want to hoard this technology, only America should have it.
Starting point is 00:46:20 And this was really fundamental, I think, to the Biden diffusion rule, where the point of that rule is to stop diffusion, right? Diffusion is a bad word. But in Silicon Valley, we understand that diffusion is how you win. I don't think we ever called it diffusion before. That was a new word for me. We just called it usage. But we understand that getting the most users is how you win. So there's a fundamental culture clash going on right now.
Starting point is 00:46:47 And, you know, the way I kind of parse it is that what we decide to sell to China is always going to be complicated, because they're our competitor and our adversary, and there's the whole potential for dual use. So the question of what you sell to China is nuanced. But what we sell to the rest of the world should be an easy question: we should want to do business with the rest of the world. We should want to have the largest ecosystem possible. And every country we exclude from our technology alliance, we're basically driving into the arms of China, and it makes their ecosystem bigger. And what we saw under the Biden years
Starting point is 00:47:24 is that they were constantly pushing other countries into the arms of China, starting with the Gulf states in October of 2023. I'm talking about countries like Saudi Arabia and the UAE, long-standing U.S. allies; they weren't allowed to buy chips from the U.S.
Starting point is 00:47:48 And here we are telling all these countries that, you know, AI is fundamental to the future. It's going to be the basis of the economy. And yet we're excluding you from participating in the American tech stack. Well, you know, it's obvious what they're going to do. You know, the only play we're giving them is to go to China. And so, you know, all of these rules basically just create pent up demand for Chinese chips and models. And it creates a Huawei Belt and Road.
Starting point is 00:48:13 And we are hearing that Huawei is starting to proliferate, or diffuse, in the Middle East and in Southeast Asia. I just think it's a really counterproductive strategy. We're completely shooting ourselves in the foot. And the greatest irony is that the people who've been pushing this strategy
Starting point is 00:48:34 of driving all these countries into China's arms have called themselves China hawks, as if what they're doing is hurting China. No, it's helping China. I mean, it's basically just handing them markets. And our products are better. But if you don't give these countries a choice
Starting point is 00:48:51 to buy the American tech stack, obviously they're going to go with the Chinese tech stack. And, you know, China is out there promoting DeepSeek models and Huawei chips. And they're not wringing their hands about whether exporting chips for a data center in the UAE is going to, like, create the Terminator, and all these ridiculous narratives, these reasons we've invented not to sell American technology to our friends. So, you know, that has ended up being, I think surprisingly, maybe the most controversial part of what we've advocated for.
Starting point is 00:49:32 But there you have it. So in any event, I'll stop there. Those are some of the major pillars of what we've been advocating. Should we go deeper on the infrastructure and energy point, in terms of what it's really going to take to get enough capacity, or what's most important in that second bullet you were talking about? Yeah, I mean, there are definitely people who are much more knowledgeable
Starting point is 00:49:53 about energy than I am and are experts in the space. But here's what I've been able to kind of divine. So first of all, the administration, President Trump, has signed multiple executive orders to allow for nuclear, to make permitting easier. We've even freed up federal land for data centers, hopefully to help get around some of these state and local restrictions.
Starting point is 00:50:15 And obviously, the president has made it a lot easier to stand up new energy projects, power generation, all that kind of stuff. I still think, though, that we have a growing NIMBY problem at the state and local level in the U.S. that is becoming a little bit worrisome. And if we don't figure out a way to address it, then it could really slow down
Starting point is 00:50:51 the build-out of this infrastructure. In terms of power, my understanding is that nuclear is going to take five or ten years. It's just not something we're going to be able to do in the next two or three years. So in the short term, it really means gas is the way these data centers are going to get powered. And the issue with gas is not supply. America has plenty of natural gas, and it exists in enough red states where you could just build out data centers
Starting point is 00:51:13 close to the source, which would be smart. The issue is there's a shortage of these gas turbines. There are only like two or three companies that make these things, and there's a backlog of two or three years. So I think that's probably the immediate problem that needs to get solved. However, I do think that in the next two or three years, we could get a lot more out of the grid. I've had energy executives tell me that if we could just shed 40 hours a year of peak load from the grid to backup generators, to diesel, things like that,
Starting point is 00:51:52 you could free up an additional 80 gigawatts of power, which is a lot. Because I guess the way it works is, only about 50% of the grid's capacity is used throughout the year, because they have to build enough capacity for the peak days, like the hottest day in summer or the coldest day in winter. And they don't want to commit a bunch of the capacity and then find out that you have a really cold day in winter and people can't get enough heat for their homes.
Starting point is 00:52:25 And so they can't overcommit to, say, contracts with data centers, things like that. But again, if you could shed that 40 hours a year of peak load to backup, then you would be able to free up 80 gigawatts, which is a lot. And that would definitely get us through the next two or three years until the gas turbine bottleneck's been alleviated.
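The 50%-utilization and 80-gigawatt point is essentially load-duration-curve arithmetic, and it can be sketched numerically. Below is a minimal illustration: the total-capacity figure and the load shape are invented for demonstration, and only the roughly-50%-utilization, 40-hour, and 80-gigawatt figures come from the conversation itself.

```python
# Rough sketch of the peak-shaving arithmetic described above.
# The grid numbers here (total capacity, load shape) are made up for
# illustration; only the "~50% utilization", "40 hours", and "80 GW"
# figures come from the conversation itself.
import numpy as np

rng = np.random.default_rng(0)

PEAK_CAPACITY_GW = 1200.0   # assumed total installed grid capacity
HOURS_PER_YEAR = 8760
SHED_HOURS = 40             # peak hours covered by backup generation

# Synthetic hourly load averaging roughly half of capacity, with a
# seasonal swing that pushes a small number of hours close to the peak.
seasonal = 250.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, HOURS_PER_YEAR)) ** 2
noise = rng.normal(0.0, 60.0, HOURS_PER_YEAR)
load = np.clip(0.5 * PEAK_CAPACITY_GW + seasonal + noise, 0.0, PEAK_CAPACITY_GW)

# Load-duration curve: the year's hours sorted from highest demand down.
duration_curve = np.sort(load)[::-1]

# If backup generation covers the top SHED_HOURS hours, firm commitments
# (say, power contracts with data centers) only need to fit under the
# 41st-highest hour instead of under the absolute annual peak.
freed_gw = duration_curve[0] - duration_curve[SHED_HOURS]
print(f"Capacity freed by shedding the top {SHED_HOURS} hours: {freed_gw:.1f} GW")
```

With a real load-duration curve, the freed capacity is just the gap between the absolute peak and the load level exceeded only 40 hours a year; the 80-gigawatt figure the executives cite implies that gap is large on the actual U.S. curve.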
Starting point is 00:52:50 And then eventually you get to nuclear. So that would be very good. I think the issue there is just that there's a whole bunch of insane regulations preventing load shedding. So, for example, you can't use diesel. And Chris Wright, the Secretary of Energy, is very good on all this stuff.
Starting point is 00:53:10 And I think he's working on unraveling all of this so we could actually do this. It's funny, David, as you talk about this stuff, I can't help but think it's a little bit like the principle is just: do the opposite of the EU. Basically, everything we've talked about so far is basically the opposite of the European approach. Yeah.
Starting point is 00:53:32 Well, I mean, the Europeans have a really different mindset for all of this stuff. When they talk about AI leadership, what they mean is that they're taking the lead in defining the regulations. You know, that's what they're proud of. They think that's their comparative advantage: they get together in Brussels
Starting point is 00:53:51 and figure out what all the rules should be. And that's what they call leadership. The EU just announced, and I shouldn't dunk on them too much, but they just announced a big new public-private tech growth fund to grow EU companies to scale. And I was just like, well,
Starting point is 00:54:09 it's almost like a game show or something. They do everything they can to strangle companies in the crib, and then if they make it through like a decade of abuse as small companies, then they're going to give them money to grow. Well, Ronald Reagan had a line about this: if it moves, tax it; if it keeps moving, regulate it; if it stops moving, subsidize it. The Europeans are definitely at the subsidize-it stage.
Starting point is 00:54:35 Yeah, and I shouldn't dunk on them too much. But I've always been proud to be an American, particularly now, because it really feels like we're re-centering on core American values in a lot of the things that we're talking about, which is just really great. Yeah. I mean, again, our view is that, first of all, we have to win the AI race. We want America to lead in this critical area. It's fundamental for our economy and our national security. How do you do that? Well, our companies have to be successful, because they're the ones who do the innovation. Again, you're not going to regulate your way to winning the AI race. I'm not saying we don't need any regulations, but the point is just that's not what's going to determine whether we're the winners or not. David, you recently tweeted that climate doomerism is perhaps giving way to AI doomerism, based on Bill Gates' recent comments.
Starting point is 00:55:24 What do you mean by this? Do you mean it's going to be a major flank of the U.S. left? Or what do you mean by this comment? Well, I think the left needs a central organizing catastrophe to justify their takeover of the economy, to regulate everything, and especially to control the information space. And I think you're seeing that the allure
Starting point is 00:55:50 of the whole climate-change doomer narrative has kind of faded. Maybe it's the fact that they predicted 10 years ago that the whole world would be underwater in 10 years, and that hasn't happened. So at a certain point, you get discredited by your own catastrophic predictions. I suspect that's where we'll be with AI doomerism
Starting point is 00:56:05 in a few years. But in the meantime, it's a really good narrative to take the place of climate doomerism. There are actually a lot of similarities, I would say. There's a lot of pre-existing Hollywood storytelling and pop culture that supports this idea. You've got the Terminator movies
Starting point is 00:56:25 and The Matrix and all this kind of stuff. So people have been taught to be afraid of this. And then there's enough pseudoscience behind it. You've got all these contrived studies, like the one where they claim that the AI researcher got blackmailed by his own AI model or whatever.
Starting point is 00:56:49 Look, it's very easy to steer the model towards the answer that you want and a lot of these studies have been very contrived. But there's this patina of pseudoscience to it. It's certainly technical enough that the average person doesn't feel comfortable saying that this doesn't make any sense. I mean, it's more like you're not an expert.
Starting point is 00:57:08 What do you know? And even Republican politicians, I think, are kind of falling for this. So, yeah, I mean, it's a really desirable narrative. And of course, you know, as AI touches more and more things, more and more parts of the economy, every business is going to use it to some degree. If you can regulate AI, then that kind of gives you a lot of control over lots of other things. And like I mentioned, AI is kind of eating the Internet. It's like the main way that you're getting information.
Starting point is 00:57:34 So, again, if you can kind of get your hooks into what the AI is showing people, now you can control what they see and hear and think, which dovetails with the left's censorship agenda, which they've never given up on, and dovetails with their agenda to brainwash kids, which is kind of the whole woke thing. So, I mean, this is going to be very desirable for the left. And this is why, I mean, look, they're already doing this.
Starting point is 00:58:00 It's not like some prediction on my part. Basically, after Sam Bankman-Fried did what he did with FTX and got sent to jail. He was like a big effective altruist, and he had made pandemics their big cause. They needed a new cause, and they got behind this idea of x-risk,
Starting point is 00:58:18 it's existential risk. The idea being if there's like a 1% chance of AI ending the world, then we should drop everything and just focus on that because you do the expected value calculation. And so if it ends humanity, then that's the only thing you should focus on, even if it's a very small percentage chance.
Starting point is 00:58:34 But they really reorganized behind this. And, you know, they've got quite a few advocates. And actually, it's an amazing story how much influence they were able to achieve, largely behind the scenes or in the shadows, during the Biden years. They basically convinced all of the major Biden staffers of this view, of this idea that imminent superintelligence is coming, we should be really afraid of it, we need to consolidate control over it. There should only be, you know, ideally two or three companies that have it. We don't want anyone in the rest of the world to get it.
Starting point is 00:59:06 And then, you know, what they said is, once we make sure that there's only two or three American companies, we'll solve the coordination problems among those companies. That's what they consider to be, you know, the free market. We'll solve those coordination problems and we'll be able to control this whole thing and prevent the genie from escaping the bottle. I think it was this totally paranoid version of what would happen. And it's already in the process of being refuted. But this vision is fundamentally what animated the Biden executive order on AI, and what animated the Biden diffusion rule.
Starting point is 00:59:42 Mark, I mean, you've talked about how you were in a meeting with Biden folks and they were going to basically ban open source, and they were basically going to anoint two or three winners, and that was it. Yeah, they told us that explicitly. And yeah, they told us exactly what you just said. They told us they're going to ban open source.
Starting point is 00:59:59 And when we challenged them on the ability to ban open source, because, you know, we're talking about mathematical algorithms that are taught in textbooks and YouTube videos and universities, they said, well, during the Cold War, we banned entire areas of physics and put them off limits, and we'll do the same thing for math if we have to. Yeah, and that was the plan.
Starting point is 01:00:22 And you'll be happy to know that the guy who actually said that is now an Anthropic employee. No, that's exactly right. And I mean, literally the minute the Biden administration was over, all the top Biden AI employees went to go work at Anthropic, which tells you who they were working with during the Biden years. Yeah. But no, I mean, this was very much the narrative. You sort of had this imminent superintelligence. And then, you know, one of the frames you heard was that AI is like nuclear weapons and GPUs are like uranium or plutonium or something.
Starting point is 01:00:57 and therefore we need like the proper way to regulate this is with like an international atomic energy commission and so you know again everything would be sort of centralized and controlled and they would anoint two or three winners and you know now this I think this narrative really started to fall apart
Starting point is 01:01:18 with the launch of DeepSeek, which happened in the first, I don't know, couple of weeks of the Trump administration. Because if you asked any of these people what they thought of China during this time when they were pushing all these regulations, and specifically,
Starting point is 01:01:35 well, wait, if we shoot ourselves in the foot by over-regulating AI, won't China just win the AI race? If you were to ask them that, what they would have said, and did say, is that China's so far behind us, it doesn't matter. And furthermore, and this was said completely without evidence, that if we basically slow down
Starting point is 01:01:51 to impose all these supposedly healthy regulations, well, China will just copy us and do the same thing. I think it was an absurdly naive view. I think that if we shoot ourselves in the foot, China will just be like, thank you very much. We'll just take leadership in this technology. Why wouldn't we?
Starting point is 01:02:08 But this is what they said. And, you know, when the Biden executive order on AI was crafted, there was no discussion whatsoever of the China competition. You know, it was just assumed, again, that we were so far ahead that we could basically do anything to our companies and it wouldn't really affect our competitiveness. And I think that narrative really started to fall apart with DeepSeek at the model level.
Starting point is 01:02:37 Back in April, Huawei launched a technology called CloudMatrix, in which they compensated for the fact that their chips individually are not as good as Nvidia's chips by networking more of them together. So they took 384 of them. They used their prowess in networking to create this rack system, CloudMatrix. And it was demonstrated to show that, you know,
Starting point is 01:03:00 yes, Nvidia chips are better. They're much more power efficient. But at the rack level, at the system level, you know, Huawei could get the job done with these, you know, ascend chips and Cloud Matrix. And so again, I think that showed that, you know, we're not the only game in town on chips, which means that if we don't sell our chips to, you know,
Starting point is 01:03:18 So I think it's just been kind of one revelation after another, where we've learned that a lot of their preconceptions and beliefs were wrong. And we've talked about the fact that the markets ended up being much more decentralized than they ever could have predicted. And I'd also say one other thing: they also believed that there would be imminent catastrophes that haven't happened. So this is kind of like the equivalent of the global warming thing where
Starting point is 01:03:49 we're all supposed to be underwater by now. They were saying that models trained with, I think, 10^25 FLOPs or whatever were way too risky. Well, every single model now at the frontier is trained at that level of compute. And so they would have banned us from even being at the place we're at today if we had listened to these people back in 2023.
Starting point is 01:04:11 So just a couple of years ago. So that's like really important to keep in mind that their predictions of imminent catastrophe have already been refuted. And so things are moving in a direction that I think are very different than what they thought in, you know, let's call it the first year
Starting point is 01:04:25 after the launch of ChatGPT. Great. So, David, just to come back real quick while we still have you, on crypto. The administration, and I think the country, had a significant victory earlier this year with the president signing the stablecoin bill into law, which was the GENIUS Act. And I'll just tell you, what we see is that the positive consequences of that law have been even bigger than we thought. And I would say that's both for the stablecoin industry, and you now see actually a lot of financial institutions of all kinds embracing stablecoins in a way that they weren't before. And, you know, the phenomenon is spreading, with America, by the way,
Starting point is 01:05:02 being in the lead and doing very well there. But also more broadly, just as a signal to the crypto industry that this really is a new day, and there really are going to be regulatory frameworks that make these things possible, that are responsible, but that also make it possible for this industry to really flourish in the U.S. As you know, there is a second piece of legislation being constructed right now, which is the market structure bill called the
Starting point is 01:05:26 Clarity Act, which is sort of phase two of the legislative agenda. And I wondered maybe if you could just tell us a little bit about your view of the importance of that bill and then, you know, kind of how do you think that process is going? I think it's extremely important. So as you mentioned, we passed the Genius Act a few months ago, but that was just for stable coins. Stable coins are about 6% of the total market cap in terms of tokens. So 94% are all the other types of tokens.
Starting point is 01:05:53 And the CLARITY Act would apply to all of that and provide the regulatory framework for all those other crypto projects and companies. Currently we have a great SEC chairman, Paul Atkins, and if we could be sure that Paul Atkins, or a person like Paul Atkins, would always be at the SEC forever, then we wouldn't necessarily need legislation, because they're already in the process of implementing much better rules and providing regulatory clarity. But the truth is that we don't know for sure. And if you're a founder who's trying to make a decision now about where you're going to build your company, you want to have certainty 10 years out, 20 years out. We want to encourage long-term
Starting point is 01:06:31 projects. And so, again, I think it's very important to canonize the rules that first provide the clarity and then to make sure there's enough stability around them and sort of canonize those rules in legislation. That's the only way that you provide that long-term stability. I think that we will get the Clarity Act done. Like you mentioned, it passed the House with about 300 votes, so about 78 Democrats. So it was substantially bipartisan. I think it will ultimately it's now going through the Senate. I think it will ultimately get done. We're negotiating with a dozen or so Democrats. We have to get to 60 votes. So that's the hard part is under the filibuster. We got to get 60. So, but we're negotiating with about a dozen Democrats. And I do think that we
Starting point is 01:07:16 will ultimately get to that number. By the way, we ended up having 68 votes in the Senate for GENIUS, including 18 Democrats. So I do think that even if we just get, you know, two-thirds of the number of Democrats that we got for GENIUS, then we'll be fine on CLARITY. But, you know, this will provide the regulatory framework, again, for all the other tokens besides stablecoins. And I think it's just a critical piece of legislation. And yeah, this would ultimately, I think, kind of complete the crypto agenda, where we've moved from Biden's war on crypto to Trump's crypto capital of the planet.
Starting point is 01:07:57 And then, you know, I think the industry will have the stability it needs and can just focus on innovating. There'll be, you know, rule updates and things like that, but, you know, we'll fundamentally have the foundation for the industry in place. On the GENIUS Act, you know, President Trump really
Starting point is 01:08:13 made that bill possible. I mean, first of all, it was his election that completely shifted the conversation on crypto. If a different result had been reached, we would still have the same sort of figure at the SEC, the founders would still be getting prosecuted, we wouldn't know what the rules are.
Starting point is 01:08:29 Elizabeth Warren would be calling the shots. So President Trump's election made everything possible, and it's his commitment to the industry, and his commitment to keeping the promises he made during the election, that's made all of this possible. But also, I mean, he got directly involved in making sure the GENIUS Act passed. The legislation was declared dead many times.
Starting point is 01:08:48 I saw it with my own eyes that he was able to persuade recalcitrant votes, twist arms, cajole, and charm, and he ultimately got it done. And I think that CLARITY will have a similar result.
Starting point is 01:09:06 people are always prematurely declaring these things to be dead or whatever there are a lot of twists and turns the legislative process it's definitely true that you don't want to see the sausage getting made but anyway I think we're on a good right now. Good. Fantastic. Great. Pete Buttigieg went on all in recently and you guys talked about
Starting point is 01:09:25 the left's identity crisis and he's hoping for a more moderate, you know, center left to emerge at the same time we see Mamdani in New York. I'm curious what you think of, where are you seeing in terms of what is the future in terms of for the Democratic Party in terms of it? Is there a more moderate presence or is it kind of this Mamdani style, you know, woke populism? I mean, it certainly seems to me that Mamdani and I don't know like the woke socialism seems to be the future of the party I mean that's where all the energy is and their base I mean I don't want that to be the case I'd rather have a rational Democrat party but but that seems to be where their base is where the energy is and you don't really hear Democrats within the party trying to self-police
Starting point is 01:10:10 and distance themselves from that. I mean, all the major figures in the Democrat Party have endorsed Mamdani. So yeah, I mean, that's where that party seems to be headed. I think that partly it's where their base is at. I think partly it might be a misread, or it could be kind of a partial reaction to Trump, where they feel like, you know, establishment politics has kind of failed,
Starting point is 01:10:41 and so they need a populism of the left to compete with a populism of the right. And so I think that's maybe part of the calculation for why they're going this direction. But I don't, you know, fundamentally I don't think it works. I don't think socialism works. I don't think the, you know,
Starting point is 01:10:55 defund the police, empty all the jails policies work. So, you know, I think we're about to get another, you know, case, a teaching moment in New York. Unfortunately, it's not going to be good for the city. But, you know, we've seen this movie before. But, but yeah, that's where, I mean, it does appear, Party is. I don't completely get it. I mean, other people have made this observation,
Starting point is 01:11:21 but they do seem to be on the 20% side of every 80-20 issue. You know, opening the border, you know, the soft on crime stuff, you know, releasing all the repeat offenders. And just sort of this, you know, anti-capitalist approach, you know, which I think will be disasters for the economy. I mean, this, but this is kind of where the party's out right now. It is, it's a little scary because it does mean that if we lose elections in places where we do lose elections, it's like, you know, you could end up with something really horrible, not just like, you know, we're not just playing in the 40-yard lines anymore in American politics. And that, that is a little bit scary. Yeah. And I do think that, you know, if it weren't for Donald Trump, I think in a way
Starting point is 01:12:12 we might already be there. You know, but we have to make sure that the Trump revolution continues. Lastly, we were just talking about New York. Recently on an episode of All-In, you endorsed bringing the National Guard into San Francisco. You know, Benioff had his comments, and he sort of went back and forth on those comments. Speaking of teaching moments, I'm curious if you see San Francisco as saveable in some sense, and what needs to be true to get there, if so.
Starting point is 01:12:48 Well, Daniel Lurie is the best mayor we've had in decades. So I think he's doing a very good job within the constraints that San Francisco presents. So the mayor job, unfortunately, we have a weak mayor in San Francisco. I don't mean him. I just mean like the way it's all set up. The Board of Supervisors has, you know, a ton of power.
Starting point is 01:13:08 And over time, they've been able to kind of transfer power from the mayor to themselves. And then, of course, you've got all these left-wing judges. I mean, it's just amazing to me that there's a case right now. This is a case that galvanized me several years ago, the case of Troy McAllister, who was a repeat offender who killed two people on New Year's Eve, I think it was 2020. And he was arrested four times in the year before he ended up killing
Starting point is 01:13:42 these two people, and he had a very, very long criminal history. He had committed armed robbery before, stolen many cars. And he should have been in jail. He should not have been released, but he was basically released thanks to the zero bail policies of Chesa Boudin, who was then the district attorney who he got recalled. There was a huge outcry. I mean, even in San Francisco for there to be a recall of a policy, I mean, you've got to be like seriously left wing to basically alienate San Francisco. And Chesa Boudin managed to be so far, out there that he alienated even San Francisco. And yet, I don't know why Tori McAllister isn't sentenced already in jail for 20 years plus,
Starting point is 01:14:21 but his case is still pending through the courts, never ending, and there's a left-wing judge who's considering just giving him diversion. Basically means you just get released, maybe with an ankle bracelet or something. That's insane. So, I mean, that's what we're dealing with in San Francisco. I mean, like crazy left-wing judges who want to release all the criminals. and, you know, and so I just wonder, like, is Daniel up against too many constraints, and therefore, I know he doesn't want the president to send in the National Guard, but maybe ultimately it would be helpful. But in any event, I think the president has agreed to kind of hold off on that out of, you know, Daniel had a good conversation with the president and asked him to hold back.
Starting point is 01:15:03 And anyway, the president agreed and is giving him time to implement his solutions. And look, if Daniel and his team can keep making progress and fix the problems without the National Guard having to come in, then so much the better. We'll just see, and I know he wants to. And like I said, he's the best mayor we've had in decades. It's just a question of whether he'll be too constrained by the other powers that be in the city. David, thank you so much for coming on the podcast. Yeah, good to see you guys. Fantastic. Thank you. David, this was great. And thank you for the work. We, as much as anybody, appreciate the work that you've done to fix the things in the past and put us on a great road to the future. Well, thanks. I appreciate what you guys have done
Starting point is 01:15:47 as well. So thank you for your support, everything you're doing. So, yeah, appreciate it. Definitely. Thanks for listening to this episode of the A16Z podcast. If you like this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcast, and Spotify.
Starting point is 01:16:10 Follow us on X, A16Z, and subscribe to our Substack at A16Z.com. Thanks again for listening, and I'll see you in the next episode. as a reminder the content here is for informational purposes only should not be taken as legal business tax or investment advice or be used to evaluate any investment or security and is not directed at any investors or potential investors in any a16z fund please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast for more details including a link to our investments please see a16z.com forward slash disclosures Thank you.
