Moonshots with Peter Diamandis - Ex-Google CEO Breaks Down the US vs. China AI Race & How We Avoid a Global Crisis w/ Dr. Eric Schmidt & Dave Blundin | EP #207

Episode Date: November 11, 2025

This episode was recorded at https://www.imaginationinaction.co/. Get access to metatrends 10+ years before anyone else: https://qr.diamandis.com/metatrends

Eric Schmidt is the former CEO of Google and Chair and CEO of Relativity Space. Dave Blundin is the founder & GP of Link Ventures.

Connect with Peter: X, Instagram. Connect with Eric: X, LinkedIn, his latest book. Connect with Dave: X, LinkedIn. Listen to MOONSHOTS: Apple, YouTube.

*Recorded on November 7th, 2025. *The views expressed by me and all guests are personal opinions and do not constitute financial, medical, or legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Is America gonna win the AI race? There are three obvious threats right now. Is China winning the global race to develop artificial intelligence technology? So how far ahead of China do you think we are? Overall, I would say... China put in 172 gigawatts of solar last year, I think is the number. It's remarkable. We needed in our calculation by 2030, 92 gigawatts to be built.
Starting point is 00:00:28 A big nuclear power plant is somewhere between one and one and a half gigawatts. The country needs more energy. And if we don't get more energy, we're not going to be able to fully exploit the lead we have in AI and AGI. It's very clear. It's probably the case that... Now that's the moonshot, ladies and gentlemen. Everybody, welcome to part two of Eric Schmidt Week on moonshots. In this episode, my moonshot mate.
Starting point is 00:00:58 Dave Blundon is interviewing Eric Schmidt about U.S. versus China and how to avoid crisis during this period of hyper exponential growth in AI. Heads up, this audio recording from Eric is a little bit choppy. He was on Wi-Fi from his hotel room, but guarantee you the content is valuable. So please listen in. And also, this was recorded about a month ago. It took us a while to get the footage out to all of you. All right, let's jump in this episode with Dave Blundon and Eric Schmidt. So what don't I start you with, is America going to win the AI race?
Starting point is 00:01:37 Because I know that's a topic that you've spoken on quite a bit. It's also addressed a little bit in your new book, Genesis, AI, hope, and the human spirit, which hopefully everybody will read with Henry Kissinger, actually, as a co-author. So are we going to win the AI race? And what are the scenarios where we win and lose? He didn't look like we will. And let me find, like, so I think the San Francisco consensus, which is I call it, is what people in San Francisco believe,
Starting point is 00:02:07 which turned to be here, is that you believe to be built from current objectic computing to various forms of pervasive self-improvement to eventual AGI and superintelligence. In order to do that, requires a dewormous amount of hardware. Google TPUs, the big A ship, and silver, and everybody in the audience knows that. It sure looks like the hardware restrictions that the Trump and Biden and the stations that put on China are going to prevent them from competing at that space.
Starting point is 00:02:45 I would agree with Kali in Shanghai for a few days. I have the relationship with the Chinese. And my conclusion is they're fighting a different thing. They're regular about AI in every product, every service, everything, but in a more classical way, where the market has been a secret for AGI. I was quite worried that we would end up in a super-intelligent phrase where you would end up with such enormous gains that one side would have to actually attack the other one. Now there's a preemptory attack. It looks like that fear from me was not well-granted in facts. I think we're going to be okay for a few years. really what about robotics we seem to be pretty far behind in that front yeah if if it's okay for me to be
Starting point is 00:03:31 completely blunt um the chinese are doing the same thing in robotics that they have done in electronic vehicles um in case you're confused while the current government in the u.s gets rid of solar and wind subsidies and promos china put in 172 gigawatts of solar last year i think is the number it's remarkable so the the chinese race around solar and evis electronic vehicles of one kind another looks like they've won they're using all of those technologies in particular new ways of building stepper motors and other very inexpensive very powerful you know physical thing good example is unitary just launched the r1 for six thousand dollars uh available in december i'm i've ordered one we'll see how good it is
Starting point is 00:04:24 But the arrival of humanoid robots is likely to be dominated by China. I'm not suggesting there won't be areas where the U.S. will play. The U.S. will still have spaces of very high-end, very sophisticated stuff. Our software is so much better than the Chinese software. But at the hardware level, I think you should assume that the world will be awash and inexpensive Chinese robots. In the same sense that there'll be a wash and inexpensive Chinese electric vehicles. Who knew? And so this is going to be my last question, but let me bump it to the front, since you just beautifully segwayed into it.
Starting point is 00:05:00 In the race for AGI, but then also in parallel robotics, China is going to be miles ahead in electricity production. And in the very short term, we're all going to be chip constrained, TPU constraint. But if you look three or four years out, the fabs are running at full throttle, the chips are coming out by the millions. Then you're suddenly electricity constrained. is there a vulnerability for America there? It's a huge issue. So, again, let's use China versus the U.S. as a metaphor. What are China's strengths?
Starting point is 00:05:32 And I'm not praising China. I'm just trying to report it. They have solved their electric power problem. They also have full control over social media, so they don't have the kind of problems that people here complain about. They have enormously talented software people, and they don't have enough hardware.
Starting point is 00:05:50 I think that's roughly where they are. They're also incredible capitalists. They call it Chinese socialism with Chinese characteristics. But trust me, it's pure raw capitalism. Let's call it what it is. In the U.S., we have the many benefits everybody understands. We do not have enough electricity, and at least in our consumer stuff, the Chinese are likely to beat us.
Starting point is 00:06:19 our hardware architectures are fantastic. And I'm including the Amazon ship, obviously the invented ships, TPUV-5, which I'm happy to say I was part of TPU version 1. All of that stuff is incredible. So if you think about it, and I testified in Congress a month or two ago in this, we looked at the amount of electricity required in the United States to power the expected demand. of data centers, and we needed, in our calculation,
Starting point is 00:06:53 by 2030, 92 gigawatts to be built. And for reference, a big nuclear power plant is somewhere between one and one and a half gigawatts. To give you a sense of how many nuclear power plants are getting started in America, effectively zero. So we had hoped, and I've hoped in my lobbying and testifying, that the government would fast track availability of all kinds of electricity.
Starting point is 00:07:22 And indeed, they have promoted oil and gas, but they've also hobbled solar and wind to a terrible degree, which is an error. The country needs more energy. And if we don't get more energy, we're not going to be able to fully exploit the lead we have in AI and AGI. It's very clear. And by the way, the obvious next question is, what do we do?
Starting point is 00:07:46 Well, there are scenarios. For example, the president went to, Saudi and the UAE and did huge deals for multiple gigawatts. And so we might find ourselves in a situation where our training for our most important thing, the thing which are the essence of America, which is American intelligence, is actually being developed in kingdoms. And that may be the only fallback we have. That is really weird.
Starting point is 00:08:08 And that begs a very difficult question. Do you mind if I ask you a tough one? Go ahead. I've been kind of inspired by you for most of my adult life. life. And I really have been watching your podcast where you're talking about, look, remember when you guest lectured Eric Brunielsen's class over at Stanford, actually, you said when you were running Google, you felt like you made decisions three times faster than any company
Starting point is 00:08:34 on the planet, but then when you got into the federal government, you felt like the decision making was one-third as fast as even a slow company. So from your point of view, it's like, you know, a tiny fraction of the pace that we need to move. But you've been saying recently that, you know, one of the things that's... inevitable is some kind of an AI disaster, and we're hoping that it's like a hundred people that die, and not a thousand or ten thousand or a million, or even a hundred million. But, you know, suppose that it does play out that there's a catastrophe that's a wake-up call.
Starting point is 00:09:06 What do you want to do with that wake-up call? What's the next move for Eric Schmidt after the wake-up call? Well, so the background here is that Dr. Kissinger, Henry, and I spent an awful lot of time talking about the period in the 1950s where he was a key component of all of these things and the so so what he did was he used the fact that we had used the nuclear bomb to negotiate over about a 15 year period a set of treaties that restricted nuclear proliferation those treaties when they were when they were negotiated have allowed us to be alive today. So these were centrally important. Without controlling the spread of uranium and the other secrets, we would all be toast, literally, because of crazy people and
Starting point is 00:09:59 so forth. Is there an analogous set of things that we can do? The problem here is, or I guess the good news is, we're not in a war, we haven't had a nuclear bomb, we don't have that thing to discuss. So we can talk about it, but governments tend to act reactively. So let me talk There are three obvious threats right now, which I think are fairly well understood. The first is misinformation, and the software that we're all collectively giving people allows for all sorts of misinformation, fake videos, fake news, what have you. We all understand this. It's all open source.
Starting point is 00:10:34 That's done. That's a threat to democracies, and maybe to dictatorships, and certainly to democracies. The second one is cyber. And I think one way to understand cyber is that if you can write code, you can also right cyber attacks. It's the same logic. And you have these incredible gains in software. It's frightening how good these, remember my career is as a programmer. These things program better than I ever did. It's like shocking, right? And then the third one is bio. And I think most people believe that one of those three will create some kind of mini-crisis that will then
Starting point is 00:11:10 cause the governments to say, hang on, let's have a conversation about how to really deal with the downsize. The upsides are incredible, right? And I want America to win, and I want us to run as fast as we can, and we're indeed doing that with the Trump administration, which is great, right? But we have to be aware that these things are possible. The one I'm particularly worried about is biological, and it goes something like this. You take some existing pathogen and using biological techniques, which I won't discuss, you can modify it enough that it cannot be detected, but it's still quite dangerous. That's an example of a threat. There are many others. Yeah, that's also the easiest, I think, which is scary of the immediate, you know, CBR and a threat, so cyber, biological,
Starting point is 00:11:54 radiological, nuclear. Biological is the one you can kind of do in a basement with three people, but it's also the one that's hardest to contain. So anyway, I hope the theory is right. So then how do you deal with proliferation? You know, the Biden approach was, okay, let's contain AGI to these five companies, you know, Google being one of them. And then let's say any model with over 1E26 training flops has to register with the federal government, and then we'll keep it all contained. So that all got scrapped immediately, you know, after the election
Starting point is 00:12:27 and got replaced by the new David Sachs document, which the David Sachs document is much more about how do we move as quickly as possible and win the race. But it doesn't really address proliferation. And obviously in America, we want startups, and researchers to have incredible access to technology and compute. On the other hand, you know, the three people in a basement making a biological weapon is a real scary thing. So how do you balance those?
Starting point is 00:12:55 It turns out if you read the David Sachs President Trump announcement, they're very clear that they want to continue to study the security aspects of AI, especially in a geopolitical way, and that's code for China versus the US. And so they continue in their proposal to follow fund things involving nation state attacks and so forth and so on and i fully support that kind of stuff um there's probably a difference on the misinformation social media stuff between the two but for example the biden rule was simply that you had to report if you're doing a training run 10 to the 26 or greater i was part of the group that made that number up uh we made it up because we had
Starting point is 00:13:35 no better number or i'm not suggesting it's the right number and the the conclusion you come to is it's probably the case that we know our government, I don't know this, but I'm guessing, knows where the training runs are going on in China because of espionage. And it's probably the case that the Chinese have espionage on the U.S. knowing where our training runs are. So I'm not sure the nature of the training runs is a secret. And frankly, everybody knows where the data centers are because they're immense. So we will see. Another attack on the 1026 is that the training is getting more cost efficient.
Starting point is 00:14:19 If you look at the moving, they move from something called FP16 to FP8. That means eight precision floating point. People are now moving to four-bit floating point, which is bizarre. It turns out these training algorithms seem to be quite tolerant for floating point in precision, which is a shock to me. So again, we're getting more efficient in training. This creates more of a proliferation problem. If I can say in general the proliferation issue,
Starting point is 00:14:46 I'm not worried about big companies in big countries because I can count them. You know, there'll be 10 huge data centers, 10 huge training runs around the world. I'm much more worried about the open source groups which can operate in the shadows. They don't have to solve every problem. They just have to do one thing well.
Starting point is 00:15:05 They can pass together the open source, which is generally available, and it's good enough. If you look at the quality of DeepSeek R1 and now R2 coming, sure looks like it's in 80 or 90% of these top closed models. The closed models people, which obviously includes Google, get very defensive over this because they say, look, those are kind of synthetic. They have diffused the models.
Starting point is 00:15:29 They've trained our best information. All of that is true. They're correct. But they're nevertheless useful for specific things, and they could be used in a proliferation. issue to do various forms of cyber and biological attacks. Well, yeah, the current rule of thumb is that distillation and transfer learning is about 1% of the cost of the original training but gets you to the same destination.
Starting point is 00:15:52 So if you take, you know, FP4, which you mentioned, you know, we get an 8X performance boost over FP32, so you got an 8X on the, you know, the weights, and then you've got another 100x on the transfer learning. And it's really hard to draw an analogy to nuclear or chemical weapons in the past. because you couldn't kind of shrink them a thousand X under the covers and get the same result. But AI is really weird that way. It's very compressible, very fluid. Well, again, we will see. It does not look like we have a very good solution for distillation.
Starting point is 00:16:28 It looks like an opponent of a company can mask their queries to look like normal sets of users and then distill the models. One of my friends has thought about this a lot, and he thinks that the eventual state in the United States is that the biggest models will never be released, and that the companies will distill their own models down for that reason. That's his opinion. We'll see if that's true. And this produces a bizarre outcome where the biggest models in the United States are closed source, and the biggest models in China are open source. And the geopolitical issue there, of course, is that open source is free, and the closed source models are not free.
Starting point is 00:17:16 And so the vast majority of governments and countries who don't have the kind of money that the West does and so forth will end up standardizing on Chinese models, not because they're better, but because they're free. And so we'll see if that's true or not, but I do worry about that. Well, this whole topic of distillation and proliferation is a good segue into a much happier topic, which I really want to use, want to use the remaining time on. I cannot tell you how inspiring you are to the founders around Cambridge, MIT, Harvard, Northeastern, where all of our talent comes from.
Starting point is 00:17:45 In my perfect world, you would podcast or say something or publish something every single day, and then people could turn off CNBC and just tune into what you say. Because, well, I mean, aside from it being brilliant, you have access to knowledge that isn't in anyone else's brain, as far as I can tell. You've got this really, really unique combination of perspectives, and it's incredibly valuable. So, you know, using distillation and as a starting point, what founder advice would you give? You know, we're seeing numbers like they're completely unprecedented at ages of founders that are also unprecedented. I mean, you saw Sergey and Larry when they were very, very young, but now they're even younger.
Starting point is 00:18:24 We're talking 18, 19 years old now. So what advice would you give them? So I think the most important thing to say is that the barrier to entry to starting a company is, effectively zero now. So let's think about it. What do you need to have a company? You can register it online. You need some money to get started. You have to pay yourself. You don't really need any programmers. You need a couple people to want to pay Google or Claude or what have you to write the code for you. So you're pretty good there. You can use third-party logistics companies if you're building a hardware device and you can use
Starting point is 00:19:05 essentially contract manufacturers to build whatever hardware vice. So it looks to me like for hardware and software, the barrier to entry is almost zero. That sounds great until you realize that as a result, you're now competing against everyone all the time. So as an example, in my career, which has spanned more than, I guess, 55 years now doing this stuff, the key thing that's true is the compression of time.
Starting point is 00:19:32 The other thing I would say to founders is it's really, important that everything you do be learned and not specified. I'm doing a couple of startups on my own. We'll see how well they do. But with them, I say, I know nothing, learn everything. So you can learn how to support your customers. You can learn what the customer wants. You can learn how to, and learning meaning in the AI sense of learning, learning it as part of either supervised or unsupervised training. And if you take a learning approach, then you build a system that if it works it will explode because once the learning accelerates you get into a quasi-monopoly position
Starting point is 00:20:13 so the philosophy of winning goes something like this run as fast you can get there as quickly as you can build it around learning and if it takes off you'll be a hero because once it learns it learns how to become stronger and eventually two or three years you'll start to have various forms of reinforcement learning which are self-replicating. And so then you're likely to get accelerations further. That's the most likely path for the next trillion-dollar company
Starting point is 00:20:42 past the equivalence of anthropic, opening eye, you know, et cetera. And so when you're looking at an investment yourself, by the way, the learning loops concept that Eric just mentioned on the Moonshots podcast that we did with Eric, which you can find online, he actually described in detail the three or four different types of learning loops that he looks for. So it's definitely worth, you know, it's a lot more than we have time for right now, but it's really, really brilliant. So aside from the actual business plan and the learning loops, what do you look for in founder dynamics, founder team members?
Starting point is 00:21:18 It's always the case that the founders are really, really smart. They're very, very quick, and they're very interesting to talk to. It's also true that you need them to be able to hire a network of people like them. And so a simple way is if you talk to them and they seem really interesting and really dynamic and you find that they're in a network of such people, you're likely have winners. And then what you do is you say to them,
Starting point is 00:21:43 don't tell me the product, which of course what they want to talk about, show me how you're gonna build a system that goes from zero to infinity. There's a lot of discussion about zero to one, and there it's complete compression, get that thing done. But once you have one, how are you going to scale? Our industry,
Starting point is 00:22:02 makes enormous wealth for the founders when you build a platform that is scaling that's the lesson what is the lesson to offer build a platform that scales if you can't and a platform is defined as something which others depend on that you provide right and the stronger the platform the more networked is the more interconnected it is the stronger the network lock-in that's just generally true it was true from microsoft it was true for what i did 20 years ago it's true today and i think you If you look, I'll give you an example. Everyone's looking at where are the economics for the large LLM companies. They don't have a strong enough network lock-in yet, but you could easily imagine that they would develop it. And I'm sure they don't tell me what they're doing, but I'm sure that they have that in the back of their mind.
Starting point is 00:22:48 That's part of the reason why their valuations are so hot. So we have one minute remaining, and I really want to use it to milk you for a quote that we can loop on our wall in the office. And it would, the perfect quote to me, would be about the importance of this moment in time. And so many of the founders, they weren't around when Microsoft could, you know, Bill Gates could wake up in the morning and decide, hey, I'm going to destroy Word Perfect today.
Starting point is 00:23:15 And I'm going to destroy Lotus 1,2, 3 today. Because they just had that power at that time. And Novell was certainly in that crosshairs, too. Then there was this magical moment of the Internet explosion where companies like Google and many, many others could thrive. But then after that, you know, things got static again. And now we're in the most explosive time period for entrepreneurs that I've ever experienced in my life. But very few people remember all the way back to the Internet explosion era.
Starting point is 00:23:49 So I'd love to get your quote or your thoughts around, what is the importance of this moment in time? So I firmly believe that the arrival of non-human intelligence, AI intelligence, is at the level of electricity or the invention of fire, transportation, etc., in human history. We are fortunate to be living in a time of great historical consequence. The next 10 years are probably the 10 years that will have a greater determination over the next 100 years than anything before because of the invention. of these new tools. And the tools are very, very powerful. And remember, they're powerful because they can equal, and in some cases, surpass human intelligence.
Starting point is 00:24:35 And human intelligence is everything for a society. And so the countries and companies that embrace this non-human intelligence correctly and aggressively will be the big winners. And the companies and countries that are slow or let other people do it, they will lose because the source of excellence, The source of excellence, the source of leadership, the source of growth, the source of everything, the source of innovation, everything.
Starting point is 00:25:00 Economic growth comes from the application of intelligence to discover new things to solve new problems. We are on a huge course to accelerate that here in America, and I'm very proud to be part of it. Thank you.
