Moonshots with Peter Diamandis - How Life Changes When We Reach Artificial Superintelligence w/ Dr. Fei-Fei Li & Dr. Eric Schmidt | EP #206

Episode Date: November 7, 2025

This episode was recorded at https://www.imaginationinaction.co/ Get access to metatrends 10+ years before anyone else - https://qr.diamandis.com/metatrends   Eric Schmidt is the former CEO of... Google; Chair and CEO of Relativity Space. Fei-Fei Li is an AI researcher & professor at Stanford University; Co-director at Stanford Human-Centered AI Institute. _ Connect with Peter: X Instagram Connect with Eric:  X Linkedin  His latest book Connect with Fei-Fei Li X Linkedin Her latest book Listen to MOONSHOTS: Apple YouTube – *Recorded on October 27th, 2025 *The views expressed by me and all guests are personal opinions and do not constitute Financial, Medical, or Legal advice. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 What does superintelligence mean and when is it likely to be here? Eric, we've talked about this. What are your thoughts? Superintelligence is defined as AI's ability to know from chemistry to biology to sports. It's already superhuman in many ways. But the collaboration between humans and AI will be the most productive and fruitful way of doing things. All of the evidence, and Fei-Fei said this very well, is going to be human and computer interaction.
Starting point is 00:00:35 I think that to get to real superintelligence, we probably need another algorithmic breakthrough. It's so important as we talk about AGI and ASI, that the most important thing that we keep in mind is... Now that's the moonshot, ladies and gentlemen. Everybody, welcome to Eric Schmidt Week. Two episodes this week, both with Eric, talking about some of the most important developments. The first is with me and Eric and Fei-Fei Li, who's building world models. This is a conversation we had live at FII in Saudi Arabia talking about digital superintelligence. This is rare because Fei-Fei doesn't travel.
Starting point is 00:01:20 It was her first trip to Saudi, and it was an extraordinary conversation. The second episode is between my moonshot mate, Dave Blundin, and Eric Schmidt, talking about U.S. versus China and how to avoid, you know, critical breakdowns and what might be catastrophes in the future due to accelerating AI. All right, get ready for two incredible episodes. The first one, myself, Eric, and Fei-Fei Li on digital superintelligence. All right, let's jump in. We're going to be discussing superintelligence. What does that mean and what happens when it arrives? We've been talking about AI, AGI, now perhaps digital superintelligence or ASI.
Starting point is 00:02:07 I want to start with the obvious question, and it's one that I don't think anybody has a perfect answer for, but what does superintelligence mean and when is it likely to be here? Eric, we've talked about this. What are your thoughts? So, simply, thank you, Peter, and thanks to everybody for being here, and obviously thanks to Fei-Fei, our very close colleague. The generally accepted definition of general intelligence is human-level intelligence, AGI. And human intelligence, you can understand because we're all human.
Starting point is 00:02:40 You have ideas, you have friends, you have, you know, you think about things, you're creative. Superintelligence is defined as the intelligence equal to the sum of everyone, right? Or even better than all humans. And there is a belief in our industry that we will get to superintelligence. We don't know exactly how long. There's a group of people who I call the San Francisco consensus, because they're all living in San Francisco. Maybe it's the weather or the drugs or something.
Starting point is 00:03:12 But they all basically think that it's within three to four years. I personally think it'll be longer than that. But fundamentally, their argument is that there are compounding effects that we're seeing now, which will race us to this much faster than people think. And Fei-Fei, I don't think anybody's expected the performance that AI has given us so far. The scaling laws have given us capabilities that are extraordinary. You know, you're the CEO of a new company, the founder of World Labs.
Starting point is 00:03:43 You've been at Stanford working on this. How do you think about superintelligence? Do you discuss superintelligence at all in your work? Yeah, that's a great question, Peter. And, you know, when Alan Turing dared humanity with the question of, can we create thinking machines, he was thinking about the fundamental question of intelligence. So the birth of AI is about intelligence,
Starting point is 00:04:11 is about the profound general ability of what intelligence means. So from that point of view, AI is already born as a field that tries to push the boundary of what intelligence means. Now, fast-forward 75 years after Alan Turing, and this phrase, superintelligence, is pretty hot in Silicon Valley. And I do agree with Eric that the colloquial definition is the capability of AI and computers
Starting point is 00:04:47 that's better than any human. But I do think we need to be a little careful. First of all, some part of today's AI is already better than any human. For example, AI's ability to speak many different languages, translating between, you know, dozens and dozens of languages, pretty much no human can do that. Or AI's ability to calculate things really fast. AI's ability to know from chemistry to biology to sports, you know, the vast amount of knowledge. So it's already superhuman in many ways.
Starting point is 00:05:33 But it remains a question: can AI ever be Newton? Can AI ever be Einstein? Can AI ever be Picasso? I actually don't know. For example, we have all the celestial data of the movement of the stars that we observe today. Give that data to any AI algorithm. It will not be able to deduce Newton's laws of motion.
Starting point is 00:06:10 That ability that humans have, it's the combination of creativity, abstraction. I do not see today's AI or tomorrow's AI being able to do that yet. Eric? So one of the common examples, and Fei-Fei, of course, got it right, is to think about if you had all of the knowledge in a computer that existed in 1902, could you invent relativity, basically the physics of today? And the answer today is no. So for example, if you look at what is called
Starting point is 00:06:46 test-time compute, where the systems are doing reasoning, they can't take the reasoning that they learned and feed it back into themselves very quickly. Whereas if you're a mathematician, you prove something, you can base your next proof on that. It's hard for the systems today, although there are approximations. So we don't know where the boundaries are. The example that I'd like to use is: let's imagine that we can get computers that can solve everything that we normally can do as humans, except for this amazing set of creativities. How do really creative people do it? The best examples are that they are experts in one area, they see another area, and they have
Starting point is 00:07:30 an intuition that the same mechanism will solve a problem of a completely different area there. That's an example of something we have to learn how to do with AI. An alternative would be to simply do it in brute force, using reinforcement learning. The problem is that combinatorially, the cost of that is insane, and we're already running out of electricity and so forth.
Starting point is 00:07:53 So I think that to get to real superintelligence, we probably need another algorithmic breakthrough. We need another what? Algorithmic breakthrough. Another way of dealing with this. The technical term is called non-stationarity of objectives. What's happening is the
Starting point is 00:08:09 systems are trained against objectives, but to do this kind of creativity that Fei-Fei is talking about, you need to be able to change the objectives as you're doing them. We've seen this past year, I think, GPT-5 Pro reach an IQ of like 148, which is extraordinary. And of course, there is no ceiling on this. I mean, it loses meaning at some point, but the ability for every human on the planet to have an Einstein level, not on the creativity side, but the intelligence side, in their pocket,
Starting point is 00:08:42 it changes the game for 8 billion humans. And now with Starlink and with $50 smartphones, it's possible that every single person on the planet has this kind of capability. Add to that humanoid robots, add to that, you know, a whole slew of other exponential technologies. And the commentary is we're heading towards a post-scarcity society, right? Do you believe in that vision, Fei-Fei?
Starting point is 00:09:11 I do think we have to be a little careful. I know that we are combining some of the hottest words from Silicon Valley, from AI, superintelligence, humanoid robots, and all that. To be honest, I think robotics has a long way to go. I think we have to be a little bit careful with the projection of robotics. I think the ability, the dexterity of human-level manipulation is, you know, we have to wait a lot longer to get it. So are we entering post-scarcity? I don't know.
Starting point is 00:10:01 I actually am not as bullish as a typical Silicon Valley person. I absolutely believe AI will be augmenting human capabilities in incredibly profound ways. But I think we will continue to see that the collaboration between humans and AI will be the most productive and fruitful way of doing things. So the projection is that AI is going to generate as much as 15 trillion dollars in economic value by 2030, an idea that's shifting the foundation of national wealth from capital and labor to computational intelligence. So what's that implication, Eric, for the global economy?
Starting point is 00:10:52 How are we going to see redistribution, if you would, of wealth or of capabilities? Are we going to see a leveling of the field between nation states or are we going to see runaway winners? So in your abundance hypothesis, which we've talked a lot about, there may be a flaw in the argument, because part of the abundance argument is that it's abundance for everyone. But there's plenty of evidence that these technologies have network effects, which concentrate gains in a small number of winners.
Starting point is 00:11:24 So you could, for example, imagine a small number of countries getting all those benefits. In those countries, you could imagine a small number of firms and people getting those benefits. Those are public policy questions. There's no question the wealth will be created, because the wealth comes from efficiency.
Starting point is 00:11:42 And every company that has implemented AI has seen huge gains. Think about here we are in Saudi Arabia. You have all of this oil distribution, all the oil networks, all the losses. AI can easily improve that by 10%, 20%. Those are huge numbers for this
Starting point is 00:12:04 much lower costs of trials, look at materials, much more efficient and easier to build materials. The companies that adopt AI quickly get a disproportionate return. The question is, are those gains uniform, which would be our hope, or, in my view, more likely, largely centered around early adopters, network effects, well-run countries, and perhaps capital. But you could imagine still that we're going to see autonomous cars in which the ownership of a car is four times, let me put it the other way, being in an autonomous vehicle is four times cheaper than owning a car.
Starting point is 00:12:38 We can see AI giving us the best physicians, the best health care for free, in the same way that Google gave us access to information for free. We will see a massive demonetization in so much of our world. I think that it will be available to anyone with a smartphone and a decent bandwidth connectivity. Is that still not what you think will happen? Do you think there's something that would stop
Starting point is 00:13:04 that level of distribution of those services, which we spend a lot of our money on today? I do think AI democratizes. That I totally agree with you. I think whether it's health care or transportation or knowledge, AI will democratize massively. But I agree with Eric that this increased global productivity does not necessarily translate to shared prosperity.
Starting point is 00:13:32 Shared prosperity is a deeper social problem. It involves policy, it involves, you know, geopolitics, it involves distribution. And that's a different problem from the capability of the technology. So what's your advice to the country leaders that are here that are seeing ASI as a future for someone else and not for themselves? What should they be doing? I mean, there's a speed at which this is deploying. They don't have a lot of time to make critical decisions. Well, it's worth describing where we are now.
Starting point is 00:14:07 In the United States, because of the depth of our capital markets, and because of the extraordinary chips that are available from the Taiwanese manufacturers, TSMC in particular, America has this huge lead in building these what are called hyperscalers. If there's going to be superintelligence, it's going to come from those efforts. That's a big deal. If there is superintelligence,
Starting point is 00:14:29 imagine a company like Google inventing this, for example. I am obviously biased. What's the value of being able to solve every problem that humans can't solve? It's infinite. So that's the goal, right? China is second: it doesn't have the capital markets, doesn't have the chips,
Starting point is 00:14:49 and the other countries are not anywhere near. Saudi has done a good job of partnering with America, and the hyperscalers will be locating here and in the UAE. That's a good strategy. So that's a good example of how you partner. You figure out which side you're on, hopefully it's the United States, and you work with the U.S. firms. I do think countries all should invest: invest in their own human capital, invest in partnerships, and invest in their own technological stack, as well as the business ecosystem. This is, as Eric said, dependent on the strengths and particularities of the different countries,
Starting point is 00:15:30 but I think not investing in AI would be macroscopically the wrong thing to do. So under the thesis that investment involves building out data centers in your nation, do you think every country should be building out a data center that it has sovereign AI running on? Every country is a very sweeping statement. I do think it depends. Obviously, for a region like this, absolutely, where, you know, oil energy is cheaper and it's such an important region in the world. But if we're talking about smaller countries, I don't know if every single country can afford to build data centers. But there are other areas of investment, right?
Starting point is 00:16:25 Let me give an example. Let's pick Europe. It's easy to pick on Europe. Energy costs are high, right? Financing costs are not low. So the odds of Europe being able to build very large data centers is extremely low, but they can partner with countries where they can do it. France, for example, did a partnership with Abu Dhabi. So there are examples of that. So I think if you take a global view and you figure out who your partners are, you have a better chance. The one that I worry a lot about is Africa. And the reason is, how does Africa benefit from this? So there's obviously some benefit of globalization, better crop yields, and so forth, but without stable governments, strong universities, major industrial structures, which Africa,
Starting point is 00:17:10 with some exceptions, lacks, it's going to lag. It's been lagging for years. How do we get ahead of that? I don't think that problem is solved. We've seen incredible progress with AI today effectively beginning what people call solving math. That potentially tips physics, chemistry, biology, and we have the potential, my timeframe is the next five years, others may think longer, to be in a position to solve everything
Starting point is 00:17:42 How do you think about that world in five years, Eric? So, first, I think it's likely to occur. And the reason, technically, is that all of the large language models are essentially doing next word prediction. And if you have a limited vocabulary, which math
Starting point is 00:18:15 is, and software is, and also cyber attacks are, I'm sorry to say, you can make progress because they're scale-free. All you have to do is just do more. So if you do software, you can verify it, you can do more software. If you do math, you can verify it, do more math. You're not constrained by reality, physics and biology. So it's likely in the next few years that in math and software, you'll see the greatest gains. And we all understand your point that math is at the basis of everything else. I think, with Fei-Fei as the expert on the real world,
Starting point is 00:18:52 there's probably a longer period of time to get the real world right, which is why she founded the company, of which I'm an investor. Do you want to talk about that? Yeah. Well, first of all, I actually want to respectfully disagree.
Starting point is 00:19:06 Okay. I do not think that we will solve all the fundamental math and physics and chemistry problems in five years. We're going to take a bet on that one. Yes, so FII 14. Okay, you got it. We should take a bet on that.
Starting point is 00:19:21 Part of humanity's greatest capability is to actually come up with new problems. You know, as Albert Einstein said, most of science is asking the right question. And we will continue to find new questions to ask. And there are so many fundamental questions in our science and math that we haven't answered. Fei-Fei, your new company, World Labs, is creating extraordinary, persistent, photorealistic worlds. Are you expecting that we are going to be spending a lot more of our time in virtual worlds? I mean, my 14-year-old boys right now are spending way too much time in their virtual gaming worlds. But is this what we're going to do in, you know, 10, 20 years in a post-ASI world where we don't have to work as much?
Starting point is 00:20:13 We have a lot more free time. Our robots maybe by then are serving us. Are we going to live in the virtual worlds? Great question. So what we are doing is building large world models. That's a problem beyond large language models: humans have the kind of spatial intelligence that lets us understand the physical 3D world. We can imagine any kind of 3D worlds and be able to reason and interact with it.
Starting point is 00:20:35 So, up till what our company has been doing, we did not have such a world model. So World Labs, the company I co-founded and where I'm CEO, has just created the first large world model. So the future I see, I actually agree with you,
Starting point is 00:21:03 that we will be spending more time in the multiverse of the virtual world. It doesn't mean that the reality, the real world, this physical world, is gone. It's just that so much of our productivity, our entertainment, our communication. Our education. Our education are going to be a hybrid of the virtual and physical worlds. Think about, you know, think about medicine: how we conduct surgery is very much going to be a hybrid of augmented reality, virtual reality, as well as physical reality. And we can do that in every single sector.
Starting point is 00:21:49 So humanity, using these large world models, is going to enter the infinite universe era. And I had a chance to see your model backstage. It's amazing. If you haven't yet, go check out Fei-Fei's World Labs. The technology she's building is going to be world-changing. So my last question here is about human capital. So superintelligence has been called the last invention humanity will ever make.
Starting point is 00:22:20 It could eventually automate every process. We'll see if it automates discovery. We'll see how much of creation it automates. But in a world where the best strategy, science, and economic decisions are being made by machines at some point, what is the ultimate irreplaceable function of human intellect and leadership? What are humans innately going to be left with in 10, 20 years? Well, in 20 years, we will enjoy watching each other compete in human sports,
Starting point is 00:22:54 knowing that the robots can beat us 100% of the time. But if you go to Formula One, you're going to want to see a human driver, not an automated car. Yes. So humans will always be interested in what other humans can do. And we'll have our own contests, and perhaps the supercomputers will have their own contests too. But your reasoning presumes many, many things. It presumes a breakout of intelligence in computers that's human-like, unlikely, probably a different kind of intelligence.
Starting point is 00:23:29 It presumes that humans are largely not involved in that process, highly unlikely. All of the evidence, and Fei-Fei said this very well, is going to be human and computer interaction, that basically we will all have savants. Going back to what you said about 8 billion people with smartphones, with Einstein in their phone, the smart people, of which there are a lot, will use that to make themselves more productive. The win will be teaming between a human and their judgment and a supercomputer and what they can think.
Starting point is 00:24:03 And remember that there is a limit to this craze: supercomputers and superintelligence need energy. So perhaps what will happen at some point is the supercomputers will say, huh, we need more energy, and these humans are not building fusion fast enough. So we'll accelerate it. We'll come up with a new form of energy.
Starting point is 00:24:23 Now, this is science fiction. But you could imagine at some point the objective function of the system says, what do I need? I need more chips or more energy, and I'll design it myself. Now, that would be a great moment to see. I agree.
Starting point is 00:24:37 I do want to say: it's so important, as we talk about AGI and ASI, that the most important thing we keep in mind is human dignity and human agency. Our world, unless we are going to wipe out the species, which we're not, has to be human-centered. Whether it's automation or collaboration, it needs to put human agency, dignity, and well-being at the center of all this, whether it's technology, business, product, policy, or any of that. And I think we cannot lose our focus on that. Amen.
Starting point is 00:25:16 Everybody, ladies and gentlemen, Fei-Fei Li, Eric Schmidt. Thank you all. Every week, my team and I study the top 10 technology metatrends that will transform industries over the decade ahead. I cover trends ranging from humanoid robotics, AGI, and quantum computing, to transport, energy, longevity, and more. There's no fluff, only the most important stuff that matters, that impacts our lives, our companies, and our careers. If you want me to share these metatrends with you,
Starting point is 00:25:45 I write a newsletter twice a week, sending it out as a short two-minute read via email. And if you want to discover the most important metatrends 10 years before anyone else, this report is for you. Readers include founders and CEOs from the world's most disruptive companies and entrepreneurs building the world's most disruptive tech. It's not for you if you don't want to be informed about what's coming, why it matters, and how you can benefit from it. To subscribe for free, go to diamandis.com slash metatrends to gain access to the trends 10 years before anyone else.
Starting point is 00:26:16 All right, now back to this episode. our prices online and enjoy unlimited delivery with PC Express Pass. Get your first year for $2.50 a month. Learn more at pceexpress.ca.ca.
