No Priors: Artificial Intelligence | Technology | Startups - NVIDIA’s Jensen Huang on Reasoning Models, Robotics, and Refuting the “AI Bubble” Narrative

Episode Date: January 8, 2026

Even if ChatGPT never existed, the tech giant NVIDIA would still be winning. The end of Moore's Law, says NVIDIA founder, President, and CEO Jensen Huang, makes the shift to accelerated computing inevitable, regardless of any talk of an AI "bubble." Sarah Guo and Elad Gil are joined by Jensen Huang for a wide-ranging discussion on the state of artificial intelligence as we begin 2026. Jensen reflects on the biggest surprises of 2025, including the rapid improvements in reasoning, as well as the profitability of inference tokens. He also talks about why AI will increase productivity without necessarily taking away jobs, and how physical AI and robotics can help to solve labor shortages. Finally, Jensen shares his 2026 outlook, including why he's optimistic about US-China relations, why open source remains essential for keeping the US competitive, and which sectors are due for their "ChatGPT moment."

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @nvidia

Chapters:
00:00 – Jensen Huang Introduction
00:17 – Biggest AI Surprises of 2025
04:12 – AI and Jobs: New Infrastructure and Demand for Skilled Labor
09:03 – Task vs. Purpose Framework in Labor
12:31 – Solving Labor Shortages with Robotics
15:14 – The Layer Cake of AI Technology
18:39 – The Importance of Open Source
21:52 – The Myth of "God AI" and Monolithic Models
23:54 – Addressing the "Doomer" Narrative and Regulation
29:25 – The Plummeting Cost of Compute and Tokenomics
35:09 – The Return to Research
37:49 – Future of Coding and Software Engineering
43:20 – The Industries Due For Their "ChatGPT" Moments
46:00 – The Evolution of Self-Driving Cars and Robotics
54:06 – Energy Demand and Growth for AI
58:49 – 2026 Outlook: US-China Relations and Geopolitics
1:04:43 – Is There An AI Bubble?
1:16:20 – Conclusion

Transcript
Starting point is 00:00:00 Jensen, thanks so much for joining us today. So great to have you guys. What an amazing year. What a year. Happy Hanukkah. Merry Christmas. Happy New Year coming up. Happy holidays.
Starting point is 00:00:16 So with everything that's happened in 2025 and being in the middle of the vortex with it, what do you reflect on and say, like, this surprised you most? Or this is the biggest change. Let's see. There's some things that didn't surprise me. For example, the scaling laws didn't surprise me because we already knew about that. The technology advancement didn't surprise me. I was pleased with the improvements of grounding.
Starting point is 00:00:40 I was pleased with the improvements of reasoning. I was pleased with the connection of all of the models to search. I'm pleased that there are now routers that are in front of these models so that it could, depending on the confidence of the answers, go off and do necessary research. and just generally improve the quality and the accuracy of answers. I'm hugely proud of that. I think the whole industry addressed one of the biggest skeptical responses of AI, which is hallucination and generating gibberish and all of that stuff. I thought that this year, the whole industry,
Starting point is 00:01:20 everything from every field, from language to vision to robotics to self-driving cars, the application of reasoning and the grounding of the answers, big, big leaps, would you guys say this year? I mean, things like open evidence, too, for medical information where doctors are not really using that as a trusted resource. Like Harvey, for legal, you're really starting to see AI emerge as one of these things, become a trusted tool or counterparty for, you know, experts to actually be able to do what they do much better.
Starting point is 00:01:54 That's right. And so in a lot of ways, I was expecting it, but I'm still pleased by it. I'm proud of it. I'm proud of all of the industry's work in this area. I'm really pleased and probably a little bit surprised, in fact, that token generation rate for inference, especially reasoning tokens, are growing so fast, several exponentials at the same times, that seems. And I'm so pleased that these tokens are now profitable, that people are generating, I heard somebody, or heard today that open evidence, speaking of them, 90% gross margins. I mean, those are very profitable tokens.
Starting point is 00:02:35 Yeah. And so they're obviously doing very profitable or very valuable work. Cursor, their margins are great. Claude's margins are great. For the enterprise use of Open AI, their margins are great. So anyways, it's really terrific to see that we're now. generating tokens that are sufficiently good, so good in value that people are willing to pay good money for. And so I think these are really great grounding for the year. I mean, some of the
Starting point is 00:03:00 things that the narrative that, of course, the conversation with China really, really, you know, occupied a lot of my time this year, geopolitics, the importance of technology in each one of the countries. I spent more time traveling around the world this year and just by any time the history, all of my life combined, you know, my average elevation this year is probably about 17,000 feet, you know, so it's nice to be here on the ground with you guys. And so I think geopolitics, the importance of AI to all the nations, all worth talking about later. You know, of course, I spent a lot of time on expert control and making sure that our strategy is nuanced and really grounded and promotes national security, but recognizing the importance of various
Starting point is 00:03:47 various facets of national security. A lot of conversations around that. You know, of course, of course, lots of conversation about jobs, the impact of AI, energy, labor shortage. I mean, boy, we covered everything. Did we not? Everything was AI. Everything was AI.
Starting point is 00:04:06 Yeah, it was incredible. Yeah, I was definitely in the center of the storm for like every one of those themes. Maybe when we can start with actually is jobs because their jobs and employment. Because when I look at the traditional AI community, even before things were scaling and even before AI was really working, there was a strong sort of jimsday component in the people working on AI, oddly enough. The people who are most trying to push the field forward were often the people who are most pessimistic, which is very odd. Why would you do both at once? And I feel like that narrative is taken over some subset of media or some set of other things, despite all the things that we think are very positive about what AI is done. That's going to help with healthcare, with education, with productivity, with all these other areas. And, In general, whenever we have a technology shift, you have shift in terms of the jobs that are important, but you still have more jobs. That's right. Could you talk about how you think about employment and jobs and sort of what people are saying and what you think the real narrative is there? Maybe what I'll do is I'll ground it on three points in space, three points in time, now, maybe a very near future, and then some point out in the distance, and maybe some counter narratives, something else.
Starting point is 00:05:16 to think about with respect to jobs in the near term. One of the most important things is that AI is not just, AI is software, but it's not pre-recorded software, as you know. For example, Excel was written by several hundred engineers. They compiled it. It's pre-recorded. And then they distributed it as is for several years. In the case of AI, because it takes into the context, what you asked of it, what's happening
Starting point is 00:05:43 in the world, right? contextual information, it generates every single token for the first time, every time, which means every time you use the software and everything that we do, AI is being generated for the first time ever, just like intelligence. Our conversation today relies on some ground truth and some knowledge, but every single word is being generated for the first time here. The thing that's really quite unique about AI is that it needs these computers to generate these tokens every single time. I call them AI factories because it's producing tokens that will
Starting point is 00:06:19 be used all over the world. Now, some people would say it's also part of infrastructure. The reason why it's infrastructure is because obviously it affects every single application. It's used in every single company. It's used in every single industry, it's every single country. Therefore, it's part infrastructure like energy and internet. Now, because of that and the amount of computers that's necessary to generate these tokens, and it's never happened before, and because we need these factories, three new industries have emerged. Number one, well, three new type of plants have to be created. Number one, we have to build a lot more chip plants. TSM is building, SK-Hinix, building a lot more plants. And so we need more chip plants. We need more computer plants.
Starting point is 00:07:02 These computers are very different. These are supercomputers that the world has never seen before, right? Grace Blackwell looks like a very different type of computer than anything that's ever been made and entire rack is one GPU. And so we need new supercomputer plants. And then we need new AI factories. These three plants are currently being built in the United States at very large scale quite broadly all over the United States for the very first time. The number of construction workers, plumbers, electricians, technicians, network engineers, you know, right, the number of the skilled labor that's necessary to support this new industry in the near term, it'll be enormous. Let's just face it. I'm so excited to hear that electricians are seeing their
Starting point is 00:07:47 paychecks double. They're being paid to travels like us. We go on business trips. They're going on business trips. And so it's really terrific to see that these three industries are now three types of plants, factories, are just creating so much jobs. The next part is the near-term impact of AI on jobs. And one of my favorites is, I love Jeff Hinton, he said, you know, some five, six, seven years ago that in five years time, AI will completely revolutionize radiology, that every single radiology application will be powered by AI, and that radiologists will no longer be needed. and that he would advise the first profession not to go into his radiology.
Starting point is 00:08:42 And he's absolutely right. A hundred percent of radiology applications are now AI powered. That's completely true. And in some eight years' time, it is now completely pervaded radiology. However, what's interesting is that the number of radiologists increased. And so now the question is why. And this is where the difference between task versus radiologists, purpose of a job. A job has tasks and has purpose. And in the case of a radiologist, the task is to
Starting point is 00:09:15 study scans. But the purpose is to diagnose disease. And to do new research. And that, exactly, and they're doing new research. And so in the case, in their case, the fact that they're able to study more scans, more deeply, they're able to request more scans, do a better job diagnosing disease, the hospital's more productive. They can have more patients, which allows them to make more money, which allows them to want to hire more radiologists. And so the question is, what is the purpose of the job versus what is the task that you do in your job? And as you know, I spend most of my day typing. That's my task. But my purpose is obviously not typing. And so the fact that somebody could use AI to automate a lot of my typing, and I really appreciate that.
Starting point is 00:10:05 it helps a lot. It hasn't really made me, if you will, less busy. In a lot of ways, I've become more busy because I'm able to do more work. So I think that's the second part to consider is the task versus the purpose of the job. This example really strikes home because my sister-in-law, Erin, actually leads in nuclear medicine at Stanford. So she's in radiology. And with all the technology advancements that are coming, these doctors really welcome it. And they are working 20 hours a day trying to do more research and serve more patients. And I think one thing that is often missed beyond the sort of diversity of jobs being created by this investment in infrastructure is actually how much latent demand there is for
Starting point is 00:10:48 different goods that we need in society, like better health care. I don't think anybody feels like, you know what, we have reached the tip top, mountain top of like what American health care or global health care could be. And the more we can make these people, people productive, the more demand there will be. That's exactly right. If Envidio is more productive, it doesn't result in layoffs. It results in us doing more, more things. I met your new hire class today.
Starting point is 00:11:17 You seem to be hiring every week anyway. That's exactly right. Yeah. The more productive we are, the more ideas we can explore, the more growth as a result, the more profitable we become, which allows us to pursue more ideas. And so I think you're absolutely right that if the job, if your life, if the world, the problems is literally already specified and there's no other problem to solve, then productivity would actually reduce the economy, but it's clearly going to increase the economy.
Starting point is 00:11:51 I think that the next part that I would consider is, you know, people say, gosh, all of these robots that we're talking about is going to take away jobs. As we know very clearly, we don't have. have enough factory workers. Our economy is actually limited by the number of factory workers we have. Most people are having a very hard time retaining their workers. We also know that the number of truck drivers in the world is severely short. And the reason for that is people don't want those jobs where you have to travel across the country and live in different parts of the world, different parts of the country, you know, every single night. And so people want to stay in their town, stay with their families. So I think I think the first part, is that having robotic systems is going to allow us to cover the labor shortage gap, which is really, really severe in getting worse because of aging population. This is not only United States all over the world, as you guys know. And so we're going to cover the labor shortage.
Starting point is 00:12:51 But the second part that people forget, and as a result will go. There are shortages as well in other places that people talk about AI being relevant. Accounting would be an example where there's shortages there. nursing is another example. So you can go through multiple other industries and say, okay, there's gaps. And the I is trying to help fill those gaps. That's exactly right. And so automation is going to help us increase and solve the labor gap.
Starting point is 00:13:16 Now, people also don't remember that when we have cars, we need mechanics to take care of our cars. And if you look at the robotaxies that are even on the streets today, it's taken 10 years for that to have. And look at all the maintenance crews and all of the various, you know, the hubs that they're in where you have to take care of these robotaxies. And just imagine we have a billion robots. It's going to be the largest repair industry on the planet. So I think a lot of people don't, they just have to think through. And this is the part where you said, when we create this type of automation, we create this other job. Right now, look at AI is creating so many jobs.
Starting point is 00:13:58 The AI industry is creating a boom of jobs. I think one of the core challenges here is it's very easy to draw a straight line of extrapolation from like, oh, you know, there are tools that help lawyers be more productive. It's going to replace the lawyers. But it's actually, it takes like a step of incremental reasoning to say there's a sucking sound in the economy for everything in AI infrastructure. There's actually a sucking sound toward all of this demand that is latent in the places where we have gas. where I think a lot of policymakers have focused on, you know, we can't replace or reduce what we have when it's really, there's, there's far more demand in what we actually are not fulfilling. And in the case of the lawyer, what's the purpose of the lawyer versus the task of the lawyer? Reading a contract, writing a contract is not the purpose of the lawyer. The purpose of the lawyer is to help you resolve conflict.
Starting point is 00:14:53 And that's more than reading a contract. It's more than writing a contract. the purpose is to protect you, that's more than reading a contract. It's more than writing a contract. And so I think just it's really, really important to go back to what is the purpose of the job versus the task that we use, you know, to perform that jobs. That changes over time. Yeah, the other big theme of the year that you mentioned that I think is really important to touch upon is both China is sort of in the rise of Chinese open source in particular where, you know, some of the highest scoring models against benchmarks and our Chinese models on the open source side, on the closer side, it's to, a lot of the US models, but things like Quinn, Deepseek, etc., are doing very well. You've long been a proponent for open source in general. Could you share your views about both China emerging for AI, for open source, and what the US should be doing in terms of both open source as well as its own industries?
Starting point is 00:15:43 When you think about these complicated, interconnected, dependent networks of problems, this big goop of a mesh of problems, it's always good to go back and find a framework for what it is that we're talking about. In the case of AI, what is AI? Well, of course, the technology of AI and the capabilities of AI is about automation. It's about automation of intelligence for the very first time. And you could combine it with Megatronics technology to embody that Megatronics and make it perform tasks.
Starting point is 00:16:26 So that's what's AI automation. But what is the stack that makes AI possible? What's the technology stack, the functional stack? And of course, the easiest way to think about that is it's kind of like a five-year, five-year, five-year, five-liter cake, which is at the lowest level is energy. It transforms energy to the output that I just described. The next layer is chips. The next layer is infrastructure.
Starting point is 00:16:51 And that infrastructure is both hardware, software, right? This is where land power and shell. This is where construction is. Data centers are the software stack, you know, for orchestrating. So it's software and hardware. The layer above that is where everybody thinks about, which is AI, which is the models. We know this, but it's really helpful to understand that AI is a system of models. And AI is a technology that understands information.
Starting point is 00:17:21 And there's human information. And so we oftentimes think about AI as a chat bond. But remember, there's biological information, there's chemical information, there's physical information, information, information of all kinds. There's financial information, there's health care information, there's information of all modalities, all kinds. AI is really, really broad. And of course, human language is at the foundation of many things, but it's not the essence of everything. because, as you know, biology, molecules don't understand English. They understand something else, right?
Starting point is 00:17:56 Proteins don't understand English. They understand something else. I think the next layer, the important thing is that's where the AI models are, but there's a whole, that AI is very, very diverse. And then the layer above that is applications. And it depends on the industry. And you already mentioned open evidence. You mentioned Harvey.
Starting point is 00:18:13 There's cursor. There's all kinds of applications. Full self-driving is really an application in AI. application that is embodied into a mechanical car and figure is an AI application that has been embodied into a mechanical human and so so you got all these different applications well this five layer stack is one way of thinking about it and then the next way you're thinking about I just mentioned is AI is really diverse when you now have this framework of what the the technology capabilities are how to how to build the technology and how diverse it is then you can come
Starting point is 00:18:49 back and think about, okay, let's ask the question, how important is open source? Well, without open source, you know, today, of course, the frontier models, the leading labs have chosen to use a closed source application approach, which is just fine. You know, what people decide to do with their business models is really in the final analysis. There's their business. And they have to, they have to calculate what is the best way for them to get the return on investment so that they could scale up and make better advances. However, they made that calculus is fantastic. On the other hand, without open source, as you know, startups would be challenged. Companies that are in different industries, whether it's manufacturing or transportation or it could be in health care, without
Starting point is 00:19:40 open source today, all of that AI work would be suffocated. And so they just need to have something that's pre-trained. They need to have some fundamental technology about reasoning. From that, they could all adapt, fine-tune, you know, train their AI models into exactly the domain and application they want. And so what people really, really miss is just the incredible pervasiveness and the importance of open source to all of these industries, large companies without open source, some of 100-year-old companies that I work with. in industrial spaces and healthcare spaces,
Starting point is 00:20:20 they would be suffocated. They wouldn't be able to do that work. Open source at this point is driving all of our data centers. It's driving a big chunk of telephony in the world in terms of Android or other devices. It's driving.
Starting point is 00:20:30 Exactly. You know, to point, a lot of the industrial application. So it's already pervasive. And I think the big question is... Open source, without open source, higher ed. Higher ed wouldn't happen.
Starting point is 00:20:38 Education research, startups. I mean, the list goes on, you know? And so we talk all day long about the tip The most visible part of that, the part that's most newsworthy maybe, but underneath that is such an important space of open source AI. And whatever we decide to do with policies, do not damage that innovation flywheel. So I spent a lot of time educating policymakers that help them understand.
Starting point is 00:21:11 Whatever you decide, whatever you do, don't forget open source. Whatever you decide, whatever you do, don't forget biology. I think the counter-narrative here that is worth addressing is that essentially, like, you know, there should be a monolithic vertical player and monolithic asset in the, like, one model that does it all, and that we can't give away that crown jewel to other countries or non-American companies. And your argument is, like, we actually need this huge diversity of AI applications. and the American advantage is actually, or any sovereign advantages in the whole stack, right, the capability to deliver any piece of it. I guess someday we will have God-A-I.
Starting point is 00:21:55 When is that day? But that someday is probably on biblical scales, you know, I think galactic scales. I think it's not helpful to go from where we are today to God-A-I-on. And I don't think any company, practically believes there anywhere near God A.I. And nor do I see any researchers having any reasonable ability to create God AI. The ability to understand human language and genome language and molecular language and protein language and amino acid language and physics language all supremely well.
Starting point is 00:22:36 That God AI just doesn't exist. And yet we have a lot of industries that need AI. A.I. A.I. is, if you will, at the simplistic level, it's just the next computer industry. And give me an example of a company, an industry, a nation who doesn't need computers. And we all don't have to wait around for God AI for us to advance, right? So God AI is not showing up next week. I'm fairly certain of that. And God AI, God AI is not going to show up next year. But the whole world needs to move forward next week, next year. decade, I think that the idea of a monolithic, gigantic company, country, nation state that has God AI is just unhelpful. It's unhelpful. It's too extreme. Then, in fact, if you want to take it to that level, then we ought to just all stop everything. What's the point of having even governments? I mean, why are they doing policies? God AI is going to be smart enough to avert, you know, work around any policy.
Starting point is 00:23:42 And so what's the point? And so I think that we ought to bring things back to the ground, ground levels and start thinking about things practically and use common sense. This seems to be like a big theme in general in terms of this conversation where there's been a lot that's been kind of put out there that seems very extreme if you actually think about it. It's the jobs and employment. Nobody's going to be able to work again.
Starting point is 00:24:05 It's God AI is going to solve every problem. It's we shouldn't have open source for X, Y, Z reason, despite open source powering much of our industries already. That's right. And so it seems like in general, maybe one of the themes of 2025 was there's a lot of extremes that were sort of painted in the public with AI that if you look at them very closely, don't really follow a logical chain in terms of happening anytime soon. Yeah. And so it sounds like it's really important to have this conversation. Extremely hurtful, frankly.
Starting point is 00:24:29 And I think we've done a lot of damage with very well-respected people who have painted a doom-dumer narrative, end-of-the-world narrative, science, fiction narrative. And, you know, and I appreciate that many of us grew up and enjoyed science fiction. But it's not helpful. It's not helpful to people. It's not helpful to the industry. It's not helpful to the society. It's not helpful to the governments. There are a lot of, many people in the government who obviously aren't as familiar with, as comfortable with, the technology. And when PhDs of this and CEOs of that goes to government, and explain and describe these end-of-the-world scenarios and extremely, extremely dystopian future, the future, you have to ask yourself, you know, what is the purpose of that narrative
Starting point is 00:25:25 and what is their, what are their intentions, and what do they hope? Why are they talking to governments about these things to create regulations to suffocate startups? For what reason would they be doing that? You know, and so. And do you think that's just regulatory capture where they're trying to prevent new startups from showing up and being able to compete effectively? Or what do you think is the goal of some of these conversations? You know, I can't guess what they have in mind. I know that the concern is regulatory capture. As a policy, as a practice, I don't think companies at a go-to-governments to advocate for the regulation on other companies.
Starting point is 00:26:01 how to go to governments to advocate for the regulation on other companies and other industries. Just in practice, their intentions are clearly deeply conflicted. And their intentions are clearly, you know, not completely in the best interest of society. I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves. And so I think if we can all come back to where are we today and think about where the technology is going to be, I mean, look, literally in one year's time, as we were talking about in the beginning, some of the most proud moments is when the industry was able to invest very aggressively in advancing AI technology instead of being slowed down. Remember, just two years ago, people were talking about slowing the industry down. But as we advance quickly, what did we solve? We solved grounding, we solved reasoning, we solved research.
Starting point is 00:27:10 All of that technology was applied for good, improving the functionality of the AI, not, you know. Yet the end has not come. Yet the end has not come. It's become more useful. It's become more functional. It's become able to do what we ask it to do. You know, and so the first part of the safety of a product, is that it perform as advertised.
Starting point is 00:27:35 The first part of safety is performance. The first part of safety of a car isn't that some person is going to jump into the car and use it as a missile. The first part of the car is it works as advertised. 99.999% of the time working as advertised. And so it takes a lot of technology to make that car or make,
Starting point is 00:28:01 that AI work as advertised. And I'm really glad that in the last couple, two, three years, the industry has invested so much in enhancing the functionality of the AI as advertised. And I think if we were to look at the next 10 years, we have so much work to do to make it work as advertised. Meanwhile, as you know, you both of you invest so much in the ecosystem, you see So many companies being built for synthetic data generation so that the AIs could be more grounded, more diverse, less biased, more safe. You're investing in a whole bunch of companies in cybersecurity, using AI for cybersecurity, right? People think that there's this AI. The marginal cost of the AI is going to go down significantly, and it is.
Starting point is 00:28:51 And therefore, the AI is going to be dangerous. It's exactly the opposite. If the marginal cost of AI is going to go down significantly, that one AI is going to be monitored by millions of AIs. And more and more AI is going to be monitoring each other. People can't forget that an AI is not going to be an agent by itself. It's likely the AI is going to be surrounded by agents, monitoring it. And so it's no different than if the marginal cost of keeping society safe was lower.
Starting point is 00:29:22 We have police in every corner. So one thing that we were talking about a little bit earlier was just the cost of AI and how it's been coming down. And so I think in 2024, the cost of GPT4 equivalent models, if you look at a million tokens, it came down over 100X. You know, somebody of my team did this analysis to show that. So the cost of dropping pretty dramatically and very rapidly, and part of it is all the advancements. You all have been driving on the Bidia level, but also it's across the stack. We've been getting big efficiency gains. Yeah. At the same time, model companies are talking about how the costs are rising, how there's
Starting point is 00:29:56 enormous sort of capital modes to building these things out. How do you think about cost of training and cost of inference over time and what that means for the average end user or the average startup company trying to compete or people trying to do more in this industry? I forget the statistic, but you know, Andre Carpathie estimated the cost of building the first chat, UBT, I think, versus now. I think you could do that on the PC now. Yeah, yeah. It's probably tens of thousands. of dollars at this point or maybe even less right and so it costs nothing and he has an open source project that you can do in a weekend oh is that right okay that's incredible right yeah we're talking about
Starting point is 00:30:32 three years. What people said cost billions of dollars, supercomputers built, raising billions of dollars in order to do all that, now costs, you know, something that you can do on a weekend on a PC. And so that tells you something about how quickly we're making AI more cost effective. Or a Spark. Sorry, probably not quite a PC. Yeah. Okay. Not quite a PC. Yeah. We're improving our architecture and performance every single year. The first ChatGPT, I think, was trained on Volta. Mm-hmm. And then Ampere, you know, and I think the first breakthroughs, none of it included Hopper. And, of course, Hopper
Starting point is 00:31:19 the last couple, two, three years, and we've been on Blackwell for the last year and a half or so. And every single one of these generations, the architecture improves, and of course the number of transistors goes up and the capacity goes up. Every single generation,
Starting point is 00:31:35 very easily, every single year from a computing perspective, the combination of all that, getting 5 to 10x every single year, is not unusual. And here comes Rubin, just around the corner. And so we're seeing
Starting point is 00:31:48 5 to 10x every single year. Well, compounded. It's incredible. Moore's Law was two times every year and a half. And over the course of 5 years, that's 10x. Over the course of 10 years, that's 100x. In the case of AI,
Starting point is 00:32:02 over the course of 10 years is probably 100,000 to a million X. Okay? And that's just the hardware. Then the next layer is the algorithm layer and the model layer, the combination of all that, the fact that if you were
Starting point is 00:32:15 to tell me that in the span of, you know, 10 years, we're going to reduce the cost of token generation by a billion times, I would not be surprised. Okay. And so that's the tokenomics of AI. On the training side, it's not quite as aggressive in cost reduction, but it's close. If you were to say that every single year we're improving by two or three x, over the course of 10 years, incredible. But the important idea is, when somebody says it costs $100 million to train something, or half a billion dollars to train something, well, next year it's 10 times less. But people are trying to scale these things up, though, right? So the counter argument is, well, we'll just get bigger every year by 10x or 100x or, you know, we'll try and offset that decrease in cost by scale.
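The compounding arithmetic quoted in this exchange can be sanity-checked in a few lines of Python. The rates below are the rough, conversational figures from the discussion, not measured numbers:

```python
# Sanity-check the compounding rates quoted in the conversation.
# All rates here are the rough figures mentioned in the discussion.

def compound(rate_per_period: float, periods: float) -> float:
    """Total improvement factor after compounding for `periods`."""
    return rate_per_period ** periods

# Moore's Law: 2x every 1.5 years -> ~10x in 5 years, ~100x in 10 years.
moores_5yr = compound(2, 5 / 1.5)    # ~10x
moores_10yr = compound(2, 10 / 1.5)  # ~100x

# Training efficiency at 2-3x per year, compounded over 10 years.
train_low, train_high = compound(2, 10), compound(3, 10)  # 1024x to ~59,000x

# A billion-fold token-cost reduction over 10 years implies ~7.9x per year.
implied_annual = 1e9 ** (1 / 10)

print(f"Moore's Law, 10 years:   ~{moores_10yr:,.0f}x")
print(f"2-3x/yr, 10 years:       {train_low:,.0f}x to {train_high:,.0f}x")
print(f"1e9x in 10 years needs:  ~{implied_annual:.1f}x per year")
```

Even the low end of these compounded rates dwarfs the roughly 100x that Moore's Law alone delivered per decade, which is the gap the conversation keeps returning to.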
Starting point is 00:33:04 And others can't keep up. Yeah, but really what's happening is, and this is where MoEs come in, as you know, the scale went up by a factor of 10, but the computational burden did not go up by a factor of 10, because you're getting the compounded benefits of all three things. The hardware's improving. The algorithms for training the models are improving. And of course, the model architecture is improving. And we're getting the benefit of learning from each other. This is, you know, let's face it, DeepSeek was probably the single most important paper that most Silicon Valley researchers read in the last couple of years. It was the only thing that felt frontier that was open. That's right. In years.
Starting point is 00:33:45 That's right, because that's a lot of the value of open source again. Yeah. Putting out these papers. Literally, DeepSeek benefited American startups and American AI labs all over. And infrastructure companies. And infrastructure companies all over. Probably the single greatest contribution to American AI last year. And so if you say this out loud, of course, you know, people kind of shudder that American AI is actually learning from and benefiting from AI from another nation.
Starting point is 00:34:15 But why would that be surprising? You know, AI researchers all over America are Chinese natives or come from different countries. We benefit from every country. We benefit from every researcher. And no, all of the world's ideas don't have to come from the United States. And so I think, back to your original question, it is the case that, you know, some of the narratives around the cost of AI are about scaring everybody out of the market: you know, nobody ought to do pre-training but us, nobody should do, you know, training of these frontier models but us. But because of innovation in models, algorithms, and the computing stack, the cost of AI is actually decreasing well more than 10x
Starting point is 00:35:02 every single year. And so if you're just one year behind, or even six months behind, you could really stay close. And I think one thing that felt very different to me about 2025 is, Ilya said recently that we're in the age of research again versus an age of scaling. I think both things are happening, by the way. Everybody is also trying to scale on multiple dimensions.
Starting point is 00:35:24 Yeah, exactly. Both are happening. You know, being six months behind, or being at a 100K versus a 200K cluster, I think matters if you are competing symmetrically. But now you have people from frontier labs, or at the very top of the game, who have very different ideas about how to progress from here, or who are working on a diversity of problems.
Starting point is 00:35:43 That's right. And I think that felt different from '24, maybe, where there was a lot of energy focused on just pre-training scale in LLMs. Yeah, and several other dynamics. As the market grows, each one of these models could choose to have verticals or segments where they want to differentiate. Somebody could decide to be a better coder. Somebody could decide to be just better at being accessible,
Starting point is 00:36:11 so that it could be a greater consumer product. You know, the diversity of these models. As a result, you could probably make a niche leap without having to be great at everything else and still be super valuable to the market. It's no longer necessary to boil the entire ocean. Think back two years ago, when everything was called pre-training. You know, people said, well, you know, pre-training is over.
Starting point is 00:36:37 First of all, pre-training is not over. But the point of pre-training is to train yourself for training. That's why it's called pre-training, to prepare yourself to do the real training. And now we call it post-training. It's kind of weird. I think it's just training. But pre-training is pre-training, and therefore it's training. Training, as we all know, is where compute scaling directly translates to intelligence.
Starting point is 00:37:04 You've largely... now the data necessary to train the model is actually pretty small. Maybe it's just a verifiable result. Now it's really algorithmic, very compute intensive. And you don't have to be good at everything in life. As you know, just like all of us, because we don't have time to learn everything equally well, we decide to choose a specialty and focus all of our energy on it, and we become superhuman, or incredibly good, at something other people are not. And so I think AI labs are going to start doing the same. They're going to start bifurcating into various segments. And over time, startups will do the same. They'll find a micro niche, take something open, and then be incredibly good at it.
Starting point is 00:37:49 Well, I think one of the most optimistic views here is actually that these micro niches are quite valuable, right? I was talking to Andrej, because I've been talking to a lot of people about their predictions for next year. We'll ask you yours as well, of course. But he asked, you know, what's an example of a prediction that would have been prescient last year? And my answer, everything's easy in retrospect, is that coding would be the first application-level business that gets to a billion in ARR as an AI-native app, right? And I think if you'd taken an old-world view of this, you would have believed, like, one of two narratives, right? One is a single model does everything and it'll all just be subsumed into something monolithic. And two is that developer tools
Starting point is 00:38:32 never get very big, right? Well, it kind of depends on how valuable the developer tool is. Now, I think many more people understand software engineering isn't a niche, and there's more demand than ever for it, but I think we'll see more like that. Also interesting, we use Cursor here, and we use Cursor pervasively. Every engineer uses it. And the number of engineers, you just mentioned it, the number of people we're hiring today is just incredible. Right? Monday is come-to-work-at-NVIDIA day. And why is that? This is now the purpose and the task. The purpose of a software engineer is to solve known problems
Starting point is 00:39:09 and to find new problems to solve. Coding is one of the tasks. And so if your purpose literally is coding, somebody tells you what to do and you code it, all right, maybe you're going to get replaced by the AI. But most of our software engineers, all of our software engineers, their goal is to solve problems. And it turns out we have so many problems in the company
Starting point is 00:39:33 and we have so many undiscovered problems, and so the more time they have to go explore undiscovered problems, the better off we are as a company. Nothing would give me more joy than if none of them were coding at all, just solving problems. You see what I'm saying? And so I think this framework of purpose versus task
Starting point is 00:39:50 is really good for everybody to apply. For example, somebody who's a waiter, their job is not to take the order. That's their task, not their purpose. Their job is so that we have a great experience. And if some AI is taking the order, or even delivering the food, their job is still helping us have a great experience. They would reshape their jobs accordingly. So I think the question about cost of compute is really important. Let me come back to one.
Starting point is 00:40:23 Let me come back to one. the reason why we are so dedicated to a programmable architecture versus a fixed architect remember a long time ago a CNN ship came along and they said
Starting point is 00:40:37 NVIDIA's done and then and then a Transformership came and VINDA was done people are still trying that yes and the benefit of these dedicated ASICs of course
Starting point is 00:40:48 it could perform a job really really well and Transformers is a much more universal AI network. But the transformer, as you know, the species of it is growing incredibly. The attention mechanism. The attention mechanism, how it thinks about context, diffusion versus auto-regressive. These hybrid SSM transformers. For example, Nemetron, we just announced
Starting point is 00:41:12 a new hybrid SSM. And so the architecture of transformers is in fact changing very rapidly. And over the next several years, it's likely to change tremendously. And so we dedicate ourselves so in architecture that's flexible for this reason so that we can, on the one hand, adapt with, remember, because Moore's law is largely over, transistor benefit is only 10%, maybe a couple of years. And yet we would like to have hundreds of X every year. And so the benefit is actually all in algorithms. And an architecture that enables any algorithm is likely going to be the best one, right? Because the transistor didn't advance that much.
Starting point is 00:41:52 And so I think our dedication to programmability is number one for that reason. We have so much optimism for innovation and algorithms and innovation software that we protect our programmability for that reason. The second thing is by protecting this architecture, our install base is really large. When a software engineer wants to optimize their algorithm, they want to make sure that it doesn't run on just one little cloud or this one little stack. they wanted to run on as many computers as possible. So the fact that we protect our architecture compatibility, then flash attention runs everywhere. So SSMs run everywhere.
Starting point is 00:42:32 Diffusion runs everywhere. Auto regression runs everywhere. Just depending, it doesn't matter what you want to do, CNN still run everywhere. LSTM still runs everywhere. And so this architecture that is architecturally compatible so that we have a large install base, programmable for the future,
Starting point is 00:42:48 is really important in the way that we help to advance And as a result, all of this drives the cost down. And I'm super proud that our latest innovation, MVLink 72, we're the lowest cost token generation machine in the world by enormous amounts. And the reason for that is because MEOs are really, really hard. And so, you know, people didn't expect that. That for MEOs, it's probably easier to train, but for inference, it's incredibly hard to generate tokens on.
Starting point is 00:43:20 As cost drop, usually you open up new applications or new verticals that become more and more accessible. And we talked a little bit about coding like cursor and cognition and other companies that are benefiting from that in this last year. Do you have any thoughts or predictions in terms of what the next breakthrough industries will be or new applications or areas that you're most excited about coming in 26 in particular? Like are there one or two things that you think? Because of three things. Because of a couple, two, three things. I think several industries are going to are going to experience their chat GPT. moment. I believe that multi-modality and very long context is going to enable, of course,
Starting point is 00:44:02 really, really cool chatbots. But the basic architecture, that in combination with breakthroughs and synthetic data generation is going to help create the chat GPT moment for digital biology. That moment is coming. And by digital biology, do you specifically mean other aspects of like protein folding and protein binding or do you mean diagnosis? I see protein synthesis. I think we're good at protein understanding. Now, multi-protein understanding is coming online and we recently created a model called Lott Pratina.
Starting point is 00:44:35 It's opened. It's for multi-protein understanding and representation, learning, and generation. So I think that the protein understanding is advancing very quickly. Now protein generation is going to advance very quickly. chat GPT moment, proteins. Yeah, there are a lot of interesting companies working on molecule design and end-to-end way, like Thai. Exactly. And then, and then of course, chemical understanding and chemical generation. And then protein, chemical, confirmation, understanding, and generation. Is that right?
Starting point is 00:45:08 And so that combination, the chat GPT moment, the generative AI moment, all of that stuff, is coming together for digital biology. And to your point about, like, new industries or, you know, And the way I think about it is, like, investing in the inputs for this AI as well. All of these things around biology and chemistry and material science, they require real-world data generation and experimentation. And that's new infrastructure, too. New infrastructure, synthetic data is going to be really important because they just have such sparse, right, sparsity of data. And they just don't have as much as human language. And there, the real breakthrough is going to be when we can train a world foundation model, a foundation model for proteins, a foundation,
Starting point is 00:45:48 model for cells. I'm very excited about both of those things. Once we have a foundation model, our understanding capability, our generative capability, that data flywheel is really going to take off. The second area that I'm excited about, of course, reasoning made huge breakthroughs in language, but because of reasoning, cars are going to be able to perform better. So instead of just perception cars and planning cars, they're going to be reasoning cars. So these cars are going to be thinking all the time. And when they come up, they come up to a circumstance they've never encountered before, they can break it down into circumstances they have encountered it before and construct a reasoning system for how to navigate through it. And so the out of domain,
Starting point is 00:46:32 out of, you know, out of distribution part of AI is going to very much be addressed by reasoning systems. And as a result, we could do more things than what we're taught to do between generative AI and multimodal, you know, vision, language, action models and reasoning systems, I think we're going to see big breakthroughs in humanoid robots or multi-embodiment robots. What do you think is a time frame for that? Because if you look at the self-driving analog and obviously self-driving technologies were based on very different types of neural networks and what we're using today in terms of, you know, there's been a big swap over the last two, three years in terms of how we do a lot there.
Starting point is 00:47:12 We started too soon. Self-driving cars really had four eras. The first era was smart sensors connected into a car. The mobile era. The mobile era. And even the very earliest days of Waybo. EDAs, yeah. Yeah, even the earliest days of Waymo.
Starting point is 00:47:29 You're using smart sensors, a lot of human-engineered algorithms. And heavy mapping, extreme mapping. Mapping, and then different systems for perception and planning. Exactly. And so you're essentially creating a car that is driving on digital rails, right? It's no different than the rails at Disneyland, except they are digital rails. And so that's the first generation. During the second generation, you have perception, world model, and planning as modules. And each one of these modules has the limits of its technology, and perception was affected by deep learning first. And then it propagated through the pipeline. But that system is too brittle. And it only knows how to perform what you taught it. And now where we are is end-to-end models.
Starting point is 00:48:25 And then where we're going to go next are end-to-end models. With reason. Yeah, there go. So that those that are kind of the four eras. In a lot of ways, if we were to start self-driving cars probably three years ago, we'd probably be exactly the same place. All our poor friends who were working in self-driving. Yeah.
Starting point is 00:48:42 And I don't mind it. I've been working on it for 10 years. Envidious self-driving car stack, by the way, number one rated safety in the world today. Number one, we just got that rating today last week. And number two is Tesla. So I'm very proud that two American companies are up on the time. Are you, so from a robotics perspective, you think,
Starting point is 00:49:02 because we've already built all these forces of technologies in the modern era, robotics won't have the same 10, 15 years frog. That's right. We'll just jump straight to it. Much more optimistic with robotics because we've kind of advanced. We've been through foundational technology. Now, you know, people are thinking about human robotics. Human robotics has a lot of challenges.
Starting point is 00:49:23 I mean, there's all the megatronics challenges. Like, for example, it's not helpful if the robot weighs 300 pounds. And what happens if it falls over and interacting with kids and so on, so forth. And so you've got all kinds of challenges they deal with. I'm certain that we're going to solve those. But remember, the fundamental technology that goes into human world. robot can go into a pick-in-place robot. It could be, it could be...
Starting point is 00:49:47 How do you think about... One thing I've been curious about for robotics in particular is, if I look at who won, or who's perceived as winning and self-driving, it's largely incumbents, right? It's Waymo, it's Tesla. You mentioned the safety rating Nvidia's gotten. And so it's people who have been working on this for a long time. It took a lot of capital. It's really intensive to get there.
Starting point is 00:50:05 You have supply chain. You have hardware. You have all this extra complexity. Do you think the same thing will be true in robotics? So the winner is basically going to be Tesla, with optimists and other people who have both been in the industry for a while, but also have all those sort of incumbent effects? Do you think there's room for startups?
Starting point is 00:50:18 They will be one of the leader, one of them, and as surely a major one. But everything that moves will be robotic. Everything that moves will be robotic. And everything that moves is a very large space. It's not all human or robot. And yet every AI will be multi-embodement. Meaning, you know, just like a human with our multi-embodyment AI ourselves, we could sit in a car and embody that. We could pick up a tennis racket embody that.
Starting point is 00:50:54 We could pick up a chopstick embody that. And so we could embody the tools. Yeah, people are general purpose, right? We can do all these things. Exactly. And so AI is going to become general purpose. So you have one arm picking place. Maybe it's two arms picking place.
Starting point is 00:51:05 It can be six arms picking place. You know, so I think you're going to have all kinds of different sizes and shapes. it could be a caterpillar, it could be, you know, an excavator, it could be all kinds of stuff. And so AI will embody those just as a, just as a construction worker embodies an excavator, embodies a tractor. You know, they, you know, AI's- Could there be a small number of companies then that do the embodiment for everything? Or you're saying more there's going to be niche application? I could definitely see a lot of software companies. And then those, that software company could serve a lot of different verticals.
Starting point is 00:51:37 But each one of the verticals will still have solution providers. that then grounds it all, turns it into something that works perfectly. Does it make sense? Yeah. Because in the case of AI for consumers, if it works 90% of the time, you're delighted. You're mind-blown. If it works 80% of time, you're satisfied. In the case of most industrial and physical AIs, if it works 90% of the time, nobody cares about that.
Starting point is 00:52:00 They only care about the 10% that it fails, basically, 100% dissatisfaction. And so you've got to take it to 99.999.9.99. So the core technology might be able to get you to 99%. And then that's a vertical solution provider, like a caterpillar or somebody, they could take that core technology and make a 99.99% great. Do you think that's what happens like earliest on? Because in markets that are this immature, it seems one of the fastest paths to market could be full verticalization, right?
Starting point is 00:52:30 Because you just have control of iterations speed. The difficulty of verticalization for technology that is, this general purpose is that you don't have the R&D scale to build a general purpose technology. Now, of course, open source helps that tremendously, which is the reason why you're going to see a big surge of vertical opportunities in AI in the next several years. My prediction would be over the course of the next five years, the excitement is going to be verticalization. Notice, we're excited about open evidence.
Starting point is 00:53:05 We're excited about Harvey. We're excited about Cursor. Cursor is a horizontal, but it's kind of a horizontal vertical, you know, and so I'm super excited about all the verticals. You know, a lot of people said, yeah, AI is going to get so, God AI is going to get so good that all these rapper companies are going to be obsolete. It's just, it misses the big point. You know, the reason why you could talk about, the reason why somebody can talk, talk about somebody is creating technology, you could talk about the life of a surgeon is because they've never been a surgeon. The reason why somebody who builds an AI and talks about the life of an accountant and a tax, you know, a tax experts because they've never been a tax expert, you know. And so I think they just, you know, the reason why somebody could talk about being a bus boy without being a bus boy is they never been a bus boy.
Starting point is 00:53:50 And so I think you've got to be a little bit more empathetic about the depth of the complexity of the work and try to truly understand the purpose of the work. Oftentimes the technology addresses the task. it doesn't address the purpose. So I guess one of the other narratives from, we're looking at narratives that are true versus not true, you know, for 25. One other narrative that's come up has been more about energy and energy utilization and will we have enough energy to support AI? How do you think about that?
Starting point is 00:54:20 On the first week of President Trump's administration, he said, drill, baby drill. He did so much flag for that. If not for this entire change in sentiment about energy growth in our, country, we can all concede now. We would have handed this industrial revolution to somebody else. And we're still power constrained. We're still power constrained. Without energy, there can be no new industry. And of course, we've been energy starved now for, what, a decade? If not for the fact that President Trump reversed that narrative, we would be completely screwed.
Starting point is 00:54:59 Without energy, you can't have industrial growth. Without industrial growth, the nation can't be more prosperous. Without being more prosperous, we can't take care of domestic issues. We can't take care of social issues, you know, on and on and on. And so the fact that it matters, we need energy to grow. We need every form of energy. We need, you know, natural gas. We need to be, of course, we need more energy on the grid.
Starting point is 00:55:22 We need more energy behind the meter. We're going to need nuclear. Wind is not going to be enough. Solar is not going to be enough. Let's just all acknowledge that we'll take it. We'll take everything we can. But the fact that matters, I think, for the next decade, natural gas, you know, is probably the only way to go forward. What's really interesting is I agree the timeline is too far out to address people's, you know, power generation issues in 27 and 28, where, you know, large players, building clusters are very concerned.
Starting point is 00:55:53 But the biggest drivers of, like, climate innovation in the U.S. have actually been as a result of this AI infrastructure problem. Right? Because people look at the demand. Finally, that's right. Demand signal. They look at the demand. And the demand is driving people to create massive new battery companies, solar concentrators, it's put new energy behind, new energy. Like, you know, willpower behind SMRs and permitting. The AI industry is driving all of that sustainable energy industry.
Starting point is 00:56:23 Yeah. Because people see that there is going to be demand for it. That's right. So even if, and I think there is no practical answer in the small number of years timeframe versus large gas, right? It still drives climate innovation. Yeah, no question about it. And I think that's exactly right, that, you know,
Starting point is 00:56:44 Dumer messages causes policy, and that policy may affect the industry in some way, but there's nothing more powerful than demand. Look at all the jobs that's being created. Look at all the industries that's being formed around it. Sustainable energy likely. And when history, realize it as Sarah, I think you're going to be absolutely,
Starting point is 00:57:02 right, that if not for AI, well, AI is probably the biggest driver for sustainable energy ever. Yeah, a friend of mine has a saying that Doomers are the people who sound smart at dinner parties and optimists of the people who grab humanity forward. And I think that's very true for all these things we've talked about. Yeah, yeah, it's really true. Yeah. Well, that's one of the big, big takeaways for this last year, the battle of narratives.
Starting point is 00:57:29 And it's too simplistic. to say that everything that the doomers are saying are irrelevant, that's not true. A lot of very sensible things are being said. It is too simplistic to say that when somebody is optimistic, that they're just naive. It means to be grounded in reality. Yeah, that optimistic people are just naive, you know, and that that's obviously not true. But I think we just have to be mindful of the balance of it. When 90% of the messaging is all around the end.
Starting point is 00:58:02 end of the world and do and the pessimism. And, you know, I think we're scary people from making the investments in the AI that makes it safer, more functional, more productive and more useful to society. And so we just, you know, more secure. All of that takes technology. Security takes technology. Safety takes technology. I appreciate that my car is safer today because it has better technology than a car 50 years ago. And so, so I think it takes technology to be safe, technology to be secure. And so I'm delighted to see that the advancement of technology is still accelerating and ongoing. And so we just have to make sure that the policy makers around the world, the governments are able to, are thinking about balancing these two ideas.
Starting point is 00:58:50 How do you, so I guess we've talked a lot about 25 and the narrative is 25. How do you think about 26? What are you excited about? What do you see coming? What do you think are big changes that we should be aware of. I am optimistic that our relationship with China will improve. That President Trump and the administration has a really, really grounded and common sense attitude about and philosophy around how to think about China, that they're an adversary, but they're also a partner in many ways and that the idea of decoupling is naive and the idea of decoupling for whatever reason, philosophical reasons or national security reasons, it's just not, it's not based on any common sense. And the more you, the more deeply you look into it, the more the two countries are
Starting point is 00:59:43 actually highly coupled. Both countries ought to invest in their own independence. You know, when you depend too much on someone, the relationship becomes too emotional, as you know. And so it's good to have some independence or as much independence as either would like, but to recognize that there's a lot of coupling, a lot of dependence between the two countries. And I think there needs to be a nuanced strategy, a nuanced attitude about how to manage this relationship in a productive way for all of the people of two countries and for all of the people around the world. everybody depends on a productive, constructive relationship of the two most important nations and the single most important relationship for the next century. And so we have to find that
Starting point is 01:00:32 answer. And I'm just really delighted that President Trump is looking for a constructive answer. And so I think that next year will be a much better year than the last several. I'm happy with the administration was able to suggest a an export control policy that is grounded on national security, recognizing that they already make so many chips themselves, and they can depend on Huawei themselves for their military, for their national security. They've got ample technology to do that. And so that American technology, although general purpose, is unlikely to be used by their military because their military is too smart, just as our military is too smart to use their technology. And so it's grounded
Starting point is 01:01:19 on national security. It's grounded on technology leadership. It's grounded on national prosperity. One of the things that we just always have to remember is that the world's mightiest military is supported by the world's mightiest economy. And so the wealth that we generate brings jobs home, creates prosperity in the United States, provides for tax revenues, and ultimately funds the mightiest military on the planet. And so that circular system, that interconnected system requires a nuanced strategy. And I'm pleased to see some of the progress in that area that allows American technology companies to keep America first and keep America ahead and to support American technology leadership on the one hand to win globally. And then China, of course, is sorting
Starting point is 01:02:16 itself out. I mean, I'm not worried, but they're sorting out their attitude about how to think about American technology. And there... Because the historical argument there has been that, if you look at, for example, the internet, there was what was known as the Great Firewall, right? China basically prevented U.S. competition in China while the opposite wasn't true. There's been mass exportation of U.S. jobs and industry to China as part of the development of the '90s and 2000s. And so I think a lot of the things that people have brought up from a China-U.S. policy perspective, besides just the military adversarial relationship or spheres of influence or, you know, all the various things like that, is also just the economic imbalances that have been perceived to exist between the two
Starting point is 01:02:56 countries. The way that I would think through that is to go back to the first principles of technologies again. Let's say the internet: you have the chip industry, you have the systems industry, the software industry, you have the services industry on top. Remember, China's internet growth has been a boon for Intel and AMD selling CPUs, Micron selling DRAMs, SK Hynix and Samsung selling DRAMs. It is the second largest internet market for the American technology industry. And so maybe, maybe it wasn't helpful to some layer of the stack. The Googles of the world.
Starting point is 01:03:34 That's right. But don't exclude every layer of the stack. Always come back, every single one of these things. Take a step back and look at the whole stack. Maybe that's a theme for today as well. And it makes sense that you would send this message, but, you know, technology is actually not just the sort of internet software application layer that's been very dominant for two decades.
Starting point is 01:03:54 It's the whole stack. And remember, as Intel and A&D prospered with the internet industry in China growth, the China industry growth, don't forget, China also contributed tremendously to open source. No country in the world contributes more to open source than China. And look at all the startups here in America, that we're able to benefit from out of that open source to create the new startups that are here.
Starting point is 01:04:19 And so you can't look at one area in isolation. You have to look at the whole lifecycle of the technology and look at every layer of the stack. Does that make sense? When you take a look at it through that lens, China's Internet industry generated enormous prosperity for America.
Starting point is 01:04:40 Just not at the Internet company per se. Jensen, my other investor friends will not forgive me if I don't ask you about 2026 on the business side. Are we in an AI bubble? AI bubble. Yeah, there's a lot of ways to reason through that. And so again, you know, when asked that question, my mind goes to what is AI and where are we in that? There's AI. Then there's computing.
Starting point is 01:05:09 You know, as you know, Nvidia invented accelerated computing. Accelerated computing does computer graphics and rendering. AI doesn't. Accelerated computing does data processing, SQL data processing. AI doesn't. Accelerated computing does molecular dynamics and quantum chemistry. AI doesn't.
Starting point is 01:05:28 You know, these are all things that people could say someday AI will do, but it doesn't today. Accelerated computing is really essential for classical machine learning: XGBoost, recommender systems, the whole process of feature engineering, extract, load, and transform, that entire data science and machine learning life cycle. Accelerated computing is used for all of that. The first thing to go to, in the context of Nvidia,
Starting point is 01:05:52 what we see is a shift from general purpose computing to accelerated computing, because Moore's Law has largely ended. You can't use CPUs for everything anymore like you used to. It's just no longer productive enough. It's not deflationary enough. And so we have to move towards a new computing model, and that's where accelerated computing comes in. If generative AI, well, excuse me, if chatbots, let's just go, you know, OpenAI and Anthropic and Gemini, if none of that existed today,
Starting point is 01:06:25 NVIDIA would be a multi-hundred-billion-dollar company. And the reason for that is because, as you know, the foundation of computing is shifting to accelerated computing. That's the first thing to realize: take a step back and ask yourself what is actually happening. Now the next layer, the question about AI, becomes: what is AI? Now, we ask the AI bubble question, and we always go back to OpenAI's revenues. 100%, don't we? You ask somebody, hey, is there an AI bubble? Everybody goes directly to OpenAI's revenues.
Starting point is 01:07:00 First of all, if OpenAI had twice the capacity today, their revenues would double. You guys know that. If they had 10 times the capacity, I really believe the revenues would be 10 times. And so they need capacity. This is no different than Nvidia needing wafers from TSMC. Just because, you know, Nvidia exists and we're doing great doesn't mean we don't need capacity. We need capacity.
Starting point is 01:07:20 We need capacity of DRAM. And so in our world, it's sensible to everybody: we need capacity. Well, in their world, they need factories. And if they don't have factory capacity, how do they generate tokens? Which is where we started our conversation today. And so they need factory capacity in order to increase their revenue growth. But nonetheless, we also said that AI is more than chatbots. It includes all these
Starting point is 01:07:43 different fields of science. NVIDIA's AV business is coming up on $10 billion. Nobody ever talks about that. And you have to train world models. You have to train these AI AVs, and it's happening; robotaxis are happening all over the world. Our AI work in digital biology, our AI work in financial services. The whole industry of quants, quantitative trading, is moving towards AI. Yeah, exactly. It used to be classical machine learning with a whole bunch of human-engineered features. They call them quants, right? These specialized mathematicians were trying to figure out what the predictive features are. Now we use AI to figure it out. And so instead of having quants, you need a lot of supercomputers. Financial services is one of our fastest growing segments: billions of dollars in
Starting point is 01:08:31 quants, you know, in financial services, billions of dollars in AV, billions of dollars in robotics coming up, billions of dollars in digital biology. And so how big can all that be? Well, simple logic is this, simple math. Whether you think that AI is going to solve labor shortages or workforce shortages of any kind, let's ignore that for a second. The world is $100 trillion in GDP. Out of that, let's just say 2%, 2% annually, is R&D.
Starting point is 01:09:03 And let's just go back in time. Five years ago, if you were to take the largest drug discovery company in the world, the largest drug company in the world, where was all of their R&D? Wet labs. Today, what are they doing? Building supercomputers. And so there's a fundamental shift in how they think about that $2 trillion. It used to be $2 trillion for the old way of doing things. Now it's going to be $2 trillion in the AI way of doing things. And that $2 trillion
Starting point is 01:09:32 of R&D is going to be powered by a whole bunch of infrastructure. And that's the reason why we're building supercomputers everywhere around the world. And so I think if you reason about it from the outside in, you know, either from the foundation up or from the outside in, you come to the conclusion that what we're experiencing, what all three of us are experiencing, is that the amount of computing demand is insane. Give me an example of a startup company that goes, no, we're good. They are all dying for computing capacity.
Starting point is 01:10:05 Give me an example of a researcher in any university, a scientist in any company, who says, got plenty of capacity. Everybody is dying for capacity. And so we have a global, multi-company, multi-industry shortage. It's not just about OpenAI, even though OpenAI could use a lot more capacity as
Starting point is 01:10:24 well. So I think, how do we think about this? The narrative is not helpful, and it's a little bit too superficial to say, how do you prove there's an AI bubble? $12 billion of revenues, hundreds of billions of infrastructure being built. It's a little bit too simplistic. Yeah. The other thing people tend to point out is the MIT study. There's some study that I think came out of MIT that claimed that most enterprise deployments of AI weren't that useful. And you're like, well, did you do the change management? Did you do the reorg? Did you integrate it into tooling? Like, how long would it even take to implement it? If a planning cycle in an enterprise is a year, how would you even deploy something in six months?
Starting point is 01:11:01 And so it feels like there are a lot of these kinds of, again, overstated things that get a lot of attention, but then you map it against what's actually happening and the growth of these companies using AI, and it's just a completely different world. And if you want to find out where the world's innovation is happening, I would not go find out at an enterprise. Would you guys agree? Yeah. Enterprises are like the slowest adopters of new technologies. I would go talk to all of the startups, the 30, 40,000 startups that are currently doing this stuff.
Starting point is 01:11:32 I would go talk to OpenEvidence. How's it working? I would go talk to Cursor. How's coding working, by the way? You know, I would just go talk to these people. I think it's really interesting that you see that, of course, you do have companies making, you know, $100 million plus, multi-hundred-million-dollar-plus progress per year in enterprise sales, RV, Sierra, etc. But some of the fastest growing companies have been end-user adopted, even in conservative industries, right? Like healthcare, or, you know, skeptical industries like engineering.
Starting point is 01:12:00 Healthcare, right? The most conservative of all. But guess what? They are so concerned about getting the right answer that the ability to have something like open evidence to do grounded research, high quality research and get that research as information to you. nobody wants to do research they want answers nobody wants to do search they want answers yeah a bridge is a great example that too where they're basically making it really easy to do the physician notes instead of the position sitting there and doing it back to your point on task versus past versus purpose and i think a different way to think about the demand is like
Starting point is 01:12:33 there are so many jobs where the work is actually, like, an impossible ask, right? Of a doctor or a radiologist: keep up with the world's biomedical knowledge, yeah, and R&D, which is accelerating, you know, in computing and otherwise, and then do
Starting point is 01:12:55 I'm, I don't do that anymore. But now I just load it all into chat chbtee. You know, now I just load it all in with all of the ones that are interesting. And then I make it learn it. Yeah. And then I,
Starting point is 01:13:07 you know, make it summarize. Then you plug it out. And then another summary. And I interact with it. But the point is we used to do search. We don't do it anymore. I don't do search.
Starting point is 01:13:16 We used to do research. You know, the goal is to get answers. The goal is to get smarter. And these AIs help us do all that. And I think all of it comes back; it's all more helpful if you come back to the framework that says AI is a multi-layer cake, and that AI is not just a chatbot. AI is very, very diverse in all of the industries and modalities and information and applications that it addresses.
Starting point is 01:13:47 when you think about wanting to win that America should win AI, it should not just be America should have this company win AI but we should try to win across the board and across domains. Across domains, exactly. And when we think about open source, all of a sudden
Starting point is 01:14:05 this is a helpful framework. When we think about winning, it's a helpful framework. When we think about energy is a helpful framework because we need factories, factories need energy. And without energy, we have no factory. Without factories, have no AI. That's a helpful framework. And so I think if we have a better understanding
Starting point is 01:14:25 a system of framework for understanding what AI is, I think the narratives will become more common sense. The narratives will become more pragmatic, become more balanced. We want to keep people safe. But one of the best ways to keep people safe is advancing a technology quickly. And I think the industry is doing that. And I'm very proud of the industry for doing No one wants to drive a car from, you know, the first decade of cars. No way. I think... ABS is a really good thing.
Starting point is 01:14:55 Yes. ABS is a really good thing. Lanekeeping is a really good thing. There's no question. FSD is a really good thing. And I think people will be excited about the, you know, third or fourth year of AI. Yeah, no doubt. And I say with great pride that the industry made tremendous strides this last year.
Starting point is 01:15:15 all the technologies we've mentioned and that the scaling laws are so intact that we we now know that more compute more intelligence and and gosh the the the innovations in one in in sector diffuses and spreads across all of the other sectors so fast I'm so happy to see all that and so I think the next five years it's going to be extraordinary No doubt about it. And I think next year it's going to be incredible. Amazing.
Starting point is 01:15:48 Well, we're excited to talk to you at the end of next year, too. Yeah, looking forward to it. Thanks so much. Thank you guys for all the work that you guys do. Congratulations. What a great year. Wow, that's for an amazing year.
Starting point is 01:15:56 Yeah. A lot, thank you. Thank you. Thanks, Johnson. Happy New Year. Yeah. Find us on Twitter at No Pryor's Pod. Subscribe to our YouTube channel
Starting point is 01:16:06 if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no dash priors.com.
