Dwarkesh Podcast - Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat

Episode Date: April 15, 2026

I asked Jensen about TPU competition, Nvidia’s lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips to China, why Nvidia doesn’t just become a hyperscaler, how it makes its investments, and much more. Enjoy!

Watch on YouTube; read the transcript.

Sponsors

* Crusoe’s cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story. For inference, Crusoe’s MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster TTFT and 5x better throughput than vLLM. Learn more at crusoe.ai/dwarkesh

* Cursor helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black box, Cursor let me stay on top of the full implementation. You can try my co-researcher out here, or get started on your own Cursor project today at cursor.com/dwarkesh

* Jane Street spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions, like comparing the base and fine-tuned versions and extrapolating any differences to reveal the hidden backdoor, but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at janestreet.com/dwarkesh

Timestamps

00:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains?
00:16:25 – Will TPUs break Nvidia’s hold on AI compute?
00:41:06 – Why doesn’t Nvidia become a hyperscaler?
00:57:36 – Should we be selling AI chips to China?
01:35:06 – Why doesn’t Nvidia make multiple different chip architectures?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript
Starting point is 00:00:00 We've seen the valuations of a bunch of software companies crash because people are expecting AI to commoditize software. And there's a potentially naive way of thinking about things, which is like, look, Nvidia sends a GDSII file to TSMC. TSMC builds the logic dies. It builds the switches. Then it packages them with the HBM that SK Hynix and Micron and Samsung make. Then it sends it to an ODM in Taiwan where they assemble the racks. And so, Nvidia is fundamentally making software that other people are manufacturing. And if software gets commoditized, does NVIDIA get commoditized?
Starting point is 00:00:32 Well, in the end, something has to transform electrons into tokens. That transformation, turning electrons into tokens and making those tokens more valuable over time, I think that's hard to completely commoditize.
Starting point is 00:00:58 The transformation from electrons to tokens is such an incredible journey. And making that token, it's like making one molecule more valuable than another molecule, making one token more valuable than another. The amount of artistry, engineering, science, invention that goes into making that token valuable, obviously we're watching it happen in real time. And so the transformation, the manufacturing, all of the science that goes into it is far from deeply understood. And the journey is far from over. And so I doubt that it will happen.
Starting point is 00:01:40 We're going to make it more efficient, of course. I mean, the whole thing about Nvidia, in fact, the way that you framed the question is my mental model of our company. The input is electrons. The output is tokens. Nvidia is what sits in the middle. And our job is to do as much as necessary, and as little as possible, to enable that transformation to be done at incredible capabilities. And what I mean by as little as possible: whatever I don't need to do,
Starting point is 00:02:12 I partner with somebody and I make it part of my ecosystem to do. And if you look at Nvidia today, we probably have the largest ecosystem of partners, both in supply chain upstream, supply chain downstream, all of the computers, computer companies, and all the application developers and all the model makers and all the, you know, AI is a five-layer cake, if you will, and we have ecosystems across the entire five layers.
Starting point is 00:02:37 And so we try to do as little as possible, but the part that we have to do, as it turns out, is insanely hard. And I don't think that that gets commoditized. In fact, in fact, I also don't think that the enterprise software companies, the tools makers, you know, most of the software companies today are tools makers. Some of them are not, but some of them are workflow codification, you know, systems. But for a lot of companies, they're tool makers. For example, you know,
Starting point is 00:03:12 Excel is a tool, PowerPoint's a tool, cadence makes tools, synopsis makes tools. I actually see the opposite of what people see. I think the number of agents are going to grow exponentially. The number of tool users are going to grow exponentially. And it's very likely that the number of instances of all these tools are going to skyrocket. It is very likely the number of instances of synopsis design compiler is going to skyrocket. And the number of agents that are going to be using the floor planners and all of our layout tools and our design role checkers, the number of agents that are today we're limited by the number of engineers.
Starting point is 00:03:59 Tomorrow those engineers are going to be supported by a bunch of agents. We're going to be exploring the design space like you've never seen explore before and want to use the tools that we use today. And so I think tool use is going to cause these software companies to skyrocket. The reason why it hasn't happened yet is because the agents aren't good enough at using their tools yet. And so either these companies are going to build the agents themselves or agents are going to get good enough to be able to use those tools. And I think it's going to be a combination of both. I think in your latest filings, you had almost $100 billion in purchase commitments with people, foundries, memory, packaging. And then semi-analysis has reported that you will have $250 billion of these kinds of purchase commitments.
Starting point is 00:04:44 And so one interpretation is NVIDIA's mode is really that you've locked up many years of these scarce components that are, you know, somebody else might have an accelerator, but can they actually get the memory to build it? Can they actually get the logic to build it? And this is really NVIDIA's big mode for the next few years. Well, it's one of the things that we can do that is hard for someone else to do. The reason why we could, we've made enormous commitments upstream. Some of it is explicit, these commitments that you mentioned. Some of it is implicit. For example, a lot of the investments that are upstream are made by our supply chain because I said to the CEOs, let me tell you how big this industry is going to be and let me explain to you why and let me reason through it with you and let me show you what I see. And so as a result of that process of informing, inspiring, aligning with CEOs of all. different industries upstream, they're willing to make the investments. Now, why are they willing to make the investments for me and not someone else? And the reason for that is because they know that I have the capacity to buy it, buy their supply, and sell it through my downstream. The fact that
Starting point is 00:06:01 Nvidia's downstream supply chain and our downstream demand is so large, they're willing to make the investment upstream. And so if you look at GTC and, and, you know, and, you know, And, you know, people are marveled by the scale of GTC and the people that go. It's a 360 degree. It's the entire universe of AI all in one place. And they're all in one place because they need to see each other. I bring them together so that the downstream could see the upstream, the upstream could see the downstream. And all of them could see all the advances in AI.
Starting point is 00:06:36 And very importantly, they can all meet the AI natives and all the AI startups that are all, you know, being built and all the amazing things that are happening. so that they could see firsthand all the things that I tell them. And so I spent a lot of my time informing directly or indirectly our supply chain and our partners and our ecosystem about the opportunity that's in front of us. You know, most of my keynotes, you know, some people always say, you know, Jensen, in most keynotes, it's like one announcement after another announcement, after another announcement after another announcement. our keynotes are, there's always a part of it that's a little torturous in the sense that it almost comes across like education. And in fact, that's exactly on my mind. I need to make sure that the entire supply chain upstream and downstream, the ecosystem, understands what is coming at us, why is coming, when is coming, how big is it going to be, and be able to reason about it systematically. just like I reasoned about it.
Starting point is 00:07:42 And so I think the mode as you describe it, we're able to, of course, build for a future. If our next several years is a trillion dollars in scale, we have the supply chain to do it. Without our reach, the velocity of our business, you know, just as there's cash flow, there's supply chain flow, their turns. Nobody's going to build a supply chain for an architecture if the architecture, the business turns, is low. And so our ability to sustain the scale is only because our downstream demand is so great. And they see it and they all hear about it. They see it all coming. And so it allows us to do the things that we're able to do at the scale we're able to do.
Starting point is 00:08:32 I do would understand more concretely whether the upstream can keep up. for many years now you guys have been 2xing revenue year over year you guys have been more than tripling the amount of flops you're providing
Starting point is 00:08:43 to the world year over year and 2xing at the scale now it's really incredible Yeah so then you look at logic say You're the biggest customer on TSMC's N3 node
Starting point is 00:08:54 and you're one of the biggest on N2 AI as a whole this year is going to be 60% of N3 it's going to be 86% next year according to some analysis How do you 2X if you're
Starting point is 00:09:05 the majority. And how do you do that year over year? So are we in a regime now where the growth rate in AI compute has to slow because of upstream? Do you see a way to get around these, you know, how do we build 2X more fabs year over year, ultimately? Yeah. At some level, the instantaneous demand is greater than the supply upstream and downstream in the world. And it could be at any instance, we could be limited by the number of plumbers, which actually happens. The plumbers are admitted to next year's GTC.
Starting point is 00:09:47 Yeah, by the way, great idea. But that's a good condition. You want a market, you want an industry where the instantaneous demand is greater than the total supply of the industry. The opposite is obviously less good. If we're too far apart, if one particular item, one particular component is too far, too far away, obviously the industry swarms it. So, for example, notice people aren't talking very much about COAS anymore. And the reason for that is because for two years, we swarmed a living daylights out of it. And we double, double, double on several doubles.
Starting point is 00:10:27 And now I think we're in a fairly good shape. And TSM now knows that COA supply has to keep up with the rest of the lot. and the memory demand and so they're scaling COOS and they're scaling you know future packaging technologies at the same level as a scale logic which is terrific because for a long time COOS was rather specialty and HBM memory with rather specialty but they're not specialties anymore people now realize their mainstream computing technology and then and of course we're now much more able to
Starting point is 00:11:05 influence a larger scope of our supply chain. In the past, in the past, you know, in the beginning of the AI revolution, all the things that I say now, I was saying five years ago. And some people believed in it and invested in it. For example, Sanjay and the Micron team, I still remember the meeting really well, where I was clear about exactly what's going to happen and why it's going to happen and and the predictions, the predictions of today. And they really double down on it and we partner with them and across LPDDR, across, you know, HBM memories, they really invest in it. And it obviously has been tremendous for the company. Some people came a little bit later and, but now they're all here. And so I think each one of these generation, each one of these
Starting point is 00:11:57 bottlenecks gets a great deal of attention. And now we're pre-fetching the bottlenecks years in advance. So, for example, the investments that we've done with Lumentum and coherent and all of the silicon photonics ecosystem, the last several years we really reshaped the ecosystem and the supply chain Silicon Photonics. We built up an entire supply chain around TSM. We partnered with them on COOP, invented a whole bunch of technology. technology. We licensed those patents to the supply chain, keep it nice and open. And so we're preparing the supply chain through invention of new technologies, new workflows, new testing equipment,
Starting point is 00:12:43 double-sided probing, investing in companies, helping them scale up their capacity. And so you could see that we're trying to shape the ecosystem so that it's ready, the supply chain, so that it's ready to support the scale. It seems like some bottlenecks are easier than others. And so scaling up co-os versus scaling up... I went to the hardest one, by the way. Which is? Plumbers. Yeah.
Starting point is 00:13:07 It's true. Yeah, yeah. I actually went to the hardest one. Yeah. Plumbers and electricians. And the reason for that is because... And this is one of the concerns that I have about all the doomers, describing the end of work and killing of jobs.
Starting point is 00:13:22 And, you know, one of the things that if we discourage people from being software engineers, we're going to, we're not a software engineers. And the same prediction 10 years ago, some of the, some of the Dumers were saying that we're telling people, whatever you do, don't be a radiologist. And you might hear some of those videos are still on the web. You know, radiology is going to be the first career to go. Nobody's, the world's not going to need any more radiologists. Guess what we're short of, radiologists. Oh, but, okay, so going back to this point about, well, some things you see. scale other things like how do you actually get how do you actually manufacture 2x the amount of logic
Starting point is 00:14:03 a year ultimately that's bottleneck by memory and logic or bottleneck but uv how do you get to two X as many uv machines a year yeah year over year none of that's impossible to scale quickly you just need to you you could do all of that is easy to do within two or three years you just need a demand signal that it's not once you once you can build one you could build 10 and once you can build 10 you can build a million and so These things are not hard to replicate. How far down the supply chain do you go where do you go to ASML and say, hey, if I look out three years from now, for me to, for NVIDIA to be generating $2 trillion a year in revenue, we need way more AUV machines. Some of them I have to directly, some of them are indirectly.
Starting point is 00:14:45 And some of them, if I can convince TSMC as ASML will be convinced. And so that's that, you know, we have to think about the critical, critical pinch points. But if TSMC is convinced, you'll have plenty of EV machines in a few years. And so none of that, my point is that none of the bottlenecks last longer than a couple, two, three years. None of them. And meanwhile, we're improving computing efficiency by 10x, 20x, in the case of Hopper to Blackwell, some 30-50X. We're coming up with new algorithms because Kuda is so flexible. we're developing all kinds of new techniques
Starting point is 00:15:26 so that we drive efficiency in addition to increasing capacity. So those are things that none of that worry me. It's the stuff that's downstream from us. Energy policies that prevent energy from, you know, you can't create an industry without energy. You can't create a whole new manufacturing industry without energy.
Starting point is 00:15:51 We want to reindustrialize the United States We want to bring back chip manufacturing and computer manufacturing and packaging. And we want to build new things like EVs and robots. And we want to build AI factories. And you can't build any of these things without energy. And those things take a long time. But more chip capacity, that's a two, three year problem. More co-os capacity, two, three-year problem.
Starting point is 00:16:13 Interesting. I feel like I have guests told me the exact opposite thing sometimes. In this case, I just don't have the technical knowledge to adjudicate. Well, the beautiful thing is you're talking to the expert. Yeah. True, true. Okay, I want to ask about your competitors. Yeah.
Starting point is 00:16:28 So if you look at TPU, arguably two out of the top three models in the world, Claude and Gemini were trained on TPU. What does that mean for Invidia going forward? Well, we have a very different, we build a very different thing. You know, what Invidia built is accelerated computing, not a tensor processing unit. And accelerated computing is used for all kinds of things. You know, molecular dynamics and quantum chromodynamics,
Starting point is 00:17:01 and it's used for data processing, data frames, structured data, unstructured data. It's used for fluid dynamics, particle physics. You know, and in addition, we use it for AI. And so accelerated computing is, is much more diverse and although AI is the conversation today
Starting point is 00:17:24 is obviously very important and impactful. Computing is much broader than that. And what Nvidia has done is reinvented the way computing is done from general purpose computing to accelerate computing.
Starting point is 00:17:38 Our market reach is far greater than any TPU any ASIC can possibly have. And so if you look at our position, we're the only company that that accelerates applications of all kinds. We have a gigantic ecosystem. And so all kinds of frameworks and algorithms all run on Nvidia.
Starting point is 00:18:01 And because our computers are designed to be operated by other people, anyone who's an operator could buy our systems. Most of these home-built systems, you have to be your own operator because it was never designed to be flexible enough for other people to operate. And so as a result of the fact that anybody can operate our systems, we're in every cloud, including Google and Amazon and, you know,
Starting point is 00:18:29 Azure and OCI and, right? And so whether you want to operate it to rent or operate it, if you want to operate to rent, you better have large ecosystem of customers in many industries that be the off-takers. If you're operating it, if you want to operate it for yourself, we obviously have the ability to help you operate yourself, like for example, for YLong with XAI.
Starting point is 00:18:55 And because we could enable operators in any company, in any industry, you could use it to build a supercomputer for scientific research and drug discovery at Lilly. And so we can help them operate their own supercomputer and use it for the entire diversity of drug discovery and biological science. that we accelerate. And so there are just, you know, a whole bunch of applications that we can address that you can't do so with TPUs. Because NVIDIA's built Kuta as a fantastic tensor processing unit as well, but it does, you know, it does every life cycle of data processing and computing and AI and so on so forth. And so our market opportunity is just a lot larger. our reach is a lot greater and because we have such a large
Starting point is 00:19:49 we basically support every application in the world now you could build Nvidia systems anywhere and know that there will be customers for it and so it's a very different thing this is going to be sort of a long question but you know you have spectacular revenue and this revenue is mostly
Starting point is 00:20:05 you're not making $60 billion a quarter from pharma and quantum you're making it because AI is unprecedented technology that is going unprecedentedly fast. And so then the question is what is best for AI specifically, and I'm not in the details, but I talk to my AI researcher friends, and they say, look, when I use a TPU,
Starting point is 00:20:23 it's this big systolic array that's perfect for doing matrix multiplies, whereas a GPU is very flexible. It's great when you have lots of branching, when you have irregular memory access. But what is AI? Just like these very predictable matrix multiplies again and again and again. And you don't have to give up any direia for warp schedulers. for switches between threads and memory banks.
Starting point is 00:20:47 And so the TPU is really optimized for the majority, the bulk of this growth in revenue and use case for a compute that is coming online right now. Yeah, I wonder how you react to that. Matrix multiplies is an important part of AI, but it's not the only part of AI. And if you want to come up with a new attention mechanism or if you want to disaggregate in a different way,
Starting point is 00:21:12 if you want to come up with a whole new type of architecture altogether, for example, you know, a hybrid SSM, if you want to use a, you want to create a model that that fuses diffusion and auto-regressive somehow, you want an architecture that's just generally programmable. And we run everything you can imagine. And so that's the advantage. It allows for invention of new algorithms a lot more easily. And so because it's a programmable system. And the ability to invent new algorithms is really what makes AI advance so quickly. You know, TPUs like anything else is impacted by Moore's Law. And we know that Moore's Law is increasing about 25% per year.
Starting point is 00:22:08 And so the only thing. way to really get 10x leaps, 100x leaps, is to fundamentally change the algorithm and how it's computed every single year. And that's MVD's fundamental advantage. The only reason why we were able to make Blackwell to Hopper 50 times, you know, I said it was 35 times, and when I first announced it was going to, Blackwell was going to be 35 times more energy efficient than Hopper. Nobody believed it. And then Dylan wrote an article. He said, in fact, in fact, I sandbagged it's actually 50 times. And you can't reasonably do that with just Moore's law. And so the way that we solve that problem is new models, M-O-E's, paralyzed and disaggregated and
Starting point is 00:23:02 and distributed across a computing system and without the ability to really get down and come up with new kernels with CUDA, it's really hard to do. And so the combination of the programmability of our architecture, the fact that Nvidia is an extreme co-designed company where we could even offload
Starting point is 00:23:28 some of the computation into the fabric itself, NVLink, for example, into the network spectrum X, and that we could affect change across the processors, the system, the fabric, the libraries, the algorithm. All of that was done simultaneously. Without CUDA to do that, I wouldn't even know where to start. My sponsor Crusoe was among the first clouds to offer NVIDIA's Blackwell and Blackwell Ultra platforms.
Starting point is 00:23:58 And they just announced their NVIDIA-VARA-Rubon deployment scheduled for later this year. But access to state-of-the-art hardware is only part of the story. For example, most inference engines already do KV caching for a single user's forward passes. But Crusoe does it across users in GPUs. So if a thousand agents are running on the same system prompt, Crusoe only has to compute the KV cache once for it to become available to every single GPU in the cluster. This is especially important as systems get more authentic and require much longer prefixes in order to use tools and access files. In a recent benchmark, Crusoe was able to deliver up to 10 times faster time to first
Starting point is 00:24:31 token and up to five times better throughput than VLLLM. This is just one among many reasons that you should run your infant workload with Crusoe. And if you need GPUs for training, you don't need to switch clouds. Crusoe's got you covered there too. Go to cruso.a.ai slash Thor Cash to learn more. So this gets an interesting question about NVIDIA's clientele, where if 60% of your revenue is coming from these big five hypers, you know, in a different era with different customers,
Starting point is 00:25:01 Let's say it's professors who are running experiments. And they are helped a bunch by they need Kuta. They can't use another accelerator. They need to just run PITORch with Kuda and have everything optimized. But if you got these hyperscalers, they have the resources to write their own kernels. In fact, they have to get that extra last 5% that they need for their specific architecture. Anthropic Google are mostly running their own accelerators or running TPUs and Traneum. But even OpenEI, using GPUs.
Starting point is 00:25:31 using GPUs as Triton, which they're like, we need our own kernels. So they've down to Kuda C++, they've, instead of using Kubez and nickel and everything, they've got their own stack, which compiles to other accelerators as well. And so if most of your customers can, can and do make replacements for Kuta, to what extent is Kuda really the thing that is going to make Frontier AI happen on NVIDIA? Kuta is a rich ecosystem. And so if you want to build on any computer first, building on Kuda first is incredibly smart. And because the ecosystem is so rich, we support every framework.
Starting point is 00:26:16 If you want to create custom kernels, if you need, for example, we contribute enormously to Triton. And so the back end of Triton, huge amounts of Nvidia technology. We're delighted to help every framework become as great as it's going to be. And there's lots and lots of frameworks. There's Triton, there's S.G. Lang, and there's more, right? And now there's a whole bunch of new reinforcement learning frameworks coming out. You know, you got Vero, you got NEMORL, you got a whole bunch of new. And then now with post-training and reinforcement learning, that entire area is just exploding, right?
Starting point is 00:26:53 And so if you want to build on an architecture, building on a KUDA makes the most sense. Because you know that the ecosystem is great. You know that if something happens, it's more likely in your code and not in the mountain of code underneath. You know, don't forget the amount of code that you're dealing with when you're building these systems,
Starting point is 00:27:11 when something doesn't work, was it you or was it the computer? You would like it always to be you and to be able to trust the computer. And obviously, we still have lots and lots of bugs ourselves, but our system is so well wrung out that you could at least build on top of the foundation. So that's number one,
Starting point is 00:27:32 is that the richness of the ecosystem or programmability of it, the capability of it. The second thing is, if you were a developer and you were building anything at all, the single most important thing you want more than anything is install base. You want the software that you run to run on a whole bunch of other computers.
Starting point is 00:27:49 You don't want to build a software. You're not building software just for yourself. You're building software for your fleet or for everybody else's fleet because you're a framework builder. And Nvidia's CUDA ecosystem is ultimately its great treasure. We are now, I don't know how many, several hundred million GPUs, every cloud has it, goes back to A10, A100, H100, H100, H100, you know, the L series, the P series.
Starting point is 00:28:18 I mean, there's a whole bunch of them. And they're in all kinds of sizes and shapes. And if your robotics company, you want that CUDA stack to actually run in the robot itself. We're literally everywhere. And so the install base says that once you develop the software, once you develop the model, it's going to be useful everywhere. And so the install base is just too incredibly valuable. And then lastly, the fact that we're in every single cloud makes us genuinely unique.
Starting point is 00:28:46 Because you're an AI company and you're an AI developer, you're not exactly sure which CSP you're going to partner with and where you're you would like to run it, and we'd run it everywhere, including on-prem for you if you like. And so I think that the richness of the ecosystem, the expansiveness of the install base and the versatility of where we are, that combination makes CUDA invaluable. That makes a lot of sense. I guess the thing I'm curious about is whether those advantages matter a lot to your main customers. Like there's many people who they might matter for the kind of person who can actually build their own software stack who make up most of your revenue. Especially if you go to a world where AI is getting especially good at the things which have tight verification loops working RL on them.
Starting point is 00:29:41 And then this question of how do you write a kernel that does attention or, MLP the most efficiently across a scale-up. It's a very verifiable sort of feedback loop. And so, oh, can everybody, can all the hyperscalers write these custom kernels for themselves? And they might still, Nvidia still has great PICE performance, or they might still prefer to use NVIDIA. But then the question is, does it just become a question of who is offering the best specs, the best flops and memory and memory bandwidth for a given dollar? where historically, Enmidi has just had, and still has, you know, the best margins in all of AI across hardware and software, 70% plus, because of this Kudemote. And the question is, oh, can you sustain those margins if, for most of your customers, they can actually afford to build instead of the Kudamote?
Starting point is 00:30:33 The number of engineers we have assigned to these AI labs is insane, working with them optimizing their stack. And the reason for that is because nobody knows our architecture better than we do. And these architectures are not as general purpose as a CPU. The reason why a CPU is so, you know, CPU is kind of like a Cadillac, you know, it's just always, you know, it's a nice cruiser. It never goes too fast. Everybody drives it pretty well, you know. It's got cruise control, you know, and everything is easy. But in a lot of ways,
Starting point is 00:31:12 Nvidia's GPUs are accelerators are kind of like F1 racers. And yeah, I could imagine everybody's able to drive it at 100 miles an hour. But it takes quite a bit of expertise to be able to push it to the limit. And we use a ton of AI to create the kernels that we have.
Starting point is 00:31:32 And I'm pretty sure we're going to still be needed for quite some time. And so our expertise helps our, our AI labs partners get another 2X out of their stack easily, oftentimes. It's not unusual that we, you know, by the time that we're done optimizing their stack or optimizing a particular kernel, their model sped up by 3x, 2X, 50%. That's a huge number, especially when you're talking about the install base of the fleet that
Starting point is 00:32:06 they have, of all the Hoppers and Blackwells that they have. When you increase it by a factor of two, that doubles the revenues. That directly translates to revenues. Nvidia's computing stack is the best performance per TCO in the world, bar none. Nobody can demonstrate to me that any single platform in the world today has a better performance-per-TCO ratio, not one company. And in fact, the benchmarks are out there. Dylan's, right, InferenceMAX is sitting out there for everybody to use. And not one — TPU won't come, Trainium won't come. I encourage them to use InferenceMAX and demonstrate their incredible inference cost.
Starting point is 00:32:55 It's really, really hard. Nobody wants to show up. MLPerf — I would welcome Trainium to demonstrate the 40% that they claim all the time. I would love to hear them demonstrate the cost advantage of TPUs. It makes no sense in my mind. It makes absolutely zero sense. On first principles, it makes no sense. And so I think the reason why we're so successful is simply because our TCO is so great. There's a second thing. You say 60% of our customers are the top five, but most of that business is external. For example, most of Nvidia in AWS is for external
Starting point is 00:33:40 customers, not internal use. Most of our customers at Azure, obviously, are external. All of our customers at OCI are external, not internal use. The reason why they favor us is because our reach is so great, we can bring them all of the great customers in the world. They're all built on Nvidia. And the reason why all these companies are built on Nvidia is because our reach and our versatility is so great. And so I think the flywheel is really install base, the programmability of our architecture, the richness of our ecosystem. And the fact that there's so many AI companies in the world, there's tens of thousands
Starting point is 00:34:20 of them now. And if you were one of those AI startups, what architecture would you choose? You would choose an architecture that's the most abundant in the world, the one that has the largest install base. We have the largest install base. And one that has a rich ecosystem. And so that's the flywheel. That's the reason why — between the combination of, one, our perf per dollar is so great that they have the lowest cost tokens.
Starting point is 00:34:49 Second, our perf per watt is the highest in the world. And so if one of our partners built a one-gigawatt data center, that one-gigawatt data center better deliver the maximum amount of revenues and number of tokens, which directly translates to revenues. You want to generate as many tokens as possible, maximize the revenues for that data center. We are the highest tokens-per-watt architecture in the world. And then lastly, if your goal is to rent the infrastructure, we have the most customers in the world. And so that's the reason why the flywheel works.
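The tokens-per-watt argument above is back-of-envelope arithmetic: a fixed power budget times throughput per watt gives tokens, and tokens times price gives revenue. This sketch uses entirely made-up numbers — none of them are Nvidia or industry figures.

```python
# All figures below are illustrative assumptions, not real data.
power_budget_w = 1e9        # a "one gigawatt" data center
tokens_per_joule = 1.0      # assumed throughput: tokens per watt-second
price_per_m_tokens = 2.00   # assumed price per million tokens served

seconds_per_year = 365 * 24 * 3600
tokens_per_year = power_budget_w * tokens_per_joule * seconds_per_year
revenue_per_year = tokens_per_year / 1e6 * price_per_m_tokens

print(f"tokens/year:  {tokens_per_year:.3e}")
print(f"revenue/year: ${revenue_per_year:,.0f}")

# The point of the argument: revenue scales linearly with tokens per watt.
# Double the efficiency and the same gigawatt yields double the tokens.
doubled = power_budget_w * (2 * tokens_per_joule) * seconds_per_year
assert doubled == 2 * tokens_per_year
```

Whatever the real constants are, the structure of the claim is this linear relationship: when power is the binding constraint, tokens per watt is the term that determines data-center revenue.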
Starting point is 00:35:25 Interesting. I guess the question comes down to, what is the actual market structure here? Because there could have been a world where tens of thousands of AI companies have roughly equal share of compute. But even through these five hyperscalers,
Starting point is 00:35:41 really the people on Amazon using the compute are Anthropic, OpenAI, and these big foundation labs, who can themselves afford, and have the ability, to make different accelerators work. No, I think your premise is wrong. Maybe.
Starting point is 00:35:59 But let me ask you a different question. Come back and make me correct your premise. Okay. Let me just ask a different question, which is — okay, everything you're saying.
Starting point is 00:36:08 But still, make sure you come back and make me fix it, because it's just too important to AI. It's too important to the future of science. It's too important
Starting point is 00:36:16 to the future of the industry. That premise, the premise — look. Let me just finish the question and then you can address it together. Yeah. So if all these things about price performance and performance per watt, et cetera, are true,
Starting point is 00:36:33 why do you think it is the case that, say, Anthropic, for example, just announced a couple days ago they have a multi-gigawatt deal with Broadcom and Google for TPUs for the majority of their compute? Obviously for Google, TPU is a majority of their compute. So if I look at these big AI companies, it seems like a lot of their compute, at some point it was all Nvidia, and now it's not. And so I'm curious how to square it: if these things are true on paper, why are they going with other accelerators? Yeah. Anthropic is a unique instance and not a trend. Without Anthropic, why would there be any TPU growth at all?
Starting point is 00:37:14 It's 100% Anthropic. Without Anthropic, why would there be any Trainium growth at all? It's 100% Anthropic. I think that's fairly well known and well understood. It's not that there's an abundance of ASIC opportunities. There's only one Anthropic. But OpenAI has deals with AMD. They're building their own Titan accelerator.
Starting point is 00:37:35 Yeah, but they're mostly — I think we could all acknowledge — vastly Nvidia. And we're going to still do a lot of work together. Yeah. And I'm not offended by other people using something else and trying things. If they don't try these other things, how would they know how good ours is, you know? And sometimes you've got to be reminded of it. And we have to continuously earn the position that we're in. There are always big claims, and look at the number of ASICs that have been canceled. Just because you're going to build an ASIC, you still have to
Starting point is 00:38:11 build something better than Nvidia. And it's not that easy building something better than Nvidia. It's not sensible, actually. You know, Nvidia would have to be missing something seriously. Because our scale, our velocity — we're the only company in the world that's cranking it out every single year, big leaps every single year. I guess their logic is that, hey, it doesn't need to be better. It just needs to be not more than 70% worse, because they're paying you 70% margins. No, no, no, don't forget. Even the ASIC margins are really quite high. Nvidia's margin is 70%, let's say, but an ASIC margin is 65%. What are you really saving? Oh, you mean from Broadcom or something?
Starting point is 00:38:52 Yeah, sure. You got to pay somebody. And so I think the ASIC margins are incredibly good from what I can tell. And they believe it too. And so they're quite proud of their incredible ASIC margins. And so you asked the question why. A long time ago, we just didn't have the ability to do it.
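The margin arithmetic in this exchange can be made explicit. If the underlying manufacturing cost were the same, a 65% gross margin versus a 70% gross margin changes the price far less than the headline numbers suggest. The cost figure here is purely hypothetical.

```python
def price_at_margin(cost, margin):
    # Gross margin = (price - cost) / price, so price = cost / (1 - margin).
    return cost / (1.0 - margin)

cost = 10_000.0  # assumed manufacturing cost per accelerator (illustrative)

nvidia_price = price_at_margin(cost, 0.70)
asic_price = price_at_margin(cost, 0.65)
saving = 1 - asic_price / nvidia_price

print(f"price at 70% margin: ${nvidia_price:,.0f}")
print(f"price at 65% margin: ${asic_price:,.0f}")
print(f"buyer's saving: {saving:.1%}")
```

Under these assumptions the buyer saves only about one-seventh of the price (0.30/0.35 of the 70%-margin price), which is the substance of Jensen's "what are you really saving?" point — before accounting for any performance difference between the chips.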
Starting point is 00:39:17 And at the time, I didn't deeply internalize how difficult it would be to build a foundation AI lab, like OpenAI and Anthropic, and the fact that they needed huge investments from the supplier themselves. We just weren't in a position to make the multibillion-dollar investment into Anthropic so that they could use our compute. But Google and AWS were. And they put in huge investments in the beginning so that Anthropic in return used their compute. We just weren't in a position to do so at the time. Nor did I — I would say my mistake is
Starting point is 00:40:05 I didn't deeply internalize that they really had no other options, that a VC would never put $5, $10 billion of investment into an AI lab with the hopes of it turning out to be Anthropic. And so that was my miss. But even if I understood it, I don't think we would have been in a position to do that at the time. But I'm not going to make that same mistake again. And I'm delighted to invest in OpenAI.
Starting point is 00:40:36 And I'm delighted to help them scale. And I believe it's essential to do so. And then when I was able to, when Anthropic came to us, I'm delighted to be an investor, delighted to help them scale. But we just weren't at the time able to do so. If I could rewind everything — Nvidia could have been as big back then as we are now — I would have been more than happy to do it. This is actually quite interesting, which is, for many years, Nvidia has been the company in AI making money, making lots of money.
Starting point is 00:41:17 And now you're investing it. It's been reported that you've done up to $30 billion in OpenAI and $10 billion in Anthropic. But now their valuations have increased, and I'm sure they'll continue to increase. And so over these many years, you know, you were giving them the compute. You saw where it was headed. And then they were worth like one-tenth what they are now a couple years ago, or even a year ago in some cases.
Starting point is 00:41:42 And you had all this cash. There's a world where either Nvidia themselves becomes a foundation lab, does a huge investment to make that possible, or has made the deals you've made now at current valuations much earlier on,
Starting point is 00:41:57 and you had the cash to do it. So I am curious, actually, why not have done it earlier? We did it as soon as we could have. We did it as soon as we could have. And if I could have, I would have done it even earlier, at the time that Anthropic
Starting point is 00:42:14 needed us to do it. We just weren't in a position to do it. It wasn't, you know, it wasn't in our sensibility to do so. How so? Like a cash thing, or just—? Yeah, the level of investment. You know, we never invested outside the company at the time, and not that much. And we didn't realize we needed to. I always thought that they could just go raise VCs, for God's sakes, like all companies do. But what they were trying to do couldn't have been done through VCs. What OpenAI wanted to do couldn't have been done through VCs. And I recognize that now.
Starting point is 00:42:56 I didn't know it then. But that's their genius. That's why they're smart. You know? And so they realized then that they had to do something like that. And I'm delighted that they did, you know. And even though we caused Anthropic to have to go to somebody else, I'm still happy that it happened.
Starting point is 00:43:17 Anthropic's existence is great for the world. I'm delighted for it. I guess you still are making a ton of money, and you're making way more money quarter after quarter. It's still okay to have regrets. So the question still arises: okay, well, now that we're here, you have all this money that you keep making — what should Nvidia be doing with it?
Starting point is 00:43:37 And there's one answer which says, look, there's this whole middleman ecosystem that has popped up for converting CAPEX into OPEX for these labs so that they can rent compute, because the chips are really expensive. They make a lot of money over their lifetime because the models are getting better. The value that their tokens generate is increasing,
Starting point is 00:43:55 but they're expensive to set up. Nvidia has the money to do the CAPEX. And in fact, it's been reported you're backstopping CoreWeave up to $6.3 billion and have invested $2 billion. So yeah, why doesn't Nvidia become a cloud themselves? Why doesn't it become a hyperscaler themselves and rent this compute out?
Starting point is 00:44:13 You have all this cash to do it. This is a philosophy of the company, and I think it is wise. We should do as much as needed, as little as possible. And what that means is, the work that we do with building our computing platform — if we don't do it, I genuinely believe it doesn't get done. If we didn't take the risks that we take, if we didn't build NVLink the way we built it, if we didn't build the whole stack, if we didn't create the ecosystem the way we did it, if we didn't dedicate ourselves to 20 years of CUDA while losing money most of that time,
Starting point is 00:44:47 if we didn't do it, nobody else would have done it. If we didn't create all the CUDA-X libraries so that they're all domain-specific — you know, this is a decade and a half ago, we pushed into domain-specific libraries because we realized that if we didn't create these domain-specific libraries, whether it's for ray tracing or image generation or even the early works of AI, for data processing, structured data processing, or vector data processing — if we didn't create them, nobody
Starting point is 00:45:17 would. And I am completely certain of that. We created a library for computational lithography called cuLitho. If we didn't create it, nobody would have. And so accelerated computing wouldn't have advanced the way it has if we didn't
Starting point is 00:45:33 do what we did. And so we should do that. We should dedicate our company, all of our might, wholeheartedly, to go do that. However, the world has lots of clouds. If I didn't do it, somebody would show up. And so following the recipe, the philosophy of doing as much as needed but as little as possible — that philosophy exists in our company today. And everything I do, I do it with that lens.
Starting point is 00:46:01 In the case of clouds, if we didn't support CoreWeave to exist, these neoclouds, these AI clouds, wouldn't exist. If we didn't help CoreWeave exist, they would not exist. If we didn't support Nscale, they wouldn't be where they are today. If we didn't support Nebius, they wouldn't be what they are today. Now they're doing fantastically.
Starting point is 00:46:25 Is that our business model? No. We should do as much as needed, as little as possible. And so we invest in our ecosystem because I want our ecosystem to thrive. And I want the architecture, and I want AI, to be able to connect with as many industries as possible, as many countries as possible, and make it possible for the planet to be built on AI and to be built on the American tech stack. And so that vision, I think, is exactly what we're pursuing.
Starting point is 00:46:59 Now, one of the things that you mentioned — there are so many great, amazing foundation model companies, and we try to invest in all of them. And this is another thing that we do. We don't pick winners. We need to support everyone. And it's part of our joy of doing so. It's an imperative to our business. But we also go out of our way not to pick winners. And so when I invest in one of them, I invest in all of them.
Starting point is 00:47:27 Why do you go out of your way to not pick winners? Because it's not our job to, number one. Number two, when Nvidia first started, there were 60 graphics companies — 60 3D graphics companies — and we are the only one that survived. If you were to take those 60 graphics companies and ask yourself which one was going to make it, Nvidia would be at the top of the list not to make it. You know, this is long before you, but Nvidia's graphics architecture was precisely wrong. It's not a little bit wrong.
Starting point is 00:48:00 We created an architecture that was precisely wrong, and it was an impossible thing for developers to support. It was never going to make it. We reasoned about it from good first principles, but we ended up at the wrong solution. And everybody would have counted us out. And here we are. And so I have enough humility to recognize that, you know,
Starting point is 00:48:27 don't pick winners. Either let them all take care of themselves, or take care of all of them. One thing I didn't understand is, you said, look, we're not prioritizing these neoclouds just because they are neoclouds and we want to prop them up. But you also listed a bunch of neoclouds and said
Starting point is 00:48:45 they wouldn't exist if it wasn't for Nvidia. Yeah. And so how are those two things compatible? First of all, they need to want to exist, and they come to ask us for help. And when they want to exist and they have a business plan and they have expertise
Starting point is 00:49:01 and they have the passion for it — they obviously have to have some capabilities themselves. But if, at the end of the day, they need some investment from us to get it off the ground, we would be there for them. But the sooner they get their flywheel going, the better. Your question was,
Starting point is 00:49:20 do we want to be in the financing business? The answer's no. We don't want to be, because there are people in the financing business. And we'd rather work with all of the people who are in the financing business than be a financier ourselves. And so I think our goal is to focus on what we do,
Starting point is 00:49:37 keep our business model as simple as possible, support our ecosystem. When someone like OpenAI needs an investment at $30 billion scale, because it's still before their IPO, and we deeply believe in them — I deeply believe that they're going to be,
Starting point is 00:49:57 well, they're an extraordinary company already today. They're going to be an incredible company. The world needs them to exist. The world wants them to exist. I want them to exist. And they have the wind at their back. Let's support them and let them scale. And so those investments we'll do, because they need us to do it. But we're not trying to do as much as possible. We're trying to do as little as possible. I spend way too much time copy-pasting text back and forth from Google Docs to chatbots. And so I built what's basically a Cursor for writing, which operates the way I think an AI co-researcher should operate.
Starting point is 00:50:34 I can tag it, and it can talk with me through inline comment threads and help me dig deeper and brainstorm. I wrote this entire thing over the weekend with Cursor and their new Composer 2 model. With a lot of agentic coding tools, I feel like I have no idea what's going on under the surface. I just have to relinquish control and hope for the best. But Cursor let me try a bunch of different ideas while staying on top of the implementation. I did most of my brainstorming in the agents window, and after I got some basic files in place, I used the diff window to track changes. The few times that I needed to make a quick tweak by hand, I just used the editor. If you want to try my AI co-researcher yourself, I've linked the GitHub repo in the description.
Starting point is 00:51:06 And if you have a tool that you've been wanting to build, you should make it happen. Go to cursor.com/dwarkesh to get started. This may be sort of an obvious question, but we've lived many years in this situation where there's a shortage of GPUs. And it's grown now because models are getting better. We have a shortage of GPUs. Yes. Yeah.
Starting point is 00:51:28 And Nvidia is known for divvying up the scarce allocation, not just based on highest bidder, but rather on, hey, we want to make sure that these neoclouds exist. Let's give some to CoreWeave. Let's give some to Crusoe. Let's give some to Lambda. Why is it good for Nvidia? First of all, would you agree with this characterization of fracturing the market? No. Your premise is just wrong. We're very mindful about these things. First of all, if you don't place a PO, all the talking in the world won't make a difference. And so until we get a PO, what are we going to do? And so the first thing is, we work really hard with everybody to get a forecast done. Because these things take a long time to build. And the data centers take a long time to build. And so we align ourselves with demand and supply and things like that through forecasting.
Starting point is 00:52:28 Okay, that's job number one. Number two — you know, we've tried to forecast with as many people as possible, but in the final analysis, you still have to place an order. And maybe, for whatever reason, you didn't place your order. What can I do? And so at some point, it's first in, first out. But beyond that, if you're not ready — because your data center's not ready, or certain components aren't ready to enable you
Starting point is 00:53:00 to stand up a data center — we might decide to serve another customer first. That's just maximizing the throughput of our own factory. And so we might do some adjustments there. Aside from that, the prioritization is first in, first out. Yeah, you've got to place a PO. If you don't place a PO... Now, of course, there are stories about that. You know, like, for example, all of this kind of started from
Starting point is 00:53:30 an article about Larry and Elon having dinner with me where they begged for GPUs. That never happened. We absolutely had dinner. And it was a wonderful dinner. At no time did they beg for GPUs. They just had to place an order. And once they place an order, we do our best to get the capacity to them.
Starting point is 00:53:57 We're not complicated. Okay. So it sounds like there's a queue, and then based on whether your data center is ready and when you place a purchase order, you get them at a certain time. But it still doesn't sound like the highest bidder just gets it. Is there a reason to do it that way? We never do that. Okay. We never do it.
Starting point is 00:54:15 Why not just do highest bidder? Because it's a bad business practice. You set your price, and then people decide to buy it or not. And there — I understand that others in the chip industry change their prices when demand is higher. But we just don't.
Starting point is 00:54:38 We just don't. That's just never been a practice of ours. You can count on us. You know, I prefer to be dependable, to be the foundation of the industry. And you don't need to second-guess. You know, if I quoted you a price, we quoted you a price.
Starting point is 00:54:58 That's it. And if demand goes through the roof, so be it. And on the other end, that's why you have a productive relationship with TSMC, right? Yeah. Nvidia's been in business — we've been doing business with them for, I guess, coming up on 30 years. And Nvidia and TSMC don't have a legal contract.
Starting point is 00:55:18 There's always some rough justice. And sometimes I'm right. Sometimes I'm wrong. Sometimes I got a better deal. Sometimes I got a worse deal. But overall, on the whole, the relationship is incredible, and I can completely trust them.
Starting point is 00:55:33 I can completely depend on them. And one of the things that you can count on with Nvidia is that, this year, Vera Rubin's going to be incredible. Next year, Vera Rubin Ultra will come. The year after that, Feynman will come. And the year after that — I haven't introduced the name yet.
Starting point is 00:55:50 And so every single year, you can count on us. And you're going to have to go find another ASIC team in the world — pick your ASIC team — where you can say: I can bet my entire business that you will be here for me every single year. Your cost, your token cost, will decrease by an order of magnitude every single year.
Starting point is 00:56:16 I can count on it like I can count on the clock. Well, I just said something about TSMC — no other foundry in history, can you possibly say that? You can say that about Nvidia today. You can count on us every single year. If you would like to buy a billion dollars of AI factory compute, no problem. If you'd like to buy $100 million, no problem.
Starting point is 00:56:41 You'd like to buy $10 million? Or it's just one rack? Not a problem. Or just one graphics card? Okay, no problem. If you would like to place an order for a $100 billion AI factory, no problem. We're the only company in the world where you can say that today. I can say that about TSMC as well. I want to buy one billion — no problem.
Starting point is 00:57:04 We just got to go through the process of planning for it and all the things that mature people do. And so I think this ability for Nvidia to be the foundation of the world's AI industry — this is a position that has taken us a couple of decades to arrive at: enormous commitment, enormous dedication. And the stability of our company, the consistency of our company, is really important.
Starting point is 00:57:37 Okay, I want to ask about China. Yep. And I actually don't know what I think about whether it's good to sell chips to China or not, but I'll play devil's advocate against you, I guess. So when Dario was on, who supports export controls, I asked him, well, why can't America and China both have countries of geniuses in the data center? But since you're on the opposite side, I'll ask you the opposite way. And look, one way to think about it is, Anthropic actually announced a couple days ago
Starting point is 00:58:01 this model Mythos they're not even releasing publicly, because they say it has such cyber-offensive capabilities that we don't think the world is ready until we make sure these zero-days are patched up. But they say it found thousands of high-severity vulnerabilities across every major operating system, every browser. It found one in OpenBSD, which is this operating system that was specifically designed to not have zero-days, and it found one that's existed for 27 years. And so if Chinese companies and Chinese labs and the Chinese government had access to the AI chips to train a model like Claude Mythos, with these cyber-offensive capabilities, and run millions of instances of it with more compute, the question is: is that a threat to American companies, to American national security? First of all, Mythos was trained on fairly mundane capacity, and a fairly mundane amount of it, by an extraordinary company. And so the amount of capacity and the type of compute that it was trained on is abundantly available in China.
Starting point is 00:59:08 And so you just have to first realize that chips exist in China. They manufacture 60% of the world's mainstream chips, maybe more. It's a very large industry for them. They have some of the world's greatest computer scientists. As you know, most of the AI researchers in all of these AI labs — most of them are Chinese. They have 50% of the world's AI researchers. And so the question is, if you're concerned about them, you have to consider all the assets they already have.
Starting point is 00:59:47 They have an abundance of energy. They have plenty of chips. They got most of the AI researchers. If you're worried about them, what is the best way to create a safe world? Well, victimizing them,
Starting point is 01:00:06 turning them into an enemy, likely isn't the best answer. They are an adversary. We want the United States to win. But I think having a dialogue, and having research dialogue, is probably the safest thing to do. This is an area that is glaringly missing because of our current attitude about China as an adversary.
Starting point is 01:00:32 It is essential that our AI researchers and their AI researchers are actually talking. It is essential that we try to both agree on what not to use the AI for. With respect to finding bugs in software, of course, that's what AI is supposed to do. Is it going to find bugs in a lot of software? Of course. There's lots and lots of bugs. There are lots of bugs in the AI software. And so that's what AI is supposed to do.
Starting point is 01:01:04 And I'm delighted that AI has reached a level where it could help us be so much more productive. One of the things that is under-emphasized is the richness of ecosystem around cybersecurity, AI cybersecurity and AI security and AI privacy and AI safety. That whole ecosystem of AI startups that are trying to create this future for us, where you have one AI agent that's incredible, surrounded by thousands of AI agents keeping it safe, keeping it secure. That future surely is going to happen. And the idea that you're going to have an AI agent running around with nobody watching after it is kind of insane.
Starting point is 01:01:56 And so we know very well that this ecosystem needs to thrive. It turns out this ecosystem needs open source. This ecosystem needs open models. They need open stacks, so that all of these AI researchers and all these great computer scientists can go build AI systems that are as formidable and can keep AI safe. And so one of the things that we need to make sure that we do is keep the open-source ecosystem vibrant. And that can't be ignored.
Starting point is 01:02:31 That can't be ignored. And a lot of that is coming out of China. We'd better not suffocate that. You know, with respect to China — of course, we want the United States to have as much computing as possible. We're limited by energy, but, you know, we've got a lot of people working on that, and we'd better not make energy a bottleneck for our country. But what we also want is to make sure that all the AI developers in the world are developing on the American tech stack and making the contributions, the advancements of AI — especially when it's open source — available to the American ecosystem.
Starting point is 01:03:17 And it would be extremely foolish to create two ecosystems: the open-source ecosystem that only runs on the Chinese tech stack, a foreign tech stack, and a closed ecosystem that runs on the American tech stack. I think that would be a horrible outcome for the United States. Since there are a lot of things there, let me just triage the response. I mean, I think the concern, going back to the flop difference in the hacking, is: yes, they have compute, but there are some estimates that because they're at 7 nanometer — they don't have EUV because of chip-making export controls — in terms of the amount of flops they can actually produce, they have like one-tenth the flops that the U.S. has. And so with that, could they eventually train a model like Mythos? Yes, but the point is that because we have more flops, American labs are able to get to these levels of capability first.
Starting point is 01:04:12 And because Anthropic got to it first, they say, okay, we're going to hold onto it for a month while all these American companies — we give them access to it — patch up all their vulnerabilities. And now we release it. Furthermore, even if they train a model like this, there's the ability to deploy it at scale: you know, if you had a cyber hacker, it's much more dangerous if they have a million of them versus a thousand of them. So that inference compute really matters a lot. And in fact, the fact that they have so many AI researchers who are so good is the thing that makes it so scary, because the thing that makes those researchers more productive is compute. If you talk to any AI lab in America, they say the thing they're bottlenecked on is compute.
Starting point is 01:04:47 And there are quotes from the DeepSeek founder or Qwen leadership or whatever. They say, like, the thing we're bottlenecked on is compute. So then the question is, isn't it better that American companies, because they have more compute, get to these sort of Mythos-level capabilities first, and prepare our society for it, before China can get to it because they have less compute? We should always be first, and we should always have more. But in order for the outcome you described to be true, you have to take it to the extremes.
Starting point is 01:05:20 They have to have no compute. And if they have some compute, the question is how much is needed. The amount of compute they have in China is enormous. I mean, you're talking about a country that's the second-largest computing market in the world. If they want to aggregate their compute, they've got plenty of compute to aggregate. But is that true? I mean, people do these estimates and they're like, well, SMIC is actually behind on the process nodes.
Starting point is 01:05:50 I'm about to tell you. Okay. The amount of energy they have is incredible, isn't that right? AI is a parallel computing problem, isn't it? Why can't they just put four or ten times as many chips together? Because energy is free to them. They have so much energy.
Starting point is 01:06:05 They have data centers that are sitting completely empty, fully powered. They have, you know, ghost cities. They have ghost data centers. They have so much infrastructure capacity. If they wanted to, they could just gang up more chips, even though they're at 7 nanometer. And their chip-building capacity is one of the largest in the world. The semiconductor industry knows that they've monopolized mainstream chips. They have overcapacity.
Starting point is 01:06:33 They have too much capacity. And so the idea that China won't be able to have AI chips is complete nonsense. Now, of course, if you ask me, would the United States be further ahead if they had no compute at all? Sure. But that's just not an outcome. That's not a scenario that's real. They have plenty of compute already. Whatever threshold of compute is needed for the concern you're worried about, they've already reached that threshold and beyond.
Starting point is 01:07:04 And so I think you misunderstand that AI is a five-layer cake. At the lowest layer is energy. When you have an abundance of energy, it makes up for chips. When you have an abundance of chips, it makes up for energy. For example, the United States is scarce on energy, which is the reason Nvidia has to keep advancing our architecture and doing this extreme co-design, so that with the few chips that we ship, because the amount of energy is so limited, our throughput per watt
Starting point is 01:07:39 is off the charts. But if your watts are completely abundant, essentially free, what do you care about performance per watt? You can use old chips. And 7-nanometer chips are essentially Hopper-class. And Hopper, I've got to tell you, today's models are largely trained on Hopper. Yeah, the Hopper generation. And so Hopper, 7-nanometer chips, are plenty good.
Starting point is 01:08:10 The abundance of energy is their advantage. But then there's a question of, okay, can they actually manufacture enough chips given their... But they do. What's the evidence? Huawei just had the largest single year in the history of their company. How many chips did they ship? A ton. Millions. Millions is way, way more than Anthropic has. So there's a question of how much logic SMIC can ship. Then there's a question of how much memory... I'm telling you what it is. They have plenty of logic and they
Starting point is 01:08:42 have plenty of HBM2 memory. Right. But as you know, the bottleneck in training and doing inference on these models is often memory bandwidth. So HBM2, I don't know the numbers offhand, versus the newest thing you have, can be almost an order of magnitude difference in memory bandwidth, which is huge. We've always been a networking company. Always a networking company. But that doesn't change the fact that you need EUV for the most advanced HBM. Not true.
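For a rough sense of the bandwidth gap being debated here, a back-of-the-envelope comparison using approximate per-stack figures from public HBM specs. These numbers are illustrative only, not the exact parts either side ships:

```python
# Approximate per-stack memory bandwidth in GB/s (illustrative public figures;
# real parts vary by vendor, pin speed, and bin).
hbm2_gbps = 256    # original HBM2: ~2 Gbps/pin x 1024 pins
hbm4_gbps = 2048   # HBM4-class target: ~8 Gbps/pin x 2048 pins

ratio = hbm4_gbps / hbm2_gbps
print(f"~{ratio:.0f}x per-stack bandwidth gap")  # ~8x, approaching an order of magnitude
```

The exact multiple depends on which generations you compare, but the direction supports the point: old HBM2 stacks trail the newest memory by a large factor, even before counting how many stacks a package carries.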
Starting point is 01:09:07 Not at all true. You could gang them together, just like we gang them together with NVLink 72. They've already demonstrated silicon photonics connecting all of this compute together into one giant supercomputer. Your premise is just wrong. The fact of the matter is their AI development is going just fine. And the best AI researchers in the world, because they are limited in compute, also come up with extremely smart algorithms. Remember what I said: Moore's law is advancing about 25% per year. However, through great computer science, we can still improve algorithm performance by 10x.
Starting point is 01:09:52 What I'm saying is that great computer science is where the lever is. There is no question. MoE is a great invention. There's no question. All the incredible attention mechanisms reduce the amount of compute.
Starting point is 01:10:09 We have to acknowledge that most of the advances in AI came out of algorithmic advances, not just raw hardware. Now, if most advances came from algorithms and computer science and programming, tell me that their army of AI researchers is not their fundamental advantage. And we see it. DeepSeek is no inconsequential advance. And the day that a DeepSeek comes out on Huawei first, that is a horrible outcome for our nation.
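To put the two numbers quoted above side by side, roughly 25% per year from transistors versus an occasional 10x from algorithms, a quick calculation shows how many years of hardware scaling one algorithmic jump is worth. This is a sketch using the stated rates, not a claim about any specific chip or model:

```python
import math

hw_rate = 1.25    # ~25% per year from transistor-level scaling, per the quote
algo_jump = 10.0  # a single ~10x algorithmic win (e.g., MoE, better attention)

# Years of compounding hardware gains needed to equal one algorithmic jump:
# hw_rate ** years == algo_jump  =>  years = log(algo_jump) / log(hw_rate)
years_equivalent = math.log(algo_jump) / math.log(hw_rate)
print(f"{years_equivalent:.1f} years")  # ~10.3 years of hardware gains per 10x algorithm win
```

Which is the arithmetic behind "computer science is where the lever is": one algorithmic breakthrough buys roughly a decade of transistor scaling at that pace.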
Starting point is 01:10:43 Why? I mean, currently you can have a model like DeepSeek that can run on any accelerator if it's open source. Why would that stop being the case in the future? Well, suppose it doesn't. Suppose it's optimized for Huawei. Suppose it's optimized for their architecture. It would put ours at a disadvantage. You described a situation that I perceive to be good news: a company developed software, developed an AI model, and it runs best on the American tech stack.
Starting point is 01:11:11 I saw that as good news. You set it up as a premise that it was bad news. I'm going to give you the bad news: AI models around the world get developed and run best on non-American hardware. That is bad news for us. I guess I just don't see the evidence that there are
Starting point is 01:11:30 these huge disparities that would prevent you from switching accelerators. American labs, you know, are running their models across all the clouds, across all the different accelerators. I am the evidence. You take a model that's optimized for Nvidia and you try to run it on something else. But American labs do that.
Starting point is 01:11:44 And they don't run better. Nvidia's success is perfect evidence: AI models created on our stack run best on our stack. How is that hard to understand? Look, Anthropic's models run on GPUs. They run on Trainium. They run on TPUs. A lot of work has to go into making that change.
Starting point is 01:12:04 But go to the Global South, go to the Middle East: out of the box, if all of the AI models run best on somebody else's tech stack, you've got to be arguing some ridiculous claim right now that that's a good thing for the United States. But I guess I don't understand the argument. If, say, Chinese companies get to the next Mythos first, and they find all the security vulnerabilities in American software first, but they do it on Nvidia hardware, and they ship it to the Global South and deploy it on Nvidia hardware, how is that good?
Starting point is 01:12:33 I mean, okay, it runs on Nvidia hardware. It's not good. Right. It's not good. So let's not let it happen. Why do you think it's perfectly fungible, that if you didn't ship them compute, it would be exactly replaced by Huawei? They are behind, right? They have worse chips than you.
Starting point is 01:12:45 There's evidence right now: their chip industry is gigantic. You can just look at the flops or bandwidth or memory comparisons between the H200 and the Huawei 910C; it's like half, or a third. They use more of them. They use twice as many. I guess it seems like your argument
Starting point is 01:12:59 is that they have all this energy that's ready to go, right? And they need to fill it with chips. And they're good at manufacturing. And I'm sure eventually they would be able to just out-manufacture everybody, but there are these few critical years. What are the critical years you're talking about? These next few years,
Starting point is 01:13:13 we've got these models that are going to do all the cyber attacks. If the next few years are the critical years, then we have to make sure that all of the world's AI models are built on the American tech stack during these critical years.
Starting point is 01:13:26 Okay, if they're built on the American tech stack, how would that prevent them, if they have more advanced capabilities, from launching Mythos-equivalent cyber attacks on us? There's no guarantee either way. But if you have it early,
Starting point is 01:13:36 we're going to prepare for it. Listen, why are you causing one layer of the AI industry to lose an entire market so that you can benefit another layer of the AI industry? There are five layers. And every single layer has to succeed.
Starting point is 01:13:58 The layer that has to succeed most is actually the AI applications. Why are you so fixated on the AI model, that one company? For what reason? Because those models make possible these incredible offensive capabilities, and you need compute to run them.
Starting point is 01:14:16 The energy, the chips, the ecosystem of AI researchers make it possible. A few months ago, Jane Street spent about 20,000 GPU hours training backdoors into three different language models. Then they challenged my audience to find the trigger phrases. I just caught up with Rixon, who designed the puzzle, about some of the solutions that Jane Street received. If you think of the base model as being here and the backdoored model as being here, you can linearly interpolate the weights to adjust the strength of the backdoor. But you can also extrapolate to make the backdoor even stronger, and in some cases, if you make it strong enough, the model will just regurgitate
Starting point is 01:14:48 what the response phrase was supposed to be. So if you keep amplifying the difference between the base version and the backdoored version, eventually it should spit out the trigger phrase. But this technique only worked on two out of the three models. Even Rixon isn't sure why it didn't work on the third. Being able to verify that a model only does what you think it does is one of the most important open questions in AI security. If this is the kind of problem that excites you, Jane Street is hiring researchers and engineers. Go to janestreet.com/dwarkesh to learn more.
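The interpolate/extrapolate trick Rixon describes can be sketched in a few lines. The idea is sometimes called task arithmetic over model weights; the toy weight dictionaries and the `blend` function below are illustrative, not Jane Street's actual code:

```python
def blend(base, tuned, alpha):
    """theta = base + alpha * (tuned - base).

    alpha in (0, 1): interpolate, weakening the fine-tune's effect.
    alpha = 1:       the fine-tuned (backdoored) model as-is.
    alpha > 1:       extrapolate past the fine-tune, amplifying its effect,
                     which is the backdoor-hunting trick described above.
    """
    return {name: base[name] + alpha * (tuned[name] - base[name]) for name in base}

# Toy two-parameter "model": fine-tuning shifted w0 by +0.5 and left w1 alone.
base  = {"w0": 1.0, "w1": 2.0}
tuned = {"w0": 1.5, "w1": 2.0}

amplified = blend(base, tuned, alpha=3.0)
print(amplified)  # {'w0': 2.5, 'w1': 2.0} -- the fine-tuning shift is tripled
```

On a real model the same update is applied tensor-by-tensor across the whole state dict; pushing alpha well past 1 exaggerates whatever the fine-tune changed, which is why a strongly amplified backdoored model can start regurgitating its trigger response.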
Starting point is 01:15:07 Okay, stepping back: it has to be the case that China is able to build enough 7-nanometer capacity. And remember, they're still stuck on 7 nanometer, while you'll move on to 3 nanometer and then 2 nanometer, or 1.6 nanometer with Feynman. So while you're on 1.6 nanometer, they're still going to be on 7 nanometer. And they have to produce enough of it to make up for the shortfall. And they have so much energy that the more chips you give them, the more compute they'd have.
Starting point is 01:15:37 Right? So it just comes down to the question of whether, ultimately, they are getting more compute. Compute is, right, an input to training and inference. I just think you speak in absolutes. I think the United States ought to be ahead. The amount of compute in the United States is a hundred times more than anywhere else in the world. The United States ought to be ahead.
Starting point is 01:15:59 Okay? The United States is ahead. Nvidia builds the most advanced technologies. We make sure that the U.S. labs are the first to hear about it and get the first chance to buy it. And if they don't have enough money, we even invest in them. The United States ought to be ahead.
Starting point is 01:16:15 We want to do everything we can to make sure the United States is ahead. Number one point. Do you agree? And we're doing everything we can to do that. But how is shipping chips to China keeping the U.S. ahead? No, no. If they're bottlenecked on compute. We've got Vera Rubin for the United States. We have Vera Rubin for the United States.
Starting point is 01:16:33 Now, the United States. Am I in the United States? Do you consider me part of the United States? Yes. Nvidia. You consider Nvidia a United States company. Okay. Number one: why is it that we don't come up with a regulation that's more balanced, so that
Starting point is 01:16:49 Nvidia can win around the world instead of giving up the world? Why would you want the United States to give up the world? The chip industry is part of the American ecosystem. It's part of American technology leadership. It's part of the AI ecosystem. It's part of AI leadership. Why is it that your policy, your philosophy, leads to the United States giving up a vast part of the world's market? I guess the claim here is, Anthropic's Dario had this quote where he said it's like bragging that we're selling Iran nukes, but at least the missile casings are made by Boeing.
Starting point is 01:17:29 And that's somehow enabling the U.S. technology stack. Fundamentally, you're giving them this capability. Comparing AI to anything you just mentioned is lunacy. But AI is similar to enriched uranium, right? It can have positive uses and negative uses, and we still don't want to send enriched uranium to other countries. Who's sending enriched uranium? The analogy is that the enriched uranium is like...
Starting point is 01:17:50 Because it's a lousy analogy. It's an illogical analogy. But if that compute can run a model that can do zero-day exploits against all American software, how is that not a weapon? First of all, the way to solve that problem is to have dialogues with the researchers, dialogues with China, and dialogues with all the countries, to make sure that people don't use technology in that way. That's a dialogue that has to happen. Okay? Number one. Number two, we also need to make sure that the United States is ahead.
Starting point is 01:18:23 Everything, Rubin, Vera Rubin, Blackwell, is available in the United States in abundance. Mountains of it; obviously our results show it. Abundance, tons of it. The amount of computing we have is great. We have amazing AI researchers here. It's great. We have to stay ahead. However, we also have to recognize that AI is not just a model. AI is a five-layer cake, and the AI industry matters across every single layer. We want the United States to win at every single layer, including the chip layer. And conceding the entire market is not going to allow the United States to win the technology race long term in the chip layer, in the computing stack. That is just a fact.
Starting point is 01:19:10 I guess then the crux comes down to: how does selling them chips now help us win in the long term? Tesla sold extremely good electric vehicles in China for a long time. Extremely good iPhones are sold in China. That didn't get them lock-in. China still made its own version of EVs, and they're dominating in smartphones. When we started the conversation today, you acknowledged that Nvidia's position is very different. You used words like moat. The single most important thing to our company is the richness of our ecosystem, which is about developers.
Starting point is 01:19:45 50% of the AI developers are in China. We don't want to, we shouldn't, the United States should not give that up. But we have a lot of Nvidia developers in the U.S., and that doesn't prevent American labs from also being able to use other accelerators in the future. In fact, right now they're using other accelerators as well, which is fine and great. I don't see why that wouldn't be the case in China as well if you sell them Nvidia chips, just the same way that Google can use TPUs and Nvidia. We have to keep innovating. And, you know, as you probably know, our share is growing, not decreasing.
Starting point is 01:20:16 The premise that even if we competed in China, we're going to lose that market anyway: you're not talking to somebody who woke up a loser. And that loser attitude, that loser premise, makes no sense to me. We are not a car. The fact that I can buy one car brand one day and use another car brand another day, easy. Computing is not like that. There's a reason why x86 still exists.
Starting point is 01:20:51 There's a reason why Arm is so sticky. These ecosystems are hard to replace. It costs an enormous amount of time and energy, and most people don't want to do it. And so it's our job to continue to nourish and nurture that ecosystem, to keep advancing the technology so that we can compete in the marketplace. Conceding a marketplace based on the premise you described, I simply can't acknowledge that. It makes no sense, because I don't think the United States is a loser. Our industry is not a loser.
Starting point is 01:21:23 And that losing proposition, that losing mindset, makes no sense to me. Okay, I'll move on. I just want to make sure that... You don't have to move on. I'm enjoying it. Okay. Great. Yeah, yeah.
Starting point is 01:21:33 I know. I appreciate that. But I think maybe the crux, and thanks for walking around in circles with me, because I think it helps bring out what the crux here is. The crux is you're going to extremes. Your argument starts from extremes: that if we give them any compute at all, in this narrow moment, we will lose everything. No, I think what my argument is...
Starting point is 01:21:55 Those extremes, they're childish. They're childish. Yeah. The idea is not that there is some key threshold of compute; it's that any marginal compute is helpful, right? If you have more compute, you can train a better model. And I just want you to acknowledge that any marginal sale for the American technology industry is beneficial. I actually don't. I mean, if the AI models that run on those chips are capable of cyber offense, or training models
Starting point is 01:22:24 capable of cyber defense, running more instances of those models: it is not a nuclear weapon, but it enables a weapon of a kind. The logic that you use, you might as well say it of microprocessors and DRAM. You might as well say it of electricity. But in fact, we do have export controls on the technology relevant to making the most advanced DRAM, right? We have all kinds of export controls on China for all kinds of chip-making equipment. We sell a lot of DRAM and CPUs into China. And I think that's right. I guess this comes back to the fundamental question of whether AI is different, right? If you have the kind of technology that can find these zero-days in software, is that something where we want
Starting point is 01:23:01 to minimize China's ability to get there first, to deploy widely? We want the United States to be ahead. We can control that. How do we control that if the chips are already there and they're using them to train that model? We have tons of compute. We have tons of AI researchers. We're racing as fast as we can. Again, we have more nuclear weapons than anybody else, but we don't want to send enriched uranium anywhere. We're not enriched uranium. It's a chip. And it's a chip that they can make themselves. But there's a reason they're buying it from you, right? And we have quotes from the founders of Chinese companies saying they were bottlenecked on compute at the time.
Starting point is 01:23:33 Because our chips are better. On balance, our chips are better. There's just no question about it. In the absence of our chip, can you acknowledge that Huawei had a record year? Can you acknowledge that a whole bunch of chip companies have gone public? Can you acknowledge that? Can you also acknowledge the fact that we used to have a very large share in that market, and we no longer have a large share in that market? We can also acknowledge that China is about 40% of the world's technology industry. To leave that market, to concede that market, for the United States technology industry is a disservice to our country. It is a disservice to our national security.
Starting point is 01:24:10 It is a disservice to our technology leadership, all for the benefit of one company. It makes no sense to me. I guess I'm confused. It feels like you're making two different statements. One is that we're going to win this competition with Huawei because our chips are going to be way better if we're allowed to compete. And another is that they would be doing the same exact thing without us anyway. Right. How can those two things be the case at the same time? It's obviously true.
Starting point is 01:24:32 In the absence of a better choice, you'll take the only choice you have. How is that illogical? It's so logical. But the reason they want Nvidia chips is that they're better. Better is more compute. More compute means you can train a better model. It's better. It's better because it's easier to program.
Starting point is 01:24:47 We have a better ecosystem. Whatever the better is. Whatever the better is. And of course we're going to send them compute. So what? So what? The fact of the matter is we get the benefit. Don't forget, we get the benefit of American technology leadership.
Starting point is 01:25:03 We get the benefit of developers working on the American tech stack. We get the benefit as those AI models diffuse out into the rest of the world, because the American tech stack is the best for them. We can continue to advance and diffuse American technology. That, I believe, is a positive. It's a very important part of American technology leadership. Now, the policy that you're advocating resulted in the American telecommunications industry being
Starting point is 01:25:30 pushed out of basically the whole world, to the point where we don't control our own telecommunications anymore. I don't see that as smart. It's a little narrow-minded, and it led to unintended consequences that I'm describing to you right now, consequences that you seem to have
Starting point is 01:25:47 a very hard time understanding. Okay, let's just step back. It seems like the crux here is that there's a potential benefit and there's a potential cost, and we're trying to figure out whether the benefit is worth the cost. I'm trying to get you to acknowledge the potential cost: that compute is an input to training powerful models. Powerful models do have powerful offensive capabilities, like cyber attacks.
Starting point is 01:26:09 It is a good thing that American companies got to Claude Mythos-level capabilities first, and that they're now going to hold off on those capabilities so that American companies and the American government can make their software more protected before this level of capability is announced. If China had had more compute, had made a Mythos-level model earlier and deployed it widely, that would have been very bad. One of the reasons that hasn't happened is that we have more compute, thanks to companies like Nvidia, in America. That is a cost of sending compute to China. So let's leave the benefit aside for a second. Do you acknowledge that this is a potential cost? I will also tell you the potential cost: we allow one of the most important layers of the AI stack, the chip layer, to concede an entire market, the
Starting point is 01:26:58 second-largest market in the world, so that they can develop scale, so that they can develop their own ecosystem, so that future AI models are optimized in a very different way than the American tech stack. As AI diffuses out into the rest of the world, their standards, their tech stack, will become superior to ours, because their models are open. I guess I just believe enough in Nvidia's kernel engineers and CUDA engineers to think that they could optimize. AI is more than kernel optimization, as you know. Of course, but there are so many things you can do, from distilling to a model that's well fit for your chips.
Starting point is 01:27:36 We're going to do our best. You have all this software. It's hard for me to imagine that there's a long-term lock-in to the Chinese ecosystem, even if they have a slightly better open-source model for a while. China is the largest contributor to open-source software in the world. Fact. Right. China is the largest contributor to open models in the world.
Starting point is 01:27:55 Fact. Today, it's built on the American tech stack, on Nvidia's. Fact. All five layers of the tech stack for AI are important. The United States ought to go win all five of them. They're all important. The one that is the most important, of course, is the AI application layer. The layer that diffuses into society, the one that uses it most, will benefit from this Industrial Revolution most.
Starting point is 01:28:27 My point is that every layer has to succeed. If we scare this country into thinking that AI is somehow a nuclear bomb, so that everybody hates AI and everybody's afraid of AI, I don't know how you're helping the United States. You're doing a disservice. If we scare everybody out of software engineering jobs because AI is supposedly going to kill every software engineering job, and we don't have any software engineers as a result, we're doing a disservice to the United States. If we scare everybody out of radiology,
Starting point is 01:29:03 so nobody wants to be a radiologist because computer vision is essentially free and AI is supposedly going to do the job better than a radiologist, and we misunderstand the difference between a job and a task (the job of a radiologist is patient care; the task is to read a scan), if we misunderstand that so profoundly and we scare everybody out of going to radiology school, we're not going to have enough radiologists or good enough health care.
Starting point is 01:29:29 And so I'm making the case that when you make a premise that is so extreme, where everything goes to zero or infinity, we end up scaring people in a way that's just not true. Life is not like that. Do we want the United States to be first? Of course we do. Do we need to be a leader in every layer of that stack? Of course we do. Of course we do.
Starting point is 01:30:05 Today you're talking about Mythos because Mythos is important? Sure, that's fantastic. But in a few years' time, I'm making you a prediction: when we want the American tech stack, when we want American technology, to be diffused around the world, out to India, out to the Middle East, out to Africa, out to Southeast Asia, when our country would like to export, because we would like to export our technology, we would like to export our standards,
Starting point is 01:30:34 on that day, I want you and I to have this same conversation again, and I will tell you exactly about today's conversation, about how your policy and what you imagined literally caused the United States to concede the second-largest market in the world for no good reason at all. We shouldn't concede it. If we lose it, we lose it. But why would we concede it? Now, nobody is advocating all or nothing.
Starting point is 01:31:02 Nobody's advocating all or nothing, meaning we ship everything to China at all times. Nobody's advocating that. We should always have the best technology here. We should always have the most technology here, and first. But we should also try to compete and win around the world. Both of those things can simultaneously happen.
Starting point is 01:31:25 It requires some amount of nuance, some amount of maturity, instead of absolutes. The world is just not absolutes. Okay, the argument hinges on: they've built models that are optimized for their architecture, the best chips that they can make in a few years, and those chips get exported around the world. That sets a standard.
Starting point is 01:31:44 Because of the EUV export controls, as we said, you're going to move on to 1.6 nanometer while they're still going to be on 7 nanometer, even a few years from now. And it may make sense that domestically they would prefer, hey, we've got so much energy,
Starting point is 01:31:58 we can manufacture at such scale, we'll keep using 7 nanometer. But for the exporting to work, their 7-nanometer chips have to be competitive against your 1.6-nanometer chips, and their models have to be so optimized for 7 nanometer that it's better to run their models on 7 nanometer
Starting point is 01:32:12 than to run their models on your 1.6 nanometer. Can we just look at the facts, then? Okay. Is Blackwell's lithography 50 times more advanced than Hopper's? Is it 50 times? Not even close. I've kept saying it over and over again: Moore's Law is dead.
Starting point is 01:32:34 Between Hopper and Blackwell, from the transistors themselves, call it 75%. That was three years apart. 75%. And yet Blackwell is 50 times Hopper. My point is: architecture matters, computer science matters. Semiconductor physics matters as well, but computer science matters.
Starting point is 01:33:00 The impact of AI largely comes from the computing stack, which is the reason why CUDA is so effective, which is the reason why CUDA is so beloved. It's an ecosystem, a computing architecture, that allows for so much flexibility that if you wanted to change the architecture completely, create something like MoE, create something like diffusion,
Starting point is 01:33:24 create something that's disaggregated, you could do so. It's easy to do. And so the fact of the matter is, AI is about the stack above as much as it is about the architecture below. To the extent that we have architectures and software stacks that are optimized for our stack,
Starting point is 01:33:43 for our ecosystem, it is obviously good. We started the conversation today with how Nvidia's ecosystem is so rich, why people always love programming on CUDA first. They do. They do. And so do the researchers in China. But if we are forced to leave China, if we're forced to leave China, well, first of all, it's a policy mistake.
Starting point is 01:34:07 It obviously has backlash. It has obviously backfired, you know, turned out badly for the United States. It enabled, it accelerated their chip industry. It forced all of their AI ecosystem to focus on their internal architectures. It's not too late, but nonetheless, it has already happened. You're going to see in the future that they're not stuck at 7 nanometer, obviously. They're good at manufacturing.
Starting point is 01:34:39 They will continue to advance from 7 nanometer and beyond. Now, is there a 10x difference between 5 nanometer and 7 nanometer? The answer is no. Architecture matters. Networking matters; that's why Nvidia bought Mellanox. Energy matters. And so all of that stuff matters.
Starting point is 01:35:01 It's not as simplistic as the way you're trying to distill it. We can move on from China. But that actually raises an interesting question about the bottlenecks at TSMC and in memory and so forth that we were discussing earlier. If we're in this world where you're already the majority of N3, and at some point you'll be on N2 and be the majority of that, do you see yourselves going back to N7, the spare capacity at an older process node, and saying, hey, the demand for AI is so great, and our capacity to expand the leading edge is not meeting it, so we're going to make a Hopper or an Ampere with everything we know about numerics today and all the other improvements you described?
Starting point is 01:35:41 Do you see that world happening before 2030? It's not necessary to. And the reason is that with every generation, the architecture is more than just the transistor scale. You're also doing so much engineering in packaging and stacking and numerics and, you know, the system architecture. Going back to another node when you run out of capacity, that's a level of R&D that no one could afford.
Starting point is 01:36:23 You know, we can afford to lean forward. I don't think we could afford to go back. Now, let's do the thought experiment: if on that day the world simply says, listen, we're just never going to have more capacity ever again, would I go back and use 7 nanometer in a heartbeat? Yeah, of course I would.
Starting point is 01:36:39 One question somebody I was talking to had is, why doesn't NVIDIA run multiple different chip projects at the same time with totally different architectures? So you could do a Cerebras-style wafer scale. You could do a Dojo-style huge package. You could do one without CUDA. You know, you have the resources and the engineering talent to do all of these in parallel. So why put all the eggs in one basket, given who knows where AI and architectures might go? Oh, we could. It's just that we don't have a better idea.
Starting point is 01:37:09 Yeah, yeah. We could do all of those things. It's just not better. And we simulate it all. They're in our simulator, provably worse. And so we wouldn't do it. Yeah.
Starting point is 01:37:24 We're working on exactly the projects that we want to work on. And if the workload were to change dramatically, and I don't mean the algorithms, I actually mean the workload. And that depends on the shape of the market. We may decide to add other accelerators. Like, for example, recently we added Groq, and we're going to fold Groq into our CUDA ecosystem.
Starting point is 01:37:56 And we're doing that now because the value of tokens has gone up so high that you could have different pricing of tokens. Back in the old days, just a couple of years ago, tokens were either free or, you know, barely expensive, right? But now you can have different customers, and those customers want different answers. And because the customers make so much money, like, for example, our software engineers: if I can give them much more responsive tokens so that they're even more productive than they are today, I would pay for it. But that market has only recently emerged. And so I think that we now have the ability to have the same model, based on the response time, serve different segments. And that's the reason why we decided to expand the Pareto frontier and create a segment of inference that has faster response time, even though it's lower throughput.
Starting point is 01:38:59 Until now, higher throughput was always better. We think that there could be a world where there could be very high-ASP tokens, and even though the throughput is lower in the factory, the ASPs make up for it. Yeah, that's the reason why we did it. But otherwise, from an architecture perspective, I think NVIDIA's architectures,
Starting point is 01:39:22 I would rather put, if I have more money, I put more behind the architecture. I think this idea of extremely premium tokens and just the disaggregation of the influence market is very interesting. The segmentation of it. Yeah. Yeah.
Starting point is 01:39:36 Yeah. All right, final question. Suppose the deep learning of revolution didn't happen. What would NVIDIA be doing? Obviously, games, but given... Accelerated computing. Accelerated computing. The same thing we've been doing all along.
Starting point is 01:39:54 The premise of our company is that Moore's Law is going to... General-purpose computing is good for a lot of things, but for a lot of computation, it's not ideal. And so we combined an architecture called a GPU, with CUDA, with a CPU so that we can accelerate the workload of the CPU. Different kernels of code, or algorithms, could be offloaded onto our GPU, and as a result, you speed up an application by 100x, 200x. And where can you use that? Well, obviously, engineering and science and physics and, you know, so on.
Starting point is 01:40:30 So data processing, computer graphics, image generation, I mean, all kinds of things. Even if AI didn't exist today, Nvidia would be very, very large. Yeah. And so I think the reason for that is fairly fundamental, which is that the ability for general-purpose computing to continue to scale has largely run its course.
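The offload math behind those 100x–200x application speedups can be sketched with Amdahl's law: if a fraction p of an application's runtime is moved to kernels that run s times faster on the GPU, the overall speedup is 1 / ((1 − p) + p / s). Getting anywhere near 100x requires that almost all of the work live in the accelerated kernels, which is why it applies to domains like simulation and data processing rather than arbitrary code. The kernel speedup of 1000x below is just an illustrative parameter.

```python
# Amdahl's-law sketch of GPU offload: fraction p of the work is
# accelerated by factor s; the rest stays on the CPU at 1x.

def amdahl_speedup(p: float, s: float) -> float:
    """Overall application speedup when fraction p of the runtime
    is accelerated by a factor of s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even with very fast kernels (s = 1000), the un-offloaded remainder
# caps the overall speedup.
for p in (0.50, 0.90, 0.99, 0.999):
    print(f"offloaded {p:.1%} -> overall speedup {amdahl_speedup(p, 1000):.1f}x")
```

At 50% offloaded the application barely doubles in speed; only when 99%+ of the runtime sits in GPU kernels do the two-order-of-magnitude numbers appear.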
Starting point is 01:40:52 And the way to do that, not the only way, but the way, is through domain-specific acceleration. And one of the domains that we started with was computer graphics, but there are many, many other domains. I mean, there's all kinds of science: particle physics and fluids and, you know, structured data processing, all kinds of different types of algorithms that benefit from CUDA. And so our mission was really to bring accelerated computing to the world and advance the type of applications that general-purpose computing can't do, and scale to the level of capability that helps break through certain fields of science.
Starting point is 01:41:35 And so some of the early applications were molecular dynamics, seismic processing for energy discovery, and image processing, of course. All of those kinds of fields where general-purpose computing is just simply too inefficient. And so if there were no AI, I would be very sad. But because of the advances that we made in computing, we democratized deep learning. We made it possible for any researcher, any scientist, any student, anywhere, to be able to access a PC or a GeForce add-in card and do amazing science. And that fundamental promise hasn't changed, not even a little bit. And so if you watch GTC, there's the whole beginning part of it. None of it's AI.
Starting point is 01:42:31 That whole part of it, with computational lithography or our quantum chemistry work or, you know, all of that data processing work. All of that stuff is unrelated to AI, and it's still very important. I mean, I know that AI is very interesting and quite exciting, but there's a lot of people doing a lot of very important work that's not AI-related, and tensors are not the only thing they compute with. And we want to help everybody. Jensen, thank you so much.
Starting point is 01:43:07 You're welcome. I enjoyed it. Me too. Sweet.