All-In with Chamath, Jason, Sacks & Friedberg - Jensen Huang LIVE: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

Episode Date: March 19, 2026

(0:00) Jensen Huang joins the show! (1:00) Acquiring Groq and the inference explosion (9:27) Decision making at the world's most valuable company (11:22) Physical AI's $50T market, OpenClaw's future, ...the new operating system for modern AI computing (17:12) AI's PR crisis, refuting doomer narratives, Anthropic's comms mistakes (21:22) Revenue capacity, token allocation for employees, Karpathy's autoresearch, agentic future (31:24) Open source, global diffusion, Iran/Taiwan supply chain impact (40:19) Self-driving platform, facing competition from active customers, responding to growth slowdown predictions (48:06) Datacenters in space, AI healthcare, Robotics (56:44) OpenAI/Anthropic revenue potential, how to build an AI moat (59:38) Advice to young people on excelling in the AI era Thanks to Airwallex for making this happen: Airwallex is a leading global payments and financial platform for modern businesses, offering trusted solutions to manage everything from business accounts, payments, treasury, and spend management to embedded finance. https://airwallex.com/allin Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect

Transcript
Starting point is 00:00:00 Special episode this week. We've preempted the weekly show. And there's only three people we preempt the show for, President Trump, Jesus, and Jensen. And I'll let you pick which order we do that. But what an amazing run you've had and a great event. Every industry is here. Every tech company is here.
Starting point is 00:00:22 Every AI company is here. Incredible. Incredible. If you were building a global financial system from first principles today, you wouldn't build it on 50-year-old legacy rails. You'd build Airwallex, one AI-native platform for global accounts, cards, and payments. It's designed to make the entire world feel like a local market. Others are bolting AI onto broken infrastructure, but Airwallex was built for the intelligent era from day one.
Starting point is 00:00:51 Stop paying the legacy tax and start building the future at airwallex.com slash all-in. Airwallex, build the future. And one of the great announcements of the past year has been Groq. When you made the purchase of Groq, did you realize how insufferable Chamath would become? I had an inkling that... We're his friends. We have to deal with him every week. I know it. You had to deal with him for the six weeks before it closed.
Starting point is 00:01:19 I know it. It's like two weeks. Two weeks. It's all coming back to me now. It's making me rather uncomfortable. The thing is, many of our strategies are presented in broad daylight at GTC years in advance of when we do it. Two and a half years ago, I introduced the operating system of the AI factory, and it's called Dynamo.
Starting point is 00:01:42 Dynamo, as you know, is an instrument, a machine that was created by Siemens to turn, essentially, energy into electricity. And the dynamo powered the factory of the last industrial revolution. So I thought it was the perfect name for the operating system of the factory of the next industrial revolution, the AI factory. And so inside Dynamo, the fundamental technology is disaggregated inference. Jason, I know you're super technical. Absolutely. I know it. I'll let you take this one.
Starting point is 00:02:17 Go ahead and define it for the audience. I don't want to step on you. Yeah, thank you. I knew you wanted to jump in there for a second. But it's disaggregated inference, which means the pipeline, the processing pipeline of inference is extremely complicated. In fact, it is the most complicated computing problem today. Incredible scale, lots of mathematics of different shapes and sizes. And we came up with the idea that you would change, you would disaggregate parts of the processing such that some of it can run on some GPUs.
Starting point is 00:02:48 Rest of it can run on different GPUs, and that led to us realizing that maybe even disaggregated computing could make sense, that we could have different heterogeneous nature of computing. That same sensibility led us to Melanox. You know, today, Nvidia's computing is spread across GPUs, CPUs, switches, scale up switches, scale out switches, networking processors. And now we're going to add grog to that, and we're going to put the right workload. on the right chips, you know, we just really evolved from a GPU company to an AI factory company. I mean, I think that was probably the biggest takeaway that I had. You're seeing this fundamental disaggregation where we've gone from a GPU,
Starting point is 00:03:31 and now you have this complexion of all these different options that will eventually exist. The thing that you guys said on stage, or you said on stage, was I would like the high value inference people to take a listen to this, and 25% of your data center space, you said should be allocated to this GROC LPU GPU combo. We should add GROC to about 25% of the Verroobins in the data center. So can you tell us about how the industry looks at this idea of now basically creating this next generation form of disaggregated, pre-filled, decode, dis-ag, and how people do you think will react to it?
Starting point is 00:04:06 Yeah, and take a step back. And at the time that we added this, we went from large language model processing to agentic processing. Now, when you're running an agent, you're accessing working memory, you're accessing long-term memory, you're using tools,
Starting point is 00:04:26 you're really beating up on storage really hard. You have agents working with other agents. Some of the agents are very large models. Some of them are smaller models. Some of them are diffusion models. Some of them are auto-regressive models. And so there are all kinds of different types of models inside this data center.
Starting point is 00:04:43 We created a Vera Rubin to be able to run this extraordinarily diverse workload. My sense is, and so we added, we used to be a one rack company, we now add a four more racks. So, Nvidia's TAM, if you will, increased from whatever it was to probably something, call it, you know, 33%, 50% higher. Now, part of that 33% or 50%, a lot of it's going to be storage processors. It's called Bluefield. Some of it will be, a lot of it I'm hoping, will be GROC processors, and some of it will be CPUs. And a lot of it's going to be networking processors.
Starting point is 00:05:23 And so all of this is going to be running basically the computer of the AI revolution called agents, the operating system of modern industry. What about embedded applications? So, you know, my daughter's teddy bear at home wants to talk to her. What goes in there? Is it a custom ASIC? Or does there end up becoming much more kind of a broader set of TAM with developing tools that are maybe different
Starting point is 00:05:49 for different use cases at the edge and an embedded application set? We think that there's three computers in the problem at the largest scale when you take a step back. There's one computer that's really about training the AI model, developing creating the AI. Another computer for evaluating it. Depending on the type of problem you're having,
Starting point is 00:06:09 like for example, you look around, There's all kinds of robots and cars and things like that. You have to evaluate these robots inside a virtual gym that represents the physical world. So it has to be software that obeys the laws of physics. And that's a second computer. We call that Omniverse. The third computer is the computer at the edge, the robotics computer. That robotic computer, one of them could be self-driving car.
Starting point is 00:06:35 Another one's a robot. Another one could be a teddy bear. A little tiny one for a teddy bear. one of the most important ones is one that we're working on that basically turns the telecommunications base stations into part of the AI infrastructure. So now, it's a $2 trillion industry, all of that in time will be transformed
Starting point is 00:06:55 into an extension of the AI infrastructure. And so radios will become edge devices, factories, warehouses, you name it. And so there are three, these three basic computers, all of them, you know, are going to be necessary. Jensen, last year, I think you were ahead of the rest of the world in saying inference isn't going to a thousand X. Just last year.
Starting point is 00:07:19 Yes. Brad, you're hurting my feelings. Is it good at one million X is going to one billion X. Yeah. Right? And I think people at the time thought it was pretty hyperbolic because the world was still focused on prescaling, on training. Here we are.
Starting point is 00:07:32 Now inference has exploded where inference constrained. You announced an inference for, factory that I think is leading edge that's going to be 10x better in terms of throughput to the next factory. But yet if you, if I listen to what the chatter is out there, it's that your inference factory is going to cost 40 or 50 billion. And the alternatives, the custom A6, AMD, others are going to cost 25 to 30 billion and you're going to lose share. So what did you talk to us? What are you seeing? How do you think about share? And does it make sense for all these folks to pay something that's a 2x premium to what others are marketing?
Starting point is 00:08:07 The big takeaway, the big idea is that you should not equate the price of the factory and the price of the tokens, the cost of the tokens. It is very likely that the $50 billion factory, and in fact, I can prove it that the $50 billion factory will generate for you the lowest cost tokens. And the reason for that is because we produce these tokens at extraordinary efficiency. 10 times, you know, the difference between 50 billion. Now, it turns out 20 billion is just land power and shell, right? Right. And then on top of that, you have storage anyways, networking anyways, you got CPUs anyways, you got servers anyways, you got cooling anyways.
Starting point is 00:08:55 The difference between that GPU being 1x price or half X price is not between 50 billion and 30. billion. Pick your favorite number, but let's say between 50 billion and 40 billion. That is not a large percentage when the $50 billion data center is actually 10 times the throughput. That's the reason why I said that even for most chips, if you can't keep up with the state of the technology and the pace that we're running, even when the chips are free, it's not cheap enough. Yeah. Can I just ask a general strategy question? Yeah. I mean, you're running the most valuable company in the this thing is going to do 350 plus billion of revenue next year, 200 billion of free cash flow. It's compounding at these crazy rates.
Starting point is 00:09:41 How do you decide what to do? Like, how do you actually get the information? I mean, it's famous now these sort of emails that people are meant to send you. But how do you really decide to get an intuition of how to shape the market? Where to really double down. Where to maybe pull back? Where to actually go into a greenfield. How does that information get to you?
Starting point is 00:10:00 How do you decide these things? In a final analysis, that's the job of the market. CEO. Yeah. And our job is to define the strategy, define the vision, define the strategy. We're informed, of course, by amazing computer scientists, amazing technologists, great people all over the company, but we have to shape that future. Well, part of it has to do with, is this something that's insanely hard to do? If it's not hard to do, we should back away from it. And the reason for that is, if it's easy to do, obviously, lots of competitors. A lot of competitors. Yeah. Is this something that has never been done before that's insanely hard to do?
Starting point is 00:10:33 And that somehow taps into the special superpowers of our company. And so I have to find this confluence of things that meets the standard. And in the end, we also know that a lot of pain and suffering is going to go into it. There are no great things that are invented because it was just easy to do. And just like first try, here we are. And so if it's super hard to do, nobody's ever done it before. It's very likely that you're going to have a lot of pain and suffering. And so you better enjoy it.
Starting point is 00:11:00 So can you just look at maybe three or four? forward the more long-tail things you announced, and just talk about the long-term viability of whether it's the data centers in space or whether it's what you're trying to do with ADAS and autos or what you're trying to do on the biology side. Just give us a sense of how you see some of these curves inflecting upwards in some of these longer-tailed business. Excellent. Physical AI, large category. We believe, and I just mentioned, we have three computing systems, all the software platforms on top of it. Physical AI as a large category, it's technology industry's first opportunity to address a $50 trillion industry that has largely been, you know, void of technology until now.
Starting point is 00:11:45 And so we need to invent all of the technology necessary to do that. I felt that that was a 10-year journey. We started 10 years ago. We're seeing an inflecting now. It is a multi-billion dollar business for us. It's close to $10 billion a year now. And so it's a big business and it's growing exponentially. And so that's number one.
Starting point is 00:12:03 I think in the case of digital biology, I think we are literally near the chat GPT moment of digital biology. We're about to understand how to represent genes, proteins, cells. We already know how to understand chemicals. And so the ability for us to represent and understand the dynamics of the building blocks of biology, that's a couple, two, three, five years from now. In five years time, I completely believe that the healthcare industry where digital biology is going to inflect.
Starting point is 00:12:32 And so these are a couple of the really great ones. And you could see they're all around us. Agriculture. Inflicting now. No question. Yeah. Benson, I want to take you from the data center to the desktop. The company was built in large part on hobbyists, video gamers, and all those graphic cards in the beginning.
Starting point is 00:12:51 And you mentioned in front of, I think, 10,000 people here just clawed, open claw, clawed code, and what a revolution agents have become. and specifically the hobbyists who are really where a lot of energy, we see a lot of the innovation breaks, want desktops. You announced one here. I believe it's the Dell 6,800. This is a very powerful workstation to run local models, 750 gigs of RAM.
Starting point is 00:13:19 Obviously, the Mac studio sold out everywhere. In my company, we're moving to OpenClaw everything. Freberg just got claw-pilled. You got claw-pilled, I understand, and you're obsessed with these. What is this from the streets movement of creating open source agents and using open source on the desktop mean to you?
Starting point is 00:13:40 So great. Where is that going? Yeah, so great. First of all, let's take a step back. In the last two years, we saw basically three inflection points. The first one was generative. Chat GPT brought AI to the common everybody, to our awareness.
Starting point is 00:13:58 But the fact of the matter is the technology, sat in plain sight months before GPT. It wasn't until chat GPT put a user interface around it, made it easy for us to use, that generative AI took off. Now generative AI, as you know, generates tokens for internal consumption as well as external consumption. Internal consumption is thinking, which led to reasoning. 01 and 03, continue that wave of chat GPT, grounded information, made AI not only answer questions, but answer questions. in a more grounded way useful. We started seeing the revenues
Starting point is 00:14:34 and the economic model of open AI start to inflect. Then the third one was only inside the industry that we saw. ClawCode, the first agenic system that was very useful. Really revolutionary stuff. But ClawCode was only available for enterprises. Most people outside never saw anything about CodCode until Open Claw. Open Claw.
Starting point is 00:14:59 claw basically put into the popular consciousness what an AI agent can do. That's the reason why open claw is so important from a cultural perspective. Now, the second reason why is so important is that open claw is opened, but it formulates, it structures a type of computing model that is basically reinventing computing all together. It has a memory system. It scratch is a short-term memory file system. It has scales. Did you say skills or scales?
Starting point is 00:15:36 Skills. Oh, skills. They do have scales, theoretically. Yeah. Skills. So the first thing, first thing, it, you know, it has resources. It manages resources. It does scheduling.
Starting point is 00:15:47 Yep. Right? And it, Cron jobs. It could, it could spawn off agents. It could, you know, it could decompose a task and cause and solve problems. does scheduling. It has IO subsystems. It can, you know, input, it has output, it connect to WhatsApp. And also, it has a API that allows it to run multiple types of applications called skills. These four elements fundamentally define a computer. Yeah. And therefore, what do we have?
Starting point is 00:16:16 We have a personal, artificial intelligence computer for the very first time. Open source. It's open source. It runs literally a everywhere. And so this is now the, this is the, this is basically the blueprint, the operating system of modern computing. Yeah. And it's going to run literally everywhere. Now, of course, one of the things that we have to help it do is whenever you have agentic software, you have to make sure that an agentic software has access to sensitive information, it can execute code, it could communicate externally. We have to make sure that all of it has to be governed, all of it has to be secure, and that we have policies that gives these agents two of the three things, but not all three things at the same time.
Starting point is 00:16:59 And so the governance part of it, we contributed to Peter. Peter Steinberger was here. And so we've got a mound of great engineers working with him to help secure and keep that thing so that it could protect our privacy, protect our security. Jensen, that paradigm shift makes some of the AI legislation that has passed around the country to regulate AI. and a lot of the proposed legislation effectively moot, doesn't it? Can you just comment for a second on how quickly the paradigm shift kind of obviates a lot of the models for regulatory oversight of AI, which is becoming a very hot topic in politics right now? Well, this is the part that we just, with policymakers,
Starting point is 00:17:38 we need to always get in front of them, and Brad, you do a great job doing this. We had to get in front of them and inform them about the state of the technology, what it is, what it is not. It is not a biological being. It is not alien. It is not conscious. It is computer software.
Starting point is 00:18:00 Yeah, exactly. And it is not something that we say things like we don't understand it at all. It is not true. We don't understand at all. We understand a lot of things about this technology. And so I think, one, we have to make sure that we continue to inform the policymakers and not affect, not allow dumerism and extremism to affect how policymakers think and understand about this technology.
Starting point is 00:18:26 However, we still have to recognize this technology is moving really fast and don't get policy ahead of the technology too quickly. And the risk that we run as a nation, our greatest source of national security concern with respect to AI, is that other countries adopt this technology while we are so angry at it or afraid of it
Starting point is 00:18:47 or somehow paranoid of it that our industries, our society, don't take advantage of AI. I'm just mostly worried about the diffusion of AI here in the United States. Can you just double-click if you were in the seat in the boardroom of Anthropic over that whole scuttle butt
Starting point is 00:19:03 with the Department of War? It sort of builds on this idea of people didn't know what to think. It's sort of added to this layer of either resentment or fear or just general mistrust that people have sometimes at the software levels of AI, what do you think you would have told Dario and that team to do maybe differently to try to change some of this outcome and some of this perception? The first thing that I would say about Anthropic is, first of all, the technology is incredible. We are a large consumer of anthropic technology.
Starting point is 00:19:32 Really admire their focus on security, really admires their focus on safety. the culture by which they went about it, the technology excellence by which they went about it, really fantastic. I would say that the desire to warn people about the capability, the technology is also really terrific. We just have to make sure that we understand that the world has a spectrum
Starting point is 00:20:00 and that warning is good, scaring is less good. And because this technology, technology is too important to us. Right. And I think that it is fine to predict the future, but we need to be a little bit more circumspect. We need to have a little bit more humility that, in fact, we can't completely predict the future. And to say things that are quite extreme, quite catastrophic, that there's no evidence of it happening, could be more damaging than people think. And of course, we are technology leaders.
Starting point is 00:20:39 There was a time when nobody listened to us. But now, because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter. And I think we have to be much more circumspect. We have to be more moderate. We have to be more balanced. We have to be more thoughtful.
Starting point is 00:20:59 Well, you know, I would nominate you. I think the industry's got to get together. 17% popularity of AI in the United States. I mean, we see what happened to nuclear, right? We basically shut down the entire nuclear industry, and now we have 100 fission reactors being built in China and zero in the United States. We hear about moratoriums on data centers,
Starting point is 00:21:19 so I think we have to be a lot more proactive about that. But I want to go back to this agentic explosion that you're seeing inside your company, the efficiencies, the productivity gains inside your company. There's a lot of debate whether or not we're seeing ROI, right? And you and I entering into this year, the big question was, are the revenues going to show up? Are the revenues going to scale like intelligence? And then we had this kind of Oppenheimer moment, a $5, $6 billion month by Anthropic in February.
Starting point is 00:21:48 Do you think, as you look ahead, you announced a trillion dollar, you know, visibility into a trillion dollars of just Blackwell and Vera Rubin over the course of the next couple of years? When you see this happening at Anthropic and Open AI, do you think we're on that curve now where we're going to, to see revenues scale in the way that intelligence is scaling. When you look around, I'll answer to this couple of different ways. When you look around this audience, you will see that anthropic and open AI is represented here. But in fact, every but 99% of everything that is here is all AI and it's not anthropic and open AI. Right. Right.
Starting point is 00:22:21 And the reason for that is because AI is very diverse. I would say that the second most popular model as a category is open models. Number one is open source. Open weights, open source. OpenAIs number one. Open source is number one. Open source is number two. Very distant third is anthropic. And that tells you something about the scale of all of the AI companies that are here. And so it's important to recognize that. Let me come back and say a couple things. One, when we went from generative to reasoning, the amount of computation we needed was about 100 times. When we went from reason, to agenic, the computation is probably another hundred times. Now we're looking at in just two years, computation went up by a fact, 10,000 X. Meanwhile, people pay for information, but people mostly pay for work.
Starting point is 00:23:22 Talking to a chatbot and getting an answer is super great. Right. Helping me do some research, unbelievable. But getting work done, I'll pay for. And so that's where we are. Agentic systems get work done. They're helping our software engineers get work done. And so then you take that.
Starting point is 00:23:42 You got 10,000 X more compute. You get probably at this point 100x more consumption now. And we haven't even started scaling yet. We are absolutely at a million X. Which is, I think, a great place to talk about the number of engineers have 20, 30,000 at the company. We have 43,000 employees. I would say 38,000 are engineers. The conversation we've had on the pod a number of times is, oh my God, look at the token usage in our companies.
Starting point is 00:24:13 It is growing massively. And some people are asking, hey, when I join a company, how many tokens do I get? Because I want to be an effective employee. And you postulated, I believe, during your two and a half hour keynote, pretty long keynote, well done, that you were spending. It was well done. It would be shorter. Yeah. You didn't have time to do. Yeah.
Starting point is 00:24:34 So you guys know, so you guys know, so you guys know there is no practice. Yeah. And so it's a gripping and rip. And rip. Yeah. Yeah. So I just want to let you know I was writing the speech while I was giving the speech. Okay.
Starting point is 00:24:47 So. You never know. But does that mean if we do back? I apologize. I apologize. Yeah. 75,000 in tokens for each engineer or something like that. So are you spending in Nvidia a billion, $2 billion?
Starting point is 00:25:00 on tokens from your engineering team right now? We're trying to. Let me give you the thought experiment. Let's say you have a software engineer or AI researcher and you pay them $500,000 a year. We do that all the time. Okay, this is happening all of the time. That $500,000 engineer at the
Starting point is 00:25:16 end of the year, I'm going to ask them how much did you spend in tokens? And that person said, $5,000, I will go ape something else. Yes. Right. If that $500,000 engineer did not consume at least $250,000 with the tokens, I am going to be deeply alarmed. Okay?
Starting point is 00:25:36 And this is no different than one of our chip designers who says, guess what? I'm just going to use paper and pencil. I don't think I'm going to need any cat tools. This is a real paradigm shift to start thinking about these all-star employees. It almost reminds me of what we learned in the NBA when LeBron James started spending a million dollars a year just on his health of his body, like in maintaining it. That's right. Here he is at age 41 still playing.
Starting point is 00:26:01 It really is, hey, if these are incredible knowledge workers, why wouldn't we give them superhuman abilities? That's exactly. Where does that go? If we extrapolate out two or three years from now, what is the efficiency of that All-Star at an Nvidia and what they're able to accomplish? What do they look like? Well, first of all, things that, wow, this is too hard. That thought is gone. This is going to take a long time.
Starting point is 00:26:27 That thought is gone. We're going to need a lot of people. That thought is gone. This is no different than in the last industrial revolution, somebody goes, boy, that building really looks heavy. Nobody says that. Wow, that mountain looks too big. Nobody says that.
Starting point is 00:26:42 Everything that's too big, too heavy, takes too long, those ideas are all gone. You're reduced to creativity. That's right. What can you come up with? Exactly. Which means, now the question is, how do you work with these agents? Well, it's just a new way of doing computer programming. In the past we code.
Starting point is 00:27:00 In the future, we're going to write ideas, architectures, specifications. We're going to organize teams. We're going to help them define how to evaluate the definition of good versus bad. What does it look like when something is a great outcome? How to iterate with you, how to brainstorm. That's really what you're looking for. And I think that every engineer is going to have 100, 100 agents. Back to the PR problem the industry has right now, you have executives like David Freiburg with O'Holloh,
Starting point is 00:27:33 who's looking at literally taking through the use of technology, your technology and AI, the number of calories produced and making high-quality calories. What is the factor you think you can bring the cost down, Freiburg, and what impact does this vision have for what you're doing? Zero shot genomic modeling, and it works. Yeah. And you have that moment and you're like, holy shit. Honestly, like, and that's after people are replacing entire enterprise software stacks in a night. I did something in 90 minutes.
Starting point is 00:28:06 I was telling the guys about replaced a whole software stack and like a whole bunch of workload. 90 minutes on Claude ran this agentic system, built the whole thing, deployed it. And we got, we were on a Sunday night. On a Sunday night. 10 p.m. I was done at 11.30. I went to bed. As the CEO, you replaced. Yeah.
Starting point is 00:28:20 And everyone on my management team had to do a similar exercise over the weekend. What we saw on Monday, I was like, it's over. But the technical stuff, the science stuff, we did something in 30 minutes using auto research. And I'd love your view on auto research and what that tells us about how far we still have to go in terms of efficiency. But using auto research and a chunk of data, something was published internally that we said, oh my God. And that would normally be a PhD thesis that would take seven years.
Starting point is 00:28:48 It would be one of the most celebrated PhD pieces we've ever seen in this field. And it would be in the journal science. and it was done in 30 minutes on a desktop computer running on auto research. With all the data we just ingested, we got it on Friday, and we're like, hey, let's try it. Boot it up, going to GitHub, download it auto research, and ran it. And you see everyone's face just go like, and then the potential of what this is unlocking for us is like the kind of thing that would take seven years, and it happened in 30 minutes. And we're experiencing it in genomics.
Starting point is 00:29:16 And we're like, this is unbelievable. So I think like the acceleration is widening the aperture for everyone. in a way that you didn't imagine a few years ago. But just going back to the auto research point, can you just comment on what you think about the fact that this thing got published with 600 lines of code in a weekend and the capacity that it has to run locally and achieve what it can achieve with all of these diverse data sets?
Starting point is 00:29:40 And what that tells us about the early stages we are in terms of optimization on algorithms and hardware. The fundamental reason why OpenClaugh is so incredible, number one, is its confluence, its timing with the breakthroughs in large language model. Its timing was perfect. It was impeccable. Now, in a lot of ways,
Starting point is 00:30:01 Peter wouldn't have come up with it, probably, if not for the fact that Claude and GPT and ChatGPT have reached a level that is really very good. It is also a new capability that allows these models to use tools. The tools that we've created over time, web browsers and Excel spreadsheets and, you know, in the case of chip design, Synopsys and Cadence and Omniverse and Blender and Autodesk, all of these tools are going to continue to be used.
Starting point is 00:30:33 Some people say that the enterprise IT software industry is going to get destroyed. Let me give you the alternative view. The enterprise software industry is limited by butts in seats. It's about to get 100 times more agents banging on those tools. There are going to be agents banging on SQL, agents banging on vector databases, agents banging on Blender, agents banging on Photoshop. And the reason for that is because those tools,
Starting point is 00:31:01 first of all, do a very good job. Second, those tools are the conduit between us. In the final analysis, when the work is done, it has to be represented back to me in a way that I can control. And I know how to control those tools. And so I need everything to be put back into Synopsys. I want everything put back into Cadence, because that's how I control it.
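The pattern described here (agents driving existing tools rather than replacing them) can be sketched as a simple tool-dispatch loop. Everything below is illustrative; the tool names and the dispatch shape are assumptions, not any real product's API:

```python
def run_sql(query):
    # Stand-in for a real database call an agent might make.
    return f"rows for: {query}"

def search_vectors(text):
    # Stand-in for a vector-database lookup.
    return f"neighbors of: {text}"

# The registry of existing tools the agents "bang on".
TOOLS = {"sql": run_sql, "vector_search": search_vectors}

def agent_step(plan):
    # Each planned step names a tool plus an argument; the harness dispatches
    # it and collects the result, the same artifacts a human operator would see.
    return [TOOLS[name](arg) for name, arg in plan]

out = agent_step([("sql", "SELECT 1"), ("vector_search", "robot arm specs")])
```

The point of the sketch is that the tool layer stays put: results come back through the same interfaces a human already knows how to inspect and control.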
Starting point is 00:31:23 That's how I ground-truth it. Let me ask you a question about open source. So we have these closed source models. They're excellent. We have these open weight models. Many of the Chinese models are incredible. Absolutely incredible. Two days ago, you may not have seen this because you were busy on stage,
Starting point is 00:31:38 but there was a training run that happened in this crypto project called BitTensor. Subnet 3, they managed to train a 4 billion parameter Llama model, totally distributed, with a bunch of people contributing excess compute. But they were able to do it statefully and manage a training run, which I thought was a pretty crazy technical accomplishment.
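A run like the one described (many independent contributors, one synchronized model state) can be sketched as data-parallel gradient averaging. This is a toy illustration, not BitTensor's actual protocol; the random "gradients" stand in for real backward passes:

```python
import random

def local_gradient(weights, seed):
    # Stand-in for one contributor's backward pass on its own data shard.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in weights]

def aggregate(grad_lists):
    # The coordinator averages the gradients submitted by all contributors,
    # keeping a single shared (stateful) set of weights.
    n = len(grad_lists)
    return [sum(g[i] for g in grad_lists) / n for i in range(len(grad_lists[0]))]

def training_round(weights, contributors, lr=0.01):
    grads = [local_gradient(weights, c) for c in contributors]
    avg = aggregate(grads)
    return [w - lr * g for w, g in zip(weights, avg)]

weights = [0.0, 0.0, 0.0, 0.0]
for _ in range(3):  # three synchronized rounds across three contributors
    weights = training_round(weights, contributors=[1, 2, 3])
```

Real decentralized runs layer verification, fault tolerance, and incentives on top of this loop; the hard part is keeping every contributor's update consistent with one shared state.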
Starting point is 00:32:00 Yeah. Because it's like random people and each person gets a little share. A modern version of Folding@home. Exactly. Yeah. So what do you think about the end state of open source? Do you see this decentralization of architecture as well and decentralization of compute
Starting point is 00:32:15 to support open weights and a totally open source approach to making sure AI is broadly available to everyone? I believe we fundamentally need models as a first-class, proprietary product, as well as models as open source. These two things are not A or B; it's A and B. There's no question about it. And the reason for that is because a model is a technology, not a product. Models are technology, not a service.
Starting point is 00:32:44 For the vast majority of consumers, the horizontal layer, the general intelligence, I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love to use Claude. I love to use Gemini. I love to use X. And they all have their own personalities, as you know, which just kind of depends on my mood and depends on what problem I'm trying to solve.
Starting point is 00:33:06 You know, I might do it on X or I might do it on ChatGPT. And so that segment of the industry is thriving. It's going to be great. However, all these industries, their domain expertise, their specialization, has to be channeled, has to be captured in a way that they can control. And that can only come from open models. The open model industry, which we're contributing to tremendously, is near the frontier. And quite frankly, even if it reaches the frontier, I think that world-class models as a product, as a service, are going to
Starting point is 00:33:45 continue to thrive. Every startup we're investing in now is open source first and then going to the proprietary models. Yeah. And the beautiful thing is, because you have a great router you're connected to from day one, every single day you're going to have access to the world's best model. And then it gives you time to cost-reduce and fine-tune and specialize. So you're going to have world-class capabilities out of the chute every single time. Jensen, can I ask? Of course. Nobody wants the U.S. to win the global AI race more than you, right?
Starting point is 00:34:18 But a year ago, the Biden-era diffusion rule really worked against the diffusion of American AI around the world. So here we are, a year into the new administration. Give us a grade. Where are we in terms of global diffusion and the rate at which we're spreading U.S. AI technology around the world? Are we an A, or are we a B, or are we a C? What's working? What's not working? Well, first of all, President Trump wants American industry to lead. He wants the American technology industry to lead.
Starting point is 00:34:51 He wants the American technology industry to win. He wants us to spread American technology around the world. He wants the United States to be the wealthiest country in the world. He wants all of that. At the current moment, as we speak, Nvidia gave up a 95% market share in the second largest market in the world, and we're at zero percent. President Trump, that's right.
Starting point is 00:35:16 President Trump wants us to get back in there. And the first thing is to get licenses for the companies that we're going to be able to sell to. We've got many companies who have requested licenses. We've applied for licenses for them, and we've got approved licenses from Secretary Lutnick. Now we've informed the Chinese companies, and many of them have given us purchase orders. And so we're in the process of cranking up our supply chain again to go ship. I think at the highest level, Brad, one of the things that we should acknowledge is this.
Starting point is 00:35:53 Our national security is diminished when we don't have access to miniature motors, rare earth minerals. It's diminished when we don't control our telecommunications networks. It's diminished when we can't provide sustainable energy for our country. It is fundamentally diminished. Every single one of these industries is an example of what I don't want the AI industry to be. When we look forward in time and we ask, what do we want? What does it look like when the American technology industry, the American AI industry, leads the world? We can all acknowledge that there is no way that the AI model race is won
Starting point is 00:36:37 universally; we can all acknowledge that is an outcome that makes no sense. However, we can all imagine that the American tech stack, from chips to computing systems to the platforms, is used broadly by the world, where they build their own AI, they use public AI, they use private AI, whatever, and they can build their applications in their society. I would love for the American tech stack to be 90% of the world. I would love that. The alternative, if it looks like solar, rare earths, magnets, motors, telecommunications, I consider a very bad outcome for national security. How much are you monitoring the situation with the conflicts around the world right now?
Starting point is 00:37:27 And how much does it worry you, Jensen? So China and Taiwan, and then helium availability coming out of the Middle East, I understand, can be a supply chain risk to semiconductor manufacturing. How much do these situations worry you? How much time are you spending on them? Well, first of all, in the Middle East, we have 6,000 families there. We have a lot of Iranians at Nvidia, and their families are still in Iran.
Starting point is 00:37:50 And so we have a lot of families there. The first thing is they're quite anxious, they're quite concerned, quite scared. We're thinking about them all the time. We're monitoring and keeping an eye on them all the time. They have 100% of our support. I've been asked several times, are we still considering being in Israel?
Starting point is 00:38:07 We are 100% in Israel. We are 100% behind the families there. We are 100% in the Middle East. I was also asked, you know, given what's happening in the Middle East, is that an area where we believe that we can expand artificial intelligence to? I believe that there's a reason we went to war,
Starting point is 00:38:27 and I believe at the end of the war, the Middle East will be more stable than before. And so if we were considering it before, we should absolutely be considering it after. And so I'm 100% in on that. With respect to Taiwan, we have to do three things. One, we have to make sure that we reindustrialize the United States as fast as we can, whether it's the chip manufacturing plants, the computer manufacturing plants, or the AI factories.
Starting point is 00:38:56 How are we doing on that? We're doing excellent. By gaining the strategic support, by gaining the friendship, of the supply chain of Taiwan. By gaining their friendship, by gaining their support, we were able to build in Arizona and Texas and California at incredible rates. They are genuinely a strategic partner.
Starting point is 00:39:21 They really deserve our support. They deserve our friendship. They deserve our generosity. And they're doing everything they can to accelerate the manufacturing process for us. And so I think that's number one. Number two, we ought to diversify the manufacturing supply chain. Whether it's South Korea, whether it's Japan, whether it's Europe, we ought to diversify the supply chain, make it more resilient.
Starting point is 00:39:46 And number three, let's demonstrate restraint. While we're increasing our diversity and resilience, let's not press, let's not push. We need to be patient. Be thoughtful. Is helium a problem? A lot of reports. You know, I think helium could be a problem, but it's also the case that the supply chain probably has a lot of buffer in it. These kinds of things tend to have a lot of buffer.
Starting point is 00:40:17 But, you know. You've made massive progress in self-driving. You made a big announcement. You've added many more partners, including BYD. There was just a video of you driving around in a Mercedes, and a huge announcement with Uber that you're going to have a number of cars on the road for many different manufacturers. Your bet, I believe, is that there's going to be an Android-type open-source platform
Starting point is 00:40:46 that you're going to play a major part in with dozens of car providers. And then maybe on the other side, there could be an iOS with Tesla or Waymo. What's your strategic thinking there, and how does that chessboard emerge? Because it feels like you have a pretty deep stack, and in some ways you're competing and in other places you're collaborative. Yeah. Let's take a step back.
Starting point is 00:41:12 We believe that everything that moves will be autonomous completely or partly someday. Number one. Number two, we don't want to build self-driving cars but we want to enable every car company in the world to build self-driving cars. And so we built all three computers, the training computer, the simulation computer,
Starting point is 00:41:30 evaluation computer, as well as the car computer. We developed the world's safest driving operating system. We also created the world's first reasoning autonomous vehicle, so that it could decompose complicated scenarios into simpler scenarios that it knows how to navigate through, just like us, reasoning systems. And so that reasoning system, called Alpamayo, has enabled us to achieve incredible results.
Starting point is 00:41:57 We open this up. We vertically optimize, we horizontally innovate, and we let everybody decide. Do you want to buy one computer from us? In the case of Elon and Tesla, they buy our training computers.
Starting point is 00:42:12 Do you want to buy our training computer and our simulation computers? Or do you want to work with us to do all three and even put the car computer in your car? You know, our attitude is we want to solve the problem; we're not the solution provider,
Starting point is 00:42:28 and we're delighted however you work with us. Let me build on this question, because I think it's so fascinating that you actually do create this platform. A thousand flowers are blooming, but it's also true that some of those flowers now want to go back down the stack and try to compete with you a little bit.
Starting point is 00:42:46 Google has TPU, Amazon has Inferentia and Trainium. Everybody's sort of spinning up their own version of, I think I can out-Nvidia Nvidia, even though they also tend to be huge customers. How do you navigate that? And what do you think happens over time, and where do those things play in the complexion of this kind of vision?
Starting point is 00:43:06 Yeah, really great question. You know, first of all, we're an AI company. We build foundation models. We're at the frontier in many different domains. We build every single layer, every single stack. We're the only AI company in the world that works with every AI company in the world.
Starting point is 00:43:24 They never show me what they're building, and I always show them exactly what I'm building. Right. Yeah. And so the confidence comes from this. One, we are delighted to compete on what is the best technology, and to the extent that we can continue to run fast, I believe that buying from Nvidia is still one of the most economic things they could do. And I just have incredible confidence there. That's number one. Number two, we're the only architecture that could be in every cloud, and that gives us some fundamental advantages. We're the only architecture you could take from a cloud and put into on-prem, in the car, in any region.
Starting point is 00:44:01 In space. That's right. In space. And so there's a whole part of our market, about 40% of our business (most people don't realize this, 40% of our business), where unless you have the CUDA stack, unless you can build an entire AI factory, the customers don't know what to do with you. They're not trying to build chips. They're not trying to buy chips. They're trying to build AI infrastructure. And so they want you to come in with the full stack, and we've got the whole stack. And so, surprisingly, Nvidia is gaining market share. If you look at where we are today, we're gaining share. If you think what happens is these guys try and they realize,
Starting point is 00:44:34 oh my God, it's too much, and then they come back. Is that why the share grows? Well, we're gaining share for several reasons. One, our velocity. We've helped people realize it's not about building the chip, it's about building the system. And that system is really hard to build. And so their business with us is increasing. In the case of AWS, I think they just announced, I think it was yesterday,
Starting point is 00:44:58 that they're going to buy a million chips in the next couple of years. I mean, that's a lot of chips from AWS, and that's on top of all the chips they've already bought. And so we're delighted to do that. But number one, we're gaining share these last couple of years because we now have Anthropic coming to Nvidia. Meta is coming to Nvidia. And the growth of open models is incredible, and that's all on Nvidia. And so we're growing in share because of the number of models. We're also growing in share because all of these companies are outside the cloud, and they're growing regionally, in enterprise and industries, at the edge, and that entire segment of growth is really hard to do if it's just building an ASIC. Brad.
Starting point is 00:45:38 Related to that, and not to get in the weeds on the numbers, but analysts don't seem to believe it, right? So if you look at the consensus forecast: you said compute could go up one million X, right? And yet they have you growing next year at 30%, the year after that at 20%, and in 2029, which is supposed to be a monster year, at 7%. Right? So if you take your TAM and you apply their growth numbers,
Starting point is 00:46:10 it suggests that your share will plummet. Do you see anything in your future order book that would make that correct? Yeah, first of all, they just don't understand the scale and the breadth of AI. Yes. Yeah. I think that most people think that AI is in the top five hyperscalers. Right. That's right.
Starting point is 00:46:31 There's also an orthodoxy around the law of large numbers, where, you know, they have to go back to their investment banking risk committee and show some model. They're not going to believe in their minds that $5 trillion goes to $15 trillion. They're like, it can go to $7. Or there's never been a $10 trillion company. It's all just CYA stuff, I think, ultimately. It's never happened before, so you can't say it will.
Starting point is 00:46:52 And because you have to redefine what it is that you do. There was somebody who made an observation recently: Nvidia, Jensen, how can you be larger than Intel in servers? And the reason for that is because the CPU market of the entire data center was about $25 billion a year. Right. We do $25 billion, as you guys know, in the time that we were sitting here. And so obviously, obviously.
Starting point is 00:47:17 That was a joke. All-In Podcast. Don't worry. Everything on the show is roughly true. It's all in. That's all in. That was not guidance.
Starting point is 00:47:32 But anyhow, the point is, how big you can be depends on what it is that you make. Nvidia's not making chips. Number one, making chips does not help you solve the AI infrastructure problem anymore. It's too complicated. Number three, most people think that AI is narrowly in the things that they talk about and hear and see.
Starting point is 00:47:54 AI is much bigger. OpenAI is incredible. They're going to be enormous. Anthropic is incredible. They're going to be enormous. But AI is going to be much, much bigger than that, and we address that segment.
Starting point is 00:48:06 Tell us about data centers in space for a second. Yeah. We're already in space. How should the layman think about what that business is, versus when you hear about these big data center buildouts that are happening on the ground? Well, we should definitely work on the ground first, because we're already here. Number one. Number two, we should prepare to be out in space.
Starting point is 00:48:28 And obviously there's a lot of energy in space. The challenge, of course, is cooling: you can't take advantage of conduction and convection, so you can only use radiation. And radiation requires very large surfaces. Now, that's not an impossible thing to solve, and there's a lot of space in space. But nonetheless, the expense is still there. We're going to go explore it.
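The "very large surfaces" constraint falls out of the Stefan-Boltzmann law. A back-of-envelope sketch, where the numbers (1 MW of waste heat, 300 K radiators, emissivity 0.9) are illustrative assumptions, not figures from the episode:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w, temp_k, emissivity=0.9):
    # P = emissivity * SIGMA * A * T^4, solved for the radiating area A.
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW of waste heat from radiators at 300 K (illustrative numbers):
area = radiator_area_m2(1e6, 300.0)  # roughly 2,400 square meters
```

Area scales linearly with power but falls with the fourth power of radiator temperature, which is why radiator surface, not compute, tends to dominate orbital datacenter concepts.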
Starting point is 00:48:53 We're already there. We're already radiation-hardened. We have CUDA in satellites around the world. They're doing imaging, image processing, AI imaging. And that kind of stuff ought to be done in space, instead of sending all the data back here and doing the imaging down here. We ought to just do imaging out in space. And so there's a lot of things that we ought to do in space.
Starting point is 00:49:14 And in the meantime, we're going to explore what the architecture of data centers in space looks like. And it'll take years. It's okay. I've got plenty of time. I wanted to double-click on health care. I know you've got a big effort there. We're all of a certain age where we're thinking about lifespan, health span.
Starting point is 00:49:31 I mean, we all look great, I think. Some better than others. I think some better than others. I don't know what your secret is, Jensen. Pretty good. I mean, what are you taking? What's off the menu? You've got to talk to me when we're backstage.
Starting point is 00:49:43 I want to know in the green room what you've got going on. Squats and push-ups and sit-ups. Perfect. Okay. But what do you know in terms of the buildout in health care? Where is that going? And what kind of progress are we making? I was just using Claude to do some analysis, saying, where are all these billing codes? We spend twice as much money in the U.S.; we seem to get half as much. It seemed like 15 to 25 percent of the dollars spent were on these first GP visits. And I think we all know that ChatGPT and a large language model do a better job, more consistently, today at a first visit. So what has to happen there to kind of break through all that regulation and have AI have a true impact on the health care system? There are several areas that we're involved in in health care. One is AI physics, or AI biology.
Starting point is 00:50:42 Using AI to understand, represent, and predict biological behavior. And so that's one. That's very important in drug discovery. The second is AI agents, and that's the assistance in helping with diagnosis and things like that. OpenEvidence is a really good example.
Starting point is 00:50:59 Hippocratic is a really good example. Love working with those companies. I really think this is an area where agentic technology is going to revolutionize how we interact with doctors and how we interact with healthcare. The third part that we're involved in is physical AI. The first one is AI physics, using AI to predict.
Starting point is 00:51:15 The second one is physical AI, AI that understands the properties of the laws of physics, and that's used for robotic surgery; huge amounts of activity there. Every single instrument, whether it's ultrasound or, you know, CT or whatever instrument we interact with in a hospital in the future, will be agentic. Yeah. You know, OpenClaw, in a safe version, will be inside every single instrument. And so in a lot of ways, that instrument is going to be interacting with patients and nurses
Starting point is 00:51:46 and doctors in very unique ways. So much investment is going into AI weapons. It would be wonderful to see some investment in AI EMTs and paramedics, saving lives, not just taking them. Yeah. Which I think is a great segue into robotics. You've got dozens of partners. Yeah.
Starting point is 00:52:02 We had this very weird, I don't know what to call it, a lost decade or 20 years, of Boston Dynamics. Google bought a bunch of companies. They then wound up selling them and spinning them out, where people just thought, ah, robotics is just not ready for prime time. And now here we have the world's greatest entrepreneur at this time, tied with you, Elon Musk, doing, well, that was a good save, I hope. Optimus, pretty impressive. And then other companies in China. How close is that to actually being in our lives,
Starting point is 00:52:34 where we might see a robotic chef, a robotic nurse, a robotic housekeeper, you know, these humanoid form factors actually working in the real world, knowing what you know with those partners and the fidelity, especially in China, where they seem to be doing as good a job as we're doing here, or maybe better? We invented the industry, largely. America invented it. You could argue we got into it too soon.
Starting point is 00:53:00 Yeah. And we got exhausted. We got tired about five years before the enabling technology appeared. The brain. Yeah, yeah. And we just got tired of it just a little too soon. Okay, that's number one. But it's here now.
Starting point is 00:53:16 Now, the question is, how much longer? From the point of a high-functioning existence proof to reasonable products, technology never takes more than a couple, two, three cycles. And so a couple, two, three cycles would basically be somewhere around three years to five years. That's it. In three to five years, we're going to have robots all over the place. I think China is formidable.
Starting point is 00:53:43 And the reason for that is because their microelectronics, their motors, their rare earths, their magnets, which are foundational to robotics, they are the world's best. And so in a lot of ways, our robotics industry relies deeply on their ecosystem and their supply chain. And they're, you know, obviously moving very quickly. Our robotics industry will have to rely a lot on it. The world's robotics industry will have to rely a lot on it. And so I think you're going to see some fast, fast movements here. Ultimately, one for one, Elon seems to think we're going to have one robot for every human, 7 billion for 7 billion, 8 billion for 8 billion.
Starting point is 00:54:23 Well, I'm hoping more. Yeah, I'm hoping more. Yeah. Well, first of all, there's a whole bunch of robots that are going to be in factories working around the clock. There's going to be a whole bunch of robots that don't move, or move just a little bit. Almost everything will be robotic. What does the world look like?
Starting point is 00:54:39 Sorry, let me just say, robotics for me is one of the pieces that I think unlocks economic mobility opportunities for every individual. When everyone got a car, they could go and do a lot of different jobs. When everyone gets a robot, their robot can do a lot of work for them. They can stand up an Etsy store or a Shopify store. They can create anything they want with their robot. They can do things that they independently cannot do.
Starting point is 00:55:05 I think the robot is going to end up being the greatest unlock for prosperity for more people on Earth than we've ever seen with any technology before. Yeah, no doubt. I mean, just the simple math at the moment is we're millions of people short on labor today. Right. Yeah. Right. We're actually really desperately in need of robotics.
Starting point is 00:55:24 And so all of these companies could grow more if they had more labor. I mean, number one. Some of the things that you mentioned are super fun. I mean, because of robots, we'll have virtual presence. You know, I'll be able to go into the robot of my house and virtually operate it. I'm on a business trip. Right.
Starting point is 00:55:45 Walk around the house. Yeah, walk the dog. Rake the leaves. Yeah, exactly. Walk the dog. Maybe not quite that, but just, you know, wander around.
Starting point is 00:55:54 Yeah. And just see what's going on in the house, you know, chat with the dogs, chat with the kids. Yeah. Yeah. Time travel is also, we're going to be able to travel at the speed of light, you know. And so, you know, clearly, we're going to send our robots ahead of us. Yeah. I'm not going to send myself.
Starting point is 00:56:09 I'm going to send a robot. Check it out. Yeah, yeah. And then I'm going to upload my AI. Well, it's inevitable. It unlocks the moon and it unlocks Mars as targets for colonization, which gives us infinite resources. Getting back from the moon is effectively zero energy cost to move material back because you can use solar and accelerate. So you could have factories that make everything the world needs on the moon, and the robots are going to be the unlock for enabling that.
Starting point is 00:56:32 That's right. Distance no longer matters. Distance doesn't matter. Yeah. Yeah. The more revenue we get out of models and agents, the more we can invest in building the infrastructure, which then unlocks more capabilities in models and agents. Dario, on Dwarkesh's podcast recently, said by '27, '28, we'll have hundreds of billions of dollars of
Starting point is 00:56:51 revenue out of the model companies and the agent companies, and he forecasts a trillion dollars by 2030, right? This is non-infrastructure AI revenue. I think he's being very conservative. I believe Dario and Anthropic are going to do way better than that. Wow. Wow. So from $30 billion to a trillion. Yep. And the reason for that is, the one part that he hasn't considered is that I believe every single enterprise software company will also be a reseller, a value-added reseller of Anthropic's Claude, of Anthropic's tokens.
Starting point is 00:57:27 A value-added reseller of OpenAI. That's right. You get this logarithmic expansion. Yes. Yeah. That part of their go-to-market is going to expand tremendously this year. What do you think in that world is the moat?
Starting point is 00:57:43 What's left over? I mean, you have some moats that are, frankly, I think, as this scales almost insurmountable, the best one that nobody talks about is probably CUDA, which is just like an incredible strategic advantage. But in the future, if a model can be used to create something incredible, then the next spin of a model can be used to maybe disrupt it.
Starting point is 00:58:04 Sort of, in your mind, what do you think, for these companies that are building at that application layer, what's their moat? Like, how do they differentiate themselves? Deep specialization. Deep specialization. I believe that they're going to have general models that are connected into the software company's agentic system. Right. Many of those models are cloud models and proprietary models, but many of those models are specialized subagents that they've trained
Starting point is 00:58:35 on their own. Right. So the call to arms for entrepreneurs is: look, know your vertical. That's right. Know it deeper and better than everybody else. That's right. And then wait for these tools, because they're catching up to you, and now you can imbue them with your knowledge. That's right.
Starting point is 00:58:50 And the sooner you connect your agent with customers, that flywheel is going to cause your agent to get hyper-specialized. It very much is an inversion of what we do today, because today we build a piece of software and we say, what generalizes? And then let's try to sell it as broadly as possible, and then sell the customization around it. In fact, exactly right. We create a horizontal, but notice there are all these GSIs and all of these consultants who are specialists
Starting point is 00:59:16 who then take your horizontal platform and specialize it. Exactly. And that's arguably a five or six times bigger industry, the customization. It is, absolutely. Yeah, that very much is.
Starting point is 00:59:30 That's right. So I think that these platform companies have an operational opportunity to become that specialist, to become that vertical. Right. Yeah, domain expertise. You know, I just want to give you your flowers. I think it was three years ago you said, you're not going to lose your job to AI; you're going to lose your job to somebody using AI.
Starting point is 00:59:46 And here we are. The entire conversation has revolved around this concept of agents making people superhuman and the business opportunity expanding and entrepreneurship expanding. You actually saw it pretty clearly. That's right. You changed your view. Well, you can hold space for, I think, two ideas. One is there is going to be a large. That's very J-Cal.
Starting point is 01:00:08 One is there are going to be a large. That's spiral J-Calicole. But that's just because he doesn't hang out with me enough. I mean, we fogg a little bit. Be careful what you got. You don't talk about it. He will show you about it. You'll follow you around.
Starting point is 01:00:21 I'm not asking for it. I'm not asking for it. You can come with me and Tucker. We ski in Japan every January. Oh, love it. You and Tucker will go on a road trip. Wow. There is going to be job displacement.
Starting point is 01:00:33 And then the question becomes, you know, do those people have the fortitude, the resolve, to then go embrace these, you know, technologies? We're going to see 100% of driving by humans go away. That's just, that's a beautiful thing in terms of the lives saved. But we have to recognize that's 15 million people in the United States, 10 to 15 million, who are employed in that way. And so that is going to happen, yes? I think that jobs will change. For example, there are many chauffeurs today who drive
Starting point is 01:01:09 the car. I believe that many of those chauffeurs will actually be in the car, sitting behind the steering wheel, while the car is driving by itself. And the reason for that is because, remember what a chauffeur does. In the end, these chauffeurs are helping you; they're your assistants. They're helping you with your luggage; I mean, they're helping you with a lot of things. And so I wouldn't be surprised, actually, if the chauffeurs of the future become your mobility assistants, helping you with a whole bunch of other stuff. Checking into the hotel. And the car is driving by itself.
Starting point is 01:01:36 Right. The autopilot in planes created a lot more pilots. Yeah. And didn't take any of the pilots out of the cockpit. Yeah. Even though the autopilot is flying the plane 90% of the time. And by the way, while that car is driving itself, that chauffeur is going to be doing a bunch of other work on his phone. And he's going to be arranging, for example, coordinating a bunch of things for you, getting, you know.
Starting point is 01:01:55 Yeah. The pie just grows. Yeah. So one of the things is that, yes, every job will be transformed. Some jobs will be eliminated. However, we also know that many, many jobs will be created. The one thing that I will say to young people who are coming out of school, who are concerned, who are anxious about AI,
Starting point is 01:02:14 be the expert at using AI. Yes. Look, we all want our employees to be expert at using AI. And it's not trivial, not trivial. And so knowing how to specify, not to overprescribe, leaving enough room for the AI to innovate and create while we guide it to the outcome we want. All of that requires artistry.
Starting point is 01:02:41 You had this great advice to when you were at Stanford, I think it was, which is I wish to you pain and suffering. Do you remember that? Fantastic. What's your advice to young people around what they should be studying? So if they're sort of about to leave high school, because now those are the kids that are at this really native, they haven't made a decision about college,
Starting point is 01:03:00 what to study, whether to go to college at all. How do you guide those kids? What would you tell them? I still believe that deep science, deep math, language skills. You know, as you know, language is the programming language of AI. The ultimate programming language. And so as it turns out, it could be that the English major could be the most successful. Yeah. And so I would just advise, whatever education you get, just make sure that you're deeply, deeply expert in using AIs. One of the things that I wanted to say with respect to jobs, and I want everybody to hear it,
Starting point is 01:03:38 is that, in fact, at the beginning of the deep learning revolution, one of the finest computer scientists in the world, whom I deeply, deeply, deeply respect, predicted that computer vision would completely eliminate radiologists, and that the one field he advised everybody not to go into was radiology. Ten years later, his prediction was 100% right. Computer vision has been integrated into all of the radiology technologies and
Starting point is 01:04:10 radiology platforms in the world, 100%. The surprising outcome is the number of radiologists actually went up, and the demand for radiologists has skyrocketed. The reason for that is because everybody's job has a purpose and its tasks. The task that you do is studying the scans, but your purpose is to help the doctors help the patients, diagnose disease. And so what's surprising is, because the scans are now being done so quickly, they could do more scans. Improving health care. Yes, but doing more scans more quickly allows patients to be onboarded a lot more quickly, treated a lot more quickly. And as it turns out,
Starting point is 01:04:57 because hospitals enjoy making money too. Yeah. They're doing more scans. They're treating more customers and more patients. The revenues go up and guess what? Perfect example. And a country that grows faster, productivity increases, a wealthier country can put more teachers in the classroom,
Starting point is 01:05:16 not fewer teachers in the classroom. That's right. You just give every one of those teachers a personalized curriculum for every student in the room. It makes them all bionic and leads to a lot more. Every single student will be assisted by AI, but every single student will need great teachers. Yeah.
Starting point is 01:05:31 Amazing. Jensen, congratulations on your success. And really, this is an incredibly positive, uplifting discussion. We really appreciate you taking the time for us.
Starting point is 01:05:40 He is the steward we need. You are. I think you need to be more vocal. I'm being very, very vocal about the positive side of it. I think there's so much doomerism. But I also think it takes humility to have this level of success
Starting point is 01:05:51 and be humble about we're making software, guys. Yeah. And I think that that's actually really healthy for people to hear. We have done this before. We have invented categories and industries before. We don't need to go to this scare-mongering place. It does nothing. And we get to choose, right? We have autonomy and agency. We get to pick how to deploy this. Okay, everybody. We'll see you next time. Thank you. On the All-In interview. Okay. Well done, brother. Thanks, man.
Starting point is 01:06:20 Good job. Thank you, sir. That was awesome. Good, good. You guys are awesome. Jensen Dane. Look at this. Look at this big crowd behind you guys. Man, I think they're here for you.
