All-In with Chamath, Jason, Sacks & Friedberg - Four CEOs on the Future of AI: CoreWeave, Perplexity, Mistral, and IREN

Episode Date: March 23, 2026

(0:00) Intro live from Nvidia GTC (0:37) CoreWeave CEO, Michael Intrator (32:58) Perplexity CEO, Aravind Srinivas (1:07:11) Mistral CEO, Arthur Mensch (1:18:57) IREN CEO, Daniel Roberts Our episode is... sponsored by the New York Stock Exchange - a modern marketplace and exchange for building the future. It all happens at the NYSE - https://nyse.com Follow the besties:  https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect

Transcript
Starting point is 00:00:00 I'm here at NVIDIA's annual GTC conference, and I'm going to interview four amazing AI CEOs. Stick with us. Our episode is sponsored by the New York Stock Exchange. Are you looking to change the world and raise capital? Do it at the NYSE. The NYSE is a modern marketplace and a massive platform built for scale and long-term impact. So if you're building for the future, the NYSE is where it happens. One of the great companies of the AI era is, of course, CoreWeave.
Starting point is 00:00:41 They're building massive infrastructure for these hyperscalers. And in some ways, Michael Intrator, welcome to the program. You're the original hyperscaler. You guys got in very early and secured, I don't know which GPUs you wound up getting, but you were very early to this trend. How did you get to it so early? And how did you build out this, you know, first, I guess at the time, NeoCloud? Yeah, so we didn't really start it as a neocloud.
Starting point is 00:01:10 I was running an algorithmic hedge fund focused on natural gas. And when you build an algorithmic hedge fund, once the algorithms are built, you're really just monitoring it and testing different theses and doing all that. But there's also a lot of downtime. And we got super interested in crypto. And we're pretty nerdy. We kind of dig under the hood, and we started to get interested in the security layer.
Starting point is 00:01:38 We looked at Bitcoin and the mining for Bitcoin and we didn't like it. We just thought that there's some brilliant engineer that built the ASIC and they're probably going to be better at running it than we are. So we really began to focus on the GPUs, mostly because with the GPUs, you can mine Ethereum, but you could also do all these other things. And really, right from the start, we looked at the compute as an option
Starting point is 00:02:05 to be able to deploy our computing power to different use cases. And so, you know, we began the company in 2017, spent the first, call it, three years mining crypto, went through a couple of crypto winters. Because we had come from a hedge fund, we have real chops in risk management and how we think about capital and risk exposure and allocation and all of that. And so we were really careful around that right from the start. So we weathered crypto winter really well and began to scale the company and immediately started to look for other use cases that you could use this compute for, because crypto was pretty
Starting point is 00:02:47 volatile. Yeah. And crypto was a question mark at that time. Absolutely. Yeah. I mean, Bitcoin was speculative and there were many other speculative projects. The only other people using this type of hardware were quants and medical researchers. So a good way to think about it is like the progression of
Starting point is 00:03:04 products that we kind of started to work on. You know, first was crypto, but we immediately moved from crypto to CGI rendering. And we built projects that allowed folks that were trying to animate and render images, you know, kind of what makes the movies cool, right? And we started to work on that. And then we moved to batch computing and started to look at medical research and different ways of using the compute to be able to drive science. And we just kind of kept moving up the stack in terms of complexity on how GPUs could be used.
Starting point is 00:03:40 And ultimately, in, call it, 2020, 2021, we started to really try to figure out how you can go ahead and use GPUs for neural networks. And that was not something that we knew how to do. And so we actually went out and bought a bunch of A100s and donated them to a group that was working on EleutherAI. They were working on an open source project with the thought that these guys are taking the GPU compute because we're donating it. They can't really get pissed at us
Starting point is 00:04:14 if we're not very good at it initially. And that worked out really well because they can't complain about the SLA. They kept telling us, like, we need more of this, you've got to work on this. And that began to really give us an understanding of what was necessary to run parallelized computing at scale. And, you know, we went through it. I kind of feel like buying those initial GPUs was the tuition we paid to learn how to run this business.
Starting point is 00:04:40 And then one of the interesting things is all of those guys went back to their day jobs because they were all volunteers working on this. They were like-minded scientists. And when they got to their day jobs, they were all like, I want that infrastructure. Yes. It's built the right way. That's the way that researchers are going to want to use it. And that launched our, our business. It was an amazing story.
Starting point is 00:05:00 And so you went from crypto to these researchers into academia and deep research. What's the next card to turn over in the poker game? Yeah. So what became very clear to us very, very early on was that the scaling laws were going to drive this. And remember, this is really back in, you know, 2020, 2021, before the ChatGPT moment occurred. And we began to understand that compute decommoditizes at scale, right? Like, you know, anybody can run a GPU, but can you run a cluster that's large enough to train a model that can change the world? That's a different question. And so
Starting point is 00:05:39 we really began to think about, like, how do you go about scaling up your delivery of this computing to clients, larger and larger clients? And that was the next card to turn: to think about it from an, okay, you know, there's a component of this that is going to lean into our ability to access the capital to be able to deliver our solution to the broadest possible audience, to the most sophisticated consumers of this compute. And that was really the next card: thinking about it as a business rather than as an engineering project, to be able to deliver the infrastructure and the software. And really everything between, you know, when you're thinking about what we do, we kind
Starting point is 00:06:23 of live above the Nvidia GPUs, but below the models. Yeah. And everything in there, all the software, the integration of software and operations and observability and all the things that you need to be able to build a cloud that's purpose-built for this one specific use case. So we don't do everything. We really focus on one use case. You want to do web servers? You've got AWS. You know what?
Starting point is 00:06:49 They do a great job. It's like, it's a great solution. It was a brilliant solution to solve a problem. We just looked at it and said, there's a new problem. Let's go about looking at this problem and try and come up with the solution to deliver compute that solves that problem. And when did the language model companies start dialing and calling you for, you know, capacity? Yeah. So our first, well, our first language model was really EleutherAI.
Starting point is 00:07:16 Yes. But our first, like, large commercial one was Inflection. And so, you know, we worked with Mustafa and Inflection. And then we really diversified from there into the hyperscalers, into, you know, OpenAI, across the foundation models, and just kept scaling and scaling with the belief that, you know, once again, the decommoditization of compute, the ability to deliver a solution. And the solution is building supercomputers that can change the world. And that's really what we began to focus on. That was the lead into training, and now the world has gone through, you know,
Starting point is 00:08:04 this moment where we've moved from research into the productization of this. It's beginning to work its way in from the fringe of organizations into the core of what they do. And you can see that every day in the amount of inference compute that is being driven through, you know, our infrastructure layer, which is just massive. It shows people are consuming it, not just building models, but they're deploying them and utilizing them. I always think of inference as the monetization of the investment in artificial intelligence. So we see our compute being used to stand up the massive scale of inference that's hitting our compute every day. And, like, you know, inference is when people ask the model a question, it comes back with an answer,
Starting point is 00:08:57 that's an inference. Or when you ask the model a question and then tell it to go do something, that's inference, right? And that's actually where you have the opportunity to really drive value outside of the model itself, but into the real world. And that's really exciting for us. That's what we like to watch. That's what I like to watch in terms of gauging the health. What chips are those? So really, you know, we are the tip of the spear in bringing the new architecture out of Nvidia into commercial production at scale. And so we were the first ones to bring the H100s at scale, we were the first ones to bring the H200s at scale, first ones with the GB200s, and now you've got the GB300s.
Starting point is 00:09:47 And one of the things that's amazing and really fascinating for us is, you know, people are using the bleeding-edge GPUs to train models as the new architectures come out. And then they take those GPUs and they move them into different experiments. And then over time, they move them into inference. And they continue to use them in inference for a very, very long time.
Starting point is 00:10:11 What is the shelf life of an A100 right now? That's been a big debate, I think, for your company, for Microsoft, and I guess Michael Burry, who you must have known when you were a quant, you know, saying, oh my God, the sky is falling on the whole industry. And then we all know in the
Starting point is 00:10:31 industry that people don't just throw this hardware away, that they find uses for it. The street finds its own use for technology. So what's the reality of the lifespan of these things? So my take on the GPU depreciation debate is that it's nonsense, right? It's a debate that is being brought to the forefront by some traders that have a short position in the stock, and they're trying to talk it down. Look, here's what we know, right? When we buy infrastructure, we're a success-based company, right? We're a small company on a relative basis compared to the enormous companies that we're competing with.
Starting point is 00:11:05 And so our clients come into us and they buy compute for five years, for six years. Our average contract is five years. So any commentary by anyone either inside or outside of the industry that this stuff becomes obsolete in 16 months or whatever nonsense they're spewing, it doesn't in any way match up with the facts on the ground. The facts on the ground are they're buying it for five years. Right. And my approach to this has always been, if people are willing to pay me for it, it still has value. Correct.
Starting point is 00:11:38 Pretty simple way of approaching it. We use a six-year depreciation. We believe that the GPUs will last in excess of six years, but we felt like that was a fair and reasonable approach to a technology cycle that's moving at this velocity. The A100s, the Amperes: this year, the price has appreciated through the year. Why is that? I think it's because one of the things that happens is, as more installed capacity becomes available, you have new companies that come into existence, that have new use cases, that have different-size models, that are trying to build new commercial ventures, that maybe have been
Starting point is 00:12:16 blocked out of the H100s and never had an opportunity to run on that. I mean, to make a very simple example for the audience, like when you trade in your iPhone after three or four years, you're like, who's going to use an iPhone 12? And it's like, have you been to South America or Africa, where you go to the store and you buy an iPhone 12 or you buy the Pixel 7 and it costs $50? That's still got great life left in it. Absolutely. Yeah, you know.
Starting point is 00:12:43 And so look, you know, we find these amazing use cases, new companies that have come into existence or existing companies that have integrated new models into their workflow that are able to use the Amperes. And so they keep buying any GPUs that we have available. And once again, you know, the concept that a GPU is no longer relevant or commercially viable after 16, 18 months or two years. Yeah, that's farcical. It just doesn't make any sense. It's obviously farcical. I think sometimes people get caught up in Moore's law or in just how fast our industry is growing. Yeah. And that there's so much at stake that big companies are demanding the most recent products. That doesn't mean that the lifespan has gotten shorter.
Starting point is 00:13:29 It means the opportunity and the surface area of the opportunity has gotten much larger. Yeah. One of the things is, like, you know, the industry has gotten so much attention for the unprecedented scale of capital that is coming to bear on this. And because of that, there tends to be an incredible focus on the companies that are building on these most advanced chipsets. And the truth of the matter is, you know, even within those companies, they have a long tail of useful life to provide inference horsepower, to work on other experiments, to do less bleeding-edge activity that still needs to be done. And, yeah, I mean, rendering comes to mind as well. Or, yeah, we're making images on Nano Banana. Like, there will be a use for it.
Starting point is 00:14:22 There is a moment in time where maybe the compute-to-power ratio doesn't make sense. My expectation is obsolescence will be defined by the moment in time where the power in the data center, for me, will be able to be repurposed for a higher margin than the existing infrastructure provides. And, you know, like I said, I fully expect this infrastructure to last in excess of six years. But the standard in the space, with one exception, which is Amazon, has really been, yeah, six years. That seems like the right schedule. I'm not making it up. That's what everybody's using. Yeah.
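The six-year straight-line schedule described above is easy to sketch in a few lines. The per-GPU purchase price and zero salvage value below are hypothetical figures for illustration, not numbers from the conversation:

```python
def book_value(cost: float, salvage: float, life_years: int, year: int) -> float:
    """Straight-line depreciation: remaining book value after `year` full years."""
    annual_charge = (cost - salvage) / life_years
    return max(cost - annual_charge * year, salvage)

# Hypothetical $30,000 GPU depreciated to zero over six years.
for y in range(7):
    print(y, book_value(30_000, 0, 6, y))  # year 3 -> 15000.0, year 6 -> 0.0
```

Under this schedule, half the purchase price is still on the books at year three, which is why a claimed 16-month useful life is hard to square with five-year customer contracts.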
Starting point is 00:15:05 And the energy cost is the opportunity, because, hey, it's just, we need that space. There's a better reward here. And we might resell that hardware to somebody else who wants it, a hobbyist or something. Yeah. Or it could be sent someplace else where they have more capacity and they can repurpose it there. But I kind of feel like we'll deal with that part of the business when we get there. What I know right now is it is extraordinarily profitable. It's very accretive to my company to continue to keep the infrastructure that's been up and running,
Starting point is 00:15:41 that's been on these long-term contracts, as it rolls off. As it's been in use for five years, you know, as it becomes available, I am still able to sell it at a higher price than it was a year ago. There's competition now. When you were buying these from Jensen back in the day, yeah, you could buy them and have them shipped, I would assume, within 30 days or less. Nowadays, what's the wait like, even for you, a loyal old customer? And is there a bit of a battle?
Starting point is 00:16:08 Is there politics to who gets the servers? Like, you see some very big names talking about how they've got to get an allocation. Is it still a little bit crazy? What's it like to be in that category, having to buy something everybody wants? Look, you know, I think of it as an affirmation of the business that we're in, right? Like, the fact that we are attracting competitors, that means that the business is healthy and there's a lot of people trying to deliver this service, because of the need for this infrastructure, the need to integrate the infrastructure, you know, into the software layers to deliver it to artificial intelligence, either at the model level or the inference level or the application level or whatever, you know, level of the five-layer cake that Jensen's, you know, focused on.
Starting point is 00:16:55 The fact that there are more people coming into this, it doesn't discourage me. Yeah. As far as getting access to the GPUs, we show up like everybody else with a, you know, we'd like to buy, here's a PO and we're ready to pay. What's the wait time like? And is it just really competitive or not? Because I talked to Jensen about this. I said, how do you manage all these big egos and names and companies trying to buy stuff? And he said, well, they order it and we give it to them in the order in which they order it. Is it really like that?
Starting point is 00:17:27 It really is. Right. Like, you know, he doesn't want to be in the position of playing favorites or allies. That just seems like a bad place to be with your clients. Or auctioning them off. Yeah. You'd imagine that would be crazy. Yeah. I'm not sure that would be good for the long-term business. No. Yeah. So our approach is... You might get some sovereigns coming in and saying, I'll pay double. Yeah, they do that with Ferraris too sometimes.
Starting point is 00:17:54 These are the Ferraris of computing, in a way. Yeah. But I mean, our approach is to work with clients across the entire space to find opportunities, really interesting companies that can fit into our contracting requirements, where we're going to be able to go out and structure the debt that we require in order to build infrastructure at this scale. How does all that debt work? That is something that you guys specialize in. Corporate debt. I'm in the venture business.
Starting point is 00:18:28 People are like, why should I be in venture when corporate debt pays so well? Corporate paper is so huge. I'm curious how this fits in and what interest rate people are paying on a billion dollars in infrastructure. Where do they pay on that? Yeah, so CoreWeave has really been the innovator around a lot of the financing engines that have come to bear on this. We did the first GPU-based loans.
Starting point is 00:18:58 And like, I think it's important, or I'm going to try to explain this in a way people can understand. So what we do is we go out and we find a client. Let's use Microsoft; you brought them up before, right? And Microsoft comes to us and says, we'd like to buy some compute from you. And we say, okay, great, we're going to sign a contract.
Starting point is 00:19:15 Once I have a contract in hand, then what I do is I create something, it's not a particularly creative name, it's called the box, right? And what I do with the box is I take my contract with Microsoft and I put it in the box. I go to Jensen and I buy the GPUs. I put it in the box.
Starting point is 00:19:31 I take my data center contract. I put it in the box. And now the box governs cash flow. And it has a waterfall of cash flow that comes into it and goes out of it. And so the way it works is, then I build the compute and then I deliver the compute to Microsoft and they pay the box. They don't pay me.
Starting point is 00:19:49 It goes into the box. And the first thing it does is it pays the data center. It pays the power bill. It pays the interest and the principal. And then whatever's left flows back to us. And so it is an incredibly well-structured, time-tested, pressure-tested vehicle to be able to borrow money against client paper and all of the other collateral around the deal, which is why CoreWeave, which is a company that many people haven't ever heard of,
Starting point is 00:20:20 was able to go out and raise $35 billion in 18 months to build infrastructure at scale. But what's important to understand is the economics in this box are such that within two and a half years of a five-year deal, we have paid for everything. The principal's been paid off. The principal has been paid off. The interest has been paid off. The return into the box is such that we are able to generate returns to our company at the box level, which gives the most sophisticated lenders in the world, whether it's banks or private equity funds or, you know, whoever, confidence that they're going to be able to achieve the one rule of lending.
Starting point is 00:21:08 which is give me my money back. Yes. Everyone's better served when that happens. So they look at this box and they're like, wow, we're really confident we're going to get our money back. And maybe they want 10 boxes. That's correct. And if any one box goes upside down, you can deal with it and it's not as acute. That's correct. And they don't cross-pollinate.
Starting point is 00:21:27 They don't cause a contagion across the boxes. They're all independent and discrete. That's one. And number two is, as you do this and as you show the lenders how this financing tool and how this financing mechanism works, what they do is they continue to lend you money at progressively lower rates. And so when you think about our cost of capital over the last two years, we have dropped our cost of capital by 600 basis points. Wow. It is enormous, right? And so you're seeing a company that is driving its cost of capital down
Starting point is 00:22:04 towards where the hyperscalers borrow, which will enable us to be competitive with them over time. And we have been extremely militant and diligent about feeding, watering, and caring for those boxes so that we continue to have access to the capital markets in a way that allows us to build and drive our business. It means you have to say no, you have to say no to maybe some people who want to be in the box.
Starting point is 00:22:30 Yeah, so we look at some deals and we're just like, you know, they want to buy GPUs for a year. And I look at it and say, that's not a deal that I can do, because it's too short for me to amortize the expenses. And so I won't do that, right? And they can go to another provider who maybe wants to take that risk on, who has extra capacity. Absolutely.
Starting point is 00:22:50 But our business is really built around the risk management of being able to get to scale. Because in my mind, during this period of disequilibrium, during this period where there are not enough GPUs in the world to provide the compute for all of the different use cases in artificial intelligence, the part that's important for me and for my company is to get enormously large so we can drive down our cost of capital, so that we have information flow coming in from all different parts of the market: large language models, high-speed trading, search, all of these things.
Starting point is 00:23:27 And they're feeding information back into us that is letting us know what the next product we need to build is, or where, you know, they need help scaling, or what type of compute they need. And all of that information flow is incredibly valuable to us. What can you tell us about demand? There's been reports of, hey, maybe the Oracle Stargate thing with OpenAI has been downsized, or maybe not. And then, you know, other folks, Microsoft is going big and Google's going big. Meta is going big, and those people obviously have massive cash flow.
Starting point is 00:24:03 Apple seems to be MIA. They don't seem to want to play. You've named a lot of really big companies with really big balance sheets that have the capacity to drive a lot of demand. Look, I have been truly steadfast in this for years now. For four years, the depth of the demand for the service we provide has been relentless and overwhelms the global capacity of the world to deliver enough compute to enable all of the demand for artificial intelligence to be sated. And we have been relentless about that. It sounds like Knicks tickets during
Starting point is 00:24:43 the Patrick Ewing era. Yeah. They got up to 50,000 people on the wait list. So if magically the wait list went away, if the constraint went away and we just had a large amount of GPUs available, a lot of energy available, a lot of data center available, how much capacity would just all of a sudden come out of the system? Or would be deployed, I should say. So remember how we build our business through this box. And it's a five-year box. So if we had an air pocket,
Starting point is 00:25:17 if demand were suddenly to disappear because of a technology breakthrough, because of a war, anything, right? Like, the why, from a risk management perspective, does not matter. You have to prepare your company for what happens if it happens. Yeah. And so by entering into these long-term contracts, into entering into contracts with counterparties that have large balance sheets, we are protecting ourselves and our lenders, so that we are confident and they are confident, because you can see how confident they are by the rate that they're charging us continuing to decline, that they're ultimately going to get their
Starting point is 00:25:57 money back, and that is the one rule of lending. And so, you know, if... But just in terms of the capacity, if you're unconstrained, and Jensen at Nvidia says, hey, order as many as you want, what would happen? So it's also important to understand the constraints aren't just GPUs. Right. Electricity. It's powered shells.
Starting point is 00:26:18 It's memory. It's storage. It's networking. It's optics. All of the things. And there are various throttles that will limit this. Memory is a throttle right now, right? Oh, yeah, it is.
Starting point is 00:26:29 Oh, yeah, it is. How did memory become the throttle? Memory has historically been a cyclical business, right? We have seen these waves of demand driving up the cost for memory, and then it collapses. And then it drives it up. It's a very boom-and-bust business. It's cyclical in its nature, because the fabs are so capital intensive that people invest in fabs, build a ton of capacity, and then overbuild if there's any type of turndown.
Starting point is 00:27:03 And we've seen that cycle again and again. What's happening right now is the confluence of two things, right? One is, with all the demand for artificial intelligence and the corresponding demand for compute and the ancillary services around the GPU, the demand is through the roof. That's number one. Number two is that there was probably an investment cycle that needed to happen back in 2023 that would have brought on the necessary fab capacity to be able to serve this demand.
Starting point is 00:27:38 It's impossible to predict what just happened. And now people are chasing energy. The data centers are going where the energy is. It's not based on real estate. It's based on where there is some wind. And not every time, but many times when you have a capital-intensive business like, you know, building fabs, you will get this boom-and-bust cycle, just like in energy.
Starting point is 00:28:01 They overbuild. Yeah. And then, you know. Fiber. Yeah. I mean, there's a lot of examples of that. In some ways when you look at that, it's a beautiful aspect of capitalism that we're able to have a boom-bust cycle,
Starting point is 00:28:16 that we're able to weather it, right? If you think about capitalism from first principles, something like that happens and we have too much fiber, it creates an opportunity for Google to buy it all up, or the next person. Listen, you know, having a boom-bust cycle does a lot of things. It clears out the underbrush. Yeah. The strongest companies will be able to survive and take advantage of that.
Starting point is 00:28:37 And it sows the seeds of future business. The other thing that it does is you put that infrastructure into the ground. You put the fiber into the ground, which became the backbone of how, you know, we watch movies every day and how we communicate and how we hop on a Zoom, and, you know, COVID and all of these things were based on that infrastructure that was available to be consumed. Yeah, people don't recognize this fact. The premise of YouTube, from the founders who I knew, Chad Hurley and his partners: they basically had the realization that, on this curve, storage is coming down so quickly, we could offer free unlimited uploads. And bandwidth is coming down,
Starting point is 00:29:22 so I guess we don't have to charge people for sharing a video online. Before that, if your video went viral, people are going to have their minds blown, but your server would turn off and it would say, this person needs to pay their bill because they were getting charged for carriage
Starting point is 00:29:38 by the megabit going out. Yes. I mean, look, the business models change and evolve. And, you know, like you said, Moore's law, and certainly Jensen will talk about the fact that what is going on within accelerated compute dwarfs, yeah, Moore's law, right? And all of that is going to lead to more opportunity to build more companies that are going to do things like YouTube did, which has really changed the world.
Starting point is 00:30:09 Yeah. Yeah. I mean, the concept that, I don't know if it was like a million hours being uploaded every hour, or minute. But at some point, Susan Wojcicki, rest in peace, told me just how much was being uploaded every minute, and it made no logical sense until you realized, well, there's two or three billion people on the service, and 0.1 percent, ten bips, upload. It's like, okay, one in a thousand people upload. It's a big denominator. I was sitting on a panel with Sarah Friar, CFO of OpenAI.
Starting point is 00:30:46 And every once in a while, she really puts out interesting information. And so she was talking about the cost of a million tokens when GPT-3 came out. And it was $32 and change. And now a million tokens cost $9. Yeah. And so you just see the incredible power of how capitalism is fueling engineering and fueling competition. It's become recursive now too.
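Taking the two prices quoted here at face value, and treating "$32 and change" as exactly $32 (an assumption), the implied per-token decline is simple arithmetic:

```python
old_price = 32.0  # dollars per million tokens, the quoted earlier figure
new_price = 9.0   # dollars per million tokens, the quoted current figure

decline = 1 - new_price / old_price
print(f"Cost per million tokens is down {decline:.1%}")  # about a 72% drop
```

That is roughly a 72 percent price decline over the period discussed; the annualized rate depends on the exact dates, which the conversation doesn't pin down.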
Starting point is 00:31:24 I mean, these models, if you say to the model, hey, make yourself more efficient, spend less money and lower the cost of tokens, it'd be like, okay, Captain. Yeah. I don't know if you saw Karpathy's recursive thing last weekend, but it's like, now civilians who've never worked on language models or computer science are like, I'm going to try to do something recursive this weekend. You know, it's one of the things that I talk to, you know, the other founders about, and it's like, when you think about some of the things that AI does, right, it's lowering the barrier to operations.
Starting point is 00:32:02 So if you have a good idea or a great idea, you can open up your model and you can tell your model, you can vibe code it. You can do all kinds of different things and create things that never existed before. That's amazing, right? Like, that's bringing down this incredible barrier that kept human creativity contained. And now all of a sudden, this whole new vector of, you know, medical research or different approaches to, you know, baseball cards or whatever you want. If you've got a great idea, if you've got a new creative idea, that's the valuable kernel right now that allows you to build new things and to create new things. And I just think that's incredibly exciting. Yeah. Like you're bringing to the minds of eight billion people a tool that allows them to
Starting point is 00:32:49 overcome what was insurmountable for forever. For humanity. Yeah. It's a bright new future, Michael. I appreciate you sharing the information with us and the vision. I am really delighted to have Aravind Srinivas on the program. Thank you for having me here, Jason. It's so great. I want to go through three stages in which I fell in love with your product. The first phase was I could go and pick my language model, if I want to use OpenAI, if I want to use Claude, whatever it was. That was like a real unlock for me. And on the sidebar, I noticed you had done essentially like what Yahoo did in the early days,
Starting point is 00:33:33 finance, sports. And when I pulled my Knicks game up, it gave me a live version of that. When I pulled my stocks up, it summarized the news in real time. And I was like, wow, this execution is great. And I kind of made you my front door to different models, and it made it easier for me. Then you came out with the Comet browser. And I was like, holy cow, I can give this a series of instructions.
Starting point is 00:33:56 Go to my LinkedIn, find everybody from this company, put them into a Google Sheet, and boom, you were the first out of the gate with that. And then just in the last couple of weeks, I had been claw-pilled and using OpenClaw, but you came out with Computer. And I started using Computer, and boy, it's good. It's a really strong start, allowing me to do repetitive tasks, very similar in some ways to Cowork from Claude, or basically an engineer or developer using it.
Starting point is 00:34:27 So is this the evolution of the company, and should I think about it that way? But how do you look at Perplexity now? You have a very loyal fan base. You're making a lot of money. I don't know if you disclose it, but I think it's hundreds of millions to billions. You can tell us. But what is Perplexity in the face of, wow, Claude's having a great run, OpenAI still doing strong, Grok doing very well, Gemini coming on strong.
Starting point is 00:34:51 There's like six or seven of you, and you just happen to be one of my top two right now. Thank you. First of all, first of all, thank you. Thank you so much. Perplexity has always been built for people who are always looking for the extra edge, the curious people. So it's very natural that you are one of our power users. One common theme for us for the last three and a half years is accuracy.
Starting point is 00:35:18 Perplexity wants to be the company that's building the most accurate AI. So when you want to give somebody answers, accuracy is very essential for building trust. Because only then is the user going to ask the next set of questions. It turns out it was a great idea to give AI access to the internet to be accurate. So that's the Perplexity Ask product. It turns out it's a great idea for AI to have full access to a browser, so that it can be accurate when you task it to go do something that you would do yourself on a browser.
Starting point is 00:35:47 Agentic browsing, Comet. Now, the last phase is, it turns out it's a great idea for AI to be given full access to a computer so that it can do whatever you do on a computer on its own, essentially becoming the computer itself, an orchestra of everything AI can do today, every single capability each individual AI model has, be it GPT or Claude or Gemini or anything else,
Starting point is 00:36:15 an orchestra of all those capabilities, that's what Perplexity Computer is. And all these sub-agents that are running inside Computer are the musicians. The models are essentially the instruments. And there are like hundreds of models out there, each having their own specializations. Some are good at coding.
Starting point is 00:36:34 Some are good at writing. Some are good at multimodal, visual synthesis, image generation, video generation, audio. But what matters is the end output, the music you play. That's the work AI gets done for you. And that's what perplexity computer is. The AI itself is the computer now. Still lives inside of a browser.
Starting point is 00:36:53 Have you considered giving it desktop root access? Yes. That feels like the next place this is going. But that comes with a lot of security issues, a lot of trust issues. As you mentioned, trust is paramount. Getting the right answer is what builds it, but also not getting hacked and not having it delete your files. So how do you think about root access to my Windows machine? Obviously iOS, they won't let you, but with an Android phone, it would let you.
Starting point is 00:37:18 Yes. So do you have that in the works? Yes. So we announced something called Personal Computer, Perplexity Personal Computer. That's essentially going to take all the trust and reliability and the server-side execution of Perplexity Computer, but synchronize it with your local computer so that you can use it from your phone. And we're going to do this with the Mac Mini, where you synchronize your computer with the Mac Mini. So that becomes your local server. All the agent orchestration that has to do with your local private data
Starting point is 00:37:48 will run on that local orchestration loop, that runtime with the Mac Mini. Not on your servers, not on Anthropic's. Exactly. Yeah. It could still ping frontier models if it needs to, with your permission. But it will be orchestrating everything on your local hardware. Yeah. And if it needs to run on the server-side hardware,
Starting point is 00:38:07 if you don't want very complicated long-running tasks to be running on your local hardware, you can delegate them to run on your server-side computer, which is, again, only accessible to you and you alone. So that way, we're going to bring this trustworthy hybrid between local and server-side. And you'll make it easy to do. It'll just be abstracted. You install one executable, boom, it's done. It's like OpenClaw for dummies.
Starting point is 00:38:35 Nobody needs to learn how to use it. Nobody needs to manage API keys. Nobody needs to manage separate billing across like 100 different services, figure out what you can give access to and not give access to. We take care of that. So it's the Steve Jobs way of doing it, you know, end-to-end integration. And how do you think about local models? I have started running Kimi 2.5 on a Mac Studio.
Starting point is 00:38:58 It's not as good as Claude or Gemini or Grok. But you can probably get about 80% there for free, essentially. Yeah. And so that's quite compelling, considering some of my other bills, Claude and stuff, were getting expensive. So do you have one of those? You started testing on your local Mac Studio? I assume you have a Mac Studio and you're doing this yourself. Yeah.
Starting point is 00:39:20 Or now, I don't know if you saw, Dell and Nvidia announced a giant workstation. Yeah. Was it $3,800? Something like that. Something like that, with 750 gigs of RAM. So what do you think about the desktop going back to workstation slash server status? I think it's very promising.
Starting point is 00:39:40 My prediction is that it'll initially start off as a sub-agent. So whatever you need to keep local, like your tax returns, your personal photos, your emails, your calendar, all that stuff, those local apps, your personal notes, very personal notes, you can make sure that the models that access those tokens will be running on your local hardware if you want to, if you're that privacy conscious. And more complicated stuff that accesses your data that's already on the server side,
Starting point is 00:40:15 for example, your Google Calendar, your Gmail. This is personal data still, but the AI runtime can access that through your connector, your Google Calendar connector, your Google Workspace connector, and that could run on the server side because, anyway, the data is on the server; it's not even lying on your device. So that sort of hybrid orchestration is where we are headed
Starting point is 00:40:36 to. I don't think it's a dichotomy between fully local versus fully server. It's all about choice. And anyway, when you're on your phone, you don't actually care which server that workload is running on, because it's not going to be able to run on your phone anyway. The chips need to exist on a Mac Studio or a Mac Mini or on the server. Or this new Dell that's coming out. And I really think the idea of spending $10,000 on a powerful desktop will appeal to people if it lowers their $500 a month. Yes. Claude bill.
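The local-versus-server placement described above (private on-device data stays on local hardware, while connector data that already lives on a server can run server-side) can be sketched as a simple placement rule. The task fields and runtime names here are hypothetical illustrations, not Perplexity's actual API:

```python
# Hypothetical sketch of a local-vs-server placement rule for agent tasks.
# Tasks touching on-device private data (tax returns, photos) run locally;
# tasks whose data already lives on a server (Gmail, Google Calendar) may
# run server-side. Names and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    touches_local_private_data: bool  # e.g. tax returns, personal photos
    data_already_on_server: bool      # e.g. Gmail, Google Calendar

def choose_runtime(task: Task, privacy_conscious: bool = True) -> str:
    """Decide where a task should execute."""
    if task.touches_local_private_data and privacy_conscious:
        return "local"   # the Mac Mini / local orchestration loop
    if task.data_already_on_server:
        return "server"  # the data never lived on-device anyway
    return "local"       # default: prefer the user's own hardware

tasks = [
    Task("summarize tax returns", True, False),
    Task("triage Gmail inbox", False, True),
]
placements = {t.name: choose_runtime(t) for t in tasks}
print(placements)
```

The ordering is the point of the sketch: the sensitive-data check runs first, so privacy wins any tie.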
Starting point is 00:41:09 Yes. This is an incredible savings. Plus, you get the benefit of privacy and not educating the language models on your personal data. Yes. And it's going to be like you're buying a refrigerator. Your internet modem. Like the costs for these will eventually go down.
Starting point is 00:41:25 Yeah. But it's not going to feel like you're wasting your money. Every home has a lot of other sensors that run your home that will also be part of this orchestration loop. Yeah. So that's where it gets exciting, because now you can just dictate something to your phone and that can control your entire home. So that's the dream that everybody has, and all that orchestration loop can run on your
Starting point is 00:41:52 local hardware, no problem. And I'm curious what you think of the operating system. What's eventually going to be the operating system of this workstation? AI is the operating system. Like earlier, in the traditional operating system, you execute programmatically. Now you start with objectives, not specific instructions. You come up with a high-level objective. Go build this website for me that takes all the transcripts of All-In podcasts
Starting point is 00:42:20 and tracks the stock price just before the podcast and after, and charts it for the Mag 7. And charts it over time. So that's the objective. But individually, it's running a file system, a code sandbox, access to the internet, it's having its own HTML tools. And like, yeah. So I think that's basically where, you know, models, systems and files and connectors are all coming together.
Starting point is 00:42:44 You would think of that as an OS, except you're operating at an abstraction above that, where you're thinking in terms of objectives. Yeah. And does it need to eventually become its own operating system in your mind? It could be. Like people could think about it as just like, yeah, I have my Perplexity Computer running all the time. Well, it essentially runs on Linux machines right now,
Starting point is 00:43:07 every server-side computer is a Linux machine. So I think Marc Andreessen tweeted this right after our release: turns out Linux computers was the right idea. Desktop Linux computers are finally going to work. Yeah, I mean, they're stable, they're customizable, and you're not at the mercy of Apple's desire to contain the experience, or Microsoft's surface area for hackers. Exactly. You build something rock solid and it does feel like Linux might actually become
Starting point is 00:43:38 the eventual winner. It may not need to have a front end. That's the thing. You could access the Linux machine on your phone. Right. It could be running iOS or Android. It doesn't matter. Right. The actual valuable runtime is running on Linux on the server. You've done great as a consumer company. A lot of love there. Now I'm starting to see corporations start engaging with Computer. In fact, you'll be happy to know this. Last week, I took two people in my back office, and I said, stop working on OpenClaw.
Starting point is 00:44:12 Your job is to do the back office automation at our venture firm only using Perplexity. And they were on Perplexity Computer. And they were like, oh, okay, it doesn't talk well in Slack. It doesn't have an agent in Slack. I was like, it will. I'm going to see Aravind. I'll talk to him about that.
Starting point is 00:44:31 So we need a really strong Slack connector. It's already out. It is. Okay, great. Computer exists as a Slack bot right now. Okay. That you can add to your Slack workspace on the enterprise plan. And our entire company works like that.
Starting point is 00:44:43 People are talking more to Computer on Slack than to other people. In our first volley, we were sending reports in, but it wasn't interactive. That's perfect. So now you've got your company going in two different directions. This incredible consumer run you have. How many people are using the product every month? Tens of millions. So tens of millions of people. That's very much similar to the trajectory of the Google and Yahoo consumer business. Now you've got corporate. How are you doing on the corporate side?
Starting point is 00:45:12 Thousands of companies. It's the fastest growing business for us. It's growing faster than the consumer in revenue. And things like Computer unlock entirely new possibilities. For example, we've saved more than $100 million for our Enterprise Max customers, who are on the highest tier of enterprise. Explain what that is. What does it cost, 200 a month per person? So there are two tiers. One is the Enterprise Pro, which is $40 a month. And there's the Enterprise Max, which is $400 a month. And on Computer, after you run out of your credits, you would pay for the tokens. You pay for the usage. Are you making money on the $400 a month, $5,000 a year one? Or at this point in time, are people going so crazy? One thing that Perplexity has is
Starting point is 00:45:57 all the revenue we make, unlike certain other wrapper companies, all the revenue Perplexity makes has positive gross margins. Got it. Because we're not just selling tokens. Right. Most of our revenue is recurring because people are paying a subscription fee. And because we route through multiple different models,
Starting point is 00:46:14 we're very efficient in terms of how we spend on the tokens. Because we have all this advantage with RAG and orchestration and search, we don't actually need to blow up the context window of the models. Yeah. As a result of that, we have positive gross margins on all the revenue. Every single penny we make, we make profits on that. Overall, the company is yet to be profitable,
Starting point is 00:46:35 but we're working towards that. You've had the opportunity to exit. A lot of rumors, Apple, other people were like, hey, this is a great team. How many people on the team now? About 400. Yeah, you've got a very coveted team. You obviously understand consumer.
Starting point is 00:46:48 You obviously understand business. It's a product-driven organization. Reports are you declined. But the world's getting hyper-competitive here. How do you keep up as a 400-person organization when you got Sam Altman over here raising $100 billion, you know, and then you have Elon putting data centers in space and merging with SpaceX and Twitter.
Starting point is 00:47:10 You have Google with unlimited resources, Amazon getting in the game, and obviously Gemini, a very strong product and Google, really good at consumer. I think we'd all agree, Facebook and meta, haven't figured it out yet, except maybe for serving us better ads, but they haven't figured out the consumer case.
Starting point is 00:47:30 but they'll copy it. They always do. How do you look at the playing field? Because the degree of difficulty, this isn't playing checkers or this is like playing against the 10 best chess players in the world. That's what you have to do every day. So how do you think about it? Long-term and independent company, do you think you'll need to join forces at some point? And why didn't you take the deal?
Starting point is 00:47:53 Those deals you got offered were incredible. So one advantage we have that all these companies you mentioned don't have is the multi-model orchestration. We're like Switzerland. We don't have to have one horse in the race. If GPT wins, Gemini wins, Claude wins, Llama wins, it doesn't matter to us. Or even open-source models can win, no problem. And you have them on the service. Yes. We have DeepSeek and Kimi. We have Kimi, we have Nemotron, and we have a lot of usage of Qwen, Alibaba's Qwen, silently under the hood. So for us, like, that advantage of being able to take the best in each model and give the user the orchestra of everything they can do, I don't think any of the companies you mentioned can do that. Right. Nor would they.
Starting point is 00:48:40 Nor would they. It makes no sense for them. It would be an admission that all the data centers and capex they have built out still couldn't produce them the best model. And Dario, CEO of Anthropic, said recently in an interview that models are specializing. Towards the beginning of last year, people thought models were going to commoditize. But towards the end of last year, models started specializing. Even within coding, Claude Code and Codex have very different capabilities. Our iOS engineers love using Codex. Our backend engineers love using Claude Code. So even within a specialization like coding, models have their own unique specialties.
Starting point is 00:49:23 And there are many other use cases outside coding where different models are good at different things, which means the orchestra conductor that has no one horse in the race can win by providing a very unique value and service to the customer that each of these amazing names you mentioned cannot. And so you're buying tokens wholesale from them and then you charge customers for it? We're going to take care of all that orchestration. Yeah.
Starting point is 00:49:53 So you don't have to manage tokens across different models. Because I authenticate a couple of my different accounts, my pro accounts, into Perplexity. But does it, I don't have enough knowledge to know if you are abstracting that and people can just search across them and it's part of their Perplexity subscription? No, we're not bundling subscriptions into other AIs. Yeah. We just ping the models directly. Got it. What you get with us is the Perplexity orchestration.
Starting point is 00:50:18 Got it. The harness. Right. So when models are kind of specializing, there's bigger value in the one who knows how to build a great harness that can take the best in each model. Does it auto-route today, or do you still have the dropdown? Somebody's got to pick. It definitely auto-routes the best model for each prompt.
Starting point is 00:50:37 But we also give users the flexibility to pick whatever model they want. What do you think of, I've seen a bunch of startups hack this together, but doing the same query across multiple models? We built a thing called Model Council. Model Council, yeah. Yeah. So that's one of the modes in Perplexity. I saw Jensen saying in one of the interviews that he puts the same prompt into five different AIs and sees what each of them says.
Starting point is 00:51:00 Yes. Everybody does that. But then you still have to apply a biological compute to read every answer and then figure out where they're different. It's like talking to five lawyers about your trust or your... Five different doctors. Five different doctors trying to figure it out. Exactly.
Starting point is 00:51:15 It's dumb. So Model Council is the feature we built that will not just give you the answers of each model, but will tell you exactly where they agree, where they disagree, and where the nuances are. And that's in the interface? Yeah. I didn't know it was there. It's there.
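A minimal sketch of the Model Council idea: fan one prompt out to several models, then split the responses into a majority view and the dissenters. The models below are stubs, and real agreement detection would need semantic comparison rather than exact string equality:

```python
# Sketch of a "model council": send the same prompt to several models and
# report where they agree and where they differ. Models are stubbed with
# lambdas; a real version would call each provider's API.

from collections import Counter

def ask_all(prompt: str, models: dict) -> dict:
    """Ask every model the same prompt; models maps name -> answer_fn."""
    return {name: fn(prompt) for name, fn in models.items()}

def consensus(answers: dict):
    """Split answers into the majority view and the dissenting models."""
    counts = Counter(answers.values())
    majority, _ = counts.most_common(1)[0]
    dissenters = {m: a for m, a in answers.items() if a != majority}
    return majority, dissenters

stub_models = {
    "model-a": lambda p: "42",
    "model-b": lambda p: "42",
    "model-c": lambda p: "41",
}
answers = ask_all("what is 6 * 7?", stub_models)
majority, dissenters = consensus(answers)
print(majority, dissenters)
```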
Starting point is 00:51:28 I mean, you release product at a pretty great cadence, huh? Yes. Where did you learn that? And what's your philosophy of shipping product? Our philosophy is, like, speed is our moat. Like, you know, again, one of the things that big companies cannot do is move at the speed we do, serve customers at the speed and quality. It's very hard to maintain quality, speed, and trust at the same time.
Starting point is 00:51:48 Yeah. Like Apple takes a long time to ship anything. Right. Because they're very worried about people not trusting them. Yeah. And so some companies are bureaucratic and they just take forever to ship something. They don't maintain what they ship. They may make a big deal about an event, but nobody even knows how to go and use that feature.
Starting point is 00:52:05 Yeah, they get abandoned. Exactly. So Perplexity has those advantages of being very small. And towards the end of last year, we found that, like, AI coding tools have made it much faster for us to ship things. Which is honestly one of the reasons why we built Computer, because now even non-engineers are shipping code here by just pinging a Slack bot and asking it to fix bugs. So the iteration has just been, like, exponential. The moment I had where I became claw-pilled was when I was working with it and I was like, hey, I want to build my network. I know these 20 people in Japan.
Starting point is 00:52:39 I had dinner with them during my recent trip. I want to know who they know. So check out LinkedIn and other things and who they're associated with, and make me, like, a mind map of it. And then the next trip, I want to meet with the next circle of, you know, those connections. So I started asking it, and it's like, okay, I got the results. I was like, great. And it said, where do you want me to put them?
Starting point is 00:53:00 And I was like, well, where can you put them? And it said, well, I can put it in a Google sheet. I could put it in a notion table. I can put it here. I could give you a PDF. I could give you a CSV file. Or I could write you a CRM. And I was like, yeah, sure, make me a CRM system.
Starting point is 00:53:13 And it made a CRM system. And I think that becomes, and I think maybe one out of a thousand people working with AI have had that experience. Maybe it's one in 10,000. Yeah. Where your agent says, I'll make you bespoke software. Yeah. Have you had that yet?
Starting point is 00:53:29 And do you see that as a part of computer that when a person needs a spreadsheet, you don't launch Excel or Google Sheets, you just pop up a spreadsheet? Yeah. Well, we have a board meeting tomorrow. Okay, I'll come. I'll pitch it to the board. Sure. Our computer made the memo.
Starting point is 00:53:50 Oh, wow. Yeah. And we had a partner meeting to go pitch a partnership idea. And earlier, we would have had a design team do the whole deck. Yeah. Computer just one-shotted it.
Starting point is 00:54:01 I had a press briefing with a bunch of journalists. My comms person would usually... Sorry about that. Brutal. And then my comms person would usually give me a memo of what to say. Computer one-shotted it. So...
Starting point is 00:54:14 It's crazy. The context is so good because the memory's getting better, yeah? Yeah. So it's like, I know that journalist from the last time. Yeah. I know the board meeting, have all the previous decks. Yes. When did that happen?
Starting point is 00:54:30 I think it happened with Opus 4.5, the Anthropic model. 4.5, yeah. That was an inflection point when models started being amazingly good at orchestration and reasoning and tool calls. And Claude Code brought in this new idea in AI that everything can happen inside a sandbox, a console, a terminal, with access to tools, where tools are just command-line tools. Yeah.
Starting point is 00:54:56 They don't even need to have a graphical user interface. So when you did that, and when you organized around files and sub-agents and skills and CLIs, the models started becoming very good at handling the context. So the context window no longer became a problem. It just put whatever was necessary into the context whenever it wanted to and dumped it when it wanted to. Yeah. And that made it suddenly so good at doing very long orchestration tasks.
Starting point is 00:55:26 Yeah, it's pretty crazy. I have every episode of This Week in Startups, all the transcripts, and then all of All-In... That was one of the tasks I did, by the way. I can send it to you. I asked it, I want you to download every All-In podcast. Yeah. Since the beginning.
Starting point is 00:55:40 And I want you to track every mention of the public companies they mentioned during the episode. Yes. I want you to make a histogram of the counts. And I also want you to chart it across time. And then I want you to analyze the impact on the stock price. And the sentiment of what we said. Exactly.
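The counting part of that request reduces to a tally pass over the transcripts. A rough sketch, using a made-up company list and stand-in transcripts rather than the real episode data:

```python
# Sketch of the transcript-analysis task described above: count mentions
# of public companies per episode and accumulate a histogram. The company
# list and sample transcripts are made-up stand-ins for the real data.

from collections import Counter
import re

COMPANIES = ["Google", "Nvidia", "Apple"]

def count_mentions(transcript: str) -> Counter:
    counts = Counter()
    for company in COMPANIES:
        # word-boundary match so "Applesauce" doesn't count as "Apple"
        counts[company] = len(re.findall(rf"\b{company}\b", transcript))
    return counts

episodes = {
    "E1": "Google is behind, but Google will catch up. Nvidia ships.",
    "E2": "Apple takes a long time to ship anything.",
}
histogram = Counter()
for transcript in episodes.values():
    histogram += count_mentions(transcript)  # Counter += drops zero counts
print(dict(histogram))
```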
Starting point is 00:55:57 And it did. Like, it clearly said... Are we moving stocks? Around Google's stock going up. Yes. Prior to that, you guys were talking a lot about Google. Yes. And clearly...
Starting point is 00:56:06 And I said I made a bet publicly on the thing. I said, I am buying a bunch of Google because I believe, even though they're behind, it's because they're too precious. You were kind of mentioning a company that might be too precious at times and doesn't release. Yeah. I was like, that's that company. They need to release more. Yeah.
Starting point is 00:56:21 And I told Sergey, I was like, give us the good stuff. Yeah. And it really did do that. It literally gives you the timestamps of every single one. And then I can go click on it and actually hear exactly that moment. Yeah. Sweet.
Starting point is 00:56:35 Yeah. So that's when I was like, damn, this, I would have had somebody do this as a week-long project. It would have been 10 hours a week of a researcher's time. I'm experiencing the same thing. When I do research notes, I've created my own, like, mega prompt. Yeah.
Starting point is 00:56:54 And it will go and, like, tell me where you worked before and who's in your circle, who your competitors are, who your friends are, blah, blah, blah. And then, one of my secrets, I try to find old podcasts. If you're an interviewer watching, I try to find what the person was talking about five years ago, 10 years ago, and then over 10 years ago. And I've gone into interviews now with Michael Dell and talked about things he was talking about in the 90s. Yeah.
Starting point is 00:57:19 And it finds me some ancient stuff. Like you would pay a researcher or a producer, you know, $70,000 a year, $80,000 a year to do this, and they would have done a third of the job and taken 10 times longer. It's really gotten weird just in the last six months. What do you think the next six months look like? I think the dream, what we are going to try to do, is help businesses run as autonomously as possible. You know, everybody talks about this. AI is going to create this one-person, one-billion-dollar company.
Starting point is 00:57:49 Some people say it's already happened because people pay researchers like $1 billion. But it's not truly moving the GDP by $1 billion. It's not truly creating new value. So the best way to do that is to actually help a small business. People who would otherwise drive Ubers for, like, extra passive income, to buy a Mac Mini, set up Perplexity Personal Computer, run their business on that, or run it on the server. It doesn't matter. And actually make real money.
Starting point is 00:58:18 Yeah. Hundreds of thousands or even millions a year. And grow it. Have Computer go and run your ad campaigns on Instagram or Google. I mean, integrate with SEM and SEO tools, find new users, and integrate with Stripe, charge them, ship new features, have your own, like, Intercom integration for customer support. And, like, have this all working while you can be sipping wine in Napa. That's the dream that, you know, feels awesome to say.
Starting point is 00:58:46 Everybody thinks, yeah, it's already there. It's not there yet. Someone has to do that hard work. Yeah. That's what we want to do. Yeah, it's a great vision, because when I watched startups 20 years ago, there were so many checkboxes they had to do. I have to find an office space.
Starting point is 00:59:01 I got to put up a bunch of servers. I got to hire an HR firm. I got to hire a PR person, all this stuff. And now I talk to young founders. They got a three-person team. They've come out of A16Z, my program Launch Accelerator, or whatever it is, Y Combinator. And I'm like, okay, you raised a half million.
Starting point is 00:59:19 You raised a million. Who are you hiring? And they're like, I don't know if we need to hire anybody. I'm like, if you could hire somebody, who would you hire? They're like, well, I do my own HR. I have this partner. And I'm like, how are you doing hiring anyway? And they're like, well, I put out an ad.
Starting point is 00:59:35 And then it sorts and ranks the candidates. And then it emails the top 10, asks them a bunch of questions. And then I meet with the last two. And I'm like, that's what a recruiter did. The entire recruiting job has been abstracted.
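That funnel (post an ad, rank applicants, contact the top ten, meet the final two) is only a few lines of orchestration once an AI screener produces a fit score. A sketch with a made-up scoring field standing in for whatever the screener would actually compute:

```python
# Sketch of the automated recruiting funnel described above: rank
# applicants by a (hypothetical) fit score, auto-contact the top N, and
# shortlist the finalists the founder actually meets.

def rank_candidates(candidates: list) -> list:
    """Sort applicants by fit score, best first."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

def run_funnel(candidates, contact_n=10, finalists_n=2):
    ranked = rank_candidates(candidates)
    contacted = ranked[:contact_n]       # auto-email screening questions
    shortlist = contacted[:finalists_n]  # the founder meets these two
    return [c["name"] for c in shortlist]

pool = [{"name": f"cand{i}", "score": i} for i in range(30)]
print(run_funnel(pool))
```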
Starting point is 00:59:53 A lot of connectors, a lot of specific workflows. People don't want to learn how to write like, you know, essay long prompts. You know, it needs to be so quick and fast and autonomous. You just set it up and done. And you have an idea. You can turn it into a business and start making money. Yeah, it's an incredible future. And it feels like it's right here.
Starting point is 01:00:13 How do you think about job displacement? Because you're actually making the tool that enables people, yeah, to be a solo entrepreneur and get to a million in revenue, but it's also the same tool that doesn't require them to hire. And we've had this debate a million times on the podcast. I'm wondering if, like me, you have moments where you're like, oh my God, this is really terrifying, yeah, a lot of people are going to lose their jobs really fast, yeah, and then, oh my God, you can learn any skill you want and all the things that were hard are now easy. Yeah.
Starting point is 01:00:45 I go back and forth. I'm 70, 80% super positive about this, but, like, 20% of the time I'm a little worried. Yeah. Where do you sit? I mean, America has always been about, like, entrepreneurship, right? Like we've been about trying to build new things, discover new things, go explore.
Starting point is 01:00:55 entrepreneur, entrepreneurship, right? Like we've been about like trying to build new things, discover new things, go explore. I think this whole like Henry Ford came and built factories and brought in jobs and things like that
Starting point is 01:01:07 and like put people into a box. But I think the reality is people, most people don't enjoy their jobs. They're doing it. No, they hate them. Exactly. So there is suddenly a new possibility, a new opportunity to go use these tools, learn them, and start your own mini business.
Starting point is 01:01:25 And if it pays for your needs for a year, or multiple years, and lets you have a high-quality life and good work-life balance and a true feeling of agency and ownership and passion to, like, get your ideas out there. I think that, even if there is temporary job displacement to deal with, that sort of glorious future is what we should look forward to. I think you're exactly right. There will be some displacement, but then there are also going to be so many opportunities opening up.
Starting point is 01:01:52 And it requires the individual to not be passive. Exactly. They have to be rugged individualists. They have to be resilient. Yeah. And they have to be resourceful. And I think once you start playing with these tools, that's what happens. Exactly.
Starting point is 01:02:05 You all of a sudden feel like... It brings out the best in you if you truly are in a good space. Yeah. Yeah. And then today, Comet for iOS is out. I'm a Comet superfan. I required everybody. You were nice enough
Starting point is 01:02:18 when I emailed Joe. I was like, can you send me some licenses? You don't remember. You sent me a bunch of licenses. I said, everybody put this on, because it was $300 a month when you first came out with the Comet browser. Now it's free, I think. We're all users. Highly recommend it. Highly recommend getting a pro account. It's only 20 bucks a month to get into Perplexity, which is a joke. So you can get on board for nothing, less than a dollar a day. But what does iOS allow me to do?
Starting point is 01:02:43 And how does it connect to Computer? Because that's another thing I'm having. Yeah. Claude Code, Computer. There's not a good enough integration with this mobile device yet. Yeah. So Computer is already on the Perplexity app. So you can just toggle Computer and start using it.
Starting point is 01:03:01 Comet's uniqueness, and Perplexity's strategy as a company, is the fact that you can control the browser. So the browser also becomes a tool for Computer, just like your Google Workspace and all these other things. Until the whole world is organized around CLIs and tools, there's still a lot of tasks we have to do manually on the web, on the browser: open tabs, fill out forms, click on things, upload stuff. All that stuff, if you want to automate, you need a browser.
Starting point is 01:03:31 You need an AI that can natively control the browser. So that is Comet. And that's why, no matter how many other tools exist in the market, like OpenClaw or Claude Cowork, executing tasks on the browser on the server side, along with all the other things, is something uniquely Perplexity can do. Yeah, my dream is that you'll create an Android app
Starting point is 01:03:54 that roots my Android phone. Yeah. And that you just take over and see everything because one of the blockers I have now is some of the websites have gotten a little persnickety. Yeah. I don't want to mention too many, but Reddit, LinkedIn. Yeah.
Starting point is 01:04:09 And like, they're just... I am a great Reddit user. I'm a great LinkedIn supporter. But sometimes, like, I need to get my InMail from my LinkedIn. And I just need to, you know, find seven people at a company. Is there going to be a solution between the LinkedIns and Reddits of the world and the OpenClaws and Perplexities? Yeah. How is that negotiation going?
Starting point is 01:04:34 You don't have to speak about any specific ones unless you want to. Yeah. But it feels like there's got to be a solution, and I'm willing to pay for it as a user. I'm willing to pay Reddit to allow my bot to show up and behave properly. Well, I cannot speak about any particular company, but we are happy to work with anyone, right? So I think with Comet, our idea is to give people the flexibility to set things up on their
Starting point is 01:04:58 own. Yeah. And any official APIs that anyone's willing to offer, we're always happy to make part of Computer. Here's what I think should happen. Let me see if you agree. And this is for Steve Huffman at Reddit. I go on Reddit. I do a pro account for 20 bucks a month. And when I do that, I can authenticate whatever tool I want to do a series of well-behaved things a certain number of times a day. Yeah. So it's not unlimited. I'm not going to scrape the whole site, but I would like it to just let Perplexity or Computer go and just tell me, hey, what are people saying on
Starting point is 01:05:40 the This Week in Startups and All-In subreddits, summarize it for me so I get the customer feedback. And I would literally name my agent, and I would say it won't post on my behalf, it won't vote on my behalf, I just need it to do a couple of little read-only things. This would be an easy solution. Or LinkedIn. I already pay LinkedIn like 50 bucks a month. They should just let the $50-a-month tier work with Computer. Yeah, absolutely. I mean... Okay.
Starting point is 01:06:07 This is for Satya Nadella. Let LinkedIn work with perplexity and the other players and we'll pay you extra. Perfect. It's a revenue stream. Don't you think API access for our customers is a revenue stream? I think so. I think fundamentally giving users a choice and setting it up as a win-win for both the business and the user is where the world should head to.
Starting point is 01:06:28 Yeah. And I would say the same thing applies to any website in the world. Like, if a user wants an AI to use it on their behalf, the site should be okay with it. Because that's what the user wants. I mean, I have a paid New York Times subscription. Like, let me go in there and do, you know, whatever, 100 searches a day, a week, a month, whatever they choose. But that would make the subscription that much more sticky. Exactly.
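The scoped, rate-limited agent access Jason is pitching here, a named agent with read-only permissions and a daily quota, could be sketched roughly like this. Everything below is illustrative: the agent name, scope strings, and quota numbers are made up, not any real Reddit or LinkedIn API.

```python
from dataclasses import dataclass

@dataclass
class AgentGrant:
    """A hypothetical per-agent grant: read-only scopes plus a daily quota."""
    agent_name: str
    scopes: set          # e.g. {"read:subreddit"}; no post/vote scopes granted
    daily_limit: int
    used_today: int = 0

    def allow(self, action: str) -> bool:
        """Permit an action only if it is in scope and under quota."""
        if action not in self.scopes:
            return False          # e.g. "post" or "vote" is refused outright
        if self.used_today >= self.daily_limit:
            return False          # quota exhausted for the day
        self.used_today += 1
        return True

# The agent can read twice a day, but never vote or post.
grant = AgentGrant("jasons-comet-agent", {"read:subreddit"}, daily_limit=2)
print(grant.allow("read:subreddit"))  # True: in scope, under quota
print(grant.allow("vote"))            # False: not a granted scope
print(grant.allow("read:subreddit"))  # True: second read of the day
print(grant.allow("read:subreddit"))  # False: daily limit reached
```

The point of the sketch is that "well-behaved" is enforceable mechanically: the site never has to trust the agent's promises, only check the grant on each call.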
Starting point is 01:06:51 All right. Aravind, love the product. Anybody at home, it's just tremendous. Go learn Computer and get the Comet browser. It has changed my business for the last two years. Thank you, Jason. Love the product, and we'll have you back soon. Thank you.
Starting point is 01:07:07 When you launch your operating system and come out with your own server and desktop versions, but business is the focus, yeah. Yes. All right, great seeing you. Thank you. We have an amazing guest, Arthur Mensch, here, the CEO of Mistral AI. How are you doing, sir? Great.
Starting point is 01:07:21 Thank you for having me. And so you're here at Nvidia's big conference, big announcement. You're going to be working with Nvidia to build models, to open source them. What is the big announcement here? Well, we're announcing that we are going to be training the next generation of frontier models with Nvidia.
Starting point is 01:07:43 It's something that we've done before with Nvidia, with Mistral NeMo, something we did like 18 months ago. And the point for us is really to be able to produce the best open source models out there, so that we can actually use those assets and specialize them through products that we do for our customers,
Starting point is 01:07:58 like Forge, that helps us customize the models for the enterprises we work with, in engineering, in physics, in science, in making them better at certain languages when we work with governments, etc. And Mistral, obviously, based in France, you're the leading AI company there. What's it like running the company and building a large language model in Europe? Obviously, there's regulations and all kinds of considerations. Privacy: the French are known for protecting privacy. In the United States, we're known for taking it away. How is the landscape there?
Starting point is 01:08:30 And what do you have to deal with there that maybe you wouldn't have to deal with in America? And what's the pros and the cons? Let's say, first, we have 25% of our business in the U.S. And 25% of our researchers are actually here. So I actually spend a lot of time here as well as in France, as well as in the UK, in Singapore, where we are. So, of course, it's different markets. It's markets where you have language, which is a topic,
Starting point is 01:08:53 where manufacturing is a much bigger piece of the cake than it is here. And I'd say our strength has been to also work with European companies that are a bit lagging behind and that want to adopt the technology to leap forward. And we've been able to do that through a forward-deployed engineering engagement for our Forge product, for our studio product, that allows us to deploy agents that do end-to-end automation.
Starting point is 01:09:18 But on top of that, the thing that we have announced today, like Forge, is something that is actually being used today with customers in the US, because they come to us with needs for post-training, for making models specifically good at financial services. And what's happening is that we have this product and we can bring the models and specialize them as well. And so your belief is that specialized, verticalized models, in healthcare, finance, engineering, different verticals, will win the day? Or will a global model that does everything win the day? Well, you need general purpose models to do the orchestration parts, etc. But at some point,
Starting point is 01:09:53 your enterprise sits on a lot of intellectual property, on a lot of signals coming from physical systems, from factories, from tools. And it's actually not trivial to connect those systems, to connect those data, to models that are closed source. If you have open models, you can actually
Starting point is 01:10:09 add new parameters, you can do a lot of deeper things that you cannot do with closed models. You can also, and that's something that we do, work not only at the model side but also at the orchestration side: we sit with subject matter experts to understand their needs, and we build business applications that are fully bespoke to their needs, by modifying the models but also modifying the harness on top, etc.
Starting point is 01:10:30 So we believe that eventually building on open source technology is a way to save cost and a way to have better control, because you can run the thing on every cloud that you want, on your hardware if you want; you can deploy it on the edge if you want. And eventually, from a customization perspective, and from the perspective of leveraging the decades of IP that you've been accruing, in financial services, in heavy manufacturing, companies like ASML, for instance, do benefit from working with us, because we take their data and we build models that are specifically good for their purposes. And this training data business, using experts to come in and refine a model: most people don't know this business that well. But it has become a very large part of the industry.
Starting point is 01:11:10 Obviously, Scale AI was doing it. They went to Facebook, lost a lot of the customer base who didn't want to send their data, I guess, over to Meta. We're investors in a company called Micro One that's doing pretty well in this space. There's other folks doing it. Explain to the audience what you're doing specifically for companies and how this training works in a verticalized way. And then how you silo that data, because if you're working with one customer in aerospace or fintech, they might have a need set. But they may not want that training to go to a competitor. I can use a few examples.
Starting point is 01:11:44 I think overall, the data segregation is super important. And the way we have solved that is through a portable platform. So our technology is a set of services, a set of training tools, a set of data processing tools that I can take and that I can put on the infrastructure of my customers. So suddenly, from an IT perspective, when we talk to the CIOs, they realize that from a security perspective, the data doesn't flow out. There's no data flow coming back to Mistral, because everything stays there. Now, the way we then use that technology that has been deployed is that we're going to be working with the team that
Starting point is 01:12:17 is doing image scanning and defect detection at ASML, for instance. And we're going to be sending forward-deployed engineers, scientists (they're all PhDs, they know how to train models), and they spend some time with the subject matter experts, who can explain how an image is being scanned, how you detect defects, et cetera. And based on that, we're going to work out what kind of data needs to be used to train the models that are going to solve the task itself.
Starting point is 01:12:42 And so we send the technology, and typically we send a few scientists, because you do need that expertise transfer and that knowledge transfer between our teams and the vertical experts. And then we make sure that eventually our team no longer needs to be there to retrain the models, to get more data access, etc. So that combination of data segregation,
Starting point is 01:13:03 expertise transfer, knowledge transfer is the one thing that makes us quite unique and allows us to serve the most critical use cases, the most critical processes, in industries that actually need to take their data and put it into models for it to work. Yeah, this used to be the entire open web, whatever was available, legally gray market, et cetera.
Starting point is 01:13:25 I won't ask for your comment on that controversy, but we've kind of exhausted what's in the open crawl, yeah? We have. And it's time to actually either make synthetic data or actually use experts. Do you believe in synthetic data? And where does that work and where does it fail? We use synthetic data as a way to warm up the models. It's a way to actually be quite efficient
Starting point is 01:13:47 at the beginning. If you have a large model and you want to train a small model, you will use your large model to process and to produce a lot of synthetic data at the beginning. But eventually, you do need to have human signal. And the human signal is something that is always a bit costly to acquire, because you need to talk to the experts. They need to give feedback to the machines. And so at the beginning, synthetic data allows you to do the compression, to further compress the models; at the end, you do need to go and get data that is produced by humans. So, yeah, it's mostly an efficient way of training models. You have bigger models that are used as teachers for smaller models, but it's not enough. And so you also need human signal.
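The teacher-student setup Mensch describes, a big model producing soft targets for a small one, can be sketched in a few lines. This is a generic illustration of distillation, not Mistral's actual training code; the logits and temperature below are toy values.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing more of the
    # teacher's relative preferences between options.
    z = [x / temperature for x in logits]
    m = max(z)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student's current prediction
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]   # big model: confident about option 0
student = [1.0, 1.0, 1.0]   # untrained small model: uniform guess
print(distillation_loss(teacher, student) > 0)     # True: they disagree
print(distillation_loss(teacher, teacher) == 0.0)  # True: zero when matched
```

Training the student means minimizing this loss over many examples, which is the "compression" step; the point Mensch adds is that once the student matches the teacher, further gains need fresh human signal, not more synthetic targets.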
Starting point is 01:14:27 Arthur, we've seen an incredible explosion. We're sitting here on AO 52: after OpenClaw, the year of our Lord, 52 days. When you first saw OpenClaw and saw the reaction of hackers, founders, startup CEOs, just the amount of energy, and it racing to the top of GitHub with the most stars and likes and all these contributors, what did that say to you as an executive in the space who's been grinding on this for many years? What does that OpenClaw moment mean? Well, it resonated a lot with what we are doing with our customers, because pretty quickly enterprises realized that if they wanted to make some gains with artificial intelligence, generative AI, they would need to automate
Starting point is 01:15:15 full processes. And to automate a full process as an enterprise, well, you can use OpenClaw, but it's actually not really enough, because you have data problems, you have governance problems, you can't observe the process that is running, and you can't control it. In many cases, when you run a KYC process, if you're HSBC, for instance, one of our customers, you will want to have deterministic gates that are going to always do the same thing, in a way that is observable and that you can guarantee to the CEO, that it's always going to go through these gates. And that's not something that OpenClaw is providing, because it doesn't have the kind of primitives that you need to work on collective productivity, observable productivity, and to work
Starting point is 01:15:55 on mission critical systems. On the other hand, the autonomy it gives and the autonomy it brings to people that are just individuals hacking things together is a way to also show to enterprises that if you set up the right control plane, if you set up the right sandboxes, if you connect to the right data sources, if you make sure that your access controls are well respected, then you can actually unleash the power of agents doing things for your employees. And that's going to work. But work on the platform, because otherwise you will not be at ease when you're sleeping. It is definitely something you have to be thoughtful about.
Starting point is 01:16:27 When I installed it, I gave it, just for my agent, root access to my Google Docs and my G Suite, my Notion, my Zoom, and my Google Calendar, everything. And then I realized, wow, with my enterprise edition of Gmail, essentially, I can just summarize for my entire 21-person investment company every conversation going on in Gmail and then correlate it with every conversation in Slack. And then I realized, oh my gosh, there's compensation discussions going on. There's a person on a PIP, who we put on a performance improvement plan, perhaps, or
Starting point is 01:17:06 something like that. Oh, I have to make sure nobody else can access this, because the power comes from giving it access to data, but with great power comes great responsibility. And I think people are learning that in real time. Yeah, it's a big problem, because the enterprise data is not a single thing that you want to put into a single system that is going to be accessible by everyone. And so you need to have this layer that actually understands what is in the data. You need to have a semantic layer of what can actually be exposed to HR or what can be exposed to engineering.
Starting point is 01:17:41 And typically compensation is one of these things. You want to make sure that the compensation data does not flow back to all of the enterprise, because you're going to have a lot of problems if that's the case. And so what you actually need, which is hard to do, is what we call a context engine. So, a mapping of where the data sits, that comes with a certain amount of metadata telling you that this data is actually not accessible to this part of the company. And if you actually have someone in engineering who is asking for something related to comp, the thing is actually going to tell you, look, you actually can't access that data.
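A toy version of the "context engine" Mensch describes, a map of where data sits plus metadata saying who may see it, might look like this. The file names and department labels are invented for illustration; the real product is certainly far more involved.

```python
# A toy "context engine": a mapping of where data sits, with metadata
# saying which parts of the company are allowed to see each source.
# All names below are hypothetical.

CONTEXT_MAP = {
    "hr/compensation.csv": {"allowed": {"hr"}},          # comp stays in HR
    "eng/design-docs/":    {"allowed": {"hr", "engineering"}},
    "wiki/handbook.md":    {"allowed": {"hr", "engineering", "sales"}},
}

def resolve(requester_dept, source):
    """Hand the agent a source only if the requester's department may see it."""
    meta = CONTEXT_MAP.get(source)
    if meta is None or requester_dept not in meta["allowed"]:
        return None   # the agent is told: you can't access that data
    return source

print(resolve("engineering", "wiki/handbook.md"))     # granted
print(resolve("engineering", "hr/compensation.csv"))  # refused: None
```

The check runs before any data reaches the model, which is the point of the example Mensch gives: an engineer asking about comp gets a refusal from the engine, not a leaked answer from the agent.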
Starting point is 01:18:12 So that's hard. It's actually hard. You need to rethink entirely the way your IT systems are being connected. And at some point, you also need to think about your management, because your information flow is completely different today, if you're connecting agents together with your data sources, than it used to be. And suddenly, maybe you don't need that manager whose only purpose was to take information from the bottom and put the information on the top, et cetera. So there's some IT problems
Starting point is 01:18:38 to solve, and you need the right primitives: you need sandboxes, you need RBAC, role-based access control, and these kinds of things. And you have changes to make. You need to rethink your entire customer service department, because suddenly you actually don't need that much transfer of information operated by humans. All right. You have to go. You've got a flight to catch. I have. It is so great to see you, Arthur. Continued success with Mistral. Thank you very much. Cheers. I'm really lucky to have Daniel Roberts here. He's the co-CEO and co-founder, along with his brother, of IREN.
Starting point is 01:19:12 Welcome to the All In interview program. Thanks, Jason. Pleasure to be here. Yeah. And so you started in Sydney. You and your brother was seven, eight years ago, and you got in early on Bitcoin. And all these Bitcoin miners wanted to have data centers, huh? Yeah, that's directionally right.
Starting point is 01:19:32 So the thesis we saw was this explosion of the digital world, the growth of the online world, and at some point the real world was going to struggle. So we set about to build out large-scale data centers. Yes, the first use case was Bitcoin mining. But as we said to our seed investors, use that to bootstrap the platform, generate cash flow, and layer in higher-value use cases over time as they emerge. Here we are today with AI. We are swapping out all the Bitcoin for AI chips.
Starting point is 01:19:57 When did you first start seeing the demand in the company shift from, hey, Bitcoin miners, we need some H100s, whatever it is, to, hey, we're this nonprofit OpenAI, hey, we're this research lab, we need some AI compute? When did that start hitting? Look, we had a bit of a false dawn, I would say, back in 2020. We signed an MOU with Dell to start bringing out customers and compute. But in hindsight, it was too early. So we went back to Bitcoin, kept bootstrapping the platform. Look, I would say about two years ago, and month by month, the demand just continues to build. And you were in so early that when you were looking at data center space in the United States,
Starting point is 01:20:41 you were one of one looking at the space, one of two or three people looking at the space. They were trying to sell you on space, yeah? Yeah, so we actually developed the data centers ourselves. So we go and find the land. We go and get the permits. We go and apply for grid connections. And we were doing it at a scale that just amazed people at the time. Like, 750 megawatts, our flagship Texas site, four years ago was unheard of. In the middle of the desert, we're building these big data centers.
Starting point is 01:21:08 The traditional data center industry is going, what are you guys doing? We're saying, we believe in the future of digitization, high performance computing, and obviously now, today, it's paying dividends. Yeah, I don't think anybody could have predicted, when ChatGPT came out, OpenClaw recently as a turning point, and then, you know, Microsoft, Google, and everybody embracing this. And that's your big partner, Microsoft. Yes, Microsoft's one of our early partners. We signed a $9.7 billion contract with them late last year. But as I was explaining to you before the show, that's 5% of our capacity.
Starting point is 01:21:43 So things are busy at the moment. And when you do these buildouts, the big conversation today is no longer the number of GPUs you're putting in. It's just power. Power is the constraint today, yeah? Look, for many in the industry it is. But for us, because we started eight years ago tying up all this land and power, it's not. So we've got four and a half gigawatts. For context, that's almost as much power annually as the Bay Area uses in its entirety.
Starting point is 01:22:14 Wow. It's huge. So for us, the hurdle or the constraint is really time to compute. And that's emerging across the industry as well. And time to compute means tradespeople coming to West Texas, living in a trailer that you set up, to then break ground on a data center, build foundations, build water cooling systems. Like, this is hard manual labor going on, yeah? Exactly.
Starting point is 01:22:43 And this is the whole real world challenge: to respond to these digital exponential demand curves that are unconstrained by the real world in terms of their appetite. And it just compounds. You need thousands of people out in these locations that haven't supported it. You put stress on supply chains. We're seeing what's happening with memory, every aspect of it. So it's just permanent whack-a-mole, permanently putting out fires to try and bring online this compute.
Starting point is 01:23:07 And you get to spend time there. What's it like when you set up a town or you bring a thousand people or 2,000 people to what's a pretty much remote small town? I'm assuming that when you bring a thousand, there might only be 500 living there right now. So what are those towns like? It sounds to me like something out of like the... gold mining era when people first, you know, went and were prospectors. Yeah.
Starting point is 01:23:35 Prospecting town? Pretty much. I mean, the barbecue's great. That was a draw card. But apart from that, look, we've always had a policy of hiring local, supporting the local community. This year, we're hitting a million dollars in community grants cumulatively. Yeah.
Starting point is 01:23:50 That's things like local playgrounds, supporting the fire departments. But we will hire locally. Once we can't find that trade locally, we will expand the radius by 20 miles and hire out of that, and so on and so on. That's very thoughtful, yeah. And these folks are coming, say, an electrician or a construction worker. They're coming having built houses or, you know, maybe building corporate offices. And now they come for a tour of duty here and the salaries go up massively. But they've got to leave their family for a three-month tour or something. Yes and no, because typically where we locate is where there's heavy electrical infrastructure, and where there's heavy electrical infrastructure is typically where old manufacturing and industry has closed down.
Starting point is 01:24:36 So we go down, leverage that sunk CAPEX, rehire, retrain local workforces, and bring a new industry to town in these data centers. Has that workforce now been completely depleted, and do we need to train another generation, a younger generation, to be the toolbelt generation and really embrace the trades? 100%.
Starting point is 01:24:57 We're partnering with universities, trade colleges, absolutely. And you go to a trade school, you go to a college, people are getting degrees in philosophy and English literature, they're going 50K a year in debt, 200k a year in debt. What's the starting salary for a tradesperson, working on a data center, doing electrical or construction or HVAC? What's the ballpark range?
Starting point is 01:25:22 Look, I won't talk specifics, but they are going up. The price is going up. It depends on the level. But yes, there is a rush for good labor. I'm hearing 150 to like 300K. Am I in the ballpark? The lower end, directionally, yeah.
Starting point is 01:25:35 I mean, it's incredible when you think about it. There's concern about, hey, AI taking jobs. And then on this other side of the ledger, you can't find enough talent to service it. Talk to me about energy sources and how you think about that. President Trump, Chris Wright, the administration, they kind of started with, hey, clean, beautiful coal. Year two, they're like, all sources matter. Nuclear. Obviously, nat gas is plentiful in that area. We've obviously got a lot of oil. People don't know this about Texas: in the United States, it's the number one source of solar installations, yeah?
Starting point is 01:26:14 Talk to us about energy. So our philosophy has been sustainability from day one. We have used 100% renewable energy since inception. What? 100. Wait, how is that possible? We use hydro in British Columbia; we use wind and solar in West Texas. In West Texas, where we're located, there's around 45 to 50 gigawatts of wind and solar. Yeah. The transmission line to export that down to the load centers in Dallas and Houston is 12 gigawatts. Oh. So you go and locate at the source of low-cost excess renewable energy, monetize it into this digital commodity, and export it at the speed of light as tokens. Great arbitrage. And the wind is producing a lot, but it's harder to get it from those areas where people are willing to put it up. I mean, people don't understand how big West Texas is.
Starting point is 01:27:00 It is an incredible amount of land. And you're coming from Australia. Out on the west side there, people don't understand exactly how much just pure nature land there is, yeah, undeveloped. So much land. And the issue is distance. You've got to spend billions of dollars on this transmission connection infrastructure to move that power to where people actually want it. You can build wind farms, you can build solar farms. But if you build it in the desert and no one can use it, then what's the point? So the whole opportunity for our industry is to go to the source of that power and monetize it. So the data centers follow the wind turbines, the solar installations. How do you think about batteries, and are you able to put those online? Because obviously you're going to have periods where, hey, it's not a windy day.
Starting point is 01:27:43 In Texas, we have very few days when it's overcast. So that problem is pretty much solved. But you're going to have 50 days where the sun's not beating down. So how do you deal with the demand, and softening that duck curve? We don't need to. The utility does that on our behalf. So this is why these grid connections are so scarce, so hard to get, and so highly valued: because once you get that grid connection, the utility underwrites all of that variability. They guarantee you 24-7 reliable power. Got it. So on their side, they're figuring it out. Something goes down. And they could fall back, even though you're 100% committed to renewables. If they needed to fall back to gas or whatever,
Starting point is 01:28:22 they have that ability out there. So you have that as a backup. A lot of talk about or a debate, are we getting ahead of our skis? Are people slowing down? There was some talk about the Open AI project, maybe downscaling a little bit. Is Open AI a partner as well?
Starting point is 01:28:39 Can't comment. Can't comment. Okay. So we'll read into that, whatever we want. But are there pockets where people are saying, hey, let's slow down, or is it still gangbusters? It's right up the end of the spectrum. It's gangbusters.
Starting point is 01:28:54 We cannot meet demand. That's why the whole industry now is around time to compute. There are no idle GPUs in the world sitting in a data center. Yeah. And what's your take on when software makes, and this is a big discussion from Jensen himself during his two and a half hour keynote yesterday. We're sitting here Wednesday. I think he did his keynote on Tuesday.
Starting point is 01:29:17 He was talking about, hey, software is going to, you know, lower the cost of tokens 50X, and then you have transport also contributing to that. When do you think the curve goes from parabolic to simply growing at a ridiculous level? Is there a slowdown coming? Or how are you planning for the future? Look, I think it's actually the opposite. I think it feeds on itself. So I'll give you one example. You go into ChatGPT today and you generate an image. You enter the prompt. It's like the dial-up internet days. It is.
Starting point is 01:29:49 Right? It takes minutes. You know, I better get this prompt right. Yeah. Finally two minutes later it comes. Now, I'll give you an example. If we 10x the amount of compute available, which is an enormous task from where we are today, and those images take five to 10 seconds, are we going to generate more or less images? Oh, many more. This is Jevon's paradox. This is the theory of induced traffic. You build a couple more lanes. People start to think, well, maybe the distance from Bondi Beach to the central business district in Sydney terms would be, an acceptable commute. Love the analogy. Yeah. So what do you think about, or what are you seeing? I mean, we're hearing at Nvidia.
Starting point is 01:30:28 Obviously, they make the leading edge chips. They just walk rock. So now you've got, you know, two of the leading edge chips coming out of the same company. But custom silicon becoming a big discussion. Has that started to land in the data centers yet? Obviously, Google, don't know if they're a customer you can tell us, but they're making custom silicon. Amazon is making custom silicon, meta is making custom silicon.
Starting point is 01:30:52 Talk to me about that revolution, and is it actually making it to the data centers yet? Look, to various degrees it is. They're promoting their products. They're trying to tie up data center capacity. So, yes, there's multiple silicon looking for homes. I think it's fair to say Nvidia has a massive head start. The ecosystem they've incubated, the standards that they've set. So I would say the safest pathway to build out at scale early is to follow the
Starting point is 01:31:16 Nvidia roadmap, but absolutely, over time, we are seeing these chips emerge. And in terms of desktop computing, I don't know, you saw the announcement that Dell and Nvidia are making a really powerful desktop, 750 gigs of RAM, a lot of power. You're going to be able to run some local models, open source, with OpenClaw, and open source coming from Kimi and a bunch of the models out of China. The hacker group, which I think you started in like I did, probably in similar time periods, people are starting to get really obsessed with having a 10 or $20,000 desktop setup and running this locally. What do you think of that trend? I'm curious.
Starting point is 01:32:04 to every man and woman in every house, and their ability to code and use products like OpenClaw, the generation of demand and appetite for compute at a local level all the way through to these mega data centers, it's absolutely real. And as we see the emergence of agents using more and more compute, as we see autonomous vehicles and other automation and robotics, it's absolutely going to compound. And what about nuclear? The Trump administration really seemed to flip the switch on a growing belief that, hey, wait, nuclear is pretty great. It's clean. It's the original renewable, in a way. And these new modular reactors have nothing to do with Chernobyl, Fukushima, or Three Mile Island. They're much safer.
Starting point is 01:32:50 They're a completely different architecture. Have those started to land yet? And since you located, correctly, in the great state of Texas where I'm from, are you following nuclear? I think you have to. I think the reality is it's going to take a decade, maybe a bit longer, by the time big projects can come into commissioning. But now is the time to start that conversation, put in place policies, mobilize capital, and start that ball rolling. Yeah.
Starting point is 01:33:18 Do you have a data center going up near nuclear? No, not at the moment. Not at the moment. But you're actively tracking that activity? Yes. Yeah, this seems pretty inevitable, yeah? Feels like it. And if that happens, what impact does it have on your industry?
Starting point is 01:33:35 If you could, because obviously it's happening in China, and people always pointed to the Bitcoin miners, they were like the canary in the coal mine, near the hydro dams and near the nuclear plants where there was excess capacity. What impact do you think this has if you could actually have small modular reactors next to data centers? Well, I think it just opens up the market and enhances the U.S.'s competitive advantage in this space. Like, AI is inevitable. Robotics is inevitable. The reality is the correlation between human progress and energy consumption is really, really high over a very long time period. So if we can find a way to unlock new generation, clean generation like nuclear, and locate
Starting point is 01:34:13 that more at the source and enable more compute on a distributed basis, all those use cases we just discussed become easier, more fluid, and faster, and then you get that positive flywheel around Jevons paradox and demand. Talk to me about the architecture today of Ethernet and data moving between data centers and within data centers. That backbone is going through a paradigm shift as well, yeah? Yeah, it is. And Jensen coined the term: the data center is the new computer. So you need to step back and say, right, this big building is essentially the old desktop PC we had under our desk at home.
Starting point is 01:34:52 You go, right, how does that work? So all the cabling, the latency, the number of hops between each GPU, how they talk to each other, the fabric around InfiniBand and Ethernet, it's absolutely critical, because every millisecond matters for the performance of that cluster. Yeah. And what do you think of Elon's vision? It's obviously a longer-term vision of putting data centers in space.
Starting point is 01:35:20 And there's a couple of other people working on it as well. Yeah, I mean, it's very hard to argue with Elon. He's been very right on a number of things for a very long time. I think, sitting here today, it feels exceptionally difficult, given the cost of moving things to space and the challenges around radiation. There's a huge amount of engineering challenges, but that's never scared Elon before. So I wouldn't bet against him. He's inevitably right, but sometimes he's late.
Starting point is 01:35:47 He might be late to the party. He might be late to the dinner party. He might show up at dessert, but generally he nails it. How much of an issue is getting the data out of the data center to consumers today? Is that not something people are worried about when you're building something out in West Texas? All that data, fiber, all that's been taken care of, or does that become a gating issue at some point? So this was one of the big myths that we had to bust when we started this business. Because everyone said data centers must be located close to population centers, metropolitan areas,
Starting point is 01:36:20 latency is really important. And we said, yeah, that's right, latency is important. But the reality is, in the US, Texas especially, there is fiber everywhere underneath the ground. Lots and lots and lots of it. And when you look at latency from our site in the middle of the desert in West Texas down to Dallas, the big carrier hotel, it's a six millisecond round trip. What's six milliseconds?
Starting point is 01:36:42 There's a thousand milliseconds in a second. Yeah. We're talking six. It's adjacent. Yeah, it's not even, yeah, it's definitely not material. Listen, continued success and you're hiring a lot of people. Yeah. Yeah.
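The six-millisecond figure quoted above also sanity-checks against the physics of fiber. A back-of-envelope sketch, where the ~550 km West Texas-to-Dallas distance and the fiber propagation speed are my assumptions rather than numbers from the conversation:

```python
# Back-of-envelope check of a ~6 ms round trip over long-haul fiber.
# Light in optical fiber travels at roughly 2/3 of c, i.e. ~200 km per ms.

C_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Propagation-only round-trip time over fiber, ignoring switching hops."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

rtt = round_trip_ms(550)       # assumed West Texas -> Dallas distance
share = 6 / 1000               # 6 ms as a fraction of one second

print(round(rtt, 1), share)    # 5.5 0.006
```

At roughly 5.5 ms of pure propagation delay, the quoted 6 ms round trip is about what geography alone dictates, and, as the exchange notes, it is 0.6% of a second, which is why it is not material for this workload.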
Starting point is 01:37:01 I think we've got 129 job advertisements up at the moment. All right. So everybody go to the IREN website, and listen, the company's doing fantastic. Thanks for spending some time with us here at All-In at GTC. Thanks, Jason. Appreciate it.
