Yet Another Value Podcast - SemiAnalysis' Jeremie Eliahou Ontiveros on all things datacenter / power

Episode Date: August 18, 2025

In this episode of Yet Another Value Podcast, host Andrew Walker welcomes Jeremie from SemiAnalysis for a deep exploration into the surging demand for power infrastructure driven by AI. Jeremie breaks down the evolving trends in onsite gas turbines, the shift from diesel generators, and how data centers are racing to scale quickly. They dissect hyperscaler capex forecasts, the sustainability of GPU investments, and the contrasting strategies of CoreWeave and Oracle. The discussion extends to robotics adoption, the future of labor, and whether remote locations like Alaska could become new data center hubs. A timely conversation as AI's infrastructure footprint continues to expand.

[00:00:00] Andrew introduces the podcast and guest.
[00:03:14] Jeremie discusses onsite gas trends.
[00:05:00] Andrew raises concerns about turbine demand.
[00:10:23] Jeremie outlines backup strategy shifts.
[00:13:42] Andrew shifts to hyperscaler capex forecasts.
[00:18:24] Jeremie on declining asset efficiency.
[00:19:35] Andrew questions depreciation accuracy.
[00:27:03] Andrew digs into CoreWeave business risks.
[00:32:50] Jeremie frames GPU vs. data center contracts.
[00:34:56] Jeremie says Oracle took a CoreWeave-style risk.
[00:42:06] Jeremie explains labor and logistics challenges.
[00:52:15] Andrew pivots to robotics discussion.
[00:56:49] Jeremie sees social value rising post-automation.
[01:00:18] Andrew wraps up and plugs SemiAnalysis.

Links:
Yet Another Value Blog: https://www.yetanothervalueblog.com
See our legal disclaimer here: https://www.yetanothervalueblog.com/p/legal-and-disclaimer

Transcript
Starting point is 00:00:00 You're about to listen to the Yet Another Value Podcast with your host, me, Andrew Walker. Today's podcast, we have Jeremy from SemiAnalysis on. This is his second time on the podcast. He came on in February. We talked all things semis, and particularly power then. We dive into all things power now. I think it's a really interesting conversation. I mean, if you've ever seen the SemiAnalysis stuff, they're deep, deep in the weeds.
Starting point is 00:00:24 But it's going to touch on a lot of the, whether you're interested in, you know, invidia yolos and core weave or whether you're interested in just kind of the demand for power the outlook for natural gas that'll look for natural we're going to touch on a lot of different stuff i think you're going to be really interested in it we even hit on a little bit of oracle and larry ellison who i've been a little bit obsessed with since the book club on oracle so we're going to get to all that in one second but first a word from our sponsors this podcast is sponsored by portrait analytics people ask me all the time what's your favorite stock screens run to look at for ideas is it low price to earnings high dividend yield what What is it? And my answer is simple, I don't run screens. I somehow doubt that there's serious alpha in sorting through stocks that are trading under 10 times price earnings on Yahoo finance. But portrait analytics has completely changed the game on screening. It lets you create bespoke screens to generate actually unique ideas. Let me give you an example of one that I've been using recently. I wanted to look for stocks with greater than 200 million market cap where the company has publicly discussed trading at a discount appears, and both the companies
Starting point is 00:01:23 and insiders have been buying shares on the open market in the past 12 months. To me, that's an interesting screen. It's unique ideas. It's a blend of quantitative and qualitative. It's pulling things that aren't, you know, just purely numbers-based that the insiders are talking about and it's building something that kind of fits with my view of the world and my view of the type of stocks that would be interesting. Portrait, let me find a handful of companies that meet that criteria. The last time I did the screen, by the way, the screens run every day if you want them to, every week if you want them to. The last time I did the screen, it had four or five stocks that exactly hit that criteria. And then boom, I had a list of really
Starting point is 00:01:54 interesting things that I actually might buy that I could sort through. It also showed me exactly where the company was talking about how their valuation compared to peers. So I could see, hey, was this a one-off or are they consistently talking about it in a way I think it's interesting. Anyway, I think it's completely changed the game for screening and for generating new ideas. If you're looking to up your screening game, you should check out portrait analytics at portrait analytics. I'll include a link in the show notes. All right. Hello.
Starting point is 00:02:19 Welcome to yet another value podcast with me. I'm happy to have one today for the second time. Jeremy from Sending Analysis. Jeremy, how's it going? Hey man, good to see you again. Thanks for having me. Look, you've got an open invite. Your last pod was actually, I never reveal sets,
Starting point is 00:02:34 but it was one of the most popular podcasts we put out this year. YouTube stats were great. I don't know what it says that the audio stats were even better. I don't know if that speaks to my face, your face, whatever is, but both the stats were great, but audio in particular was off the chart. So the people loved it. I'm super excited to have you back. I want to dive into all things, send me analysis after I just remind everyone,
Starting point is 00:02:53 quick disclaimer, not investing advice. We're going to talk about a ton of stocks today, see the full disclaimer at the end. Jeremy, you and I were just wrapping earlier today. Obviously, I think our conversation is going to cover everything, semis, power and everything. But I'd love to you start, like, kind of in the semis and power space. What's the next big trade in the semis and power space, I guess, is the headliner. Yeah, yeah, disclaimer again, not investment advice. But definitely, I think one of the interesting things that you're seeing is the search in on-site gas.
Starting point is 00:03:22 And I'd say a lot of people talk about deploying CCGTs to power data servers, but there's an issue with those. It's only time. Everyone knows there's a multi-yearly time to get CCGTs. Maybe what people don't realize is that actually you can deploy at very large scale much faster, more smaller units, more modular units. So you have two types of systems. You have the IRO derivatives, which are turbines mostly sold by GEV and CAT. And you also have basically huge car engines, it's called Rice Engine, Respirating Engine, supplied by folks like VoltaGrid, also Caterpillar and a few others.
Starting point is 00:04:02 And in both cases, we're increasingly seeing, I'd say, solid, reliable data service projects, projects that I believe are going to be operational in 26, 27, serves some very large end users like AI labs and hypersators. And I think they're going to be deploying those technologies. So basically, if you think about what's the market for on-site gas for data centers? In 2024, it's pretty much Elon Musk. It's pretty much the Memphis site. In 2025, Elon Musk, again, is a driving force. So in 24, you had like 300 megawatts of on-site turbines or 250, whatever. 25, he's probably adding another 500 to gigawatts on site.
Starting point is 00:04:42 But you have other sites. Another one that's well-published size is the Stargate Adelene, Texas site, which is going to deploy 360 megawatts of the site. 60 megawatts of those GEV-Iro derivatives. But there's more, actually. I can't talk about all of these sites. If you want to do more, subscribe to the DASER model. That's a fantastic pitch.
Starting point is 00:05:01 And look, I was just reading some stuff to prep for this podcast. And if you have, I mean, some analysis is huge. If you haven't subscribed, though, my personal favorite is you guys just have the drone shots of all of these things, you know, with the, especially on the Oracle co-located one where you're like, it's Oracle here. It's just really interesting. and it'll give you a really deep knowledge, just on natural gas turbines. So you said it.
Starting point is 00:05:23 And I think me as a dumb, dumb generalist, the thing that worries me is like, well, G. Bernova, who provides a lot of these turbines, the stock has been a screamer. I mean, I believe the spin-off from GE happened in 2024. It's like a six-bagger since the spin. The stock is a, is up one and a half times this year. So I'm not saying like, hey, I'm scared to buy a stock that's gone up, though, that is always a little worried.
Starting point is 00:05:48 I guess the thing with G.E. Veranova is, what I have heard is, hey, these wind turbines, they are complicated, right? But pricing is going through the roof because the demand is crazy. And what I kind of worry about as a generalist is twofold. A, if the demand pauses, and I do want to talk a lot about AI capax. But B, I believe pricing is also going up because GE and it's kind of an oligopoly and the other people who make it are like, we don't know if we can trust in this demand. So they've actually held back on building new plants, bringing new supply online. So they've got this beautiful, beautiful thing where demand is through the roof and they're not bringing supply online. Like, how long can that really hold? At some point, somebody's going to be like,
Starting point is 00:06:26 I need to bring a little bit more supply online. And once that happens, the floodgates break. And you know, you worry you're buying into a super cycle. And there's two methods for it to break. So I knew it threw a lot out to you, but I'd love to just get your overall thoughts on that. Yeah, sure. To guess for the roof, you said it. However, I'd say there's two types of demand here. One is the big turbines, again, 500 megawatt per unit. Those are mostly serving the grid. It's a few select very large datasets like META, for example, talked about building an on-site power plants.
Starting point is 00:06:57 But mostly, it's to serve the grid. So it's still related to data center growth, but it's for a broader use case. Now you have another use case, which is deploying turbines on site for time to market purposes. And what is interesting is that the dollars that flow to these turbines are actually, to some extent, not net new dollars, they're taking market share
Starting point is 00:07:17 for something else, which is the diesel generator market. So actually what you could see and what you're starting to see, if we take the Outlay and Texas data center as an example, I mean, I look at it as I like pictures, I don't see any diesel generator. And what I understand is that they're going to be
Starting point is 00:07:32 deploying those turbines maybe at first for on-site power, but then as grid power comes online, they're going to be using those turbines for backup. I think you're seeing that same pattern with the Memphis Data Center. where Elon brought online, those smaller turbines, again, which are much faster to scale up,
Starting point is 00:07:50 which can be manufactured at scale their factory made. He first uses those turbines for on-site power, and then as big power is built and they have substation and such, they use it as backup. So what happens is that the dollars that typically flow to diesel generators now flow to turbines, which serve initially as primary power and then as backup. Let me just a question on that. Again, you are the data center expert, I'm not.
Starting point is 00:08:13 But when you're building these big wind turbines and back up, and you had a great piece. I was so fascinated by the piece on intermittent power issues at data centers and their impact on the grid. We can talk about that later. But when you say, hey, you're making, I mean, you're making tens and hundreds of millions of dollars of investments into these mac gas turbines, right? And they initially serve as primary power. And then they're eventually serving as secondary power.
Starting point is 00:08:36 And this might blend nicely into our catbacks discussion at some point. But when I hear that, I say, oh, you have a rush right now, right? But when you're spending hundreds of millions of dollars on something that's going to end up being your backup power once like the grid's kind of online, that doesn't seem sustainable, right? Like, yes, it's sustainable when you need to get stuff up now. But when you're planning your 2027, 2008 data center, at that point, aren't you kind of looking and saying, hey, we're going to be connected to the grid? Maybe we don't need to spend hundreds of millions of dollars on these giant turbines. And also, I mean, there is the discussion of as batteries get better, like maybe you need less backup sources. It just strikes me as if these are eventually going to be backup sources, it sounds great in the near term.
Starting point is 00:09:17 And obviously, it sounds like diesel generators are really up a creek. But, you know, the medium to longer term outlook for these strikes me as a little bit murkier. I mean, data source have been deploying backup forever diesel generators. If you think about it, most of the time, those are stranded expense. They never actually serve. If you normalize the cost of power in terms of how many hours they run per year for these diesel generators, it's absolutely high. we're probably in the 1,000 per megawatt hours. So those are not economical purchases.
Starting point is 00:09:47 They serve when you have a blackout. And so, for example, in Spain a few months ago, there was this horrible blackouts. The data center industry was to some extent, I would say, I don't know, proud. I don't know if that's the right world, but just because they were actually able to navigate through that events. You've got to be very careful because you're the data center industry. You're like, don't worry, guys, we had 100% up time through the blackout. And then you're like, hey, the hospital down the street was off one. Granny was sitting at 100 degree to print.
Starting point is 00:10:15 I think that answered it well. I do have some more questions there, but let me, let's go to the overall AI catbacks. You guys published a piece. Oh, did you have something else, please? Yeah, I even flagged something else, which is that, so the thing is that today it's all about time to markets, but as the market matures, like some of these, or maybe think about this way, it's obviously technical, so right now it's the upcycle, so time to market matters about everything.
Starting point is 00:10:41 But then as we get into the down cycle, which for sure is going to happen at some point, you're going to see companies starting cash-saving mode. When they do that, they're not going to be contracting new cloud capacity or much less. They're going to be using their existing footprint, which basically means taking training GPUs to serve inference. And so what if you build a training data server with no backup at all? Because uptime doesn't matter so much for training data service. And then so you have a down cycle and you want to save money.
Starting point is 00:11:08 So your inference service, I mean, you want it to be high availability. So, like, that's also why it matters to have backup. Like, that's, again, a decade-old rule for the data service. You have backup. I just think it's a, you just shift from diesel to gas, basically, because it serves as an initial purpose of answer, yes. Yeah, sorry, go ahead. No, no.
Starting point is 00:11:30 Hey, that's fantastic to think about. And obviously, when I asked the question, I wasn't framing it as a, hey, these guys always need backup power. But I guess my push to you would be, you know, right now they're doing it with the gnat gas turbines because you do the gnat gas turbine. It serves as your primary and then you can use it as your secondary once you're plugged into the grid. If I was planning in the 2008, right? Let's say I had something in 2008 and I had a hookup to the power grid in some way in this thing that's coming on in 2020.
Starting point is 00:11:57 Would someone be planning that 2008 power plant with the gnat gas turbines or would they go back and say, hey, obviously I need backup. Nobody's on that. But would they go back and say, hey, in 2020, battery is much better. So maybe I can store a lot. And even if I don't want to rely purely on battery, I mean, diesel is, as you said,
Starting point is 00:12:15 it's always been the backup. It is so much cheaper than that gas backup. Should I just use diesel plus battery as my backup instead of buying these giant, super expensive nat gas things because I don't need a primary any. So first of all, with DCB on cost, I actually think it's pretty similar.
Starting point is 00:12:32 Yeah, not gas. Is that right? I thought diesel was significantly cheaper. I mean, if you look at the diesel engines, they're actually very similar to the net gas engines. So, for example, Kat has both a diesel engine and a nat gas engine. They're pretty similar, roughly same pricing. Now, again, as I said earlier, you have basically two types of on-side gas power for these deployments. You either have what is called reciprocating engines, which should basically a giant car engine, three megawatts, four megawatts per unit, and then you have the turbines, iron derivatives,
Starting point is 00:13:01 which are like 50, 15, 30 megawatt per units. You're seeing both today being used. I'm not exactly sure what's going to be their share, but both seem to be getting a lot of traction. Maybe it's slightly more on the turbine side. But anyways, in both cases, actually, both the turbines and the rice engines have a roughly similar cost structure.
Starting point is 00:13:20 It's expensive, to be clear, but it's not like a surge versus diesel. So in terms of cost, it's roughly the same. The difference is really about side selection, like basically you need to be near a pipeline. That's the only difference. You didn't need that before. You could just build a tank on sites for diesel,
Starting point is 00:13:38 whereas now a site selection criteria is having that gas access. Perfect. And I actually do want to come back to site selection theory. Let me back up a second. Cappex. You know, you guys published this great piece on, it looked at the five hypers basically, right? If I remember correctly, Oracle, Amazon, Google, Microsoft,
Starting point is 00:13:57 and I think you guys had core we've thrown in there. And you said, hey, we are forecasting CAPX. I thought the two interesting pieces were, A, your CAPX estimates for the rest of 2025 and especially 2025 and especially 2006 and 27 were way above street estimates, right? So the party ain't stopping for CAPX. And then the other interesting, actually I'll just pause there. What are you seeing that suggests you that the CAPX estimates for the street, which, by the way,
Starting point is 00:14:23 it's pretty bullish on CAPEX, and I think CAPX estimates have been going up a lot as the year has gone on? What are you seeing that suggesting that the street is, still not just low, but way too low, in your opinion. Yeah, I just want to say, first of all, that the report you're mentioning was sent on core research since only for institutional clients. It wasn't on the broad newsletter website. So sorry, guys, it's not accessible for free.
Starting point is 00:14:45 We can still talk about it, though. We can still talk about it. For sure, of course. We're going to do it. Anyways, why is CAPEX about street? Well, first of all, I ask you, like, you look at Selsa KappaX estimates. And they're not really modeling growth versus 2025, so 26, 27. It has some growth, like maybe high single digit, maybe low double digit, but it's not a lot, right?
Starting point is 00:15:08 And so I guess we go back to something we discussed last time, which is how do cycles typically play out? Is it generally slow steady trend or is it actually strong steady up? And then I think, again, 50 years of history back me up in saying cycles tend to be like very strong towards the upside. Anyway, this is just overall, I guess, theory. Now, what we're seeing more on the ground, using our data certain model, for example, we're seeing that construction starts for self-built data terms are surging. We're seeing that the leasing rates, pre-leasing, I should say, of hypersteers are extremely high today, which again, when you pre-lease, it means that the data is going to be a crucial next year.
Starting point is 00:15:45 So that's indication on next year's Cappex. So most of the forward-looking data center indicator points to very sharp growth in the high double digits. which trots pretty well with the numbers that you saw, which is like high double-ditch's growth for hypers and way ahead of truth. So it's just like that. Those signals clearly don't show CAPEX being stabilized at high levels. They show CAPEX going up, up, up, up. And I mean, this is up, up, up, right?
Starting point is 00:16:15 Like from 23 to 24, CAPX was up 50%. From 24 to 25, another 50%. You guys have, I think, only 35%, to 40% growth in 26 and like 25 to 30% growth in 27. You know, at that point, the, these companies are eating the world, right? Like just those five companies, I think you guys are saying in 2006, over half a trillion dollars of KAPX. So I, unless you have anything to add there, I have some questions on that.
Starting point is 00:16:45 But if you want to add anything, I can just pause there. Yeah, I mean, again, like all the signals go up. So we're just seeing these companies, like, committed to invest. And if you think about, like, I mean, we can talk about the drivers. Maybe we can do that after. There's a couple of ways to frame what's driving the market. I would just say, yeah, like, we don't see anything today that would suggest they're slowing down. And also, I do want to emphasize that Oriport, who has actually sent before earnings.
Starting point is 00:17:13 And the early indications that you saw, the best one was from meta, where they basically said, we're probably going to add roughly the same dollar amounts on CAPEX, which suggests. has plus $40 billion year on year. So they're spending, what, $65, $70 billion in 2025? They're basically saying we're going to be close to $100 billion by $236.26 soft guidance, right? So that's the first indication from management that we've had. It clearly goes towards what we have, which is actually in terms of dollars amounts,
Starting point is 00:17:41 the growth here on year is going to be roughly similar. So you've got accelerating spent. And I mean, these companies are spending at levels that in terms of as a percent of the economy, I don't think we've seen since kind of the, railroad buildouts in the late 1800s. I could be wrong. It's not like I've got my fingers on all that kind of, but I mean, it's going to be over 1% of the economy really, really quickly, right? So just high level, is there, obviously it's not accelerating because we're going from 50 to 35% growth, but I think no one would fault me if I said, hey, it's not exactly
Starting point is 00:18:15 diminishing when you're talking numbers this big. Are there any worries about signs of diminishing ROI yet? Yes and no. If you take a simple view of the ban sheets of these companies, that you're seeing definitely the sort of revenue to assets are going down for folks like Amazon, for example. And if you compare revenue to assets for these companies versus a core weave, for example, you're seeing that GPU-only business model clearly has a lower asset turnover,
Starting point is 00:18:49 which I guess is a way to say returns are going down. But on the other hand, that's not fair, though, right? Like, you are fine. You're going from Microsoft where they were selling CAPEX free software licenses, right, to this heavy business. Like, yes, it's going to be lower R.OIC, whatever it is, then the software licensing business.
Starting point is 00:19:09 But it doesn't mean the returns on capital aren't incredible. The returns of capital, from the estimates we've done, they're decent. They're not as good as, typical Azure business, for example, which was very high margin, but they're pretty good regardless. And one thing, it's one of the big questions, what's going to be the life of these chips, the useful life of these chips?
Starting point is 00:19:35 Actually, my next question. Do you mind if I frame it because I did a lot of work? So look, if you talk to bears, they will tell you, Jim Chanos, and I hate to pick on him, but he's got two prominent tweets in his last like five days on it. He says, hey, Microsoft's, I think the one that really jumped out to me was he had a tweet that said, hey, meta is depreciating their entire useful life at 11 to 12 years. And a lot of that is GPUs, which they're depreciating, I believe, at three years. And there's lots of debates on shouldn't you be depreciating GPUs faster. And I think he kind of missed the point.
Starting point is 00:20:07 But he's basically saying, hey, if meta is building all these things and they're depreciating him at his best cast was 11's 12 years. He thinks it might be 20 years if you do some, are these guys are these guys, I mean, it's basically saying not accounting fraud, but are they overstating the returns on investment because you should depreciate these at five instead of 10, so you're actually way overstating the investment. And I'd love to get your thoughts on the
Starting point is 00:20:29 depreciating useful life that you kind of so teed me up to ask you. Yeah, so just top of my head, the accounting depreciation that we have today is about six years. Meta is 5.5. I remember correctly, COVID or Oracle are at 6, four servers. Anyways, so the bigger risk is
Starting point is 00:20:45 what if from six years you're going to four years. There's a couple of ways to think about this. First of all, we can look at empirical evidence. The best empirical evidence is the HPC world, so high-performance computing. Some of that is pretty public, right? If you think about the top 500 supercomputers,
Starting point is 00:21:03 some of those have been running for over six years. Typically, what you hear from the HPC community is five to six years' useful life. So I think when you use empirical data, what has happened historically, it's tough to make a case that this is going to be, that six years or five to five, five, five years, whatever, is not an accurate measure, right?
Starting point is 00:21:22 That's the first thing I would say is, so it kind of makes sense based on the signals we've seen historically. Now, you could also argue that we're running these systems at max power these days. We're just doing more crazy stuff, I guess, with these GPUs. Obviously, we're trying to sweat every single watt
Starting point is 00:21:41 out of these GPUs, like the whole point is trying to utilize them as much as possible. As such, you could imagine maybe they're going to be, they're going to be, they're going to be dying faster. It's honestly, it's anyone's guess at this point. Like, I just, I just cannot know what's going to happen in the future. Again, just looking at historicals, five to six years seems okay. But then I do think it's a risk. And if it's five instead of six years or four years instead of six years, then I think what's going to happen, we had a post on Oracle showing that. The very large open AI type
Starting point is 00:22:14 contracts, actually, if you look at how they work, the margin that they get out of it is very high. We estimate about 40% EBIT margin, just on a project basis to be clear, so that doesn't include all the structure costs and such. But on a project basis, this is a very high margin project. But now, of course, if actually the useful life is four years, then they're going to book a massive write down in a few years and they're going to have maybe one quarter with like minus $10 billion of loss. You know, just what strikes me as like, you know, just what strikes me you said, hey, useful life is kind of six years. If you go from six to four years, like maybe you overinvested a little bit, you know, but it's not the end of the world.
Starting point is 00:22:53 I think the differentiation is if he's right and they're doing useful lives of 12 years or 20 years, like that is the crux of the bear tweet. And yeah, if you did a useful life of 12 or 20 and then you write down to four, well, then it was a disaster, right? But what you're saying, if it's at six and it's four, I don't think anyone's sitting here saying, hey, you know, this giant $100 billion data center build, it's going to be completely useless within five years. So I'm kind of with you. Now, there's still the questions on return on investment. Let me just ask that the returns, you know, the returns, you said the returns on capital
Starting point is 00:23:29 are good. How do you guys measure the returns on capital here? Because to me, like, you do have the issue. You know, there's the famous thing with bank financials. The scariest thing in a bank is a fast-growing bank, because if it makes a dollar of loans today and then $500 of loans two years from next year and then $50,000 of loan three years from now. Their metrics are going to look great, but that first dollar loan might be terrible and you kind of don't know it until you level out and then you say, oh, my God, like we've
Starting point is 00:23:56 just been fooled because all the new loans aren't paying off. These guys are accelerating cabdecks so fast. How are they measuring their current return on spend, right? Because they're spending $350 billion this year. That's not even coming in online for 12 months. So how do you they know that they're getting this great risk spend, and it's not just, oh, yeah, the first 10 billion was great, but the next 350 billion was just crazy. Yeah. Yeah, I mean, it's basically the co-week business model, right? Like, the bears are looking at that and saying this is the worst business model ever.
Starting point is 00:24:30 And the analysts that we've done, which we actually posted a report that was pretty positive on Co-Wreath. We don't provide financial advice, but we just, looked at the trends and everything to us looked actually much more viable that many people were suggesting. And the way we do, we just build a project by project analysis, right?
Starting point is 00:24:53 And so actually, we built a comprehensive model about a year ago called the AI Cloud TCO model, which we actually built for someone building a GPO Cloud literally. And so we feel like very good about all the estimates we have here, and what we did is we just estimated all of the different CAPEX related costs, all of the different OPEX.
Starting point is 00:25:11 We went extremely deep in the rabbit hole to model like every single line item. And look, what we're saying is there's a few things that impact your returns. Of course, you want to have low data center cost. Of course, cost of capital matters a lot as well because many people in the neoclod industry have a very high cost of capital. Like a year ago, it wasn't unusual to see like 50 to 20% cost of debt, cost of equity probably also in the 20% plus. I think it's going to get down because some of these companies are getting more and more sure, especially with people are much more comfortable lending to call me that much better rates.
Starting point is 00:25:47 But yeah, anyway, so there are a lot of these assumptions that are baked into the business model. And if you actually are able to optimize on your OPEX, on your cost of capital, and you can also optimize on your CAPEX to some extent, which is something with like an Oracle is doing with networking. They have a really good networking configuration to server dash clusters, enabling them to have a higher lower CAPEX and others. That can actually, all of these optimizations taken together can take a business that for random NeoCloud, it may be, I'm making it up 5 to 10% ARR to 25% AR business for.
Starting point is 00:26:22 Anyway, the point being, to answer your question, I think they're thinking about this on a project basis, just like we're doing. At least, that's my understanding. It might be wrong. But I think the best way to think about this is, because there's so much upfront CAPEX, you just want to know what your likely returns are. And that's also why you see a lot of these big hyperscalers signing much longer contracts than the average market rates. I think from CoreWeave's disclosures, and from leaks that happened via The Information and Reuters and whatnot on OpenAI, it's pretty clear that they tend to sign four to five-year contracts. And when you sign a five-year contract, basically you have a guaranteed IRR, unless something goes wrong, of course, or OpenAI fails and whatnot.
Starting point is 00:27:03 Just on Corrieve, you said, hey, you published your report. that was positive, you know, I wish I, I really looked at that IPO for a while. I was like, man, this IPO, you know, you get an IPO that's just really shit on, launches low flow. Like, we've seen this a few times with arm and, especially with arm. And I really looked at it and then the stock was up three X a month later. I was like, gosh, damn it, Andrew, you, you sweat. But I do want to ask you.
Starting point is 00:27:28 And we're not giving financial advice. I'm not even talking stock price here. But you heard lots of bears, especially at the IPO time. And even now, you know, a lot of people will say, hey, the stock price is completely inflated by a thin float. There's lots of, you know, pod monkeys trading around with borrow rates and everything. And just on the business, you know, it jumps out to me that this business was really spun up in large part because Microsoft was so desperate for capacity that they entered a huge deal
Starting point is 00:27:52 with them, Microsoft, for OpenAI. And Microsoft has said, hey, we're not doing that deal again. We're trying to do everything on our own, right? Look at this company. Their capex spend, even at their level, is a pittance compared to their larger peers. Their larger peers obviously have other business models where they can subsidize. They can go build a new data center knowing, hey, most of it's going to be taken by our core internal processes, and then we can fill the books with third parties if necessary.
Starting point is 00:28:19 CoreWeave doesn't have any of that. So I'd just love to ask, again, not the stock, but just the CoreWeave business. Why do you think it makes sense? Why are you still kind of positive on the business versus these giants that are playing in a similar field with other advantages? Yeah. So basically, if you think about the market for very large contracts, OpenAI, hyperscalers, Anthropic, these guys, it's basically a commodity, because what these clouds are providing is bare metal infrastructure. So what you have to do is build a data center, put some machines in there, connect the machines through networking, but there's no software layer on top of it. There's no, like, technical moat, technical differentiation, or there is to some extent, but it's really much lower compared to what we were used to seeing
Starting point is 00:29:06 from AWS and Azure and others. So barriers to entry are much lower. And if you think about it, OK, it's a commodity. How can you win? There are two ways to win. One, long term, is having the best cost structure. But actually, that's not what matters today, because what matters today is speed.
Starting point is 00:29:24 That's really the thing that matters, because that's what these big end users want. That's what OpenAI wants, what Anthropic wants. They want the clusters as fast as possible. So basically, you've got to optimize everything for speed. And this is where you look at CoreWeave's strategy, and they actually did some stuff that enabled them to beat everyone else on speed. I mean, they contracted over like two gigawatts of power in roughly two years. They had to take on, of course, material financial risk and such to do that. But for example, they've been
Starting point is 00:29:54 working with those crypto miners, which no one was considering back then. CoreWeave was among the first who signed with a miner. And these guys, they have the power right there. They have the substation, everything is in there. So you just need to trust that the miner has good enough contractors to actually build a data center, but the time to market is unrivaled, right? So if you think about the challenges
Starting point is 00:30:13 to build a greenfield data center, well, CoreWeave said, no, I just want to go brownfield. And they have a couple of others, like Chirisa and TECfusions, a couple of other partners that build brownfield data centers
Starting point is 00:30:25 at an extreme speed. And in terms of costs, honestly, it's probably not the best cost structure. And I think they're trying to optimize that increasingly as of today, because now they've scaled and they have gained sort of that trust from many partners. That's why they acquired Core Scientific, because that way they're more vertical. They can own the infrastructure and such. But they really scaled at an impressive pace in 2023 and 2024.
Starting point is 00:30:53 I want to add something. If you think about the time to build a data center, you ask most people that have been in the industry for a while, they're going to tell you those are two, three, four-year projects, right? That's the time it takes to build a data center. And if you look at the listed companies, think Digital Realty and others, that's what you see in their statements, is that when we start building, it becomes real and generates revenue, whatever,
Starting point is 00:31:16 two years after, 18 months, sometimes more, sometimes less. With CoreWeave, if you look at their deal with Core Scientific, things are moving faster, right? They're doing that stuff in less than a year, in some cases. And so that speed, again, speed is really what enabled them to gain those contracts. And so really, I think they took some pretty innovative ways to think about infrastructure that are much more optimized for AI, whereas everyone else was still in the cloud mindset. And look, I'll give you another example, which is what Oracle did in Abilene. Can we pause on Oracle?
Starting point is 00:31:48 Because I actually do want to come back to Oracle. Do you mind if I ask one more on CoreWeave? You know, I heard a lot of interesting things there. But if I was a bear, I would say, hey, what Jeremie just described was they took on risks that no one else was willing to take, and they really emphasized speed. And in an up cycle, that is everything you want, right? But one thing Jeremie said is there is always a down cycle. Like, back in 2022, when people were just first starting to talk about this stuff, everyone
Starting point is 00:32:17 said, hey, even Nvidia, there's a down cycle. And all the Nvidia long-timers, they might have missed the big move because they were like, things are starting to look good, that's when you sell, because there's a down cycle coming. And they missed it. CoreWeave is basically optimized for an up cycle. Like, aren't they going to get crushed the moment a down cycle comes? And then all of a sudden, they're committed: all they've done is accelerate and take as much as they can, and they're delivering it as fast as possible. And then a down cycle, even if it's a blip
Starting point is 00:32:45 for six months, comes, and they're just completely stuffed. Am I crazy to think that? You're not crazy. And actually, I think that applies also to Oracle, which we can talk about a bit. But yes, 100%. It's a lot of risk. The simplest way to frame the risk is your longest GPU contract is going to be five years. Your data center deal is going to be 15 years. So if you cannot replace those GPUs and find a new contract,
Starting point is 00:33:11 you're stuck with 10 years of paying rent to a data center operator without any revenue. So that's the simplest way to frame the risk, which I agree 100% exists. So you can think of it as a bet on the underlying companies, right? All of these companies are likely to need capacity in the future. I think that's the way the bet is framed, right? And so if you think about it, I'm exposed to OpenAI, whether I'm playing CoreWeave or Oracle. The way you think about it is OpenAI is not going to fail. OpenAI is going to need GPUs forever, because they have a business model that structurally requires GPUs on the inference side. And also they always have big training requirements.
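A back-of-the-envelope sketch of the duration mismatch being described: a five-year GPU contract funding a fifteen-year data center lease. The revenue and rent figures below are illustrative assumptions, not actual CoreWeave or Oracle terms.

```python
# Sketch of the duration-mismatch risk: a 5-year GPU contract inside
# a 15-year data center lease. All numbers are illustrative assumptions.

def cumulative_cash(gpu_revenue, lease_cost, contract_years, lease_years, renewed):
    """Sum yearly cash: GPU revenue while under contract (or if renewed),
    minus lease rent owed every year of the full lease term."""
    total = 0.0
    for year in range(1, lease_years + 1):
        revenue = gpu_revenue if (year <= contract_years or renewed) else 0.0
        total += revenue - lease_cost
    return total

# If the customer renews, 15 years of positive spread.
print(cumulative_cash(2.0, 1.0, 5, 15, renewed=True))   # 15.0
# If not, 10 years of rent with no revenue wipes out the early profit.
print(cumulative_cash(2.0, 1.0, 5, 15, renewed=False))  # -5.0
```

The whole position only works if the contract gets renewed (or replaced), which is why it amounts to a bet on the long-term demand of the underlying AI companies.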
Starting point is 00:33:52 So that's the way you can frame it: I'm betting on the AI industry to be a long-term consumer of GPUs, a long-term consumer of power. And as I secure power, as I build data centers, I can renew those contracts over time. I actually have some more questions on power, but let's go to Oracle. You wrote a piece on Oracle, and I'm sure you didn't see, but for my last book club, I read Larry Ellison's biography from 2002. And as I was reading it, in the back of my mind... I mean, Oracle stock has been on a heck of a run this year, right? And as I was reading it, in the back of my mind, I was kind of
Starting point is 00:34:26 thinking, hey, Larry Ellison managed somehow, in his late 70s, early 80s, I can't remember if he's in his 70s or 80s, somehow he managed to catch the AI wave. Now, he wasn't like crazy early, but in 2023-ish, he saw where it was going. He made a big bet, and boom, this man who's called so many trends calls this one again, and Oracle's really benefiting right now. So I just want to throw that background out there and ask, like, what is Oracle's AI strategy, and why is it working out for them? It's CoreWeave. They're taking a CoreWeave path. Oracle's strategy is, I'm going to use my balance sheet to get the largest contracts. They're using their investment-grade signature to get, call it, normal contracts. Let me explain. Basically, CoreWeave, one of their issues was that because they're not investment grade and they're not a reliable hyperscaler and such,
Starting point is 00:35:20 they struggle to get capacity from reliable data center operators. Sorry, when I say reliable, I mean experienced guys, again, the Digital Realtys, all of these people that have been building forever, that have a lot of land, that can deliver power. So definitely these guys, they love working with hyperscalers. So we believe Oracle fits in the hyperscaler category, so Oracle can easily get capacity from folks like Digital Realty and others.
Starting point is 00:35:43 So that's one advantage that they have, which all hyperscalers share, to be clear. But what Oracle did at the Abilene site was basically going in like CoreWeave: I'm going to make a massive bet on this company, which by then, like, if you look on paper, who is Crusoe? Crusoe is a crypto miner that never built a data center with the uptime requirements of traditional data centers. They built mines, and I do think they had amazing engineering teams and such. They're a great company, but back then, if you look at it on paper, betting on
Starting point is 00:36:12 Crusoe was the same as betting on other crypto miners today, right? And Oracle took that bet. They signed this contract, and if you think about it, it's the same situation. They're getting a five-year deal with OpenAI, but the deal they signed with Crusoe, I think the leaked numbers were like 15 years, right? A 15-year deal, it's probably $15 to $20 billion over those 15 years. So if the contract fails after five years and they cannot renew it, they're stuck with like 10 years of paying a billion dollars a year or more to Crusoe. So it's the same thing, right? It's taking massive, massive financial risk,
Starting point is 00:36:48 betting on the success of these companies. And if something were to happen to OpenAI or to the overall AI, gen AI industry growth, you could frame a future world that Oracle wouldn't be very happy with. I guess, I mean, I guess it's easier because even in 2023, this is a $300 to $400 billion EV company. So they are making a big bet, but CoreWeave was existential: this works or the company zeroes.
Starting point is 00:37:13 And it's kind of interesting. Oracle made that bet, and it was not purely existential, right? Like, Oracle would exist if this bet had gone down in flames. But at the same time, you know, CoreWeave, as many people pointed out, they took the optionality bet, right? This works and we're 100x. This doesn't work and we're a zero. Oracle, I mean, they had both sides of that bet, right? If it went up, they would benefit, as they are benefiting.
Starting point is 00:37:35 You know, the stock is up, what, probably 50% this year. But if it didn't work, they were on the hook for all those payments. And then they're stuck, and they're saying, hey, okay, we're the largest data center people in the world, we've got all this excess capacity, and we're going to have to lease it for a song. Anything else? Yeah, I was going to say,
Starting point is 00:37:53 I guess you could frame a difference with Oracle, which is that they already had an existing cloud business. There's always the hope that by signing a gigantic deal with OpenAI, you also incentivize OpenAI to use your CPU-based infrastructure. I'm sure the spending that OpenAI does on Azure for traditional cloud services is fairly high. Again, to manage like 700 million weekly users, it's not just about models.
Starting point is 00:38:18 It's also about, like, data storage and traditional sort of CPU front-end processing and whatnot. So that's also, I guess, the hope that Oracle has: by signing those GPU deals, they can also grow their Oracle Cloud Infrastructure business, which was like 10x or more smaller than rival hyperscalers. So I think Oracle, if you look at the history, they made this very bold bet on cloud in 2016 or 2017. And it turned out okay, but not that good, right? They were still lagging way behind the other hyperscalers, growing decently, but not like triple digits at a pace that was enough to catch up with the others. So it makes sense also from a strategic perspective to just increase the size of the cloud
Starting point is 00:39:00 business and hopefully upsell some services. And what we've said in the report is that we don't see a lot of evidence that this is happening today. And we don't think it fundamentally has to happen, because it's very easy to sort of connect GPU infrastructure with another cloud provider. So we're yet to see meaningful benefits of co-locating, or having on the same cloud, GPUs and CPUs. But it's always a possibility, right? So they have this option.
Starting point is 00:39:28 I don't know if it's going to happen or not. I don't think so, but it could happen. And if it does, of course, very positive for their prospects. I think what's cool about Oracle, just like, high level, as someone who's kind of not a specialist in this: Larry Ellison, again, he's in his, let's just call it 70s, I can't remember his exact age, but, you know, Oracle is late to the cloud computing party. I mean, you just said they started in 2016, 2017.
Starting point is 00:39:51 I mean, you know, AWS starts becoming the driver of Amazon in, what, 2012? Microsoft with Azure. Like, they are late and they missed the boat. And they're basically, you can tell me if I'm wrong, an also-ran there. And Larry Ellison in his 70s says, hey, we missed the boat in cloud computing. Hyperscaling starts taking off in 2023. And look, there were plenty of people at the time who were saying, hey, this is a bubble, too much capacity is already getting built, you know, where are the returns, all this sort of stuff.
Starting point is 00:40:18 And Ellison says, and I say Ellison, Oracle, but I think Oracle is Ellison, they instantly go all in. And when I read that Ellison biography, it fits totally with his personality. But, you know, he goes all in and he hits it again in his 70s. It's just like crazy to have that mental flexibility and that forecast of the future. I want to ask you a few things about power, but any last thing on Oracle or anything? No, I think the way you framed it is great. Like, yeah, definitely that was a big bold bet.
Starting point is 00:40:44 Paying off very nicely now. Power. So it still strikes me that, and we had this discussion in our first conversation, but it still strikes me that for the most part, all these data centers are going up, and I don't want to say only domestically, but the U.S. is the hot spot for data centers. And you can tell me if I'm wrong. There are big data centers in Asia. Stargate 2 is in Saudi Arabia, I think, I can't remember.
Starting point is 00:41:09 But for the most part, they're going up in the U.S., and they're going up, you know, not in New York City, but pretty close to, you know, urban areas. Like, obviously, connection, latency, all that sort of stuff matters. Tell me if I was wrong on any of that, but when do we start seeing data centers getting built in crazy places, right? Like, I think there was the Elon Musk tweet about putting a data center up in space. I doubt we're getting one in space, but you can see the appeal, right?
Starting point is 00:41:36 Like, you don't have to worry about power as much because you don't have as many heating worries if you're in space. But I'll take an easier one: Alaska, right? We don't have to go build in Antarctica or somewhere like that, but there's infrastructure in Alaska. It's really cold up there. There's a lot of access. We talked about natural gas. When do we start seeing a big data center get built out in Alaska, where land is cheaper,
Starting point is 00:41:59 labor is cheaper? I'm asking because it's clearly a silly question, but why is it a silly question? No, I don't think it's actually silly. I think it could happen. Generally, the problem that you see in more remote locations is labor. Can you actually get thousands of people on site? Are the logistics good enough to be able to handle a project of that scale? That can be one of the big issues.
Starting point is 00:42:24 But overall, I think, actually, I think what you said in the beginning is inaccurate, because I do think we're already seeing very large data centers pretty far away from population centers. West Texas, I think, is already growing. Abilene, I mean, you could say Abilene is a city, it's not too far away from Dallas. But I think we're going to see a few massive data centers in West Texas in the next two, three years, in more remote locations. I mean, you could also say, like, Ellendale, North Dakota, sorry, Applied Digital guys, but that's really far away from everything.
Starting point is 00:42:56 Ellendale is the Applied Digital site, right? Yeah, correct. The logistics to get there are pretty insane. It's pretty tough to get to. But it's still possible. So I do think we're already seeing a move away from those metros; a lot of these data centers are being built in areas that are just more remote. But even there, like, it's still in the domestic US, right?
Starting point is 00:43:21 And yeah, getting out to Ellendale, I haven't looked at the map, but I'm sure it involves a pretty long car ride and a flight into a very small airport. But, you know, Alaska, or, I don't know, I said Alaska because my first thought was, hey, super up north Canada, right? But super up north Canada, there's nothing built out there, right? So you would have to build out the gas pipeline, and you'd have to build out a fiber connection. Like, Alaska has all that, right? Now, I'm not saying it has it at the scale you need for a $500 billion Stargate or something, but it's got a lot of that.
Starting point is 00:43:54 And if it's got a lot of that, you can lay some more fiber and stuff. I'm just surprised. Like, with Alaska, I know there were a few people talking about the Nordic regions, and I know there's some smaller ones out there. But again, I'm an outsider, but I'm surprised that you haven't seen something huge getting built somewhere a little more out of the way, where you start saying, hey, you know, we're spending $6 billion on this, let's pay the data scientists an extra $5,000 to fly out there when we need them or something, you know?
Starting point is 00:44:24 Yeah. I mean, you might have seen an announcement, I think a few weeks ago, from Crusoe saying they're going to build a massive data center in Wyoming. It's still, I guess, close to Cheyenne, but still pretty far away, I would say. But yeah, overall, I definitely
Starting point is 00:44:45 expect to see data centers going to more remote locations. It's already happening today. I think it's going to happen more. I haven't looked at Alaska specifically, so I can't really tell you. I was just wondering. Actually, one thing we can touch on is, there was a question on Twitter I saw, someone asking about the Permian Basin. And so, I would say, it's just a thought, and again I haven't looked at Alaska, but the issue with the Permian Basin is that they don't have good infrastructure. Yes.
Starting point is 00:45:09 Ideally, you want to build on grid infrastructure, because it's lower electricity costs, it's higher uptime. Grid is actually the best power you can have on site. Ideally, you want to have a grid connection. Fully islanded data centers are pretty expensive. It works out, again, as we discussed before, for a fast time to market, but then this equipment serves as backup, and you want to have primary power coming from the grid. The Permian doesn't have large electricity transmission infrastructure.
Starting point is 00:45:32 There's a project going on that's going to be like five years out or something. So maybe that's one of the reasons why Alaska isn't considered today, but I may be wrong. Let me ask a general question on power, right? It strikes me that, like, power is the limiting factor. It is, obviously, the component behind all these things, but it's been a race for power, and you've seen this in the stocks of all the power players, which have taken off. And I just want to ask, I think we touched on this in the first podcast, but I want to follow up again.
Starting point is 00:45:58 Like, if I look at the history of things, power efficiency tends to improve over time, particularly when it's a limiting factor. And I would point you to, you know, the biggest improvements in car fuel efficiency come after big oil spikes, right? Airline fuel efficiency: I believe jets today are, depending on your source, 50 to 75 percent more fuel efficient than they were 30 or 40 years ago. Homes today, they consume a lot more power because we use a heck of a lot more power, but they are much more power efficient. So what I'd ask is, right now we're in the rush, right? But could you see a world where, hey, there's still a rush for GPU capacity,
Starting point is 00:46:41 but two years from now, they say, all right, demand is still growing, but it's not growing exponentially, let's optimize on power. And then all of a sudden you have a world where a lot of these data centers, not that they're stranded assets, but, you know, you were building one-gig data centers, two-gig data centers, and you say, hey, now that we've optimized on power, it turns out we need 50% less power. And all these data centers are sitting here saying, oh my gosh, there's just no need for us, because even though demand's growing, we just took our cost down, because it's the bottleneck, and physical bottlenecks tend to get solved over time.
Starting point is 00:47:12 Does that question make sense? Yeah, of course, it makes a ton of sense. I mean, it's always a big fear that we're doing a massive oversupply and that the whole demand is going to go away, which is completely possible, to be clear. It's a scenario. I mean, we can look at the cloud computing world, I guess, maybe as a proxy, where you've seen electricity costs as a share of revenue for these huge cloud providers go down over time.
Starting point is 00:47:36 There seems to be a moment where it doesn't really go down that much at some point. One thing you can do is just look at Azure revenue growth and look at what they publish in their ESG reports, for example. And you see that it actually tracks pretty well. They're growing revenue a lot. They're growing electricity consumption a lot. Basically, my point being, even if the unit of compute itself is improving,
Starting point is 00:48:07 so the CPUs, and in this case especially the GPUs, are getting more and more powerful, like, this is kind of the Jevons effect, I guess, which is you're just going to sell more with those GPUs. But generally speaking, okay, let me put it this way. Generally, when you look at the GPUs over time, if you look at Nvidia's roadmap, of course, pricing goes up for their GPUs. But power costs also go up for these GPUs. And actually, you see power and price tracking,
Starting point is 00:48:36 correlating pretty well. And that's because, if you look at Blackwell, the reason Blackwell is more powerful, there's a couple of things, but one obvious one is that it's actually two compute dies. It's not one compute die. And so when you have two compute dies, you don't double the power, but it's actually close to it. Then you have system-level improvements and whatnot. But the point is, at the hardware level,
Starting point is 00:48:49 you actually have a pretty good correlation between price and power of the system. Can I just slightly push back on that? I mean, it does strike me that for the past, let's say, 50 years, since the dawn of the computer industry, power has never really been a constraint, right? And by power, I'm using electricity. I'm basically saying electricity costs, like the cost of power.
Starting point is 00:49:09 I'm sure people would have loved to get more power into the GPUs and everything. But you could build with the assumption that basically you were kind of treating your electricity and power costs as free. And obviously they were not free, but they were such a low fraction. Now power is actually the bottleneck, right? Like, I think we would have a lot more data centers if people could just get access to the power and electricity. We would have a lot more capacity right now. As we go into a world where power has been the bottleneck since mid-2023, for two years, do people start, like, does Nvidia's next product, what they release in 2026, does it optimize for this power-constrained world? So GPU prices still go up, but maybe a little bit less, and power usage goes way down.
Starting point is 00:49:52 Does that make sense? Is there any roadmap there? I mean, if you look at Nvidia's roadmap, what they're doing is, as I said, price and power track pretty well, but actually the throughput of the chip goes up a lot. So in terms of throughput per power, energy efficiency goes up a ton, to be clear. Like, if you think about the system-level improvements, it's all about having more GPUs working together and whatnot to deliver more output at roughly similar power. So yes, Nvidia is very clearly trying to push power efficiency. Now, the reason they're doing it is mostly because you can use those added tokens to generate more intelligence.
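One way to see the point about price and power tracking while throughput runs ahead is with relative multiples. The generation-to-generation numbers below are illustrative assumptions, not Nvidia specs.

```python
# Illustrative relative multiples across two hypothetical GPU generations:
# price and power roughly track each other, throughput grows much faster,
# so perf/W (energy efficiency) improves even as absolute power rises.

gens = {
    # name: (relative_price, relative_power, relative_throughput)
    "gen N":     (1.0, 1.0, 1.0),
    "gen N + 1": (2.0, 1.8, 4.0),  # two-die part: ~2x price, ~2x power, ~4x throughput
}

perf_per_watt = {name: thru / power for name, (price, power, thru) in gens.items()}
perf_per_dollar = {name: thru / price for name, (price, power, thru) in gens.items()}

for name in gens:
    print(f"{name}: perf/W {perf_per_watt[name]:.2f}, perf/$ {perf_per_dollar[name]:.2f}")
```

Under these made-up multiples, the newer part draws nearly twice the power yet still more than doubles energy efficiency, which is the shape of the roadmap argument.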
Starting point is 00:50:32 And so that's where we go back to the debate of, like, the Jevons paradox, which is, what are you going to do with your extra compute power? Are you going to use it to save on costs or to increase intelligence? And the path that the leading AI labs are choosing is to increase intelligence, because I think what they're saying today is that the best way to monetize LLMs is to have the single best model, very simply. Anthropic has the single best model at code, and you're seeing everyone use Anthropic models. And then if you sort of go down the stack
Starting point is 00:51:02 and start looking at the market for cheaper models that have very good price per intelligence, actually building those models is much cheaper, because you can use techniques like distillation, which is you use one model to generate, whatever, synthetic data and such, to distill intelligence into a smaller model. All the AI labs are doing that.
Starting point is 00:51:24 And so this is where you get into much more competition, right? Like, if you think about the market for these mid-level models, there's a lot of competition from the Chinese, the open source firms, the big labs, but really the moneymaker is the frontier model. And so that's why we think, because of this specific structure, there is an incentive to always use the extra computing power that you get from Nvidia to increase the intelligence of your model. And so you can frame it this way, like, in terms of physical
Starting point is 00:51:54 constraints: yes, price per GPU goes up, and so power goes up as well, that's pretty linear. But then the compute power you get from Nvidia goes up much higher, and the intelligence you get out of the models also goes way, way up, more of an exponential curve, I guess. Does that make sense? Any pushback? It does make sense. It does make total sense. There are some follow-up questions I'd want to ask there, but I am aware of time, and I want to ask one last, completely unrelated subject. You guys had a report on robotics that I thought was very interesting. So I'd love to just pause here,
Starting point is 00:52:27 and you can give overall thoughts on robotics, and then I had some specific questions. Yeah, yeah. I mean, basically the idea behind the robotics piece was, we just wanted to provide a framework to help people understand robotics markets. And so we added this classification of levels of autonomy, which we've seen in automotive with autonomous driving.
Starting point is 00:52:46 I loved it because you instantly knew, right? You had level zero to, I think yours went up to level four, versus level five for driving. But as soon as I saw it, I was like, oh, I can equate it to vehicles. I really liked that framing. Yeah, and what we found out in our research, and maybe some people are going to nitpick on some things, because we cannot always make a level that's going to make everyone happy.
Starting point is 00:53:08 Like, with some stuff, there's always some nuance at play. But overall, what we found is that we can frame it in a way that's easy to understand. And so if you think about our levels: level zero is like the rigid robotic arm, level one is like the slightly more flexible pick-and-place arm, level two adds mobility, so it's like the robot dog, level three is sort of a weak humanoid, and level four is like a strong humanoid. And typically, again, that's an oversimplification, and I hope the robotics guys aren't going to kill me when they listen to that,
Starting point is 00:53:40 but it's kind of a simple way to think about it: there are these different capabilities that add up over time. And yeah, there's a sort of simple way to frame it, I guess. That's the point: is it easy to understand for people? And basically what we wanted to do with that piece is help people understand where we are. And so if we think about level four, which is those strong humanoids, we're pretty far away still. We're still in the research phase.
Starting point is 00:54:05 But we're actually already seeing level two. People don't really talk about that, but all of these sort of mobile quadrupeds, like the robot dogs, you're seeing that's actually in early production phases. And so that might actually be a trend that could be interesting in the next one to two years. The overall question I had on robotics was, how quickly do you think we accelerate in robotics, right? Because, I mean, AI, you know, maybe people are starting to get disappointed with Chat
Starting point is 00:54:36 GPT5 versus chat GPT4, but if you were sitting here four years ago and talking about where we are with AI, I think most people would have their mind blown, you know, but chat GPT wasn't even out at that point. I think most of it, how quickly does robotics start accelerating? because it does strike me. Like, Tesla, controversial, the robotics are out of the field, but you're starting to see more of the robotics to get out into the field. I saw a, there was like a fight league, a robotics fight league.
Starting point is 00:55:00 Like, you're starting to see more. And what you started to see more, at 10-6thous, especially with the AI did. So if you and I were talking here in like four years, do you think we're seeing level three out there? If we're talking in 15 years, like, how quickly do you think this is accelerating? Yeah, I'd say, truth, we don't have any evidence that it's going to happen as fast as we've seen. with LLMs and one of the big reasons are just harder than, business are just harder than anything else, yeah.
Starting point is 00:55:27 Correct, right. And in terms of data, like there's always the data issue, which is, yeah, text, we have the internet data, like trillions of tokens. It's pretty hard to generate high-quality data for robotics. So that's one of the big bottlenecks in increasing capabilities. So this is being currently solved, and I do think progress is starting to accelerate.
Starting point is 00:55:47 But I just struggled to see world where it gets as fast, as we've seen with LLMs. So if you think about it, like LLMs, as the leaders love to say, like today you have some models that are nearly as smart as whatever, like advanced university students. And two years ago, it was like a dumb kid. No fast kids. I like when you say it's nearly as smart as an advanced university student
Starting point is 00:56:13 because I've given chat chiefs here or whatever research projects and I've had it spit out just like brilliant insights, right? It's digested scientific papers and half a second and spit it out in language that, yeah, I like to say like, hey, I'm someone who hasn't taken a science class since high school, please put it out to me in a language I can understand it, and they do that. So you get these brilliant.
Starting point is 00:56:33 And then I also like yesterday, it was going again, how many bees are in blueberry? And it says, I would bet my life there are three bees in blueberry. And you're just like, man, on one hand, it can understand science or advanced university student. On the other hand, it might be as dumb as my kid. It's so funny. Yeah, for sure, but I'm definitely with you.
Starting point is 00:56:51 Like, I'm a huge user of deep research and that kind of tools. So I think it's incredible. In terms of robotics, anyways, to go back there is there are a few additional challenges. So I think it's going to accelerate, but not to the same extent. So I don't know. I'm not the main robotics analyst. That's something else that I don't want to make a crazy prediction. But let's say level four in the next few years seems unlikely to me right now.
Starting point is 00:57:15 It's just, it's funny when you look at this and this gets more into dystopian future, maybe we need to have a sci-fi writer on. But like, you know, if they're as smart as, as smart as university students and you're already hearing lots of, I think it's overblown, but you're already hearing lots of consulting firms like struggle and taking consult jobs. And then you're saying, hey, if level four is here, I mean, how long until it starts taking low wage, like kind of manual labor? And you're like, Jerry, what are we going to be doing in six years, you know?
Starting point is 00:57:44 like we can't use our hands and it's smarter than us? What are we going to be doing? Yeah, I mean, there's this great analogy. I think it's interesting where you think about accountants, like, I don't know, before Excel, before spreadsheets. And like I've talked to people that told me, like, back then, like people were thinking spreadsheets were going to kill the accounting job, right? And actually, you've seen accountants, like, go up a lot.
Starting point is 00:58:06 It's just they're doing new types of work. They don't have to do all the manual calculations and such. They can let the machine do that, and they can use their brain in some other ways. So, yeah, I guess it's just like the framework, I would say. I mean, the one people point to now is law firms, right? And you say, hey, this is going to take the work of a lot of junior analysts. But at the top, like, we have no lack of legal law and legal cases are accelerating. But you do wonder, like, or there's the famous, what is it, the, they tried to ban the sewing machine because it was going to replace all the seamstress jobs in Britain.
Starting point is 00:58:39 It's just, it's a little scary, right? Like, I tend to be pretty optimistic and say, hey, there's going to be jobs. and it's going to create really new interesting ones. But then when you start getting to, hey, you've never had something that can replace human thinking before. And you still needed a human, while it can automate basic tax, you still needed a human to, like, you know, go turn, flip the French fries over and then hand them out to the customer. And if you've got, on one hand, robots take the French fry machine. And on the other hand, computers think better than McKinsey consultant. It's like, what is left for us to do?
Starting point is 00:59:11 It is. Yeah, I don't know. I guess maybe more relationships, like the value of social relationship probably goes up over time because at some point you trust humans more than machines for certain things. So maybe, yeah, maybe that's the future. You've got to be sure to have a little well-connected and have a good network. That's the good thing we're so handsome. That's the one being robots can't see. You and I can hop on it and we can go be fashion models. Robots can't take that for now. Jeremy, this is been great. Look, the first one, was a hit. I think this is going to be hit. Maybe we'll have to have you on like before the end
Starting point is 00:59:44 of the year or something to do like outlooks into 2026 or something. Talk about how high that capex spend is going in 2026. 10 trillion dollars. Wow. I said it was an accelerate. We're good real acceleration. This has been great. I'll include a link to semi-analysis in the show notes and we'll go from there. All right, man. Always a pleasure. Thanks for having me. A quick disclaimer. Nothing on this podcast should be considered an investment advice. Guess or the host may positions in any of the stocks mentioned during this podcast. Please do your own work and consult a financial advisor. Thanks.
