TBPN Live - FULL INTERVIEW: Dylan Patel Says We’re Still Underestimating AI

Episode Date: February 3, 2026

This is our full interview with SemiAnalysis Founder and CEO Dylan Patel, recorded live on TBPN at the Cisco AI Summit in San Francisco. We discuss data centers in space, the limits of today's AI hardware, and how chips, power, and geopolitics will shape the future of AI infra.

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to podcast platforms immediately after. Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.

Transcript
Starting point is 00:00:00 Good to see it. Good to have you. Data Center in Space, which you got. Right now. Where are we going? Coming in hot. Me, you, the International Space Station. Let's break it down.
Starting point is 00:00:11 You know, the space tourism industry is quite a fun one, right? Yeah. Would you go? Would you do the Blue Origin thing where they blast you out past the Kármán line? It's good enough for Katy Perry. It's not good enough for you? What's going on? It's like, you know, like you're in free fall.
Starting point is 00:00:25 You're not actually in space. Oh, it doesn't count. Shots fired. Oh, Kármán line denier. I want to be like going around for a day. I want my bone density to start to atrophy, right? I truly want to feel the negative effects of space. Yeah, yeah, it's not enough just to go up and back.
Starting point is 00:00:38 I think I would do it. It's like 90 seconds, right? Yeah, but it's better than being hanging out on Earth. But like all the cool stuff that astronauts do, right? Like, you know, put water and then like they're bubbling and then you like try and drink the water. So they'll be unplugging the GPU and plugging it back. Oh yeah, yeah, yeah. That's what you pay for your space tourism.
Starting point is 00:00:57 You gotta go unseat a chip. 90 seconds of servicing a SpaceX satellite. One 90-second trip at a time. No, but people were wondering, you know, TPUs and Nvidia going on the Starlink V5 or whatever, something gets up there. It feels like this will be something more like Tesla's custom silicon, a purpose-built AI chip. Like, do you have any insight into, like, what the process, if you wind up figuring out how the heat dissipates, if you wind up figuring out the costs, what might the chip look like?
Starting point is 00:01:29 So I think, I think, you know, everyone freaks out, oh my God, putting stuff in space is expensive. Yeah. But if you look at like starship launch costs and, you know, they keep falling, you're like fine, right? Like I think that's not, you know, by the end of the decade, the cost of space launch will be fine. The heat dissipation, I mean, it's a challenge, but you just put a massive, massive, effectively radiator. And it's fine, right?
Starting point is 00:01:48 By the end of the decade, like, you'll be good. I think the big challenge is that chips are just really unreliable, right? Sure. And so, so how do you deal with, like, a couple things, right? Satellites can only be so large before they like, you start needing a lot of support and structure before they tear themselves apart. So when you look at like the launches, right,
Starting point is 00:02:06 these things are shooting out like tiny satellites and many of them, okay, so you can't have like a big, fully connected cluster of chips. And then like on top of that, right, how do you deal with any random error? On Earth you have techs running around the data center, unplugging stuff, putting in spares, things like that. What do you do in space?
Starting point is 00:02:27 You RMA it to the factory, where they, like, might unsolder it and re-solder it and then, like, test it, and it works and goes back out. Sometimes it is just trashed. But like, you know, that's the challenge to me. Is that, is that, I feel like maybe the pattern we should be looking at is like how often do the Tesla self-driving chips need to get serviced? Because that's like the team that would probably be building or like bridging the gap there. Like the Starlink satellites, sure, they go down, but like the service works.
Starting point is 00:02:54 Like you're just relying on some sort of like, you know, 90% uptime, stuff's coming down. But most people that are in a Waymo, like, the chip keeps working, right? Most people that are in a Tesla self-driving, like they're not, like, you don't hear about Tesla owners being like, I love FSD, but I'm constantly in the shop getting my, my custom silicon chips unseated and reseated, right? Well, I mean, it's also a function of like the complexity of a chip, right? Sure. You know, if a chip is twice as fast. Yeah. And let's say the bit error rate, right, right?
Starting point is 00:03:23 like how often a bit flips, is the same, then it's erring out twice as often. Yeah. But let's say the chip is 10x as big, right? And so when you look at like a Tesla FSD chip, very, very good, very, very efficient, very, still like relatively inexpensive and cheap compared to, you know, a big old GPU or TPU or whatever, right? Those things are extremely large. And, you know, again, like if the error rates are the same, then it fails 10x more. But in fact, the error rates are a bit higher, because they're pushing these things to the absolute limit. Yeah.
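To put that scaling argument in numbers, a minimal sketch, with all figures made up for illustration rather than taken from any real part:

```python
# Illustrative sketch of the reliability point above: if the per-transistor
# upset rate is comparable, a die with 10x the transistors faults roughly
# 10x as often. All numbers here are hypothetical.

def faults_per_hour(transistors, upsets_per_transistor_hour):
    """Expected faults per hour for a single chip."""
    return transistors * upsets_per_transistor_hour

UPSET_RATE = 1e-18  # hypothetical upsets per transistor-hour

fsd_like = faults_per_hour(5e9, UPSET_RATE)    # small automotive-class die
gpu_like = faults_per_hour(5e10, UPSET_RATE)   # 10x-larger datacenter die

assert abs(gpu_like / fsd_like - 10) < 1e-9    # 10x transistors -> ~10x faults
```

And as Patel notes, datacenter parts are also pushed closer to their limits, so the real gap would be worse than this linear scaling suggests.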
Starting point is 00:03:53 Whereas, you know, Tesla does have some level of like, well, first of all, the Tesla car has two chips. Sort of redundancy already built in, right? Maybe you do that on the satellite, but that's more power, more. Yeah, right? So the whole allure, right, of it is, is, you know, effectively power is free, right? And solar panels, you look at the cost curve of solar panels, you look at the cost curve of satellite launches. You're like, this is free, this is great. But power is less than 10% of the cost of the cluster.
Starting point is 00:04:19 Sure. Right? So like it's that 90% you're not saving anything on. Yeah, yeah. And in so far as much as... For potentially a hundred times the hassle. Yes, yes. Yeah.
Starting point is 00:04:31 There's this whole, like, you know, like if you look at Nvidia GPUs, right? When you first turn on the cluster, about 10 to 15% of them fail RMA in the first two weeks. Wow. And then that's fine. Like you have to reseat them, whatever. And like the industry knows how to deal with this, right? And over time, like, Hopper's now at 5%, but Blackwell's still 10 to 15%, right? Actually started out higher than that. Sure.
Starting point is 00:04:54 And when a new generation comes out, it's going to be higher than 10 to 15%. It'll have its curve, gradually decline down. But, you know, who's gonna, are you gonna test it and burn it in on the ground, or are you gonna say 5% of my chips or 10 to 15% of my chips are trashed? Yeah, yeah. Because someone can't go up there and, like, do these things. Or am I saying, oh, I need robots who can do all this stuff in space, and now that's like an additional engineering problem, when sacks of meat are actually very cheap. Yeah. Speaking of Nvidia, we haven't talked since the Groq acquisition. What does that look like in the bull case?
Starting point is 00:05:23 Like if it's a good, if the next version of Groq is a great chip, is it sitting next to the, you know, H200s, H100s in the rack, GB200, like how does it fit into the actual, like what Nvidia deploys? Is it just a separate chip? I think it's, I think it's a big vibe shift from Nvidia, right? Before they were like, all right, I got this big GPU,
Starting point is 00:05:44 everyone's gonna use this GPU, software ecosystem of the GPU is so good. It's one size fits all. Everyone, you know, like, everyone's trying to make all these, like, specific point solutions, but we've got the thing that's good at everything. And then they had a vibe shift, right? They launched this thing called CPX, which is a chip made for pre-fill, you know, prompt processing, creating a KV cache. And also good at, like, video generation and image generation. And that's coming out later this year. In the Groq press release, they were really talking about video generation as well.
Starting point is 00:06:10 So, yeah, you've got like CPX, you've got like the standard GPU, and now you've got the Groq chips, and they all fill a different niche. But really it screams, oh, crap, we don't really know exactly where AI is going, which I don't think anyone does, right? I mean, it's moving so fast, the software, the model architectures, et cetera. So we're just going to, like, engineer solutions that are along multiple points of the Pareto optimal curve. And then, you know, one of them will win, right? And I think it's like sort of like a big vibe shift from Nvidia. Also, they just knew OpenAI was going to do this Cerebras deal, so they freaked out.
Starting point is 00:06:39 Got it. Yeah, well, yeah, getting up to speed on what makes Cerebras important in the ecosystem right now. So, you know, you have people thinking, like, oh, latency matters in terms of where our data center is. It doesn't matter at all. What matters is, you know, as we've moved from, you know, search, which responds immediately, to chat applications, where, let's say, a response takes 10, 20, 30 seconds.
Starting point is 00:07:03 You've got agents, you know, I don't know, my Claude Codes are working in the background for a long time, right? It doesn't matter where the data center is. But what does matter is that these streams of inference take, you know, 30 minutes versus 10 minutes versus five minutes. And for a lot of people, I'm fine to spend 10x the price on something that completes 10x faster. And so Cerebras sort of just makes a ton of sense there.
Starting point is 00:07:24 So OpenAI, you know, they've got these, like, long-horizon... There's, like, Codex 5.2 extra high thinking or whatever. It's terrible. Can you guys teach them how to market? OpenAI, you have to sponsor this podcast. Yeah, yeah. We had to do one yesterday, and I did actually ask him, like, I had the Codex app pulled up on my desktop and I was like, there are six different models, and then there's a
Starting point is 00:07:48 thing, there's another button that I can pick, too. Well, how many different products are called Codex now? There's a lot, right? Now there's an app. Yeah, yeah, yeah. We actually had another guy on just to do branding. Lexicon Branding came on the show yesterday, talking about all the naming architectures. It is complicated. But hopefully you could tell his blood's boiling, because, like, all the AI companies just have the most chaotic naming. Anthropic's got Claude Code, but also you can use Claude Code for other stuff. Yeah.
Starting point is 00:08:16 But yeah, I mean, with Cerebras, it seems like there is a value to it, but are they constrained on the supply side? Like, can they actually scale up to, you know, a Colossus-style data center that could actually speed up Codex not just for like one user, but all the users? So, I mean, Cerebras can speed up multiple users for sure. Yeah, yeah. The question is sort of like where you use it, and that's where they have to like figure out where within Codex, right?
Starting point is 00:08:41 Yeah. Because there are times where Codex is running for like 10 hours. Yep. And sometimes you don't mind, right? Like, screw it, I put up this nice prompt, gone, work on it, refactor my code, do this thing, do this task. Other times I want this iteration feedback loop. So how do you expose it to the user without saying,
Starting point is 00:08:56 hey, actually there's another toggle. So your presentation is 18 times. Well, hopefully a really robust model router, but it feels like that's been a process. Yeah, so the OpenAI deal is like for 750 megawatts. It's not that much capacity on the order of, like, what OpenAI has talked about. By the end of '28, they'll be at like 16 gigawatts.
Starting point is 00:09:14 Sure. Of that. So it's like the absolute cutting edge, the most price insensitive customers in that specific use case of this is the type of prompt that you need to return fast, then you'll get the speed up potentially. Right, right. And they've got to figure out how to do it from a product, exposing it to the user, et cetera.
Starting point is 00:09:31 But it's clearly something where there is demand, right? Like, I don't know, like Andrej Karpathy doesn't care if he's spending a thousand bucks per agent per second or whatever, right? Like, you know, so whoever it is, these, like, super cracked engineers don't care at all. And then obviously there's, like, a long tail of, like, actually, cost does matter for most people. And so all along that curve, they've got to have solutions, right? Yeah. When did you first think that XAI might end up at another Elon company?
Starting point is 00:09:59 I mean, this has been rumored for a long time, right? Like people are saying Tesla, Tesla, Tesla for the longest time. It's harder with a public company. Yeah, yeah. And then a bit ago, people were like, oh, SpaceX, I'm like, wait, this makes no sense. No, but there was a very coordinated, like, narrative pump. Oh, yeah. And then the space data centers.
Starting point is 00:10:15 At the end of last year, and it was like almost like perfectly telegraphed. Well, there's a bet, right? Between basically the head of compute of XAI and the head of compute of Anthropic, and the bet is what percentage of worldwide data center capacity is in space by the end of '28? And the bar is 1%. Oh, wow. And so the XAI guy is like really bullish. The Anthropic guy's like, eh.
Starting point is 00:10:38 Yeah, yeah. Yeah, yeah. So, but it's a really interesting bet. I take the under on 1% by '28, because that's a gigawatt in space. Yeah. But it's actually not that crazy, right? Yeah. It's roughly 150 Starship launches.
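That launch count is easy to sanity-check on the back of an envelope. The power density and payload figures below are assumptions (the conversation lands on similar rough numbers), not real specs:

```python
# Back-of-envelope for "a gigawatt in space is roughly 150 Starship
# launches." Power density and payload mass are rough assumptions.

KW_PER_TON = 100        # assumed satellite power density, kW per metric ton
TONS_PER_LAUNCH = 100   # assumed Starship payload to LEO, metric tons
TARGET_KW = 1_000_000   # one gigawatt, in kilowatts

kw_per_launch = KW_PER_TON * TONS_PER_LAUNCH   # 10 MW of satellites per launch
launches = TARGET_KW / kw_per_launch

assert launches == 100.0   # order of 100-150, depending on assumptions
```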
Starting point is 00:10:52 Yeah. We'll get them to a gigawatt in space. Yeah. So, you know, Starship hasn't worked yet. Yeah. Fully. I was looking at the energy draw of the current Starlink fleet. And I think they're at like, what is it, 200 kilowatts or something like that.
Starting point is 00:11:07 So you get 1,000 of those, 200 megawatts. And like, you're starting to be in the territory, something like that. Yeah, so the V2 satellites, I think, are the only ones they've launched. Maybe they've launched a few V3s, but the V3s are coming soon, and those are, those are like 100x more bandwidth each, right? Yeah, and more power. And so when I'm just thinking of, like, can you scale this thing up at all? It's like, are they two orders of magnitude off?
Starting point is 00:11:29 Are they three orders of magnitude? It feels like they're like one order of magnitude off being able to run something that looks like an H100. I think the metric is like 50, it's either 50 kilowatts a ton or something like this per satellite for V3. Yeah. If they, let's say from V3 to whatever the compute thing is, they double it again, get to a hundred. I think the V2s are like 25. Yeah, yeah, yeah, yeah. So if you get to 100 kilowatts per ton, yeah, for launch, it's, it's, it's only a hundred and fifty or so. So it's so reasonable. Maybe not '28, maybe it takes '29, but, like, you know, it's, it's so reasonable. The question is cost and reliability, and,
Starting point is 00:12:02 you know, what happens when the chip fails, how do you service it, that kind of stuff. How do you deal with having clusters be much smaller, instead of, like, you know, these big clusters? Even for inference, big clusters are useful. Yeah. Yeah. How do you think about Google's response to Groq, Cerebras? TPUs, obviously, very successful, but are they forking that project to eat more of the Pareto curve? Yeah, so for the longest time, Google's had one main line of TPUs, right? All made by Broadcom, and then sort of next year they've diverged it, right, where Broadcom makes a TPU and MediaTek makes a TPU. These two TPUs are focused at different things. And they're both fabbed at TSMC. Everything at the end of the day goes to Arrakis, right?
Starting point is 00:12:45 I want to go there next, but continue. Everything goes to Arrakis. So fabbed at TSMC regardless, but both of these TPUs are focused on different things. And they've actually got a third project for another kind of TPU there. They also see this need to proliferate along the curve of like, hey, do I care a lot about super high amounts of flops, not much memory? Do I care a lot about super fast on-chip memory only? Do I care about 3D stacking memory?
Starting point is 00:13:08 Do I care about, you know, this sort of general-purpose middle-ground AI chip, which is what, you know, an H100, a Blackwell, a TPU looks like today? You know, they're sort of like, oh, we need to hit the entire Pareto optimal curve. And it's like, okay, within this there's training versus inference differences, and like what numerics you want, and all these other things. There's so much complexity there. Everyone, everyone sort of is diverging their roadmaps once they're at a sufficient scale, I think. Yeah. Is Google still way ahead on cross-data-center training? Yes. Are the other labs, like, is that, is that important to the
Starting point is 00:13:41 other labs to catch up there? Or is it something that will just naturally happen because everything sort of commoditizes, or do the other labs need to sort of marshal some Herculean effort to, like, crack the code on what it takes and what Google's doing? Yeah, so it's a couple of things, right? In 2023, everyone thought that scaling was pre-training. Yeah. Right? You know, more parameters, more data. And that's very difficult to split across data centers. And has Google been able to do that? And Google's been able to do that to an extent. So what they've done is they've got, you know, they don't have the largest individual data center campus, but what they do is they do these, like, regions where it's like, hey, each data center is roughly 40 miles apart from each other.
Starting point is 00:14:20 Sure. So in Nebraska and Iowa, and then in Ohio, they've got these complexes, and now they're building one in Oklahoma and Texas. Got it. You know, these complexes where there's all these data centers pretty close to each other. So it's not really across data centers across the world. Right. It's just across, like, a region. Yeah. And that makes a lot of the difficulties a lot easier. Flip side is we've also moved to RL, right? And the majority of the time of the chips is spent generating data, right? Only doing forward passes through the model.
Starting point is 00:14:45 Sure. And then you only send the final tokens that you verified sort of back to the training, to train on, right? So then you end up with, like, oh, instead of, in pre-training scaling, you need to, like, synchronize all the weights every 10, 20, whatever seconds. When you're doing these rollouts, and especially as things get more and more agentic in training, you might need to send not the entire weights, but just the tokens that are relevant. So way smaller amount of data and way less frequently, right? Minutes at a time instead of seconds at a time. Yeah.
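The bandwidth difference being described can be sketched numerically. All the sizes below are illustrative assumptions, not any lab's real figures:

```python
# Sketch of why RL-style training relaxes the cross-datacenter link:
# compare syncing full weights every ~20 s (pre-training style) with
# shipping only verified rollout tokens every few minutes (RL style).

params = 1e12                  # hypothetical 1T-parameter model
bytes_per_param = 2            # fp16
sync_interval_s = 20

weight_sync_bw = params * bytes_per_param / sync_interval_s   # bytes/sec

tokens_per_batch = 100_000     # hypothetical verified rollout tokens sent back
bytes_per_token = 4
rollout_interval_s = 300       # minutes at a time, not seconds

token_bw = tokens_per_batch * bytes_per_token / rollout_interval_s

# Required link bandwidth drops by many orders of magnitude.
assert weight_sync_bw / token_bw > 1e7
```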
Starting point is 00:15:15 And so you've got this like now, now it's become like reasonable where, oh, actually multi-data center training is completely reasonable. And people do this. People do multi-data center multi-chip training. Sure. Right? You know, you do your inference on one set of chips and you do your training on another set of chips. So like Anthropic does this.
Starting point is 00:15:30 I don't know if Google does this, but Google's kind of already got the cards. Yeah. Okay, got it. Let's go to Arrakis. Yeah, talk about Arrakis. Just there's this debate. TSMC risk, is that the bottleneck, or is energy the bottleneck? I was doing back-of-the-envelope calculations.
Starting point is 00:15:46 Seems like we're using maybe like 1% of global energy production, or Western energy production, on AI workloads specifically. And then we're using like 50% of leading-edge fab capacity on AI workloads. And so that feels like, okay, well, even if we all agree, and we say as a society, we're going all in on AI, we can only double the AI chip capacity before we need to build more fabs, and that takes years. Whereas we could say, everyone turn off your air conditioning, we're sending the electricity to the data centers, right? Like we have the ability to do that. So we don't create new, uh, with you know, clapping,
Starting point is 00:16:23 clapping for turning off the AC. Turn off your AC. Claude needs to eat. Heat strokes for all the grandmothers. Yes, yes, yes. I need my cat dancing videos. You need to feed Claude, right? But, but seriously, like, there's this debate over, you know, is TSMC the main bottleneck or is energy the bottleneck? How are you feeling about that? Yeah, yeah. So, sidebar before I answer the question, because I think it's fun.
Starting point is 00:16:44 You know, in the U.S., it's insane to say turn off your AC for AI. Yes. Right? And the general public hates AI already. But in Taiwan, they've had droughts before, and they've turned off water to entire cities. They're like, oh, you get water three days of the week. And then the fab still gets supplied water. It's like, this is, you know, you've got to understand the mindset.
Starting point is 00:17:03 We are not ready as weak Americans to do this. No, but at the end of the day, right, like, water and power are certainly less big of constraints. Now, you've got to imagine, like, you know, the semiconductor industry is used to, hey, doubling the amount of transistors made every year or two. Part of that is Moore's Law. Part of that is more capacity. Whereas the energy industry in America wasn't. And so, like, initially people were not creative. They're like, let's do these kinds of gas plants.
Starting point is 00:17:30 It's like, well, no, now we've realized, you know, yes, there's three main manufacturers of turbines, and then you've got, for a dual combined cycle, then you've got like IGTs, but you've also got like medium-speed reciprocating engines, right? Like, turns out Cummins can make like a million diesel engines a year, and like, those can make electricity, like, if I don't give a fuck and I put it in West Texas. Easy. Um, so now it's more of like a regulation thing, a supply chain thing. Power is not a constraint, insofar, like, that much, right? I think it certainly is a constraint still today. Um, it was the biggest constraint in '24, '25, data center capacity, power, because the industry was not ready. People have woken up. They've, like, sort of been shocked to the system. Now you've
Starting point is 00:18:07 got, you know, tens of gigawatts being deployed. You know, next year 30 gigawatts are being added. We think the power is there for it. What was it this year? It's, or this year is like, I think it's like 18-ish. 10-ish? 15 to 18-ish, sorry. So almost a doubling. Yeah, almost a doubling. Yeah. Well, and when you look at, when you look at TSMC and the crew, right, there is not really, oh, this random, you know, there's 12 people making medium-speed reciprocating engines that you can now convert to make power at some random data center. No, no, no. There's like, there is Arrakis, right?
Starting point is 00:18:39 There is one source of spice, like, you know, there's, you know, that's it, right? And so, and then the flip side is like, okay, when you have 12 vendors, everyone's got a little bit of slack capacity, you know, there's more likelihood, you know, you can, people are like, oh, turbines you can't get. You can call a broker and you can get a turbine. You might be paying 50% more, 2x more, but you can get a turbine, right? Yeah, right? Like, it's, it's, it's not a three-nanometer fab, exactly.
Starting point is 00:19:03 And so when you talk about what's the, you know, the baton got passed from semiconductor shortages in '23 to power and data centers in '24, '25. '26, we're still, we're swinging the pendulum, but it will fully be semiconductors again in '27, right? And so we see this across the entire space of the ecosystem. It's not just TSMC, it's also memory, both because both of them have built at a certain pace. Now, TSMC has been expanding at some rate. The memory makers, in fact, have just not expanded capacity, basically. They have not
Starting point is 00:19:44 built new fabs since 2022, because, like, their cycles are so undulating. Yeah. And, and so when you look at it, it's like, oh, even if they wanted to double capacity, they need to build the fabs, right? And building the fabs, it is the most complex building humans make, right? Um, the entire air of a clean room circulates itself every 1.5 seconds, and you don't even feel it when you're inside. Really? It's like that. And it's like parts per billion of particles, right? Like, it's actually insane. You could, you could get coughed in the face by someone who has COVID and not get COVID. It's, like, circulated so fast it doesn't even hit you. It's like, it's like the spraying when someone's talking, and then it's just... So one, another, another sidebar is, everyone knows COVID, like, really popped off in Wuhan. Yeah. Right? Wuhan also is home to China's largest memory company, YMTC. And so when they were, like,
Starting point is 00:20:26 welding people into their homes, the people who worked in the fab still went to work. Wow. It was because, you know, one, it's of national importance, but two, like, these people aren't getting sick. This fab is, like, way too clean. Yeah. Yeah. Sorry, Jordi.
Starting point is 00:20:40 I want to talk about Oracle. They put out a post this morning that said, our partners' financing for the Doña Ana County, New Mexico, Shackelford County, Texas, and Port Washington, Wisconsin data centers is secured at market-standard rates, progressing through final syndication on schedule and consistent with investment-grade deals. They were fast-following their post from yesterday, where they said that the NVIDIA deal has zero impact on our financial relationship with OpenAI.
Starting point is 00:21:06 We remain highly confident in OpenAI's ability to raise funds and meet its commitments. And obviously everyone was looking at this being like, give me a cigarette. Like, it's like bank-run language. I haven't seen posts like this since, like, the FTX era. Is it just bad comms, or is there something to be worried about? It's terrible comms.
Starting point is 00:21:23 Yeah, like, like I told my Oracle contacts, it's like, who the hell is in charge of the Twitter? Like, what are you doing? Nvidia did something similar last year when the whole TPU mania was going on. Yeah, yeah, yeah. It was like, we're thrilled with Google's progress with the TPU. That said, Nvidia chips are the only, you know. It's like, no one asked you to comment.
Starting point is 00:21:46 I mean, like, I'm sure a handful of people in your DMs and random, but that doesn't mean. Doesn't project confidence. It's sort of, the lion shouldn't concern themselves with the sheep. And like, okay, Nvidia is the lion. Maybe Oracle is a little bit more bumpy, but I think, or, Oracle is like fine. People are just freaking out because, you know, OpenAI is peak, you know, people are peak negative on OpenAI right now because of how hard Anthropic's been killing it.
Starting point is 00:22:08 Sure, sure, sure. Yeah, I think it's just, like, kind of silly. Like, they need to, they need to hire someone to do comms, like a Lulu or something, right? Both Nvidia and Oracle, because what are you doing? How did you process yesterday in general? Jensen was clip farming. He was like... I don't know why he does these street interviews, right? No other CEO does those, where they just stick 25 microphones in your face and the paparazzi's flashing. It's a great vibe.
Starting point is 00:22:35 It's, you know, Jensen's not been as famous as other CEOs for as long, and yet he's so important now. And if you've like, if you know of Jensen, how he is in meetings, I feel like there's two Jensen's, right? There is like PR, like good at PR, just good at talking, good at, like, making people hyped up and believe what he's doing. He's great at standing on stage, holding up the chip, delivering a sermon. And then there's the real Jensen, which is like a business killer. Yeah. And like actually just knows about every like aspect of the supply chain, right? All the way from like niche semiconductor, you know, design and manufacturing stuff all the way to like energy power data center.
Starting point is 00:23:15 Like, and then doing the business deals too, right? And so, like, you've got this whole Pareto, like, a whole thing, a whole range of things that he's good at and he's a killer in. And clearly, he's like, he was in a meeting where he was being a killer and, like, negotiating, like, supply contracts or something. Oh, when he walks out. And then he walks, I mean, that's hilarious. Yeah, yeah. This is my inference.
Starting point is 00:23:34 But I like it. Yeah. Yeah. And that's why he was like, you know, like, he was like still killer. Like, no, we never said we'd committed to $100 billion, you know, like. And it's like, I don't know. Wait, where did you even get the $100 billion number from? And it's like, well, you did go on CNBC and, like, you know,
Starting point is 00:23:49 make a big deal out of it. So I think people would assume that it was, but, but they did say in the press release, remember, these are early talks. But they just kind of jumped the gun. This was the height of the press release economy. Yeah, yeah. What's funny is Oracle stock peaked just like a week after they announced the OpenAI deal. And so, like, the press release of, like, hey, OpenAI is going to do this humongous deal.
Starting point is 00:24:11 Stock peaks. Same happened with a couple other vendors who announced deals with OpenAI or NVIDIA. Like, sort of a lot of these, like, they all peaked at that, and then it's sort of been like the NVIDIA-OpenAI trade has been going poorly, and sort of like the TPU, Anthropic, and Google-Amazon complex has been doing well. It's quite interesting that this happened. This has been good energy back at home with the roommates. What's going on in?
Starting point is 00:24:36 I wanted to, yeah, I've got one more thing. So yes, over the weekend it was sort of drowned out by all the Justice Department stuff. But- Wait, have you guys talked about Elon saying you can smoke a cigar in the fab? No. Yeah, yeah, yeah. I was going to say this is part of the whole thing. I didn't realize that was related.
Starting point is 00:24:56 Yeah, that makes no sense. Indoor heaters. We have indoor heater technology. No one's taking advantage. Yeah, what does the fab look like if you have no humans inside? Like that's probably his long-term thing. It's like, yeah, there will be an optimist. But no one, like, the number of people who work in a fab is like irrelevant.
Starting point is 00:25:12 Yeah, but is it irrelevant because there's all these things you have to do when a human's in there because they sweat and they breathe. And if you don't have to do that because it's a robot walking down, even if it's puppeteered or teleoperated, you might be able to have different considerations. I don't know if that actually affects. Well, it's like a nesting of like, it's a nesting of like cleanliness, right? For example, you've got this wafer, you've put like down, let's say you put down copper. And now you're moving it from one area to another.
Starting point is 00:25:34 Well, it needs to be stored in a vacuum, but the easiest way to store a vacuum, like, or an inert gas. And that's like the thing that's being transported in. But then around that you want it to be super clean as well. If you don't, then the copper starts getting oxidized. It affects your yields. All this sort of stuff happens. And so, like, you kind of want it to be a nested layer of, like, well, this thing inside the EUV tool is
Starting point is 00:25:58 it's super clean because that's how you get to like there's zero particles because like you know in the in the in the in the in the food and the transportation devices like parts per you know trillion and maybe poop it's called F OUP front operated front opening I don't know something pod but it's called a FOOP it's like the thing that moves and it carries the way for sure sure and then and then the FAB is like parts per billion and you know sort of like you've got to like got this nesting relationship so everything is super clean you know I'm I'm bullish on robots, like super bullish on robots, but only for like, not for tasks that have like,
Starting point is 00:26:30 TSMC's Arizona fabs, or so, okay, let's say TSMC, Tynon, which I think produces like, you know, indirectly hundreds of billions of dollars of global GDP. Even directly, it's like still tens of billions of dollars. Has like five, 10,000 people in it. Like, it's like irrelevant in terms of the number of people who work there. In terms of the overall economic value is created. Right, it's like, it's like, it's like how many people fold laundry or how many people wash dishes or how many people like, Do construction work?
Starting point is 00:26:56 Like, these are way bigger markets. For robotics, yeah. Yeah. Speaking of China, what are you making of the Dario essay? I guess his comments at Davos about, you know, selling chips to China being equivalent to, you know, selling nuclear weapons these days. The Ben Thompson line was something like: he's okay selling chips because he wants dependency on the NVIDIA ecosystem, CUDA, but he would ban lithography tools
Starting point is 00:27:24 from going to China and I'm always I've been wrestling with this idea of like I don't know if China would accept this but wouldn't there be a different world where you want them dependent on American LLM APIs and you don't even send them the chips and you say yeah you're you're you can have as much AI as you want as long as you're paying you know open AI and anthropic API yeah I think it's I think it's like a curve of like what they will accept it's it's you know one you you you you push someone into the corner they're gonna start swinging right and and I'm like very concerned that China does this, right? Do they, do you push them too far into the corner? Do they say,
Starting point is 00:27:59 screw this, we're gonna start being a lot more aggressive, we're gonna, we're gonna, you know, do more military actions or gonna- Or even just invest twice as much in- In global, in supply chains, like take over Africa more than they already have, like, Latam, like, etc. There's, there's, or just take over Taiwan. Yeah, right? Because if I can't have the chips, what values there in Taiwan existing? Sure, sure, sure. In its current state, right? So there's this, like, there's this, like, game theory aspect. Yeah. At the same time, you don't want China to be able to like you know if you believe AI is going to be do what I think many at least in San Francisco think it's going to do which is a
Starting point is 00:28:31 completely revolutionized humanity and cause GDP growth to accelerate do you want to have China also own that technology and all you know their ability to integrate that into their military and all these other things much faster you know so there there is like these competing like you know interests where where is the like right line and some people think it's like hey yeah sell them AI model well I think Dario would say don't even sell them AI model access. Don't even sell them tokens. Yeah, I think so. I think like I think Anthropic does not sell AI access to China. Yeah, they loop it through and you can see
Starting point is 00:29:00 this in the traffic data. They go through Korea and Japan and other places, but like, and so they get it. Yeah. And then the other other side is like sort of like I think like the Ben Thompson view, which is like, and I think I'm more sympathetic to that. Although I think I'm not exactly aligned with that, which is like, and we've been saying like don't sell them equipment, don't sell them equipment. And my argument is like more economic in the sense of like if you sell them like tens of billions of dollars equipment, they can make hundreds of billions of dollars in AI value or chips with that equipment. Whereas if you sell them AI model access, then it costs them this much to get the economic, you know, they're not able to. You're capturing more of the value.
Starting point is 00:29:33 Exactly. And so that's sort of the question that is at foot here, right? Do you want them to capture all this value of the supply chain in equipment or by buying the chips or using the models, right, and services? And we've seen, you know, across many, you know, stacks China refuses to accept, you know, using American ecosystem and they'll wait many years before they developed their own. Whether it was like, hey, they didn't use Windows, they figured out a bootlegging economy, or they didn't use Visa and eventually they came out with like Ali Pay and WeChat Pay or whatever it's called on. And like these things are way better than Visa, in fact, right?
Starting point is 00:30:09 Lower transaction cost and higher volume. I never used Red Star Linux. It's North Korea's Linux. Wait, really? Yeah. If you don't, if you put it on a network, it'll immediately call home. So you have to put it on a firewall network or else it just like steals everything immediately. I'm a fan of Tempelos, you know?
Starting point is 00:30:26 Yeah, there you go. Is Doug O'Laughlin suffering from a case of Claude psychosis? Okay, yes, yes. So I think everyone's like, Claude Code is for coders. It's like, no. No. Claude Code is for people who don't code now. Yes.
Starting point is 00:30:43 Right? And that's the big realization this year. You know, we've got a couple folks now in the firm who have psychosis, but Doug O'Laughlin, who is, you know, SemiAnalysis number two, he's president. You know, he's my boy. In fact, he's the one who encouraged me to make a Substack a long time ago. What were you doing before? I had a WordPress blog.
Starting point is 00:31:02 Oh, okay. And I was like, consulting on the side, but I was like, okay, let me do a substack now. Because I saw him making money off and I was like, this is shit. Like, why are you getting paid for this? There were multiple times where he wrote something. I was like, I could do way better. And obviously, like, it was good because we both taught each other a lot of things and we've been great friends. And eventually he joined semi-analysis.
Starting point is 00:31:22 But, like, you know, his background is he was a hedge fund analyst. And then he decided to do a substack slash walk, hike the continental divide trail for like six months, walking from Mexico to, you know, and then, you know, came back to doing substack and tried to do a fun. Six months of touching grass and then he was like, I'm ready to lock it on clock code. Yeah. And so now he's, you know, like, he's never been a software developer. Yeah. Right, but he's been on a generational run. Like he's he's not coding anything, right?
Starting point is 00:31:46 He's just telling Claude to do stuff. And like it's to the point where it's like, our like head of data, head of IT is like, oh, can you send me that? And he's like, how do I do that? And then he's like, he zips the whole thing and sends it to him, it's like local host. He sends him a leak what's like local hosts.
Starting point is 00:32:00 It's like, bro, that's not how this works. But yeah, no, I've talked to some folks who vibe code and they'll be like, and I'll be like, why'd you choose no JS? And they're like, what's no JS? That's a very specific choice. Someone? Yeah, Tyler.
Starting point is 00:32:13 No, but it's, we went on a little tour of a lot of our clients, like, you know, roughly like half our business is, or 40% of our business is like hedge funds. So we went to New York a week, two weeks ago, and we went to all of our clients, and like, part of it's like them asking me, is opening, I fucked?
Starting point is 00:32:28 And I'm answering like, no, I think they're fine. And then like some, like actual ideas. And then like, a lot of his Doug's just telling them Claude is like, they're like, you don't have to hire any junior hedge fund analyst anymore. And they're like, the junior hedge fund analyst are like, and then he's explaining, you know, what can you do?
Starting point is 00:32:41 It's like, well, like, you can just do like financial models and perform a financial models and like everything in cloud code without ever opening Excel. And you can generate charts and like you don't need to know how to code. You just need to know how like how this stuff generally works and you can just do it. How many hedge funds are just trying to copy trade situational awareness now? I mean, I think everyone who's I think I think a lot of hedge funds obviously believe in AI. I think there's a lot of them who don't believe in it, right, to be clear. but a lot of them that have done the best, believe in it. Why are they selling software everywhere?
Starting point is 00:33:15 Oh, you mean selling software stocks? Yeah, yeah. Oh, yeah. Why the sell-off then? Yeah, I mean, of course, it's like an incremental thing, right? But anyway, so these hedge funds, like, and then the question is like, okay, if you believe in it, how do you manifest that trade? And so when you look across the, like, ecosystem, I would say almost all my clients sometimes think are two years out, numbers are too high. but like there's like Leopold's like your numbers are too low
Starting point is 00:33:41 and so it's like it's like in general right and I think I think like if you think about how much do you believe in AI and what's your access to information of AI you know there's not many hedge funds who live in San Francisco and like fully breathe and live and understand it and then depending on how much you believe in AI how do you manifest that trait right are you surprised that more hedge funds wouldn't like even just smaller shops wouldn't say like hey this AI thing seems like it's going to be big maybe we should set up
Starting point is 00:34:06 in San Francisco? Or hire? There's a number of people, right? So we're, you know, we're getting an office together. Leopold, myself, Dwarkash, and then a client of mine, another hedge fund. And they have one analyst here and it's like, and there's like a number of other hedge funds that are like hiring analysts here. But, you know, being plugged into the AI ecosystem does not mean you're just in San Francisco,
Starting point is 00:34:24 because you can just walk around and talk to like doofus like startups and VCs and, like, not actually, you know, see what's coming down the pipeline. And you have to combine it with all sorts of information, right? you have to have a good tune with like what's going on to Asia supply chains. You have to have a good tune with what's going on in New York. Sure. You have to good tune with like what's going on like in the financial markets, right? And then like what's going on in credit markets and what's going on in all, you know,
Starting point is 00:34:47 the data center, energy, blah, blah, blah, all these different industries. And so it's actually not like so simple to like be in tune with what's going on in AI. You can easily get like head faked, right? You know, for the longest time people were thinking, you know, Adobe's an AI company. And like, and it's like for a bit like, O'Dobie was going to. down on AI and then they like launched a few AI features and the stock skyrocketed and then now it's going back down again because people realize oh wait no actually it's not an AI company like I think it's it's the manifestation and thought of like
Starting point is 00:35:17 what is actually gonna the world gonna look like if Anthropic 3x is its revenue again this year opening I two X is revenue again this year or you know by the end of the year do how many people even believe by the end of the year AI startup revenue is over a hundred billion dollars I think that's an insane statement for a lot of people but that's what it's gonna be right and who Who believes that number, right? It's like very few people. And then you draw the continuation, it's like,
Starting point is 00:35:40 and who believes, you know, and when Anthropics says and their funding, like, hey, we're gonna have $300 billion of revenue by the end of the decade. And it's like, actually, I think that number's too low. Because the economic value of what they're gonna create is gonna be insane. Yeah. And you tell people, oh, excellent, you know,
Starting point is 00:35:59 opening eye is gonna have 18 gigawatts or 16 gigawatts by the end of 28, and they're gonna be able to pay for it. And that's like, well, that's hundred billion dollars to spend how they're gonna pay for it's like you you sweet summer child don't worry Sam can raise they're gonna blow up on revenue they're fine right like it is like a bit of a vibe thing it's a bit of like you know irrational exuberance almost right like um leopold's in his you know mid 20s like I'm 29 like we are irrational right because we have not lived through you know you get these these PMs who like
Starting point is 00:36:27 you've never been that humble I don't know like we almost my family almost went bankrupt in 2008 like you know because we lived in a motel and we almost foreclosed. And we actually did foreclose on one motel. It was like pretty bad. But like, yeah, I mean, I was still a kid, right? Yeah. Yeah, I've never been humbled in the same sense.
Starting point is 00:36:42 I mean, it's good to live through that and understand how things can go wrong. That's interesting. What are you expecting out of Zoc and META this year? We've been big Zoc defenders, especially. I mean, there's this pressure of like, oh, meta is spending so much. And yet they haven't created it, you know,
Starting point is 00:37:00 any AI product that's super compelling or that's really working. and our stance has generally been meta's making more money from AI than almost any company in the world outside of NVIDIA. So it's like, of course Zuck should be justified in saying, hey, this is real, it's big, like I'm going to back the truck up and go all in.
Starting point is 00:37:19 Yeah, I mean, it's clear if you look at the most recent earnings, I think there's CPM went up 9% when the consumer's weak, which means if you were to try and strip out, what is consumer spending increasing for a CPM of ads versus what is the effectiveness of their algorithms, Or algorithm got better by double digits in one quarter. Right? It's like actually insane how good the algos getting, right?
Starting point is 00:37:40 At serving you the slop and the ads, right? So, so in that sense, like, the big sound and the trough. I got a good. Slop for the, we're going all in on that. All in on the farm. Slops, though.
Starting point is 00:37:57 I love it. So, you know, If you think about it, right, like, okay, meta's, where are they going to, like, win, right? You know, I think if you have the Galaxy brain take, it's like, well, they've got the best, like, wearables coming down the pipeline. They're going to put AI on it. Apple won't be able to put good AI on their wearables, so they'll seat it all to, like, Microsoft Google or Anthropical. People have had this narrative, oh, as AI gets better, the value of real world experiences will increase. And I think that's a cool theory.
Starting point is 00:38:29 But if you actually play it out, AI getting better means more content. that's more like effectively crafted for you, more personalized, a hundred times more, a thousand, a million times more content, that would imply to me that people will just use digital products more, which means more time on site, more time in the app for meta. So I don't know. I mean, I'm with you entirely, but I think like the galaxy brain take is that you're just going to have a wearable and that's going to have an AI assistant.
Starting point is 00:38:58 Open eyes trying to make wearables, you know. You know, there's, you know, everyone's trying to make wearables Google is, et cetera, et cetera. I think metal will actually execute, and then they'll have a good AI. And then you stack on, like, a few things, right? How do they get users? Well, we've seen, at least if you look at the user metric charts, Google's, you know, open-anized users were growing, growing, growing. They were going to hit a trillion by the end of the year.
Starting point is 00:39:20 They had 800 billion. Why did they not keep growing in the last quarter? It's because nano-bonanau came out and they took all the incremental users. Right? And likewise, if you go look at, like, you know, Gemini III didn't actually make Google, that much it was Nana Banana and then Pro or two or whatever it's called, right? Those were the ones that made them really grow. Meta's licensed all of Mid Journey's code data models, right?
Starting point is 00:39:42 One, two, they're actually just like focusing hardcore on that. Was that a billion dollar plus deal? The number is undisclosed. Mid Journey still exists as a company. No, it looked to me like effectively a massive exit, but the best case scenario where they can just keep kind of being artists. I think, I think, if you had me guess, I would bet it's over a billion. Every deal that Meta did was over a billion.
Starting point is 00:40:09 Basically, whether it's an employment contract, a licensing deal, and acquisitions. Everything had to be after it. Well, so the interesting thing is meta... You're missing a zero again. Don't never miss the zero again. Yeah, every discussion was how many billions are we spending on hiring this person, buying this company? Well, you know, meta interestingly has gone down market for a compute because there's not enough
Starting point is 00:40:31 compute in the big size deal so they've actually gone and like me rent bought like small clusters oh because it's like well I want more computer from like long tail neoclods yeah just like yeah from a longer tail okay because that's the only place they can get the compute they need because you know they've already like went out and signed big deals with Google and core weave and so on and so forth cluster max three gonna be a smaller shark because of consolidation in the industry no it's there's more it's gonna be bigger bigger but but you know so metal
Starting point is 00:40:56 that's a thunder that's ominous it's ominous it's almost So I think meta will capture consumers through generative. If there's more content, people are just going to go to the content marketplace, right? The creator of the content captures less value as there are more content creators and more diversification of content, right? And so I think meta just wins by being a platform, right? Google does too and bite denseness too, right? But like those three win by having a platform.
Starting point is 00:41:24 And then the real question is, can they get in the assistant productivity game, right? And I think this is important. And through that effectively search. Like if you're an assistant, it means that you can, like, there's some commerce happening. Well, they spin out and poached a bunch of people from Google. So this wasn't in the media much,
Starting point is 00:41:40 but like they actually poached Google search people with similar sized deals as like these crazy. Yeah, and I always, you know, demoing any of the wearables, you can imagine like meta wants you to walk around in the world and see like, oh, what are those headphones? And like, while we're talking, I just hit my little thing and buy it, right?
Starting point is 00:41:58 And it's like, you didn't even necessarily know it happened. But like of course Meta's gonna want to monetize that. Everyone knows those of the Sony MDRX-su 272s, 4662. Dude, I've been screaming about them like doing some proper marketing. It's literally like they're over here is like WH-1000 XM-5. And then they're in here is like WF 1000 XM-1000. It's like dude, just call them like Bravia buds and Bravia like headphones or some shit. Well they China just bought? Yeah, yeah. Baravia brand's actually a Chinese company now.
Starting point is 00:42:32 Sony sold their TV. PlayStation Buds. Yeah, yeah, PlayStation. Walkman. Oh, yeah. Like, come on, like, something, something. For sure. Anyway, anything else, already?
Starting point is 00:42:45 No, this is fantastic. I'm excited for this weekend. Yeah, yeah, super excited. What are some plays that we don't watch a lot of sports? What are some plays? You are a football guy, right? Yeah, yeah, yeah. Georgia.
Starting point is 00:42:56 Yeah, rural Georgia. I like football. High school football was the thing. College football was the thing. I think NFL is a little less soulful. But, you know, now college football has the NIL. And so it's also soulless to some extent. It's fine.
Starting point is 00:43:10 We enjoy it, you know, primal desire of seeing heads clash. You know, and sometimes that manifests in, like, you know, Twitter drama and sometimes it manifests in real football. Yeah. All I can say is fuck the Patriots. Okay. Whoa. Okay.
Starting point is 00:43:24 Okay. I'm kind of bummed. We're going to, since we're going to be at the game, we're not going to really get the great experiencing the ads. I'm going to be like glued to. I'm going to be like glade to my phone. I want to see all the AI, the different. Well, don't worry. I got some more ads for you. Thank you so much for coming. Thank you so much. Great segue.
