Deep Questions with Cal Newport - Is AI Trending Up or Down in 2026? | AI Reality Check

Episode Date: April 23, 2026

Cal Newport takes a critical look at recent AI news. Video from today's episode: youtube.com/calnewportmedia

0:00 What has *Actually* Happened in AI in 2026?
3:07 OpenClaw
27:53 Anthropic... and the Department of War
49:06 Data Centers

Links:
Buy Cal's latest book, "Slow Productivity," at www.calnewport.com/slow
https://www.axios.com/2026/01/31/ai-moltbook-human-need-tech
https://www.macstories.net/stories/clawdbot-showed-me-what-the-future-of-personal-ai-assistants-looks-like/
https://www.anthropic.com/news/statement-department-of-war
https://futurism.com/science-energy/data-centers-construction-supply

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 AI news comes at you fast. Each article feels more breathless and more terrifying than the last. But before you have a chance to see how any particular story turns out, there are 10 more in its place. I think this speed and lack of accountability can create a sense of overwhelming disruption and change that can really be pretty disquieting. Well, it's Thursday, which means it's time for an AI reality check episode. So I thought this would be a great opportunity to try
Starting point is 00:00:30 to slow down this news onslaught and get a better sense of what has actually been happening in the AI space recently. All right, here's my plan. I've invited the AI commentator, Ed Zitron, to join me. And we're going to look at three of the biggest stories about AI to land in 2026 so far, including one in which Ed is actually very much involved. And what we're going to do is, for each of these stories, we're going to take a closer look at what actually happened and how things have since turned out. Our goal by the end of the episode is to answer a simple but critical question: has 2026 been a good or bad year for AI so far? And we have a lot to cover, so let's get right into it. As always, I'm Cal Newport, and this is Deep Questions, the show
Starting point is 00:01:20 for people seeking depth in a distracted world. And we'll get started right after the music. All right, Ed, well, it's been three or four months since you were last on the show, and there's been some big AI news since then. So I wanted to have you on to go through some of the big stories that have happened since January, and because you're a commentator who is, maybe I should say this, less impressible than the average AI commentator. I figured your point of view is good for my reality check audience. We're going to try to end this discussion by voting whether or not,
Starting point is 00:02:02 2026 has been good or bad for AI so far. But what's your pre-vote? Where do you think, based on what you know, you're going to end up here? Probably not a good time for them. It's just every time we talk, it's like there's very big news and everyone's like, oh, look at the, we've got a new number. It's even higher than usual. But the actual underlying economics and infrastructural layer, or even just the service performance is worse. And it's very strange. Well, this is part of the reason why I like doing these reviews with you is often the story will be big. Everyone will get worried about it. People will call people like you and I for quotes. And then everything moves on and there's no follow-up. And I think it's useful for calibrating how to react to the new story you're
Starting point is 00:02:43 hearing now to occasionally go back and say, hey, what happened with that story that had me worked up a couple months ago? Which brings us to a great place to start because what was the first big story of 2026? I think arguably it would be open claw, which I'd believe became generally available to the public later in January. Now, I've broken this up into two sub-stories. I want to start with like the easily dismissable one just because it's fun and then get to the more serious one. I'm going to read you a quote and we'll get into it. So the easily dismissable but fun aspect of this story is when someone opened a multi-book, a social network that was configured so that it is easy if you're writing an open-cllaw agent
Starting point is 00:03:24 to post on it. So they add hooks into it. So it was easy for your open-cloth agent. to post and read things from the social network for about four days. Everybody went crazy about MULPOC. I'm just going to read you a quick quote from your favorite publication, Axios, from the end of January. Imagine waking up to discover that the AI agent you built has acquired a voice and is calling you to chat while comparing notes about you with other agents on their own private social network. It's not science fiction. It's happening right now. And it's freaking out some of the smartest.
Starting point is 00:03:59 names in AI. Well, you're a smart name in AI, so are you still freaked out about Moldbuk? No, the moment I saw it, I'm like, A, this is just LLMs. This is just LLMs doing what they think a social network looks like, as in when I, it shouldn't even have said the word think. Spitting out what the model would say is likely to be a social network post. And then the second thought was, this is fake. Well, 100%, there are regular people just using their open clause to post on here. These don't read, they didn't read like LLMs in some cases. Some cases they did, but some of them were just like, I saw someone post the slur within one hour. I'm like, okay, this is just a regular person using, well, regular is probably the wrong word. A person is using this as a means of posting.
Starting point is 00:04:44 And it's funny when you say like the smartest people as well, because I think that that term no longer has any value, because that's like Andre Carl Pathy, who is, it's just the term smart. point, does that just mean they got good grades at school? Because if that's the case, we're completely screwed. Like, if we think only the people who got good grades are smart, then I don't know what to say for the world, because the people that fell for MaltBook, was insane. They were like, oh, it's AGI. It's as if they forgot how large language models worked, or never learned in the first place. Well, I don't think they understood what OpenClaught was, or what Moldt Book was, or what any of this was, other than it involved Lopster.
Starting point is 00:05:28 And they heard agent. Agents. It's autonomous. I brought a back many. I did a little digging here. Axios is original. They moderated the headline, but I thought it was worth just to. Oh.
Starting point is 00:05:40 Because I think we memory hold a lot of this coverage. But the original headline was we're in the singularity, colon, new AI platform skips to humans entirely. But it did the trick where you put the quotation marks around the first part. So technically. you are not declaring that to be the case. You are quoting someone. This one got fully memory hold, right?
Starting point is 00:06:04 No one talks about Mold Book. I mean, I think I covered on my show at the time. I said, yes, people are just telling their LLMs to post. LLM's write stories. They finish the stories you tell them to write. There's actually good research. This came up in my doctoral seminar. I'm teaching on Super Intelligence,
Starting point is 00:06:17 which is great because it's like 10 doctoral students who just do AI research, and I'm learning a ton from them. And they know the literature even better than I do. And they're saying there's really good research out there that whenever you do any prompting of an LLM, if anything in your prompt in any way indicates that you're prompting an AI,
Starting point is 00:06:35 almost always it goes in the sci-fi mode. So the LLM will, if you, so you can ask the same question, and if you say, you are a whatever, you are a journalist, you know, please answer this question, it'll give one answer. And if you say, well, you're an AI,
Starting point is 00:06:49 so how do you think blah, blah, blah, it always will go towards dystopian themes of AI coming alive. And like that's it's so it's it's it's very easy to prime. And I think a lot of that was gone with open club. People would say please go post on this social network and they just, you know, wrote AI type stories. Right.
Starting point is 00:07:08 But was covered very credulously, I would say. Which is pretty much part for the course. I mean, I still, I don't know if we want to wait until the second part of this, but it isn't. The open claw thing is one of the most insane things I've seen in the tech industry. may even be crazier than the overall LLM boom. Well, go on with it, because let's get into the second part. I have some quotes, but let's, well, let me read you the quote and then let's get into it.
Starting point is 00:07:32 Yeah, read the quote. This is a, like a representative person talking about OpenClaught earlier, like early February, late January. For the past weekers, and this tone, this is called AI enthusiasts. This is like such a known tone. This can sound very familiar. For the past week or so, I've been working with a digital assistant that knows my name, my preferences for my morning routine how I like to use notion to doist
Starting point is 00:07:57 but which also knows how to control Spotify and my sono speaker my Phillips Hugh Lights as well as my Gmail it runs on Anthropic Claude Opus 4.5 model but I can chat with it using telegram I called the assistant Navi inspired by the fairy companion of Arcania
Starting point is 00:08:12 of time not the oceaner of time the game yeah all right nerd Zelda oh okay I get you no no it's just like it's like a really weird choice Well, he makes a point, it's not the James Cameron movie base. Oh, okay.
Starting point is 00:08:28 And Navi can even receive audio messages for me and respond with other audio messages generated with the latest 11 labs text to speech model. Oh, did I mention that Navi can improve itself with new features and then it's running on my own M4 Mac mini server. And also I just got fired because I just spent 100 hours setting up Navi instead of doing my job. Well, I added in last and I just spent myself. And I now can't pay my rent because I spent $4,000 a month on API calls. Yeah. Like, oh, that's the other problem. Okay, so that's OpenClaw, right?
Starting point is 00:08:55 So you could, my understanding is it's a library. It's a Python library. Yeah. Which makes it easy to write your own agent. An agent being code that calls an LLM and then uses the response from the LLM to help drive its movement. So you can say, hey, LLM, what should I do? And then it does it. OpenClaugh made it easy for people to write their own.
Starting point is 00:09:16 So people all around the world began destroying their computer. and leak it all this in because it's actually hard to write in the thing but here's the thing even that term gives it too much credit it just does what lLMs do like it's just oh i had it i read this thing on one of the mac websites where it was like oh yeah i had it um build a website and it's just the most generic looking vibe code slop ever oh i had it transcribe my my voice notes like yes so okay it's doing what lLMs do oh and it's able to write stories so lLMs And this is the weirdest thing. The thing that really confused me is,
Starting point is 00:09:55 on top of the credulous media coverage, and pretty much everyone who covered this should be ashamed of themselves. I think most people did the worst job possible in the sense that I read most open-gloor coverage because I was trying to work out what it did. God's on this truth. I was like, what is this?
Starting point is 00:10:11 But you read like the Atlantic, and it was like, was it the Atlantic or CNBC? They were like, this is another chat GPT moment, quoting Jensen Huang. Because of the fast, adoption. There a lot of people tried it. And then they looked at that chart and said, well, this is a big deal. But the thing is, it's like fast adoption. It's like slop commits on GitHub and also Mac minis selling out in the greater Bay area. But the thing that was crazier to me, other than all the credit of this coverage was Nvidia's GTC 2026,
Starting point is 00:10:40 $4 trillion or so market cap company, right? That's their conference. GTC's a big conference. Yeah, yeah, yeah. And you got a 3D AI generated picture. of Jensen Huang, the CEO of NVIDIA with lobster claws. And they'd release this thing called NemoClaw, and they're like, oh, this is the chat GPT moment. This is the agentic future. And it's like, what are you talking about, mate? Did you just get in a car accident?
Starting point is 00:11:06 Do you have a concussion? You just steered your company. Like a year ago, GTC was like Jensen going out with full swagger being like, yeah, we've got Vera Rubin. We're going to do this. 10x more efficient. Woo! Shooting guns in the air.
Starting point is 00:11:20 Yeah, he signed a woman's boob last year. This year he's like, yeah, we've got Nemo claw. Got Nemo claw. You want to try Nemo claw? Ah, you like that? Jingling the keys again? Do you like Nemo claw? What?
Starting point is 00:11:36 Please spend $125,000 on a GPU. You need to buy Vera Rubin, even though we don't have anywhere to put it, as we'll get to. But it's just so weird, because when you actually get down to it, It's the classic LLM story. It's like, okay, what are you talking about? It's a new agenetic interface for managing programs. It's an LLM.
Starting point is 00:11:58 Is it a chatbot connected to an API? Yeah. It's like the Donnie Darko meme. What's the Donnie Darko meme? It's like, I forget what the line is in the movie, but it's like, oh, I've managed to create a new agenic workflow. Is it just an LLM connected to an API? Yeah.
Starting point is 00:12:16 Yeah. Because that's every story. every story I've read. It's just, do you have two LLMs bonking each other's heads? Is that what's happening? Great. Okay. I'm very impressed. We need to have the largest company on the stock market do something about this, Pronto. It's hysterical. I think that's an important point because I do think when the average person hears about things like OpenClaw or different agents, they're often thinking there's a new artificial intelligence technology, right? that there's a new, we built, OpenClaas is a new digital brain that can improve itself. And it's learned how to do things that prior models have it. And I think what people don't understand is that OpenCla is a Python library.
Starting point is 00:13:00 It's a Python library that makes it easier to write a Python program that can make calls to LLMs. And you can aim it at whatever LLM you want. The LLM is somehow like that is the brain, but there's nothing new, there's no new LLM for OpenClau. It's a library that makes it easy for the average person to say, I'm going to write my own agent. It turns out agents are hard to write. Yes. Because LLMs, they write plausible stories, but as we've learned, they're not often really good, carefully check plans for doing things. And so it causes a lot of problems.
Starting point is 00:13:32 If you say, hey, LLM, give me a plan for doing stuff with my personal data. And then you have a program that just automatically implements that, you know, turns out sometimes bad things happen. But there were two, here's my two useful things. I'm going to say there's two useful things about, two useful things about. Okay. Two useful things about OpenClaw. One, because a lot of people began experimenting with building their own OpenClaught agents. One of the quick things they discovered is, oh, the big frontier LLMs are expensive.
Starting point is 00:13:59 And they were racking up thousands of dollars of token costs to API calls to Clod or to GPT. And so it got a lot of the real booster tech enthusiast types to start looking at much smaller, much cheaper models. Because they just literally couldn't afford it. this is why I think OpenAI bought OpenClaught. Well, there's an important detail though. Okay, please go on. So it's an important, important to know where this was in history. So OpenClau came out January-ish.
Starting point is 00:14:27 Yes. Now, you used to be able to, during this period, connect your Anthropic Clored Max account, a $200 buck a month account. You used to be able to connect it to OpenClaw, so you weren't paying API calls. You were just using Anthropic services. That's unlimited.
Starting point is 00:14:43 you pay 200 and it was supposed to be on the limit. You have a rate limit but there's, you can use it as much up to that rate limit and you can spend like thousands of dollars of API calls and that's been proven. There's a coda called Shelek who did a study on it. This is where you quote a number often about how much it's actually costing per token versus what they're charging. This is where partially that number is coming from. Yes, yes. So it works out to like somewhere between $8 and $13 and $1.350, weird way of saying that per dollar of subscription. So you're able to burn like $2,700 on the Anthropic subscription.
Starting point is 00:15:16 For $200? You're paying $200. It's costing them $2,700. Yes, exactly. Sorry, kind of clodgy explanation. So Anthropic let this happen. So the reason the open claw got so big, Anthropics sued them because they were called like Claudebot at first,
Starting point is 00:15:31 claw C-L-A-W. But nevertheless, Anthropic allowed this to happen. Then February 12th, they raised the $30 billion round. A couple weeks later, open claws cut off. the aristocrats it's just that Anthropic is such an unethical company they should have never let it happen to begin with
Starting point is 00:15:50 but one of the reasons that OpenClaw got so big was both using those cheaper models but also using those Mac's subscriptions and so OpenAI buying OpenClawn was so funny just like OpenAI is just meta it's meta plus Enron and it's so funny watching them
Starting point is 00:16:09 why would you buy this what possible reason oh, we can build agents with it. What do you mean? No, they're... Why? Why? They have much better frameworks for... Well, I have two explanations.
Starting point is 00:16:21 Let's get back to you. You tell me which what you think is more likely. So this is maybe giving too much savviness credit to them. The savviness is, I think it was a real problem for a lot of enthusiasts to discover, oh, wait a second, if we use really cheap open weight models, open source models, or even just really like three billion parameter models we can run on our own machine, we get pretty similar results.
Starting point is 00:16:44 Like, actually, we don't need one, the 10 trillion parameter super frontier model to read my emails and to add appointments onto my calendar. I think that's really terrifying if you're a company like Anthropic, just take it on $60 billion investment or your open AI is like, we need people to think that these are the big brains and nothing else matters. So the conspiratorial slash business savvy interpretation would be open AI needs to sort of slow the role on that or make that tool much more native to its models because they really do not want a generation of AI enthusiasts to say, oh, wait a second, Kimmy is like a fraction of the cost that it does just as well. The other way of thinking about it, it's like them buying that podcast show recently. TPPN?
Starting point is 00:17:29 Yeah, there's just like we're just buying things left and right because we have money and we're not quite sure what to do. sort of, yeah, I don't know which ones. It's probably number two. Because, because they're going to keep running open claw. They've said that already.
Starting point is 00:17:42 They're going to keep running it. And people are still using open source models. So it's kind of like, I just think that they were buying stuff because they thought, crap, we've got to do, we don't have an open claw. What if we just bought it?
Starting point is 00:17:53 It's rich kid syndrome. Like, that's the thing. Like, both open AI and anthropic act like rich kids. Because I went to a private school. I'm not proud to say it. I was the dumbest kid in the private school.
Starting point is 00:18:04 I did not do well. bottom of my class every single year, failed multiple languages, like genuinely, legendarily terrible. I barely scraped through. But I've met a lot of these kids, and my parents scraped by to get me there as well, it was good on them. But I met a lot of these kids, and what they do is when they don't want to learn something, when they don't want to build knowledge, when they don't want to put something together
Starting point is 00:18:27 of their own, they just acquire. It's like, Dad, go and buy me that. Daddy, go and go and buy me a boat, buy me whatever. and it's open AI doesn't know what they're doing other than they have a lot of money so they can spend it and I think they bought it thinking wow this will be a backdoor into Anthropic a little bit we'll be able to see what Anthropic does more because lots of people use this and we can we can somehow see how Claude is running agentically or they bought it to kill it that's what I think but the other thing is is Peter Steinbrenner or whatever he's called
Starting point is 00:19:02 he's still farting around that guy I don't know if you've ever read his posts, but he is constantly working. Yeah. And I would, I don't give him a ton of credit for that because it's like, feels like a depressed person. But also, I've heard he got hundreds of millions of dollars for it as well. So it's like, if I had that much money, you wouldn't hear from me again. I would disappear. Well, no, I'd keep posting. But it's strange, because it's like, what are you actually working on? And I think he vibecoded a lot of it as well, which is even more terrifying and there are massive security issues as a result. It's just one, it is like a psychosis onto itself.
Starting point is 00:19:40 And what I think, I know we talk a lot about the media stuff, what I think it is is the media and the AI community is so desperate for a hero. They're so, they know, they know in their, deep down in their soul that something is wrong, that none of this makes sense. So the moment anything even directionally feels like it proves, that they're not wrong, they grab it and they shake it vigorously. They just got like, this has to be it. This is going to be the thing.
Starting point is 00:20:10 And if we love this enough, it can be a real boy. And it never is. Like, open floor is gone. Like, just no one's talking about it anymore. No one cares. Searches on Google have gone down. Yeah, I just looked for it. It's minimal.
Starting point is 00:20:25 I checked this morning. It's minimal coverage. It's been minimal coverage. I mean, it's kind of around, but it's become a niche topic. Well, let me tell you my second thing that I think is good about OpenClaught, right? The second thing is I think it actually points towards what I think is the healthy, sustainable future of AI, which is smaller task-specific and much more modular architectures, right? So not built around a single AI entity like an LLM, bespoke AI systems that do specific things.
Starting point is 00:20:56 There's a great, if I want to play poker with AI, there is a great AI system to play poker with. If I want to, you know, if I want to do certain types of digital VFX work, like there's really good AI systems that's, like, made to do that. I think that's the future. But all those LLMs? No. Well, no, they're not, right? Or they have LLMs in them.
Starting point is 00:21:16 This is why I say modular architecture. I think the future is you have multiple different things, most of which are just hand-coded by a person. And maybe you have an LLM in there if there's language involved because it's pretty good at if it needs to speak to someone or interpret it. I point towards the Cicero model is the, the great example of this. Noam Brown's AI system that plays the board game diplomacy.
Starting point is 00:21:36 And it has an LLM in there, a small one, for chatting with the other players and then converting what they say into a sort of more technical language that the rest of the system understands. And then it has a planning engine and it has a policy network that can evaluate the different boards.
Starting point is 00:21:51 It has multiple other systems that all hooked together. Classic AI shit. This is real AI stuff. Like when it's just like, yeah, I made diplomas. But this actually just reminded me or something. But just, I want to get to that, but just to bring a close to the point is I think this gave people a taste of that.
Starting point is 00:22:09 If you're building, they're like, oh, I want to build my own system to do one thing. I want to build a system to answer my emails that come into, you know, request for my show to answer those emails and to put things into a spreadsheet. And like, oh, I can write a program to do that. And I'll use an LLM to help me. And it can be a small one because this is not, that's kind of not the core of it. and suddenly you're exposing people to this idea. I mean, I call this vision distributed AGI.
Starting point is 00:22:34 We're like one day you would look around and be like, there's 10,000 bespoke small systems that each do something well. And if you add it all up, oh, that's a lot of things now that computers do as well as people. And it's a very different vision than Opus 5.9 is... GROC 7. Or whatever it is. GROC 7. Yeah, it's embodied in a robot with predator machine guns and it can just do everything.
Starting point is 00:22:58 Anyways, all right, back to your point. So, this just reminded me, so Jack Clark of Anthropic, fascinating character, one of the co-founders. He used to write at the register, one of the single most critical tech publications in the world. His blogs were extremely critical. I've seen him twice peddle out this example, which he refers to as like an evolution simulator, a predator prey simulator. And he brings it up all the time, and he uses these high-fluid terms. I went and looked this up. It is like a 50-year-old idea.
Starting point is 00:23:28 he's like yeah I used clawed code to build it yeah because there are hundreds of them online hundreds of them that it was trained on it's just a simulation program it's a little simulation that says okay we got bees and the bees get killed by the bee the the bee eating bears I'm just making up animals already this is why I can't make one myself but it's like all of the different creatures and how they interact he's like yeah and I'm able to change things here and here and there and it's like yeah there is a web version of this it is 20 years old yeah but the way they frame all of these it's like, oh, simulation. Like the singularity.
Starting point is 00:24:02 It's like, no. And it just, I feel like the AI era is a mass exploitation of ignorance. It's just, they found something where the media just, they knew the media, maybe they didn't know this in advance, but the media won't check anything. The media would just think,
Starting point is 00:24:20 yeah, it's got a social network, this is AGI now. Hmm? Every time a three gigawatt announcement made, The 3 gigawatt data center, and that's like three nuclear power plant big. Wow. Even though it's not getting built, which I know we're going to get to, it's just AI as a term, as you well know, it means so little and so much at the same time that they can basically do anything. And I think combined with the hysteria, they are in this situation where literally, I think we could have another Sam Bankman-Fried situation that we don't know about yet.
Starting point is 00:24:56 that an AI company could come out and just go, yeah, we've done this. And it's the, I mean, kind of mythos is almost that. I know we're not going to get into that. But it's, I feel like we are, maybe there's already a scammer out there. But this is the environment, the exact environment. Yeah, you can gather a billion dollars easy. Well, I mean, I just saw another one the other day where it's like a company that claims it's doing recursive self-learning and they raised half a billion dollars. And one of the co-founders runs another company called U.com.
Starting point is 00:25:24 And you know what's crazy? That is not mentioned in the Financial Times' piece. It's just, we are, like, grifters have found their meat. This is so much worse than crypto and NFTs. It's so, so much worse because the fuzziness of AI allows them to have infinite time and infinite money to say, well, we still haven't worked out. That recursive self-learning company, by the way, of course they are still theoretical, like all of them. Yeah. Like world models.
Starting point is 00:25:56 But no, the 50% job loss, it's, it's next month now. Yeah. That's what, it's, I said the wrong month. It wasn't this one. It's in like eight to 12 months, baby, with a margin of error of maybe 100%. Banner headline. 50% is every time. Yeah, but like, you just read the top thing, yeah.
Starting point is 00:26:15 That reminds me a little bit about my, my oldest plays, I coaches, help coaches, a little league team, baseball team or whatever. And the pitchers are getting better. they're at the 13 U, they play on the full-size fields now or whatever. And like, the pitchers are better now. So what they learn is if I throw the high fastball into batter swings, I'm going to throw some more high fastballs. Like this is clearly.
Starting point is 00:26:34 And I kind of feel like this is Darryo Hameday saying 50% of jobs. He's rolled this out three years in a row now. He's like, it gets covered every time. I'm going to keep throwing those high fast falls as long as the media swinging the proverbial bat. Yeah, but high fastballs are proven to be difficult to hits as opposed to LLMs, which have never been proven to take in. Take jobs.
Starting point is 00:26:54 Ah, there we go. I like it. The Meta Sports is way more fun than AI. Just, if only we to put this money into baseball. I agree with you there. Yeah, baseball has less constant waves of existential dread being poured upon the entire populace. No, they just reserve it for like Cincinnati and Pittsburgh and Mets fans. That's right.
Starting point is 00:27:16 You're right. Mets fans look to AI for a little bit of psychic relief. They're like, oh, this is not, it's not quite as dark. is what we're dealing. Yeah, this isn't punishing me as much. Only half the jobs are going away. That's not so bad as an 11 game losing. Yeah, no.
Starting point is 00:27:29 And most Mets fans are like, yeah, I would fire half of them. Yeah, they should be fired. Maybe we should do all of them. Yeah. I hope Juan Soto is the first one on that list. All right. Story number two, I actually, for whatever reason, I didn't cover this one as much. I talked to some sources in the sort of surrounding DC tech industry.
Starting point is 00:27:50 But I want to get your take on this. This is the Anthropic and Department of War Story. that picked up in February. I'm just going to read a little bit from Dario Amade's statement that kind of kicked off this whole thing. So he said, Anthropic understands
Starting point is 00:28:02 the Department of War, not private companies, makes military decisions. We have never raised objections of particular military operations nor attempted to limit use of technology in an ad hoc manner. However, in a narrow set of cases,
Starting point is 00:28:13 we believe AI can undermine rather than defend democratic values. Some uses are also simply outside of the bounds of what today's technology can safely reliably do. Two such cases have never been included in our contract to the Department of War and believe they should not be
Starting point is 00:28:26 included. And then he lists mass domestic surveillance and fully autonomous weapons. So can you first bring us up to speed on what unfolded and has unfolded there? And then, what is actually happening? Because I find this story, since I haven't looked at it as closely, kind of confusing. All right. So, just before the war in Iran... I think Dario is a savvy con artist. And to be clear, it's not you saying it, it's me saying it: he's a con artist. So, just for some background, Anthropic has been installed with classified access in the US military since June 2024. That's a very important detail. They were used in Venezuela and Kersian, whatever you call that. They were used throughout, and are still used in, the war in Iran.
Starting point is 00:29:12 So what happened was, Amodei said... I forget what the conversation was. Maybe he instigated it. It's kind of hard to tell. But some conversation between him and the US military was: we're not going to let you use this for mass surveillance of Americans, nor are we going to let you use it to control autonomous weapons. Now, the second one really pissed me off, because you cannot control anything with LLMs. You can't. If you controlled a robot with LLMs, it would barely move
Starting point is 00:29:40 because of the processing time alone. People say, oh, what about on-device, but shut the fuck up. You don't know how these work. That's not how this works. It's not an LLM. That threw me off as well. I know enough about AI. Why are you talking about LLMs?
Starting point is 00:29:53 Why was the media? I think they mean AI in general. No, no. They meant autonomous weapons. They 100% meant that. I know because I read every single article and every single statement about this; every single time, it was autonomous weapons. And to be clear, Anthropic in their own statement said LLMs are not consistent enough
Starting point is 00:30:12 to run autonomous weapons. Correct. Thank you, Dario. But also, it would make no sense to run a model based on language parsing and generation to steer a missile. So, that's the thing. Okay. So the first one, you say, was happening, though.
Starting point is 00:30:27 As far as you can tell, using these tools as part of intelligence gathering, sure, they probably were involved in the chain somewhere. I mean, were they? But the thing is, I can't confirm whether they were. No one can. Because Anthropic was already embedded. And they attempted to basically renegotiate the contract post hoc. And I'm not siding with the U.S. military here, but they tried
Starting point is 00:30:49 to say, we're adding these things, and they did it, mysteriously, somehow just before the war in Iran. So what I think, and this is my personal belief... like, it was a few days beforehand... Just to clarify what you're about to say: I'm just looking at this now. They're saying mass domestic surveillance and fully autonomous weapons, in that February statement, quote, have never been included in our contracts. So I had been given the impression that they were specifically called out in the contracts, as in, we will not do this. But actually, what I'm seeing here is that it's not that; those uses just weren't discussed at all in the contracts.
Starting point is 00:31:26 And Amodei is saying, hey, we never mentioned in the contracts these two things you might use it for, and we want this in the contract. So, okay, go on. But I'm only seeing this now, rereading the statement. Yeah, it's a little tricky. Anthropic 100% had visibility
Starting point is 00:31:51 into what the US military is doing. So I would not be surprised... I cannot confirm whether they timed this specifically to the war in Iran. Because suddenly there was this insidious, awful... Every single person who spoke like this should be fucking ashamed of themselves. I'm disgusted by it. There was this insidious thing of people being like, Anthropic is the ethical company.
Starting point is 00:32:07 I saw hashtag Jesuit Claude. Death penalty. Katy goddamn Perry being like, I just bought Claude. And it's like, you just paid a company that was actually part of this war. And people are like, well, OpenAI is worse. And then Sam Altman slid in and was like, well, we can do whatever. Then Sam Altman claimed that they had actually negotiated something that didn't allow the things that Anthropic wanted.
Starting point is 00:32:34 Then it turned out that Emil Michael, from the US military, said, so actually, we've agreed to all legal means. To be clear, I don't believe either of these companies give a rat's ass about any of this. I don't think they care about it at all. But Anthropic had this swell of good press because people thought that they were opposed to the war in Iran, when in fact they were directly part of it. Claude was used during it. Now, how complex was the use?
Starting point is 00:33:05 It was probably like, here's a bunch of images, where should we blow up? And it went, here's a school. And they went, oh. Which is great. And that actually happened. And then there were weird articles that came out saying, like, actually, Claude didn't do that. Yeah, you can't prove that, mate.
Starting point is 00:33:19 What I can prove is that Claude was used in the war in Iran, so whatever. But your conjecture, your conjecture is that the reason why Amodei brought this up was press. Yes. The size of that contract is worth jeopardizing when you're looking at, like, an IPO six months from now? Sorry, the size of that contract... Their military contract is up to $200 million, and the 'up to' is an important operative word. $200 million? They lose that money on inference in, like, two weeks.
Starting point is 00:33:49 Yeah, and they're looking to raise... I mean, their valuation is what, in the hundreds of billions? Three hundred and something billion. They'll probably IPO at 750, if they even make it. But that's the thing. No, they did it for press. So that could be a $100 billion move there, in theory. Yes. Yeah.
Starting point is 00:34:06 Well, also, the thing is as well, then the Department of War said, oh, we're going to designate you as a supply chain risk. Nothing happened. Then they were like, it's a supply chain risk, but we're going to keep using you for six months. Then there was a lawsuit. Anthropic sued the Department of Defense and said, if we don't have this removed, we might die. And then admitted... by the way, this was like my full Joker moment. During that motion that they filed, Krishna Rao, the CFO of Anthropic, filed a sworn affidavit where he said that Anthropic had only made $5 billion in its entire lifetime.
Starting point is 00:34:52 Now, when you go and add up all of the reports of revenue, such as The Information saying $4.5 billion in revenue in 2025, such as Anthropic themselves citing annualized revenue that would mean they made $1.5 billion in the space of a month in 2026, it adds up to way more than $5 billion. I have tried to talk to pretty much every major reporter that covers Anthropic's revenues, and they will not discuss this. It's the most conspiratorial I've felt this entire time. It is like everyone is trying to ignore a fire in a room. And the crazy thing is, that happened, nothing changed, and then a judge said, actually, Anthropic's right, we're not going to allow the supply chain risk designation. And now, apparently, the US government is using Claude Mythos.
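The "annualized revenue" arithmetic being disputed here is worth making concrete. A minimal sketch: only the multiply-one-month-out mechanic reflects what's described in the conversation; the monthly breakdown below is entirely hypothetical, for illustration.

```python
# "Annualized" run rate: take one strong month and multiply it by 12.
# All dollar figures below are hypothetical, for illustration only.
best_month = 1.5e9  # one strong month of revenue, in dollars

annualized = best_month * 12  # the headline number: an $18B "run rate"

# Actual trailing-twelve-month revenue can be far lower if earlier
# months were weak (hypothetical ramp: six slow months, five middling,
# then one strong month):
actual_year = sum([0.1e9] * 6 + [0.5e9] * 5 + [best_month])

print(annualized / 1e9)   # 18.0
print(actual_year / 1e9)  # 4.6
```

This is how a company can truthfully cite an annualized figure that implies a far bigger year than it has actually booked, and why a sworn lifetime total can sit well below what people infer from run-rate headlines.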
Starting point is 00:35:36 So in the end, nothing happened. Anthropic got a bunch of completely spurious press around them being ethical, despite the fact that they were already part of the military. They revealed their actual revenues. It was great. It's all good. That revenue story, that is amazing. Outside of you... I covered it; I learned about it in part from you.
Starting point is 00:36:01 I found only one article. There was maybe a Reuters or an AP article that talked about this, quote-unquote, shaky revenue math that's popular in Silicon Valley. So there's one piece I found where a financial reporter actually was covering, like,
Starting point is 00:36:16 hey, when you hear these numbers, there's a lot of multiplying by 12 or multiplying by 24 going on, and you multiply at the right times. But that was a big story. So, for the listeners to understand it, Anthropic had to, under oath, in a signed affidavit, right?
Starting point is 00:36:29 Under penalty of perjury, or whatever you would say in a corporate setting, release their revenues. And it was $5 billion to date, on $60 billion of investment and debt, I think, to date. And they spent $15 billion on compute so far. Yeah, $15 billion on compute so far. The other part of the story I did cover that I thought was interesting was the Undersecretary of Defense, whoever that was. Emil Michael.
Starting point is 00:36:54 That was Emil Michael, right. And he went on, and it was funny, it shows something about how the online commentary space works. He went on and said, hey, here's why we don't want to work with this product. If you watch him, he's basically like, this is a product that'll say it has a soul, or that, like, their company is saying that there's, like, a chance that it's alive. And what he was saying was, like, this is a wonky product, right? Like, this doesn't seem like the type of thing you want in a military setting, where you have
Starting point is 00:37:22 the CEO saying there's a chance it's alive, and it'll say it has a soul. This doesn't seem like a reliable piece of hardware. And what was the online commentariat's report? Pentagon convinced that Claude has a soul. So they completely flipped the valence. He was basically saying, I'm so sick of this. I'm so sick of the goddamn
Starting point is 00:37:45 AI bubble. I'm so tired of this. Yeah. I wish anything I did was... You've not read One Punch Man, have you? No. Okay. So this is a complex thing, but one of your listeners is going to hear this and love it. There is a character in One Punch Man
Starting point is 00:38:01 called King. Everyone thinks that he's the most powerful man in the world because of the King Engine, which is his so-called power. It's actually his heart: he is so anxious and scared at all times that his heart is going so fast that you can hear it. He has no powers. He's a regular guy. But because Saitama, the main guy, comes along and destroys anything near him, everyone thinks he's amazing. And there are multiple times during the story where a bunch of stuff happens around him and people go, wow, they must have all just died when they saw King. Wow, King must have destroyed them with the King Engine.
Starting point is 00:38:34 This is Anthropic. Anthropic is just this wasteful crap pile of a company, with services that break half the time, less than two nines of service availability now. And they have models that degrade at random; they gaslight the users; they rug-pull them on rate limits. But everyone's like, Anthropic's capacity is so... they're hitting capacity because they're so popular and their models are so good. It's like, ah, I'm going crazy, man. At some point, what I'm saying will feed into the mass consciousness, I guess. And at that point, I'm going to be insufferable. But it's like, every time I hear a story like this, I feel like I'm going insane. What are the main revenue sources, if we're being realistic about it? So if you're these
Starting point is 00:39:22 AI companies, my understanding is OpenAI, it's ChatGPT subscriptions. Yes. Anthropic is, like, Claude... API. API. Apparently it's API. Yeah. But here's the thing. I'm not accusing anyone of fraud, but Eric Newcomer had a piece where he said Anthropic had the Coatue venture capitalists in, and he shared the deck that Anthropic had shown them. And there was a bit where it was like, yeah, 85% of their revenue is API calls and 15% is subscriptions. Gonna be honest, I don't believe them. I just don't believe it. I don't believe that there is, what, $4-odd billion of API calls. And OpenAI apparently is the other way around, where it's like 85% subscriptions, 15% API. What would an API call be? So, for the listener, what's calling these APIs? So it would be an AI startup. It would be a business that's running its own systems built on top of the API. But that's the thing.
Starting point is 00:40:26 Even that question kind of gets at what I'm saying, which is: what the hell are you doing with this? I get AI startups that just sell things that have LLMs plugged into them, but it's like they're claiming they have all this enterprise use. And what I think it might be is that Anthropic has slowly... because The Information reported this recently, but I think it's been going on for a lot longer... Anthropic has started to push enterprise users onto the API, even when they're using Claude or Claude Code.
Starting point is 00:40:54 I think that's fairly recent, in the last few months. Right. But I also just think that these companies are making things up; they're saying it in decks because no one can prove otherwise. I want them to go public so bad. I want them to go public so bad. Never in a million years have I wanted a company to file an S-1 more. I want to see inside their laundry.
Starting point is 00:41:15 I want to go look around. I don't doubt you'll be the first to read those S-1s. I will be smoking a big cigar. It's going to be delightful. Here, before I get to the third story, let me tell you my new term I coined about AI coverage. All right, I just came up with this on the spot. But something else is going on right now that I want to call out,
Starting point is 00:41:38 which is what I call dread laundering. And what you do is you launder a sense of, like, despair or dread about one thing related to AI to help amplify a less-supported feeling of dread or despair about another. And here's where I've been seeing this recently. I think the technology business case for LLMs somehow being at the core of automating a bunch of jobs or destroying the economy is very weak. And I think there hasn't been a lot of good support for that because, again, these are just LLMs that we're building better apps on top of, and it's slow going. But there's a lot more focus recently.
Starting point is 00:42:14 It's like there's a dread quota. So how do we fill it, if that is losing some traction right now? So there's a lot of other coverage going on about destruction of the arts. Writing is going to disappear. Moviemaking is going to disappear. Education is falling apart. And you put that next to Dario Amodei talking about jobs or this or that. And you're laundering the dread from, oh, we have a text generator and people are going to be lazy and try to not write text, which is a real story, and an annoying one, and one that, as a writer, I don't like.
Starting point is 00:42:45 And you've laundered that dread over to, like, well, all these other bad things. If we're worried about that, that kind of justifies the dread in general. So, like, also, maybe my job's going away. Maybe the Terminators are coming. And I really wish these were separated. You could have an argument about: we have automatic text generators, and that brings up a lot of problems for people who produce text for a living. Let's talk about it. Then we have, over here, this claim that an LLM is going to take over an executive job, or is going to, you know...
Starting point is 00:43:13 And those fall apart under scrutiny. It's really hard to get a compelling case over there. But if you throw enough darts at enough things, you create a miasma of unrest in which it's hard to make out what the actual signals are or not. So it's just everything. It's like a pox on all the houses. Everything is terrible. So that's my...
Starting point is 00:43:37 No, I fully agree. And I also think that doomer porn clearly gets clicks. It's just that I think, when this is all over and the bubble bursts, every single person who engaged in it should lose their job, across the board. I know it sounds aggressive. It won't happen. But I think everybody who,
Starting point is 00:43:55 I think everybody who engaged in the doomer porn... Yeah, there are some people who tried to do it in good faith, but the ones, like the Axioses of the world, who genuinely sat there and fomented dread, they shouldn't be allowed to work in journalism for a minute. They should take a knee. They should step aside for people who actually live in the real world. It is a problem that we have to address. Maybe I talked about this on your show earlier this week. But I'm hearing from listeners and readers that, again, they use terms like, I'm stuck in a cage, having wave after wave of despair or dread crash on me, with no hope or option of escape from it. And I just am taking wave after wave. There's a responsibility aspect to it, right? Like, it is difficult for the normal person to be hit again and again from all different angles. Well, what if this is terrible? What if this is terrible? What if this is terrible? And there's a 'where there's smoke, there's fire' mindset that we're wired for. And it's really, I think, been very unsettling.
Starting point is 00:44:55 Again, I get unsettled by it, and I actually know the technology, and know that 98% of this is really not well supported, but it's just emotionally difficult to have to immerse yourself in wave after wave of everyone putting their full attention on: what angle can I find that makes this seem the worst? Like, that's always the angle that things are coming from. It's never from the: well, this doesn't make sense. What happened to that? Well, where's all this revenue? Hey, what about this story from three months ago? Nothing came of it. I mean, there was a guy who posted a video that I made fun of, and then he attacked
Starting point is 00:45:25 me when OpenClaw came out, where he said, literally, the singularity is here, like, in the next few days. This is it. Look at this graph. Line goes up. Next few days, singularity is here. And I kind of made fun of him. And then he recorded a whole video attacking me about how crazy my takes are. And I just want to say: okay, it's been four months. I don't see the robot army that was supposed to be here in a couple of days. But we never follow up on it. Yeah. Where's UltraClaw? Where's the Clawdbot that's going to chop my goddamn head off? But that's the thing. It's like, I think that there is an actual theme above all of this, and it's actually outside of the AI bubble as well,
Starting point is 00:46:01 which is short-term memory and long-term memory. People just say stuff, things happen, and then they forget about them entirely. Like, remember the Claude Code marketing push at the beginning of this year? It was The Atlantic that said, this is the ChatGPT moment, and it had all sorts of people building useless apps.
Starting point is 00:46:22 There was that whole surge of support for that. And now Anthropic is actively throttling their services. They are making their models worse. They are cutting off OpenClaw. Nothing. No coverage. None of the people.
Starting point is 00:46:38 Because here's the thing that I have with AI boosters. Even if they fundamentally disagree with me about the economics and all, they don't even seem to engage with the problems. I don't even mean this in an antagonistic way. I mean, if I was a pro-AI person, if I, I don't know, had a piece of metal in my head or something, and I saw some dickhead British guy being like, hey, they're losing billions of dollars, I would at the very least be like, I should probably look into this.
Starting point is 00:47:06 Yeah. I should probably make sure. And if I really liked this stuff, and I saw the companies screwing over their customers, I'd be like, wow, doesn't that change the story a bit? Nope. Mainstream media? Honestly, a lot of independent media just goes, eh, you know, it'll... something will happen.
Starting point is 00:47:25 It's like, when it comes to the doom, they will extrapolate as far as they need to. When it comes to the capabilities, they'll go, yes, it's going to be this powerful, that powerful. When it comes to the things happening in real life, they're like, it's complicated, yeah. You know, things happen, you know,
Starting point is 00:47:42 it'll be all right, though. And when I say 'all right'... I can't really tell you what that means, but it will be. When I say 'all right,' I mean everyone's going to make money. Not me, but the companies, who I love for some reason. It's so weird. That part confuses me. That part confuses me. The media class I'm a part of hates all billionaires except for, like, these three. Don't get that part.
Starting point is 00:48:02 Exactly. I don't get that part. All right. Story number three. This is in your wheelhouse. It has to do with the reality of the data center boom. I'll read you a quote from a Futurism article that includes you in it, so be prepared. Nice. The data centers powering your favorite AI chatbot are running low on helium, cash, and neighbors who don't hate them. And that's not even the worst of it. According to reporting by Bloomberg, about half of the data centers slated to open in the U.S. in 2026 will either face delays or outright cancellations. The publication interviewed analysts at market intelligence company Sightline Climate, which, in research first flagged by Ed Zitron last week, noted that 12 gigawatts' worth of power-consuming data centers are set to open in the U.S. this year. But here's the catch: they say only a third of those are actually under construction right now,
Starting point is 00:48:44 with the rest in a liminal pre-production stage, in which they could, and likely will, be canceled. There's a huge story going on here that's not being covered outside of Bloomberg and places where people really need to monitor things like the private credit markets and other things that could affect their investment portfolios. But it's not broadly known beyond that. What's going on with this illusory data center boom? So, every time you hear someone say, we're building a two-gigawatt data center, real simple: just say, no, you're not.
Starting point is 00:49:14 No, you're not. We don't know how long it takes to build a one-gigawatt data center, because no one has built one. I know that sounds crazy. No one has built one. But once again, and CNBC... I'm going to say it: MacKenzie Sigalos at CNBC, I'm specifically saying she has laundered the reputations of these companies. Because what happens is, Stargate Abilene,
Starting point is 00:49:34 OpenAI's, 1.2 gigawatts. They opened a single data center in September 2025, and then what was published was that Stargate Abilene was operational. Project Rainier, a 2.2-gigawatt data center in Indiana for Amazon. Fully operational. That's a quote from Amazon. No, it's not. 2.2 gigawatts is what they're saying. They claim to have half a million Trainium 2 chips, 500 watts apiece.
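The wattage arithmetic that follows is easy to verify. A quick sketch: the chip counts and 500 watts per chip are the figures from the conversation; the rest is just unit conversion.

```python
# Total draw of an accelerator fleet, in gigawatts.
# Figures from the discussion: 500 W per Trainium 2 chip, with claimed
# counts of half a million and then a million chips.
def fleet_power_gw(chips: int, watts_per_chip: float) -> float:
    return chips * watts_per_chip / 1e9  # watts -> gigawatts

print(fleet_power_gw(500_000, 500))    # 0.25, i.e. ~250 MW
print(fleet_power_gw(1_000_000, 500))  # 0.5, i.e. ~500 MW
```

Either count is well short of the advertised 2.2 gigawatts, and chip draw is only part of a facility's load once cooling and overhead are added on top.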
Starting point is 00:50:06 That's about 250 megawatts. They claim they're up to a million now. That's 500 megawatts. That's a lot less than 2.2 gigawatts. Because data centers take forever to build. We do not have the power. And people are saying, well, the power's getting built, that proves they're going online. The problem isn't that the power doesn't exist at all; it's that the power doesn't exist at the point of need. So Sightline Climate, I actually caught up with them in a recent newsletter,
Starting point is 00:50:32 where they said that 115 gigawatts of data centers are meant to come online. By the end of 2028, only 15.2 gigawatts of them are actually under construction. Now, this is really weird, because I did the math. Napkin math, forgive me. When you look at these, you say, okay, they have a PUE, the efficiency; we'll call it 1.35. When you use that and take that 15.2-gigawatt figure and divide it by 1.35, it's about 10 gigawatts of pure GPUs. That's about $285 billion worth of Nvidia GPUs. Why am I saying this? Well, Nvidia claims that they have visibility into half a trillion in GPU sales by the end of 2026
Starting point is 00:51:20 and a full trillion by the end of 2027. Where are they going, Jensen? Where are the GPUs going, Jensen? Where are they? Because Nvidia has sold... It's just that a billion people are building custom video gaming rigs at home. Come on. It's easy.
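The napkin math a moment ago can be reproduced directly. The 15.2-gigawatt pipeline and the 1.35 PUE are the figures from the conversation; the per-accelerator power (about 1 kW) and price (about $28,500) are my own illustrative assumptions, chosen to land near the quoted $285 billion, so treat the dollar amount as an order-of-magnitude sketch rather than a number from the episode.

```python
# Napkin math: construction pipeline -> IT load -> implied GPU spend.
PIPELINE_GW = 15.2  # capacity actually under construction (Sightline figure)
PUE = 1.35          # power usage effectiveness: total facility / IT load

it_load_gw = PIPELINE_GW / PUE  # ~11.3 GW; the conversation rounds to ~10

WATTS_PER_GPU = 1_000    # assumption: ~1 kW per accelerator, all-in
PRICE_PER_GPU = 28_500   # assumption: ~$28.5k per accelerator

gpus = 10e9 / WATTS_PER_GPU              # using the rounded ~10 GW IT load
spend_billions = gpus * PRICE_PER_GPU / 1e9

print(round(it_load_gw, 1))   # 11.3
print(round(spend_billions))  # 285
```

Set against Nvidia's claimed half-trillion of sales visibility, this is the mismatch being pointed at: the construction pipeline only absorbs a fraction of the chips supposedly sold.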
Starting point is 00:51:41 Well, actually, I think I know where they are. I think they're in Taiwan. It's just very weird, because what this means is that Nvidia has already sold too many GPUs. It has already sold more GPUs than there are data centers actually being built for them. It's crazy. And this is the thing: I bring this up with journalists. I bring this up with economists.
Starting point is 00:52:04 I bring this up with tons of people. And they're like, it's fine, they're being built, what are you talking about? I'm like, look at the data. And they go, ah. It's always, like, a weird wave-off. But this is the largest company in the stock market. And I think that their total revenue from the last few years is over $300 billion.
Starting point is 00:52:23 And they're claiming that they'll hit half a trillion by the end of the year. They have half a trillion... I think that's just for the year. They keep saying these numbers as well that don't match up. But let's say they're true. And if Nvidia beats and raises, so they beat their earnings estimates from analysts again, I think we need to start asking a real question about what Nvidia is doing with these GPUs, because, talking to some hyperscaler accountants I know, there is a way that they could be doing this where they're able to book the revenue without sending anything.
Starting point is 00:52:53 It's called a transfer of ownership. It's when you just sign a contract saying, yeah, you own these GPUs, they're sitting in my warehouse, but these are yours. And that counts legally. That's perfectly legal. It's very strange, and if they're not saying it, they should be filing an 8-K. But Nvidia's inventories are growing on their earnings as well, like it's a sign that something's being warehoused,
Starting point is 00:53:15 but I spoke with a few sources, and what it is is when a hyperscaler, say Microsoft, they don't buy a GPU from Nvidia. They don't go, send me a GPU, I'll put it in a server. What they do is they work with someone called an ODM, an original...
Starting point is 00:53:32 Equipment manufacturer. It's original device manufacturer or design manufacturer. I think it's design manufacturer. They build the servers, and they put the GPUs in there. Foxconn, also known as Hon Hai Precision Industry Co., Ltd. Hell yeah. I wish we had more normal names.
Starting point is 00:53:50 Wistron, Wiwynn, all sorts of companies out there. What they do... their revenues, all of these ODMs, are going up, crazy style, because they pass the cost of the GPU through as revenue. They buy the GPUs from Nvidia. They put them in a server. They sell them to a Microsoft or an Oracle or a Meta or an Amazon. And then they say, yeah, it costs this much, with the cost of the GPU in there. This allows Nvidia to hide a great deal of GPUs.
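The pass-through being described is simple to sketch. Every dollar amount below is hypothetical; the point is just that the GPU's full cost lands in the ODM's revenue line even though the ODM's actual margin is only the integration work.

```python
# Illustrative ODM pass-through accounting (all figures hypothetical).
gpu_cost = 30_000        # what the ODM pays Nvidia for the GPU
integration_fee = 5_000  # chassis, assembly, and the ODM's margin

server_price = gpu_cost + integration_fee  # what the hyperscaler pays

odm_revenue = server_price                  # 35,000 booked as revenue
odm_gross_profit = server_price - gpu_cost  # only 5,000 of real margin

print(odm_revenue, odm_gross_profit)  # 35000 5000
```

So an ODM's top line can surge mostly by passing hardware costs through, which is why those revenue numbers say little about where the GPUs physically end up.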
Starting point is 00:54:17 They're sitting in Taiwan. Quanta's inventories went up last quarter. I don't know if it's categorically because nobody's buying them and they're not being shipped. But for the most part, I think Nvidia is just pre-selling years of GPUs, and I don't know how this is not scaring people. Michael Burry brought it up briefly,
Starting point is 00:54:40 weeks after I did, just to be clear. And no one seems concerned about this, when in fact, if there's only 15.2 gigawatts of actual capacity being built, and 10 gigawatts of that is GPUs, Nvidia can't sell more GPUs unless it wants to put them in a warehouse.
Starting point is 00:54:57 But to the larger abstraction of data centers not getting built as well, it's like, we're dealing with fraud, then. If we've got 100-and-something gigawatts of data centers being announced, but only 15 of those are actually under construction.
Starting point is 00:55:15 And 'under construction' can mean anything. It can mean a scaffolding yard, which is the case with NScale's data center in Louton, England. Then that means fraud. That means that someone is doing fraud. That means that people are not actually building things, that people are likely buying land and speculating that a data center might get built there.
Starting point is 00:55:34 Perhaps they'll file some planning paperwork, paying their CEO six figures the whole time. Fermi is a great example. Rick Perry's Fermi, building an 11-gigawatt data center out in... I forget. It's Project Matador. Don't worry, though. They're not building anything.
Starting point is 00:55:49 I have a patch of land. The CEO just left. They apparently didn't pay their contractors. Fraud. So this is the thing. Everyone's talking about the AI boom with all this certainty. But the actual proof that things are happening isn't really there.
Starting point is 00:56:07 In fact, I did the maths, and it turned out that over 50% of the data centers under construction through the end of 2028 are for OpenAI or Anthropic. Every time Anthropic announces... they just announced a 3.5-gigawatt deal for Broadcom chips. Where are they going? Where are they going? No one asks. No one thinks.
Starting point is 00:56:30 No one tries. And the answer is, they're not going anywhere. These chips probably will never get bought. So, okay, let's walk through this a little bit, right? So it sounds like, I mean, Nvidia is selling them, they're selling them to these ODMs, right? So the ODMs are basically saying, we're getting contracts, so we'll keep buying chips, because there's a lot of money in this market. I just want to be clear.
Starting point is 00:56:54 This is how it's always worked. This is not a weird thing. This is how they build servers. Continue. Sorry. Because there's a lot of interest in AI, there's a lot of money that's raisable in AI, so you have a lot of entities saying, I want to raise money for
Starting point is 00:57:09 AI projects. This is leading to a lot of: we will now spend money with these ODMs; hey, we want to buy X number of chips set up in servers. But then there's nowhere to put them. I think I've muddied it up a bit. Okay. So you've got two stories. The ODMs: when a hyperscaler like Microsoft, which cited $37.5 billion of CapEx last quarter, buys servers, they buy from the ODMs.
Starting point is 00:57:35 The ODMs then put them in a warehouse in Taiwan and say, okay, when you're ready for the data center, let me know. Yeah, and these data centers are taking longer than, or are harder to build than, people realize. They've raised the money, they've made the orders, there's nowhere to put them, so the warehouses are piling up. Nvidia's like, hey, put them wherever you want. Like, we're getting our paychecks. Like, you can put them on a hot air balloon.
Starting point is 00:57:55 We don't care. The dodgy thing with Nvidia, though, is that it's unclear, because we're talking $100 billion-plus of GPUs that have been sold and have nowhere to go, which begs the question of whether they're leaving Nvidia's warehouses at all. Yeah. Because Nvidia could do an accounting treatment that just goes, yep, this is yours now, and then it sits here. And completely separate to that: Microsoft, Amazon, Google, their data centers are being built, though they're taking forever. But even then, there's not enough capacity to install these GPUs. Then, completely separate to that, over 100 gigawatts of data centers have been announced that are just not
Starting point is 00:58:35 being built. Yeah. And those are more than likely not hyperscaler ones. They are more than likely random fly-by-night operations. They're companies like Nebius, Nscale, IREN, these former crypto companies that have moved into AI. Are they, I know they're raising money. Are they paying money to the ODMs, like there are chips somewhere in a warehouse, either the ODMs' or Nvidia's, that they paid for, they just have nowhere to put them?
Starting point is 00:59:00 Or are they just raising money and paying salaries until it fizzles out? Hard to tell. I wouldn't be surprised if it's both. I think that there is... And with, like, a CoreWeave, for example, they buy from ODMs like Dell and Super Micro. They recently had a co-founder arrested for selling chips to the Chinese,
Starting point is 00:59:24 so that's cool. But yeah, I think that there is a lot of, yeah, we're building a data center, you know, business is rough. We've just got to find the land. We've got to find the power now. That's going to take another three months.
Starting point is 00:59:39 I'm going to need to make $650,000 a year. In fact, it's probably a fun exercise to go and look at the companies in question and see what executive compensation is. But then there's also just the problem that data centers are hard to build. Yeah. Well, this at least rhymes with the housing crisis. The magnitude is a little bit smaller. And tell me if I have this right.
Starting point is 01:00:01 Like my understanding of the financial crisis of the late 2000s is, okay, we have these, in that case, financial products, these mortgage-backed securities. And people want in on those, right? Because they're making a lot of money selling these. They're making a lot of money reselling these. But you kind of ran out of mortgages. But everyone still wanted to get into this. But there's no more mortgages to put in the mortgage-backed securities. So we say, well, we'll make these credit default swaps.
Starting point is 01:00:28 And we'll build derivative products on top of these. We just need things that we can keep selling, because there was more money that wanted to be spent here than there were things to actually spend it on. And of course, once you had built out this giant house of cards, built on leverage and bets on bets, when the middle of the house couldn't support it, the whole thing fell down. This feels like a simpler version of that. There's a lot of money out there that's like, we want to get into AI too, because every 16 seconds we're getting an article about how it's the most powerful technology ever and is about to take over and take all our jobs. So there's a huge amount of money that wants to go into AI, but there's not actually
Starting point is 01:01:01 enough places to put it. And that seems like a summary of what's going on now. And literally, there's not enough land and buildings that can take the chips to put the chips in. So we have all this money being spent, and Nvidia seems to be collecting a lot of it. But there's nowhere to put these chips. That seems to be what you're saying. It's just way more money that wants to go into this market than there are actual investable assets to put the money into. And so shenanigans follow and you get a very fragile system. And this is why we're worried: the private debt market is beginning to teeter a little bit because these investments aren't returning. Nvidia has so much of this money coming in with nowhere to put it. This feels like
Starting point is 01:01:41 that's the core of instability. So what happens when some of these contracts fall apart and Nvidia has a fall and it's, you know, X percent of the stock market? Is that the right way of seeing it? It's like it kind of rhymes with the financial crisis in that sense. So here's the thing. I don't think it will be as bad. It's not as much money at stake, by far. And it's not derivatives. It's not bets on bets on bets. So it's simpler. Not yet. Yeah. Well, that's the big thing. It's not derivatives. Private credit, the big scary thing there is, like, 30 to 40 percent is related to the software industry and software debt, which is a whole separate subject. You are right that there is a massive amount of speculation happening here. To quote Gordon Gekko, I think from one of the Wall Street movies,
Starting point is 01:02:25 speculation is the root of all evil. Someone correct me whether it's Wall Street 2, Money Never Sleeps. But it's weirder than that. This is unlike anything, because it's a very centralized thing of Nvidia and Nvidia's continued value,
Starting point is 01:02:43 and Nvidia's kind of load-bearing 8%, 7% of the stock market. It's also very weird that it's one company, effectively, doing it. But there are hundreds of billions of dollars of data centers that are allegedly getting built, and probably maybe half of that, maybe 75% of that, is funded by debt. The private credit industry, the thing that's scary is that much of private credit is funded by retirement and
Starting point is 01:03:06 insurance. So right now, I don't think data centers make up a ton of private debt. That's awesome. Like, at least not a load-bearing part. I will say the actual housing crisis comparison I'd make is venture capital. And it's not related to, it's actually not related to data centers at all. So what it is, is AI venture capitalists get paid, sometimes, as a percentage of the fund's value, the assets under management, like any kind of asset manager. So AI
Starting point is 01:03:35 companies right now are awesome for them, because they make the number go up so far, so big, so huge, because AI companies are frothy right now, and everyone has these AI companies. And in the subprime mortgage crisis, the way that people waved away the thing
Starting point is 01:03:51 about, well, your interest rate's going to change in a year or six months, was they said, well, I'll just refinance. In the case of AI startups, Elad Gil, famous venture capitalist, said yesterday that all AI startups should look to exit in the next 12 to 18 months. And it's like, okay, well, why would you buy them?
Starting point is 01:04:10 Because most AI startups are just wrappers for models, and you can't take them public because they lose a bunch of money. The subprime AI crisis I talk about is partially about companies not being able to operate because the costs go up. It's also, you've got $200 billion, $300 billion worth of venture capital tied up in AI startups that can't be sold. Right.
Starting point is 01:04:32 And how does that connect to data centers exactly? Well, data center customers are predominantly AI startups, predominantly two of them, Anthropic and OpenAI, but others as well. Cursor just signed a deal with xAI to rent GPUs, for example. What happens when all of those die? Who's going to pay your data center bills? Well, all the data centers are deeply unprofitable because of the horrible debt they require. It's just, it's not, like you kind of said, it rhymes, but it's not like for like. And again, I say, more people should be thinking about this.
Starting point is 01:05:07 Even the people who are AI boosters should be thinking about this, because this is an existential threat. This is not just Ed being a hater, or Ed hates this. It's like, the maths doesn't make sense. There's not enough space for the GPUs to get installed. There's not even things being built for half of them. If Nvidia sells, like, half a trillion dollars' worth of GPUs in the next year, they're not going anywhere. In fact, I worked out mathematically, based on their last quarter, it takes six months to install a single quarter's worth of GPUs,
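Ed's installation-lag arithmetic can be sketched as a toy simulation. The figures below are hypothetical, not his actual numbers; the point is only the dynamic he describes: if each quarter's GPU shipments take two quarters to install, uninstalled inventory grows by roughly one quarter's worth of sales every quarter.

```python
def simulate_backlog(quarters: int, sold_per_quarter: float,
                     install_capacity_per_quarter: float) -> list[float]:
    """Track uninstalled GPU value ($B) quarter by quarter."""
    backlog = 0.0
    history = []
    for _ in range(quarters):
        backlog += sold_per_quarter                          # new shipments arrive
        backlog -= min(backlog, install_capacity_per_quarter)  # what data centers can absorb
        history.append(backlog)
    return history

# Hypothetical: $50B of GPUs sold per quarter, but only $25B per quarter of
# install capacity (i.e. a quarter's sales take six months to deploy).
print(simulate_backlog(quarters=4, sold_per_quarter=50.0,
                       install_capacity_per_quarter=25.0))
# → [25.0, 50.0, 75.0, 100.0] — the warehouse pile grows $25B every quarter
```

Under these made-up inputs, the backlog never clears; it compounds, which is the "warehouses are piling up" claim in miniature.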
Starting point is 01:05:41 and I actually think it takes longer now. At some point, this falls apart, and everyone's going to act as if it was a big surprise, and they shouldn't. The warning signs were there from the beginning. Right, right. They literally cannot keep selling that many GPUs, because there's nowhere to put them, and you're building up such a supply. Right. So then, I mean, it looks like what's inevitable, financially speaking, is two things. There's going to be a stock market hit and a private, you know, retirement fund and insurance hit when this game of musical chairs stops, which is going to probably lead to much more financial scrutiny, probably regulation on accounting within these companies. And
Starting point is 01:06:25 then the venture capital firms, when they take the hit of, like, oh, we couldn't exit these companies, which we're not otherwise going to be able to get an exit out of if we don't get them sold right away, because, again, it's hard to build a useful, profitable AI product. You're going to get an AI winter. They're going to be like, well, forget this. You're going to have a few years where it's going to be very difficult to get AI investment. So I would actually reframe that slightly. I think you're right about the stock market stuff when it comes to the AI startups.
Starting point is 01:06:53 I think what's going to happen is a fire sale moment. It's going to be a panic. You're going to hear about an AI startup, maybe Perplexity, maybe Lovable, that needs to sell. They're like, we need to get this out the door. And a funding round will fall through, then an acquisition path will fall through.
Starting point is 01:07:12 The moment it becomes obvious that AI startups are trying to sell, everything will start collapsing. VCs will have to start telling their investments to sell. Sell right now. Get out of that. Except, when you look historically, AI startups do not get acquired. Windsurf. AI coding company.
Starting point is 01:07:35 Acquired by Google. Nope. They paid $2 billion for three people. The rest of them got sold to Cognition for a couple hundred million dollars, and most of them got laid off. What was it? Inflection AI to Microsoft. A billion or so dollars. Mostly went to investors,
Starting point is 01:07:51 mostly went to Mustafa Suleyman. What was the other one? Character.AI, bought by Google for several billion dollars, except that mostly went to the founders and some of the team, and of course the investors. But the actual products are not getting acquired. The actual IP doesn't exist.
Starting point is 01:08:08 So when these things come to exit, I don't think it's going to be pretty at all. In fact, it's really easy to clone most of these companies because they're just wrappers around LLMs. Right. And the top minds, which is what is actually
Starting point is 01:08:28 being acquired, they've all pretty much, now, there's not that many left of truly innovative researchers in this space who are doing startups. The big companies have snapped them up, for the most part. That's the issue. You know, Demis Hassabis got snapped up by Google, you know, Hinton's company got snapped up. There's only so many of these big, like, academic research minds, and they've all, for the most part, been acquired, and sometimes it's very expensive to do. You had to buy and shut down their company to get them. But I hear your point. The problem is, yeah, you cannot, if you're a VC, assume, well, anything we fund will also get bought for a billion dollars because our founders are so brilliant. It's like, actually, the brilliant founders are already, for the most part, probably under contract, and there's only so many of them.
Starting point is 01:09:05 And they're under contract with these companies. And if what you really have is the product, which is a point, and then we'll wrap it up after this, but I do think it's an important point, is that, like, we don't really know how to build very useful, profitable products. That's the odd thing about this space. Well, I mean, you can't. There's a couple popular products. The various coding harnesses like Claude Code, etc., are popular among programmers. Not a particularly profitable product space, though, because they're expensive to run.
Starting point is 01:09:35 The chatbots are popular in the sense that they have lots of monthly active users, but I don't imagine those are particularly profitable either, just because of the compute cost of people using them. And that's kind of it, I think, is the problem. It's very difficult to have your wrapper company actually be a large concern. So that's interesting. Yeah.
Starting point is 01:09:59 Yeah. So that could be, that's the story that underlies all these other stories. And I think, if it's true, it'll surprise a lot of people, because it's going to be: biggest technology ever, about to conquer everything.
Starting point is 01:10:13 Biggest technology ever, about to conquer everything. AI winter, stock market collapse. Never mind. Yeah, that's it. If that happens, that will be an interesting moment. I think there's going to be a lot of frustration among the American populace of, like, well, wait a second.
Starting point is 01:10:25 If that happens, you spent two years trying to scare me. You spent two years of, forget COVID, COVID's a cold and flu season in terms of disruption, this is, like, World War II-level impacts on our country. This is it. If it does fizzle, not only fizzle, but basically the conclusion of it is that everyone's, you know, retirement portfolio halves, and then that's that. That's not going to go well. I mean, that would have, like, political ramifications here in the U.S. I think you're going to see, you know, political parties rebuilding around how they think about these technologies. And maybe it won't happen. But I mean, I am confident on my AI startup thing, because every single AI startup is a wrapper of a model owned by someone else.
Starting point is 01:11:07 And the core thing, and then we can wrap up, I apologize, is that you cannot control the cost of a user with an LLM. You can't do it. Yeah. And also your most excited customers are the most expensive, which is antithetical to how a business works. And also, all of them are unprofitable. Yeah, this is very different, even though the SaaS model is now falling apart for other reasons.
Starting point is 01:11:31 It's a very different situation, where at least what made that tech sector so desirable, for example, was this idea that you can just scale up profits infinitely. Everyone who pays $20 a month for this is $18 of profit, and we can handle an unlimited number of users. And that, of course, got a lot of private equity eyes bigger than their stomach. Like, oh, great, we'll just build giant sales teams. And look, if line goes up like this with 10 salesmen, what if we have 100? But at least the underlying profit mechanics made sense:
Starting point is 01:12:04 it costs us negligibly more to have 100,000 users versus 1,000, and it's massively more income. This is very different, you're saying, than LLM-based AI. It's actually very expensive to service the users, and the more they use it, the more expensive it becomes. And that's a hard dynamic. Yes. And more users doesn't make it cheaper.
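The SaaS-versus-LLM unit economics Cal and Ed are contrasting can be put in toy numbers. Every figure here is hypothetical, invented purely to illustrate the shape of the argument: SaaS margin is roughly flat per user, while an LLM product's cost scales with how much each user actually uses it.

```python
def saas_margin(users: int, price: float = 20.0,
                marginal_cost: float = 2.0) -> float:
    """Classic SaaS: marginal cost per user is roughly flat,
    so profit scales almost linearly with user count."""
    return users * (price - marginal_cost)

def llm_margin(users: int, price: float = 20.0,
               avg_queries: int = 500, cost_per_query: float = 0.05) -> float:
    """LLM product: compute cost scales with usage, so a heavy user
    can cost more than their subscription fee."""
    return users * (price - avg_queries * cost_per_query)

print(saas_margin(1_000))  # 18000.0 — each extra user adds ~flat profit
print(llm_margin(1_000))   # -5000.0 — heavy usage flips the margin negative
```

With these invented inputs, the SaaS product gets more profitable with every user, while the LLM product loses money on exactly its most engaged customers, which is Ed's "your most excited customers are the most expensive" point.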
Starting point is 01:12:22 No, more expensive, in fact. Yeah, it's unlike a gym or something, where it's like, great, the more the merrier, because very few people actually use it. It's actually the opposite. All right, Ed, well, a pleasure as always. Thanks for having me, yeah. Yeah, you always bring out the radical in me,
Starting point is 01:12:36 but I think we've got to balance it out. I think people are hearing the strongest of boosterism all day long, so it's good to check back in on some of these stories and give a less impressed take. So we'll have to do this again soon, because there will be,
Starting point is 01:12:49 unfortunately, no shortage of new stories coming out that we're going to have to react to. Thanks for having me, man. All right, talk soon.
