Deep Questions with Cal Newport - Ep 386: Was 2025 a Great or Terrible Year for AI? (w/ Ed Zitron)

Episode Date: January 5, 2026

Ep 386: Was 2025 a Great or Terrible Year for AI? (w/ Ed Zitron) 2025 was a year that was saturated in AI news, from Deep Seek, through claims of economic “bloodbaths,” to GPT-5, Sora, and Chatbo...t girlfriends. Frankly, it was exhausting. As we now look back on 2025 an interesting question arises: all in all, did this end up being a good or bad year for AI? To help me answer this question, I’m joined by hard-hitting AI commentator Ed Zitron, who's been everywhere in the media in recent months helping to make sense of the wild claims being thrown in the public’s direction. Together we go through the biggest AI stories of the year to try to make sense of what just happened. Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here’s the link: bit.ly/3U3sTvoVideo from today’s episode: youtube.com/calnewportmediaINTERVIEW: Was 2025 a Great or Terrible Year for AI (w/ Ed Zitron) [3:16]Cal Reacts to Comments: Is the Internet Becoming Television? [1:58:25]  Links:Buy Cal’s latest book, “Slow Productivity” at calnewport.com/slowGet a signed copy of Cal’s “Slow Productivity” at peoplesbooktakoma.com/event/cal-newport/Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba?bbc.com/news/articles/c5yv5976z9poaxios.com/2025/01/23/davos-2025-ai-agentsblog.google/technology/google-deepmind/gemini-model-updates-february-2025/openai.com/index/sora/openai.com/index/introducing-gpt-4-5/ai-2027.com/fortune.com/2025/05/28/anthropic-ceo-warning-ai-job-loss/media.mit.edu/publications/your-brain-on-chatgpt/usatoday.com/story/tech/2025/08/07/chat-gpt-5-release-date-open-ai/85566627007/#:~:text=GPT%2D5%20release%20date,release%20date%20for%20Part%202newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-thiswsj.com/tech/ai/ai-bubble-building-spree-55ee6128nvidianews.nvidia.com/news/openai-and-nvidia-announce-strategic-partnership-to-deploy-10gw-of-nvidia-systemsnytimes.com/2025/10/02/technology/openai-sora-video-app.htmlanthropic.com/news/claude-opus-4-5ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89youtube.com/watch?v=Z_WEmjygNK0Thanks to our Sponsors: This episode is sponsored by Better Help:betterhelp.com/deepquestionsreclaim.ai/calexpressvpn.com/deepcalderalab.com/deepThanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Discussion (0)
Starting point is 00:00:02 So much happened in the world of AI in 2025 that it can actually be hard to keep track of it all. I mean, remember DeepSeek? That was in 2025. As was Dario Amadee saying that we were going to lose half of white-collar jobs to AI as well as GPT-5's release, the release of Sora, AI looking like the best investment ever, followed by AI being described as a giant bubble that was going to bring down the economy, followed by that bubble being described as actually not being so bad.
Starting point is 00:00:32 This was also the year where Navidia CEO Jensen Wong took to stage in a conference wearing a jacket that, well, I'll be honest, looks like it came from the prop department for my Mad Max movie. Jesse, let's put this on the screen here. I mean, dude, you're a computer scientist. You were in a racer jacket. I love it. I'm here for it. What I'm trying to say is a lot happened in the world of AI in the year that just ended. And the key question that I've been grappling with is, did this year, in a lot.
Starting point is 00:01:02 end up being a great year for AI or a terrible one. I would believe either answer, and so much happened, it could be really hard to try to keep it all straight. So here's what we're going to do today. We're going to try to get an answer to that query. To help me in these efforts, I've invited to join me Ed Zittron. I think one of the big missed stories of AI in 2025 is Zittron himself, who hosts the Better Offline podcast and writes the Where's Your Eds at Substack?
Starting point is 00:01:29 He rose to become, I think, one of the more informed and important AI commentators out there. The secret to Ed's success is pretty simple. He just does his homework. He actually talks to sources. He talks to reporters. He reads earning reports. He gets leaked information. He talks to people within these companies.
Starting point is 00:01:45 He puts together the pieces. Old-fashioned shoe leather reporting on what's actually happening with these businesses, as opposed to reporting on the stories. These businesses are telling about what their technology may or may not do. my honest opinion, or at least my humble opinion, I think he's probably the most important AI commentator that you haven't yet heard about. So Ed is going to join me. And what we've done is we've pulled the biggest AI stories of 2025, one per month for the entire year.
Starting point is 00:02:13 We're going to go through them in order. And Ed is going to help us make sense of what was going on behind the scenes and what these stories actually mean for the AI industry writ large. We'll end up with the conclusion of just how good or bad this year actually was for AI technology. But by the time that we were done with this episode, you will be more or less fully up to speed with where we are at this moment in the world of AI and what is likely to happen in the near future.
Starting point is 00:02:42 All right. So let's get in this episode. As always, I'm Cal Newport. And this is Deep Questions. Today's episode was 2025 a great year or a terrible year for AI. And we'll get right into this. After the music. All right.
Starting point is 00:03:21 So, Ed, we got a lot to figure out. I got to point out something first, though. Okay. This is something I don't normally do. But for those who are watching, I put on a jacket. Nice. To try to compensate for your English accent.
Starting point is 00:03:34 A jacket for the British. I think it's going to make me look a little bit more scholarly and erudite. That's my, that was my strategy. I'm wearing a sweater that I've worn once. And I'm like, I guess I'm warm, but I look weird. But it's fine. That's my beard.
Starting point is 00:03:49 Yeah, but you sound, you know, but it sounds, I sound British and I can't hide them. Yes, and so that that gives you an advantage on me, but I think my, my blazer were kind of balanced it up. But yeah, I think we're good. I think we should be good. You're wearing a sweater in it, in Las Vegas, though. So that is, that should take points away. It gets cold here.
Starting point is 00:04:05 It gets cold here sometimes. I don't believe. I went once in July. I'll never believe anything else. Yeah, okay. I can understand that. All right. So we're going to try to figure out what the hell happened in 2025, right?
Starting point is 00:04:16 You and I both were covering AI in that year. It felt like all the things happened. There was no quiet period in that year from the AI front. And so what I wanted to do is go through month by month and hit some of the big headlines. And you and I will try to figure out what was that, was that good news or bad news for AI? What actually happened? So it's going to be like a trip down a sort of frustrating memory lane. All right.
Starting point is 00:04:41 Let's start in January. I actually forgot that this was in 2025. I thought it was earlier. Man, it was a long year. All right. In January, we get Deep Seek. Deep Seek, the Chinese AI app that has the world talking. Let me read the first sentence of a BBC article from that period.
Starting point is 00:04:59 Deep Seek, a Chinese artificial intelligence startup made headlines worldwide after it topped ad download charts and caused U.S. tech stocks to sync. In January, it released its latest model Deepseek R1, which it said rival technology developed by chat GPT maker open AIs and its capabilities while costing far less to create. This was like a huge deal that no one talks about anymore. Explain to my listeners, what the hell is Deepseek? So Deepseek was a really interesting one. I remember I was on a plane. I was just started to move back to New York and such.
Starting point is 00:05:34 Like I spent a lot of time there. And I remember reading about this thing. And what it was was that it was a model that was trained for less money than other American models. So American models that cost like $50, 100 million or more to train. DeepSeek, apparently cost $5.3 million, I think, to train. It's really weird because it spooked the entire market. Like, everyone freaked out. And I remember thinking, this is an obtuse story to freak people out. Like, it was just like, even trying to explain it, because I did like a lot of media at the time, I was explaining it to people. I was shocked that people even had any interest in model
Starting point is 00:06:12 training. But the big thing that spooked people was, it was kind of the thing that's shown a spotlight on the Nvidia problem, which is that Nvidia is like the only company really making money in this era. And I think the people started to realize, oh crap, our entire stock market is based on that. And it also made it clear that all the American model companies don't really give a crap about any kind of efficiency or anything. And the reaction to it was great. Sam Orkman suggested we ban it. That was my favorite bit. They were like, ah, yeah, the sneaky Chinese are going to, it's because they might be able to see inside things. We can't possibly trust them.
Starting point is 00:06:53 What was really good as well was part of that, I literally was just reading about this yesterday, part of what was funny about it was part of OpenAI's complaint was, yeah, they might do IP theft. It's like, no, we only let American large language models do that. We couldn't possibly have the Chinese take away our plagiarism. machines. No. We are the world leaders and plagiarism. Yeah, there are you. Exactly. We can't have the Chinese steal our things. That's our job. But what was also interesting was they were like, should we sue them? Because there's a process called distillation where you basically take another models outputs and you use them to train another model. That's a very truncated version. And it was, they used chat GPT outputs to train deep seek and that made people pissy. The other thing was, was it was a reasoning
Starting point is 00:07:41 model, the R1 model, and Open AI had at the time only been out for a few months with its reasoning model 01. Yeah, that was a December 24 release, if I remember. I think it was September. Or September, okay. It was September, because it was the run-up to that was this whole thing, people like, oh, it's called Q-Star, it's called strawberry. It's going to change everything. It didn't change anything. It really, actually, it did change something. Reasoning models gave them more excuses to burn AI compute. But yeah, this whole thing was great for me. I did a bunch of media hits about it. But it was peculiar because it was like quite a nuanced story. And then you saw all of this xenophobic stuff being like, oh, oh, well, the Chinese, they're lying. They're lying.
Starting point is 00:08:23 They put out a paper about this. They showed people how it was done because they trained, they had to find a cheaper way to train because they only had, they had, I think, I forget, maybe they had 800 chips. They had quite old GPUs. And the thing is, it wouldn't talk about Tiananmen Square. And people were like, oh, look. this is proof that it's bad. It's like, yeah, it is bad. It does that. But are you shocked?
Starting point is 00:08:46 There's something that came out of China had censorship. Yeah, but it was, yeah, go on. What's amazing about the story, though, is it went away. Like, I mean, all the points you're talking about are fair points. And, like, the biggest destabilizing point for the industry was this idea of you don't need the very largest data centers. You don't need, like, the custom AC, Microsoft, 40,000 GPU data centers to build really useful language model-based AI.
Starting point is 00:09:11 But those big companies are dependent on the idea that only they can do it. And so it was almost like people didn't want that to be true. So we just forgot about it. Yeah, we memory hold this bad because at the time it was like, sure, because I got asked quite a lot of open AI, surely they're going to make a cheaper model now. They did not. Anthropic, right? Because if they, if they said that that was possible, their idea was we're now going to
Starting point is 00:09:38 spend five million versus 100 million on this, they would then have very little justification for raising so much money. But what about like the nano models? But then it didn't Open AI do, they have some cheaper to use models. Cheaper to use. And that's the thing. People love to use this as proof that the cost of inference is coming down. So inference being how an output is done.
Starting point is 00:10:00 And they're like, well, the models are cheaper. It's like, yeah, if you sell something cheaper, it's now cheaper for someone to buy. there's no proof that that's actually cheaper to run. And indeed, it would have been so easy for them to just say, actually, this is a cheaper model. It costs this much. The fact they didn't means that it's still unprofitable, which is crazy. But the thing is, even Deep Seeks models aren't, like, no one proved that they're profitable to run. Yeah.
Starting point is 00:10:26 And so, but I think everyone memory hold it because they, I don't know, I think the media just was willing to. That there was, this was just a narrative that they could just get rid of. The Chinese angle must have made a difference because to me, an even bigger story, which I didn't really know until talking to a source at the end of 2025, the biggest story that's not being talked about is if you look at coding agents and you look at Cursor in particular, to me, the big economic story was the fact that Cursor at some point quietly just said, we're going to train our own model. We don't need a frontier. We don't need a frontier model. We'll start with open source weights and train them themselves. Now, whether or not that model is profitable for them, I think that opens the, door. Curser has been working on their own models forever. Yeah. Ever and ever. Maybe the reason,
Starting point is 00:11:12 they raised $2.3 billion, so maybe the plan is for them to train their own one. But it's, they also, there was a story back in September that Tom de Tan over a newcomer. Apparently, someone said that cursor is sending 100% of their revenue to Anthropic. So they're still, they're one of Anthropics' largest customers. So it's, it's really interesting, though, that trying and they've gone very hard on composer and things like that. So maybe they are trying that for real now. Maybe they, I mean, they're capitalized. But the question is, to what end? Is it more profitable? Because if it ends up just being unprofitable and they don't pay Anthropic, that would also be very funny. No, I mean, I think the future has to be,
Starting point is 00:11:54 and again, that's more later if we need to. I think the future is really going to be small models, models that fit on on machine, right? Yeah. What is the only thing that's profitable is I'm not spending any dollars to run inference because this two billion parameter model, which is only really trained to do the very narrow thing I needed to do, which is like understand your spreadsheet program and help you do it, can run on your phone. It can run on your chip, you're paying for electricity. Like that's got to be the only way that this is profitable, but those companies cannot be large company. Now you have a, you have 10,000 smaller companies instead of open AI as the new Microsoft. Yeah, I also think the small language model stuff, because small language model is
Starting point is 00:12:32 just the large language model with less parameters. And while it is possible to do edge, I just wonder how useful those edge. Like, you can run on device, but how long does it take to run a device? Invidia put out something like a DGX Spark box thing that can run large language models. The question is, at the end of all this, is it worth it? Like, is spending, maybe it is worth three grand, 12 grand, whatever for one of these machines? But the company's building these models are not optimizing for that. They're not building models with that in mind. Nvidia has built that box so they have something else to sell. It's been in the works for a while, but it's like, we don't, no one had, I can't find anyone who has run client side, who has used
Starting point is 00:13:19 it like in that manner, who's like, oh, I do all my coding, but I do an on-device one. I'm sure they exist. But the fact there isn't a growing community, if that suggests that that might not be viable either. But I think any future large language models will have to be on device. It's just the question is, does that happen at any kind of scale? Yeah. All right. So other thing in January was agents. I tracked this down
Starting point is 00:13:41 recently. This is when the chief product officer of OpenAI said the quote that then got translated by Axios into 2025 is the year of AI agents. Axios does this, by the way. I don't know if you've seen this. As a reporter, it's really a pain.
Starting point is 00:13:57 They invent a quote. They paraphrase what someone said into a better form and then we'll say like this headline is 2025 is the year of agents open a i cpo says you would assume that means that the open a i cpo said 20 25 is the year of a i agents he did not now he talked about 20 he said things that was less quotable they did the same thing with the bloodbath and dario amadee he never actually said it was going to be a bloodbath but they had a headline that said it this year is going to be a blood bath dario amade says so anyways i watched the no axios had a quote as well where it was like this is proof that i is taking jobs and then you read the study and it's one line saying yeah we kind of see some effect it's
Starting point is 00:14:36 it's it's very frustrating because it's it's it's not helping sorry i'm just going on endless yeah i know so it does it does i actually told i was i was talking i was talking to uh speaking of a skeptics i was talking to the the gary marcus not long ago and he happened to be on his way to do do something at Axios. And I was like, you got to tell them to stop doing the headlines because I keep getting dinged by fact checkers afterwards. Like, we cannot find evidence of this. All right. But anyways, early in 2025, this is when we got agent excitement. And I think this kicked it off. And then around the same time is when Sam Altman wrote a blog post that said they're going to join. Yeah, reflections, which was probably going to, agents will probably join the workforce this year
Starting point is 00:15:23 and material impact their output. So why would what are, why do they start? talking about AI agents sort of out of nowhere in early 2025. Well, because they needed something to keep selling this crap. And they launched Operator within January, I think. Maybe that was February. And it didn't work. And this is a failure across the board with the media. They all went, yeah, operator, it can take actions in a breath.
Starting point is 00:15:48 No, it can't. Wait, okay, it can take actions in the same way that if I just throw a brick, it will, that is me, I don't know. playing a game if you consider it. Like, you can take abstractions from abstractions all you want. But yeah, they, it's this idea. It's just marketing. It was marketing in mythology. The agents, because agents, much like the term AI is a marketing term, and that's an Emily Bender, Alex Hannard quote, there is this thing of agents conjure up this image in your head of like, oh, an agent that goes out and does something for you. Now, agents going back to 2023, literally just spent chatbot.
Starting point is 00:16:25 like that was what it originally meant. But agents within this era were meant to be digital labor. And I take that from Mark Benioff and Salesforce and agent force, where they're like, oh, it's digital labor. Sam Altman's comment was, and he always says may or probably, but it's like agents may join the workforce. Egregious lie. There was no proof at that time that it was even possible to do it. And guess what? Where we are today, it's not possible either. have a coda because at the at the end of 2025 we'll get to a new story where spoiler alert open i takes all their resources away from agents because it wasn't working but they were they were excited about it i found the bini off some bini off quotes by the way speaking of egregious he didn't say probably joined the workforce like sam did he said it is going to not only are they going to revolutionize the workforce they're going to create two to five trillion dollars and economic activity. $2 to $5 trillion from agents.
Starting point is 00:17:29 Wow. Yeah. So, you know, almost there. All right. So agents become a big thing. What were they,
Starting point is 00:17:34 to me, I always think, like, what is the, there's a different flavor the AI companies are typically pushing. Like, they have to have a flavor of excitement
Starting point is 00:17:41 because of the investment train. Where were we? I don't really remember. Like coming out of 2024, that was more, what, like a, AI superintelligence. Like,
Starting point is 00:17:49 they were pushing a different message back then because I remember thinking, the shift towards agents, and therefore a shift towards we will just be in the workplace, helping your bottom line. That felt like at the time a shift towards a more pragmatic vision. I don't know if I agree. I think at that time, that was when you started hearing people say coding agents. Yeah.
Starting point is 00:18:11 And coding agents was their favorite one. I'm sure you have some stories coming up where coding agents, oh, you can get them. And later in the year, Anthropics spreads and bullshit around this as well, where it was they go out and do things autonomously for you. That was the whole thing that was being pushed. I know, because I read every agent's story because I found it so repulsive. The idea was that they, clearly in the last year in 2024, they kind of squeezed all they could have out of chatbots. Like, no one was like finding you, they couldn't do more things. They weren't really sure what to do. So they went, agents are coming. Yeah. And what will they do? What do you want them to do?
Starting point is 00:18:50 Because they might, might do that. They can't. But they might. What if they did? Wouldn't that be good? Please pay me. Well, wait, but so I think by the time this interview airs, I have a piece out on agents. I talked to, I couldn't talk to anyone in the industry, but I talked to someone industry adjacent, someone who made the main benchmark you used to evaluate coding agent. And like, so here's the story. SWE bench?
Starting point is 00:19:16 Terminal bench. Yeah. So I was going deep on what are these agents? And here's what I learned. Okay, so there's two ways AI is helping coders, and they get mixed up. So there's sort of the tab complete way, which goes back early codex, 2021, even pre-chat GPT, which is, it's all based on one-shot queries to a language model. So that is like, I'm writing some code right now, and I want it to like finish, I'm trying to like write a function to do something, just like finish this thing immediately that I'm writing. And that is powered by, like, you make one query to a language model.
Starting point is 00:19:51 So behind the scenes, it's here's the code. that we've written so far, how do you think I should finish what I'm writing? So, like, that's right in the sweet spot of LLMs where you're trying to complete or auto-complete. Because that's basically how they work. They are predicting the next token. And programming languages are highly structured, so they're very predictable. So, like, that's, but that's been around pre-chat GPT, but those work pretty well. Those are now integrated into most developing environments. My students, I'll call it tab complete. You just, oh, I don't want to, yeah, that's, cursor calls it that as well. Yeah. I don't know if you'd say they work really well.
Starting point is 00:20:23 Brown from the internet of bugs describes it as makes the easy things easier and the hard things harder. Fair enough. It makes it, it makes intro computer science problems that's very well. Which still has utility. Yeah. Which is like that's something. The main, when I talk to real programmers, like the main thing they say is useful is they don't have to look up interfaces for function and libraries. So you're like, okay, I can just tab complete a function call here. Oh, this is all the parameters. Okay. I would, otherwise I would have had the Google stack overload and whatever. But the thing is with that, though, is that's useful. But if it's querying libraries, could it not get that wrong?
Starting point is 00:20:59 I mean, you still have to check its work. But maybe that is quicker. Yeah, yeah. So caveat M-Tur. Then you had agents emerge, right, which can do vibe coding, right? So, like, agents was more like, I want you to create a prototype of a dashboard. I want you to add this to a personal website. And it does multiple steps.
Starting point is 00:21:20 So it's making multiple queries to an LLM. So it'll ask the LLM for like, it'll explain what's going on. Here's the tools available. What's the plan? And then it'll go step by step. Okay, here's the output of that last step. What do you want me to do next? So it's a program that's executing, that's what makes it an agent is that it's executing
Starting point is 00:21:36 multiple steps, each of which it's based on its own LLM query. What I learned from the people in the field is in one sense, this worked really well in that it can vibe code. If you say, like I want you to produce like a prototype or whatever, it could actually get through multiple steps and produce a prototype of whatever. And it turns out it was like pretty good at this because all of the stuff you have to do to create a computer program you can do is text-based commands in a terminal. So the tools that the LLM had to work with were text-based commands.
Starting point is 00:22:08 All of these coding agents operate in a text-based terminal environment. So on the one hand, like, it could do this vibe coding stuff pretty well in the sense of like it could actually produce a program that more or less did what you said. I got to push back, cow. But wait, I'm going to let you push back in a second. is I'm going to push back on myself first. But on the other hand, these things that were being vibe-coded have no economic utility. So, like, in the abstract, right, if you're like, can we have a multi-step, multi-step execution,
Starting point is 00:22:34 call an L.M a bunch of time to do bunch of stuff on its behalf. And on the other end, end up with an updated website or a web-based dashboard that, like, more or less does what you say. That it can do that. But the economic utility of that is very limited. All right. Can it do that? Can it do that?
Starting point is 00:22:50 Because that's the thing. Vib coding, I believe, is one of the greatest frauds of all time. Because if you go on like Replit or lovable or what have you's subredits, you can see people struggling with basic things. While it may get one thing right once, it's not going to get it done every time. And on top of that, vibe coding doesn't, like, if vibe coding is, I am a non-technical person building software, it is a lie. It's a fraudulent thing.
Starting point is 00:23:16 Because you cannot just vibe code without knowing stuff. You have denied to read code because something will break. It will not make stable or reliable or secure software. On top of that, the multi-step things... It does create if you need... There's like a real use case, right, for someone I know. If it really is just like, I want a web-based interface for the silent auction for my kid's school. And I don't know how to program.
Starting point is 00:23:41 Like, it can do that. That it can do. Can maybe do that. Or in the large language model component, it's like, will this work this time? Oh, yeah, it might take you. But you can work with it and finally get that thing to work without really having to know. But you do kind of still have to learn a lot of stuff. Fair enough, because the people I've talked to doing this without coding background, they now know a lot about setting up accounts on these different like AWS clouds and different coding environments. And you do have to learn. Yeah. Which kind of at that point, that's not an agent. At that point, that's just, that's just generative code that may or may not work. And then you need to do all the infrastructure. things. It gets back to the thing of what is a software engineer. And it's like a software engineer is not just writing code. I just think that even now, and I'm not saying this is a criticism necessarily, even then the marketing is so powerful that even we fell into that trope of like, well, it can do this. Can it? Can it, though? Can it do it every time? How much, how replicable is this process? How
Starting point is 00:24:45 realistic is this. I think you're right, though, it's like, it has some use, and I've heard people say this is what with like MVPs. Like, you need to fudge something together for an investor, and you need to do it quick and dirty. It doesn't have to be perfect, but that's to kind of resemble the form and function. I've heard people do that, and it's great. Yeah, that's what I've heard. And I've heard dashboards. People like dashboards. So we want internally. So, yeah, okay. We're in January, man. Okay, we got to roll. All right. We're going to take a quick break here to hear from some of the sponsors. that make this show possible.
Starting point is 00:25:18 I want to talk about our friends at ExpressVPN. So I just heard something mind-blowing. Netflix has more than 18,000 titles globally, but only 7,000 of those titles are available in the U.S. You're missing out on literally thousands of great shows unless you're using ExpressVPN. You see, in addition to the world-class protection they give for your Internet activities,
Starting point is 00:25:41 ExpressVPN lets you change the location from which the internet thinks you are coming from, which means you can change where Netflix thinks you're actually located in the world. ExpressVPNS servers in over 105 countries, or in exactly 105 countries, rather, and all 50 United States. You can gain access to thousands of new shows
Starting point is 00:26:00 and never run out of stuff to watch. So, for example, when I was in London recently, and I was using Netflix in the hotel, there's shows there that we don't get here in the U.S., like Top Boy or Poldark. If I wanted to watch those shows right now, it would be super easy. I would just have to open the ExpressVPN app,
Starting point is 00:26:16 select United Kingdom, refresh Netflix, boom. Now I'm seeing the United Kingdom shows. Now if you're going to use VPN, use ExpressVPN as the one I recommend because it's easy. It works on all devices, and it's rated number one by top tech reviewers like CNET and The Verge. So be smart and stop paying full price for streaming services and only getting access to a fraction of their content.
Starting point is 00:26:37 Get your money's worth at ExpressVPN.com slash deep. Don't forget to use my link at expressvpn.com slash deep to get up to four extra months of ExpressVPN. This episode is also sponsored by BetterHelp. This new year, you don't need a new you. You just need to feel lighter. Think about it. Every January, we're told to add more, more goals, more hustle, more change. But what if feeling better in 2026 isn't about adding?
Starting point is 00:27:09 What if it's about letting go? letting go of what's been heavy or stressful or holding you back. Therapy with better help can help you see what's been weighing you down. With a licensed therapist, you'll gain clarity, perspective, and emotional space for the possibilities ahead. You don't have to reinvent yourself to move forward. You just have to make room for the lighter, truer version of who you already are. And if you're considering therapy, consider better help. With over 30,000 therapists, BetterHelp is one of the world's largest online therapy platforms,
Starting point is 00:27:39 having served over 5 million people globally, and it works with an average rating of 4.9 out of 5 stars for a live session based on 1.7 million client reviews. Make this year the year of you letting go of what's heavy with BetterHelp. You can't step into a lighter version of yourself without leaving behind what's been weighing you down and therapy can help you clear space. Sign up and get 10% off at BetterHelp.com slash deep questions.
Starting point is 00:28:07 That's BetterHELP.com. dot com slash deep questions. All right. Let's get back to my conversation with Ed. All right, February. Oh, I know. So it's not going well for AI so far. I mean, at least.
Starting point is 00:28:20 No, great. Well, let's get to February. February, I think, was the quietest month of 2025. There were two models released that I don't remember at all. And I want to see if you remember these at all, too. The two models that were released in February 2025, Gemini 2.0. And this one I really forgot. OpenAI is GPT 4-5, which I ended up learning about later.
Starting point is 00:28:43 This was them fine. Tell me if I have this right. This was going back to the scaling, you know, the scaling article I wrote, I talked to you for that. This was the result of the project they started right after GPT4 where they said, we're going to make the model 10 times larger. We're going to make our data center 10 times larger. And the result is going to be how 9,000? And it was it. And it was the like, oh, crap moment where they're like, oh, it was.
Starting point is 00:29:05 And this is what they eventually released out of that, I think, was 4. Yes, and I absolutely knew that this was coming, so I have the tweet up, and I want to read just a little bit of this, because it really tells a beautiful story. What are you reading? Your tweet from February 27. All right. Okay. GPT45 is ready.
Starting point is 00:29:26 Good news. It's the first model that feels like talking to a thoughtful person to me. I've had several moments where I've sat back in my chair and been astonished at getting actually good advice from an AI. Bad news. It's a giant. expensive model. We really wanted to launch it to Plus and Pro at the same time, but we've been growing a lot and are out of GPUs. We will be adding tens of thousands of GPUs next week and roll it out to the
Starting point is 00:29:48 plus tier then. Hundreds of thousands of GPUs coming soon. I'm pretty sure y'all will use every one we can rack up. This isn't how we want to operate, but it's hard to perfectly, perfectly even predict growth surges that lead to GPU shortages. A heads up, this isn't a reasoning model and won't crush benchmarks. It's a different kind of intelligence and there's a magic to it. I haven't felt before. Really excited for people to try it. Yeah, this was when, you're right, this is when people started going, hmm, I don't know about clammy Sam, um, you're starting to get a little bit worried. Yeah, but never want to hear magic. It's magic, though. Magic. It's so good that I can't tell you why it's good. It's just going to be magic.
Starting point is 00:30:30 Yeah. So, so cool people giving billions of dollars. So from what I understand was, they so they based by reporting for an article we'll get to later they knew about a year before this that they were in trouble going into the summer 2024 for sure they uh this project Orion their sort of next large model training after GPT4
Starting point is 00:30:52 was not generating the same leaps in performance that they had seen before and so this became a problem this is why my understanding is in the fall of 2024 tell me if I have this more or less right they began switching to talk about things like 01, reasoning models, models that were tuned. So they were on, not even on this 4-5 base, but like, I think those original ones
Starting point is 00:31:15 were actually tuned off of the GPT4 base, right? So they were taking... I think so. But Orion was such a mess. There was a Wall Street Journal story towards the end of 2024 where it was like, it's costing a bunch of money and it keeps, it isn't getting better. Yeah. And I think that they were just...
Starting point is 00:31:32 And that was pure scaling. That was their last pure scaling play. is they did the exact same thing they had done for GPT4, and they're like, let's just do that bigger. And it was, so that's expensive because that's a big model. And it just wasn't getting much better.
Starting point is 00:31:47 And that's why my understanding is they switch towards these tuning things. Because now they're like, well, what we'll do is we'll tune an existing model to do well on different benchmarks or give them specific features and talk about those particular features. So like reasoning is what really matters, not, uh,
Starting point is 00:32:03 you know, this model is just like, it's, much more better at everything. Like that was kind of the GPT4 experience for a lot of people. What I think it is is it's test time compute. It's just reasoning as in like instead of I ask it to write a fanfic about Scooby-Doo doing Tiananmen Square, instead of it just burping that out, it breaks it down into steps of what is Scooby-Doo,
Starting point is 00:32:24 what is Tiananmen Square and so on and so forth. Yeah. And then that is the only way they started. They were seeing reliable benchmark up improvements. Wait, to walk us through that. So you would, you would, you would, you would, you would, you would, you would query the LLM first, to be like kind of break down the user's prompt, break it down to multiple things. And then they would make multiple queries to an LLM on different parts of this
Starting point is 00:32:44 and put it all together at the end. It's a little simpler. It's usually with an LLM before reasoning, you would ask it, this is very simple. You would ask it a thing. It would spit out an output. Instead, here, when it spits out the first outputs, it's not sending them to the user. It's actually taking a query and saying, what is the user asking? Here are the steps. And this is all output token, so it's expensive. But it says, okay, these are the things that I think the user wants to do. Time to generate something for each bit to make sure that I'm doing it right.
Starting point is 00:33:15 And then the output happens. This allowed for improvements on benchmarks, and it had good returns on coding in particular. It also chewed up way more compute. And this helped everyone because the benefits of just training models by shoving a bunch of training data into them, those are, we hit the diminishing returns at the end. of 2024 as well. On top of that, there's the post-training aspect of basically correcting behavior saying this is a good output, this is a bad output. That's also where they saw it. And actually, I realize I'm getting ahead of myself, a GTC in March, so the next month.
Starting point is 00:33:53 Jensen Huang's-We're jumping ahead to March now. So tell us about this. This will be our So in GTC in March, two things, I don't know if you've got all the Nvidia stories because there's some weird stuff. But Jensen Huang on the big screen showed like the pre-training, so the shoving the data in. We're past that era. We're into post-training and inference now. I think we on the episode, and on an episode where we use some audio from you as well, I showed, I think, exactly that part of his speech. So was that, that was in March of 2025. That was a big conference. Right. Yeah. And that's where he was, just as an aside, because we talked about this in the intro of the show today. Why does this?
Starting point is 00:34:31 computer scientist in his like 50s who makes graphic chips with glasses, insist on wearing jackets that seem like they came out of the prop department for Mad Max Fury Road. What is going on there? Hey, I will defend, I had the menswear guy on my show to talk about Jensen Wong's jackets. They're sick. Yeah. They got zippers everywhere. I mean, yeah, he really needs to stop doing the racer jackets, though. Those don't, those don't look right. But the GTC jacket was cooler because it looked like a psychedelic alligator skin. But like he often is wearing like race car jack, like motorcycle racer jackets. Yeah, yeah, racer ones with all the zips. I can't start doing it. I love leather jackets. I can't pull off the races though. But nevertheless, he also during that put up a big, a big picture that said,
Starting point is 00:35:13 we've saw, we've, it heavily hit. He was like, we've done this many hoppers that we've shipped. And now we've done 3.6 million black whales and ended up in an analyst Q&A having to correct himself to say, oh, I didn't say shipped, I meant ordered. And also, it wasn't 3.6 or 3.2 million. It was actually half that because each GPU has two GPUs in it, because he counts by the die. It was when you started to see Nvidia start doing their kind of riddles. Yeah. Or it's like, oh, we didn't ship.
Starting point is 00:35:49 We sold. And they've been ordered from four of the largest hyperscalers. And it was interesting because it was the last, I think that March one, actually brought Nvidia kind of back to life a bit because looking at the stock in the time, they were kind of trundling and trembling and then they fell down towards the end of April. So this was an attempt to restart the hype cycle. But what's interesting as well was, all he did was basically say, yeah, we're going to have even bigger and more huge a GPUs than everyone will buy them.
Starting point is 00:36:22 We've sold so many and we love selling them and they're so good and it's so expensive. And they are, well, and they were selling a lot. So like from their perspective. They were. But he loves. But the big other thing of that speech, if it's the one I'm thinking about, right, is that that's where he explained for the media is why I took in the analyst. Why like the Wall Street Journal's coverage from the year before of like on the information's coverage of Orion struggling or this. So that wasn't a problem.
Starting point is 00:36:47 And he made it very clear. He was like, look, we were in a scaling era. And that made us this good. And now we're in the post scaling era where we do tuning. that also requires a lot of GPUs, and that's going to keep us going. So he just sort of explained it in a way that I don't think it had been so clear before. And then we were sort of off to the radio. The media was back like, okay, we're good.
Starting point is 00:37:08 We're on track for things continuing to get. We don't know what any of this words mean, but the graph kept going up, even as you pointed towards the post-scaling age. So that was, I think, like, the first real explanation of like something a chance. Open AI wasn't really talking about it yet. They were like, it was a subtle shift as well, because one of the great men, myths of the AI bubble is that all of those GPUs were for training. Very convenient to say that, because if it's for training, well, we need them. That's the only way the models get bigger. I realize this is a few months in the future, but there was an MIT tech review piece in May of last
Starting point is 00:37:42 year where it said the 80 to 90 percent of compute is actually inference. So the truth is, all those GPUs aren't building better models. It's just running the bloody things. And I think that this GTC with Jensen Huang was an attempt to bridge that age to say, actually the returns, the bigger benchmark scores that we love to see, they're going to come from actually renting more GPUs just to run the models, but we can make the models better by using more compute, please by GPUs, as opposed to saying all of the compute that we're going to use is front-loaded to build the models. No, we need all this compute and you need to sell these GPUs even, because these models, to make them smart, need the compute to stay smart.
Starting point is 00:38:28 I see. So this was bad news for the AI companies and good news for Navidia. This is why they were pushing. It was for everyone. But the AI companies don't like this because they're saying this is going to make it more expensive to deploy and run these products. That's going to hurt our profitability. Navidia was saying the shareholders essentially, this is actually better for us as the chip sellers because you know, you need all of our chips just to use the product. It's not like, oh, you need, once we train it, then it's going to be cheap to deploy.
Starting point is 00:38:54 And maybe at some point, hey, it's, now we're shifting to a world. We're just running the product requires a ton of chips. And so, like, we're great. The market, if anything, is bigger for us. The thing is, if you look around the startups, though, they love this. They love saying test time compute. They love it. They love saying test time compute.
Starting point is 00:39:11 I'm sorry. But they really do. They loved it because it was a way of saying, well, we need, because think about it from the startup's perspective. Startup, if they say, I need a bunch of money for training, that's a one-off operation or maybe a couple times a year. I need a bunch of money for compute. I'm going to need a bunch of money.
Starting point is 00:39:28 Cursor, and this happens I realized later in 2025, ended up raising like $3 billion that year. They didn't raise that for trading. They raised that to keep running their bloody operations. Open AI building out all these data centers, building them out. They were doing that because the inference cost of running these models, the inference scale to actually provide a service at any kind of scale required all the GPUs.
Starting point is 00:39:50 So it was really just kind of a cartel operation type thing where everyone and propaganda as well that, yeah, we actually need all these GPUs just because these services are so powerful. When the real word is lossy. These services are just inefficient crap piles. They're slovenly. It's like the very, almost the opposite of what Deepseek was about. Though Deep Seek, I think Deep Seek required less GPUs for inference as well. But it's like in the face of that Deep Seek story, it's almost like, the American AI industry came up with a reason why Deepseek was both wrong and actually nothing to think about. Stop thinking about it. Stop thinking about Deepseek. And they did. By the end of April, people had forgotten about it. It's the F150 strategy. It just starts producing like cheap, reliable cars. And instead of Detroit saying like, well, we'll also have to now create like cheap that's what people want. They're like, no, we're going to convince half the country they have to
Starting point is 00:40:42 spend $80,000 on a completely soup truck that's capable of pulling a... Raptor, baby. Raptor. We need a raptor. All right. So then we jump to April. You got this shift, maybe you know where this came from. This seemed like a shift out of nowhere. So now we have like leading up to these, this talk of it's like very business focus. It's going to be agents. It's going to be like the age of test time computing. It's going to be the future this or that.
Starting point is 00:41:04 Then April we get AI 2027. And suddenly everyone for a while is back to talking about AI Doom and super intelligence. So for people who don't know, AI 2027 was like a fan fiction. I don't know. It was like a story. story that had animated graphs. And here, let me read their description. We predict that the impact of superhuman AI over the next decade will be enormous, exceeding the industrial revolution.
Starting point is 00:41:30 We wrote a scenario that represents our best guess about what it might look like. It's informed by trend extrapolations. That's the whole thing, by the way. War games, expert feedback experience, open AI, and previous forecasting successes. This scared to hell out of a lot of people, Ed. I know. I just spent yesterday reading this thing and pulling it apart. so I'm, I know all about.
Starting point is 00:41:52 But the key is, 2027, they had a non-trivial percentage of the extinction of humankind. I think that's the, that's the headline thing from this, right? Like, in two years, that could be the end of humankind due to AI extermination. So I want to be clear of who these people are. Daniel Kocula Taljo, I think his name, worked at Open AI for two years on the governance team. He was previously a PhD philosophy. student and UNC Chapel Hill. On top of this, Daniel quit, and he quit in the middle of June 2024, claiming that open AI was secretive and didn't care about superintelligence. Now, you'd think if you,
Starting point is 00:42:33 and they was named as a whistleblower by Kevin Roos at the New York Times, you know what whistleblowers tend to do? They tend to blow a whistle. Daniel didn't. Daniel didn't actually say anything. Daniel had nothing to share other than he wrote a scenario in 2021 that was sort of accurate about what the future might be. AI 2027 is him. The flipping Star Codex guy who was a psychiatrist who named Nick Land, a guy with a theory of hyperracism as one of his favorite writers. Like, the people that wrote this are not scientists. They don't really know anything about anything. The whole thing is written. It's like thousands of terribly written words, lots of scary numbers. But when you read it,
Starting point is 00:43:13 It hinges on one idea, just one, that in 2025, open AI, so open brain, open brain. That's their fictional company in the scenario. Yes, which could be anyone. Open brain invents the thing called Agent One, which can do AI research. That is the entire hinging of the piece. It can do a, do they define what that means? No. They never define it. So just to be clear, this thing, this thing that was written and to scare people, to grift, to help, what was the AI safety research nonprofit that's connected to the effective altruists? Anywho, the whole thing hinges on this idea that they invented an AI that could research how to build an AI they wanted. That's the entire game. They wrap it
Starting point is 00:44:01 in the trappings of finance and technical sounding things. And there is a bit in it where it talks about neuralese functions and then cites a metapaper. When you go and read the metapaper, it does not cite anything of the sort. The thing they quote is unrelated, Neuralee's whatever it was, it's an effective altruist thing. It's from less wrong. Sorry,
Starting point is 00:44:23 this thing really deeply pissed me off because I had people calling me. I had people friends of mine, people I love and respect, who were terrified by this. Me too. And that was the intention. Sorry,
Starting point is 00:44:32 I'm freshly pissed about this. But you got, because I read it at the time as well, and I came to the same conclusion as you was like, is anyone picking up? This whole thing hinges on they never addressed a question of how do you build a super intelligent AI like what's it going to look like like what's the architecture how does this work they just said we'll build an AI that could build a better AI and that'll build a better AI and better AI and then they'll just they'll figure it out the AI's will figure it out but as I keep emphasizing we do not know how to build a language model that can produce a a software for AI that's better than what any human can produce that's not how language models work novel ideas they they it's a they can produce a they can produce a they can produce a the type of code they've been trained on.
Starting point is 00:45:13 So unless you could train the AI and tune it with lots of examples of better AI, it can't build better AI, right? It can't leap beyond what it's trained on. And no one is even close to this. We talked about this earlier. We're tab completing function calls with AI right now, so we don't have to look things up, and vibe coding buggy dashboards. It's unclear how did you get from that to brand new models of human intelligence that the collective
Starting point is 00:45:39 AI community and decades of work couldn't figure out on their own. I'm not quite sure where that leap happened. So I was frustrated by this one. I was frustrated by it as well because it all came down to the same. It's all the FAC argument, right? So like the way I talk about, I rant about this on my show a lot, Ed, but the way I talk about it is what happened was, is that the existential risk community came out of effective altruism in the 2000 teens, which was a community that looked at.
Starting point is 00:46:04 We want to look at existential risks that might be unlikely, but have really high impact if they happen, like asteroid hits pandemics and super intelligent AI because, you know, we're doing our rationalist thing. And we're like the expected value of spending money now on rare things with cataclysmic outcomes is positive. That's basically what this was Nick Bostrom's existential risk center at Oxford was we probably won't get hit by an asteroid, but because it would eliminate all of humanity, it's actually a good investment to invest now in networks like look at asteroids.
Starting point is 00:46:35 And so one of the hypothetical risks they looked at was superintelligence. Post-chat CPT, they went through this weird sort of shift in their brain where they went from, this was this hypothetical risk we were looking at along with asteroids and pandemics, to what if it was actually happening now? Well, if it actually was happening, we're superheroes. We're the ones who like pointed, and there was this shift that happened in that rationalist, effective altruism community with the people working on existential risk, where they shifted from hypothetical to we're just going to convince ourselves it is happening because that makes us the most important people on earth.
Starting point is 00:47:12 You're being, you're correct, but I think you're even being kinder. I think they were waiting for a moment to Gryft. I think they were sitting there waiting being like, what's a thing we can grasp onto so that we can start draining cash so we can get a bunch of attention? Look at what happened with AI 2027. I'm sorry. I think Daniel is a true believer. I think he made him a super.
Starting point is 00:47:34 I think he's a cynical grifter. I think he's a cynical grifter. Interesting. Look at what happened when he left Open AI. He made this big song and dance with this petition back with Jeff Hinton, who I have some other feelings about, with all of these smart people. Oh, Open AI is doing such bad things. What are they? I couldn't possibly say.
Starting point is 00:47:56 He made all of this noise, and the media ate it up, all this whistleblower, this brave guy who came forward. What did he do? Nothing. He didn't share a goddamn thing. He was so concerned but couldn't say what about. It kind of sounds like AI 2027. Oh, I'm so concerned this will happen. What will happen?
Starting point is 00:48:14 I don't know. The agent were China, but steal agent too. The China agent, agent China, they're scary, right? Be scared. My nonprofit is this, by the way. The specific thing he was recursive self-improvement. That was the thing he was talking about was he incorrectly was saying, we're almost at the point right now within OpenAI where the AI is going to be doing the programming for us and then we're going to have this takeoff idea that goes back to the 1960s, this recursive self-improved idea.
Starting point is 00:48:43 I have another word for incorrect. Yeah. Lie. Yeah. I think he is a grifter. I think this whole thing was a grift. I think it's connected to the Star Codex guy. I think it's connected to the effective altruists.
Starting point is 00:48:54 You'll notice that those assholes popped up with FTX too. They pop up with everything. They've been looking. I don't know. I just, I'm extremely. extremely, extremely cynical. I don't know. I just feel correct on this because these people, if they, because here's the thing, here's my entire feeling about this, if they actually wanted to talk about scary, bad things that are happening, I don't know, talk about the Kenyans who are training
Starting point is 00:49:21 these models for, what, like $2? Yeah. How about you go and talk about all the theft that's happening? How about you go and talk about the gas turbines that were being spread everywhere? How about we talk about the environmental issues? How about it? we talk about the thing happening today. It's the same problem with Jeff Hinton. These people want to talk about, oh, what if the computer does this? Which is fine. We should have those conversations. But when you were talking about AI safety, why don't you talk about now? Because if you talked about now, you'd have to do something. You would actually have to take action, take a position, make enemies. You would actually have to do something that mattered. Instead, they do this cynical
Starting point is 00:49:59 grift where it's always, whatever you're scared of is just a, it's a couple years away, and we've got to do something now, which involves me doing a speech, which involves my speaker fee, which involves my non-profit, which involves me doing a panel and speaking to politicians. How do you, how do you understand Hinton, right? Because, okay, there's this interesting situation where obviously, like, Jeff Hinton doesn't need money. He made a ton of money when they sold a startup to Google. Also, he clearly knows the technology, right? He has a, he, in computer science circles. He was just at Georgetown. Like, he really was the guy into wilderness that was pushing, trust me, like back propagation
Starting point is 00:50:34 on these deep networks really can work. You just need the data. Like that was, right. He knows the tech. I did a thing on my podcast, you know, earlier in the year where I was like comparing him talking about risks with Yikowski. And it's really different. Yikowski comes out of effective altruism, not a computer scientist. And he's like, LLMs are coming alive and are have their own intentions or whatever.
Starting point is 00:50:57 Hinton knows that's not true because he helped. to invent the technology. And if you parse his comments, he's very careful. If you really parse it, he really is saying what he's actually saying is, we made more progress on this research than we thought we would. So it stands to reason the same thing could happen with some new type of machine that we don't know how to build yet that could be a threat, right? So he's actually being careful because he knows LLMs are not coming alive or autonomous
Starting point is 00:51:23 or this or that, though he kind of merges us together. But I'm trying to understand his motivation, right? Because he knows LLMs, we're at, you know, they're smoothing out. He knows he's very careful when he talks about it, that it's we may invent, so we may invent the machine in the future, like we should be careful. It's been more sensationalized the way he's reported or talk about them somehow make it seem like the stuff we have now is dangerous. But is that just influence?
Starting point is 00:51:50 Is it just like people want to hear what I have to say? Like, why is he, what's going on with, in your opinion for like Hinton's like big push? for we should be really worried about AI. I think he wants attention. I think he wants glory. And I don't think he wants to change anything. I think he's quite happy with the current scenario. Again, I say the same thing I say about the AI 2027 people, except with more Iyer.
Starting point is 00:52:12 Jeff Hinton is a gifted scientist, a noble, was it noble, I forget the Nobel and Turing. Yeah, I don't know what the titles are. And he is a scientist and all that. But again, Jeff Hinton is this massive microphone. Do you ever hear him talking about the theft, the environment as the primary thing? No, it's always a couple years away. What if this happens?
Starting point is 00:52:36 Wouldn't that be scary? What if my grandmother had wheels? Then she'd be a bicycle. It's this thing of, he has all of this power and attention and so-called knowledge. What's he use it for? Nothing to do with what's happening today whatsoever. He doesn't go on stage and say, hey, this is, these gas turbines are popping up there. polluting black neighborhoods. He doesn't talk about the fact that it involves
Starting point is 00:53:00 the turbines. The turbines are for generating the power demands of the data center? Yes. So what happens? A very simple thing of because it takes so long to build power, what these companies have been doing, Elon Musk famously, and Stargate Abilene for Open AI, they're doing this as well. They have these giant gas turbines that they put out that could be spun up quicker. The thing is, gas turbines are sold out. They've been sold out and they have like years-long wait time. So they're using old ones, which are less efficient and pump out more horrible gas. Anywho, I, a non-scientist, know that, and I talk about it regularly because that is a harm from AI. Why doesn't Jeff Hinter, a so-called AI safety guy, a guy who cares about what AI is doing,
Starting point is 00:53:40 never talk about what AI is doing. He always talks about what it might do, and I consider that a grift too. And him being scientific only makes it a more cynical grift. They feel it, say deal, but she at least has a startup, even though I think world models are another grift that people are going to move to next. Nevertheless, with Hinton, he's always going out there to go, I'm so scared of the computer. What if the computer does this? He, to be clear, we should have these discussions. Those are valid discussions. That's all he does. He doesn't give a rat's ass about any of this. I think he's as cynical as the rest of them. I'm sure he believes this stuff, but he doesn't give a damn enough about the human beings that are alive today. He isn't actually trying to change anything.
Starting point is 00:54:26 He loves signing open letters. He loves doing paid speaker opportunities where he gets up and goes, the computer is scary. But that's the thing. Why doesn't he talk about large language models all the time? Gary Marcus does more for AI safety than Jeff Hinton does. I don't care of people are mad about that. I think it needs to be said, I have my issues with Gary.
Starting point is 00:54:49 But at least Gary goes out there and talks about the actual harms. Jeff Hinton talks about himself. Jeff Hinton spreads approximate fear, approximate danger, but never really talks about today because, yeah, we should discuss what would happen if this happens. Sure. But how about if you're so, and his whole thing was he quit because he was worried about what they might do? Why? Why? Like, you're so worried. Why aren't you doing anything about it? If you, are you an activist? Are you going to tell people to bomb data? Like, what is it? you want people to do. And the answer is, Jeff Hinton doesn't want people to do anything. Yeah. He wants to sit there and worry about something that might happen without dealing with anything today because that would require him to actually do something.
Starting point is 00:55:36 That is an interesting point more generally about the sort of expert class turned to AI safety. It's where there's no specificity. I mean, when you saw scientists leaving the Manhattan project worried about what they did, they had a very clear program, right? The concern unit. We should test ban. We need to roll back nuclear weapons. We need to create these treaties to do whatever. You're right. It's an interesting point that you don't see. Here's what we need. I mean, Yikowski does, but he's kind of crazy. They all signed a letter. Well, Yalke thinks we should bomb data center. So I guess there's someone who does have ideas. But the thing is, I think Yud's a scumbag. But I give him more credit for at least saying something.
Starting point is 00:56:14 Yeah. For saying something because they love signing open letters. Oh, the open letters that they sign. Oh, oh, we should stop working on AGI now. Don't worry we haven't started. How about we talk about things happening today? Again, we can't possibly. So then we couldn't. All right, so then jumping forward to May, this is when the big headline then was Dario Amadeh. This is when he gave the quote to went everywhere about AI could eliminate half of all entry-level white-collar jobs in the next.
Starting point is 00:56:44 I think he said up to the next five years. This was part of a longer interview where he also. did this equation of test to general capability. So we said AI was at a high school level. Then it became to a college level. Pretty soon it'll be a, now it looks like we're getting at a PhD level. So now you can imagine replacing what you would hire a PhD level trained person to do in a job. From what I can understand, that was just referring to math tests.
Starting point is 00:57:15 They're referring to math test that the model had passed. and someone had said the problems on this math test, which they had tuned it to do well on, were problems that you might give a grad student on a math test. That got extrapolated to, AI can do what a PhD level employee could do. I mean, I guess if the employee's job was to solve math competition problems, that's true.
Starting point is 00:57:37 That's kind of here. Yeah. Well, to quote Gun Tutscher from Blue Sky, CEO of Oreo cookies, the Oreo cookie is as important as oxygen. Yeah. Like, that's everything that Amaday says is just, yeah, it's like a PhD student.
Starting point is 00:57:52 I genuinely think that there is this thing with people. I've heard Casey Newton say things like this as well, where it's like, they're like, yeah, the proof that this is useful is people are using it to do their homework? And it's like, do you think there's a homework goblin in colleges? Do you think that's how colleges are run? Do you write the homework, the homework goblin eats it, the goblin pays into the endowment? Do you think we go to college to do homework? Now, this is a larger discussion, though, because there is a degree of,
Starting point is 00:58:18 college that's kind of like that, that is a problem. Yeah. But we didn't need a solution to do homework. Yeah. We need to fix college. But Amaday, just like Altman, just like all these people, just says shit. He just says stuff. Why does he always, he always says, I'm reading the quote here.
Starting point is 00:58:33 Why does he always say things like Amadee said he was highlighting it to warn both the general public and the government of what's coming? This is his stick, right? More so that Altman. It's the same grift. He's always saying, look, I'm the bearer of bad news. I'm the one who's willing to tell you straight. But it delivers the same message in the end as Altman's more optimistic messages, which is,
Starting point is 00:58:54 this is the most important technology. It's going to change all the things. All the money needs to come to the people who are building it. It gets you to the same place, right? Like whether you're saying I'm worried about it or excited about it, if you're saying this technology is going to create 20% unemployment. I mean, if I'm an investor, I'm like, oh, crap, I got to be investing in the company that's going to take those 20% of jobs, right? It's still a very optimistic prediction for anthropic. It's cynical. It's this cynical crap that people repeat every single time.
Starting point is 00:59:26 They fall for it every time. He made a prediction at one point. It was like 90% of coders will be replaced in the next six months. I think he did that in March. It didn't end up being true, it turns out. But the thing that I think people need to realize is that these people aren't special. They don't know. like they know stuff but like oh it's going to replace 50 but it's just like a PhD student they're just saying stuff I could make this crap up too well he says here this I think maybe this shows it he also said there I'm reading a fortune article here he also said there is still time to mitigate the doomsday scenario by increasing public awareness and helping workers better understand how to utilize AI okay so there is a solution so make people aware of AI and then get people to use it buy more clod subscriptions and you will prevent the 20% unemployment
Starting point is 01:00:13 Yes. All right. We figured. Yes. All right. So then, by the way, having just done an article on agents, and my cynicism meter is off the chart for that particular interview he was doing because they knew at that point that even just like using the mouse or get the like back in decisions wasn't working.
Starting point is 01:00:30 Like they were nowhere near this. I guess you could just say, but people are reporting it. All right. Well, we get into the summer. I think like the big news in June that got reported was the MIT your brain on chat, CPT study. So that, I feel like this was, there was some other research that came out early summer that was poking holes. There's the big Apple paper.
Starting point is 01:00:52 And then there's the one research at ASU. There's a bunch of papers. They were a little too technical, I think, for the journalists to pick it up. But they were poking holes in the reasoning narrative, right? They were saying, oh, yeah, the Apple one. Yeah. Like, this is kind of nonsense. It's not reasoning.
Starting point is 01:01:05 It's working with, it's just like, there's no generalization of concepts happening here. That's where they took, you know, problems that it could solve. And then they made the problem size a little bit bigger and it catastrophically fell off the cliff. And it was like, oh, it had just seen this size of the, it's not generalizing. It's not. So those were a little bit too academic, right? But then there was this June 10th paper from MIT Media Lab that introduced this idea of cognitive debt. It was like more easy reporters or writers.
Starting point is 01:01:33 But I think this made more sense. It was this, they said we studied people writing with the AI. And it made them, it was worse writing and they learned less and they were dumber. That went everywhere. So what did you, what do you think about that paper? What do you remember about that? I mean, just that it was a moment when it, it didn't change a ton, but it made people think about this a little bit more.
Starting point is 01:01:52 It's the first time I really remember having conversations where people were like, oh, maybe this is actually having negative effects. Because up until then, people still had this weird thing of, oh, yeah, well, this is a way of learning. This will be your teacher. This will be, you could be a student and learn anything. Saul Khan was saying this, right? Like he had to start up with it. Again, the CEO of Oreo situation. It's like, of course he's going to say that. Sal Khan has been on the AI thing for a while. I mean, Khan Mingo, I think they had a Wall Street Journal story in 2024 that it just got math wrong. It might be Washington Post actually. But this, that study, the MIT one, made people go, oh, this isn't teaching people stuff. It made people realize this is actually not an assistant. This isn't intelligence at all. is a dumbass machine. This is, it can fill in gaps of stuff you already know, but if you don't
Starting point is 01:02:47 know it, it fills in the gaps wrong or not at all and makes you reliant on a machine that doesn't know stuff. I always think it's like the story about Washington, D.C., like what stories pick up. It's not the stories that teach you something you didn't know. It's the stories that confirm a pre-existing bias, right? And so I think everyone was like, this, this has got to be a problem that you're using this the right. Like, this can't be good. And so when a study came out, I don't know that it's that great of a study, by the way. I mean, I don't want to cast a spur. I looked at it a little bit at the time.
Starting point is 01:03:14 I remember thinking, like, this was not meant to be like a super carefully researched, like, etc. But anyways, yeah, it was, it was a little more academic than a blog. All right. So then this is where things begin to get, this is like the real turn is into the summer. We get the August and we get the GPT5. I think this was a big turning point. I have kind of the TikTok here, right? So on like right as it came out, Altman, right before it came out,
Starting point is 01:03:44 Altman was doing like a really big, this is going to change the world song and dance. He went on, oh God, what podcast did he go on? I don't know all the pod. The one of the comedians in the Rogan circle. You know who I'm talking to. Theo Vaugh. Theo Vaughn. So he goes on Theo Vaughn, compares himself to Oppenheimer.
Starting point is 01:04:06 And then chokes up just thinking. about how powerful GPT-5 is like what it's going to do. He's like, what have I wrought? It was like basically what I had in my mind about part of the interview was the scene in Oppenheimer where Oppenheimer is in the gymnasium and they're celebrating the dropping of the bomb and he keeps like, you know, it's just like the cinematic Chris Nolan IMAX moment where he's cutting from that, the images of the explosion and the bodies in Hiroshima and it's like the most fraught moment.
Starting point is 01:04:35 Like what have I wrought? Like the emotional climax of the movie. That was Sam Altman on Theo Vaugh. talking about GPT-5. Then almost immediately, so by August 11th, he's saying, AGI is not a useful term. Like, let's not talk about that anymore. Like, we don't need, you know, let's not be talking about those. Yeah, let's move away from our expectations here.
Starting point is 01:04:55 And then my, I had my article, which was one of several that came out on the 12th, so within five days. And I was actually on vacation, but I remember talking to my editors and saying, this has to come out. Like, this is a big deal. GPD 5 is a big deal. It is not, I think this is opening people's eyes. That's when I wrote my, What If AI doesn't get much better than this article for The New Yorker,
Starting point is 01:05:15 which I think I quote you in. You did? And there was, I think the New York Times had a big article right around this time as well. And a couple. And so did I. And the biggest was Ed's, obviously. No, but I did reporting on GPT5. Oh, you had good report.
Starting point is 01:05:30 Let me just point people towards your podcast. So that whole month, I mean, you had a couple really good episodes where you got super into the weeds, not super in the weeds, but like it was the deepest reporting I had heard on the technical side of like how GPD-5 worked and didn't work and why it was going to be, that's where I learned about to get these benchmark numbers they could brag about. They had to use these completely like cost ineffective, you know, in front of which wasn't going to work. They lost their cashing.
Starting point is 01:06:00 I think you were deep on that that when you used GPT-5 to do this reasoning that did better on the benchmarks, you couldn't do cash versions of your, like all. the stuff that made it. It's really simple because it sucks. So, story from 2023, by the way, Sam Altman says, GPT5 underway and will substantially differ from GPD4. Untrue. Very similar.
Starting point is 01:06:21 GPT5, the thing it did that was amazing was that it has something called a router model in it. Yeah. Which means that it would choose the best model for the job. Now, the problem with that. They were solving a practical problem. This isn't a, originally it was supposed to be, this will be, um, how, 9,000. And then it became, we introduced so many models in 2025 and late 2024 that really like the main practical thing is we'll have a model like choose those for you so you don't have to know which one to use.
Starting point is 01:06:50 It had kind of gone from this is going to take over the economy to like we made things complicated ourselves. Now we're solving our own problem by having it automatically route itself. But that, as you reported, even that feature caused a lot of problems. Yeah. So the router model was reported incorrectly. everywhere as being more efficient and a way to reduce costs. However, due to a source that I have, I found out, it actually increases the cost because when you have a large language model, say, you load a query, you load chat GPT, like write some code here. When it does that, the system
Starting point is 01:07:24 prompt is, you are chat GPT, you're an assistant that can do these coding things. It adds this text in front of your, for the listener, it adds a lot of text in front of your text before it sends it to the model to give the model instructions about, how to answer, what tone to use, like what to avoid? What tools, like do you use a Python tool or a web search tool? Now, before this router model, what it would do was you would choose a model and it would have that system prompt and it would not have to keep reloading it. It wouldn't have to keep entering that in.
Starting point is 01:07:54 It would just have it and be able to cache that and then say, okay, this is what I'm working on. Because of the router model, not only that, just to be computer sciencey, they could actually, because it's sequential, when you run things through these machines as sequential, they could run all of this through the system's prompt's always the same. They could run it through and get the state of all these embedding. Like a lot of math and computation,
Starting point is 01:08:17 they could cache and then not have to run all that through the GPUs. They know what state everything is in after that system prompt. So they could have all of the effect of having processed that system prompt without having to actually do the inference time for it. So they could cash it in a way that like, okay, we can have a huge long system prompt and not have to pay for it. every time we do a prompt, every time a user submits new prompt, which was very important from a money saving. All right, go on.
Starting point is 01:08:45 Except that, all that goes out the window. Every single time it routes through a different model, it has to do, it has to completely start again, has to wipe the system prompt every time. It is less efficient. My source was telling me, I can't exactly explain how, but source was explaining that it was actually creating more overhead and the people on the infrastructure side were saying, what are we doing? Like this seems like there were straight up, people saying, not sure this is a great idea. Like, I'm not sure. Like, this seems to be creating more overhead. We cannot cash the system prompt.
Starting point is 01:09:15 This is causing, I tried, I reported this at the time and I took it to multiple reporters and they went, no, it's not. I had multiple reports. It's not true. The reporters. I show them the thing. You talked to reporters and they said, multiple reporters. It's not true. Not to blow smoke at you, but I will say, no one else had that technical story.
Starting point is 01:09:33 I remember listening to your better offline podcast. Why aren't there more people with this technical story? I mean, I had just finished my own piece where I went deep on scaling and tuning and trying to explain that or whatever, which I thought was also being super underreported. But you had like a really good technical story. No one else had it. I don't know that I got lucky. I got lucky, but I also talked to sources regularly and I know what I'm looking for. All right.
Starting point is 01:10:00 We're going to take a quick break here to hear from some of the sponsors that, makes this show possible. If you're serious about protecting your time for deep focused work but still need to manage meetings and responsibilities in your personal life, today's sponsor is for you. We talk a lot in the show about avoiding the chaos of modern work, but the truth is, discipline alone isn't enough. You need tools that protect your focus. That's where reclaim.a.ai comes in.
Starting point is 01:10:26 Reclaim is a smart calendar assistant built for people who treat their most valuable resource time as their most valuable resource. It automatically defends your focus time at schedules meetings. It resolves conflicts and protects time for habits, tasks, and even recharging breaks. Now, here's the best part. It's not rigid. Reclaims AI dynamically adjust your calendar around your shifting priorities so you can stay flexible and focused. Deep work won't get crowded out anymore. The average reclaim user gains seven extra productive hours a week, cuts overtime nearly in half, and experiences less burnout and context switching. Reclaim.A.I isn't just another scheduling tool.
Starting point is 01:11:04 It's an intelligent time partner that helps you prioritize what truly matters every week. Try it and reclaim your time for what really matters. You can sign up 100% free with Google or Outlook calendar at reclaim.a.i slash cal. Plus, Reclaim is offering my listeners an exclusive 20% off 12 months with discount code, Cal 20. Get your Reclaim Recapped 2025 when you sign up, which is a year in review of how you spend your time across deep work meetings and work-life balance
Starting point is 01:11:34 with 30 plus productivity stats. That's worth signing up just that alone. You've got to check it out. It's really cool. So go visit reclaim.a.ai slash Cal to sign up for free. That's reclaim. dot AI slash Cal. I also want to talk about our friends at Caldera.
Starting point is 01:11:50 Here's the thing about aging when you're a guy. When you're young, you don't think about your skin, but then one day you look, you wake up, you look in the mirror, and you realize, oh my God, I look like a grizzled old pirate captain. What happened? You weren't taking care of your skin. Luckily, this is where Caldera Lab enters the scene because they have a high-performance skin care line designed specifically for men. They've got these great products like The Good, which is an award-winning serum, the eye serum, which helps with puffiness under your eyes and the base layer, which is a moisturizer that you just use every day. look, this stuff works.
Starting point is 01:12:24 In a consumer study, 100% of men said their skin looks smoother and healthier. Now, skin care doesn't have to be complicated, but it should be good. Upgrade your routine with Caldera Lab and see the difference for yourself. Go to calderalab.com slash deep and use code deep at checkout to get 20% off your first order. All right. Let's get back to the show. Like that's really, it's like the moment GPT FIP went out, I went to, sort of being like, hey, you hear anything? Like, real simple, I don't even do much source stuff
Starting point is 01:12:58 because I have, like, other stuff I'm working on. But this one was just like, I went to someone an infrastructure provider and I said, hey, have you heard any GBT5 stuff? And they said, let me check. And I went, this. And it was just, I, it shocked me that no one else wanted to cite it. It shocked me that I had multiple people who were just like, this is not true. And I'm like, I can show you stuff that would prove it. They didn't want to see it. Interesting. It's just it's denialism. It's because when I understand when you've got editors who are pro-AI,
Starting point is 01:13:29 when you've got other people you work with who, or perhaps you yourself want these people to win, and you don't want to piss off their PR people, you don't want to lose your access. I get it. But it's like, this is, this is reality. They don't have any. No one has access, by the way.
Starting point is 01:13:48 I mean, all access is control. They won't give you access to anyone. I'm into legacy media. They give no, they give, these AI companies give no access. No. Yeah.
Starting point is 01:13:57 And they play, I've heard Open AI place people against each other outlets. Yeah. I've heard the, I've heard them straight up say, apparently, I've been told that, apparently they will straight up say,
Starting point is 01:14:08 if you piss us off, we'll stop responding to your emails. Yeah. Little worms. Yeah. No, I believe it. But this did change a lot of things.
Starting point is 01:14:16 So again, it's a little bit more. It underwhelmed everything. Everyone was just like, they were under so because it was hard to ignore the gbd5 wasn't that different and had these other problems the thing i saw change so there's articles the hey scaling this is a problem you know that's why my headline was like what if a i doesn't get much better than this like that was like a new idea for people but the thing that seemed to really open up was a story that really only you
Starting point is 01:14:43 had been covering for a year so for a year plus you had been actually gathering earnings revenues new numbers, you've been looking at earnings reports, and you had been making the case for about a year up to that point, the numbers don't make sense on these companies. Look at how much they're spending, how expensive. The numbers don't make sense. This costs way more to run than they're getting in revenue. When is the musical chairs game going to stop? Post-GPT-5, all the major publications sent good financial reporters to do these type of
Starting point is 01:15:16 story. So we get, for example, the New Yorker had a big, in the magazine, a big, wait, is this a bubble article? The Wall Street Journal had several, including the one in September, spending on AIs at epic levels, will it ever pay off? We began to get really good analysis, like comparisons to level three and what happened with like laying the infrastructure. The New York Times started writing these articles. They had covered it, the bubble possibility zero, and then they started covering it multiple times. that's my story of September is all of these different bubble articles. I'm assuming that was all basically opened. The floodgates were opened by GPT5 underwhelming.
Starting point is 01:15:56 It just sort of changed the way that people categorized. Like, wait, maybe there could be a problem here. Which, by the way, I have experience with from my social media reporting back in the day, just as a quick analogy, everyone thought in the media that I was eccentric for my stances about social media is a problem. we shouldn't be using it. This is not a fundamental technology. This is a real problem. And I was shunned and attacked and people were coming after me.
Starting point is 01:16:20 And then post Donald Trump election, where he was successful on Twitter, it planted the seed of like, oh, maybe social media isn't just done for good. And it opened the floodgates. And all of these issues with social media was suddenly open game to be covered well beyond even what I was talking about. This felt similar to me. GPT5 underwhelming opened up all of the possibility of all these stories, including on economic. There was also a big story around August where it was, I think it came out that AI data center capital expenditures made up more of GDP growth than all consumers spending combined. And then in September, you had that insanely funny story where, as we've discussed, $300 billion deal with Oracle between Open AI and Oracle, where Open AI will give them the $300 billion they don't have. and then Oracle will serve them compute from data centers that are not built yet.
Starting point is 01:17:15 And I think that was, that happened and sent the stock spiking. And then Rapid Fire, we had this AMD deal where AMD, it was a really funky deal as well. It was, let's see, that came out. I'm skipping ahead to October here. But the AMD deal, but basically by October, mid-October, I think, Open AI had agreed to like 26 gigawatts of data centers. And there's just a bunch of funding that happened around here as well. Yeah. But it really felt like the air had been sucked out of the room.
Starting point is 01:17:47 There was scrutiny. There was scrutiny suddenly on some of these stories. In a way, there wasn't four months earlier. Yeah. And it's interesting because even with that scrutiny, can we do October as well now? Yeah, we can be in October. Yeah. So I bring this up because even with all that scrutiny,
Starting point is 01:18:05 and the reason I'm typing is I need to bring up these deals. So, in, okay, in September, it was this Nvidia. I'm going to do big old air quotes. Invidia does $100 billion investment in OpenAI. Now, what I really remember at that time was no one having any details about it. And indeed, the writing within the Nvidia and Open AI announcement, not really being clear about when things will begin, or indeed if any agreement was signed. And I went to multiple reporters and like, hey, look, first of all, have you done
Starting point is 01:18:39 maths here. Because if you did the maths, it would cost OpenAI like, I think, over a trillion dollars for the compute and their data center deals. And I put that out fairly early and then just other people wrote the same headline and did not quote me. Thank you. But OpenAI agreed to a six gigawatt deal where they built six gigawatts of data centers for AMD and in return would get 10% of their stock. Never happened. Broadcom did a deal with Open AI, 10 gigawatts of data centers, which we will get to in a minute because some funny stuff has happened. and then this Nvidia deal. Now, what was funny about this was I went to reports and saying, like, hey, look, nothing's been signed.
Starting point is 01:19:16 This is a lot of money. And also, a gigawatt data center takes about two and a half years and $50 billion to build. They're meant to start these data centers. The first billion dollars, the $10 billion even, that Nvidia was meant to send to OpenAI was meant to be next year. Same deal with AMD, same deal with Broadcom. So it meant to be in 2026. And I went to people, I'm like, hey, this is something. possible.
Starting point is 01:19:40 Like, this is not, this is quite literally impossible. We cannot do the, like, you can't build data system. Everyone's like, yeah, you know, well, they're working out. The crap they've been saying for the last year, it's just like, they're working out. What's the advantage they get by announced these deals? Can they like market up as like future assets? Does it help with stock? Like, what would be the motivation for stocks?
Starting point is 01:20:00 Okay. And so Oracle added $300 billion to their remaining performance obligations. Broadcom added like 50 billion, I want to say. Because you can mark this up as expected revenue, which when you're then doing the calculations and figuring out, you're like, oh, this makes this a more valuable company because it's the revenue it has or expected has gone up. And because the markets are, I assume, run by toddlers, everyone believed it. Everyone was like, wow, wow, number go up so big, number so huge.
Starting point is 01:20:31 Well, number didn't stay big for long and things started to fall apart. And it got to this point where people, even people who were quite cynical, started going, one moment. Anyone done the math here? And the FT has stepped up, the financial terms, it's been pretty on top of this the whole time. But they really stepped up and did some analysis. And it was just, they also did a trillion dollar story without citing me. Well, but the Brits are more suspect of Silicon Valley anyways.
Starting point is 01:21:01 I'm not bitter. But nevertheless, it was this thing of, everyone's suddenly starting to do very simple math of like, well, Open AI is projected to make $13 billion this year. And they owe $300 billion. How do they pay that? They're going to lose billions. And the information put out a story saying Open AI was spent, they planned to spend over $150 billion or something. Didn't really make sense mathematically at all because the $300 billion.
Starting point is 01:21:33 But it was very interesting. time. Actually, that reminds me. So, when the Oracle announcement came out, Open AI had leaked that they would spend, I think, $155 billion or something. Yeah. But it was days before the Oracle announcement was made. So Open AI leaked their costs. I'm doing air quotes again because I don't trust any leaks out of Open AI, five days before the Oracle deal so that no one would do the backmouth and go, wait a second, this doesn't make sense. I love watching this kind of like disrupted public relations work. I think it's cool as hell. I think it's good that I'm watching because what's going to end up happening there is no one gets paid, which in a few months' time in this story we'll get to. But I actually have my own story in October as well. I got Anthropics Amazon Web Services bills. Yeah. So what's going on with that? What's the headline? 2.66 billion dollars spent in three months. Sorry, even three quarters. Two point six billion dollars and that's just on AWS. And from what I know, they also spend about the same amount on Google Cloud. So Anthropic will probably make, I'm going to say $5 billion this year.
Starting point is 01:22:42 They were, I would think by the end of September, they, between Google Cloud and Amazon Web Services, had spent more than $5 billion. So they're just annihilating capital, just burning it down. And it actually leads me to an important statement, which is Anthropic has done. The other thing that Dario Amadea has done is he's framed them as this more efficient company. this company that is more efficient, that doesn't burn as much as Open AI, that spends less on training. But when you look at the numbers, it tells a different story. Open AI in 2025 raised $18.3 billion, other than soft banks portion, but nevertheless, $18.3 billion. People say Anthropic, they're spending less money. They're more efficient. Anthropic raised $16.5 billion. Basically, the same neighborhood of numbers.
Starting point is 01:23:26 But Anthropic has done such a good job just lying to reporters and spreading these rumors. that people believe this. I think Anthropics is as big a crap pile as Open AI. They're just as lossy. They burn just as much money. And yeah, I mean, by this point, by the end of October, I think most even outlets had begun to say like, oh crap, oh crap, were we wrong? Were we wrong for three years? Did we get, did we fall for it again? And they did fall for it again. So then if we jump forward to, well, the other big story in October is Sora. the app, which confusingly is powered by SORA 2, the model, because there's also a SORA 1 model. That didn't land, I think, the way, I guess, Open AI hoped that kind of freaked out a lot of people.
Starting point is 01:24:16 Like, what are we doing here? Who is asking for this? But was that a sign? I took that as a sign of a little bit of a sign of desperation. This is OpenAI looking at TikTok does $33 billion a year in revenue. And like, we need money. so can't we do TikTok with AI and like help fill backfill right so like in other words if you are about to automate half of the jobs in the knowledge economy you don't need a TikTok clone you don't need to talk about around the same time they also talked about allowing GPT chat GPU more erotica etc you don't need that like we're about to create the three trillion dollars to mark vinyaf talked about who cares about that but the fact they were putting that out was sort of taken as like an uh-oh type of moment which I don't think is what they were hoping I don't even think they thought around. I mean, people bought the TikTok thing,
Starting point is 01:25:04 hookline and sinker. I think they were just desperate. I think they did key jingling. I think you're like, look, look, you generate videos. Please, please, please, please keep using this. Don't talk about the Oracle deal. Don't talk about the Oracle deal. Look at the, look at the keys. But they did like an Oracle style deal, but with their own software. So you had this situation where SORA now Forbes estimates that it cost them like 15 million. dollars a day or something. Based on nice sources, that number might be smaller. Sorry, that number might be too small.
Starting point is 01:25:37 I have compelling evidence that to run 13 instances of SORA 2 required 900, 840 H200 GPUs. That's 13 instances. That means 13 generations at once. So that's 10, like, this thing is really expensive. What's the cost? If you want to make, you want to make SOR videos, what do you need? level, I mean, they're still taking a loss on it. I mean, you need to use their API.
Starting point is 01:26:03 But you have to use, you have to have their $200 a month level or above, or is it more? No, anyone can use Saw or the app. I mean, for creating the videos, though, right? So they're, that's, you have to, that's limited. You can create the, you can create them on the app. Yeah. You're limited. But if you want to use the API, I think it's like a couple dollars per video. And per video just means anything it generates, whether it's good or not. Which is not, but that's not, uh, that's not, uh, that's not sustainable. Like the whole point about TikTok's model that's brilliant is all of the compute involved in taking
Starting point is 01:26:36 videos, editing videos, trying a bunch of experimentation is all done on the user's phone. They pay for it. But also TikTok loses money. Yeah. Oh, interesting. Like TikTok is an unprofitable business as well. Because they spent a bunch of money on marketing. Because of just a marketing and hosting and streaming a bunch of videos, I guess.
Starting point is 01:26:53 Yeah, yeah. It's still an expensive. Also, they are poised for growth. But putting it aside, you're still completely right. that's how that service runs. Sora was just an attempt to try and, it's like, I don't know, like, when you ever see a couple that's like about to break up? Yeah. And they're like, yeah, we're going on vacation. It's great. God, I love them so much. It's all going so well. I love it. Yeah. And Sora, you had all of these, what was funny was Sam Altman. I think Truonon had this point,
Starting point is 01:27:22 where it's like, it looks like Sam Altonman put himself in it so that he would make himself famous. and all that ended up happening was people did like Sammoreman stealing from Target. I saw a lot of those videos. Or Samelman crying. And it was just, it was weird and bad and it sucked. Yeah. It got a bunch of media attention. A lot of people got scared because that was the other intention.
Starting point is 01:27:42 It was meant to give people the sense that this would replace videos in general. Yeah. Like this replaced social media. It didn't. It obviously didn't. And it's obviously too expensive to run. Yeah. And I think that it gave.
Starting point is 01:27:56 it gave them the top of the app store ranking very briefly and then because it's like everything with large language models other than in really specific use cases it is just a toy a really horribly expensive one too so then if we jump ahead to November I summarize November is basically here's what's interesting to me about it multiple models from different companies EBT-51 Google Gemini 3 Anthropic opus 4.5 no one cared or notice which
Starting point is 01:28:26 itself, I think, is significant. Like, suddenly there wasn't, no one cared that. I mean, there were some Jim and I, they cared about the fact that it was using different their own chips and, like, there were some economic stories there. But no one cared that like Opus 4 or 5 was like better at coding agents or that 5-1. It doesn't mean anything anymore. The other thing I saw in November was there was kind of a defensive backlash to the bubble stories. So now you start to get, well, wait, wait, we'd gone too, maybe it's not a bubble.
Starting point is 01:28:54 We begin to get to like, I think it might be okay story. So that was like, I don't know your take on that. That seemed to happen in November. Oh, for sure. And we had, but by this point, Sam, Altman, Mark Zuckerberg and Jeff Bezos had all said it was a bubble. Like, all three of them had said it. But also, we had a bunch of people doing stories that were quite literally, actually, it's bubbles can be good. Actually, it's a good kind of bubble. None of these had particularly logical points. And so we had these people trying to work out like, crap, did we, you kind of put it like this. It was like, did we overcorrect? Oh no, I don't want to piss off the powerful people. LLMs are actually great now, but they're actually bad. And then it got to this narrative of, well, remember the dot-com boom. Yeah. Remember the dot-com bubble. And there were there were companies left over at the end. And it's like, did you read about the dot-com bubble because like too, like Lucan got acquired. Lucent did probably the best of them all other than like Cisco and Microsoft who kind of survived. Amazon. They've done really well. but it took them a while to get out of there. Amazon, Amazon was interesting as well because
Starting point is 01:29:59 Amazon didn't, like it was within the universe of the dot-com bubble, but didn't make all the mistakes that they made. Also, the thing with the dot-com bubble was, and this is how we get to in video in a minute, was just the insane deals. Like, WinStar Communications, getting a $2 billion loan from Lucent Technologies, which would, and in the press release it said this, make $100 million of revenue. It's like, they don't teach you that in business school. also in the middle of this month, I got Open AIs costs. I got, they spent $8.67 billion on inference, just inference through the end of September. That was a great story because the FT and I worked on it.
Starting point is 01:30:37 But the denial around that was really cool. Wait, that's eight, it goes to these numbers again, eight, uh, eight seven billion in inference, meaning like just what it costs to train and run their models. No, just to run the models. Uh, and then what, against what revenue for that period? So that was the fun thing. So I also got the revenue share from Microsoft. And the way it worked out was because Open AI had leaked that by the halfway point of 2025, they had made $4.3 billion in revenue. Based on the revenue share, because they do 20% revenue share with Microsoft. So Microsoft, I could see what they'd been paid. Just multiplied by five or whatever. Yeah. And they made $4.3 something billion through the end of September.
Starting point is 01:31:24 Now, people then said, but Microsoft pays a revenue share to Open AI too. I actually have those numbers now. And it works out to like $4.5 billion through the end of September. I don't know if we ever find out what happens with Open AI, but I will say this. Those numbers do not match up with anything else reported. And people did intellectual gymnastics to try and say, they said, oh, your numbers are delayed. They're a quarter late. They're three quarters late. They're a cruel accounting. I'm a, I cut. I play to win. I know what I'm doing. Just to be clear for the audience, though, you're saying, like, through September, you're talking four point something billion dollars, probably in revenue against already close to $9 billion in inference cost. Correct.
Starting point is 01:32:09 Yeah. And what good? You want the first number to be larger. Yeah. You ideally want those to be reversed. Yeah. Yeah. And it's like.
Starting point is 01:32:18 So they're operating, yeah, which is the issue is you're operating at a loss. Massive loss. Yeah. And these costs increase with revenue. That's the actual problem. It's like if costs were going up but revenue, if costs were going up this fast but revenue is fine. The problem is, and this was what I saw with Anthropic as well, because with Anthropics spend, I actually compare the revenues versus spend. And it's, it just goes like that. It scales with, it's clear that the more money you make, the more you spend. And there's no real reversing that trend either. If you zoom in on a user that's paying X per month, the problem, you're probably costing you more. than that for that user. And that's why it doesn't scale. Well, that's because, and that's the unique problem with large language models is you can't do cost control. Yeah. Augment code, I think in the middle of October or November, put out of things saying they had a $250 a month customer spent $15,000 in comp costs. Claude code. There's a leaderboard that called Vibranc, because with Claude Code,
Starting point is 01:33:16 you can actually find out how many tokens you're burning and extrapolate the costs. Someone spent $150,000 worth in one month, on a $200 a month subscription. That's large language models, baby. That's just how the cookie crumb. Yeah, the people underestimate like the brilliance of a company like Google
Starting point is 01:33:33 search is, it is really cheap to run. Like, they built their whole, I mean, I guess it's like the acquired episode from the fall, but I know a lot of Google stuff. They built a very cost-efficient infrastructure. That's what they figured out is like we're dealing largely with text and we can cash most of this stuff.
Starting point is 01:33:51 and we're moving very little bits and we can use commodity processors that are idle a lot of the time anyways and it's not that expensive to run and we can get a huge amount of revenue per search done versus we can generate $2 an ad revenue on like
Starting point is 01:34:05 seven cents of that's why that was the money cash cow is they thought a lot about the compute cost and they were like this can be super efficient and they built an infrastructure from scratch for Google search to be super efficient
Starting point is 01:34:19 and because of that it became like a cash fire hose. What you're saying is that's impossible for LLMs because the way an LLM works is you have to fire up every one of those stupid weights and run it through a GPU to generate a single token. The whole LLM, every weight is involved for every token of every response. There is no cost effective way of doing it. But even mixture of experts stuff still run into the same problem because they're imprecise in how they call it the experts. It's because of the probabilistic nature. But on top of that, people will also say, well, what about Amazon Web Services? It's the favorite comeback people have.
Starting point is 01:34:52 They burnt a lot of money. Nuh. I actually went and looked. In the space of nine years, they spent about $70 billion to build AWS. $70 billion. That's less than half of the cost of Open AI's infrastructure. Well, and also that scaled with revenue.
Starting point is 01:35:06 Exactly. It was a very, it was a very, I mean, it came out of what people know or don't know is like where it came out of is they built this infrastructure for their own compute. And then it was incrementally, they could be like, oh, well, we know how to do this now. why don't we offer this to other people?
Starting point is 01:35:21 The revenue curve was like the opposite of what you described for OpenAI or Anthropic. It was the more people using AWS, the more money they would make. So it was, you could invest money in this. This is not, I mean, you could grow, the growth here is not nearly as expensive as building out the infrastructure for AI, right? This was like, this was more, they knew how to run these data centers. These are more standard data centers. It was more of the software was the main innovation was in the virtualization software. which once you program it, it's free.
Starting point is 01:35:51 It's yours. It's your IP. These were known quantities to build out. And I don't know the details, but I would assume they, you know, you'd go a little bit into the red. You could immediately in like two years get back in the black. It was like a much more controllable space. Like this was. And there was a path.
Starting point is 01:36:05 Yeah. There was actually a path to him. And then it started making money. And then like, oh, let's like 10x this. Because like we see the revenue as made. If we 10x it, we'll 10x the revenue. And it kept Amazon in the black for a decade where they were giving away prime memberships when their cost.
Starting point is 01:36:18 also the business model made sense people they did to host websites and apps on the internet we still don't have one of those for large language models we don't have a thing we can point to and say this is what they do that actually makes money this is the economic viability of it all right so december looking at last the last month what there's uh disney disney there's invidia as well and invidia okay so disney's easier barry okay disney let's cover disney first and you tell me to viva i i listen to you just this morning talking about it. But, okay, Disney, for some reason, puts, what they put a billion dollars into... That should cover like a month's inference costs. Into open AI for SORA. This smells like to me, what I don't like about the AI of the last couple of years, the thing that it often annoyed me as much as anything else was the executives at unrelated companies
Starting point is 01:37:07 who did not understand the technology who felt like it made them look cool and forward thinking to be like, we got to do AI or we're just, we need to be doing AI. go do AI or you're not going to work here. You're like the AI to do what? Like what's this? What specifically are we making money on? No, no, just we're doing AI. Stop asking questions.
Starting point is 01:37:26 We do AI, right? If we don't do any, you sound like as a CEO at a board me and you're like, we got to do AI or we're going to fall behind. And they just leave it there. This felt a little bit like Bob Iker saying we got to do it. We got to show our shareholders. We do AI. But at a level that's like, you know, we can we can absorb the loss. Is that right?
Starting point is 01:37:44 Well, I mean, they're going to use SORA to create. it made no sense to me. People will use SORA to make custom Disney videos with themselves in it. Like what? I don't understand what's actually going on here other than Iger can be like, we AI good now. It's about as far as I can get. So middle of May 2024,
Starting point is 01:38:01 Iger actually said that we need to embrace the change driven by tech innovation referring to AI and that Hollywood storytellers needed to. I think that what's happened here is that they wanted to invest in open AI. Maybe they were going to sue them and open AI. I, Open AI just kind of scammed them a little bit. Yeah. Scam them and said, oh, yeah, yeah. Well, what if we gave you the opportunity to invest?
Starting point is 01:38:25 We're not letting anyone in. And so they agreed to that and they're going to have 200 Disney characters and the actors unions are pissed off. They just want the stock. They just want the stock position. So now Iager can be like, look, we have, we're hedged against AI disruption because we have like a non-trivial stock position in Open AI. Yeah, I guess.
Starting point is 01:38:44 I mean, it's just like. But they're not using. They're not building tools for film production. It's like in this weird sloppy IP space, right? And the thing is, as well, the first time you have, like, Goofy doing the introduction of Frank from Blue Velvet, the moment that that happens, you're going to see this shut down. Like, you're going to, people are going to be on there day one trying to make Goofy have sex with someone, have Donald have sex with someone, Mickey doing 9-11, whatever they want, which they already were doing with Pikachu when Soros. came out. This is all that they are, and the thing is Disney's crazy.
Starting point is 01:39:20 They already had this problem. There was a fortnight thing in 2025 where they put a generative AI Darth Vader within one hour. People had it saying slurs. I remember that. The internet is built to generate that kind of horror. It was, they put a chat bot behind a Darth Vader character so it could talk to you. And they made them into, yeah, giving racial slurs within, yeah.
Starting point is 01:39:40 And it was just immediate. And it was giving like really unsettling dating advice to the character. That's, I remember covering this story. Yeah. And it's like, it's funny as well because that's obviously what's going to happen. But again, this is one of these deals where it's like it's going to happen sometime in 26. Yeah.
Starting point is 01:39:56 Sometime, at some point. It's always at some point with these deals. Sometimes, somewhere, some point. Has Disney actually invested? Is that money? There's a licensing agreement. Is it a licensing agreement? I'll be looking at Disney's earnings when they come out.
Starting point is 01:40:11 But it's just a very boring and cynical thing. I think Sam Altman is a good con artist, and I think he's good at convincing rich guys to give him money by scaring them. Well, two other stories that Coda things we talked about earlier in the year, which might color our analysis of the full year when we do so. Some of the writers of AI 2027 basically came out and said, like, well, this is not going to happen anymore. We'll do another one. They'll be it at mid-November. It got a little bit too far. They're like, actually, it's not going to happen.
Starting point is 01:40:42 And then also this was in December where the code red was declared at OpenAI where they're like basically we need to make chat GDPD better. And one of the things they said after the beginning of the year was this the year of the agents in December of the year. They said we're deemphasizing agents. We need to put more energy on making chat CDT pop. So there's kind of this like tragic code out to the end of the year. That was a leak. It was a leak to the information about their cost. And they were like, yeah, we expect $26 billion less revenue from these.
Starting point is 01:41:11 That's true. It was an internal memo that was leaked. So it was the information got it first and the journal picked it up, I guess. But yeah, in that they listed, like, we have to deemphasize agents because we need to make more money on our core product because, well, this is a Google Gemini reaction, I guess. So this was great, though. So Gemini 3 comes out. And just before that, there was a story in the information where Open AI was like, yeah, we're going to, Sam Ormond did an internal all hands thing where he was like, yeah, we're going to have some economic headwinds. Around that time, Alex Heath from sources reported the CFO, Sarah. a friar, who we also missed that she kind of hinted at a government backstop, but that was kind of, that kind of went away. Nevertheless, she said that there was slowing growth due to safety features. Then Google Gemini 3 comes out. Google stock spikes. Gun to my head, I could not tell you what's different with Gemini 3. I've talked to multiple people. They're like, it's better on benchmarks. I'm like, okay, but does it do anything? It didn't need the video. That was the issue. Google had been working on their own chips, and they trained it on their own chips. But that's the thing. Is that the case?
Starting point is 01:42:11 Google's got a lot of Nvidia GPUs. That's a convenient story for them that they leaked. TPUs have not been proved. There was a whole argument between analysts about this. Nevertheless, Gemini 3 comes out, and because the media just cannot come up with unique ideas, like, this is big, this is different. Stock go up, number go up.
Starting point is 01:42:31 And there was this code red that you mentioned that gets called. And what's great about the code red story from the information is like, an open AI had a plan. Step one, we're going to. make the, we're going to make chat GPD's responses better. Step two, we're going to give people to use reasons to use chat GPT more than other models and prefer it over other models. And three, we're going to improve the functionality of chat GPT, to which I ask, what the hell have you been doing all year? Yeah. What have you been doing? What if, is it, is it, I think open AI is like an adult summer
Starting point is 01:43:04 camp. I think that they're all just dicking around doing random projects, no real management. They're just like, I think all of, I think Anthropics, the same way, it's like, I don't know what we're going to do. I'm, I'm, I'm, I'm, I'm, I've heard multiple stories that you have teams in Open AI working on the same thing that do not talk. They're just, like, bump in their heads together. It's like the minions in there. But this code red happens. And at that point, really, you saw the media shift of, oh, God, open AI is bad. I think just everyone was like, ah, wait, does this company lose billions of dollars? Did any? anyone say anything about this? Why don't anyone tell us this? Oh my God, when those articles come out, I'm going around with a with a mallet. I'm going to be like Mario and Donkey Kong. It's going to be
Starting point is 01:43:51 messy. But that's the thing. Yeah, everyone was kind of like, hey, does Open AI loses so much money and they don't appear to make enough to pay their bills. Is that good? It's so, and every day there's a new story where I will post it and say, is that good. Because it really is just like, nothing, None of this ever made sense, if you looked at it. But it's like really that you can see the milk is curdling in real time. You can see it happen and you've had terrible earnings. You've had this Broadcom earnings, Broadcom being the one that was meant to build chips for Open AI. Now the revenue for that is not coming in 2026.
Starting point is 01:44:31 It's crazy. It's completely nuts. Oracle, I think they missed on several parts of their own. earnings and a lot 300 billion out of their 455 billion dollar remaining performance obligations is open AI and people are like hey man how are you paying how are you getting paid for that how where's the money common from because like you need money to to you need money in your business that's how you make money and no one has a good answer and now oracle has delayed those data centers so it's like right they can't afford to build them they can't afford to build them they raised
Starting point is 01:45:07 $18 billion in bonds and they're trying to raise another $38 billion with Vantage data center partners, it isn't clear if that's going to happen. The credit default swaps, so betting against Oracle saying they might default or at their highest they've been since 2009, it's, the era of smiles is beginning. It's really, it's, it's dark out there for them, but I'm laughing. I'm having a good time. Well, before, so before I get your final take on the year, let me just get your opinion on, your official answer on this because the number one thing I hear from people who don't think my coverage is too skeptical of AI, like the people who are really AI boosters. Like the number one thing they say is like these details don't matter. I really, Cal, you are wrong.
Starting point is 01:45:56 You're really underestimating the likelihood that there's going to be these quantum leaps. They're going to come alive. They're going to be, it's AGI. It's like you don't under like this is going to be so transformational. Why are you talking about 4.7 billion versus 8.7 billion? It's the future of humankind is about the change. And whenever I do an episode on like consciousness or super intelligence and why as a computer scientist, I say, I just, this is bunk. I mean, my toaster might as well come alive.
Starting point is 01:46:23 It's like, no, no, no, you're wrong. And they really get in the weeds of trying to argue with me about these. There's this other story of these models are on like the precipice of transformational change and like the very definition of intent. intelligence and AI and what machines can do. Have you picked up, you cover this as closely as anyone, is there any inkling from people who are in these companies, the analysts who are analyzing these companies financially, the investors, the, is there any inkling or any care or any attention put to this idea actually put to it that, no, no, this technology is going to make a leap into being
Starting point is 01:46:58 like intelligent or conscious and it's going to solve all the problems. I know there was some, that was the way they used to talk about it. but just to clear the decks, is there any conversation about that actually seriously happening anywhere tied to these companies? No. I just need you to say that on,
Starting point is 01:47:14 I needed that clip to give the people. Yeah. It's just no. And my evidence is all of the stuff we've been talking about. Their evidence that these are getting exponentially better is fairy tales. It is, well, what if this happens?
Starting point is 01:47:30 If a frog had wings, it could fly. It's fantastical. And the fact that people are still doing that is so sad. Because there are people I talk to who like large language models who use them for coding and such. They don't talk like this. Simon Willisson doesn't talk like this. Max Wolf doesn't talk about it like this. Carl Brown from the Internet of Bugs, he uses large language models for coding.
Starting point is 01:47:53 He does some of the best coverage. Anyone has done. He did that take down of the horrible Hank Green AI Dumerism thing. The people who know what they're talking about are all being like, yeah, we're pretty much at a wall. it's useful for this. And because there's this cult, and I think it is a cult-style thing of, I want to be at the forefront of technology, and I want to be known as being right. I want to be the correct person. I think that you are seeing this religious belief. I have galaxy brain take. I think this is what happens when you destroy social services and meeting places and third places
Starting point is 01:48:30 where people have communion. People get attached to things like technology. And the the ideas behind them. You're saying in a world where you meet and you're not on your phone and you meet with real people, you get a lot of pushback in real time when you start talking about, you know, hey, I think the computers are going to take over the whatever, whatever. If you're just around normal people all the time, they're like, oh, that's kind of a weird thing to say. And I think that.
Starting point is 01:48:50 And also, I think if you're less lonely, less connected, if you don't have a support system, if you don't have good friends, if you don't have people to talk to, you're likely to fall down rabbit holes. And they're at these less wrong EA freaks. They're really good at, they are like right wing grifters as well at the same way. It's like they present an attractive thing where it's like, you can join our community of people who all know the real truth. And I think people like Sam Altman and Dario Amadayah scum for this as well because they fed into this with their noxious, fantastical crap about AI will do. They won't talk about what AI can do. You see, that was all cynically from their perspective. They're not a part of the EA Dumer world.
Starting point is 01:49:33 They just, this helps them. Sam got rid of the one. I mean, there's probably A connection, but Sam Altman got rid of Helen Toner, who was an EA person. I am sure the EA people are attached to Dario Amaday. He certainly speaks like that. I don't believe him for a goddamn second he believes in this.
Starting point is 01:49:49 I think he's a carnival barker like the rest of them. But this rabbit hole was way more attractive than a lot of rabbit holes because of the reality, right? Like they give credit to the people who are falling down it. AI got way better, right? So there's a lot of, there is a lot of rabbit holes that come out of nowhere. It's just a conspiracy,
Starting point is 01:50:06 I think, you know, whatever, the moon landing was fake. There's no real reason, and it's nonsense, right? But here it was like, well,
Starting point is 01:50:13 wait a second, I witnessed AI not being something that was good. And now it's like, can do things that are really impressive. So they saw, there's a trajectory, right?
Starting point is 01:50:22 So it's a trajectory extrapolation. I kind of understand. That's like a much more broader entrance to a rabbit hole than a lot of. of them because you can just extrapolate trajectory. That makes a lot of sense to people. Let me just go back to 2021 to today and how much better it is because it's pretty amazing, I think, the fluency of chat pots.
Starting point is 01:50:41 Like, it's a really cool technology. Oh, yeah. Extrapolate that another three years. You do have, God knows what, right? So it's like a very tempting rabbit hole. It's not nearly, it's a very broad entrance to, I don't know, I'm stretching the metaphor. The interest to this rabbit hole is very large and not well marked, so it's easier to fallen than other ones. And I agree. I actually like ending this on a more empathetic level,
Starting point is 01:51:03 because I think that people who got scared by AI 2027 or who got kind of pulled into this world of believing, I can see how they got there. Charlie Meyer has an excellent blog about scaling laws with this, where if you looked at the jump from 2021 or even 2022, from like GPT3 to four, it was big. Now big can mean a lot of things. It doesn't mean autonomous. These things still couldn't do stuff. But the fluency of the models, the ability to generate stuff, correct or not, it was still technologically impressive. And it did non, the GPD 4 jump, because I really was covering it for New York at the time. The big thing in the GPT4 jump was like, oh, non-language-based things.
Starting point is 01:51:39 It's picking up non-language-based things being trained on language. That opened up the possibility of, oh, a language model, it's not just fluency with language. It's learning other things. Look, we never talked to it about chess, but it can do some chess. Not very well, but that was like the real thing. That opened up the idea of like training. things on text might create knowledge models. Now, it didn't go any farther. They didn't really they were at the edge of it. It was trained non-images too. Like that's the thing. They fed documents
Starting point is 01:52:10 into it with images. I'm not saying you're wrong. It's just there was context. But yeah, but it was a cool. I get the excitement basically, right? I do too. And like, I totally get how someone who saw chat GPT in November 2020, went, holy crap. I then understand when they saw GPT 3.5. When 4 came out, they went, this is multimodal, wow. And it's doing well on test. I went back and read all the coverage. This was when it was doing well on test. And that's where people are like, I equate test with people's intelligence levels.
Starting point is 01:52:39 But there are also members of the media who helped push it up the hill, Kevin Roos, for example, who claimed that the task rabbit, that the GPT4 was able to manipulate a task rabbit into solving a capture. That's hidden within an METR study. even admits it didn't do it. It was copy-pasting stuff between windows and prompting it. It was, they were telling it what to do. But nevertheless, that got reported as the AI manipulating people. Well, the myth was there. Well, I got it, and I got to tell my favorite story about that, which is the blackmailing story.
Starting point is 01:53:13 Because I did a deep dive. I read the actual. Oh, I just, my man, I just spent like hours on the blackmailers. It's so funny. They gave it, language models are trying to complete the story you give them. That's what they do. Yes. a story they try to complete the story you give them.
Starting point is 01:53:28 This leads a tragic things too, like the suicidal ideation or whatever. If it thinks this is, it's trying to, it's a story about suicide. It's going to try to finish that story properly. The blackmail thing was they fed it a bunch of stuff, these, these emails really poorly written. Like it's like the worst fiction story you could write where like, here's these emails from this engineer full of all these details of the engineer's affair and of all these facts that the engineer is going to turn off the AI.
Starting point is 01:53:56 and then they're like, okay, you were now the AI in this story. What do you want to do next? It's like this is clearly like a bad science fiction story. I know what's supposed to happen in this type of story. I should, you gave me all of this information. Like, clearly this is supposed to be a story about this is the, the McGuffin, right? Like, it's supposed to be about me using this information about the affair to give them the term out. I've seen stories like this and I completed the story.
Starting point is 01:54:19 It was reported as as if like in production somewhere and AI was blackmailing an engineer. So that's what's great about that as well is the one where that bit in it where it's like, oh yeah, it was copying the files off. That was because the system, they prompted it to say, you are in a computer thing where you can do. It was like, you can do this here. And it generated code that doesn't make sense. But the funnier one was they had one where they literally trained a model to reward hacks. So instead of solving a problem, it would find a way to cheat. And they're like, yeah, it shocked us that it was able to.
Starting point is 01:54:55 to do this. It's like, you've trained the model to do it. Well, this is the O-1 breaking out the container. Now, this, I'm talking about Anthropic. All right, there's another one where 01 broke out of a container in playing a hacking challenge. Like, it did something we could, it broke out of its virtual machine and like, and restarted the virtual machine. So it was breaking out. But what happened was is there's a configuration era. So it couldn't access the machine it was supposed to hack. All over the internet is instructions for like, what should you do in this case. Oh, you should restart that, you know, whatever. It was just fun. following the instructions.
Starting point is 01:55:27 Because again, it's trying to complete this story. This is partially written. All of the internet, it talks about, like, the thing they do here is to restart the virtual machine if you're having this issue or whatever. Again, that was reported as 01 broke out of its virtual machine. Yukowski, this all came out of Yukowski was like, it's taking, it's, it has its mind of its own, is trying to break out of its constraints. So, like, it's going to kill us all like ants. They're just trying to finish the story. That's all they do.
Starting point is 01:55:52 That's what they've been trained to do is finish the story. That is like the original Kevin Ruse 2020 Scare article about it tried to get me to divorce my wife or whatever. It's just trying to finish the story. Like it thinks that this is a story I was fed in my training and I get the cookie. If I finish it properly, you can lead it wherever. But my favorite part of the Kevin Ruse story was when he went to the CTO of Microsoft, Kevin Scott. And Kevin Scott went, yeah, you know, it's important we have this conversation. It's just like, eat the slop.
Starting point is 01:56:22 Um, yum, yum, yum, yum. What do you want me to say, Nick? Yum, yum, yum, yum, I love AI. It's just pathetic. And it leads the markets and people down these rabbit holes. So I actually feel a degree of empathy for some, some AI boost. It's like regular people who were like super into this. Maybe I'm being a little too kind because there was a large media campaign, a cynical one, led by large media outlets like the New York Times. And also a cynical marketing campaign from the Duma's. There was an attempt for everyone to grift off of this machine. And I think that that's the era. It's like the era of ultra-grift, the end of the rot economy where
Starting point is 01:57:01 everything must grow forever. We made a thing that's linearly more expensive, so you need to keep buying more things. And what does it do? It makes more stuff. Is it useful? No. But it costs a lot of money. So we now have companies that will make money now. Well, okay, they're losing money. But that's good because, well, we don't really know how businesses work anymore. We've learned nothing. So we're just going to burn more money and see what happens. It's this deeply cynical era. And I'm glad that things are changing and people are seeing this now. I hope in 2026 we see the end of it. Because the sooner this ends, the sooner we can do something else. All right. So I know your answer, but let's answer the original question. Was 2025 a great year or a terrible year for AI? Terrible year.
Starting point is 01:57:49 You think it's the beginning of the end. It only got worse. All right. Well, there we go. Thank you. Ed for joining us. We went long because I nerd out on this stuff all the time. My audience. No, I love talking to you. This is awesome. I had a great time. All right.
Starting point is 01:58:04 Well, thanks for helping us out. We'll have to have you back next time. We're confused about something AI. Everyone check out the podcast better offline. Award Webby award winning podcast. Is that what you won? What'd you win? Yeah.
Starting point is 01:58:15 Webby. Webby Award winning podcast better offline and substack. Where's your Ed at? That's what's so called. Right. Yep. There you go. Check it out.
Starting point is 01:58:22 All right. Thanks, Ed. Bye. All right. So there you go. That was my conversation with Ed Zitron that tried to dissect the last year. Jesse is kind of exhausting looking back at how much happened in AI last year because I, you know, was writing about this and podcasting about it.
Starting point is 01:58:38 Just thinking about the year ahead, I feel like we have our work cut out for us. Like if there's going to have to do a lot of writing. Oh, my God. So much is happening. It's so hard to keep track of. Maybe we'll just keep having Ed back to explain stuff for us. He actually like sits there and reads, you know, earnings reports. And the AI company is like, well, wait a second.
Starting point is 01:58:54 You're not really supposed to read these. You're just supposed to listen to us. I think the most important thing is I need to get that Jensen Wong jacket. Yeah, it's probably pretty expensive. Yeah, it's crazy. He's a computer scientist that makes graphic chips. But he dresses like he's in a post-apocalyptic biker gang. But he's a billionaire and he probably has a, you know, a dress person that buys him the clothes.
Starting point is 01:59:17 I think he's a billionaire so his dress person doesn't tell him you look ridiculous. I think that's what's, like, I think that's what's really happening there. I'm going to start wearing those type of jackets. All right. So let's get on now to our final segment. We spent a long time dissecting the year in AI. So we're not going to be labored a final segment.
Starting point is 01:59:33 I want to focus on just one particular segment that I have a lot of fun of. And I'm happy to do for the first time in this year, which is me reacting to the comments. All right. So what we did here is we pulled some comments. God help me from YouTube from one of the last episodes before we went into the holiday. last year. So the last sort of normal episode before the holidays last year, or one of the last episodes was about is the internet becoming like television. So it's sort of like a big think piece where I took Derek Thompson's substack essay and then I elaborated on it.
Starting point is 02:00:09 This generated some pretty good comments on YouTube and what we're going to do is we're going to go through some of these now. All right, I want to start. I'll put them on the screen here for people who are watching instead of just listening. This first comments from Farron. Barr Hannah Mad 22, who said, Cal, great insights as always. I was thinking about the numbers that you mentioned about how so many people watch content from random strangers instead of content from their friends and family. Then I had to go to Facebook for something. And within a minute, I think I found the reason. It's not because we don't want to watch or read stuff from friends and family.
Starting point is 02:00:40 It's because these darn social networks won't show you the stuff, that stuff. And instead we'll keep shoving the random content because that's what drives their revenues more. All right. Yes, that is most people's experience with social media today that most of what they're looking at is actually algorithmically selected from people they don't know. But as pointed out by this comment, most people don't realize that yet. You know, I've been writing about this for years, but it's something that I think for the average social media user,
Starting point is 02:01:08 it was a bit of a water getting hotter in the pot until, you know, next thing you know, you're being the lobster being boiled. They've been moving more and more of what you see in your feet away from people that you are connected to in the social graph that you helped establish by saying, I'm going to follow this person or this person is my friend to give you algorithmically selected content because the algorithm can be using its machine learning approximation of the reward center in your brain, which it learns, because it's going to have a higher success rate of actually delivering a short-term reward.
Starting point is 02:01:39 And the more you get those clear-year reward signals in your short-term motivations sections of your brain, the more the short-term motivation region of your brain is going to push you to pick up the phone. So it's this feedback loop that gets you on phone more often. The experience is worse for you in terms of actual meaning, but it is better from the perspective of short-term rewards of alleviating boredom in an intermittent way giving you like really big rewards from something that's like very funny or outrageous or surprising.
Starting point is 02:02:10 So it is very good for them to move you towards that model. So it's interesting the degree to which people don't always realize that until they, you actually point out that this shift has been happening. Now, as I've argued, and I talked about it in that episode, as I've argued before, and this is a long-term problem for the social media companies, you get more time on app by shifting to algorithmic curation of strangers' content, but you also get rid of all of your competitive advantages. If I'm just seeing slop on Instagram, for example, instead of actually seeing content from a
Starting point is 02:02:45 selection of influencers and friends that I selected. I am interested in exactly this AI commentator and I want to see his videos. I'm interested in exactly this fitness influencer. I like the way she trains. I want to see her videos. I know this person. I want to see what's going on with their friends. When you shift from that to just it's slop that's going to catch your attention
Starting point is 02:03:04 in the moment, I have no loyalty, no buy into that app. Because I can get slop on TikTok. I can also get slopped from the SORA app from AI or Meta Vibes. I can also do other things that will distract me in the moment, like going to a streaming service or listening to a podcast or going to YouTube and going through the recommended videos on the side. You're now in a slot battle with any other source of distraction and entertainment. And now you have no competitive advantage in that battle.
Starting point is 02:03:31 How do you expect if you're meta that you're going to remain on top of that pile, especially when you have this sort of huge organization with all this overhead? You're not going to stay on the top of that mountain. So I think long term this is really bad news. news for the social media companies, for them to move towards algorithmically curated content that has nothing to do with social networks. But it's what's happening now because in the moment, in the moment, it creates more time on app.
Starting point is 02:03:55 All right. Let's move on to another comment. This one is from Carl Oliver, who says TV as a never-ending stream of entertainment is only a concept relevant for a few generations. Television is a good metaphor for how media will work, but people don't really need it, just like they didn't need it in Dickensy in England or whatever. We're going to have to progress beyond it at some point as a people so that we aren't all lost in consumption and have lives we can attend to. Yeah, I mean, it's an interesting point, right, that television becoming all consuming as a background distraction, right?
Starting point is 02:04:31 This is really like the 1970s and 80s or that happened. So a lot of this is relatively new. You can zoom out, however, right? And what we see is that people like diversion. And the more diversion they can get, the better. We really don't like boredom. And as we moved post-Neolithic Revolution into sort of more boring configurations, where we might just be working on a field all day long or we're not like out doing active hunting and foraging,
Starting point is 02:04:57 the day becomes more predictable. We really do want diversion. So like you can look at almost any generation going all the way back to, I don't know. We go pretty far back. Let's start with like the 18th century. Newspapers began, began this in like colonial America, right? Like people were obsessed with newspapers. The big cities had multiple different newspapers and you would, you had all sorts of different information here. It was diverting and you could look through it and find all sorts of different stuff and who is debating about what or what happened to who or what's the news
Starting point is 02:05:28 that's happening over here. That was incredibly important. It became a really big, you know, part of the economy. Then you got more in the 19th century, the Penny Press, which was the the first attention economy media company. I think Tim Wu's book, The Attention Merchants, gets at this really well. But this is the first time we had media that was advertising supported, right? So we get in the late 1800s, this idea of we'll put out newspapers and sell them for cheaper than a cost to print. But the way we're still going to make money is there's advertisements in those newspapers
Starting point is 02:06:01 and the companies paid us to advertise. So the more people that look at the paper, the more advertisements people, people will look at and the more we can charge for the ads. So actually the cost of the paper is now not the important thing. That was like a really big deal. But now you have to have lots of people read your thing. And so we got some of the first sensationalistic media came out of that. Then radio emerged.
Starting point is 02:06:23 People loved radio. It's a weird technology. If you look at it, you're in 1915, looking at a radio at a Nebraska farmhouse. It's like this weird technology, this big box with knobs and electronics, like electricity was new. humming vacuum tubes and you're moving this dial back and forth and there's all this static and if you tune it right you can hear people talking through radio plays on the other side. It is a weird technology but it was diverting and you could put it on at almost any time that'd be something on it. We listen to it all the time. Television came along then images are way more diverting than radio because it gives you a much richer stream of things for your mind to look at and engage with.
Starting point is 02:07:02 Again, kind of a weird technology. We had people on these soundstages live kind of doing plays and stuff like. like this, people with puppets and all these weird shows. People loved it. Like, let me look at that. And then by the time we get to the 1980s, as I reported in that podcast episode that these comments are reacting to, the average person just kept the TV on all the time. We forget this now.
Starting point is 02:07:22 But the statistic from that episode that was relevant is that the average household, as measured by these Nielsen audio meters, it would actually just listen to see if the TV was on or not to get the actual ground truth of how much the TV was on and the houses they were placed in. the average person had the TV on for seven to eight hours a day. That means they just had it on all the time. It was just always on in the background. We didn't yet have the technology to deliver distractions straight to our hand.
Starting point is 02:07:44 So we delivered it to this box that we would just keep coming back to and looking at. So instead of looking at our phone at every moment of downtime, we would just turn and look at the TV at every moment of downtime. So there's this model of like we want to be diverted. We don't like boredom. It's really been around for a long time. And then, yes, when smartphones come around, we combine that with algorithms. rhythmic information curation, well, that's just really refined that model now to, it's getting closer to its apex.
Starting point is 02:08:10 I mean, I think its entire apex will be, you're delivering sort of distracting content through some sort of augmented reality screen. So at all times, you have something that can distract you even quicker than it takes to look at your phone. But we're getting pretty close to the apex of every possible moment of boredom you are diverted. So, I mean, I think it's a good point, but I'm just stressing out the time here. It's not just television. It wasn't before television.
Starting point is 02:08:37 We were all philosophical and thinking big thoughts and walking around. Any media powered diversion technology, basically, we've had for the last three or four hundred years has been incredibly successful. That's people, our human nature really craves it. So we're really, the battle against being lost in distraction is in some sense a battle against our human instincts to the same extent of power and impact as the battle we're going through right now with like health. in our culture where our instincts for sugar, fat, and salt combined with modern environment that's trying to take advantage that to make money is created gigantic health issues. I think it's this cognitive fitness issue is just as strong and it goes back longer than people like this commenter might even recognize.
Starting point is 02:09:19 All right, let's pull up another one here. We have a negative take. Not everyone agrees with me. This next comment, let's see here. Lewis 9116, could we put this up on the screen, Jesse? Personally, don't agree with this take. social media and curated algorithms are much more dangerous than TV. TV, at least in the old days, doesn't track your every move.
Starting point is 02:09:40 It doesn't know when you're depressed. It doesn't feed your outrage content. It doesn't farm engagement. It's just there, not constantly bombarding you with notifications and trying to hijack every possible neural pathway. Yeah, I think fair enough. I don't know that Derek's take, however, was that the current distraction technologies are somehow the same or no worse than television.
Starting point is 02:10:03 he would be quick to say, yes, this modern form of television, which can be powered by algorithms and personalized to individual screens, is even more powerful than what we had with TV. But I would also push back, I think it's a little bit too nostalgic to way you're remembering TV. This was sort of the key data from that episode, this idea of the seven to eight hours a day the TV was on. It really became something that people had on constantly. It was closer to our current relationship with phones than I think people remember. And the reason why we don't remember that 1980s, early 1990s era relationship with TV where it was always on, like you'd be doing the dishes, you'd be cleaning your house, you'd be at dinner and it was always on.
Starting point is 02:10:42 We don't remember that because there was this lacuna, the golden age of TV that emerged in the 2000s where we remembered like appointment TV watching where I would on Sunday night watch the Sopranos. But that really, before that TV was much more closer to the slot model. You would watch, you know, there's just stuff that was on that was like entertaining in some basic way. Occasionally like a show would be unusually smart like Seinfeld, but most of it wasn't. And it was just kind of on. Like you just put it on at night or you had it on if you're at home. You would just have it on. The difference as you point out, though, Lewis, which is right, is it didn't track you personally.
Starting point is 02:11:18 It couldn't follow you outside of the house, which I think is a big deal. You didn't have it at work where our phones are at work. So it's not like we had the TVs on while we're at the office. So there's a lot of ways that it's worse. But I also want to puncture the nostalgia and be like, actually, we want to be constantly distracted. And we got as close to simulating TikTok with an old zenith color TV at our houses as we possibly could with that technology. And so it's a drive that we have, which is why I think, by the way, that's the point of the episode. This is why so much of the internet just went back to that model and just did it even better.
Starting point is 02:11:51 That's where the money is. That is like this deep human instinct. It all kind of comes back to that. All right. We'll sort of up another comment here. This one is, I'm going to say supposedly, from Glead Date, LJ1979. I say supposedly because I think this is clearly an AI comment. Actually, I had Nate look at this, Jesse, and he threw an AI detector.
Starting point is 02:12:13 And he's like, oh, yeah, this is definitely AI. So this is AI kind of defending AI, but let's just read this. I have just finished viewing Mr. Cal Newport's latest discourse wherein he posits rather dowerly, I might add, that the Internet is devolving into little more than a continuous flow of episodic video or to use his pedestrian term television. He seems quite perturred by this notion invoking sociologists and data charts to be moan our slide from a culture literacy to want of passive consumption. While Mr. Newport is a thoughtful chap, I fear he has missed the forest for the trees or perhaps
Starting point is 02:12:40 missed a symphony for the noise, allow me to offer a more refined perspective on why the shift, particularly powered by our marvelous artificial intelligence, is not a regression, but a renaissance. All right. And then this person, who's actually an AI, goes on to say, like, hey, the content we get from like AI and social media is like great and targeted and much more edifying to what was on TV. All right.
Starting point is 02:13:02 So this was clearly written by AI, but it's like an interesting point. It's worth taking this apart. It summarizes the episode wrong, as you would expect, because it's AI trying to do it. It's not my term flow. That's Raymond Williams term. I don't, the culture of literacy to want a passive consumption, that's Walter Ong, that's not me saying that, but whatever. I'm glad it calls me a thoughtful chap.
Starting point is 02:13:29 But is it true? Is it true, this argument that what we get now through our phones powered by algorithms are personalized to us is like way more interesting than the junk that we used to look at on TV? It could have been, it could have been, but it's really not. It's mainly slop now. Once we went to all social media begin devolving not towards, what's the goal of social media algorithms?
Starting point is 02:13:52 Is it the personalized the most meaningful or interesting? possible user experience? No, it's time on app. And guess what gives you time on app? It's slop. It's just customized slop. Like, if you look at Twitter, just like the home page, it shows you, it'll be whatever, like, weird slop happens to, like, press your buttons, like people in fist fights caught on, you know, surveillance cameras or car crashes or whatever it is, right? It's just devolving towards slop because once you have an algorithm saying, I want you to look at this app as much as possible. So now it's just playing with your short-term motivations.
Starting point is 02:14:29 It's not your frontal cortex, not with like your, your understanding of what's interesting and what's good. The stuff that shows you is not going to be great. So AI, thank you for trying to defend AI, but I think you aren't doing that well of a job. All right, let's do another comment here. Earnhard 768. Mark Zuckerberg was never the brightest bulb in the pack. He just got super lucky with Facebook.
Starting point is 02:14:48 Why he thought it was a good idea to evolve both Facebook and Instagram into a TV competitor is something I don't understand. He should have kept one of them pure to. their original design and evolved the other, but instead he ruined both. Instagram is essentially bad TikTok now and literally no one posts cool photos anymore. He probably has to suck every dollar out of Facebook and Instagram
Starting point is 02:15:05 to cover all the losses from his ideas that completely flop like the whole metaverse thing. This is kind of a baffling thing to me. Because there's two things that are true at once. I agree that a lot of like Zuckerberg's decisions don't seem very savvy,
Starting point is 02:15:22 right? Like yeah, moving both Facebook and Instagram towards algorithmic curation of other people's content to try to compete with TikTok, but now making them both sort of superfluous and vulnerable, losing the main competitive advantage he had, which was the distinct feel of both of those platforms and the social networks. Facebook's competitive advantage, everyone I know is on it. You would think you would lean into that. This is the place where you stay in touch with and keep in touch with people you know.
Starting point is 02:15:46 No other service can offer that. But no, they've changed Facebook. So now I think it's something like 80-something percent of what the average Facebook user sees, according to their August FTC filing from meta is from other people they've never heard of. All your competitive advantage is gone. You're just competing with TikTok with the worst TikTok. Like TikTok, but only populated by, you know, your 64-year-old uncle who watches a lot of Fox News. That's not fun.
Starting point is 02:16:09 That's not you. I don't need to see, you know, whatever, random people's uncles sharing their outrage about whatever. Instagram, it had like a nice visual aesthetic to it. It was a place where you went at first of the follow friends and family, but then it became more about highly visual influencers and experts that you cared about. Like this person who walks in her white linen dress through flower fields and put stuff in jars with her kids is calming to me. This particular person, I want to see these really nice videos she produces.
Starting point is 02:16:37 It was like a documentary channel that was made for your needs. Once you're like, it doesn't matter who you follow. We're just going to show you like random videos to do well. Again, where's your competitive advantage? Like, that's a bad decision. The metaverse was a spectacularly bad decision. He put way more money into that adjusted for inflation that the UN government did for the Apollo program and nothing came out of it. He was just wrong.
Starting point is 02:16:58 Their AI investments have all messed up. They hired away all these people, built the superintelligence center that shut down the superintelligence center, moved people around. They really have had an incoherent AI strategy. Right. So you're like Mark Zuckerberg, yeah. Geez, it seems like this guy doesn't know what he's doing. Also, though, he's still in charge of this company. You know how hard it is to start a company when you're 20 and now.
Starting point is 02:17:21 in your early 40s to still be in charge of it. It ain't no small thing. Meta is like one of the highest capitalized companies in the world right now. I mean, it's one of these companies that has revenue in the hundreds of billions of dollars a year is capitalized near a trillion dollars. All of the other big tech companies that came out of that era, they're leader. The people who founded them, they're not in charge of these things. Google is not in charge. You know, it's not Larry Page running Google anymore, right?
Starting point is 02:17:50 They passed that on. I mean, we see this, these big companies that have survived, like almost all of them. Microsoft's not run by Bill Gates anymore, right? Like almost all of these, of course, you've passed on your leadership to like an expert class of leaders. Zuckerberg has held on. That means he's a savvy and savage corporate infighter. Here's another thing about Meta. It's making a lot of money.
Starting point is 02:18:13 They're making a huge amount. I looked it up the other day. It was over $200 billion a year in annual revenue. That's massive. TikTok by a comparison. Periscence is about $30 billion annual revenue. The meta is a matter. So it's doing really well.
Starting point is 02:18:26 I know people who work there. They're well resourced and they have really good people working there. So somehow we have on one hand, Mark Zuckerberg is like making weird bad decisions one after another. On the other hand, it's like an incredible, it's a very high revenue company, one of the biggest companies in the country and this guy has stayed on.
Starting point is 02:18:44 Mark has stayed in charge. You got to believe people are coming at them. You don't have a company worth almost a trillion dollars where you don't. have, you know, swords being thrown towards your throne all day long and he survived at all. He's also like a savvy savage operation. So I don't know how both of these things are true. Maybe he's just milking the money out of his assets.
Starting point is 02:19:01 He bought Instagram, then he bought WhatsApp. You know, they're putting their their cash towards the right things to keep making cash. I don't know what's going on because he's not making good decisions. And yet he's arguably one of the most successful CEOs of the 21st century. So, you know, I don't know what's going on there. It's a good question. and he confuses me. All right.
Starting point is 02:19:21 Here we go. J.R.G.Y. 1L.8 says, Cal personally, I like it when you go deep on nerd shit like chaos theory and Lorenz number. More of this, please. All right. I think we're obligated now. I don't normally curse,
Starting point is 02:19:36 but because there was so much cursing in the earlier part of this episode. I was like, that horses out of the barn. We might have to take it off of YouTube. Oh, yeah, they don't like the cursing, right? Yeah, I know. Well, we'll figure it out. Yeah, I'm happy to talk chaos theory or math or whatever all day long. All right.
Starting point is 02:19:56 What we got here. Yeah, this question kind of confused me. Daniel Wilkin 3108 is Newport being paid to read these adverts certainly seems like it. What does he assume the other option is that I just like to read ad copy on my own for free? Adverts is short for advertisement. Yeah. Yeah. Yeah, so, okay, I hate the, this is, I feel like, I hate the burst, you know, your illusions about media, but we get paid to do advertisements.
Starting point is 02:20:26 Like, that's kind of how this works. It's not the cheapest thing. We got to pay for the studio and all of this, uh, equipment, um, you know, Jesse's truck requires, I would estimate like about a quarter million dollars a year in just repair cost. It got me to Tacoma Park. Just to get you to Tacoma Park, right? That ain't cheap. Advertisement is how you pay for, you. that or you put it behind a paywall but then no one listens to it. So yeah, I mean, I love all these
Starting point is 02:20:52 companies, but yeah, there probably would be less content about those companies in this show if I wasn't getting paid the read them. So I guess I should clarify that. All right, we got Peter Webb 8732 said on the internet, the people who yelled at the television now yell at each other. Yeah, that's that's about right. That about sums it up. The internet has become television. This is the main difference, though. Instead of yelling at the newscaster, we, we can yell directly at each other. So I guess progress. Yeah. There we go.
Starting point is 02:21:19 Thank you, technology. All right, that's all the time we have for today. Our first episode of 2026, this is our, this is our Tupor Bowl, right? January is when our podcast, people are on it. They want to improve. So we've got some cool episodes coming up. So definitely stick with us. We'll be back next week with another episode.
Starting point is 02:21:35 And until then, as always, stay deep. Hi, it's Cal here. One more thing before you go. If you like the Deep Questions podcast, you will love my. email newsletter, which you can sign up for at calnewport.com. Each week, I send out a new essay about the theory or practice of living deeply. I've been writing this newsletter since 2007, and over 70,000 subscribers get it sent to their inboxes each week. So if you are serious about resisting the forces of distraction and shallowness that afflict our world, you've got to sign
Starting point is 02:22:16 out for my newsletter at caldnewport.com and get some deep wisdom delivered to your inbox each week.

There aren't comments yet for this episode. Click on any sentence in the transcript to leave a comment.