Deep Questions with Cal Newport - Ep. 367: What if AI Doesn’t Get Much Better Than This?

Episode Date: August 25, 2025

In recent years, it's been hard not to react to the possibilities of generative AI with a mixture of euphoria and dread. But after OpenAI's lackluster GPT-5 launch, a new, almost heretical-seeming question has emerged: what if progress on AI has stalled well short of the wild predictions we were promised? In today's episode, Cal draws from reporting on his recent New Yorker article to go deep into this question. What is going on with AI? How did we get here? What does it mean for our personal quest to live deeper lives? He then answers listener questions and ends by discussing his recent brush with literary acclaim.

Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo

Video from today's episode: youtube.com/calnewportmedia

- Deep Dive: What if AI Doesn't Get Much Better Than This? [1:00]
- Will AI leave me unemployed in 10 years? [1:06:26]
- How should I structure the next 10 years as a recently retired college professor? [1:13:58]
- I just moved. How should I arrange my book collection? [1:15:36]
- CALL: Overhead tax [1:17:47]
- CAL REACTS: Ed Sheehan and the Booker Prize [1:27:55]

Links:
- Buy Cal's latest book, "Slow Productivity," at www.calnewport.com/slow
- Get a signed copy of Cal's "Slow Productivity" at peoplesbooktakoma.com/event/cal-newport/
- Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba?
- newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
- youtube.com/watch?v=zju51INmW7U
- youtube.com/watch?v=Dtdue31z-X8
- youtube.com/watch?v=0SXCIfFK5r8
- youtube.com/shorts/dYZmGHOLNRU
- youtube.com/watch?v=k82RwXqZHY8
- youtube.com/watch?v=qhnJDDX2hhU
- youtube.com/watch?v=qbIk7-JPB2c&t=3s
- youtube.com/shorts/JCImeOUVTJE

Thanks to our Sponsors:
- grammarly.com/podcast
- cozyearth.com/deep
- calderalab.com/deep
- shipstation.com/deep

Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 In the years since ChatGPT's astonishing launch, it's been hard not to get swept up in feelings of euphoria or dread about the looming impacts of this new type of artificial intelligence. But in recent weeks, this vibe seems to be shifting. Both the media and technologists no longer seem so certain that everything is about to change. Now, how is this possible? What went wrong? And what should we really expect from this tech in the next few years? I'm Cal Newport, and this is Deep Questions. Today's episode: What if AI doesn't get much better than this?
Starting point is 00:00:51 Part 1, the week we woke up. Dario, you've said that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10 to 20 percent. How soon might that happen? Well, first of all, thanks for having me on the show. But just to back up a little bit, you know, I've been building AI for over a decade. And I think maybe the most salient feature of the technology and what is driving all of this is how fast the technology is getting better. A couple of years ago, you could say that AI models were maybe as good as a smart high school student.
Starting point is 00:01:35 I would say that now they're as good as a smart college student and sort of reaching past that. I really worry, particularly at the entry level, that the AI models are, you know, very much at the center of what an entry level human worker would do. That was Dario Amadeh talking to CNN's Anderson Cooper. Now, Dario is the CEO of the AI company Anthropic. And if you want to know why we have become so worked up about generative AI, a big part of this answer is that tech CEOs like Amade have been saying, astonishing claims like the one we just heard. Remember what he just said there? AI used to be as good as an average high school student.
Starting point is 00:02:20 Now, they're as good as a smart college student. And he worries about entry-level jobs still being around for humans to actually do. Now, Amade is not alone in these types of claims. If you listen to the CEOs of these companies in the last six months or so, it's almost like they've had a competition to see. who could become more over the top. Jesse, play the clip we have of Sam Altman, CEO of OpenAI,
Starting point is 00:02:48 appearing on Theo Vaughn's podcast. There are these moments in the history of science where you have a group of scientists look at their creation and just say, you know, what have we done? Maybe it's great. Maybe it's bad, but what have we done? Like maybe the most iconic example is thinking about the scientists working on the Manhattan Project in
Starting point is 00:03:10 1945 sitting there watching the Trinity test and just, you know, this thing that it was a completely new not human scale kind of power and everyone knew it was going to reshape the world. And I do think people working on AI have that feeling in a very deep way. You know.
Starting point is 00:03:31 All right, that's enough of that. At least we know Sam Altman's a modest man, Jesse. or like the Manhattan Project. All right, not to be outdone. Meta CEO Mark Zuckerberg has... AI keeps accelerating. And over the past few months, we've begun to see glimpses of AI systems improving themselves.
Starting point is 00:03:50 So developing superintelligence is now in sight. So he didn't want to be outdone there. All right. So yeah, sure you're like the Oppenheimer, but I'm about to create super intelligence. And can I just say, Jesse, as an aside, I mean, Zuckerberg, he's got to remain one of the worst communicators in the history of really large companies.
Starting point is 00:04:08 I mean, this is neither here nor there, but why does he always sound like a robot whose a motion circuit board shorted out? You imagine? It's like, okay, here's my Zuckerberg impression. Hi, Bob. It's good to see you. I have news to share that is bad.
Starting point is 00:04:25 Your wife was in a significant wheat thresher accident. Going forward, she will need a machine to chew for her. I hope otherwise, your day is a good one. Zuckerberg out. And then there's a sort of like beaming of light. I mean, he talks like a android. Okay, enough of that.
Starting point is 00:04:43 You get the point, right? There's been this drumbeat from these AI CEOs that you cannot fathom the impact of the disruption that's coming and it's coming soon. Altman was almost in tears. He's looking at what he's created. He's like Oppenheimer looking at the Trinity test and quoting the Bargadilla. He was like, I cannot believe what I've done. Then, just a few weeks ago, OpenAI released GPT5. And if you haven't been paying attention, this was a key pivot point in our narrative about this technology.
Starting point is 00:05:19 Now, just to put this in the context, it had been over two years since OpenAI's last major model release, which was GPT4. So expectations for GPT5 were sky high. Altman had been bragging about this model almost immediately. after GPT4's power was first understood, this was going to be the next big leap that got us ever closer to the types of AI impacts of the tech CEOs we're talking about there. But then people actually got their hands on this demo
Starting point is 00:05:48 when it released a few weeks ago on a Friday. And while they weren't exactly dancing in the streets or running towards their Terminator-style bunkers, one of the first reviewers to go live with a take on GPT-5 was a YouTuber named Mr. Who's the boss because, you know, of course that's his name. He's a YouTuber. He had early access to GPD5, so he had a review ready to go.
Starting point is 00:06:12 And if you watched his review, this was the first review I saw after GPD5 came out. Here's what he said. He said, look, there's some things that GPD5 seems better at than its immediate predecessors. He did some vibe coding. He asked it to create a chess game with Pokemon his pieces because, of course, he did. If your name's Mr. Who's the Boss, you're making Pokemon chess. And he thought it produced something better than what GPT, O4 Mini High, had produced. It also produced a better script
Starting point is 00:06:36 for his channel than GPT4O. He sort of did these side-by-side comparisons, but also there's other tasks where the old GPT model, the GPD-40, was more successful than the new one. When he asked GPT-5 to create a YouTube thumbnail, it was worse than what GPT-4-O produced
Starting point is 00:06:52 when he asked it to come up with a birthday party invitation. And I'm not making this up, Jesse, it was a birthday party invitation for a grown man that was Star Wars themed. This was not ironic. This was like, obviously, this is what we would be doing. with AI. GPT4O produced a better one than the new GPT5. Within hours, other users who got their hands on GPT5 expressed more, I would say, pointed disappointment. It's pretty fun reading if you go
Starting point is 00:07:19 onto the R-ChatGPT subreddit as I did in the aftermath of this model coming out. One of the post on there in the aftermath said, GPD5 is the biggest piece of garbage even as a paid user. a pre-scheduled Ask Me Anything AMA. Altman and other Open AI engineers found themselves on the defensive, basically being grilled by users who are like, what is this? Like, there's stuff about it we don't like. It's not clearly better. Gary Marcus, who if you haven't heard this name,
Starting point is 00:07:48 you probably will hear it more often because for the last few years, he's been leading the charge to argue that generative AI in general was not going to ever deliver the claims that these tech CEOs were promising. I think he had a good summary of the overall reaction to GPT-5. Jesse, can we play this? GPT5 is just dropped. What are your thoughts? It's not what people expected or hoped it would be.
Starting point is 00:08:09 I keep telling them that it's not going to be what they thought. Kevin Scott a year ago was going around giving talks showing GPT5 as a humpback whale compared to GPT4, it was some smaller creature. And there ain't no humpback whale there. It's better in a bunch of different ways. Elon Musk's Grock 4 is actually better on Francoise-Chle's Arc AGI 2 task. You know, it's part of the pack. It's not separated from the pack.
Starting point is 00:08:32 and after 32 months of hearing people talk about it, I think it was reasonable to say, hey, we want to see something, you know, genuinely different here, and it's not there. And it's so late, too. Like, I remember after, or the day before Super Bowl, 2024 people saying, hey, it's going to drop tonight. It's going to be so cool after the Super Bowl.
Starting point is 00:08:50 Well, here we are 18 months later. And it's, let's be honest, it's a disappointment. GPT5 in some sense caused a needle scratch on the hyped up tune. The AI industry has been plumbed. So was it possible that we were not about to see half of new jobs automated and AGI empowered AIs taking over most of our lives? Were we actually not just a couple years away from having to negotiate our very existence with super intelligent computers? As GPT5 made us realize these hard truths, more and more people begin to ask a question that just a few months ago would have been dismissed as absurd. what if this is as good as AI is going to get, at least for a while?
Starting point is 00:09:35 Now, I tackled this question. I looked deeply into it in a recent article that I wrote for The New Yorker that came out a couple weeks ago. And today, I want to provide you a fuller version of what I found working on that article. And if we really want to understand what impacts that AI is actually going to have in the near future, A good place to start is to better understand what impacts is actually having right now. This brings us to part two. Wait, I thought AI was about to take my job. The thing is, for three years, we've had a myth perpetuated about what large language models can do.
Starting point is 00:10:14 Effectively, anything and nothing. And this myth has continued to be perpetuated to the point that now people are saying that companies that are laying people off are replacing them with AI, which just isn't true. That's Ed Zittron. He's a technology analyst who hosts the podcast Better Offline. He's also been a bit of a thorn in the side of those most crowing about AI's disruptive impact on the economy. Why do they dislike him? He actually checks the claims they make. And what he finds is often less than flattering.
Starting point is 00:10:45 Now, if you hear Zittron talk like we just did there, it can induce whiplash, right? Because what he's saying about AI's actual impact really can. seem different than, for example, the news coverage that we have been reading. So if we want to ask this question, not what is going to happen with AI. That's where the CEOs were saying these astonishing claims. But if we want to ask a question of what is happening already because of AI, Zitron's note there that basically nothing of note really runs a skew of what a lot of media coverage has been recently, not about the future, but about what's happening now.
Starting point is 00:11:22 I want to read you some actual headlines. Jesse, I grabbed these from the last month, maybe the last six weeks. These are real headlines for major publications just in the past month or so. Here's one. Goodbye, $165,000 tech jobs. Student coders seek work at Chipotle. Here's the subheadline of that article. As companies like Amazon and Microsoft layoff workers and embrace AI coding tools,
Starting point is 00:11:47 computer science graduates say they're struggling to land tech jobs. Here's another headline. AI is wrecking an already fragile job market for college graduates. Here's a third. CEO starts saying the quiet part out loud. AI will wipe out jobs. Final headline I want to read here. AI will replace most humans, but then what?
Starting point is 00:12:08 Right. So when we look at these headlines, you get this sense. Forget like what might happen in the future. Certainly it sounds like right now, already the AI technology we have is severely disrupting. our current economy. People can't get jobs. There's all these layoffs because of AI tools. So I asked Ed Zittron about this. I said, okay, what is your take about this coverage we see, not about the future, but the same that like even right now we're seeing big impact. So here's what Ed had to say. Now journalists who should try, I don't know, even once looking at the actual data they're quoting,
Starting point is 00:12:44 are conflating young people not being able to find work with AI and AI being involved. And this is partly, because the CEOs of these companies, when they're letting people off, will say, we're making adjustments for efficiency, and we're orienting our company around the power of AI. And people are conflating that with the idea that someone is being replaced by AI, despite the fact that AI is not replacing a damn person. No numbers, no data, just vibes, baby. Where we're going, we don't need the truth. We just have vibes. I think vibe is an interesting word there, because if you look closer at these art, articles, you see a lot of what Ed is saying actually showing up. This idea that different things that are not really related are being conflated. You'll take a job lost number that might be true.
Starting point is 00:13:33 You'll take the fact that AI has tools that are relevant to that industry, which is true. You put these two things together and you have the natural consequence of the reader coming away with the impression, the layoffs are because of the AI tools. So as I look closer at these articles, I was seeing again and again examples of what Zittron was warning. Yes, it seems at first glance that the economy is already feeling large impacts from AI tools, but you don't actually find that evidence in these articles themselves. Remember that first headline I read here, for example. Goodbye, $165,000 tech jobs. Student coders seek work at Chipotle. Look at the subhead again. As companies like Amazon and Microsoft layoff workers and embrace AI coding tools, computer science graduates say they're struggling to land tech jobs. The AI
Starting point is 00:14:19 coding tool thing is a non sequitur. If you read this article or know anything about the relationship between computer science jobs and the tech industry, here's the actual facts. Companies like Amazon and Microsoft are laying off workers because they heavily spent in the pandemic and now we're in a tech contraction. And as with every tech contraction that has happened in the history of computer science being a major, computer science degrees, majors go down with the tech industry and go up with it as well. They're tightly coupled.
Starting point is 00:14:55 When big companies stop hiring, the demand for jobs goes down. You get less majors for a while until those companies start hiring. We've seen this again and again in cycles. That's not super interesting news. It happens again and again. It happened when I was in college. It happened during the Great Recession. It's happening now because there's a lot of cutbacks after the pandemic overspinned and overhired.
Starting point is 00:15:14 That's just basic economics. unrelated to that, people are embracing AI coding tools. In software development in particular, not vibe coding, the sort of smaller stuff, but like professional software developers that would work at a Microsoft or Amazon, you know, you have these integrated coding tools that do make things easier in certain ways, in certain key ways. It allows you to get, for example, boilerplate code, you get templates, you can figure out how do I call this library without having to go look it up on the internet.
Starting point is 00:15:43 Some people are doing generation of small bits of code for parts of their program. We have recent data, including from the meter study that's showing like, yeah, but that actually spend more time debugging that than actually writing it. But there's some useful tools there. People are like an AI coding tools. But if you put that in the middle of the sentence that's talking about the job market is down in tech and so four majors are down, if you put the fact that people are using AI coding tools in the middle of the sentence, that has to be your intention is to try to make people believe that the jobs are going away because they're being replaced by. AI. It's not related. They hired like crazy in 2020 and 2020, 2021 and 2022. The bill is due. They're laying off widely across all of their division. So there is something to that. So if you're thinking this, how could this be true that everything's not about to change? Because things already are changing. Look a little bit closer at those articles. There's a lot of correlation going on that almost seems purposefully contrived to try to create a vibe when you don't actually have data. to back that up. So I asked, you know, Zitron, like, what is going on?
Starting point is 00:16:49 And he pointed at, like, yeah, sure, there's applications, but not on a scale that's really disrupting the economy, just useful stuff. There's things that people are using this for. We've mentioned coders have different uses for it. It has, like, an interactive search type. You know, it's useful if you're looking up information or trying to interrogate your own information. You can do it with narrative interaction as opposed to having to use some sort of more
Starting point is 00:17:08 structured information system. There's certain types of, like, text processing or auto- automation that like these tools have something that really understands text really well can be really useful. You could look at other types of modal models, not really large language models, but image generation, video generation. There's definitely some impacts in those fields. But this is not the economy right now is operating in a drastically different way.
Starting point is 00:17:32 And to make this even worse, as Zittron has emphasized and is reporting, these models are very, very expensive to run because you're redlining GPUs, which is very expensive in terms of compute and electricity, and these are very GPU-hungry processes. So it's unclear exactly how you overcome the cost of how much it runs to, how much it costs to run these models. How do you overcome that to make enough profit to sort of pay back the cap-x expense required to put into them? So there's all sorts of issues going on. Zitron is definitely farther out on the skeptical side, but he's been right about a lot of things. I asked him, like, okay, what's just your summary then of the current situation, not of what's going to happen,
Starting point is 00:18:10 what's happening right now with AI in the economic scene. And this is what, this is what he told me. Despite all the King's horses and all the King's men saying how important and beautiful and crazy these models are and how everything's changing, the actual revenue is smaller than last year's Smartwatch revenue, which is around $34 billion. They're expecting $35, $40 billion max of total revenue in this entire industry, including Open AI. It's ludicrous. the silliest time in tech in history. So where does this put us? So GPT5 wasn't the giant leap forward we were promised.
Starting point is 00:18:50 The claims that we've been seen in certain articles that our existing AI is already reshaping the economy. Those are proving to be hyperbolic. So what's actually going on here? How could we get to this point where there's such a disconnect between the promise and the reality of AI? To understand this, we're going to need to look closer at the story of the technological side of this latest AI.
Starting point is 00:19:15 Boom. Jesse, I'm going to bring out my computer scientist hat, so you've got to be careful here. That, by the way, is an awesome hat. It has both a circuit board and the Starfleet commander John Luke Bacard on it. So I'm trying to imagine what a computer science hat would look like. I'm going to put on my computer science hat because we're going to get into the technological narrative behind how we got such a disconnect between what we thought AI was going to do and what's actually. happening. This brings us to part three, the strange death of the scaling law. Now, there's a reason why there was so much excitement around the potential for AI, right? So this was not one of these sort of bubble situations where people have an irrational exuberance.
Starting point is 00:20:00 There's actually a very rational reason why we had the level of excitement and investment and commitment to AI that actually happened in the past half decade. To help explain what gave us this excitement. I want to play a clip here from the Navidia CEO Jensen Wong talking at a conference. Let's hear this audio, Jesse. AI. The industry is chasing and racing to scale artificial intelligence and artificial intelligence. And the scaling law is a powerful model. It's an empirical law that has been observed and demonstrated by researchers. and industry over several generations. And the scaling laws says that the more data you have,
Starting point is 00:20:49 the training data that you have, the larger model that you have, and the more compute that you apply to it, therefore, the more effective or the more capable your model will become. All right. So what Wong is talking about there is incredibly important. He's using this technical term the scaling law. And what he's referring to there is a series of equations that came out in a paper that was published in 2020. Jared Kaplan of OpenAI was a lead author.
Starting point is 00:21:18 Darryo Amadee was a co-author. Both of them are now at Anthropic. And here's what they did in that original paper. They took an existing language model. So it was basically what was OpenAI at the time GPT2 model. And they said, we're going to systematically measure. We'll choose some performance measures. And we're going to systematically measure how well this model does as we make it.
Starting point is 00:21:38 bigger. And I'm using the word bigger here broadly. As Jensen talked about in that video, bigger means as we make the model itself bigger, as we train it on more data, so we make the data set bigger, and as we train it longer, so we make the length of the training bigger as well. What happens when we make it bigger? What happens to performance? Now, this seems like, I don't know, shouldn't it get better? But that was not the conventional wisdom in machine learning at this time. In machine learning, there was this real sense that like the mathematical machine learning people who had all these mathematical theories about how machine learning actually works, said you can't get too big. You got to find a sweet spot.
Starting point is 00:22:15 If you make your model too big, it's going to memorize the stuff you're training it on, right? So you're like, okay, we give you these questions and you're great at them. You must be really smart. But it's so big, it was able to just memorize all the answers. And so when you give it new questions of the real world, it's going to do really bad. It's called overfitting. You don't want that. So the machine learning math types, very scolding.
Starting point is 00:22:33 So don't overfit. You've got to find a sweet spot. big enough that it knows a lot, but not so big that it memorizes. And you're going to get, that's the sweet spot you want. This paper, this Kaplan paper from 2020, they said, well, let's check that with language models. Is that true? And what they found is, oh, my God, as we make all this stuff bigger, the performance goes up. And it doesn't just go up like proportionally or like a nice gentle hill.
Starting point is 00:22:58 It goes up fast. So it follows something known as a power law curve, which is sort of like turning a hockey stick upside down. It starts to go fast. And that was not what machine learning people thought was going to happen. So what internally at OpenAI, after they were doing this research internally, he said, well, let's try this. Let's make a model that's like 12 to 15 times larger than GPT3. I mean, this is it going to be, I mean, two, there's going to be super expensive. And no one's ever really trained a model this big before because everyone's like, go slow because you got to find exactly that point where you're too big.
Starting point is 00:23:31 But this paper implied, like, I don't know, this curve looks like it goes up fast. If we extrapolate this, man, if we keep making these really bigger, the performance could get amazing. Let's try it. And so they made a model that was, you know, 12 to 15 times larger than GPT2. It was a factor 10 larger than the largest existing large language model at the time. They called this GPT3. And it was way better than GPT2. And it fell exactly where that curve predicted.
Starting point is 00:23:57 I said, yeah, if you made it this big, what seems crazy, you're going to get an even crazier jumping abilities. and it looked like that's exactly what happened. This is a huge deal because you have to understand when people in AI were thinking about having something like artificial general intelligence, like AI that could automate our work and do almost everything a human could do, but better. They imagine this would be a super complicated thing. Your system would have 100 different parts and it's going to require 20 different new theoretical breakthroughs. And finally, you would piece this thing together. The brain is complicated. It's going to take a long time to get a computer system that could act like the human.
Starting point is 00:24:32 human brain. And they suddenly had this new route there that was much easier. We could take this one type of AI model, the language model, particularly like transform or pre-trained model, take this one type of model and just make it bigger. And that might get us directly to artificial general intelligence. Maybe we don't have to have 20 more breakthroughs. Maybe we don't need that chip from Terminator 2 that comes back from time. They get it from Terminator 1.
Starting point is 00:24:58 And that's what allows the company to build Skynet earlier. Maybe we don't need that to happen. We just take GPT2 and keep making it bigger. This exact architecture bigger will eventually get so good that it can do everything we want. This was an incredibly exciting thought. Soon after GPT3 validated that scaling law by falling on that curve. This is when Sam Altman wrote his sort of infamous Moore's Law for Everything essay. And when she basically argued, look, we're going to have to have like attacks on the equity of like the four companies that are going to be laughed.
Starting point is 00:25:28 There'll be like four companies that control all the AI that runs all the economy. we're just going to have to tax them, just take a share of their value in common so that we can give like a universal basic income to everyone else because there'll be no work to do. That essay came after GPT3 validated the scaling law. People got excited. Chat GPT came along a little bit later, but that was really just a way of letting people have access to a sort of nerfed and more tamed well-behaved GPT3. The next thing that was exciting for AI researchers was they said, what if we got bigger? GPT3 got great. What if we made something way bigger than GPT3?
Starting point is 00:26:05 What if we got even bigger? Now they're getting the crazy territory. We don't even know how to build a building that could run enough chips to train something that was much bigger than GPT3. Microsoft, who ran the data centers for OpenAI, had to invent custom cooling technology that did not exist because no one had ever run that many high energy, high heat chips at full blast. that many thousands and tens of thousands of them all in one room. They had to invent new technology, but they said, let's see what happens. Let's do it, guys. Let's go many hundreds of billions of parameters, maybe hit a trillion.
Starting point is 00:26:41 Let's go 100,000, 70,000, 100,000 GPUs. Let's see what happens. And it got way better. And it landed, it seemed to land exactly where the curve, this curve that's getting steeper. It got even better. And in a big leap. And you have to understand, the scaling law leaps were massive. They were general improvements in not just what models could do before, but it gave them capabilities that didn't exist before.
Starting point is 00:27:08 GPT3 mastered language in a way that the earlier language models had not. GPT4 brought with it capabilities that no one even thought a language model could do. It brought with it capabilities unrelated to language. It seemed to be able, it could write code really well. It could do logic. It could do math. So it really emphasized and reinforced this idea. this one type of magic model, this language model, keep making this thing bigger.
Starting point is 00:27:33 Yeah, it's expensive, but it's worth it. Keep making this thing's bigger. It can learn to do everything. No more breakthroughs are needed except for in figuring out how to get more chips. That's the only breakthrough we need. Just keep making these things bigger. I want to play a quick clip here from a researcher for Microsoft research right after GPT4 came out. They had early access to it. He gave this famous talk at MIT. I wrote about this last year in the New Yorker. This is him talking about his. encounter, his first encounter with GPT4. I just want to give you a sense of the excitement that was out there.
Starting point is 00:28:03 Not a 10% increase on this benchmark, you know, 20% on that benchmark. It's something else, okay? What I want to try to convince you of is that there is some intelligence in this system. That I think it's time that we call it, you know, an intelligent, you know, system. And we're going to discuss it, you know, what do I mean by intelligence? And, you know, at the end of the day, at the end of the presentation, you will see it's a judgment call. It's not a clean cut, whether this is, you know, a new
Starting point is 00:28:28 type of intelligence, but this is what I will try to argue nonetheless. This was the excitement we were feeling, and I think at the time it was very justified excitement. It worked once, it worked twice, and what it really looked like to move so far up the scaling graph, the scaling law that Jensen Huang talked about, was really impressive. GPT-4 did stuff that we never thought a language model could do. So think about what you would be thinking now, if you were one of these AI companies. Let's build even bigger data centers, and they started working on them. You know, Elon Musk got in the game.
Starting point is 00:29:00 He built this data center called Colossus that had something like 200,000 H100 GPUs in it. They're like, let's just get all the money we can get and keep building these things bigger. GPT-5. Man, imagine that. If GPT-4 was blowing us away so much compared to 3, what is 5 going to be like? And by 6 or 7, these things are going to be able to do everything. If anything, we have to be worried about their power. So this is why everyone got so excited.
Starting point is 00:29:26 So it wasn't coming out of nowhere. This wasn't like the crypto boom, where the excitement, not so much about the cryptocurrency itself, but about the idea of Web3, a sort of blockchain-organized world of information and software, rested on purely theoretical ideas. Maybe this could be cool. With this AI scaling, what was actually happening was really, really impressive. It was not an irrational claim to say, why wouldn't this keep following that scaling law? That's what Jensen Huang was talking about
Starting point is 00:29:57 in that clip from earlier, cool stuff was happening, and if it kept happening, that was a path to the type of things that we heard those tech CEOs talking about in the clips at the beginning of this episode. But then here's the thing. Almost all at once,
Starting point is 00:30:16 the scaling strategy stopped working. This is the piece of the narrative that we lost the thread of. The tech CEOs knew it and pretended like it wasn't happening, but the rest of us didn't really know so clearly this was going on. But I looked into this. Let me give you a couple of data points here. So according to the publication The Information, by the spring of 2024, so remember, GPT-4 came out in the spring of 2023, by the spring of 2024, Altman was telling employees that their next major model, which was GPT-5, though they were codenaming it Orion back then, was going to be significantly better than GPT-4.
Starting point is 00:30:52 They had been training it for months and months and months. And they were like, this is going to be awesome. But by the fall of 2024, it became clear that the results of making this much bigger model were disappointing. I'm going to quote here: while Orion's performance ended up exceeding that of prior models, the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4. So, like, this is where the wah-wah music starts to play in the AI story.
Starting point is 00:31:23 When they tried to do the same trick for a third time, they didn't get the same applause. The model was, like, somewhat better. They trained this thing for months with huge data centers, and it only got somewhat better. OpenAI was not the only company to have these problems. Meta, you know, if you were following them closely in the AI industry, they were going to build this massive model that was going to be the next leap
Starting point is 00:31:46 that was going to get them back ahead again in AI. They called it Behemoth because it was that big. Well, earlier this year, they announced they were going to delay releasing Behemoth, because when they finished training it, guess what? It wasn't performing much better than the models they had before. Just throwing a lot more size and compute at it wasn't having the same effects. xAI had the same problem as well.
Starting point is 00:32:08 This was Elon Musk's strategy with Grok 3. He said, we're going to build this. I mentioned it before. This Colossus supercomputer, data center, whatever you want to call it, computing cluster. I think they built it in Tennessee. It had 200,000 H100 GPUs. These are, like, state-of-the-art chips you can use to train on.
Starting point is 00:32:25 They trained Grok 3. The training used 100,000 of these chips. We hadn't seen compute like this go into a model. He's like, we'll just throw money at it. And this thing is going to be so big, and we're going to train it so hard, it's going to blow away the competitors. We're just going to sort of do a bank-account-measuring contest and come out ahead here. They used, it's hard to get exact numbers,
Starting point is 00:32:44 but my sources said probably somewhere between 5 to 10x the amount of computing resources that went into GPT-4. So, like, we are going to go for it. And guess what? Grok 3 was okay, but it was sort of like the other models around at the time. No leap like we saw with GPT-3. No leap like we saw with GPT-4. People in the industry were noticing this. We just weren't noticing them talking about it.
Starting point is 00:33:09 Here's Ilya Sutskever, a co-founder of OpenAI, who left and has some other thoughts I could get into. But he was on this. Here's what he said last fall. The 2010s were the age of scaling. Now we're back in the age of wonder and discovery. Once again, everyone is looking for the next thing. A TechCrunch article from last fall summarized the general mood as follows: everyone now seems to be admitting you can't just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god.
Starting point is 00:33:39 So this happened, but the general public, and I think a lot of the technology media, wasn't really clued in to how important this was. All of that excitement about AI controlling the world and automating everything, and us needing to tax the three companies that are going to be left so that we don't starve, that all came from the assumption that these models would keep moving up that scaling curve as we made these things bigger. But that stopped working. And everyone tried to make it work, and no one could get it to work. And this was a big problem.
Starting point is 00:34:13 And by last summer, and certainly by the fall of 2024, we knew this. So what did the industry do? Well, they weren't going to say, look, guys, we oversold this. We put up a lot of money to build the Colossus, you know, cluster and spent all this money on these giant data centers, but this isn't making much better models. We need a different plan here.
Starting point is 00:34:32 It's too late for that. So they said, we've got to find something to replace this technique we were doing of just making everything bigger, something that can give us some sort of improvement, because we need momentum. We have all these investors writing checks, and we need new investors to keep writing checks so the old investors don't get upset. So we need something to keep the momentum going here. So they found a new answer.
Starting point is 00:34:52 There was a shift to a new storyline about how AI was going to continue to improve that was different from the original scaling storyline. I'm going to return to Nvidia's Jensen Huang here to explain. He's going to explain the strategy they came up with after scaling failed. So let's play this clip, Jesse. But there are, in fact, two other scaling laws that have now emerged. And it's somewhat intuitive. The second scaling law is the post-training scaling law. The post-training scaling law
Starting point is 00:35:25 uses technology, techniques like reinforcement learning, human feedback. Basically, the AI produces and generates answers based on a human query. The human then, of course, gives a feedback. It's much more complicated than that, but that reinforcement learning system with a fair number of very high-quality prompts, causes the AI to refine its skills.
Starting point is 00:35:52 It could fine-tune its skills for particular domains. It could be better at solving math problems, better at reasoning, and so on and so forth. And so it's essentially like having a mentor or having a coach give you feedback after you're done going to school. And so you take a test, you get feedback, you improve yourself. We also have reinforcement learning from AI feedback, and we have synthetic data generation. These techniques are rather akin to, if you will, self-practice. You know the answer to a particular problem,
Starting point is 00:36:29 and you continue to try it until you get it right. And so an AI could be presented with a very complicated and a difficult problem that is verifiable functionally. And it has an answer that we understand, maybe proving a theorem, maybe solving a geometry problem. And so these problems
Starting point is 00:36:48 would cause the AI to produce answers, and using reinforcement learning, it would learn how to improve itself. That's called post-training. Post-training requires a... I'm going to cut it off there because he's geeking out a little bit, Jesse.
Starting point is 00:37:05 You've got to see this, by the way. We don't have video of him in the clip we're playing, but can you see the video in front of you? I love his jacket. His jacket's awesome. Yeah. He is wearing, I kid you not, a mix between lizard skin and rhinestones. It's not working, by the way.
Starting point is 00:37:22 He's one of us. He's a geek. We know it. I think this is their way of saying, we don't want him to seem like Mark Zuckerberg. He's like, well, what if, hear me out, what if I wear a diamond-studded lizard-skin jacket? Then people will think I'm cool. All right. So what was he geeking out about there?
Starting point is 00:37:37 And I think, by the way, it's intentional that he's being so technical, because it makes everyone else go, yeah, they know what they're talking about. That seems about right, because it's complicated. What he's talking about is what they replaced the original scaling with. So I'm going to give you two terms here. I'm going to be less technical than Jensen. But let me just give you a couple of terms here.
Starting point is 00:37:58 The thing that we were doing before, which gave us GPT-3 and GPT-4 and then failed to give us those next great big-leap models, we can call pre-training-based scaling. The type of training they were doing there is called pre-training, and that's what they were scaling up. That's what the original scaling law says: bigger models, more data, more compute is going to make you better. What Jensen's talking about is post-training. And what you do here is you take a model that has already been pre-trained.
Starting point is 00:38:26 You've already done the pre-training. And pre-training is unsupervised. It's where you take a ton of real text and chop it off in random places, and you ask the model what word comes right after the chop, and it guesses. And you know the real word, because you chopped off the writing. And then you adjust its weights, and you do this with, like, all the text on the internet,
Starting point is 00:38:44 and the models get really smart. That's pre-training. Post-training takes a model that you've already done that to. So basically you take, like, GPT-4, a model that you've already pre-trained, and you say, we're now going to tune you to do certain things better.
Starting point is 00:38:58 Pre-training was, like, just get smart. We'll give you a bunch of stuff; just learn. Post-training is, now we're going to take particular things we want you to be better at, and we're going to try to make you better at them. And we're going to use a machine learning technique that's new within the world of language models, reinforcement learning,
Starting point is 00:39:13 which is an old technique, but they applied it to the language models. And we're going to use that to sort of go in and mess around with your wiring for specific tasks. We're going to take the smarts you already got during pre-training, and we're going to make you somewhat better
Starting point is 00:39:25 at applying them. And this requires, he was talking about synthetic datasets and this and that, it requires special post-training datasets, which aren't just text like in pre-training; you have a question and the right answer,
Starting point is 00:39:38 and so you can zap it to get it closer to the right answer. There are all sorts of different types of post-training. But this is what they realized they could do: we'll take a model that's already trained, and then we'll try to make it use its smarts better on particular types of applications. It's much more bespoke, custom. Let's come in and soup it up here and soup it up there. There's another type of scaling they did.
Starting point is 00:39:59 Jensen goes on to talk about it, but I kind of got tired of hearing him. He goes on in that same talk and describes the third scaling law. There are different words for this. You could call it test-time compute or inference-time compute. But basically you can think of it as other clever things we can do with the models we already trained. So, for example, you could tell the model to spend more time thinking on questions that are harder. Same model, but it spends more time thinking when the questions are harder, and you get a better answer. Or you could say, here's what we're going to do. We'll ask the model a bunch of times to answer the question, and then we'll look at the answers and be like, which one comes up more often?
Starting point is 00:40:37 Like, that's going to give you better performance, right? Or we'll have one model look at your question, and that model's whole job is just to say which of these different models we've tuned up in different ways would be best for answering this question, and then we'll send it to that model, and we'll get a better answer. So there were also all of these other things that were about not necessarily changing what the models learned, but just using them in different ways to try to squeeze more performance out of them. So we have these two different scaling laws, but in my New Yorker article, I call that whole thing post-training. So it's taking a model you already made smart
Starting point is 00:41:12 by pre-training and trying to make it use those smarts better for particular things. Here's a metaphor I had in my New Yorker article to help explain this. I'm just going to read it verbatim. A useful metaphor here is a car. Pre-training can be said to produce the vehicle. Post-training soups it up. In the scaling-law paper, Kaplan and his co-authors predicted that as you expand the pre-training process, you increase the power of the cars you produce. If GPT-3 was a sedan, GPT-4 was a sports car. But once this progression faltered, the industry turned its attention to helping the cars they had already built to perform better.
Starting point is 00:41:49 Post-training techniques turned engineers into mechanics. And this is what has been going on since roughly the late fall of 2024. All of those really confusingly named models that OpenAI has put out: here's o1, here's o3, here's o3-mini-high, here's o4-mini, o4-mini-high, Pokémon, Bulbasaur. I don't know the names, Jesse. There's a lot of them. These are all just different bespoke combinations of post-training techniques applied to models. Things got incremental. Things got focused. And they began to turn their attention a lot more toward benchmarks. Remember that clip from the Microsoft researcher talking about GPT-4?
Starting point is 00:42:29 And I'm going to tell you what he said. Remember what he said. He said, man, GPT-4 is amazing. It's not just that we're 20% better on some benchmark. It's just clearly better at everything. Well, guess what they started saying about the models after the scaling law failed. Hey, this thing's 20% better on some benchmark. That became the way they were bragging about these models. But you can see why this is problematic. If you have these very specific tests, and you have a way to take a model and try to
Starting point is 00:42:55 get it better at specific tasks, well, then you can just start making these models better at whatever the tests were. Oh, look, we have a test of step-by-step reasoning, and it does really well. And now ours does even better. Yeah, because we post-trained this version of GPT-4 to do these types of questions really well. We left the world of, we trained it twice as long, and when we came back, the baby was doing quantum physics. We've left that world, and now we're very insistently post-training very nuanced, specific little improvements. It was incremental, and it was more focused.
Starting point is 00:43:33 Now, you could add cool features this way. This is why we had some jumps forward in computer programming, because that's very well suited to post-training. It's easy to generate a question and know the right answer. Like, does this code compile and work? And so you could get a lot out of the existing smarts by post-training on computer programming. So we got some good jumps there. Certain types of math problems have right answers, too. So we could have good synthetic datasets.
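The reason code suits post-training so well, as just described, is that a candidate answer can be checked automatically, which is exactly what a training signal needs. A minimal sketch, where the candidate programs are hard-coded stand-ins for model outputs:

```python
# Sketch of a verifiable reward for code: run the candidate program
# against a known test case and score it. The candidate sources below
# are hard-coded stand-ins for model outputs, used for illustration.

def reward(candidate_src, test_input, expected):
    """Return 1 if the candidate code runs and gives the right answer."""
    namespace = {}
    try:
        exec(candidate_src, namespace)           # does it "compile"?
        result = namespace["solve"](test_input)  # does it work?
    except Exception:
        return 0
    return 1 if result == expected else 0

good = "def solve(x):\n    return x * 2"
bad = "def solve(x):\n    return x + 2"

assert reward(good, 21, 42) == 1
assert reward(bad, 21, 42) == 0
```

Because this check needs no human in the loop, you can manufacture post-training examples at scale, which is much harder for tasks, like writing a good essay, whose answers can't be verified mechanically.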
Starting point is 00:43:56 And we've got some results there. Some of these compute-time things, like, let's spin longer on certain responses: yes, if you spend more compute, you can get better answers, though it's often too expensive to be practical. But these things could make models do better on benchmarks, and some of it was useful in practice. Things like deep research require these sorts of test-time techniques, where an LLM will break down the task into multiple steps. That's its answer.
Starting point is 00:44:23 And then another LLM takes each of those steps and does it. And then a third LLM looks at the answers and puts them together. So using the LLMs more dynamically could get some cool stuff as well. But we had left the world. We were off that path that was supposed to lead us to AGI. We left the scaling law path, right, which said we got a sedan, then we got a sports car, and then we were going to get an F1 car.
Starting point is 00:44:45 We left that path. We had a bunch of Camrys, and we just started saying, how can we soup these things up? And people are like, hey, look what I did. I put a new system on the exhaust, and we got, you know, 20 more horsepower out of it. That's the world we entered. That is why it's been so confusing to understand the models starting in 2025. That is why, if you look at the announcement page for GPT-5 on OpenAI's site, I counted this, it has 26 different bar charts and line graphs showing these inscrutably named benchmarks with things moving.
Starting point is 00:45:15 You didn't need that to know that GPT-4 was amazing. It could write code, and GPT-3 couldn't. You could give it math problems, and it did well. People just started giving GPT-4 problems from standardized tests, and it was just doing really well on them. Now we have a bar chart showing a 4% increase on some benchmark metric that probably one of the AI companies themselves came up with. So the scaling laws stopped. We weren't going to get the AGI, but we pretended like this post-training stuff was just as exciting. And a lot of us were going along with it because, I don't know.
Starting point is 00:45:49 We didn't know what changed. This tech terminology is difficult. That car metaphor took me and my editor a while to figure out. It's not easy to explain these things. Until GPT-5 came along. And then people stepped back and said, okay, we were going along with this for a while. I didn't know why these were getting better,
Starting point is 00:46:08 but they had weird names, and we thought you were maybe working on particular features. But then you gave us the next big number. You had promised for years that this thing was going to feel to us like GPT-4 felt, and like GPT-3 felt before. And when it didn't, that's when people said,
Starting point is 00:46:20 I don't care anymore that you get a 16% increase on some benchmark whose name I can't understand. I'm beginning to suspect that the emperor is not wearing nearly as many clothes as I once thought he was. So that, I mean, I know this gets technical, Jesse, but that is what started happening here. We shifted from pre-training, which was amazing. And then when it stopped being amazing, we replaced it with stuff.
Starting point is 00:46:46 It wasn't so exciting. I like the car metaphor. I know. I've been getting a lot of good feedback on the car metaphor. I have to give full credit to my editor. He said, you need a good metaphor here. And I walked away, came back, and said, I think
Starting point is 00:47:02 car works. And then, so, you know, that's why editors are good. But yeah, some computer science here: it should be clear, the post in the term post-training indicates that this is a separate phase of the training cycle of these transformer-based language models. So it's good sometimes to have a non-computer scientist check you. All right, so what is going to happen now? Now that we know what actually happened, what should we expect for the future? This brings us to part four: what the future is more likely to actually hold. I want to read here from the concluding section of my New Yorker article. If this more moderate view of AI is right, then in the next few years,
Starting point is 00:47:41 AI tools will make steady but gradual advances. Many people will use AI on a regular but limited basis, whether to look up information or to speed along certain annoying tasks, such as summarizing a report or writing the rough draft of an event agenda. Certain fields, like programming and academia, will change dramatically. A minority of professions, such as voice acting and social media copywriting, might essentially disappear, but AI may not massively disrupt the job market, and more hyperbolic claims, like superintelligence, may come to seem unserious. This is what I think is more likely.
Starting point is 00:48:16 Now, I went on to actually quote Ed Zitron, because he argues AI hype might actually have introduced some new perils. There are some pretty sobering financial numbers here. According to Zitron's analysis, about 35% of the U.S. stock market's value, so we're talking a large share of your retirement portfolio, is currently tied up in the so-called Magnificent Seven technology companies. And according to Zitron's analysis, those firms spent around $560 billion on AI-related capital expenditures in the past 18 months, while their AI revenues during this period were only around, as he mentioned in the clip before, $35 billion. When you look at these numbers, you feel insane, Zitron told me when I interviewed him. So it's possible we got some cool uses for the AI we have now. There will be some more cool uses to come out. It is a powerful technology.
Starting point is 00:49:09 The product-market cycle takes a while. I think we can get a lot more customized tools. So a lot more people are likely to find a customized tool, built on this type of language model technology, that is really useful to them and makes their life really cool. But it's not giving us 20% unemployment. It is not wiping out the college-educated entry-level worker, like Amodei said. Ideas like superintelligence are completely unserious on our current technological trajectory. So, in other words, yes, the AI we have now may be as good as it's going to get, at least for a while. Does this mean we can permanently stop thinking about AI?
Starting point is 00:49:44 No. Some of the people I talked to for this article, like Gary Marcus, had correctly said from the beginning: generative AI is going to hit limits. You can't just keep scaling. And he was right. But if you talk to him thinking you're going to get a lot of relief, like, I can now watch Terminator 2 with impunity and smoke a cigar and kick my feet up and not worry about that, and not bother doing Sarah Connor's jail-cell pull-ups to get ready for the robot apocalypse, if you think he's going to give you that sort of certainty, think again. Because if you talk to Gary, he says,
Starting point is 00:50:21 Like, what we have now is kind of what they're good at. But it's like, I don't know that it's going to be that much longer until we do get to something like artificial general intelligence. It's going to take a lot more other technologies. He calls it neurosymbolic AI. But he's like, yeah, there's some new breakthroughs we need. It's a more complicated way of doing it. But he's like, yeah, 2030s. So I guess we get some relief.
Starting point is 00:50:42 We're not going to lose our jobs next year. But it doesn't mean that we never have to worry about AI again. I could, however, do a whole other episode about my computer scientist's thoughts on the likely trajectories toward much more powerful AI. It's going to be more gradual. It's going to be more fragmented. It's going to be less about, and this is what was scary about the language-model scaling story, this one type of technology where we just turn one knob and eventually, whoa, this one thing can do everything. It's not going to be that.
Starting point is 00:51:12 Lots of different systems. This system got good at this, and it took a lot of bespoke work. This one got good at that, and it took a lot of bespoke work, but now we've mastered that. It's going to be more of this gradualism of, you know, now that I think about it, there are so many things that humans used to do that we each have a different system for that can do really well. But that's going to be a much slower-moving disruption than we feared from the scaling of language models. Because, again, that was going to be, in a year or two, this technology, this one right here, just bigger, is going to do all your jobs.
Starting point is 00:51:43 It's going to be more gradual and fragmented than that. But we can't stop thinking about the impacts of AI. We have to use this reprieve, I think, well. We have to think about it from a regulatory perspective, from an economic perspective, and from an ethical perspective. The field of digital ethics is new. I direct the Computer Science, Ethics, and Society major at Georgetown. That's the first integrated computer science ethics major in the country. It's only a couple of years old.
Starting point is 00:52:09 We need a lot more of those, and we need to keep developing these ideas. But at least now we have some time to get there, though maybe not as much time as we might have hoped. So this is where we are. AI was exciting. We were not being bamboozled. The venture capitalists and the tech CEOs weren't Bond villains at the beginning of this story.
Starting point is 00:52:30 It really was that exciting, what we were seeing up through GPT-4. It really was a bummer when the pre-training scaling stopped making those same leaps. Where we get into some behavior that I don't love is what happened next, which was the companies started waving their hands really quickly.
Starting point is 00:52:47 Look over here, post-training is going to be just as cool, even though what people could actually see was bar charts barely moving. Like, what is actually going on here? They waved their hands really wildly and hoped that we wouldn't notice. A lot of people who were covering technology went along with that. And now the bill has come due: oh, this stopped working back in the summer of 2024, didn't it?
Starting point is 00:53:06 And this new thing is just you polishing up the Camry you already have. It's not the Ferrari that you promised. And I want you to make the Camry better, because, like, it's really useful for going to get groceries. I'm mixing metaphors here, Jesse. But the Camry is not going to take over half of new knowledge work jobs. That metaphor got kind of mixed up a little bit, but we don't have to worry about that. So, anyways, there we go.
Starting point is 00:53:27 You can read, I wrote about this for my newsletter. So if you want a sort of summary of my New Yorker article: calnewport.com, subscribe to it. The actual New Yorker article is called What If This Is as Good as AI Is Going to Get?, or What If AI Doesn't Get Much Better Than This? There are so many different ways of saying that. Just search my name and New Yorker, and you'll find the article. And we'll keep up on this.
Starting point is 00:53:45 But that is my epic, my epic tale of what's happening with AI. Did all that make sense, Jesse? Yeah. Did you see in a recent Andrew Ross Sorkin DealBook email that Elon and Zuckerberg tried to buy OpenAI for $97 billion back in February? Really? I just read that today, actually. You should have.
Starting point is 00:54:07 But now it's worth $500 billion at their recent valuation. Yeah, but what's it really worth? I know. I know. That $97 billion might be the
Starting point is 00:54:29 right price a year from now. That's fascinating. That could be interesting. What's interesting, because I follow the industry, is that they turned on a dime. Not only did the coverage turn on a dime. Like, my article came out, and there were a couple of other articles just like it, right after GPT-5, and then the floodgates opened. Every publication was asking, what's going on with this technology? Has it stalled? And then the CEOs all turned on a dime in the aftermath of my article and similar articles. And one other thing. The other big thing to happen is that Fortune magazine resurfaced this MIT report, which I think is important, because it had been out for a month. They went and studied 300, like, actual companies trying to use generative AI to make their companies better. And they found that 95% of the cases were failures, and they just turned it off. It didn't help. And Fortune mentioned that report again in an article, and it went viral. And that was a big deal.
Starting point is 00:55:19 They're like, wait a second. Yeah, yeah. Is anyone? It was like my section with Ed Zitron. People were asking because they were seeing these articles. There's a huge contraction. People are being laid off. AI, AI, AI, AI.
Starting point is 00:55:29 Look at these job numbers over here. AI, AI. AI. Look at this person here. Their CS jobs are down. AI, AI, AI. People were just like putting these things together. Like, oh my God, AI is, I think AI.
Starting point is 00:55:39 I'm not quite sure. But I think AI is murdering computer science majors. I'm trying to capture the vibe. Like, that was the vibe. And then that article, that research paper, got resurfaced, and people were like, wait, whoa, whoa, who is actually, like, replacing people with this technology? Everyone's looking around. Like, it wasn't us over here doing it; maybe it was them over there doing it, right?
Starting point is 00:56:01 And they couldn't actually find anyone who was doing this. But the reason why I said the timing is important is that paper was out a month ago. But a month ago, if you were out there saying what I said, people would think you were certifiable. That paper got no coverage. But after GPT-5's failure, suddenly people were like, maybe we should pay attention to it. So that happened. There's been a stock sell-off. And now the CEOs have all come out last week and been like, yeah, it's a bubble.
Starting point is 00:56:30 Sam Altman, man, he talks through his teeth. The day that GPT-5 came out, he said, this is a key step towards AGI. And then all this stuff happened. Five days later, he says, and I'm paraphrasing, but basically: AGI, what does that word even mean? We shouldn't be talking about AGI. Who's talking about AGI? Sam, you were, you were four days ago.
Starting point is 00:56:50 You were saying like you were on your way there because of GPT-5. And then after all of this, he came out and he's like, it's not really about AGI. It's really about the fun we have along the way. It's not about, can our technology make you a lot of money and be worth the money you're investing? Is it a viable business? That's not some dirty question. But the right question, supposedly, is: we had a lot of fun, right? Talking about it, we had fun on the journey.
Starting point is 00:57:14 That's what matters. So anyways, there's this whole scramble. All the coverage, everything turned on a dime. You've heard it. Because, you know, in my own sort of like neurodiverse computer science way, I was never swayed by hype. I'm just like, what are the systems? What's the trajectory? What do they do?
Starting point is 00:57:31 What's the architecture going to be to do this? So I was never that impressed with any of that talk. And I would say this in a lot of places. And people were just like, you are crazy. Like, and it's just like, I don't know, it reminds me of like when I was not using social media originally. And people were like, you were literally the devil.
Starting point is 00:57:48 And now they're like, who would use social media? Yeah, I've never used social media. I don't even know what you're talking about. It's kind of the same thing. Because again, I was like,
Starting point is 00:57:54 I don't know. I'm not that impressed by like, I don't use Twitter. Like I wasn't hearing the hype. And I'm looking at it. It's like I, the scaling's not working. I'm looking at these products.
Starting point is 00:58:02 This architecture can't lead to these things they're saying. Who's actually replacing jobs with this? And people were like, you must be dumb. You don't see: this, this here automobile is like a horse.
Starting point is 00:58:14 But it's like a horse you don't have to feed hay. Like they were talking to me like, like, you know, I didn't understand the new technology or whatever. And now it's like this. And everyone's like, oh, I knew that all along. I mean, come on. Reinforcement-learning-based fine-tuning techniques are maybe good for guardrails, but they're not going to produce substantive leaps in underlying cognitive capabilities. That's been my experience recently.
Starting point is 00:58:40 When you went to OpenAI, did you meet Sam a couple years ago? No. Okay. No. I haven't met Sam. I'm sure he doesn't know who I am unless he read that New Yorker piece, but I'm sure he's not a fan. There's a lot of people out there. I'm probably not their favorite person.
Starting point is 00:58:55 I'm not, but, you know, there's a whole class of tech critics that are just like, I'm a professional tech critic, and I just think all these people are, like, evil and bad. And I'll just shift what I'm upset at them about to whatever the current idea is within my particular critical discipline: you're guilty of that, right? And I just don't like it. This is it. That's my job.
Starting point is 00:59:19 I don't know. I'm open-minded. I'm not interested in disliking people just for disliking people, or in who's the bad guys and who's the good guys. I just call it like I see it. I'm interested in the actual technologies. Yeah. Like I think this is a really interesting technology. I think the tech CEOs got way out over their skis and were being disingenuous in the things
Starting point is 00:59:36 they're saying about it, and they scared and tricked a lot of people. And that part wasn't good. But it's also not like it's a bust. But I was very vocal against the blockchain movement because, you know, I got my doctorate in the theory of distributed systems group at MIT. I know a lot about distributed systems. I was like, this is stupid. Not the cryptocurrency piece. That's just bubble speculation.
Starting point is 00:59:57 But building distributed systems on top of blockchains, I was like, I can just tell you from a technical perspective, this is a dumb idea. You're building worse versions of products that could be easily built right now, based on just some sort of, like, hazy techno-libertarian promise that this full decentralization will prevent the sort of, like, centralized control that people don't actually care about. Like, we know how to build distributed systems. You can just spin up a MySQL instance in, like, an Amazon cloud somewhere, and it costs you no money, and it's never going to go down, and it's fine. And people are okay with the possibility that Google is, like, evilly in the middle, like, tricking them, because they know they're not,
Starting point is 01:00:32 and will just give us a service that works anyways. But I got yelled at a lot for that as well. But there's a lot in Silicon Valley I like. I am a tech guy. I'm a CS guy. I think language model technology is really cool. I just think it's more narrow than they're letting on. I'm really interested in what comes next with AI. The types of more complicated models, multi-mode type models, not multimodal in the language model sense, but multi-mode in the sense that you have different types of system components that are architecturally unique that work together.
Starting point is 01:01:00 I think there's cool breakthroughs that are coming, and this might have helped get people investing in it. And again, we'll see. So I don't think, like, this industry is bad or all these people are bad. But I do think what they did by trying to keep the hype alive after the bad stuff happened is going to have some negative reverberations. So we'll see. All right, we have a shorter version of the rest of the show today because I know you've been with me for a lot here, but I just wanted to sort of get all this off my chest. So we've got a few questions and a call, and then I have something I want to react to later. First, I want to do what you really tuned into this show for, which was to hear from one of our sponsors.
Starting point is 01:01:40 I just want to talk today about Cozy Earth. As listeners know, I'm a huge fan of their bamboo sheets. They're the most comfortable sheets I've ever owned. The fabric is soft in a way that's just better than other sheets I have used. They also temperature-regulate, so you sleep cooler in them. I sleep hot. So having the Cozy Earth sheets is a big deal. But here's the thing.
Starting point is 01:02:01 It's not just about sheets. Cozy Earth sells other things. Their everywhere pants. I have a pair of their everywhere pants. They're clean cut. They stretch. They have that same comfort you get from their sheets, but with you all the time. I love my pair of everywhere pants.
Starting point is 01:02:15 This shirt I am wearing right now, Jesse, this is true. If you're watching rather than listening, this shirt I am wearing right now, it's a Cozy Earth shirt. It's very comfortable. It's the same material. It's cooling. I really like the shirt. People like it as well. This is a true story, Jesse.
Starting point is 01:02:28 As I was walking to the HQ, there was a convertible driving by. And I don't know, they were here from Sweden. It was a Swedish volleyball team. You know, and they're wearing bikinis for some reason. And they were sort of, like, bouncing a beach ball in the back of the convertible, just sort of driving. And they had sunglasses on. You know, having a good time here.
Starting point is 01:02:48 There's the Swedish volleyball team. And they saw me and my shirt. And they were like, what? You know, doing, like, the exaggerated double take. And the driver turned around. Like, that is a nice shirt. So it's cool. Tragically, because of that, they crashed into a cement mixer,
Starting point is 01:03:05 and so the front row, they got decapitated, and there's some pretty serious compound fractures in the back row. But the key point is, they thought this shirt looked good. They thought it looked good. Anyways, I do love it. It's very comfortable. The pants, very comfortable. And you have to get the sheets. We have like four sets now. It's like we can't help ourselves. All right, so here's the thing. Cozy Earth provides you
Starting point is 01:03:26 comfort that shows up day in and day out. If you're on the fence: 100-night sleep trial, 10-year warranty? Come on. You'll love the sheets, and they have all the guarantees. So go to cozyearth.com slash deep
Starting point is 01:03:42 to get 40% off the softest bedding, bath, and apparel. And if you get a post-purchase survey, tell them that you heard about Cozy Earth right here. Built for real life, made to keep up with yours: Cozy Earth.
Starting point is 01:03:57 I also want to talk about our friends at Grammarly. That's one of the oldest and dearest sponsors we've had on the show. Grammarly has been with us since almost the beginning of this podcast. For good reason: from emails to reports and project proposals, it's more challenging than ever to meet the demands of today's competing priorities without some help. Grammarly is the essential AI communication assistant that boosts productivity so you can get more of what you need done faster, no matter what or where you're writing.
Starting point is 01:04:23 This is important, right? Unlike the grand claims that AI is going to automate the whole economy, where AI is actually having an impact right now is where you build bespoke tools that focus on things at the intersection of what matters to us and what language-model-based AI is good at. Working with, understanding, and producing text is what these models are exceptional at, and Grammarly has integrated this power, this focused application of AI, very well into their product. They have things like tone detection.
Starting point is 01:04:55 You can ask about the tone of what you just wrote before you email it to your boss, or ask it, hey, can you rewrite this in a better tone? Those of us who write for a living take for granted the subtlety in, like, does this sound too official or too friendly? And you worry about this in business. Now you have an AI-powered assistant that can help you with that right there. Maybe you want a new way of stating something, or, like, hey, can you give me a couple variations of this title? Right there where you're writing, Grammarly helps you out.
Starting point is 01:05:22 93% of professionals reported that Grammarly helps them get more work done. The other 7%, I assume, work for a company that puts, like, folksy sayings on samplers. So they need that grammar because, you know, it's like, I ain't done less in my house, or whatever. They don't need Grammarly because you need the grammar to be bad. But everyone else, 93% of workers, do better with it. All right. I made the last part up about the 7%.
Starting point is 01:05:47 But the 93% is true, and it's pretty impressive. So let Grammarly take the busywork off your plate so you can focus on high-impact work. Download Grammarly for free at grammarly.com slash podcast. That's grammarly.com slash podcast. I'm kind of hoping this isn't one of those weeks
Starting point is 01:06:03 where the sponsors are like carefully listening to the ad entries. We're going to get the next cozy earth script notes and there'll be a highlighted
Starting point is 01:06:11 section that says try to avoid implying that our product led to the decapitation of several visitors from Scandinavia. Just a thought. All right.
Starting point is 01:06:23 Let's get to some questions. All right. First questions from Sam. I've been a software engineer for the past four years. I'm concerned that AI will leave me unemployed in the next 10 years. I'm considering doing an online master's in computer science. Should I stay put, pursue this master's, or potentially pursue another income stream?
Starting point is 01:06:43 There's two separate things going on in this question. One is just computer science related: I have a computer-science-related career path; if I want to do that, what's the right way in? Master's, no master's, maybe beyond a master's. Like, what's the right way into that career stream? That's a very good question. So for the technical question, I have some answers for you. The other part of the question is saying, I'm concerned AI will leave me unemployed in the next 10
Starting point is 01:07:08 years. I mean, look, I can't predict 10 years from now. There could be new developments, but as I argued in the deep dive, the technological trajectory we're on right now is not going to leave you unemployed. Forget this "AI means I'm not going to have a job" stuff. Again, I think a lot of the reporting on this has been atrocious. It's a lot of, as I keep saying, mixing up job numbers that are very real: the tech sector contracting post-pandemic, tech jobs therefore going down, very real. And they keep saying AI, AI, in the middle of this. They're not related.
Starting point is 01:07:41 They are not related. They're not laying off jobs at Microsoft and Amazon because people are being replaced by AI, even though for whatever reason there are some reporters out there that really want you to believe that's true. I don't know why; maybe they don't like computer scientists. So don't worry about that. AI is not about to replace all software developers. Don't worry about that. Should you get a master's or not?
Starting point is 01:08:00 There's actually, for computer science, and this is very specific to computer science, a lot of thinking about this, because we're geeks who think very carefully about salary expectations. Right. So this is why it's technical. Each higher-level degree you have increases the level of position at which you can enter a technology job. So with a master's degree, there might be a higher-level job you can start with than if you're coming in with just a bachelor's degree. With a doctorate, if you go to work for Google, you would come in at an even higher level, with more salary.
Starting point is 01:08:36 But on the other hand, it takes more time to earn these. So you have to sort of do this tradeoff between, I'm not making money for the years it takes to get this degree, but then I make more when I get there. The folk wisdom on this, and someone has run these numbers before, assuming these are high-quality degrees: in computer science, you would often come out ahead with a master's, because they don't take that long to get. And so, like, I had this two-year period where I'm missing out on salary, but I come in at this salary, and it would have taken me maybe four years to get there with a bachelor's. And so, like, it works out okay. The folk wisdom was always, with doctorates you've got to be a little bit
Starting point is 01:09:16 more careful. They take a long time. So really, if you're getting a doctorate, it should be because you want to actually master an area well enough that you can do original research, because you want a job doing research or you want to go into academia. It shouldn't just be a pure salary play. There have been exceptions, like until recently. Oh man, the AI hiring. Have you heard about what Meta was doing? Yeah, I was going to ask you about that, actually. Yeah. So if you were an AI PhD and you were really working on the techniques that were relevant to large language model scaling, they were throwing crazy money at you. They were throwing, I know baseball better, top-10-overall-pick draft signing bonus money if you had that.
Starting point is 01:09:56 And if you were already established, like I am at a different company and I'm really good at something that's unique to scaling, Zuckerberg went crazy, only for like a few weeks. But he was giving some compensation packages to individuals that were worth up to $300 million. I know. I heard that. It's crazy. It's like Juan Soto money.
Starting point is 01:10:21 Well, Juan Soto makes a lot more than that. It's half of Juan Soto money. And probably some of these engineers... Well, we don't know the years on the engineers. Well, that's a good point. Yeah. Because Juan Soto signed like an eight or 10-year deal. Yeah, he's on a long deal.
Starting point is 01:10:37 I think it's like a 15-year deal. But the vesting, we don't know how long it takes to vest. But here's the thing. All right, it's like half the money as Juan Soto. But I bet some of these engineers could have an OPS within a hundred points of what he's hitting right now. You like my baseball joke there? Yeah. A little Juan Soto dig.
Starting point is 01:10:54 Half the money, but more than half the OPS. Like a .400 OPS. All right, enough of that. So, anyways, you can do this type of math, but basically, if what you care about is salary, a master's can be very much in the mix. If what you care about is salary, be careful with a PhD. It really needs to be about, I want the skills. It's not just, I want to get higher up on the developer chain.
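The "do this type of math" tradeoff Cal describes, forgone salary during the degree versus a higher entry level afterward, can be sketched as a toy calculation. Every number here (salaries, raise rate, tuition, horizons) is a hypothetical placeholder, not a figure from the episode:

```python
def cumulative_earnings(years, start_salary, annual_raise, delay=0, tuition=0):
    """Total pay over `years`, after `delay` unpaid years spent in school."""
    total = -tuition
    salary = start_salary
    for year in range(years):
        if year >= delay:              # no income while still in the degree
            total += salary
            salary *= 1 + annual_raise
    return total

# All numbers hypothetical: bachelor's starts earning now at $100k;
# master's spends 2 unpaid years (plus $60k tuition) but enters at $130k.
for horizon in (10, 20):
    b = cumulative_earnings(horizon, 100_000, 0.05)
    m = cumulative_earnings(horizon, 130_000, 0.05, delay=2, tuition=60_000)
    print(f"{horizon} yrs: bachelor's ${b:,.0f} vs master's ${m:,.0f}")
```

With these made-up inputs, the bachelor's path is still ahead at ten years and the master's pulls ahead by twenty; the crossover point depends entirely on the inputs, which is exactly why the folk wisdom says to run your own numbers.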
Starting point is 01:11:16 That really needs to be, I want to do original thinking or research. That's usually what they say. Degree quality matters, so I'm a little nervous about the "online" here. So here's what you really want to do. The biggest trap with graduate degrees is wanting the story to be true that this graduate degree is the right thing to do, because it's convenient or you like it or it makes sense to you. You actually have to go verify. And by verifying, I mean you really need examples of specific jobs where you have an indication,
Starting point is 01:11:44 like from someone who works there, that we wouldn't hire you now, but we would be likely to hire you if you had this master's degree from this institution. You want that verification. Right. So just because, like, with a master's from MIT, you can get snapped up into this particular position at Meta, it doesn't mean that if you have an online master's degree from, you know, the Coney Island Institute of Computer Science and Rollercoaster Repair, that same job is available to you. You need to verify. They actually have a really good quantum physics program there. It's interesting.
Starting point is 01:12:20 You wouldn't expect it. Really, that is a really good program. Also, so is vomit cleanup. They specialize in those two things at the Coney Island Institute of Computer Science and Rollercoaster Repair. So you've got to be a little bit careful. So let me summarize everything. AI is going to take your job? No. Stop reading those articles.
Starting point is 01:12:39 This is nonsense, and those reporters are kind of embarrassing. They're going to stop writing them anyways. Master's? Yeah, maybe. It could help with computer science; just make sure that the program and the quality you're going for are going to help for the particular jobs you care about. PhD,
Starting point is 01:12:51 that really should be because you want to do research. All right? What we got next? I was going to make a more, a couple more Mets digs, but I refrain because I don't want to get off topic. Can we do one more Mets dig? Let me do this.
Starting point is 01:13:03 Okay. Look, Sam, you're not going to be unemployed. I don't want Mets player to, I need a what Mets player to reference here. Who's struggling? Well, Matt Dogg went crazy on Pete Alonzo.
Starting point is 01:13:16 The break in the Met's home run thing. Okay. You might not be unemployed in 10 years because of AI, but Pete Alonzo will. Well, he probably won't be a Met next year. Yeah. Yeah. Okay. Well, they only sign them on a one-year deal?
Starting point is 01:13:29 Pretty much. Technically, too, but as Matt Dog said, it's one. They have a one-year option. Yeah. Yeah. All right. So, listeners, we're going to, like, workshop a Mets joke here. It'll take us, like, 20, 30 minutes.
Starting point is 01:13:41 We're going to leave the tape running. But then when we come back, we're going to have a right reference. Who else are they mad at over there? I don't know. Look, I don't want to kick someone when they're down, but the Nats did take the series from them earlier this week. All right. Who do we got next? Next is A-Lock. I'm a recently retired college professor.
Starting point is 01:14:02 I have good reading and exercise routines. However, I'm trying to figure out what I should focus on. I can continue to help colleagues with academic papers, or should I finally dive deeper into writing? If you're a retired college professor, right, yeah, I mean, this is, like, the time. Right now, by the way, while the momentum's still there. You're recently retired. Write the book you wanted to write.
Starting point is 01:14:25 Go do the research. I want to go to this place, go to the archives, whatever it is. Or, I'm a science professor and I want to write the public-facing book about this idea, or the one that applies ideas from this obscure field to the rest of your life. Or, you know, here are particular philosophers and why you should know about them, and write that book. That's what you should do. Because professors never have enough time to write. It is the curse of our existence.
Starting point is 01:14:49 It is our myth of Sisyphus for professors. It's not our livers being eaten out repeatedly by the eagle. It's like, just as we're about to get down to writing, we get a calendar invite for a Zoom meeting. That's our Sisyphean fate. So, right, for you and that job: write a book.
Starting point is 01:15:19 Right, who we got? That's good, because I've been emailing with him and I think he's going to do it. I didn't know how you were going to answer that. But what was he? What was he? Hopefully it's not like he's an economics professor. He was. So it's not a book about, like, New Mexico, or why you should preemptively murder your dog because of their impact on the economy. The puppy massacre theory. All right, next up is Aaron. I'm an attorney and also I'm writing a book.
Starting point is 01:15:54 I have a lot of bookshelves, as Jesse knows. I have a home library with a bunch of built-in custom shelves. But then in our family room, we have a bunch of built-in shelves. We've also filled with books. And then on my bookshelves here at the HQ, which we filled with books, and then I have the bookshelf at one of my two offices at Georgetown, which I've also filled with books. And then I have a large pile of books next to my desk and library that I'm going to use to fill in even more spaces in the shelf. So I think it's
Starting point is 01:16:24 great to have books wherever. Books are fantastic. I don't understand this need of, like, I have to purge my books. Books are fantastic. It's such knowledge compressed into this little thing that you can just download into your head, and someone, like, spent years to get it into this small form. If you have a home library, make the home library awesome. I kind of think if you're an attorney who's writing books, if you're a successful attorney, maybe you need a work-near-your-home type space, like a cool office like I have near your home, where you go to write, and you can just fill and surround that space with books. Look at, it's been a while since he talked about it, but Ryan Holiday has talked about this: he has a massive book collection. When he moved, he had to have a separate library service move the books. Like, you couldn't just have his movers do it. Like, it's a whole separate thing that has to happen. Let me adjust this here, Jesse. He has all these shelves he built.
Starting point is 01:17:14 So maybe look at some of his stuff. But yeah, build a lot of libraries. Maybe have a separate library, or make your library at your house awesome. Like, expand it or something like that. Expand the house. Doing crazy stuff around books, I'm a big proponent of. To me, the people who are like,
Starting point is 01:17:31 I want to get down to two books because it's wasteful to have them. I think that's like the puppy murderers. Like, yeah, I like my dog. but dog food ain't cheap. So like we had him and the other dogs on the street put down, right? To me, that's what it's like when people are like, I just get rid of all my books. I don't need my books.
Starting point is 01:17:46 Books are awesome. All right, do we have a call? We do. All right, let's hear this. Hi, Cal, this is Rina. First time caller, long time listener. I absolutely love your work and it has been so life-changing for me. So my question is, I know you talk a lot about the overhead tax
Starting point is 01:18:04 and how you want to take on less projects at a single time so that the overhead tax is less so that you actually have more time to do the project. And all of that sounds great to me. However, the thing that I've been struggling with is the overhead tax that happens after a project is done. So, like, I'm a composer, and I write the piece, I finish it up, I turn it into the commissioner, all of that is done. But then even years later, people will be emailing me to be like, you know, asking questions about the piece and about the interpretation, maybe finding some errors in the,
Starting point is 01:18:38 piece, and I'm wondering if you deal with that in any of your books, you know, books that you've written years ago where you're still kind of managing the back end of them, or if anything in your life has this, or if you really feel that when you complete a project, it's completely done and you never have to deal with the overhead tax again. I'd be so interested in your insights on that. That's cool. Being a professional composer is cool. Yeah. Like, you get commissioned pieces and you compose them. I'm very impressed by music composing. You have all the different instruments, and you sort of make it all work. That's really cool.
Starting point is 01:19:13 All right. So there's the overhead tax. And just to remind the listener in general what that is, because you should always keep it in mind: it's the idea that when you accept something, it brings with it administrative overhead. This is going to be meetings, it's going to be emails, it's going to be things taking up space in your thoughts. And that's what you have to worry about not getting too large. That's why you should not say yes to too many things. Even if the actual total time required to do the things you've said yes to is
Starting point is 01:19:36 reasonable, it's possible that all of the overhead tax those things bring with them makes your life unreasonable. That is, the 20 total hours this month that it'll take to do these five projects might generate 20 emails a day and six meetings a week. And that just makes your life so fractured that you are completely frustrated. So you have to monitor overhead tax very carefully. It's the main reason why overload, having too much on your plate, is dangerous. In your particular situation, this is very particular.
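The arithmetic behind the overhead tax can be made concrete with a toy sketch. The per-email and per-meeting minute costs below are hypothetical assumptions, with the rates calibrated so five projects reproduce Cal's example of 20 emails a day and six meetings a week:

```python
# Toy model of the overhead tax: each accepted project costs real work
# hours, but also drags in emails and meetings that fragment the week.
EMAIL_MIN = 10     # hypothetical minutes per overhead email
MEETING_MIN = 45   # hypothetical minutes per meeting

def weekly_overhead_hours(projects, emails_per_day_each=4, meetings_per_week_each=1.2):
    """Admin overhead (hours/week) generated by concurrent projects."""
    emails_per_week = projects * emails_per_day_each * 5   # 5-day workweek
    meetings_per_week = projects * meetings_per_week_each
    return (emails_per_week * EMAIL_MIN + meetings_per_week * MEETING_MIN) / 60

for n in (1, 3, 5):
    print(f"{n} projects -> {weekly_overhead_hours(n):.1f} overhead hours/week")
```

With these made-up per-item costs, five projects already generate about 21 hours of overhead every week, more than the 20 hours a month of actual project work in his example, which is the whole point: the overhead, not the work itself, is what makes the plate overloaded.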
Starting point is 01:20:06 It's creative production. You produce a creative artifact that is then available to a wider public. So a composer would have this. I have this as a writer. Musician would have this. If you were, you know, anything where you're putting out a creative artifact that is available to a larger public. As Jesse will tell you, I'm not necessarily, you wouldn't say I'm the world's most responsive author, if you have to summarize it. I am not exactly giving one-on-one attention.
Starting point is 01:20:35 Well, I think you're only the author I know, so I'm not really sure. Yeah, but you handle a lot of my correspondence, is why. So you get a sense. A lot of stuff comes in. I see very little of it. But then it used to be the case. When I first started writing books, I was writing books for students, and I felt like it was just part of my service I was doing to the world was like to help students not be stressed
Starting point is 01:20:54 out, not get overwhelmed, have a good experience in college, figure out how to be a meaningful adult, move on to satisfying lives. And so I tried to answer every email that people would send in for my initial books, and it took more and more time, and it finally became just not tractable. And I went through a sort of similar calculus to the one Neal Stephenson talks about
Starting point is 01:21:12 in his sort of classic essay, "Why I Am a Bad Correspondent." You do the math eventually and you say, there's enough correspondence coming in now that to try to keep up with all of it would take a sizable portion of my time. Now, the raw number of people here
Starting point is 01:21:26 is not massive. It might be 100 people a week. But that's going to take up a huge amount of my time. That's enough time now that it is going to significantly slow down my ability to, for example, write a new book. But a good book, not even my really successful newer books, but, like, my student books: two out of my three student books are comfortably over, I don't know what the numbers are, but I think How to Become a Straight-A Student sold like 300,000 copies, right? I'm never going to answer 300,000 emails, but that book reached 300,000 people and brought those ideas to them. So Neal Stephenson, in that famous essay I talked about, he's a sci-fi novelist if you don't know, a speculative fiction writer, he said, look, my book's going to be read
Starting point is 01:21:42 books, like two out of my three student books are comfortably over, I don't know, I don't know what they are, but I think a straight-day students sold like 300,000 copies, right? I'm never going to answer 300,000 emails, but that reached 300,000 people and brought those ideas to them. So Neil Stevenson in that famous essay I talked about, he's a sci-fi novelist if you don't know, a speculative fiction writer. He said, look, my book's going to be read. by more people than I can talk to directly.
Starting point is 01:22:06 So I have to just become a bad correspondent. And that was a meaningful essay to me. It's uncomfortable at first because it feels selfish, because you're not used to being a public figure. So you say, someone's trying to reach out to me; in general, if someone's trying to talk to you, it's rude to ignore them. But you're in a different situation where you're doing creative production. Because if it gets to the point that dealing with people's questions, well-intentioned,
Starting point is 01:22:31 positive questions for you about your work. If that prevents you from doing more work, now you're minimizing your impact. It's better to write that next book that's going to reach a couple hundred thousand people than it is over the next five years to talk to a couple thousand people. It's a couple different orders of magnitude.
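The orders-of-magnitude comparison Cal is making can be sketched numerically. The per-reply time and available creative hours below are hypothetical placeholders; the 100 emails a week and 300,000 copies are his rough figures from the episode:

```python
# Toy version of the "bad correspondent" math: one-on-one replies
# versus making the next thing that reaches everyone at once.
REPLY_MIN = 15                # hypothetical minutes per thoughtful reply
CREATIVE_HOURS_PER_WEEK = 20  # hypothetical creative hours available

emails_per_week = 100         # Cal's rough figure
book_readers = 300_000        # his student-book sales figure

reply_hours = emails_per_week * REPLY_MIN / 60
fraction_consumed = reply_hours / CREATIVE_HOURS_PER_WEEK
reach_ratio = book_readers / emails_per_week

print(f"Replying to everyone: {reply_hours:.0f} h/week "
      f"({fraction_consumed:.0%} of creative time)")
print(f"One book reaches {reach_ratio:,.0f}x more people than a week of replies")
```

With these made-up rates, replies alone would consume 125% of the available creative time, while one book reaches 3,000 times as many people as a week of correspondence: the utilitarian case for the hard rule.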
Starting point is 01:22:50 So eventually, I think if you're doing creative production, you have to just have a hard rule. Like, I'm just not able to do one-on-one interaction. And it'll feel weird at first, and then it'll feel better. But this is just the economics of creative impact: at some point, you have to maybe just have pretty big limits. This is why I do this show now. It's a way for me to actually answer questions and try to be useful beyond my books, but still at a much larger scale. Because a good episode of this could, you know, hit 80,000 people or whatever.
Starting point is 01:23:22 So that's better than answering 80 questions. I can answer eight and maybe reach 80,000 people. So I think that economics, it's utilitarian. It does make sense, but it is uncomfortable at first. But you might have to just unilaterally take yourself out of this particular overhead tax. I just can't answer a lot of questions. I don't answer. I'm a bad correspondent.
Starting point is 01:23:42 I'm like Cal, I'm like Neal Stephenson. I'm just not able to really answer questions, for the most part, that people send me about, like, the compositions I put out. It's okay. It is a different type of interaction than just a normal interaction with someone you know. It's okay in the context of creative production to be harder to reach than it is with, like, your cousin. So hopefully that's okay. I remember it was hard for me to become a bad correspondent,
Starting point is 01:24:04 but like now it's just necessary. There are just too many millions of whatever out there. It's just, I would never get anything done. I got a Cal Reacts coming up with a figure, a cultural figure. Most people know who he is. I know very little about him. And I suspect Jesse knows less,
Starting point is 01:24:19 but we'll find out. But first, even more exciting, I want to talk about another one of our sponsors. Guys, wouldn't you like to look a little younger, maybe get a few more compliments on your skin? I don't worry about this as much because, as part of his employment contract with me, Jesse has to compliment my skin no fewer than seven times per day. But for most guys, the only way to feel better about their skin is to actually take care of it.
Starting point is 01:24:43 And that's what Caldera Lab is here for. Their high-performance skin care products are designed specifically for men: simple, effective, and backed by science. In a consumer study, 100% of men said their skin looks smoother and healthier, 96.9% noticed improved hydration and texture, and 93.8% reported a more youthful appearance. Their products include The Good, an award-winning serum packed with 27 active botanicals; the eye serum, which helps reduce the appearance of tired eyes, dark circles, and puffiness; and the base layer, a nutrient-rich moisturizer infused with plant stem cells and snow mushroom extract. These are products that will make you look better and feel better,
Starting point is 01:25:26 and they're cheaper than hiring Jesse to comment on your skin throughout the day. Skin care doesn't have to be complicated, but it should be good. Upgrade your routine with Caldera Lab and see the difference for yourself. Go to calderalab.com slash deep and use Deep at checkout for 20% off your first order. I just want to talk about our friends at Ship Station. If you run an e-commerce business, what's the best way to be successful? It is to keep your customers happy. I learned this the hard way after I started letting Jesse Skeleton man our customer service lines.
Starting point is 01:26:01 Jesse was not good at customer service. He would trick people into revealing personal details and then would mercilessly berate them with off-color jokes. So we had to let him go. I realized, oh, your customers being happy is what matters. So how do I do that? In addition to firing Jesse Skeleton, you realize, if you're shipping things, you can earn your customers' trust and generate their happiness one package at a time.
Starting point is 01:26:28 And how do you create the best e-commerce shipping experience for your customers? You use ShipStation. With ShipStation, you can sync orders from everywhere you sell into one dashboard, and you can replace manual tasks with custom automations. Ooh, this is singing my type of song, to reduce shipping errors at a fraction of the cost. The single-dashboard idea that ShipStation has is really cool, right? It's like you have a super-advanced fulfillment center, even if you run, like, a small business. ShipStation treats all businesses the same.
Starting point is 01:26:58 And they'll scale with you as you get bigger. Their tools just scale with your business so you don't have to learn something new. They're there with you for the whole journey. Another benefit is the discounts. It's the fastest, most affordable way to ship products to your customers, because they'll give you discounts of up to 88% off UPS, DHL Express, and USPS rates, and up to 90% off FedEx rates. You save money just by using them. Another cool feature: automated tracking updates with your company's branding.
Starting point is 01:27:25 You know those emails you get, like, oh, this is where your package has been sent? Even as a small business, if you're using ShipStation, you can get those. And the emails will have your logo on them. Looks cool. Makes you look good. When shoppers choose to buy your products, turn them into loyal customers with cheaper, faster, and better shipping. Go to shipstation.com slash deep to sign up for your free trial. There's no credit card or contract required, and you can cancel anytime.
Starting point is 01:27:48 That's shipstation.com slash deep. All right, Jesse, let's move on to our final part. So I want to react to a clip that a listener sent me
Starting point is 01:27:58 about Ed Sheeran. Here's my question, Jesse. Do you know who Ed Sheeran is? No. Do you know, like, what field or profession
Starting point is 01:28:10 he is in? No. I just learned this this week, coincidentally. He's a singer, and he has red hair.
Starting point is 01:28:21 And I guess I kind of knew he was a singer. But then I was watching the new season of Limitless with Chris Hemsworth, and in the first episode, he's trying to learn how to drum for, like, cognitive fitness. He had no idea how to drum, and he's going to join Ed Sheeran on stage in Croatia, with 70,000 people, to play the drums with him in a concert. So I learned about Ed Sheeran. The next issue was, I was like, I don't know a single Ed Sheeran song. But they played the song that he was going to play the drums with, and I recognized it.
Starting point is 01:28:54 Okay. I think you would, too. It's kind of like a ballady type thing. Yeah, I might know it. Yeah, I don't know. He kind of looks like Prince Harry. Chris Hemsworth, when you did his workout, that's when he got injured, right?
Starting point is 01:29:04 You're still a fan, though? Well, he's doing mental stuff now, so it's okay. It's okay. My kids were impressed because I love that show. And I was showing them the first season; a lot of the stuff he's doing is with Peter Attia, who is, like, a producer of it. And I was like, I did an event with that guy, and my kids were very impressed. But back in the day, didn't he do Hemsworth's workout on the app?
Starting point is 01:29:22 Yeah. Yeah, I don't do it anymore. I have a trainer. Shout out to Zach. Yeah, Hemsworth, I'm coming for you, buddy. He's a big gentleman. All right, so anyways, Ed Sheeran did an interview and said something that I thought was interesting. So let's play a little bit of this, Jesse.
Starting point is 01:29:40 I haven't had a phone since 2015. I had the same number from, like, age 15, I think. I got famous, and I had 10,000 contacts in my phone. People would just text the whole time. Well, even if it was in my pocket, I'd be having a conversation with you like this, and it would vibrate. And my mind would not be in the conversation.
Starting point is 01:29:57 It would go, oh, who is that? Until I take it out of my pocket, and then I answer the text, and then suddenly our moment has ended, and I'm in this. And now I find that there's no connection to anything. People can't reach me. I can't be interrupted. I go to dinner with my wife. I go to dinner with my dad.
Starting point is 01:30:11 I go for lunch with my friends, and I'm in it. So I got rid of it. I got an iPad. I moved everything onto email, which I reply to once a week. No one expects a reply to an email. It's a cult. I love this. I think I like Ed Sheeran. I think I know his songs, actually.
Starting point is 01:30:26 We can't really play them, right? We'll get demonetized or whatever. But I recognized the one they played; the one Ed Sheeran song I heard, I knew. Let me see if there's some free YouTube. I think we're going to find that he wrote, like, really common songs. Did he write the song Hey Hey, We're the Monkees? No, I think that was from the 1960s. Probably not him. Did he write
Starting point is 01:30:51 Twist and Shout? That was the Beatles in the '60s. I don't think that's right. I think this is really cool. Here's what's really cool about this, right? Like, Ed had to make a change because he's famous
Starting point is 01:31:00 and things got over the top. But he just said, look, there's not a law that says you have to look at a phone when it buzzes. There's not a law that says you have to have a phone
Starting point is 01:31:11 and everyone can reach you at any time, and that's just how we function in our society. He said, I'm not going to do that anymore. I don't have a phone you can call. You can email me,
Starting point is 01:31:19 I'll check it once a week. And people are like, oh, I guess that's what Ed's doing now. And then they moved on with their lives. This is what people imagine when they make a change like this: that there's a war room somewhere, and there's guys in there, their suits are on the backs of their chairs, and they've rolled up their sleeves, and they're chain-smoking cigarettes.
Starting point is 01:31:39 And there's a picture of you up on a bulletin board, and all this yarn going from your picture to various graphs that they've made of your response times on various text messages. And then, if you ever say, like, I just don't use my phone anymore, you're just going to have to email me, that, like, a klaxon was going to go off. And, like, the lead guy was going to, you know, put out a cigarette and be like, we're in the bleep now. And the other guy's just going to, like, nod and take a slug of whiskey or whatever. No one's monitoring. They don't really care. Like, they kind of care, but they have their own lives. So, like, they're like, this is weird.
Starting point is 01:32:07 Ed's using email. That's kind of weird. Is he a musician? Because other people don't know what he is. I guess he's a musician or whatever. And then they moved on with their life. But you know whose life was much better? Ed Sheeran.
Starting point is 01:32:16 because he's actually present in what he's doing. He just goes to dinner with his dad. I think that's code for some sort of, like, nasty sex thing, probably. I mean, he's a pretty famous musician. I mean, I'm not distracted when I have dinner with my dad. All night long, having dinner with my dad. P. Diddy wanted some of the baby oil. But I was like, I need some of that baby oil because I have dinner with my dad.
Starting point is 01:32:47 200 of my dads are coming over tonight. I'm just saying, this probably... The guy's famous. I mean, young, too. Anyways, I thought that was a cool example. Yeah, he's famous and can get away with a lot more, and the pressures were higher. But, like, there is an insight in there. You can do different things.
Starting point is 01:33:03 Like, it's okay to say, I'm sort of prioritizing my communication to the world, in part to make, like, my day-to-day life better. Most people don't really care that much. They're just jealous of all the dinners you're having with your dad. And so, you know, let's take a page out of the book of Ed Sheeran and say that maybe being even a little bit radical about how accessible you are is something that should be on the table. I'm going to listen to some more Ed Sheeran music. That's my...
Starting point is 01:33:30 Me too. Yeah. Now is my... You think I can become friends with him? Probably. Right? If he reads books. Yeah.
Starting point is 01:33:36 I feel like we should be friends. I'm sure he has a lot of time. I'll hang out with him. Ed Sheeran. The other thing I wanted to mention, I don't know if you heard this, Jesse. In a technical sense, I have been nominated for a Booker Prize.
Starting point is 01:33:56 Here's the technical sense. There's a book out called Universality by Natasha Brown, which was longlisted for the Booker Prize. It was called a twisty, slippery descent into the rhetoric of power, hailed as a nesting-doll satire that leaves readers uncertain where their loyalties lie. It takes place in academia, and there's a murder, and it's a Booker Prize nominee, so it's sort of, like, beautifully written. Anyways, this came out in March, and a viewer, or a listener to the podcast, sent in, I have it somewhere, but anyways, they sent in that there's a part in the book where one of the characters mentions me. So I figure, like, by proxy, how can I actually put this on my, like, blurb things? Booker Prize award-winning adjacent. Booker adjacent.
Starting point is 01:34:42 I figure if I'm mentioned in a book that's nominated for a Booker Prize, I'm just going to count it. That's as close as I'm ever going to get to a literary award. So there we go. Universality by Natasha Brown, featuring as a major part of the plot, by which I mean the phrase Deep Work by Cal Newport is mentioned once. You should check it out. Me and Ed Sheeran are reading it in our book club. That's also a code word for, like, a nasty sex thing with Ed Sheeran.
Starting point is 01:35:10 Like, no, no, I got to go fire up the hot tub, get the frozen bratwurst, not the cooked bratwurst. Yeah, it's book club. Gotta go to book club. Ed Sheeran, yes, he's a crazy perv. And also, we're going to be great. We're going to be best friends. And I like his phone habits.
Starting point is 01:35:30 All right. Before we get ourselves, like, demonetized and arrested, let's wrap up the show. We only have, like, 19 more audio clips to play, and then we'll be done. Thanks for listening. We'll be back next week with another episode of the show. And until then, as always, stay deep. Hi, it's Cal here. One more thing before
Starting point is 01:35:48 you go. If you like the Deep Questions Podcast, you will love my email newsletter, which you can sign up for at calnewport.com. Each week, I send out a new essay about the theory or practice of living deeply. I've been writing this newsletter since 2007 and over 70,000 subscribers get it sent to their inboxes each week. So if you are serious about resisting the forces of distraction and shallowness that afflict our world, you got to sign up for my newsletter at calnewport.com and get some deep wisdom delivered to your inbox each week.
