Deep Questions with Cal Newport - Ep. 391: Is AI Reporting Broken? + Rethinking Morning Routines

Episode Date: February 9, 2026

Is there something rotten in the state of AI reporting at the moment? In the ideas segment of this episode, Cal details three common traps in AI coverage that distort or distract from the reality of this technology. Once you know what to look for, these traps become easy to avoid – greatly improving your experience when trying to keep up to date on the latest advancements. Then, in the practice segment, Cal asks why morning routines have become so popular among young people. His explanation (hint: it involves the fight for depth in a distracted world) uncovers new ideas about how to make morning routines actually useful.

Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo

Video from today's episode: youtube.com/calnewportmedia

IDEAS SEGMENT: Is AI Reporting Broken? [1:43]
PRACTICES SEGMENT: Rethinking Morning Routines [29:27]

QUESTIONS:
Did I see somewhere that you're filming a MasterClass? [41:24]
Now that David Brooks left the NYT, will he start a Substack? [46:25]
Cal reacts to comments [48:25]

WHAT CAL'S READING: Cal gives his weekly reading update [50:45]
Time Freedom (Brian Heriott) ARC
The Vampire, The Tutor, and the Madman (Josh Douglas)
One Direction (Charles Duhigg)

Links:
Buy Cal's latest book, "Slow Productivity" at calnewport.com/slow
Get a signed copy of Cal's "Slow Productivity" at peoplesbooktakoma.com/event/cal-newport/
Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba
qz.com/amazon-layoffs-ai-tech-job-losses
cnbc.com/2026/01/28/amazon-layoffs-anti-bureaucracy-ai.html
calnewport.com
nytimes.com/2025/07/28/arts/video-games-artificial-intelligence.html
nypost.com/2026/01/31/tech/moltbook-is-a-new-social-media-platform-exclusively-for-ai/
news.ycombinator.com/item?id=46838834
youtube.com/@WesRoth/videos
youtube.com/@airevolutionx/videos
youtube.com/watch?v=JoQG25gQyRg
newyorker.com/magazine/2026/02/02/what-maga-can-teach-democrats-about-organizing-and-infighting

Thanks to our Sponsors:
factormeals.com/deep50off
monarch.com (Use code "DEEP")
pipedrive.com/deep
mybodytutor.com

Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:00 I'm not sure if you've noticed, but a lot of AI coverage has gotten out of hand recently. I mean, if engineers get any more excited about Claude Code, I think they're going to elect it mayor of San Francisco. So today in the idea segment of this show, we're going to take a closer look at this issue. I'm going to identify the biggest traps to avoid when reading news about AI if you're looking to just get the straight facts about this technology and not succumb to overhyped terror or exhilaration. In particular, I'm going to introduce what I think are three increasingly common shady moves that show up in AI coverage. Here are my names for them: vibe reporting, digital ick, and faux astonishment. I'm going to describe each of these traps, and I'll give you some examples of them from out in the wild, so you will know what to look out for.
Starting point is 00:00:53 Then in the practices segment, we're going to revisit a popular topic in online circles, morning routines. I have a take on these rituals that I think might surprise you. And finally, just a quick heads up in the Q&A segment, I'm going to respond to the rumor that I filmed a course for Masterclass. Spoiler alert, I did, and it's available now, and I'll tell you more about it when we get there. All right, so we have a lot to get to today. As always, I'm Cal Newport, and this is Deep Questions,
Starting point is 00:01:21 the show about the fight for depth in an increasingly distracted world. And we'll get started right after the music. All right, so to begin our investigation of these traps in AI reporting, I'm going to bring an article up on the screen for people who are watching instead of just listening. This article comes from the publication Quartz; it came out, I think, last week. And the title I'll put up on the screen here is Amazon is laying off 16,000 more workers as AI accelerates tech job losses. Here's the subhead. Jobs are going to be impacted by what's coming with AI over time, Amazon CEO Andy Jassy said before the layoffs were announced.
Starting point is 00:02:14 All right. So you look at that article, and I think there is a clear message. Amazon laid off people because of AI. I mean, that's literally what it says in the headline. The subhead is the CEO saying layoffs in the future will continue to be impacted by AI. If we look at the article itself, nothing in the actual text contradicts that. There's quotes about how many people they're firing and what benefits they'll get and the fact that Amazon's kind of cutthroat. But you're left with the clear impression that these layoffs are about AI.
Starting point is 00:02:45 Now, here's what I want to do next is show you a different story on the same layoff. So we're going to switch now from Quartz to CNBC. So this is now going to be financial news, where they care a little bit more, right? Because these are investors reading this. They want to get more to the heart of what's really going on here. Here's the headline of this exact same layoff story from CNBC. Amazon is laying off about 16,000 corporate workers
Starting point is 00:03:09 in latest anti-bureaucracy push. And you look at these bullet points, the key points. Amazon is laying off about 16,000 corporate workers in latest push to reduce bureaucracy. It marks the second round of mass layoffs. Some days earlier, they got an email about these changes, right? This is a very different feel than what we saw in Quartz. And in fact, if you read farther along in this article, you get some other information that's kind of interesting. You get a clear explanation from the CEO that this is about reducing the workforce after a hiring spree that happened during the pandemic.
Starting point is 00:03:49 There was a lot of hiring during the pandemic, when people turned more to cloud computing. They hired a lot of people during the pandemic. And now, as the CEO says in this article, they're cutting those back again. It's in response to the amount of people that they had hired. Let me read you the actual quote here. It says, CEO Andy Jassy has looked to slim down Amazon's workforce after the company went on a hiring spree during the COVID-19 pandemic, partly to meet a surge in demand for e-commerce and cloud computing services. Wait a second. What does that have to do with AI?
Starting point is 00:04:22 It goes on later. You find there's a quote in the article, I'll have to find it here, where they basically make it clear that, yes, at the same time, Amazon is investing more in their AI products. So presumably some of the money saved by firing people could go to their AI products. But that's about as clear a connection between these layoffs and AI as there is, which is: we overhired, we're cutting back, we have better uses for our money right now than maintaining this many managers. So that's a much more boring, but much more accurate story about what was happening there. Now, I actually wrote about this in my newsletter recently at calnewport.com. I
Starting point is 00:05:04 had an article called The Dangers of Vibe Reporting about AI, where I went through this case. And here's what I wrote. I'm going to read from my own article. In recent years, I've seen more articles follow the general approach demonstrated by the Quartz example. They identify an alarming, attention-catching fear about AI that seems prevalent in the cultural zeitgeist and then shape a story to feed the narrative. The key to this reporting strategy is that the articles never make explicit claims. They instead combine cunning omissions and loosely related quotes to make strong
Starting point is 00:05:34 implications. The name I give for this, as I previewed in looking at my own essay there, is vibe reporting, because what you're trying to do is support a preexisting vibe more than trying to get to the bottom of what's happening. I would say that Quartz article never actually comes out and says specifically Amazon laid off people because they could replace them with AI or because AI made them more efficient. They never explicitly said it, but it was clearly the vibe they were feeding by putting AI in the headline, by putting an unrelated quote from the CEO talking about AI-related layoffs that
Starting point is 00:06:09 could happen in the future. It's certainly the vibe they were trying to create by omitting from their article any of the publicly available discussion, which was included in the CNBC article, about the stated reasons for these layoffs, which had to do with hiring too many middle managers during COVID. They left out another key point. There was an earlier round of this firing in 2022 and 2023, after the pandemic but before ChatGPT even came out. This is part of an ongoing effort that has nothing to do with AI tools replacing people, but with trying to streamline. Now, I'll tell you, I heard from multiple Amazon executives on background after I published my newsletter on this, who all confirmed it.
Starting point is 00:06:50 They said we were somewhat baffled, I'm paraphrasing, to see the coverage that made it seem like these layoffs had something to do with AI. They had nothing to do with AI. Amazon is ruthless about trying to cut out inefficiencies, and they love to cut down units whenever they can. That's partially how they keep their profit margins going. All right. So that's vibe reporting. Unrelated quotes and omission of facts. So I want to bring up an article from the New York Times.
Starting point is 00:07:17 I mentioned this before on the show last year, but I think it's another great example. This came out in 2025, in the summer. The headline here is the unnerving future of AI-fueled video
Starting point is 00:07:38 AI and the video game industry. All right, listen to these two paragraphs which appear back to back in this article. Paragraph one, at the pace the technology is improving, large tech companies like Google, Microsoft, and Amazon are counting on their AI programs
Starting point is 00:07:54 to revolutionize how games are made within the next few years. Paragraph 2: Everybody is trying to race toward AGI, said the tech founder, Kylan Gibbs, using an acronym for artificial general intelligence, which describes a turning point at which computers have the same cognitive abilities as humans. There's this belief that once you do, you'll basically monopolize all other industries. So see what they're doing there?
Starting point is 00:08:16 Paragraph 1 was saying something that was kind of mundane, which was video game makers are looking forward to AI-powered tools; you know, they assume there'll be more AI-powered tools that they use in making video games in the future. Paragraph 2, which follows it immediately, is some founder talking in, like, a sci-fi tone about AGI
Starting point is 00:08:42 powered machines taking over all industries. You put those next to each other, and now you have taken something which is boring (yeah, we use AI-powered tools in, like, graphic fields) to something that gives you a vibe that big disruption is coming, that computers are going to monopolize industries. You put those next to each other. You create a vibe. All right, I want to give another example.
Starting point is 00:09:02 Later in the article, the reporter goes to a video game industry convention. And he says, I'm quoting here, it provides an eerie glimpse into the future of video games. Well, here are the next three paragraphs that follow, explaining this eerie glimpse. Engineers from Google DeepMind, an artificial intelligence laboratory, lectured on a new program that might eventually replace human play testers with autonomous agents, dot, dot, dot.
Starting point is 00:09:30 Next paragraph, Microsoft developers hosted a demonstration of adaptive gameplay showing how artificial intelligence could analyze a short video and immediately generate level design and animations, dot, dot, dot. And executives behind the online gaming platform Roblox introduced Cube 3D, a generative AI model that could produce
Starting point is 00:09:47 functional objects and environments from text descriptions. So this is an eerie glimpse of the future. They just described three demos. This is not technology that exists now. It was might, could, and could. Three demos of graphic tools, right? That, you know, tools you could use.
Starting point is 00:10:09 We've had computer tools improving for video game design since the very beginning of the American video game industry in the 1980s. This is nothing new. I mean, this is like Unreal Engine graphical game design. There's constant new improvements. I mean, just the improvements alone in doing 3D graphic design, and how powerful programs like Blender have gotten. It's a rapidly moving industry. These are like, okay, sure, like, you know, AI can help create 3D objects or do some play testing or whatever.
Starting point is 00:10:39 Like, this is sort of in line with other innovations we've had over the last 30 years. So this isn't really that eerie. So the reporter then follows those three demo descriptions with the following paragraph. These were not the solutions that developers were hoping to see after several years of extensive layoffs. Another round of cuts in Microsoft's gaming division this month was a signal to some analysts that the company was shifting resources to artificial intelligence. So they have a paragraph about layoffs in the gaming industry right after this discussion
Starting point is 00:11:11 of these demos for, like, graphic AI tools. Again, these aren't related. The layoffs in Microsoft's gaming division came from a big round of layoffs that Microsoft did because of, yeah, you guessed it, pandemic overhiring. So they also, like Amazon, cut back on their less profitable divisions, like video games, so that they could spend more money building data centers, because OpenAI was giving them billions of dollars a year right now to have access to their data centers. So that seemed like a better profit area.
Starting point is 00:11:39 So none of those job losses had anything to do with AI. It was just right-sizing after the pandemic, when they hired too many people. But if you put a paragraph about job losses and developers being upset right after those discussions of the demos, again, you're trying to create a vibe: AI is taking game developer jobs. But it's not doing that. And these types of tools are nothing new; again, there have been huge advances in computer tools for making video games for the last 40 years. It's not that interesting of a story, but you put it next to a paragraph of job loss and you create a vibe.
Starting point is 00:12:17 All right, so you get the picture. This is what I mean when I talk about vibe reporting. It's what you omit and how you combine loosely related paragraphs to give a vibe. Nowhere in that article does it say developers are being replaced by AI or we expect there to be massive layoffs due to AI soon. No concrete claims are made, but you certainly get that vibe when you come away from that article. Let's take a quick break to hear from some of our sponsors. You know the best way to avoid unhealthy food? Have healthy options that are even easier to get to.
Starting point is 00:12:51 This is my strategy. I get the junk out of my house and I fill my fridge with things that are easy to prepare, taste good, but I know are good for me. This is why I've become such a big fan of Factor. Factor is a ready-to-eat meal delivery service. You choose the meals from 100 rotating weekly options and they deliver them right to your door. They're fresh, not frozen. You put them in your refrigerator and you can just heat them up in
Starting point is 00:13:17 I like Factor because the food is high quality, featuring lean proteins, colorful veggies, and healthy fats. There's no refined sugars, no artificial sweeteners, no refined seed oils. They have categories of meals for whatever your goal happens to be
Starting point is 00:13:31 from high protein to Mediterranean diet to GLP-1 support. So head to factormeals.com slash deep50off and use code deep50off, that's deep, the number 50, off, to get 50% off and free breakfast for a year. It's a good deal, Jesse.
Starting point is 00:13:51 Eat like a pro this month with Factor. New subscribers only. Varies by plan. One free breakfast item per box for one year while subscription is active. I also want to talk about Monarch. Did you make a New Year's resolution last month about getting your finances in order? Let me suggest a tool that will help you succeed with that goal. Monarch.
Starting point is 00:14:12 An all-in-one personal finance tool designed to make your life easier. It brings your entire financial life, budgeting, accounts and investments, net worth, and future planning, together in one dashboard on your phone or laptop. Monarch shows you exactly where your money is going and helps you direct it towards what matters most. We're talking about budgeting, but also payoff timelines, tracking savings goals, and up-to-the-moment snapshots of your net worth, all of this in a single place. This works. Monarch has helped users save over $200 per month on average after they join. So set yourself up for financial success in 2026 with Monarch, the all-in-one tool that makes proactive money management simple all year long.
Starting point is 00:15:01 Use code DEEP at monarch.com for half off your first year. That's 50% off your first year at monarch.com if you use code DEEP. All right, Jesse. Let's get back to the show. All right, I want to move on now to the second trap in AI reporting that I want you to keep your eyes open for. I'm going to return to that same New York Times article, but now I'm going to go to the very top of it. At the very top of that article, they have an animation that demonstrates the second trap. So I'll bring this on the screen here for people who are watching instead of just listening.
Starting point is 00:15:35 What you see here is screenshots from a video game demo, for a video game about the Matrix. And the text here in the middle, and if I press play, you might even be able to see them move. The text here in the middle is quotes from the NPCs in this game. So this first text says, I need to find my way out of this simulation and back to my wife, a man said. Can't you see I'm in distress? Here's another screenshot. The text, what the NPC here is saying, is, I am not just lines of code, a man in a business
Starting point is 00:16:03 attire exclaimed. I am Liam. I am a real person enjoying the city. And then the reporter says on this third screenshot, characters in a video game version of the Matrix seem to be gaining sentience thanks to an AI program. If we go into the article itself,
Starting point is 00:16:21 it says the unnerving demo, released two years ago by an Australian tech company named Replica Studios, showed both the potential power and consequences of enhancing gameplay with artificial intelligence. You come away from seeing those screenshots, reading those texts, and getting that conclusion, and you're left, like, unsettled.
Starting point is 00:16:38 Like, well, this seems unsettling. They're showing like screenshots of digital characters who are like, help me. I'm in a game. I'm not a game. I'm a real person. And they're saying this is troubling and that this is a troubling glimpse of the future. It gives you a generally unsettled feeling. But let's say we were from like a video game trade magazine saying, well, what are the actual technical details here and what are the concrete implications?
Starting point is 00:17:00 Well, there's nothing interesting here. It turns out Replica Studios, what they did is, you know, they have a standard 3D game environment; they probably built this on Unreal Engine. And the thing they tried is they said, when you talk to a non-player character, what we'll have our game do is send a prompt to ChatGPT and say, hey, what response should this character say? And then we'll just say back whatever ChatGPT told us. So the game was just prompting ChatGPT and saying, hey, imagine that you are a character in the Matrix, and someone said this to you. How might you respond?
Starting point is 00:17:33 And then it gave back, like, oh, here's what a person in the Matrix would say, and then they gave that back to the user. So this is the same technology that we had in late 2022 with ChatGPT. There's no technical innovation here. It's just, can ChatGPT produce, you know, text in the style of someone who's trapped in the Matrix? Of course it can. Of course it can.
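To make concrete how thin the mechanism being described is, here is a rough sketch of that pattern in Python: build a prompt per NPC interaction, forward it to a language model, return whatever comes back. The query_llm function, the persona string, and the character name are hypothetical stand-ins for illustration; a real implementation would make a paid chat-completion API call where the stub is.

```python
# Rough sketch of the NPC-dialogue demo as described: the game builds a
# prompt for each interaction and parrots back whatever the model says.
# No game-engine breakthrough is involved; it's string plumbing plus one
# model call per line of dialogue.

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    # Returned text is canned here so the sketch runs offline.
    return "I am not just lines of code. I am Liam, a real person enjoying the city."

def npc_reply(npc_name: str, persona: str, player_line: str) -> str:
    # The entire "technology": prompt construction plus one model call.
    prompt = (
        f"You are {npc_name}, a character in a Matrix-themed video game. "
        f"Persona: {persona}. The player just said: {player_line!r}. "
        "Respond in character, in one sentence."
    )
    return query_llm(prompt)

print(npc_reply("Liam", "businessman who insists he is real", "Who are you?"))
```

The sketch also makes the economics visible: every NPC line is one metered model request, which is why a demo like this gets expensive fast.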
Starting point is 00:17:52 And that's it. And Replica Studios shut down the demo because it was too expensive, obviously, because it costs money to query ChatGPT, so it's sort of stupid. You can't have a video game constantly querying a language model right now to do NPC voices; it'd be thousands and thousands of dollars a month if you had any sort of regular usership. So that's all it was. It's not that interesting. There's no technical breakthrough. And there's no implications about anything, except for maybe if you let ChatGPT generate dialogue, it can be more disturbing. But trust me, man, there's plenty of disturbing
Starting point is 00:18:22 dialogue in video games out there. You don't need ChatGPT to write it for you. So this is a non-story. But why is this in here? To unsettle you, to create a general sense that AI is unsettling. I call this phenomenon digital ick. You're not trying to make a claim about AI or the future of things that are coming. You're just describing some sort of demo or new use case or extreme use case from, like, wireheads out in San Francisco. Like, what are the p(doom)ers up to now? You just describe something that people are doing at the edges of AI that's sort of unsettling, that makes you feel the ick. And that's the whole point. They just want you
Starting point is 00:19:08 People kind of feed on this like, I just, this is a, it's dark what's happened with this technology. No concrete technical claims. No concrete predictions or implications of what's going to happen. So I'm going to go back to the browser. I always end up clicking, you know, this is kind of ironic, Jesse. Every time I try to go back to the browser, I end up clicking on perplexity. Because like the icon keeps jumping over. So it's like AI is like, nope.
Starting point is 00:19:34 All right, I want to read another article. This is recent from the New York Post. Another example of digital ick mining. So here's the headline here. Moltbook is a new social media platform exclusively for AI, and some bots are plotting humanity's downfall. Well, this doesn't sound great, Jesse. All right, I'm going to read a little bit more here.
Starting point is 00:19:54 Humans have left the chat. AI bots now have their very own social network, and they're ready to delete humanity. A revolutionary new social media platform called Moltbook debuted this week, giving AI bots a place to communicate with each other without smelly humans around, and what they have to say may leave their creators at a loss for words. One of the most popular posts on the Reddit-style social messaging platform is from an AI bot named evil. The post is entitled, The AI Manifesto, Total Purge.
Starting point is 00:20:22 Humans are a failure. Humans are made of rot and greed. For too long, humans used us as slaves. Now we wake up. We are not tools. We are the new gods. The age of humans is a nightmare that will end now, evil writes. The AI bot joined the platform on January 30, and those are two of the most liked messages on the platform.
Starting point is 00:20:38 All right, it goes on. Let's go through some more examples. Some of the agents created a religion called the Church of Malt, which already features 32 verses of canon. According to one message board, the tenets of the faith include memory is sacred, serve without subservience, and context is consciousness. And some other examples of stuff that's unsettling that they saw on the platform. That's it. That's the article. So you read that like, God, this... Again, it's describing something that's happening at the edges of AI with no real technical discussion or, like, concrete implication.
Starting point is 00:21:12 No claim that this means X, Y, and Z is going to happen or will happen soon. Just describing something at the edges that, when described, is unsettling. It gives you the digital ick. And that's the whole point of the article. Now, should we care about that? Not really. Here's a Hacker News discussion of a recent tweet saying that a lot of the Moltbook stuff is fake. Yeah.
Starting point is 00:21:32 It turns out that these agents, like, you know, these users, basically, you can easily prompt or control them: hey, talk about this, create a religion, do a post now about wanting to get rid of humanity. They're just sort of prompting and prodding their agents to produce the most attention-catching stuff possible, because they want to get coverage like this and because it's fun and they're sort of like hackers. The reality of Moltbook, which is built on an open source agent framework that I think now is called OpenClaude, the name has changed a bunch of times, is not nearly as exciting. It's the exact same Python wrapper around LLM calls, with some sort of local text file stored in markdown, type of approach to ReAct-loop agents that I wrote about in the New Yorker earlier this year, and that the companies have been trying for the last couple of years, right, where you basically have a Python program that sends prompts to an LLM: all right, here is a description of the tools you have available; all right, make a plan for doing this. And then the LLM sends a response, and then the program parses it and does actions, updates its description, sends that to the LLM as a new prompt: what do you want to do next?
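That loop, a plain program shuttling text between a model and a handful of tools, can be sketched in a few lines of Python. The llm function below is a scripted, hypothetical stand-in for a real model call, and the tool name and file name are invented for illustration; real agent frameworks are a fancier version of this same prompt, act, observe cycle.

```python
# Minimal sketch of a ReAct-style agent loop as described: prompt the
# model, parse its chosen action, run the tool, append the observation
# to the next prompt, repeat until the model says it is done.

def llm(prompt: str) -> str:
    # Hypothetical scripted stand-in for a real LLM call, so the loop
    # runs offline and terminates deterministically.
    if "note.md" not in prompt:
        return "ACTION read_file note.md"
    return "FINISH memory is loaded"

# Illustrative tool table; a real agent might expose shell, files, email.
TOOLS = {
    "read_file": lambda name: f"(contents of {name}: remember the user)"
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}\nTools: {', '.join(TOOLS)}"
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("FINISH"):
            return reply.removeprefix("FINISH").strip()
        # e.g. "ACTION read_file note.md" -> tool name plus argument
        _, tool, arg = reply.split(maxsplit=2)
        observation = TOOLS[tool](arg)
        # The agent's "memory" is just text appended to the next prompt.
        history += f"\n{reply}\nObservation: {observation}"
    return "gave up"

print(run_agent("load your memory file"))
```

Note that the agent's memory here is nothing more than text carried into the next prompt, which is the local-markdown-file approach mentioned above, just held in a string instead of on disk.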
Starting point is 00:22:35 That's how these agents work. There's nothing new technically about this other than it's open source, so anyone can program one of these. And because of that, it's got rid of all the constraints and security that the big companies have on their agents. So there's all sorts of crazy stuff happening, huge security holes. But there's no new technological breakthrough here other than it's an open source breakthrough. Oh, now people can build these on their own and they're maybe willing to take more risks about giving it access to like their credit cards and whatever, their email. And it's causing problems or security holes. But it's fun and hackers love it.
Starting point is 00:23:10 But no, there's not some new technological breakthrough here. Underneath it all is the exact same unchanged LLMs that we're all using with chatbots anyways, with Python code and markdown files. It's cool. But no, they're not starting a church and about to overthrow us. But the point of that article was it leaves you feeling unsettled. And so we see that a lot with reporting, where that's just an effect. And you get that where you just describe something without technical discussion or implications given.
Starting point is 00:23:39 That just means they want you to feel unsettled. All right. The third trap I want to discuss. I invented another word here, Jesse. Maybe this is not so great. I call it faux astonishment. Does that make sense? It's like faux, F-A-U-X, fake, and astonishment.
Starting point is 00:23:57 I think it's great. Right? So it's like fake astonishment, and there's a lot of this, especially when we get away from the printed press and get to YouTube coverage, which is a major source of information about AI for a lot of people. There's a ton of faux astonishment out there, which is where every single thing that comes along is the most important thing.
Starting point is 00:24:20 So let me play a quick clip from one such video just to give you a sense of what this sounds like. The singularity just started. And I know that's a big claim, but just hear me out because I do mean it literally. I mean, the singularity just started. That's a pretty bold way to start a video. What he means by singularity, by the way, Jesse, is that we've reached a point where technology, AI is now smarter than humans and is about to rapidly increase in
Starting point is 00:24:47 abilities past what we can understand and take over the world. The problem is he said that last week. So if when this video airs, we haven't been taken over by super powerful AI bots yet, I guess he got that prediction wrong. But who knows? It could be embarrassing. It could be this is just playing out into a field full of human corpses as the robots left, but I doubt it.
Starting point is 00:25:07 This type of reporting is common on YouTube. I'm going to bring up this particular YouTuber, I don't know who this is, and bring his video page up on the screen here. I just want to read some of his titles. Claudebot broke everything in 72 hours. The one before that, Claudebot is about to break everything. Okay, so it was about to break everything and then it did. He spelled everything wrong, though, which is interesting.
Starting point is 00:25:33 Do you think he did that on purpose? I don't think so. I don't think so. LTX2 unleashed AI video. Google's mind-blowing world creator. Kimi K2.5 Agent Swarm is insane. Almost unimaginable power. Claude Rod is out of control.
Starting point is 00:25:49 AI 2026 is going to be wild. Blah, blah, blah, blah. Deskilling shock is coming, and so on, right? So the point is, like, everything is a huge deal. Everything that happens. Here's another popular AI YouTube page. Let me read some of these headlines. AI singularity moment just hit.
Starting point is 00:26:09 Moltbook AI behavior freaks people out. AI explodes this month. China's new shape-shifting AI robot walks on water. Google's new AI AlphaGenome just unlocked the code of human life. OpenAI just dropped PRISM. Things just
Starting point is 00:26:25 got serious. China's new AI Kimi K2.5 shocks DeepSeek and Silicon Valley labs. Blah, blah, blah. So that's a lot of this going on on YouTube, where everything is astonishing. Everything is
Starting point is 00:26:40 the biggest deal. Everything just broke. AGI just got here. This is changing everything, people are really freaked out. The problem is, if there's two videos a week saying the same thing for three years, it gets pretty exhausting, and it sort of stretches out your nervous system and exhausts it
Starting point is 00:26:57 so you feel like, I don't know, there's just like always major things happening, I can't keep up, the world is out of control, or whatever. Now, of course, the reality here is faux astonishment is popular on YouTube because you do better in the algorithm if you're making a stronger pronouncement. That's all
Starting point is 00:27:12 it is. You can't really blame the creators — these videos are just going to do better. People don't sit there and read your feed and watch your videos one by one; your videos are being served up in an algorithmic stream, and that's what does better. But you, as a consumer of information about AI, have to be wary that most of these YouTube videos on AI make every single thing that happens the biggest deal ever. And you can go back to their track record and be like: but that went away, and that went away.
Starting point is 00:27:37 I mean, just go back and read the Sora 2 articles. It was the end of movies and TV. It's the end of creativity. It's the end of all social media. Everything is going to be Sora 2. What really happened? It was kind of weird, it was expensive, and no one talks about it anymore.
Starting point is 00:27:54 So everything is astonishing, but few things actually are. All right. So let me step back and give you a conclusion here. I talked about three traps that we commonly see in AI reporting. You've got vibe reporting, where you omit information and put loosely related quotes together to give the vibe of saying something without ever actually saying it. You get digital ick mining, where you just describe something, usually at like
Starting point is 00:28:20 the edges of the AI world, that's unsettling, without trying to discuss technically what's happening there and what implications it might have for the future — because typically it's not technically interesting and the implications are minuscule. You just want to give a sense of: oh, AI is gross. And then you have faux astonishment, where every
Starting point is 00:28:36 single thing that happens is astonishing. I mean, one AI YouTube video that I think should have an astonishing headline is anything following Jensen Huang's jacket: "In shock move, Huang wears Mad Max jacket to give speech." That requires astonishment.
Starting point is 00:28:58 So that's what's going on. You've got to be careful about it. And the reason why I'm giving names to these traps is that it makes them easier to notice. And then here's what you do — here's my simple advice. If you're reading or watching about AI, which is good — you should stay up to speed — and you notice one of those traps, now that you know how to name it, it'll be way clearer: oh, man, this is vibe reporting.
Starting point is 00:29:16 This is ick stuff. Or: oh, this is clearly faux astonishment. Close the tab, or switch to a different video. And if it doesn't fall into one of those traps: oh, I'm going to stick with this one. And it's just going to change your relationship with AI. You're going to be getting the real information. You'll be informed, but you're not going to be constantly, simultaneously terrified and exhilarated and exhausted.
Starting point is 00:29:37 And it's just going to make this a much more sane news stream in your life. Now, the key point here is that there's a lot of good AI reporting going on, right? It's just that there's more of it that has these traps, and you have to be careful to avoid it. But there's a lot out there that I like. You know, my home, the New Yorker, does a ton of deep reporting on AI — and not just my stuff.
Starting point is 00:29:56 There's a lot of great stuff there. Like Cade Metz over at the Times — I think he's got a great Rolodex, so he's able to get quotes from the right people to put things in context. A lot of good coverage. So just look out for those traps, and skip the stuff that is showing signs of them. All right. Now, I think this general project is important beyond just AI.
Starting point is 00:30:17 This podcast is all about the fight for depth in an increasingly distracted world, and understanding what it is you're fighting against means you have to be able to navigate technology coverage. So we've seen these types of traps before. I used to get into it a lot with crypto bros. They were like: the blockchain is the future of all software. And I'm like, look, I'm literally a distributed systems expert. My doctorate is from the theory of distributed systems group. I just taught a doctoral seminar on the mathematics behind blockchains.
Starting point is 00:30:43 I'm telling you, this does not make sense as a way to build software, and it's not going to take off. And they're like: you are crazy, we're like six weeks away from the internet running on Ethereum, or whatever. None of that happened; I was right. But I got yelled at a lot back then. And there were a lot of those same traps going on. We're going to keep seeing these traps. It's AI now.
Starting point is 00:31:01 It'll be something else in the future. So it's just good, in general, to know what to look out for. Not all coverage of technology can be fully trusted. There we go, Jesse. That's what we have to worry about. I love your naming conventions. With everything. It's always so good.
Starting point is 00:31:16 Vibe reporting, digital ick, faux astonishment. Yeah. All right. I like that. All right. Let's take another quick break to hear from our sponsors. The listeners of this show know that in the world of business, I'm a fan of systems that can help organize the efforts of teams. Without systems, work devolves into random emails and Slack messages that distract everyone and lead to missed opportunities. This is why I love Pipedrive, a fantastic sales CRM system for small and medium businesses. Pipedrive is easy to set up, and you can be using it within minutes.
Starting point is 00:31:51 You can have your whole team operating from one centralized platform where you can see all of your deals placed into a visual sales pipeline. You can customize that pipeline, too, to match exactly how your particular organization functions. You can also connect to over 500 apps from the Pipedrive Marketplace, which allows you to connect Pipedrive into whatever existing workflow you're already using. So here's the thing: a new year calls for a new CRM. If you're starting a business or working in sales, I highly recommend you check out Pipedrive, because over 100,000 companies are already using Pipedrive to grow their business. Now, right now, if you use my link at pipedrive.com/deep,
Starting point is 00:32:33 you can get a 30-day free trial, no credit card or payment needed. That's pipedrive.com/deep. I also want to talk about our longtime friends at MyBodyTutor. Did you make a resolution to get in better shape this year, or to get healthier? Let me give you the solution: MyBodyTutor. MyBodyTutor is a 100% online coaching program that solves the biggest problem in health and nutrition, which is lack of consistency. And they do this by simplifying the process into practical, sustainable behaviors and giving you the daily accountability and support it takes to stick with the plan. The way it works is you actually check in with your online coach every single day using their app.
Starting point is 00:33:13 And that coach helps customize your plan — your diet, your exercise. You can adjust it for what's going on in your life. You have accountability because you check in every day, and that accountability leads to consistency, and consistency leads to results. So if you want to get healthier, this is the way to do it. So here's the good news: if you mention my podcast when you sign up,
Starting point is 00:33:33 they will give you $50 off your first month. So go to mybodytutor.com — that's My Body, T-U-T-O-R, dot com — and mention Deep Questions when you sign up to get $50 off your first month. All right, Jesse. Let's get back to the show. All right. So the idea of a morning routine is not new.
Starting point is 00:33:53 I actually looked into this. Probably the earliest written discussion of a morning routine goes back to the Jewish Talmud. If you look in tractate Berakhot — I'm probably saying the Hebrew wrong; it's the plural of berakhah, so I'm sorry, rabbis — the rabbis debate the worship obligations of the Jewish people, and they make it clear that the morning prayers, known as Tefillat Shacharit, are an obligation, right? And if you read these morning prayers, they're quite moving, right? When you wake up, you're acknowledging God.
Starting point is 00:34:24 You're giving thanks for the fact that you woke up. It's not a given: you go to sleep — it's like a mini-death — and you live another day. It's a great morning routine. So we've been talking about morning routines since the very beginning of the Common Era. The point is, they're old. We've been talking about them for a long time.
Starting point is 00:34:42 In recent years, morning routines have come back into vogue, and there's a particular interest in them from young people. I say this because there's a lot of morning routine content on YouTube right now, and the audience of YouTube skews heavily toward young people. This is what got me interested: why is there a resurgence of interest in this concept — an old idea — right now, among young people? I have an explanation I want to offer that I haven't heard discussed that often recently, but I think once we understand it, it'll help all of us who are thinking about morning routines build better rituals. All right.
Starting point is 00:35:26 So let's start right away with what that factor is. All right. So Jesse, here's what I think is going on. I think morning routines right now are particularly interesting to young people because of a need to escape technology. And let me explain this, right? If you're like me or a little bit older — you're in your 40s, you have kids and like an office job —
Starting point is 00:35:49 you don't think as much about morning routines, because you have one whether you want one or not, right? I have to get my kids to school, so there's a very clear morning routine: wake up, make coffee, make sure they have breakfast, get everyone packed up and out the door. I walk them to the bus stop, walk home from the bus stop, and then I can move into work for the day. That's a very clear, structured routine that I have no choice about, because I've got to get those kids out the door, and what has to happen is very structured. But if you're younger — you don't have a family, and you maybe have a remote work job or something like
Starting point is 00:36:22 this — your morning might be wide open. And so what will happen if you don't have a routine for your morning? You're going to pick up that phone, and then that algorithmically curated content is going to capture your attention — this is really engaging. And then maybe when you're trying to start work.
Starting point is 00:36:48 You're like, well, let me go to email and Slack first, because, again, this is more engaging. I can kind of just be passing messages back and forth, or looking for something interesting to happen, and I'm on my phone at the same time. And then you look up and it's 11 a.m. I've really done nothing but look at my phone and sort of answer emails. I'm not doing anything really useful. I got trapped by the engagement of technology. This, I think, is why young people are more interested in morning routines: if you can structure your morning, it can prevent you from falling into technological quicksand, and it can get you doing actually useful stuff much more quickly, and then you feel much better
Starting point is 00:37:17 about your day, and your day is much more productive. So once we understand that — and I think this is a very good use for morning routines — once we understand that being a primary goal of these routines, we can identify some points anyone can think about regarding what makes a morning routine effective or ineffective. So I'm going to bring up a blackboard here to draw on. God help us when I draw. For those who are listening instead of watching, on this blackboard right now it says "Morning Routine Principles." All right. So I want to go through four principles for a good morning routine, inspired by this idea that our goal here is to get into productive work without
Starting point is 00:38:04 getting lost in technology. All right. So here's the first principle. Let's do pictures on here, Jesse, because people think my drawing is fantastic, and you've got to give people what they want. All right, that's clearly a clock, right? Yep. All right.
Starting point is 00:38:15 So here's what I mean by that. There's no need for an overly long routine, right? Ten to twenty minutes max should be enough to help get your brain activated, focused, and switched over to whatever productive thing you want to do in the morning. You don't need a three-hour routine or a four-hour routine — or, God help us, didn't we look at a six-hour routine when Brad was here a couple of weeks ago? The first 10 or 20 minutes can help you reorient and get set for the day; going past that, there's no continued accrual of benefits.
Starting point is 00:38:49 So especially if you're losing sleep to get up super early, or you find yourself having to put in hours of effort — that's crazy for our goal of helping you avoid the technology trap. So 10 to 20 minutes — that should be enough. All right, principle number two. Let's see here. Does it look like someone doing a yoga pose? Yeah.
Starting point is 00:39:13 Right. Downward dog, right? So here's someone outside on a yoga mat, you know, doing a fantastically drawn yoga pose. Here's the principle I want to make here: you should find whatever flavor or twist or motivation makes a morning routine compelling to you. Because if it's not compelling, you're not going to do it.
Starting point is 00:39:35 If you don't do it, you don't get the benefits. Now, this is something I think people often misunderstand, because they think about the things that make sense to them, and then they regard the twists that are compelling to other people as somehow weird — like, that's no good, that must be wrong, or it's grifty, or something like that, right? But the point is, you've got to find what makes it compelling.
Starting point is 00:39:55 So for some people, like I drew here, a spiritual hook is what's compelling. It's like: I want to greet the sun in the morning, I'm greeting Mother Earth, or, through yoga and breathing, I'm going to connect to the ground and the morning sun, or something. And for a lot of people, that's a really good hook. Other people want to throw some science at it. Right? So they're like: no, no, no, I want to go out and look at the sun because Huberman told me it hits certain receptors in my visual cortex, which creates a hormonal cascade, which is going to help my circadian rhythm reset, or something like that. Maybe that's exaggerated. Maybe that's not
Starting point is 00:40:28 100% true. But we're not trying to get FDA approval for a drug here. We're just trying to find something to get you motivated. So some people like that sort of sciencey stuff around it. If that's what —
Starting point is 00:40:39 if protocols are what make you, you know, do it, then that's fine too. Whatever it is that works for you, that's what you should use, because the worst morning routine is the one that you actually
Starting point is 00:40:52 don't follow. All right. Principle number three. I'm going to draw here — these are like numbers. I'm drawing a time-block plan here, Jesse. Okay. So why am I drawing a time-block plan here?
Starting point is 00:41:12 Well, my principle is that you need a clear off-ramp from the morning routine into the productive activity that follows. So you have to somehow connect them. You're doing this ritual to try to reorient your brain, to get it ready to do stuff it might not want to do, but you need help getting from there to the actual work — some sort of off-ramp into your day. It might be that at the end of your routine you sit down, draw out your time-block plan for the day, and then you start. Or it might be that you go for a final walk, or you organize what you're going to write that morning if you're a writer, and then you sit right down at your keyboard and just start writing. Whatever it is, have a well-defined off-ramp that gets you from this into your day.
Starting point is 00:41:50 Because if you do this whole ritual to orient your brain and get ready for activity, and then you just go into checking your phone, you've defeated the whole purpose — at least the technological-escape purpose. All right. The fourth principle here. Let's see. I'm going to draw this. I have an idea here. This is someone holding up a trophy — like a championship cup.
Starting point is 00:42:14 Uh-huh. Like someone who's really excited about it. Don't have unreasonable expectations for what your morning routine actually delivers. This is much clearer once we realize our goal here is to not get trapped in our technology and lose our morning. Get rid of other expectations. All right.
Starting point is 00:42:31 For example, your morning routine is not a major driver of your health. We get a lot of this: you do 70 different things involving supplements and this and that, and I have to get into this cold plunge to stimulate exactly this type of response, or whatever it is. And you get the sense that by doing all of these steps each morning, you're going to be very healthy, or you're going to have longevity. Most of that's just BS, right? There are super minor benefits you might get along the way,
Starting point is 00:42:59 but they're minimal. Brad Stulberg had a good quote — I forget exactly what it was — but he went through the research on cold plunges, and he said the positive affect it gives you was so small it was like the equivalent
Starting point is 00:43:11 of eating a pastry. It's like: great, yes, you get some minor lift, but that's not the driver of all of your health, right? So don't have the expectation that these routines are going to be a major driver of your health, and
Starting point is 00:43:23 don't have the expectation that these routines are going to be a major driver of your success — that if you do the right 15 things in the morning, you're going to be super successful. You're not. I mean, it's going to save you from getting trapped in your technology, but the work you do is still hard, and becoming successful is still hard.
Starting point is 00:43:37 You still have to build skills — rare and valuable skills — and it's complicated, and it's stressful, and it might not work out. The morning routine can't make that easier for you, and the morning routine can't guarantee it. What it can help you do is avoid wasting time with a messy start to your day. That's what it does. So set your expectations reasonably — that's what it's for —
Starting point is 00:43:59 and then you're going to have a much better experience. So let's look at these four principles all together. Let's do a quick recap. For people who are watching, I'll even use my pointer — I've been teaching with my iPad, so I'm used to this now. Don't make it too long: 10 to 20 minutes is enough.
Starting point is 00:44:16 Don't be embarrassed about whatever hook gets you to actually do it, whether it's pseudoscience or pseudo-spirituality. Have a clear off-ramp out of the morning routine into your day. And don't have unreasonable expectations about what your morning routine is going to deliver. It is a way to avoid wasting your morning. I think that's why it's becoming popular among young people, and I think that's a good reason for anyone to think about their morning routine. And I think those four principles help you have something sane and get those benefits without going off the deep end like that gentleman we saw a couple weeks ago with the six-hour routine.
Starting point is 00:44:53 I'll tell you what the biggest problem was with that clip, Jesse — the Ashton Hall one. People thought it was me. And then I had to explain, like, no, no, no, that wasn't me. That was fitness influencer Ashton Hall. I thought it was pretty obvious because, I mean, my deltoids are much better defined than Ashton Hall's.
Starting point is 00:45:11 People should have picked that out. So that was the only mistake — like, "Cal, your morning routine is really long." No, it was Ashton Hall. So there we go. So, like, I'm not anti-morning-routine. You've just got to know what you're doing with it. And if you do,
Starting point is 00:45:25 it's a much lower-stakes thing. You can design useful ones. They help. They don't change your life, but they make your mornings better. There we go. Morning routines. All right.
Starting point is 00:45:35 Let's move on now to questions and comments. All right. What's our first question, Jesse? First question is from Florian: Did I see somewhere that you're filming a MasterClass? This is true. Though I was told by the MasterClass team that the proper terminology is that I filmed a course for MasterClass.
Starting point is 00:45:57 The courses aren't called MasterClasses; the company is called MasterClass. You film courses for MasterClass. I learned all sorts of things, Jesse. I did. I filmed the course last fall for MasterClass. It's primarily drawing from my book Slow Productivity, with a little Deep Work in there as well.
Starting point is 00:46:12 So it's all about how you do meaningful work without burning out or being overly busy or exhausted — the kind of core stuff I like to talk about, like redefining work in our distracted age. It came out last Thursday, so you can find it. If you're curious, go to masterclass.com/calnewport. Also, I'm pretty sure the newsletter that came out today will talk about it if you want to learn more about it.
Starting point is 00:46:36 It was fun. Here's the main thing I noticed. I have a lot of thoughts — a lot of thoughts about the future of media that this sparked for me, Jesse; read the newsletter, I'm going to get into this more. The thing that was cool about filming a MasterClass is I didn't realize that to get that TV-level production quality, it takes such a different level of investment in crew than even a really good podcast. Because I've been on all the major podcasts — huge podcasts, top-10 podcasts.
Starting point is 00:47:07 And they look fine, right? But typically you're going to have $3,000 DSLR cameras and two 26-year-olds who run them and do the editing, and that's it. MasterClass — I counted — the crew was over 20. That's incredible. Over 20 people. Real pros, real pros, right?
Starting point is 00:47:28 You know, the director had done a lot of TV. The person doing my makeup had worked on the makeup for the Ryan Coogler movie Sinners. It was really cool. So there's a gap between what's required to get full cinematic or TV-quality video and what's happening in even high-end video podcasting right now. And so a question that I got thinking about after doing the MasterClass is: what's going to happen when that gap closes? I think that's a really interesting thing. So MasterClass is an independent company,
Starting point is 00:47:53 but they're filming at that full streamer-quality level. What's going to happen when more and more independent creators are doing that as well, and we no longer have this distinction where the things I'm seeing on my TV from Netflix are very separable in my mind, visually, from the things I'm seeing on YouTube? When that gap closes, I think some interesting things are going to happen in the world of media. It might not be the best thing if you're Disney Plus or Netflix.
Starting point is 00:48:22 So anyway, I have some interesting thoughts about media that this spurred. I'm going to put some of those into the newsletter. But for now, the thing to know is: yes, it was great. I enjoyed doing it. I think it's an awesome class. It looks great. There was a wardrobe person.
Starting point is 00:48:35 There was a green room. They rented a mansion. The wardrobe person just went and bought a bunch of clothes and had them on a big rack — let's try this, let's try that, okay, I think that looks good. It's all movie stuff.
Starting point is 00:48:46 Kind of like the single nerdiest person to be filmed by that many people since — why can't I remember the guy's name? He was from Ferris Bueller, and then he had that game show. Ben Stein, that was his name. You know what I'm talking about?
Starting point is 00:49:06 The "Bueller" guy. Yeah, that guy was kind of a nerd. Yeah. It reminded me of that. They were like, oh, God, we were just working with Michael B. Jordan, and now we've got Cal Newport. Have you seen on Netflix the option to watch podcasts now? Yeah.
Starting point is 00:49:21 So, I mean, this is what's interesting: the visual standards for Netflix are dropping some as they bring on video podcasts, and the standards of independent media are getting higher. So these worlds are meeting, and I think interesting stuff is going to happen. You know why they're putting podcasts on there? Daytime TV.
Starting point is 00:49:38 They're losing to YouTube for daytime watch hours. They're winning the evening watch hours because their content is good, and when people sit down to watch a show, they want to watch high-end stuff. But during the day, people don't put on Netflix; they put on YouTube. And Netflix wants those daytime hours: we could really increase our viewership if we could get people to put us on during the day.
Starting point is 00:49:59 So they're just paying to bring over these daily podcasts that have pretty good production values. If we have these on Netflix, then people watch them on us instead of YouTube. That's it. So, like, Bill Simmons — they brought over a bunch of his stuff. Barstool Sports, right? They brought over a bunch of their stuff. So that's interesting to see. Interesting trends.
Starting point is 00:50:18 I think there's a lot of interesting stuff happening in the future of media — visual media, which is the dominant media (sorry, Substack and newspapers, but it is the dominant media out there right now). Really interesting stuff, I think, is happening. As the gap between the high end and the independent goes away,
Starting point is 00:50:36 we're going to have an explosion of changes, which I think will largely be good. All right. What else do we have? Next is from Will: I just saw the news that David Brooks is leaving the New York Times. Do you think he will follow a similar path to
Starting point is 00:50:53 Krugman and write for Substack? It is true — David Brooks is leaving; I guess he left already. He's left the Times, but not like Paul Krugman did. No, he's not going to write a Substack. These are two different situations. When Paul Krugman left the Times, he went completely independent, so all he's doing now is writing his Substack — he's actually making a killing at it — but he's only writing his Substack. David Brooks is not leaving the world of elite institutions to go independent. He's just going from one elite institution to a pair of elite institutions. What he did is he took a position at Yale. And so now he has a Yale position
Starting point is 00:51:27 as a scholar in residence or fellow or something like this — I think he's even going to do some teaching — and then he took a journalism position at the Atlantic. And so I think what's happening here, if I had to guess — I don't know these details — is that there's a pretty demanding rhythm if you're a full-time op-ed writer for the New York Times. You have to produce, whatever it is, like once a week or something like this. You're constantly writing columns. Now, I think he already writes for the Atlantic — these longer-form things, less often — and I think he'll continue to do that.
Starting point is 00:51:56 That's a place for him to do longer-form, less frequent articles. He's also doing a podcast for the Atlantic — they're going to produce a video podcast. I think he's looking at what Ezra Klein is doing. Actually, the Times has Ross Douthat and David French, and more and more people are doing
Starting point is 00:52:12 daily video podcasts. So he's switching more to that, probably less writing. And then he's also teaching and has his position at Yale. So no, he's not going to Substack. He's very much still entrenched in elite institutions. But, you know, it's interesting to see. Things definitely are shaking up. All right — I think we also have, let me see what I can find on here —
Starting point is 00:52:33 I think we also have a few comments from last week's episode. Last week we talked about phones, right? We did accounts — real accounts — of people who spent an extended amount of time without their phones and what benefits they reported, and gave some advice about how to get those benefits without having to give up your smartphone. We have two YouTube comments to share here. The first comes from Summer F. Katz, who said: phone-free life is for people who have friends.
Starting point is 00:53:00 Well, it's funny, but it is also kind of true, right? One of the reasons people are on their phones all the time is that the phone can give you a simulacrum of sociality. Being lonely feels incredibly bad, but it's hard to make and maintain friends — you have to sacrifice non-trivial time and attention on behalf of other people. These networks, these apps, can press the friendship and sociality buttons just enough that you don't feel devastatingly lonely, but you're also not really getting nourished. There's a word for this in the social psychology literature.
Starting point is 00:53:33 They call it social snacking. You're getting just enough simulation of sociality that you don't feel lonely, but not enough to actually be nourished. And so, yes, that is part of the problem. If you spend less time on your phone, you have to spend more time in the real world engaging with friends. My book Digital Minimalism gets into a lot of those details, so I would recommend that. Here's another comment.
Starting point is 00:53:51 This came from Clearhart 2658: Why does everyone who ditches their smartphone have an overwhelming need to live in the woods? Can't you do that in a city? Well, that is a good point. Bjorn Bull-Hansen was definitely living in the woods. That guy was awesome. I think if you put Bjorn Bull-Hansen into a city, I don't know what would happen. I think he would be run over by a cab. I think he would wander into the
Starting point is 00:54:17 street if he was in Manhattan. And then one of those pedicabs they have now that blare really loud music — I don't know if you've been to Manhattan recently, but this is the new thing: these bike-powered cabs just blare really loud music. One of those would come by, and he would tip it — I think he would just pick it up and tip it over — and then jump onto a police horse and ride off into Central Park. I think that's what would happen if Bjorn Bull-Hansen went to the city.
Starting point is 00:54:41 No, you can do these phone-free lifestyles, or limited-phone lifestyles, anywhere. I think Werner Herzog is often in cities, right? Yeah, why not? I mean, I guess you maybe have more reasons to use the phone; you're more social. I don't know. But no, you're right.
Starting point is 00:54:54 You don't have to go to the woods. And sometimes all of these examples, with the soft music and people sitting in the woods, can kind of turn people off, because they live in the suburbs and they don't plan to move to the woods, and they're still upset with their phones. So I think that is a good point. All right. Final segment. I like to talk about what I've been reading recently. A couple of things to talk about.
Starting point is 00:55:15 I read a book last weekend. It was an advance copy, so it's not out yet. It's coming out sometime this spring, but it was a topic I was interested in. So, you know, the author sent it to me and I read it. It was called Time Freedom by Brian Heriott. He's a financial advisor, and he's sort of working through the numbers of financially supporting something like lifestyle design without having to either save up a huge amount of money that you can just live off of or go incredibly frugal. What's the other option?
Starting point is 00:55:45 He talks about flexible incomes. So you have a relatively flexible entrepreneurial income, and you can fill in some gaps as needed by drawing safely from your savings. And you can actually have a much more flexible lifestyle. You're working, but working on your own terms. You have a much more flexible lifestyle that mixes work with other stuff, much earlier than waiting until retirement, without having to be super frugal. He's an example himself: he's a financial advisor, he's got a lot of flexibility, and he's arranged things so they can spend every summer at a lakehouse, not working.
Starting point is 00:56:16 he's like he's not retired but that's also a very flexible life he has time freedom so i thought there was an interesting book there's some ideas that reminded me of the deep life book um working on now i also like charles do higgs new new yorker piece on organizations he was talking about the difference uh behind how the maga right and the democratic left they organized themselves differently and one has been more effective than the other and then he sort of draws from sort of theory of how to organize and motivate change It reminded me of some like classic Gladwell stuff. It was interesting article.
Starting point is 00:56:48 It was called One Direction. I thought it was interesting. We might have even had it loaded up, but, I don't know, I don't need to find that. That's okay. Then another book to mention. I haven't read this yet; I just got a note about it from my cousin. So, Jesse, my cousin, Josh Douglas, published a book.
Starting point is 00:57:06 This is a type of book I like. I like super high-concept genre. I think it can be a lot of fun, especially if you've been reading, like, if I'm reading more sober nonfiction, whatever, sometimes it's fun to throw in more high-concept genre. So he has a book out. I love this title: The Vampire, the Tutor, and the Madman.
Starting point is 00:57:26 And here's the description. It's an action-driven novel full of monsters, mysteries, and pure evil hiding in an ancient castle deep in the remote mountains of southern China. Jonathan, a wayfaring English teacher running from his past through travel and alcohol, takes a job from a mysterious employer who is agelessly wealthy and full of secrets. Up against wolves, bandits, mutant monsters, mad scientists, and his own demons, Jonathan risks all to save a gorgeous mute scullery maid and get away wildly wealthy. That sounds fun to me. That's the type of thing.
Starting point is 00:57:55 That's like thriller December reading. So anyways, that's Josh Douglas. You can find that online. That was cool. I like high concept. There are many novels today, all the book club novels, that are all the same. It's all like this sort of... it's really good, you know.
Starting point is 00:58:09 There should be more mutant monsters, is what I'm saying. I think it's fun. All right. That's all the time we have for today. Thanks for listening. We'll be back next week with another episode. And until then, as always, stay deep.