Deep Questions with Cal Newport - Ep. 293: Can A.I. Empty My Inbox?

Episode Date: March 25, 2024

Imagine a world in which AI could handle your email inbox on your behalf. No more checking for new messages every five minutes. No more worries that people need you. No more exhausting cognitive context shifts. In this episode, Cal explores how close cutting-edge AI models are to achieving this goal, including using ChatGPT to help him answer some real email. He then dives into his latest article for The New Yorker, which explains the key technical obstacle to fully automated email and how it might be solved. This is followed by reader questions and a look at something interesting.

Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo

Video from today's episode: youtube.com/calnewportmedia

Deep Dive: Can A.I. Empty My Inbox? [4:33]
- Should I continue to study programming if AI will eventually replace software jobs? [44:40]
- Is it bad to use ChatGPT to assist with your writing? [49:22]
- How do I reclaim my workspace for Deep Work? [55:24]
- How do I decide what to do on my scheduled mini-breaks at work? [1:00:11]
- CALL: Heidegger's view on technology [1:02:48]
- CALL: Seasonality with a partner and kids [1:09:11]

CASE STUDY: A Silicon Valley Chief of Staff balancing work and ego [1:20:07]

Something Interesting: General Grant's Slow Productivity [1:30:08]

Links:
Buy Cal's latest book, "Slow Productivity," at calnewport.com/slow
newyorker.com/science/annals-of-artificial-intelligence/can-an-ai-make-plans

Thanks to our Sponsors:
listening.com/deep
rhone.com/cal
drinklmnt.com/deep
shopify.com/deep

Thanks to Jesse Miller for production, Jay Kerstens for the intro music, Kieron Rees for slow productivity music, and Mark Miles for mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Transcript
Starting point is 00:00:10 I'm Cal Newport, and this is Deep Questions, the show about cultivating a deep life in a high-tech world. So I'm here in my Deep Work HQ, joined as always by my producer, Jesse. Though this is coming out two weeks after our event at Politics and Prose, we're recording this just a few days after that event. Jesse was there. We had a good time, wouldn't you say? Great showing.
Starting point is 00:00:38 Yeah, you guys turned out. We had a big line. Folks from Canada. Yeah, someone flew in from Canada and brought me a first edition Michael Crichton book, too. Yeah, he was cool. Yeah. It was a lot of fun to see everyone.
Starting point is 00:00:50 We haven't really done a Cal Newport-only event since before the pandemic, really. So it's cool. So that was fun. Everyone showed up. Quick update on my book, Slow Productivity. One thing I wanted to note is the audio is doing particularly well. And this, too, I'm going to sort of give my nod to you, my podcast audience. I read the audiobook myself, and I think my podcast
Starting point is 00:01:14 audience wants to hear it. So we're having 50-50 sales hardcover and audio, which is, you know, unusual. Usually you have more hardcover than audio. So I think it's great. We've been, the audio version has been on the Amazon charts, 20 best-selling nonfiction books of the week for the last two weeks in a row. It's actually moving up in that charts in the second week. There's a 15th best-selling non-fiction book on Amazon last week. So I think people are listening, and I appreciate that. It's because when you talked about your voice that you got to speaking on the podcast, I thought I wanted to check it out. This is my audiobook voice.
Starting point is 00:01:47 I have to be measured, not too dynamic, because the range must be clear. I was explaining this to an engineer the other day that when you podcast, you're very dynamic, and then you put a ton of compression post hoc to, like, keep that dynamic range from going all over the place in people's ears. When you do, and you're talking into like a dynamic mic and you can really like rock and roll, when you do an audiobook, you have to do the compression yourself as the human. It's like you have to keep your sort of range in tone because you're talking to a very expensive condenser mic that picks up every detail. So you have to do the compression yourself.
Starting point is 00:02:22 So everything has to be very even, and you can't have the pauses be too long. So thanks for that. If you have not bought Slow Productivity, now is the time to do it. If you like this show, you're going to love this book. And we reference the book all the time; you need to know what we're talking about. It's the book form of a lot of what we've been talking about. So there's my call to action.
Starting point is 00:02:40 Dear podcast listener, whatever format you want to buy it in, please buy Slow Productivity if you have not already. Now, once again, we're going to have a topic to talk about today that's not Slow Productivity related, because there's only so much I can talk about that, though we do have the Slow Productivity Corner showing up in the Q&A, and then we'll have something interesting at the end of the show, something interesting that someone sent me or I saw on the internet during the week. Remember, if you have suggestions for topics, you can email jesse@calnewport.com. The three big categories we cover in terms of exploring the deep life in a high-tech world are digital knowledge work and how to thrive in it, the promise and perils of new technologies, and analog alternatives to a distracted life. So any topic ideas you have, feel free to send those to jesse@calnewport.com. We keep an eye on it. Other housekeeping note: there are some visuals in today's deep dive. If you're a listener and want to see video, this is episode
Starting point is 00:03:38 293, go to the deeplife.com slash listen, go to episode 293. Usually within a day or so of the episodes dropping, we put a link to a full video version of the podcast right up there. All right, Jesse, have we hit everything? Yeah, we've been getting a lot of topics. Oh, excellent. Folks have been emailing. Yes, mainly they want what, like my French accent, Washington National's baseball.
Starting point is 00:04:02 Monotone voice. Monotone voice, Washington National's analytics analysis and a lot of discussion of Brandon Sanderson in the name of the wind. Would those be like the top three? Yeah. Yeah, we're probably hearing.
Starting point is 00:04:15 Yeah, interesting. And some jaco, like, you know, remarks. Yeah, my jaco voice. Yeah. Get hard. Get after it. Clean that inbox. All right, enough nonsense.
Starting point is 00:04:29 Let's get started with today's deep dive. So Andrew Morantz, recently had a great article in the New Yorker. It was titled OK Doomer in the magazine and among the AI doomsayers online. In this article, Morant spends a lot of time with AI safetyist, or as they sometimes call themselves, de-accelerationist. It doesn't roll off the tongue. There's a group of people who live all in the same area in the Bay Area and worry a lot about the possibility of AI destroying humanity. They even have a shorthand for measuring their concern, P. Doom, probability of doom.
Starting point is 00:05:08 So they ask each other, what's your p(doom)? And if you say 0.9, for example, that means I'm 90% sure that AI is going to destroy humanity. All right, so how do we connect this? Why did this get me thinking about our discussions here about finding depth in a high-tech world? Well, I couldn't help thinking, as I read about these AI safetyists and their concerns about p(doom), that they weren't really getting at what might be the much more proximate and important issue for most people when it comes to AI, at least for the tens of millions of knowledge workers who spend their entire day jumping
Starting point is 00:05:41 in and out of email inboxes that feel faster than they can keep up with them. For this large group of people, a big part of our audience, perhaps the even more pressing question about AI is the following. When will it be able to empty my email inbox on my behalf? when will email AI make the need to check an inbox anachronistic? Like trying to put new paper into the fax machine or waiting for the telegraph operator to get there. When will AI give me a world to work where I'm not context shifting constantly back and forth between 50 different things but can just work on one thing at a time? In other words, forget P Doom.
Starting point is 00:06:18 What is our current p(inbox zero)? Now, I recently wrote my own article for The New Yorker that goes deep into the current limitations of language-model-based AI and what the future may hold. I had this problem of AI and email firmly in mind when I wrote that. So here's what I want to do today. I want to dive into this topic. There are three things I'm going to cover in a row here. Number one, let's take a quick look at this promised world in which AI could perhaps
Starting point is 00:06:44 tame the hyperactive hive mind. I think this is potentially more transformative than people realize right now. Two, I want to look closer at the state of AI and its ability to tame things like email right now. I actually use chat TPT to help answer some of my emails and we'll talk about those examples in part two of this deep dive. And then part three, what is the technical challenges currently holding us back from a full email managing AI? This is where we'll get into my latest New Yorker article. What's holding us back? Can we overcome those obstacles? Who's working on overcoming those obstacles? All right. So look, on this show, one of the topics we talk about is
Starting point is 00:07:26 taming digital knowledge work. Another topic we talk about is the promise and perils of new technology. Today, in the steep dive, we're putting those two together. We have this topic. When can AI clean my inbox? This should be relevant to both. Let's get started. Part one, when we talk about AI's impact on the workplace, especially knowledge work,
Starting point is 00:07:48 there tends to be three examples of general examples of how AI is going to help the office that tend to come up. Number one is the full automation of jobs. Right? So we hear about, for example, vertical AI tools that's going to take over a customer service role. So this means that job goes away. The AI does the whole thing. The second type of thing we hear about AI in the workplace is the AI speeding up the steps of tasks that you already currently do in your job. Hey, summarize this note for me right away, so I don't have to read the whole thing. Write a first draft of this memo. It's going to save me time actually typing. gather me examples to use in this pitch.
Starting point is 00:08:33 Create a slide that looks like this and use these graphics. So it's you're doing the tasks, but the AI speeds up elements of the task. The third area I see discussed a lot with AI's impact on the workplace is brainstorming or generating ideas. This is really big, I think, right now, because we're mainly interacting with these tools through a chat interface. Hey, give me three ideas for this. Do you think this is a good idea? what's something I could write about here. So there's this sort of back-and-forth dialogue people are having with chatbots,
Starting point is 00:09:03 in particular to help come up with ideas or brainstorm. As we know on the show, however, none of those three things are really getting at what I think is the core issue that I think affects every knowledge worker, the issue that is driving the current burnout crisis, an issue that is holding down productivity in the knowledge workspace. I mean that in the macroeconomic sense more than anything else. and that's the hyperactive hive mind. We talk about this all the time, but we have set up a way of collaboration that's almost ubiquitous within knowledge work
Starting point is 00:09:35 where we have unscheduled back and forth messages to work everything out, email and also in other tools like Slack. The problem with this is that we have to constantly tend to these back and forth messages. If I have seven things I'm trying to figure out, and each of those things has seven or eight messages that I have to bounce back and forth with someone else today to get to an answer,
Starting point is 00:09:54 That's a huge number of messages that I have to see and respond to throughout the day, which means I have to constantly check my inboxes so I can see a message when it comes in and reply to it right away. Every time I check this inbox, I see a whole pile of different messages, most of which are emotionally salient because they are coming from people I know who need things from me, so we take them very seriously. And the cognitive context represented by each message is diverse. So now I have to jump my attention target from one thing to another thing to another thing within my inbox, back to my work, back to the inbox, between different messages in the inbox. This is a cognitive disaster. It is hard for our brain to change its focus of attention. It needs time to do that.
Starting point is 00:10:36 So this forcing ourselves to constantly jump around and keep up with all this incoming, each of which is dealing with different issues and different contexts and information. It exhausts us. It leads to burnout. It makes work a trial. and it significantly reduces the quality of what we produce and the speed at which we produce it. This hyperactive hive mind workflow is a huge problem. In my book's slow productivity, I get into how we got here. The first chapter of the book goes deep on it, but it is a big problem.
Starting point is 00:11:06 This is where I want to see AI make a difference. Imagine if what AI could do is handle that communication for you. Handle it like a chief of staff. handle it like Leo McGarry in the Aaron Sorkin television show The West Wing, the chief of staff from Martin Sheen's president Bartlett. Someone, an agent that could sit there, see the incoming messages and process them for you, many of which they might be able to handle directly. Filter it, give a quick response. You never have to see it. You never have to shift into that context.
Starting point is 00:11:40 And for the things that it can't directly manage for you, it can just wait until you're next ready to check in after you finish what you're working on. And your AI chief of staff could in this daydream ask you questions to direct it what to do. Hey, we got something about a meeting. Should we try to schedule this? And you're like, yeah, but put it on a Tuesday or Wednesday. Don't do it too late. And it's like, great, I'll handle this for you. Or here are three projects which we got updates on today.
Starting point is 00:12:06 Do you want to hear a summary of any of these updates? And you would say, yeah, tell me the update on this project. Hey, there's this thing that we heard from your department chair. There's a departmental open house. This is Beebe and the AI. Do you want to sign up for this? Do you want to do this? And I'd be like, yeah, find me a slot on Friday that works with my schedule.
Starting point is 00:12:23 Great, I'll do this for you. And then you go back to what you're doing. So imagine that. You don't have to keep up with an inbox. You don't have to dive in in this daydream and see all these messages and try to switch your attention from one to another, which we do bad, but an AI could do well. I think the productivity gain of an AI agent that could mean you no longer have to even see an email inbox would be enormous. I mean, I think we would see this in the macroeconomic productivity measures. The quality and quantity of what's being produced in non-industrial work is going to skyrocket if we took off this massive cognitive tax.
Starting point is 00:13:00 I think we would also see subjective satisfaction measures and knowledge work go right up. Oh, my God, I'm just working on things. And I have this sort of assistant agent that I talk to two or three times a day and kind of handles everything for me. and then I go back to just working on things. To me, that's the dream of AI and knowledge work, much more so than, well, when I'm just in the inbox myself, the AI agent's going to help me write a draft, or when I'm working on this project,
Starting point is 00:13:28 it can speed up my steps a little bit. I don't care about the speed at which I do my tasks. I want to eliminate all the context shifts. I want to eliminate the need to have to constantly change what I'm focusing on from one to another project to keep interrupting my attention to go back and manage back and forth conversations. So that would be massive. All right.
Starting point is 00:13:48 So here's the second question, part two. How close are we to that daydream of an AI that could handle our email inbox for us? Well, I was messing around with chat GPT recently. And what I did is I copied some emails for my actual email inbox and asked it some questions. I wanted to see how well it would do understanding my emails and writing to people on my behalf. All right. So the first thing I did is I had a message here from a pastor. It was an interesting, it was a longer message. And I saw that in this message, the pastor was talking about
Starting point is 00:14:25 my recent coverage of Abraham Joshua Heschel's book, The Sabbath, and talks about some points from it, some extra points, and there's like offering to send me some book. So I asked ChatchipT, hey, can you just summarize this for me? And did a great job. It was like, this person, is a pastor with this church. He's reaching out to Cal to express interest in Cal's work on intellectual workflow and its application of pastoral duties. He noted the challenges based on this. He highlights blah, blah, blah.
Starting point is 00:14:53 He offered to send you a copy of this book. Like his one paragraph got to all the main points. So then I tested ChatGPT's people skills. And I said, can you write for me a polite reply? And in this polite reply, decline the copy of the book that was being offered. I should say in reality, I'm actually interested in this book. This is just I wanted to test the people skills of chat GPT, right? And it wrote a good email.
Starting point is 00:15:18 Hey, thank you for reaching me out with your thoughtful message. I truly appreciate your insights. I'm genuinely, genuinely grateful for your offer to send me a copy of your book on blah, blah, blah. And while I see it's valuable, I must regretfully decline. Your dedication is great. Thanks again for reaching out. It was actually pretty good response. All right, here's another example.
Starting point is 00:15:39 Someone sent me a message that was saying, hey, you should see this, this, this anecdote about general grant and slow productivity. Spoiler alert, I'm going to actually talk about this later in the show, but I said, give me bullet points. And chat GPT did. It gave me three bullet points. He expresses gratitude about this. He shares an anecdote from this book about that.
Starting point is 00:15:58 He attached the following to the message. So what I'm seeing as I look at and test chat GPT with my emails is it can understand emails. Like, it can understand what, and summarize. what are in these emails, and it's good at writing responses. If you tell it what you want to do, it can write perfectly reasonable responses. So are we at our promise future? Is P inBog 01? Well, not yet. Because here's the problem. Right now, I am still directing all of this. I am loading the message. I'm looking at the message. I'm telling chat GPT summarize the message.
Starting point is 00:16:37 I'm making a decision about what to do on behalf of this message and then telling that the chat GPT. So really, at best, it's marginally speeding up the time required to go through my inbox, writing some things faster, preventing some reading. But it has no impact on my need to actually have to encounter each of these messages, to actually do the context switching, to have to keep up with my inbox and make sure messages are being sent back and forth. It can under-process messages, it can write messages. But these large language model tools right now can't take over control of the inbox. All right, so this brings us to part three. What is needed to do that? And this is where I want to bring up the article that I wrote recently for the New Yorker.
Starting point is 00:17:19 So I'm going to put it up here on the screen for those who are watching. They have a really cool graphic. I love when they do these animated graphics. I don't know if you can see it in the little corner here, but it's a hand placing a chess piece. So my article is entitled, Can an AI make plans? Today's systems struggle to imagine the future, but that may soon change. So here's the big point about this article. The latest generation of large language model tools can do a lot of cool things, a lot of really impressive things, especially the sort of GPT4 generation of language models.
Starting point is 00:17:55 But there's a lot of recent research literature from the last year show that is saying there's one thing they can't do. And this has been replicated in paper after paper. They can't simulate the future. So if you ask a language model to do something that requires it to actually look ahead and say, what's the impact of what I'm about to do, they fail. So there was an example I gave in the article from Sebastian Bubeck from Microsoft Research, who wrote a big paper, led a research group who wrote a big paper about GPT4. He said, look, this is really, GPT4 is really impressive.
Starting point is 00:18:30 He says in a talk about his paper, if your perspective is, what I care about is to solve problems, to think abstractly, to comprehend complex ideas, the reason on new elements that arrive at me, then I think you have to call GPT4 intelligent. And yet in this talk, he said, there is a simple thing it can't do. And he gave an example of something that GPT4 struggled with. He put a math equation on the board.
Starting point is 00:18:54 7 times 4 plus 8 times 8 equals 92, which is true. And then he said, hey, chat GPT, GP4. Modify one number on the left-hand side of this equation so that it now evaluates to 106. For a human, this is not hard to do. If you need the sum to be 14 higher to get from 92 to 106, you look at the left-hand side and said,
Starting point is 00:19:17 oh, seven times four. We have sevens. Let's just get two more sevens. Let's make that seven times six. Chat, GPT gave the wrong answer. The arithmetic is shaky, Bubeck said about this. There's other examples where GPT4 struggled.
Starting point is 00:19:31 There's a classic puzzle game called Towers of Hanoi, where you have disks of different size and three pegs, and you need to move them from one peg to another. You can move them one disc at a time, but you can never have a bigger disc on top of a smaller disc. This comes up a lot in computer science courses because there's solutions to this problem that are basic recursive algorithms. GPD4 struggled with this.
Starting point is 00:19:55 They gave it a configuration on Towers Hanoi. They could be solved pretty easily, five moves. But it couldn't do this. It struggled with basic block stack. problems. Hey, here's a collection of blocks. These colors stack like this. Let's talk about how to move them to get this other pattern. It struggled with that. It struggled when it was asked to write a poem that was grammatically correct and made sense where the last line was the exact inverse of the first line. It wrote a poem and it mainly made grammatical sense. The last line was a reverse of the first line, but the last line was nonsense. The first line was not a palindrome. It wasn't an easily reversible line. And so the last line, sounded like nonsense. All of these, as Bubek and others point out, all of these examples are marked by their need to simulate the future in order to solve them. How do you solve that math equation?
Starting point is 00:20:48 Well, humans, what we actually do is we sort of simulate different things we could change. What would the impact be on the final sum? Oh, changing the sevens would move it up by sevens. Great, that's what we want to change. We're simulating the future. When you play Towers of Hanoi, you have to look ahead. if I make this move next, this is a legal move. But is this going to lead me a couple moves down the line to be stuck?
Starting point is 00:21:09 So we have to look ahead when humans solve towers at Hanoi. Same thing with the poem problem. When you're writing the first line of the poem, you're also thinking ahead. What is this going to give me when I get to the last line of the poem? Oh, this is going to be nonsense. So I've got to make this first line of the poem. I got to make this first line of the poem reversible. Like GPT4 going to do this.
Starting point is 00:21:31 It was just writing word by word. Here's a writing a good poem. When it got to the last line, let me look back at what the first line was in reverse. It was too late. It was going to be nonsense. We simulate the future all the time. Almost everything we're doing, almost all of our actions as humans, have a future simulation component to it. We do this naturally.
Starting point is 00:21:52 We do this unconsciously, but almost everything we do. We simulate. What's going to happen if I do this? What about that? Okay, I'm going to do this. Should I cross the street right now? Well, let me simulate. Where's that car?
Starting point is 00:22:02 How far is it? Where do I imagine that car is going to be with respect to the crosswalk by the time I'm out there? Ooh, that's a little bit close. I'm not going to do it. When choosing what to say to you, I am simulating your internal psychological state. That's how I figure out what to say that's not only going to accomplish my goals, but not make you really upset. This is why people who are maybe neurodivergent and they're somewhere on the autism spectrum accidentally end up insulting people by, you know, they're not trying to, but they irritate or insult people. frequently because part of what is being changed in their brain wiring is their ability to simulate
Starting point is 00:22:39 the other mind. And when that is impaired, they can't simulate the impact what they're going to say on another mind, then they're much more likely to say something that's going to be taken as sort of offensive or is going to upset someone. So we're constantly, we're constantly simulating the future. That is at the core of human cognition. It's also at the core of any rendition we've seeing sci-fi renditions of a fully intelligent machine, they're doing that. In my New Yorker article, I talk about probably the most classic artificial intelligence from cinema, which is Hal 9,000 from Stanley Kubrick's 2001. And we know the classic scene, right, where Dave, the astronaut, is trying to disable
Starting point is 00:23:21 how because its focus on its mission is going to endanger Dave in his life. And Dave says, open the pod pay doors. He's trying to get in to disassemble Howl to turn it off. And Howl's like, I cannot do that, Dave. It's like very famous exchange. How does Hal 9000 know not to open the Pod Bay doors? Because it simulates. What would happen if I open the Pod Bay doors?
Starting point is 00:23:42 Oh, that exposes this. And if this is exposed and this person could take out my circuitry, oh, that doesn't match my goals. No, I'm not going to open the Pod Bay doors. You need to simulate the future to get anywhere near anything that we think of as human-style cognition. GPT4 can't do this. Now, is this just because we need a bigger model? Is this GPT-5 going to be able to do this? Is this we just have to figure out our training?
Starting point is 00:24:06 The answer here is no. I'll put my technical cap on here for a second, but I get into this in my New Yorker article. The architecture of the large language models that drive the splashiest AI agents of the moment, the clods, the Jiminize, the chat CPTs. These underlying large language models are architecturally incapable of doing even the most basic future simulations. And here's why I'm going to draw this.
Starting point is 00:24:33 So if you're watching, instead of listing, you'll see this picture. But you have to understand what happens in these large language models is that you have a series of layers. I'm drawing some of these layers here. So GPT4, we don't really know how many layers we are. We think it's like 96, but we're not quite sure because they don't tell us. Open AI is pretty close, but we know from other language models. This is a feed-forward architecture.
Starting point is 00:24:57 The information comes in, the bottom layer, it works its way through all the layers until at the very top, you get the output, which is actually a token, a piece of a word. So you give it input, give it a sentence of input. It gets moves through these layers one by one in order, and then out the other end comes a single word with which to expand that input. So it's an auto-aggressive token predictor. These layers are hardwired.
Starting point is 00:25:23 What's in them? It's kind of complicated. you have basically these transformer sublayers first. It's a key piece of these new language models. It has to do with embeddings. It has to do with attention, what part of the inputs being paid attention to. And then after those, you have basically neural networks, sort of feed forward neural networks.
Starting point is 00:25:43 But the main thing to think about here is the information moves through these hardwired connections. It's numbers it's multiplied by a neural network connections that it simulates activating. And it inexorably inevitably moves forward through all of these layers. And out the other end comes as prediction. Now, because these layers are very big, and GBT4, they're defined by somewhere around the trillion different values. These are very big layers. They can hardcode, these layers can hardcode a lot of information. And what happens is, and I get into this in the article, but at a very high level, what happens is as your input goes through these layers, patterns are right.
Starting point is 00:26:26 recognized. Really complicated patterns are recognized. This is an email. This is an email about this. We're being asked to do this about this email. And then there's very complicated guidelines baked into the connections that then say, given this is the sentence we're trying to expand with a single word, and given all these patterns we recognized about this sentence. And looking at all the possible next words that sort of make grammatical sense to be next, let's combine everything we've looked at to help bias towards which word we should output. And this number of guidelines and the properties that can be combined and the number of ways these properties can be combined is commentatorically immense. There's sort of near endless categories. If it's an email about this and that person, we're trying to say this, there's endless categories about what we're recognizing and the guidelines that connect the bias towards what word we should output next. But they're all hardwired. And so, you know, you can you can hardwire if we recognize a specific situation. We can have hardwired. In these situations, this move is what we've learned before will lead to somewhere good. But once it gets novel, it's something that it hasn't, it can't
Starting point is 00:27:35 see or approximate with its hardwired rules, you're out of luck because there is no way to be iterative in here. There is no way to be interactive. These are completely hardwired models. There is no memory that can be changed. There's no looping. There's no recurrence. The information goes through. We apply the guidelines to what's seen. Something comes out. we do our best with what we've already written down. We can't explore on the fly. That's just the architecture of these models. We see this, for example, when you play chess with GPT4.
Starting point is 00:28:04 These guidelines have a lot of insights about chess baked into them. So the properties might be, like, we have a chess board, we have pieces here. These are all properties that are being identified. This piece is protecting the king. Now, given all this information, we have our hard-coded guidelines. What move shall we output next? And it might say, well, in general, in these situations,
Starting point is 00:28:25 don't move the thing that's protecting the king, let's do this. So you can have really complicated chess games that look really good. And I talked about in the article how if you play chess against GPT-4, prompting it properly, you get something like an Elo 1,000-rated playing experience, which is like a pretty good novice player. But when you look closer at these games, what happens? It plays good chess until the middle game, and then it gets haphazard. Because what happens in the middle game of chess is your board
Starting point is 00:28:52 becomes unique. And when you get to the middle game of chess, you can't just go off of hardwired heuristics: in this case, with a piece in this position, this is the right thing to do, or here's a good thing to do. When you get to the middle game, how do chess players play? They simulate the future. I've never seen this type of board before. So what I need to do now is think: if I do this, what would they do? And then what would I do? You simulate the future.
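That "if I do this, what would they do, and then what would I do" loop is classically implemented as minimax search. Here is a minimal sketch on an invented toy game (real chess engines add alpha-beta pruning and far better evaluation):

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Simulate the future: try each move, assume the opponent replies
    with their best move, and score the resulting positions."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in options:
        score, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Toy "game": the state is a number, each move adds 1 or 2, the
# maximizer wants the final number high, the minimizer wants it low.
moves = lambda s: [1, 2]
apply_move = lambda s, m: s + m
value, move = minimax(0, 2, True, moves, apply_move, lambda s: s)
print(value, move)  # 3 2
```

The key contrast with a feed-forward model is that this procedure explores hypothetical futures at decision time, rather than relying only on patterns baked in beforehand.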
Starting point is 00:29:17 GPT-4 can't do it. So we see this. The chess game is good until it becomes bad: when the hardwired rules of "here, do this; in these situations, this makes sense" no longer directly apply, it has no way of interrogating its particular circumstance, and the chess play goes downhill. All right, so this is why we can't clean our inbox. Because to clean our inbox, decisions have to be made about what to say, and to make decisions about what to say, you have to simulate the impact. Well, if I did this, what would the impact be on my schedule? If I said this, how's that going to make this person feel? How's that going to affect this team dynamic? How is this going to affect the current order of operations we have for completing this project? If I agree to this delay, what's the effect going to be on this project that we're doing? Is that going to be interminable?
Starting point is 00:30:05 If that's going to be a problem, then I'm going to answer no here. I'm going to have to find someone else to do it. Writing an email, a language model can do; figuring out what to say in an email, you have to simulate the future. GPT models can't do that. So are we going to get there? Is that even possible? And here in the article, I say, well, yes. Language models, because they're massive and feed-forward and immutable, and they have no interaction or recurrence, no, they can't do it.
Starting point is 00:30:31 But we have other AI systems that are very good at simulating the future. GPT-4 is bad at playing chess, but Deep Blue beat Garry Kasparov. But Deep Blue is not a language model. Deep Blue works by simulating hundreds of millions of potential future moves; that's a big part of what it does. AlphaGo beat Lee Sedol at Go. And how did it do that? Well, it simulates a ton of future moves to try to see the impact of different things that it might do. So in game-playing AIs, we're very good at simulating the future.
Starting point is 00:31:03 All right, so that's optimistic for our goal here of having an AI clean our inbox. But if we're going to simulate the future in a way that lets us clean email, it's not just the sterile positions of pieces on a board. We have to understand human psychology. So can an AI simulate other minds? Well, here in this article, I say yes. In fact, there's a particular engineer who's been leading the charge to do that. His name is Noam Brown. And what did Noam Brown do?
Starting point is 00:31:33 Well, first, he made waves with Pluribus, the first poker AI to beat top-ranked players. So it played in a tournament with seven top-ranked players, the people you would know if you followed poker, with a $250,000 pot. So there was skin in the game. They wanted to win. No-limit Texas Hold'em. And Pluribus beat them.
Starting point is 00:31:58 Beat them over the two-day tournament. Well, as Noam Brown explains himself, in poker, the cards themselves are important, but actually what's more important is other people's beliefs about what the cards are. So you have to simulate human psychology to figure out what to do. What matters is not that I have an ace high. What matters is, for the other players,
Starting point is 00:32:18 what's the probability that they think I have an ace high? That's where all the poker strategy comes in. It's taking advantage of the mismatches between other players' beliefs and reality. That's where the money is made. Pluribus has to simulate human minds. Interesting aside about Pluribus, by the way.
Starting point is 00:32:35 Brown and his team first tried to solve poker with just a massive neural net, sort of a feed-forward, ChatGPT-style approach, where it had just played so much poker that it would just tell you: here's my poker hand, here's the cards that are out, and it would just sort of figure out, here's the best move to do in that situation.
Starting point is 00:32:52 And this model was huge. They had to use tens of thousands of dollars of compute time at the Pittsburgh Supercomputing Center just to train it. And with Pluribus, he said, well, what if instead of trying to hardcode everything you could see, we simulated the future? And this collapsed the size of the model.
Starting point is 00:33:08 You could now train this stuff on a laptop, or on AWS for like 20 bucks. It was a fraction of the size and way out-competed it. So simulating the future is a way more powerful strategy than trying to build a really massive network, like a language model, that just has everything hard-coded in it. So then Noam Brown said, let's play an even more humanly challenging game: Diplomacy.
Starting point is 00:33:30 And in the board game Diplomacy, which is like Risk, the whole key to that game is you have, before every turn, one-on-one private conversations with each of the other players. And you make alliances and you backstab people; the whole thing is human psychology.
Starting point is 00:33:47 Noam Brown and his team at Meta built a Diplomacy-playing bot named Cicero. I talked about this in the article. It beat real players. They played on a web server for Diplomacy; people didn't even know they were playing against an AI bot. And how did they do it? Well, in this case, and this is where it gets really relevant for answering our email, they took a language model and a simulator and had them work together.
Starting point is 00:34:11 So the language model could take the messages from the one-on-one conversations, and it could figure out: what is this person saying, and what does this mean? And it could translate that into a common, really technical language that it could pass on to the game strategy simulator. And the game strategy simulator is like, okay, here's what the different players are telling me. Now I'm going to simulate different possibilities. Like, if this person is lying to me, how much trouble could I get into if I went along with their plan? What if I lied to them?
Starting point is 00:34:39 And it tries out different strategies to figure out what to do. And then it tells the language model, in very terse technical terms: all right, here's what we want to do. Agree to an alliance with Italy, decline the alliance request from Russia, put this into good diplomacy language to be convincing. And then the language model generates these very natural-sounding communications, and they send those messages. So now we're getting somewhere interesting.
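The division of labor can be sketched roughly like this. Every function body here is an invented stand-in, not Meta's actual Cicero code; only the shape of the loop, language model in, planner in the middle, language model out, is the point.

```python
def lm_parse(message: str) -> dict:
    # Stand-in for the language model reading a free-form negotiation
    # message and reducing it to a terse, structured intent.
    return {"sender": "Italy", "proposal": "alliance"}

def score_plan(plan: str, intent: dict) -> float:
    # Toy scoring stand-in for the simulator trying out a plan,
    # e.g. "how much trouble am I in if this player is lying?"
    return 1.0 if plan == f"accept_{intent['proposal']}" else 0.0

def simulate_outcomes(intent: dict, candidate_plans: list) -> str:
    # Stand-in for the strategy engine: simulate each candidate plan
    # against the structured intent and keep the best one.
    scores = {plan: score_plan(plan, intent) for plan in candidate_plans}
    return max(scores, key=scores.get)

def lm_render(plan: str, intent: dict) -> str:
    # Stand-in for the language model turning the terse decision
    # back into natural, convincing diplomacy language.
    return f"Dear {intent['sender']}, we gladly {plan.replace('_', ' ')}."

intent = lm_parse("Italy proposes an alliance against Austria.")
plan = simulate_outcomes(intent, ["accept_alliance", "decline_alliance"])
print(lm_render(plan, intent))
```

The language model handles the messy human language at both ends, while the planner in the middle does the future simulation that a pure feed-forward model cannot.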
Starting point is 00:35:04 A language model plus a planning engine meant that we now had something that could play against humans in a very psychologically relevant, complex, interpersonal type of discussion, where you had to understand people's intentions and get them on your side, and it could do really well. This is the path that's going to lead to AI taming the hyperactive hive mind. It's not going to be GPT-5 or 6. It's going to be the descendants of Cicero, the Diplomacy-playing bot. It's going to be a combination of language models with future simulators,
Starting point is 00:35:34 with maybe some other models to try to model project states or your work states or your objectives. It's going to be the ensembles of many different models working together that are going to make it possible to do things like have AI clean our inboxes. So the question then is, are the big companies taking this possibility seriously? I mean, is a company like OpenAI taking seriously this idea that, okay, if we bring in planning and these other types of thinking and then connect that to the language models, that's when things really get interesting? Well, I think they are. What's one piece of evidence?
Starting point is 00:36:12 who created Pluribus and Cicero OpenAI just hired him away and they put him in charge reportedly of this big project within OpenAI called QSTAR a reference to the ASTAR bounded search algorithm something you use to search into the future
Starting point is 00:36:28 to add planning as an added feature to their language models. So I think P(inbox zero) might be higher than we think. And this is not going to be a trivial thing or a cool thing or an interesting twist; I think it could actually completely reinvent
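For reference, A* is a classic best-first search that looks into the future by expanding states in order of cost-so-far plus an estimated cost-to-go. A minimal sketch on an invented toy grid:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Best-first search into the future: always expand the state with
    the lowest cost-so-far plus estimated cost-to-go (f = g + h)."""
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in best_g and best_g[state] <= g:
            continue  # already reached this state more cheaply
        best_g[state] = g
        for nxt, step_cost in neighbors(state):
            ng = g + step_cost
            heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

# Toy grid: walk from (0, 0) to (2, 2), moving right or up, unit cost.
def grid_neighbors(p):
    x, y = p
    out = []
    if x < 2:
        out.append(((x + 1, y), 1))
    if y < 2:
        out.append(((x, y + 1), 1))
    return out

manhattan = lambda p: (2 - p[0]) + (2 - p[1])  # admissible estimate
path = a_star((0, 0), (2, 2), grid_neighbors, manhattan)
print(len(path) - 1)  # 4 moves on the shortest path
```

The heuristic keeps the search focused toward the goal instead of exploring every possible future, which is the same spirit as planning a few good moves ahead rather than all of them.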
Starting point is 00:36:44 the experience of knowledge work. I've been trying for years to solve this problem through cultural changes. We need to get rid of the hyperactive hive mind. We need to replace it with better systems that don't have so many ad hoc, unscheduled messages we have to respond to. We have to stop the context shifting.
Starting point is 00:36:59 And I've had a really hard time making progress with large organizations because of managerial capitalism and the entrenchment of stability. It's very difficult. So maybe technology is going to lap me at some point, and eventually there'll be a tool we can turn on that takes me out of my inbox as well. But once we do that, those benefits are going to be so huge.
Starting point is 00:37:16 We're never going to go back. We will look at this era of checking an inbox once every five minutes, I think, in the knowledge work context, similar to how, you know, the caveman looked at the age before fire. I can't believe we actually used to live that way. So I'm optimistic. There we go, Jesse. That's it: AI plus inbox zero. That's the key.
Starting point is 00:37:36 As you were explaining it all, I had some questions, but you answered them all. I was curious about the Deep Blue and, like, driverless cars, but. Yeah. And I didn't know, like, the explanation until you explained it all. Not to geek out, but the difference between the advancement of AlphaGo, which won at Go, and it did this in the 2010s, versus Deep Blue that won in chess, which did this in the 1990s. DeepMind did AlphaGo. The big advancement there is that the hard thing about Go is figuring out, is this
Starting point is 00:38:06 board good or bad. Right? So if you're going to simulate the future, what you have to do is be able to evaluate the futures. Like, okay, if I did this, they might do this and I would do this. Is this good? That's easier to figure out in chess than it is in Go. Like, is this a good board or a bad board?
Starting point is 00:38:21 So the big innovation in AlphaGo is they had these neural networks play Go against each other endlessly. They jump-started them by giving them, like, thousands of real games, to learn the rules of Go and get a sense of, like, what was good or bad. And then they played Go endlessly against each other. And the whole point here was to build up a really sophisticated understanding of what's good and bad. Right. So they built this network that could look at a board and say this is good and this is bad, based off of just hundreds of millions of games it played with itself.
Starting point is 00:38:52 Then they combined that with a future-looking planning system. So now, when they're looking at different possible moves, they could talk to this model they trained up, that self-trained, is this good, is this bad, to figure out what the good plays are. And it led to a lot of innovation in play, because this model learned good board configurations that no human had ever thought of as being good before. Part of how it beat Lee Sedol was it did stuff he had never seen before. So what's going on here? Whereas with Deep Blue, it was much more like they brought in chess masters, and it was much more sort of hand-coded in.
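That recipe, a learned evaluator consulted by a short lookahead, can be sketched like this. The "value network" here is a stand-in dictionary and the game is invented; the combination of learned evaluation plus simulated futures is the point.

```python
# Pretend self-play taught us that position 3 is strong and 4 is weak.
learned_value = {3: 0.9, 4: 0.1}  # position -> estimated win chance

def lookahead(state, depth, moves, apply_move):
    """Simulate a few moves ahead, then ask the learned evaluator
    'is this board good or bad?' at the frontier."""
    options = moves(state)
    if depth == 0 or not options:
        return learned_value.get(state, 0.5)  # 0.5 = unfamiliar position
    return max(lookahead(apply_move(state, m), depth - 1, moves, apply_move)
               for m in options)

# Toy game: states are integers, each move adds 1 or 2.
moves = lambda s: [1, 2] if s < 4 else []
apply_move = lambda s, m: s + m
scored = [(lookahead(apply_move(1, m), 1, moves, apply_move), m)
          for m in moves(1)]
best_move = max(scored)[1]
print(best_move)  # 1
```

Deep Blue's evaluator was hand-coded by chess masters; AlphaGo's was learned from self-play. Both still had to simulate the future to use it.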
Starting point is 00:39:24 Is this a good position or a bad position? It was sort of more heuristical there. So in AlphaGo, they're like, oh, you can actually teach yourself what's good and what's bad, which was cool. But it still had to simulate the future. So we'll see. All right. So anyways, we got some questions coming up, some about AI and digital knowledge work,
Starting point is 00:39:44 some about other things. But first, let's hear a word from our sponsors. Hey, I'm excited, Jesse, that we have a new sponsor today, one of these sponsors that does something that is exactly relevant to my life. This is Listening. So the app is called Listening, and it lets you listen to academic papers, books, PDFs, webpages, articles, and email newsletters. Where Listening came to my attention, where it's known in my circles, is that people use it
Starting point is 00:40:15 to transform academic papers into something they can listen to like you would a podcast or a book on tape. Now, it can do this for other things as well, like I just mentioned, but this is where it really came into prominence. And it uses AI to do this. So, speaking of AI, it has a very good AI voice. It does not sound like a robot. It sounds like a real human. You can give it, for example, a PDF of an academic paper, and you can pause, play, listen to
Starting point is 00:40:39 this like you had hired a professional voice actor to read that paper. Now, why is this important? Because it opens up all that time when you're driving, you're stuck in traffic, you're waiting for something to start, time when you might put on a book on tape or a podcast. Now you could also put on something that is, like, very productively useful or interesting for your own work. Hey, I want it to read to me this new paper about whatever. Someone just sent me a paper, which I'm going to listen to in Listening for sure, because
Starting point is 00:41:11 I have a long drive coming up. Someone just sent me a paper, for example, that looked at: does Twitter posting about your academic papers (so it's very circular) lead to higher citation counts? And it looks like the implication of the paper is no. So promoting yourself on Twitter as an academic doesn't actually help you become a better academic. This is fascinating to me. So this idea that I can just click on that, and now when I'm walking, you know, back and forth or going to the bus stop, I could listen to this paper. Just imagine the amount of time you can now use actually learning interesting things. So it's really
Starting point is 00:41:46 cool. It's bringing other types of content into the world of audio consumption. One of the other cool features I like is an Add Note button. So, like, as you're listening to a paper, you can click Add Note and then just type a few sentences, and it'll store that for you. Oh, here's a note for this section. So you can add notes as you go along. Anyways, really cool for people like me who have to read a lot of interesting, complicated stuff and don't always have a lot of time where we can just sit down and actually read. So here's the good news. Your life just got a lot easier. Normally you'd get a two-week free trial, but for my listeners, you can now get a whole month free if you go to listening.com slash deep or use the code DEEP at checkout.
Starting point is 00:42:32 So go to listening.com slash deep or use the code DEEP at checkout to get a whole month free of the Listening app. I also want to talk about our good friends at Element, LMNT. Look, healthy hydration isn't just about drinking water. It's about water and the electrolytes that it contains. You lose water and sodium when you sweat, so you have to replace both. While most people just drink water, you need to be replacing the water and the electrolytes.
Starting point is 00:43:03 Drinking beyond thirst is a bad idea. It dilutes blood electrolyte levels, which can also cause problems. So the goal here is not to drink as much water as possible, but to drink a reasonable amount of water plus electrolytes, especially if you're sweating or exercising a lot. This is where Element enters the scene. I use Element all the time. It is a powdered mix you add to your water that gives you the sodium, potassium, and magnesium you need, but it's zero sugar. And no weird artificial stuff.
Starting point is 00:43:34 It gives you what you need in your water without any of the other stuff, the sugar or the weird chemicals. Zero sugar, zero artificial colors, no other dodgy ingredients. It tastes great. It's salty and good-tasting. I love citrus salt. Other people like raspberry salt. They have these spicy flavors like mango chili. You can mix chocolate salt into your morning
Starting point is 00:43:54 coffee if you really want to rehydrate after a hard night. I drink this, sure, after my workouts, but also if I've had a long day of podcasting and giving talks and I'm just expelling all this moisture through talking and sweating, Element is exactly what I go to when I get back. I add it to my Nalgene bottle and I get both back. Anyways, I love Element, and I love that I don't have to worry about drinking it: no sugar, no nonsense. So the good news is Element came up with
Starting point is 00:44:24 a fantastic offer for us. Just go to drinkLMNT.com slash deep to get a free sample pack with any purchase you make. That's drink-L-M-N-T dot com slash deep. All right, Jesse. Let's do some questions.
Starting point is 00:44:40 All right. First question is from Zaid. I'm a student and feel lost, with a fear that AI will replace all jobs. Specifically, software jobs and web development are at the top of the list of jobs to disappear. After reading Deep Work, these were the two
Starting point is 00:44:54 fields that I wanted to pursue. My motivation to study is dying out. Are these fields now a lost cause? No, they're not a lost cause. I do not think programming as a job is going to go away. And I do think it's a good skill to learn. It does open up a lot of career capital opportunities to shape interesting careers. So if you look at the history of computer programming, it is a long line of tales of new technologies coming in that make programmers much more efficient. Right. And from the very beginning, right? I mean, programming used to be plugboards.
Starting point is 00:45:32 To program an early electronic digital computer, you're adjusting circuits by taking plugs and plugging them into other places. Then we got punch cards: way more efficient. Now I can store a program on punch cards and run that. I don't have to redo it from scratch every time. That's a huge efficiency gain. And then we got interactive terminals. Oh, I don't have to make punch cards, give them to someone, and come back the next day to see if it worked.
Starting point is 00:45:57 We're talking, like, massive, multiple-order-of-magnitude efficiency changes, one after another. Then we got interactive editors. I could edit particular words or lines of my code. I could run the code right there, get the results, and immediately go back and change it. Then we got detailed debuggers. Oh, this is what's going wrong in your code. Here is where your code broke. Every one of these is an exponential increase in the efficiency of programmers.
Starting point is 00:46:25 Then we got this sort of modern world where we have auto-complete and real-time syntax-checking IDEs. As you're writing code, it's telling you: you typed this wrong, this is a syntax error here. It's telling you your mistakes before you even try to run it. You don't have to memorize all the different commands and calls and parameters, because it can auto-fill this in for you. And we have Stack Overflow and Google. So now, for like almost anything you want to do, you can immediately, at your same desk, on the monitor right here,
Starting point is 00:46:53 find examples of exactly that code. You have to understand that every one of these advances was a massive efficiency boost. So what did we see? Did we see, as we made programmers massively more efficient, that the number of programmers we needed to hire got smaller and smaller? If that's what really happened, there'd be like seven programmers
Starting point is 00:47:18 left right now. Instead, there's sort of more people doing programming than ever before, because what we did was follow a sort of common economic pattern as we became more efficient as programmers. As each individual programmer could handle and produce more complicated systems faster, we increased the complexity and therefore the potential value of the systems we built. So we still needed the same number of programmers, if not more. A programmer today, I would say, is a thousand times more efficient than a programmer in 1955. But we have way more than a thousand times more applications of software today than we had in 1955. This is my best prediction of what we're going to see with AI. I think the push to try to fully replace a programmer with AI is quixotic.
Starting point is 00:48:03 No, what we're going to do is make programmers even better. That's what we're seeing, right? I mean, this is what GitHub Copilot is doing. It's like an even smarter auto-complete. It's making programmers more efficient. We're essentially removing the need to search for things on Stack Overflow. You can have an AI language model; you can ask it, and it will show you the example code or write you the example code.
Starting point is 00:48:24 That makes us more efficient. I think we're going to get more of the AI writing first drafts of code or filling in the easier stuff. So programming will become more complex. It'll be harder. But we're going to be able to produce more complicated systems with the same number of people. So we're just going to see more computer code in our world, more complicated systems in our world, more things that run on complicated code, because the ability of programmers to produce this will be increased. So what that means for you, Zaid, is if you like programming, keep learning it, but keep up with the latest AI tools as you do. Whatever is cutting edge with AI and programming, learn that.
Starting point is 00:49:03 Push yourself to learn more and more complicated code with more and more complicated AI tools, because the complexity curve of what programmers have to do has also been steadily increasing. So you've got to keep up with that curve. But the jobs are going to be there. At least that's my best prediction. All right. Who do we got next? Next question is from Kendra.
Starting point is 00:49:24 Do you ever use ChatGPT to assist with your writing? I'm not a full-time writer, but I do write a lot. Recently, I've been using ChatGPT for assistance. Is this bad? Yeah, ChatGPT in writing is an interesting place. It's something I've been looking into through numerous different roles: thinking about article ideas, and some of my roles looking at pedagogy and AI at Georgetown. It's something I'm really interested in. The complexity about this topic is there are several different threads here.
Starting point is 00:49:52 And the role of AI in writing in each of these threads, I think, is different. So let's think about professional writers, for example. Professional writers I know, they're not letting ChatGPT write for them. Professional writers I know, and there's quite a few who are messing around with language models like ChatGPT,
Starting point is 00:50:10 are using it largely for brainstorming and idea formulation. What about this? Can you give me examples of this? Also for intelligent Google-style searches: hey, can you go find me five examples of this? And so for the new language models that have plug-in access to the web, they can kind of give you
Starting point is 00:50:26 more modern examples. It's a useful research assistant. But as you note, professional writers don't use ChatGPT to actually write, because professional writers have very specific voices. The art of exactly how we craft sentences matters to us. Like, that's not outsourceable, right? Because it matters. That top 10% of skill in making writing great is all in the little details, and it's very idiosyncratic how we do it. So, like, professional writers for the most part don't let ChatGPT write for them. For non-professional writers who do have to produce writing, I think it's becoming increasingly more common to use tools like ChatGPT to produce drafts of text, or just text in general. I don't think this is a bad thing.
Starting point is 00:51:10 I think it brings clear communication to more people. We see big wins, for example, with non-native English speakers: the ability now to not be tripped up or held back because I can't describe my scientific results very well, my language is bad. Oh, ChatGPT can help me describe my results in a paper better. Now what matters, of course, is my results. But now I'm not going to be tripped up presenting those results, because I have help doing the writing. I mean, I think a lot of people who have communication in their job struggle a little bit with writing. If they can be clear, I think this is fine. Can you write a short message in this style that, like, thanks, you know, the person? Again, I think this is just introducing more clear communication
Starting point is 00:51:51 to the world, and we are going to see more of that. And I don't think that's a problem. So then what's the thread where things are more controversial or open? And I think that's when it comes to pedagogy. So, in school. And this is really an open question right now: what role does learning to write play in learning to think? There are different schools of thought about this. Like, should we teach students from day one how to write in this sort of cybernetic model of it's you plus a language model,
Starting point is 00:52:26 And then later, hey, later in life you can use language models to sort of write on your behalf to be more efficient. But it's important for your development as a thinker. It's important for your development as a person to grapple with words. There's a lot of people who say writing is thinking. So to practice writing clearly teaches your mind how to think clearly. And we can't yet outsource our thinking to chat GPT. So we don't want to lose that ability. There's a clear parallel here.
Starting point is 00:52:52 We can compare this to other existing technologies. In particular, I like to think about comparing this to the calculator on one hand and centaur chess on the other. Let me explain what I mean. With the calculator, here's a technology that came along that can do arithmetic very fast and very accurately. From a pedagogical position, what we largely decided to do was preserve the importance of learning arithmetic without a calculator. So until you get to middle school or beyond, you're learning how to do arithmetic with pencil and paper, because we thought pedagogically you need to get comfortable manipulating numbers
Starting point is 00:53:30 and their relationships to each other. You need that skill. As you move on into more advanced algebra, and on into calculus and beyond, we then say, okay, now, if in working on higher-order stuff there's arithmetic that needs to be done, you can use the calculator. So you can automate arithmetic later on. But we felt that it is important to learn how to do arithmetic yourself earlier in your pedagogical journey. The other way of thinking about this is centaur chess, which is where players play chess along with an AI, a player plus an AI, and they work with each other.
Starting point is 00:54:03 Centaur chess players are the highest-ranked players there are. A player plus AI can beat the best AI. A player plus AI can beat the very best human players. This is a model that just says, no, no, human plus machine together is just much better than human without machine. So that's another way that we might end up thinking about writing and pedagogy: start right away learning how to write with these language models, because you'll be a better writer than you ever would have been before. And the quality of writing in the world is going to go up. We don't know what the right answer is yet.
Starting point is 00:54:31 I think educational institutions are still grappling with: is language-model-aided writing the calculator, or is it centaur chess? And I don't think we know yet, but a lot of people are thinking about it. So I think that's probably the most interesting thread. But if you're just doing mundane professional communication, you're not a professional writer, and you have a language model helping you? I say Godspeed. I don't think that's a bad thing. All right.
Starting point is 00:54:54 Ooh, we got a question coming up next. Oh, this is going to be our slow productivity corner. Yeah, we get the music. Should we hit the music first? Yeah. All right, let's hear it. So, as longtime listeners know, we try to designate at least one question per week as our slow productivity corner,
Starting point is 00:55:13 meaning that my answer is relevant to my book, Slow Productivity. If you like this podcast, you really need to get the book Slow Productivity. All right, what's our slow productivity corner question of the week, Jesse? This question is from Hunched Over. I have a really nice work-from-home setup in a special nook of my house. However, I've used this setup for two years for mostly hyperactive hive mind-type work: impulsive checking of email, switching between multiple tasks, Zoom meetings, distraction, etc. I now find it's very hard to get into a deep work mode at this desk, even when I have set the time aside.
Starting point is 00:55:48 I visually switch back into a shallow work mindset. How do I reclaim my desk to be a place of
Starting point is 00:56:17 deep work for my mind? So I talk about this in principle two of my book, Slow Productivity. In that principle's description, the principle is to work at a natural pace. And as part of my definition of that principle, at the end of it, I say it's like working at a natural pace, varying intensity over time scales, and then, sort of, comma, in settings conducive to brilliance. And a big idea I get into in that chapter is that setting, setting really matters when you're trying to extract value from your mind. Setting really matters, and we should take that very seriously and be willing to invest a lot of time and potentially monetary resources, if needed, to get the settings proper for producing stuff with our minds. So because of that, what we often see when we study the traditional knowledge workers I look at in that book, people famous for producing value with their minds, is that, like you, they often have very, very nice home offices.
Starting point is 00:57:08 I found the picture of his home office from a profile of his house in West Tisbury, Martha's Vineyard. It's a great home office. The window looks over a scenic view, and it's got an L-shaped desk. It looks great. He wrote in a garden shed. So he would use the home office to do all the business of being, like, a best-selling author and historian, but when he wrote, he went to a garden shed that had a typewriter, because that was what was conducive for his brilliance. Mary Oliver, the poet, her best poetry was composed walking in the woods. There was something about the nature and the isolation and the rhythm. That is where the good thoughts came.
Starting point is 00:57:43 That's a very specific process. Nietzsche also would do very long walks. That's where his best thoughts would come. And so we see these examples time and again, that the setting in which you try to do your most smart, creative, cognitive work really matters. And if that setting is the same place that you do shallow work, the same place you do your taxes and your emails and your Zooms, your mind is going to have a hard time getting into the deep work mode. And so the answer is to have two places. Here's my home office, where I care about function. It's got monitors and a good webcam and my files are here,
Starting point is 00:58:20 and I don't want to waste time when I'm doing the minutiae of my professional life. But then you need somewhere else you go to do the deep stuff. And it could be fancy. It could be very simple. It could be sitting outside under a tree at a picnic table. I used to do this at Georgetown. There was a picnic table in a field on part of the trail that ran from Reservoir Road down to the canal. And I would go out to that tree with a notebook to work.
Starting point is 00:58:46 It could be a garden shed that you converted. It could be a completely different nook of your house. I talk about, in the book, people who took like an attic dormer window and just pushed a desk up there. That's for deep work. That's what Andrew Wiles did when he was solving Fermat's Last Theorem. He did that up in an attic in his house in Princeton. So have a separate space for deep work from shallow, and it should be distinctive, and it should psychologically connect to whatever deep work you do.
Starting point is 00:59:14 It doesn't have to be specialized. It doesn't have to be expensive. It can be weird. It can be eccentric, but it needs to be different. So don't try to make your normal home office space also good for doing deep work. Have a separate space for doing your best, most cognitive stuff. You're going to find, I would predict, a significant increase, a significant increase in not just the quality of what you produce when you do your deep work,
Starting point is 00:59:41 but how quickly you get into that state and the rate at which you produce that work. So anyways, in Slow Productivity, that's one of the ideas I really push: location matters. Don't reduce all work to this frenzied two-monitor jumping back and forth between emails, busyness, freneticism. Don't make all work that. Separate. There's some of that.
Starting point is 01:00:03 And then there's also me trying to produce stuff too good to be ignored, the real value. And that's a slower thing, and I need a different location for it. All right. What do we got next? Next question is from Charlie. Excuse me. I time block my day into 50-minute deep work blocks separated by 10-minute breaks. I have little autonomy and I'm closely supervised. Sometimes I'm extremely
Starting point is 01:00:24 busy all week and sometimes I'm twiddling my thumbs waiting for my supervising solicitor to give me work. How should I utilize my 10-minute breaks during a busy week? And also, how should I handle weeks when I don't have much work for my deep work blocks? Well, Charlie, don't worry too much about those 10-minute breaks. Have fun, right? Don't think about them. Just do whatever. Like, do whatever is interesting. I mean, I typically recommend, if it's a busy day, in the sense that the 50 minutes on the other side of these breaks is filled with deep work, take what I call deep breaks. So don't look at things that are emotionally salient. Don't look at things that are too relevant to the type of work you're doing.
Starting point is 01:01:03 Look at things completely or do things completely different than your work. That's going to minimize the context switching cost when you go back to your work. More generally, though, I don't love the sound of this job, right? What makes people love their work? This is an idea from so good they can't ignore you, where I noted that people think what they want to love their work is a match of the content to their job, but there's these other more general factors that matter more. And one of those general factors that matters more is autonomy. Autonomy is a nutrient for motivation and work. It's critical.
Starting point is 01:01:36 You don't have a lot of it. So I don't love this job. So how about this plan for the weeks in which you don't have a lot of deep work for the 50-minute blocks? you were working like a laser beam in that time on your move to something different. So you have a side hustle or a skill that you're learning that is going to allow you to transform what your work situation is to be closer to your ideal lifestyle. And that's what you're working on in unscheduled 50 minute breaks. I think you're going to get a lot of fulfillment out of that because you're not going to be bored.
Starting point is 01:02:05 And more importantly, you're going to find some autonomy empowerment. I am working on the route out of what I don't like about where I am now. And it could be a new skill that within your same organization is going to free you to go into a more autonomous position. Or it could be a new skill that's going to allow you to go to a different job that's going to be more autonomous. Or maybe it's a side hustle that is going to allow you to drop this to part-time or drop it all together because it can support you. I think psychologically you need something like that because otherwise a fully non-autonomous job like this, especially in knowledge work, can get pretty draining. All right. Let's see.
Starting point is 01:02:41 We have some calls this week, don't we, Justin? We do. Yeah. one it looks like. Yep. Excellent. Let's get our first one. Okay. Hi, Cal. This is Rhone, long-time reader and a listener since episode one. I'm a long-time fan of your work. I'm eagerly awaiting my receipt of slow productivity. I'm getting both a signed copy that was offered there by your local bookstore, and I'm getting the Kindle version as well. I'm especially excited that you have recorded the audiobook version yourself. I've really been hoping for this, especially since you've
Starting point is 01:03:11 started the podcast to hear these books in your own voice. I think that's fantastic. I'm particularly enjoying your forays into the philosophy of technology. That's an area of interest myself. I'm personally finally diving into Heidegger and my philosophical readings in general. In honor of your famous Heidegger and Heff Weisen tagline, I wonder if you've read Heidegger's views on technology. And if so, has that influenced or impacted your views in any way? Thank you again for all of the excellent work and all the excellent content. And I am looking forward to the Deep Life book to come after this one. Thank you very much.
Starting point is 01:03:45 Well, thanks for that, Roan. I'm vaguely familiar with Heidegger on technology, but I would say most of my sort of scholarly influences on technology philosophy are 20th and 21st century. This is where, if you go back to Heidegger, technology was being grappled with, but it was also being grappled with in the context of these much more ambitious, fully comprehensive, continental, philosophical frameworks for like understanding all of life and meaning and being. And it's these complex, it was the height. By the time you get like Heidegger and you see this a lot in Marx as well, this sort of totalizing, we're going to sort of have a new epistemology for like all of knowledge and understanding the human condition, very complicated.
Starting point is 01:04:35 And so it's a little bit less accessible. There's specific thoughts on technology. Whereas you get farther along in the 20th century, what you get is more of people because of the impetus of modernity, just grappling specifically with technology and its impacts. And so you start to see this with thinkers like Lewis Mumford, for example, or Lynn White Jr. And it's starting to grapple more specifically with what's going on. And so we get later, you get thinkers like Neil Postman and you have Marshall McLuhan. You know, they start working on this.
Starting point is 01:05:05 More recently, you get Jaron Lanier. Then you have full academic subdisciplines like SDS emerging, which has a, a very specific methodology for trying to understand the social techno systems. More recently, you get things like critical technology studies, which tries to apply postmodern critical theories to trying to understand technologies. The 20th, especially mid-onward 20th century and early 21st century, it's more focused. And I think the pressures of modernity give us a type of technology, an understanding of technology that resonates with a current moment. So that's been more influential to me, I would say. I do like your callback, however, to Heidegger and Heffawisen.
Starting point is 01:05:48 Most people don't know this, but, you know, when I was writing my books for students, my newsletter and blog, of course, were focused on students. And a big thing I was pushing for there was how do you build a college experience that's like really meaningful and interesting and sustainable and also opens up really cool opportunities? I did not like this idea of being super stressed out in school. Like, oh, but it'll be worth it Because I'm going to get this job And then it'll be worth it
Starting point is 01:06:16 And I was trying to teach kids How do you actually make your experience In college itself good? Not like something you're sacrificing yourself for To get to something better down the line And I have this idea called the Romantic Scholar And it was all about how to transform your college experience And being much more psychologically meaningful
Starting point is 01:06:32 And one of my famous to like My famous I mean among like the readers of study hacks back then So like seven people One of my famous ideas was Heidegger and Heffewisen. And I was like, take your, when you have to read Heidegger, don't just go white knuckle at the library the night before. Go to like a pub and get a pint and like a heffawisen and like sip a drink and there's a fire and like read it. Like put yourself into this environment of like this is cool.
Starting point is 01:07:02 It's an intellectual thinking and ideas are cool and life is cool. And approach your work with this sort of joyous gratitude and care about where you are and how you're working. I talked a lot about that. Anyways, it reminds me of our last question, or one of the questions we answered earlier in the episode, right? I told the, in the slow productivity corner, I told the question asker, build a cool space to do your deep work. Don't try to make your shallow work home office into the place where you do your deep work. Like, go somewhere cool, do it under a tree. And, you know, I really pushed that idea back then.
Starting point is 01:07:34 I talked about, I called it adventure study, and I think was my term. Go to cool places to do your work. work. So you make your work into something that's intellectually cool. It's exciting, not something that you're trying to grind through. I'm trying to think of examples. I think there was a someone, and people would write in, students would write in. Someone wrote in with a picture of a waterfall where they went to study. Someone else, an astronomy student snuck onto the roof of the astronomy building, the stars, and that's where she would like read and work on her problem sets. You know, I love that idea when I was helping students find more meaning in their student life. So I like the idea of
Starting point is 01:08:07 preserving that today in knowledge work, especially if you're remote or have flexibility. Find cool places to do your coolest work. Transform your relationship to it. Like, I still do this sometimes with, if I'm early in a New Yorker article, I'll go to Bevco at Happy Hour and do exactly Heidegger and Heffawizen. Like get like something they have on tap because just psychologically it's like, this isn't work.
Starting point is 01:08:32 This is interesting. I mean, there's all these people. I know the people at Bevco have, you know, like a Heffawizen. I'm just like thinking. Isn't it cool to think ideas? This isn't just me in my home office, like trying to make deadline. And I'll often do that at the beginning of a New York article just to put myself into, like, the mindset of this is cool. This is interesting.
Starting point is 01:08:49 This is thinking. Like, remember, like, this activity itself has value and it's entertaining. It's not just functional. So anyways, cool call. Maybe I should read some more Heidegger. I have to get more Heffawizen. That's the cool. It takes a lot of Heffawizen to get through Heidegger, by the way.
Starting point is 01:09:05 It's some long books. All right. Do we have another call? Yep. Here we go. Hey, Cal. My name's Tim. I actually met you over the weekend at your book signing. I was the Dartmouth guy that you met toward the end.
Starting point is 01:09:18 It's nice meeting in person. I have kind of a two-part question. I'm really drawn to the idea of thinking about seasons or chapters of your life and career. As somebody with young kids at home, I'm certainly in a specific type of season right now. So I wanted to understand, I guess, a two-part question. And when you're thinking about the seasons of your life, what's the time box you put around those? Are those like a quarter?
Starting point is 01:09:46 Is it half a year? Is it two years, 10 years? Like, how do you, when you think you're entering or exiting a specific season in your life, how long is that? And secondly, I guess is, you know, I'm in a relationship. I have a wife. And she's also got a busy life and career. How do you or do you have any advice on how do you,
Starting point is 01:10:08 synchronize or match up the seasons they may be going through in their careers. I find it's very difficult if you have two people trying to push hard at work or in a busy season at work, but also be able to give the attention you to home. So it takes a conscious decision on both parts on which season you're going to be in, which season you're going to be in. I wonder if you have any advice on that. Thanks, Cal, big fan. Well, Tim, good to hear from you again.
Starting point is 01:10:36 It's nice to see you at the book event. good questions. So first question when it comes to seasons, there's different scales that matter. So there's the literal seasonal scale of the seasons of the year. And this is a big idea from principle two of my book, slow productivity, is you should have variations within the seasons. Like for me, for example, my summers are much different than my falls. So my summers, it's much slower. There's much less phoneticism and meetings and I'm much more focused. Whereas like in the fall if I'm teaching some classes and I can do a lot more meetings. It has a different feel to it. So seasonal variation is good. We are not meant to work all out every day, all the days of the year.
Starting point is 01:11:19 Like we're meant for there to be variations. If you don't work in a factory, don't simulate working in a factory with your knowledge work. There's also higher scales of seasons, like longer time periods. And this becomes more clear to me as I get older as I've actually made my way through more of these I think of these larger, like the largest granularity of season I deal with is pretty close to a decade. And I think this is pretty relevant if you're having kids, right? Because so I think of my 20s as different than my 30s as different than my current season, which is my 40s. So in my 20s, for example, like one of the things I was trying to do if I'm thinking about professional objectives is trying to get on my feet professionally. It's like I want to be a professor, want to be a writer.
Starting point is 01:12:07 It's like I want to lay those foundations. And that's what I'm working on, putting in the time, putting in the skills. It was a lot of skill building, head down skill building. Like the stuff I was working on might not be publicly flashy, but writing the papers, learning how to be a professor, writing the books. There are student-focused books, doing magazine writing, doing newsletter writing, just trying to get my writing skills up. The three books I wrote in my 20s, each of them had a element that was more difficult. than the one before that I very intentionally added. So I was using the books to systematically push my skills, not to try to grow my career necessarily.
Starting point is 01:12:44 I got the same advance for all three of those books. My goal was not how do I become a very successful author in my 20s. It was how do I become a good enough writer, we're becoming a successful author as possible? And so that was my 20s, right? And that was largely successful, right? Because I got hired as a professor right when I turned 30. and my first sort of big hardcover idea book came out right when I turned 30. So good they can't ignore you.
Starting point is 01:13:12 All right. So then my 30s is a different season. So what I'm trying to do in my 30s is now we're having kids. So I have my first of my three kids when I was 30, right? So my wife and I were starting a family. And professionally, I was like, okay, now what I need to do professionally? What do I care about now when you have kids that age or you're starting to have babies? It's like I want to provide stability.
Starting point is 01:13:32 And so it was really about, like, okay, I want to get tenure. I want my writing career to be successful enough that, like, it gives us financial breathing room. I want to be a successful enough writer that, you know, like, we're not super worried about money and the stability of 10 year. Like, those are the two things I want to do. I want to become a successful writer, meaning it was, unlike in my 20s, and these are the smaller book advances. I mean, I don't always talk about number, but I'll tell you, like, the books I got in my 20s were all $40,000 book advances. So these were not by standards today. very small advances.
Starting point is 01:14:04 In my 30s, I was like, I need to now become a writer that gets like real hefty book advances. I need tenure. And beyond that, it's like trying to keep the babies alive. Right. So it was sort of a, there's a frenetic period. This is not a period of grand schemes. It's like, get your head, keep the babies alive and keep, you know, everyone, this baby is fed, you know. Okay, do they know that my wife's traveling, so I need to, like, get the bottle, like, when's the nanny coming?
Starting point is 01:14:31 Like, all that type of stuff. get 10 year, become a writer with like some financial heft, right? And that was what my 30s were about. And I think that was largely, and that was successful. I got 10 year, you know, five years later and my books became bestsellers. And now, like, I'm getting bigger book contracts and we could move to where we wanted to move. And, you know, okay, great. We got that all set up.
Starting point is 01:14:54 We're financially stable. The kids survived. I have 10 year, you know, I'm a successful writer. Now my 40s is a different season. I'm not keeping babies. live anymore. Now I have elementary school age kids. This is much more a play of, it's parenting. It's like being there in your kids' lives. They need as much of my time as possible. They're developing themselves as people. And I have all boys and they really want specifically dad times.
Starting point is 01:15:17 Now parenting is this whole other thing. And in my work, like, well, you know, I got tenure and I become a successful writer. So now when I think about professional goals in my 40, they're much more, they're much more, it's ambitious in a sort of legacy way. Like, well, but what do I want to be as a writer. Like, where do I want to, what do I want to do? Like, what do I want to do? Like, where do I want to leave my footprint, right? And this is a very different feel. And what I want to do in academia? Like, I, I was focused on getting tenure in my 30s. My 40s now, it's like, where's the, like, footprint I want to leave in the world of scholarship, right? It becomes much more forward-thinking legacy. It's slower with my kids. It's not, how do I make sure that, like,
Starting point is 01:15:54 every kid has picked up and got the milk when they needed it. Now, it's like, how am I showing up in their lives in a way that, like, they're going to develop as good human beings? So, like, in this current season in the 40s, everything is more lofty or more legacy, more forward thinking. It's slower and more philosophical and the depth is, there's more depth to it. So, you know, every season is different. So those life seasons could be at the scale of decades. But those are just as important to understand as the annual seasons and even the smaller scale seasons. That's for your second question, coordinating with your wife. What I have found is like what I hear from people, I found in my own life.
Starting point is 01:16:33 It is really important that you and your wife have a shared vision of the ideal family lifestyle and that you are essentially partners working together to help get towards or preserve this ideal vision you have for what your family's life is like, where you live, the role, how much you're working, how much you're not working, what your kids are like, what their experiences with you. You need a shared vision of this is what our team, our family, this is where we, we want to be. This is what we think is what we're going for. Like my wife and I started making these plans as soon as we started having kids. And they evolved, but we wanted a shared plan.
Starting point is 01:17:08 And then it's like, okay, now how are we both working towards this? What's going to matter? Right. You need the shared plan. What happens if you don't have this? Well, you get the other thing, which is very common, especially among highly educated couples, which is we are both independently trying to optimize our careers and therefore see each other mainly from the standpoint of an impediment to my professional goals. And we have this very careful tallyboard over here of like, mm-hmm, mm-hmm. You did seven units less of this. I did four units less of this, so you get sort of potentially resentment.
Starting point is 01:17:40 But even without the resentment, it's a huge stress and anxiety producer. Trying to individually optimize two careers without any approach to synergy or any shared goal of where you're trying to get your life writ large is a source of tension, right? It's very difficult. There's these really cool configurations that might be possible for you and your family's life that will be missed if you're only myopically looking at your own career. You're saying, how do I just keep this going forward? How do I just maximize these opportunities? Because ultimately, what is going to matter most for your satisfaction in life is going to be the whole picture of what your life is like.
Starting point is 01:18:18 And so you need to be on the same page. You have your shared vision. And then you have your shared plans at different timescales. So how are we going to get closer to this vision? What are we working on for the next five years? Like, where do we want to try to get? How are we getting there? Okay, this year, like, what are we both working on?
Starting point is 01:18:32 What's our setup and configuration? What is the biggest obstacle we have to the shared vision of where we want our family to be? Oh, there's something about our work setups now that's incredibly stressful, and it means, like, our kid doesn't have this or this. Do we think it's important? Wait, maybe we need changes here. It opens up a lot of options when you're working backwards from a shared vision, as opposed to working forwards from just what's best for me and specifically what I'm working on.
Starting point is 01:18:54 So you got to be on the same page. Whatever that vision is, be on the same page. And again, as soon as I see couples do this, it opens up so many options for them in their lives. And it's a hard transition because, like, coming out of your 20s, it's all about, I need to maximize what I'm doing to get some sort of abstract yardstick of impressiveness. It changes when you're older. What is my family trying to do? You know, where do we want to be?
Starting point is 01:19:22 What do we want a typical afternoon to look like? What do we want our kids' experience to be like? What type of place do we want to live in? Like, what do we want to be doing in, like, the evenings and the afternoons? Like, who's around? When you get these visions really nailed down and you work backwards from them, all sorts of creative options show up. And yes, they might be options where someone is not optimizing their potential professional achievement.
Starting point is 01:19:47 It might be like, wait a second, I'm really good at this, so I could, like, do this at half the hours. And we could explore living over here. Like, you start to see these other options once you work together. So, right, that's a good call. We have a case study coming up that actually is someone who thought about these same issues. So I think this is well-timed.
Starting point is 01:20:05 All right. So, as previewed, I want to read a quick case study here from one of our listeners. It actually ends with a question, so it's kind of a hybrid. All right, this case study is from Anna, a repeat writer to the show. Anna said, I wrote you a while back to ask whether or not I should take a job at a startup, because I was bored at my cushy chief of staff job for a Silicon Valley tech company, where I only need to work part-time to fulfill my full responsibilities. I decided not to take the startup job, as you suggested.
Starting point is 01:20:41 This is where it would be unfortunate if she said, you know, that startup was OpenAI and you cost me $20 billion of stock options, you son of a bitch. No, she said, by contrast, shortly after I made this decision, the startup went belly up. All right. So, whew, we pushed her in the right direction. Next, I got a big promotion and pay raise at my current company, and have even more reason to believe that they don't mind me working part-time. I do work remotely, which makes it easier. Now I'm getting bored again, and I feel myself getting antsy. I decided to learn to paint part-time and learn a fourth language, all the while continuing to work less than 30 hours a
Starting point is 01:21:22 week at a job I do enjoy, although it's not overly stimulating. All right, so we've got kind of a cool case study there, where she resisted the urge to go to this high-stress job that was more impressive, and that turned out to be a good decision, because that company went belly up, and at her current company, she got more money and a promotion. She does, however, have a question. How do I continue to go down this path without letting my ego get in the way of the cool life I had built? Everyone at my company thinks that their job equals their life.
Starting point is 01:21:50 I feel like there is this constant pull to believe this is the case. Will this feeling ever stop? Well, Anna, it is hard. I've experienced this in parts of my life, where I have been intentionally having my foot halfway on the brake, where, hey, in this part I could go all out, and the other people I know are, and I'm not, and it's difficult. I hear this a lot in particular from lawyers, right? There's this movement I really like right now, which remote work in the pandemic really exploded, of lawyers at big law firms leaving the partner track and leaving the office and saying, I'm only going to bill half the hours I did before.
Starting point is 01:22:32 And you're going to pay me commensurately less. And there's no expectation now that I'm trying to, there's no ladder for me to go up anymore. But I'm really good at this particular type of law, and it's really useful to have me work on these cases. And so, like, you're happy to keep doing it. And I live now somewhere completely different. It's, like, much cheaper than the big cities.
Starting point is 01:22:50 So honestly, billing 50% less hours and working 35 hours a week, I'm making more money than anyone else in this town. And so this works out well. This is a movement I really like. And they also are struggling a lot with, yeah, but in my firm, if you get to partner, that's a big gold star. If you get to managing partner, even bigger gold star. We look at our bonuses and our salaries, and I feel like they're lapping me. So that's also a psychological issue. All right. So what helps here?
Starting point is 01:23:18 Partially, just recognizing that's just part of the tradeoff. Ego, accomplishment, this person's doing better than me, I think I'm smarter than that person, but they're moving ahead of me. That's never going to go away. So you just have to see that as one of the things you're weighing against the benefits. But two, you need much more clarity, probably, on what the benefits are. This comes back to lifestyle-centric career planning, right? Like you need, like we talked about with the last caller,
Starting point is 01:23:48 this crystal clear understanding of what matters to you in your life, and your vision for you and your family's life. And if your current work arrangement fits into that vision, which it probably does, Anna, because 30 hours a week with a high salary opens up a lot of cool opportunities for your life. If it's part of this vision that's based in your values and covers all parts of your life, and it's not just hobbies, it's not just, like, I'm trying to fill my time with hobbies. You say, no, I have an aggressive vision for my life. We live here. I do this.
Starting point is 01:24:17 I start this. I'm heavily involved in this. It's a full vision of a life well lived. Then it's much easier to put up with the ego issues. You say, yeah, but what I'm proud of is this whole life I've built, and I'm super intentional about it. It's a deep life. And my work is a part of, like, making this deep life possible. And what I'm proud of is this really cool life that I built.
Starting point is 01:24:39 The more remarkable you make this vision, the easier time you're going to have dealing with the work ego issues. The more remarkable the deep life you craft that this high-paying 30-hour job is part of, the better you're going to feel about it, even when your colleagues at work are doing 80-hour weeks and making more money and getting more praise. Because you say, what I'm proud of is not just my job. It's this remarkable life I've crafted. My impact on my family and my community and these other things I'm involved in and the ability
Starting point is 01:25:08 to do whatever it is you care about. So I would say, Anna, make your vision of your life much more remarkable than: I'm doing hobbies in my free time. You need to lean into the possibilities of your life, and when I say remarkable, I mean that in a literal sense, that someone hearing about you would remark about it. Ooh, that's interesting, what Anna is up to. You are a source of interested remark. That's what you want to get to. Now, it's possible when you do this exercise, the vision you come up with is super deep and meaningful, and
Starting point is 01:25:40 It's going to involve you actually doing a lot more work on something that's really important to you, and that's fine. But you just want to have clarity about what I'm trying to do with my life. And so it's a good question and a good case study, because just simplifying and slowing down without a bigger vision for what that slowing down serves can itself be complicated or a trap. If you slow down and simplify and then just find yourself trying to find hobbies to fill the time, your mind's like, what are we doing? So you've got to lean into the remarkability of your vision here, Anna. Given all that you've already done, I have no doubt that you're going to come up with something cool. So you'll have to write back in and let us know what you do. All right.
Starting point is 01:26:20 So we have a final segment coming up where I choose something interesting that I've seen during the week to talk about. But first, another quick word from a sponsor. Let's talk about Shopify, the global commerce platform that helps you sell at every stage of your business, whether you've just launched your first online shop, opened your first physical store, or just hit a million orders. Shopify can handle all of these scales. They are, in my opinion, and from just the people I know, the service you use if you want to sell things.
Starting point is 01:26:54 Like, it powers, Shopify powers, 10% of all e-commerce in the U.S. They are the global force behind Albers and Rothes and Brook, Lennon, and millions of other entrepreneurs of every size across 175 countries. I could think of a half dozen sort of writer-entrepreneur friends of mine who sell merch or other things relevant to their writing empires, that all you Shopify. And they love it because what allows you to do is have this very professional experience for your potential customers, very high conversion rate, it makes checking out very easy. It integrates so easily with other sorts of back-in systems. I mean, Shopify is who you use if you want to sell.
Starting point is 01:27:34 And when Jesse and I start our online shop for Deep Questions, which I think is inevitable, it's got to be inevitable, we are going to use Shopify for sure. Yep. Yeah. I think we're going to use that for sure. People were talking about it at the book event. You know, multiple people mentioned at the book event the VBLCP shirts. Yeah. People wanted those. Values-based lifestyle-centric career planning. People are like, where's my VBLCP shirt? And so when we sell those, we're going to sell those 100% using Shopify.
Starting point is 01:28:03 All right. So sign up for a $1 per month trial period at Shopify.com slash deep. and you type in that address, make it all lowercase. Shopify.com slash deep, all lowercase. You need that slash deep to get the $1 per month trial. So go to Shopify.com slash deep now to grow your business, no matter what stage you're in, Shopify.com slash deep. Also want to talk about our longtime friends at Roan, R-H-O-N-N-N-E. Here's the thing. Men's closets were due for a radical reinvention, and Roan has step up. up to that challenge with their commuter collection, the most comfortable, breathable, and
Starting point is 01:28:43 flexible set of products known to man. I really enjoy this collection for a couple of reasons. A, it's very breathable, flexible, and lightweight. When I've got like a hard day of doing book events like I've been doing recently, I can throw on like a commuter collection shirt or commuter collection pants. Men often underestimate how much your pants contribute to comfort. A thick pair of jeans can get pretty uncomfortable after a long day of running around L.A. So having lightweight, breathable, good-looking clothes makes a difference.
Starting point is 01:29:16 They have this wrinkle technology where you can travel with these things and the wrinkles work themselves out once you wear them. So you can look really sharp even if you've been living out of a suitcase, let's say, on a book tour. It all looks really good. It has GoldFusion anti-odor technology. It's just good-looking, incredibly useful clothing, especially when you have a very active day. The Rhone Commuter Collection is something I highly recommend. So the Commuter Collection could get you through any workday
Starting point is 01:29:45 and straight into whatever comes next. Head to Rhone.com slash Cal and use the promo code Cal to save 20% off your entire order. That's 20% off your entire order when you head to R-H-O-N-E.com slash Cal and use to code Cal. It's time to find your corner office comfort. All right, let's do our final segment. So what I'd like to do in this final segment is take something interesting that someone sent me or I encountered and then we can talk about it.
Starting point is 01:30:16 So today I actually want to go back to, I mentioned this email in the opening segment about AI. I mentioned that someone had sent me an email about General Ulysses S. Grant. This was Nick, hat tip to Nick. He sent me a scan from a book. I put the title page up on the screen for those who are watching the video. The book, it's an older book, Campaigning with Grant, written by General Horace Porter, right? So this is a sort of a contemporaneous account of what it was like being with General Grant during the Civil War. There's a particular page from this I want to read, page 250.
Starting point is 01:30:58 All right, so this is describing the general's actions in camp. He would sit for hours in front of his tent or just inside of it looking out, smoking a cigar very slowly, seldom with a paper or a map in his hands, and looking like the laziest man in camp. But at such periods, his mind was working more actively than that of anyone in the army. He talked less and thought more than anyone in the service. He studiously avoided performing any duty which someone else could do as well or better than he. And in this respect, demonstrated his rare powers of administrative and executive methods. He was one of the few men holding high position who did not waste valuable hours by giving his personal attention to petty details. He never consumed his time in reading over court martial proceedings or figuring up the items of supplies on hand or writing unnecessary letters or communications.
Starting point is 01:31:54 He held subordinates to a strict accountability in the performance of such duties and kept his own time for his thought. It was this quiet but intense thinking and the well-matured ideas which resulted from it that led to the prompt and vigorous action which was constantly witnessed during this year so pregnant with events. Now, we actually talked about this in my interview with Ryan Holiday on his daily stoic podcast, which was released over the past two weeks and two parts. We talked about General Grant. And the point Ryan made, which is a good one, is that it's useful to contrast Grant with General McClellan. who preceded Grant. McClellan was the opposite of quiet and deep and focusing on what mattered and thinking hard about doing that well.
Starting point is 01:32:38 McClellan, by contrast, was all activity. We got to maneuver the troops. I got to write some letters. I got to do this. Let's go over here. We got to make sure that this is working here. He was all constant activity, a consummate bureaucratic player. But he never actually pulled the trigger and made the attacks that mattered.
Starting point is 01:32:55 And finally, Lincoln said, I'm sorry, McClellan. Enough. Let's give this Grant guy a try. And Grant got it done, won the war, right? And so I think in here there's this really useful point about, you know, in an age of busyness, and of course in today's digital age, busyness has never been more amplified or pronounced. It is not ultimately the busyness that wins the proverbial war. It is not the reading over the court martial proceedings and writing letters and running around
Starting point is 01:33:22 and talking to people and giving your thoughts and doing useless maneuvers. That doesn't win the proverbial war. It's sitting down and thinking hard and getting to the core of it. This is what matters. Now let's go make this move. And you make the moves that win, you win the battles, you win the war. And that is an act of slowness. That is an act of slowing down, focusing on what matters, giving it careful attention, minimizing the non-important, and then pulling the trigger and executing on what matters and repeating. So in General Grant, we see a great demonstration of the power of slow productivity. In the moment, it might make you look like, and I'm quoting here, the laziest man in the
Starting point is 01:34:00 army. But when you zoom out, you're the hero who won the war. So I think that's a really cool example of a point that's all throughout my book, Slow Productivity: slowing down, focusing on what matters, doing fewer things, having a natural pace, but obsessing over the impact and quality of what you do. That is the formula for making a difference. You don't win wars through activity.
You win wars through smart strategy, and that requires quiet. That requires slowness. So Nick, thank you for sending me that excerpt. I think it's a great case study that some of the best ideas are some of the oldest ideas. There's nothing new under the sun. General Grant knew about slow productivity, and now we do as well. All right, Jesse, I think that's all the time we have. Thank you, everyone, for listening or sending in your questions, etc. I guess I should mention calnewport.com, no, thedeeplife.com, rather. Thedeeplife.com slash listen is where the links
Starting point is 01:34:53 sorry for submitting questions and calls. So please go do that. Send your topic ideas to Jesse at caldnewport.com. And buy my book, Slow Productivity. Find out more at calnewport.com slash slow. We'll be back next week. And until then, as always, stay deep. Hi, it's Cal here.
Starting point is 01:35:10 One more thing before you go. If you like the Deep Questions podcast, you will love my email newsletter, which you can sign up for at calnewport.com. each week I send out a new essay about the theory or practice of living deeply. I've been writing this newsletter since 2007, and over 70,000 subscribers get it sent to their inboxes each week. So if you are serious about resisting the forces of distraction and shallowness that afflict our world, you've got to sign up for my newsletter at calnewport.com and get some deep wisdom delivered to your inbox each week.
Starting point is 01:35:51 Thank you.
