Modern Wisdom - #1067 - Cal Newport - The collapse of modern attention (and how to get it back)

Episode Date: March 5, 2026

Cal Newport is a computer science professor at Georgetown University, a productivity expert and an author.

Has AI "workslop" damaged our ability to focus? When AI entered the workplace, many thought it would replace knowledge workers. Instead, we're flooded with AI-generated noise that feels productive but often isn't. In this new era, is the real competitive advantage simply the ability to focus?

Expect to learn what the future of work will be with major advancements in AI, what most people's relationship with productivity is like at the moment, why your ability to focus is becoming increasingly important, how people should deal with a lot of work messages, whether new AI tools have actually been as transformative as claimed, whether AI in the workplace has been a huge disappointment so far and why, and much more...

Sponsors:
See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get up to 20% off the leading longevity and cellular health supplement at https://timeline.com/modernwisdom
Get up to $350 off the Pod 5 at https://eightsleep.com/modernwisdom
Get the brand new Whoop 5.0 and your first month for free at https://join.whoop.com/modernwisdom
Get a Free Sample Pack of LMNT's most popular flavours with your first purchase at https://drinklmnt.com/modernwisdom

Extra Stuff:
Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 Dude, you must be feeling like Cassandra at the moment. So prescient: the distraction, the necessity of deep work, the inherent bombardment of our attention. Do you feel like you saw the future earlier than everyone else, even if at the time it maybe felt late, with Deep Work and focusing on quality over quantity and stuff? I mean, I think part of what I noticed was the present was crazy to me and no one else recognized it. So it was less even predicting the future. I feel like there was a time, God, it's like 10 years ago now, where I was looking around and, yeah, saying two things. One, social media doesn't make sense. Why are we all pretending like this is at the center of democracy and civic life and all business?
Starting point is 00:00:41 We all have to be on here all the time. And two, email doesn't make sense. Not what was going to happen in the future; I'm just looking at the way we were working then, with email, and Slack and Teams coming. Like, this completely does not make sense. You're switching your context once every two or three minutes. This is a terrible way to actually use your brain. So I never thought of myself as predicting the future
Starting point is 00:00:59 as much as just telling people that what was going on then didn't make sense. And everyone thought I was crazy. And 10 years later, it just kind of jumped from I was crazy to it's common sense. So it's not even that interesting that I'm saying it anymore. So I kind of skipped the part where it sounded prescient. Do you feel vindicated?
Starting point is 00:01:17 I think certainly on a couple issues. The social media issue was a big one, because I used to get a lot of flak for that, for going out, and I wasn't even saying that social media was bad or that no one should use it. Really what I was pushing back on was just the idea of ubiquity,
Starting point is 00:01:35 the idea that everyone had to use it. I said, this doesn't make sense. I get there some people this makes sense for. There's a lot of technologies that have markets that make sense for it. But why is there this pressure for everyone to be on these services? This is not going to a good place.
Starting point is 00:01:47 They're spending a lot of money to mine attention and they're going to get better at it, right? And at the time, this was considered crazy. What do you mean? Like, you want to use social media. I wrote a New York Times op-ed back. I looked this up the other day.
Starting point is 00:02:01 It was 2016. And it argued maybe social media is not the biggest thing for a young person to focus on if they're thinking about their career. That's what it was. It was like, focus on your career instead of social media, actually doing things well is what really matters. And you would think, you know, that I had just come on and said, like, America has an idea is done.
Starting point is 00:02:20 and grandmothers should be kicked. Like, people were upset about this. The New York Times commissioned a response op-ed two weeks later that went through mine and said this is what is wrong about Cal Newport's op-ed or whatever, because it caused such a stir to suggest it. And today it's boring to suggest, like, you know, social media has problems and most people probably shouldn't use it. People agree with that. That one I feel like people have come along to, and more and more people are being much more selective and minimalist about their social media.
Starting point is 00:02:50 The one that upsets me, though, is the distraction: email, Slack, constantly jumping back and forth between different things. That just got worse. I mean, I think people recognize it now. This is probably not a good way to work. But I thought, because there were dollars and cents here, this is less productive from an economic productivity standpoint. To have all of your workers changing their attention all the time, you're just getting a really low return on all the money you're investing in these human brains. So I thought, oh, this is dollars and cents.
Starting point is 00:03:16 This is the one that's going to change. Social media is fun. Like, that's going to be hard to change people's behavior there, but certainly this hyper-distraction thing in knowledge work, that'll change, because we're leaving money on the table. It hasn't changed at all. It's gotten worse. It's worse than it was. I'm at the 10-year anniversary now of the book Deep Work. So this month is the 10-year anniversary. Congratulations, dude. That's fucking seminal. Like, that has become a part of the lexicon. That's really, really cool. Yeah, but it's got me a little bit depressed because I've been doing
Starting point is 00:03:44 this 10-year reflection. Like, okay, it's been 10 years and the book was a hit and it's sold millions of copies, etc. And the issues I talked about are worse. They're like really worse than they were 10 years ago. So people know the problem. Nothing has changed. What does the data suggest around the worstness of it now? The one I've been following, the study that I think is useful as a trend line, is Microsoft actually does this annual report where they gather data from Microsoft 365. So it's like Office and Word and PowerPoint and Excel. Nowadays using the sort of web-based version of these is very common. So they can gather data from just tens of thousands of knowledge workers actually using all these different tools. And the latest report they put out in 2025 now
Starting point is 00:04:29 has the interruptions at once every two minutes on average. So it's just gotten out of control. Switching to a communication tool once every two minutes. They also found, in the latest report, and this is depressing to me as well, there's one time in the week where they see a notable rise in the use of the non-communication tools, so actually using the core productivity tools like Word or PowerPoint, and it's Saturday and Sunday morning. So we've just put the work off until the weekend, when there's no expectations of responses
Starting point is 00:05:00 and spend the actual weekdays talking about work, which I just don't get. Like, that is not economically productive. Like, companies are leaving money on the table, but it's just where we are. We really can't quit this behavior. Isn't it interesting that you had to try and appeal to a very utilitarian
Starting point is 00:05:16 approach for this. You didn't say, this is probably making staff miserable, it's not a good use of time. We've got some really strong evidence that suggests that doing one thing and getting better at it over a protracted period of time actually makes you feel more satisfied. You get into a flow state, et cetera, et cetera. You look back on your day and you can look at the things that you did. None of that, which is the much more immediate, experiential way that people interface with distraction. You tried to appeal to the bottom line, which you thought, well, incentives: align the fucking incentives. And that didn't work, which obviously means also that people's level of administrative burden
Starting point is 00:06:01 misery is also coming along for the ride at the same time. Yeah, it's a fucking mess, dude. And I think, you know, even with what I do, it's not a very big team, but Slack... Slack is like, it's so useful and invites so much chaos at the same time. It is. And was Slack... Slack wouldn't have been that big during Deep Work,
Starting point is 00:06:22 I'm going to guess. It wasn't big. It wasn't out yet. I talk in Deep Work about these very early instant messenger tools that no longer exist, like HipChat. It was just emerging among the programmer class. I was basically saying, there be dragons, like, let's be careful about that.
Starting point is 00:06:37 But I wrote an article about Slack years later when Slack was bought. So I think Salesforce bought Slack. I wrote an article about it for the New Yorker. And I think the title of that article gets to the core of the issue you're talking about. The title was Slack is the right tool for the wrong way to work. And I think what happened, here's my whole theory on Slack, is that when email arrived, it moved us to this new style of collaboration that I call the hyperactive hive mind, where we'll just figure things out on the go with ad hoc back and forth, unscheduled messaging,
Starting point is 00:07:08 just sort of like shooting messages back and forth. We'll figure things out like we're all just kind of connected all the time. That's a terrible way to work for all the reasons I talked about. It's distracting, it's context switching. You can't do anything deep. It's hard to produce value. But if that's the way you're going to work, email clients are not a very good tool for that. You have threads and it's clunky and it's hard to search through your email and find what you did before.
Starting point is 00:07:30 So Slack came along and said, look, if this is the way you're going to work, hyperactive hive mind constant back and forth ad hoc coordination will build you a better tool for that so that's why people both love and hate slack it's a really good tool for that style of collaboration it works really well but that style of collaboration makes us miserable so it's this weird love hate relationship we have like this works great i hate the thing that is making easier why does it why does it make us miserable that style of collaboration because our brain isn't meant to switch our target of attention that quickly it just takes us a long time if we're talking about targets that are abstract and symbolic, it takes us a long time to switch from one to another. Physical world targets, we can switch quickly, right? We're wired for that.
Starting point is 00:08:12 If there's a tiger's roar, I can, boom, 100% attention, what's going on over there. But when we're thinking about abstract things, information, ideas, things that are symbolic and in our head, that's us basically reappropriating our brain hardware to do something we're not evolved to do. It takes a lot of effort to do symbolic thinking, to think about abstract concepts. And we know it takes 10 to 20 minutes to fully change our attention context from one abstract target to another. It takes a long time. That's why if you sit down to write something, everyone has this experience, the first five or 10 minutes, like, man, this is terrible. Like, I'm making no progress or whatever. And then after a while, you're like, oh, this is starting
Starting point is 00:08:52 to flow, like it's going better. That's because it took that much time for your brain to load up all of the relevant information and to inhibit all the unrelated circuits and get your brain really ready to do that activity. So if you now interrupt that brain once every two minutes, it never can lock in on anything. And what you feel then is this sort of diffuse cognitive friction that we begin to experience as fatigue, cognitive fatigue. And it's a really frustrating experience. It's why if you go to an email inbox, you're like, I have time. I'm going to empty this inbox. I'm going to go message by message. Here's the best way to do it, right? On paper, I'm going to go message by message and I'm going to answer these messages. Why does that get so hard? Why do you find yourself like jumping around
Starting point is 00:09:31 and looking for easier messages, because each message is a different context than the other, and that's torture for the brain. It's really, really hard to go from, all right, this is a complicated question when my employees is asking me, and now this is a completely different issue, completely unrelated to that,
Starting point is 00:09:47 where I have to think up like a good title for something, and now here's a completely different issue, and you're trying to switch one after another. Our brains aren't wired for that. It really makes us unhappy. What would you say to someone who wants to try and retrain that attention? maybe they're going to try and make some sort of a stand inside of Slack
Starting point is 00:10:08 and say I will only be available at certain times of the day. But regardless of the inbound, let's say that they fix the inbound, because that's a totally separate problem. That's much more sort of structural, unless you've got any advice for that as well. But how does someone go about re-appraising, retraining their mind away from that?
Starting point is 00:10:29 because we do become, we get like Stockholm syndrome. The Slack Stockholm syndrome where our captor tormentor becomes the way that we operate. We've got a favorite little ways of working and it feels like we've done. But then at the end of the day, we look back and have this sort of odd malaise thing about, well, what did I actually do today? What got done? Well, not much. Not much got done.
Starting point is 00:10:57 Yeah. Well, it's hard unilaterally. If you've changed nothing else about your workload or your communication protocols, if you just say, I'm not going to be on Slack from this hour to this hour. I only check my email twice a day or whatever that standard advice was from 15 years ago, it doesn't work well because if you're involved in a large number of projects that are timely, and the way progress is going to be made is with ad hoc back and forth messaging, you have to be in there checking.
Starting point is 00:11:23 That's the brutal part of the hyperactive hive mind is that it has defined. fences to its elimination built into its very nature. Because if this is how we're going to figure this out, we have to have five or six back and forth messages to figure out what we're going to do about this client coming tomorrow. We have to get this done today. That means you have to see my next message right away so that we have time for me to answer you and you to answer me and for that ping pong match to happen.
Starting point is 00:11:46 That means you have to be checking your inbox or Slack constantly. Otherwise, you're not going to see my next message in time for this whole game to unfold. So the very nature of that style of collaboration demands constant inbox checking. which is what I think people often get wrong about this. When I think about things like Slack or email, they think too often about either information, like, oh, I've got so many messages in my inbox that I don't need.
Starting point is 00:12:08 I have all these newsletters and spam. That's not a problem. That's a minor problem. That's an easily solvable problem. It's like cluttered. That's not a big problem. The issue is actually my collaboration style requires me to be in there. Because if I miss messages in a timely fashion, everything falls apart.
Starting point is 00:12:27 And so the issue is not how do I interact with my inbox. It really has to be how do I change the way the inbox is being used. So I ended up, I felt like had three big ideas on this that span three different books, right? So in deep work, like one of the big ideas was you can train your personal ability to focus. Focusing is really important. Putting aside for now all the things trying to prevent you from focusing, you have to practice it. And if you practice it, you'll get better at it. And if you get better at it, you'll be a superstar because, like, that's what matters in the knowledge economy.
Starting point is 00:13:00 Everything good comes out of focus. Then I wrote a book after that called A World Without Email. And in that book, I was arguing the thing I was telling you about: hyperactive hive mind communication is a problem. This is a real problem. The fact that we are using this method for coordination is causing all this trouble, is really causing problems. And I went through all the data and all the research and made the case. This is super nonproductive.
Starting point is 00:13:24 I went back through the archives, the New York Times business section in the 80s and 90s, to exactly document the rise of email and how people were talking about email when it first came onto the business scene. And I made the case: the way we work is arbitrary. This hyperactive hive mind was not a plan. It wasn't seen to be more productive. We stumbled into it. So we really should change it. So that was that book.
Starting point is 00:13:45 And then the most recent book, Slow Productivity, from a couple of years ago. In that book, I argued, oh, wait a second, workload matters too. The other piece of this problem is we don't put any limits or transparency on how many things we're working on. And if you pile too many things on your plate,
Starting point is 00:14:01 too much communication interruption becomes unavoidable because they each have little issues they need you to deal with. So I've now, over this 10-year period, have kind of broken down this problem and there's like training yourself to focus, fixing your communication protocols, like how do I communicate in a professional context,
Starting point is 00:14:17 how do we collaborate, and then managing workload to be more reasonable. All three of, and this might be why this problem's not solved. There's no one thing to fix, right? So all three of these things go into the issue and they're each complicated. What across those three books,
Starting point is 00:14:32 all of which are great. everyone needs to go and check out. I think we've done episodes about each of them, so they can just go and listen to those and then buy the books. Looking back across this portfolio of productivity advice, what have you heard from readers or what has been the stickiest strategies for you? You look back and you go, okay, that's the 80-20 of what I've published over the last three books. to me, I think the big two that give you the biggest results, and I'll tell you the one that's the hardest, and that's why this book probably sold the least,
Starting point is 00:15:10 the big two that give you the biggest results: taking focus seriously, like a skill. That really does make a difference. Practicing focus, you get better at it, and it makes a demonstrable difference. You sit down to work, and you're just producing better stuff, or you're trying to pick up some complicated new thing,
Starting point is 00:15:28 like, oh, God, I can learn this faster. That makes a huge difference. And then the second one, which was more recent in my life, was, oh, you really have got to control the workload. So much is downstream from how many things you've agreed to work on. You have to leave the mindset of: everything I say yes to brings with it value, so saying yes to more things is just going to aggregate more value. That's not the right mindset. That's not the way it works. It's a nonlinear, you know, reward function there.
Starting point is 00:15:54 There's a certain point as you add more things. So not only does values stop growing, it begins to go down. on the other side and that there's a real saying no to many more things is actually a way to optimize reward and output, which is not natural. It doesn't make sense at first. It doesn't feel like common sense. So workload and focus training, you can control those more than you think and you're going to have huge results from those. A quick aside, do you remember learning about the mighty mitochondria back in grade school? Here's a quick refresher. It's the tiny engine inside of your cells that powers everything you do. But here's what they didn't teach you. As you age,
Starting point is 00:16:31 your mitochondria break down, that's what can cause you to feel tired more often, take longer to recover, and wake up feeling like you're never fully recharged no matter how long you sleep. I started taking Timeline nearly two years ago because it is the best product on the market for mitochondrial health and that is why I partnered with them. Timeline is the number one doctor recommended urolithin A supplement with a compound called Mitapure. Basically, basically, it helps your body clear out damaged mitochondria and replace them with new ones. Mito Pure is backed by over 15 years of research, over 50 patterns and nearly a dozen human clinical trials. It was recommended to me by my doctor and that is why I've used it for so long since
Starting point is 00:17:10 way before I knew who even made the product. And best of all, there's a 30-day money-back guarantee plus free shipping in the US and they ship internationally. So right now you can get a free sample or get up to 20% off by going to the link in the description below or heading to timeline. com slash modern wisdom. That's timeline.com slash modern wisdom. The learning to say no thing is interesting, especially as people progress inside of their career and they get better at what they're doing, they have to learn to be able to say no to opportunities that they would have only begged to have had the opportunity to be in the room to have maybe said yes to only half a decade ago. in that time you've had to go from needing that opportunity to actively being able to say no to
Starting point is 00:18:00 something that's probably better than it. Alex, my friend taught me about, you remember in the Matrix, the woman with the red dress and Neo turns around and he says, we're looking at me, we're looking at the woman in the red dress, look again, and it's an agent with a gun in his face. And the analogy that Alex used was, now imagine that she's not a 10 out of 10, but imagine a thousand hypothetical 1,000s out of 10 and you need to be able to say no to them which previously you didn't even know existed. So this I think the kind of it's almost like reverse entropy or habituation, you know, your opportunities get better, which means that your capacity to say no needs to get better more quickly than that. You can't be chasing your tail trying to learn to be
Starting point is 00:18:48 able to say no, less quickly than the opportunities get more seductive. Yeah, it's almost perverse the way that works. It's like when you have all the time in the world, all you want is opportunities. And then when you have opportunities, all you want is all the time free in the world. I had to change, I don't know what you do, but I had to change my rule at some point. This was hard for me to the default, though. Like, that's just how I have to operate now. Because as soon as you try to have a triage rule, well, look, I'm not going to do this.
Starting point is 00:19:18 opportunity unless I only do speaking gigs that have this much money or this or I only going to go meet with someone if they're like this interesting or this or that eventually the number of things that satisfy that criteria over moment just as well it's just so I've I've just had to fall back on the default no you're talking to somebody who came back from a two-day trip to Qatar at the start of this week so I spent as much time traveling as I did in the country to to give a talk. And as I looked around, there was this first, the first night dinner. There was maybe 300 people there. And I'm talking to Logan Paul and Stephen Bartlett's over his shoulder. And the CEO of Qatar Airways is here and the Middle Eastern director for Metas over there.
Starting point is 00:20:02 And I was looking around thinking, everybody here wants to be here. It's very exciting. Everyone's really lovely. But also everyone here can't say no. Everybody in this room is chronically incapable of saying no. I said no to this one several times, by the way. The amount of indicts is the Qatar and the UAE and other places. I have said no. All right. Well, consider me a fucking, consider me a slut compared to you, Cal. Whatever it.
Starting point is 00:20:29 I must be easy and an easy booty call. They tried to get Cal Newport. We couldn't get Cal so we'll rink for us instead. The default, no. Oh, man. Yeah. It's crazy the things you end up saying no to after a while. But I mean, there's a currency shift.
Starting point is 00:20:41 For me, time to think is such a valuable. That's a more valuable currency than money, right? You get to a point where like, oh, I'm doing fine. but if I don't have time to think, what's the point? And then that becomes this really rare currency that's much harder to get a hold of. And that's the only way I can protect it now
Starting point is 00:20:59 is anything that requires me to go somewhere. It's a default note. And then I can talk myself out of it later. I'm like, you know what? I could bring my family with that. We could have a trip. Right. So actually, you know what,
Starting point is 00:21:08 I will do this. Or, you know, I just did a master class course released this week. I spent a year and a half say no to that. And then like eventually I sort of talk myself, I talked to some people. They're like, we'll come to D.C. to do it. I talked to, you know, James Clear had just done one.
Starting point is 00:21:28 And I had a good talk with him about it. And I was like, you know what? This will be interesting. And it took me a year and a half. But I finally talked myself and do it. So I will say yes, but it just, the default no means that you don't have to. A high standard.
Starting point is 00:21:40 Yeah, you don't have to run it through the ringer. And then you're like, okay, if it really sticks with me, then maybe I'll be like, all right, all right, I'll do it. How much should people actually be working? Well, it depends what you mean by work and what they're doing, right? Because think about it. Let's say you're an athlete. It's super well defined.
Starting point is 00:21:56 Like, here's optimal training. Here's optimal rest. And like, that's what you should be doing. Like, that's really clear. We don't have those limits as clear in the culture for other types of jobs that we probably should. If you're at a high-wage hourly billed job, like a law partner at a big law firm, there the economic model is the more you work, the more profitable it is.
Starting point is 00:22:19 And we'll pay you big money to do this, but you should basically work as much as you can that your body will take it. That's the economic engine. That's why I think those jobs are scary. If you're a novelist that writes literary fiction, so you're like, I really need to be
Starting point is 00:22:33 award nominated for each book or I'm going to fall out of this slip stream because no one's going to read these books unless they're some of the best books, then you should be doing like four hours in the morning and then just disappear, right? Like, you should be doing
Starting point is 00:22:47 very little more work than that because almost anything else will get in the way of you, like, sticking in that position. And so it all just depends on what you do, you know. Didn't you look at some experiment of shorter work weeks?
Starting point is 00:23:00 Yeah. What did you learn from that? There was a lot of these right around the pandemic right before and then right after in Europe and Iceland. So some European studies, I think Germany did one,
Starting point is 00:23:13 Iceland did one, UK did one. And they were looking at four-day work weeks. So what would happen if we take away one day? The interesting thing about those experiments is what they found is whatever measures of productivity they came up with, they didn't get worse, which I thought was very interesting. They took a day away, and yet the perceived productivity or the measured productivity didn't go down. And there's two ways to look at it.
Starting point is 00:23:37 The one way to look at it is to say, oh, this means that we should have a four-day work wait because things didn't get worse. and okay, maybe, maybe, right? But to me, there was like a bigger observation that came out of that, which is like, wait, so what were we, what are we doing during the work days? Like, this, there's something going on here that should really catch our attention.
Starting point is 00:23:55 What does work mean that we could take an entire day off the table with no other preparation and the valuable stuff being produced doesn't change? This tells us that like, whatever we're doing while we're sitting here in work is not just sitting down and trying to produce value. we clearly have all sorts of other sorts of distractions going on, context switching, time that's being devoured, Parkinson's Law is at play. Work must be broke.
Starting point is 00:24:20 To me, that was the more important observation. Is that like if you can take away a day and nothing changes, then I don't think we're doing in the office what we think we're doing in the office. Parkinson's Law was on the tip of my tongue, work expands to fill the time given for it. And if you give people five days, they'll take five. And if you give them four days, then they'll do it in four. And look, everybody knows just how much time they waste, not doing the work, not doing the thing that they're supposed to do. And this isn't victim blaming.
Starting point is 00:24:50 This is a lot of the time dealing with admin and necessary meetings. You can't get out of them. You have to be there for whatever reason. So it's not as if it's bottom up. A lot of it is top down dictated. This is the environment that you work in and you have to do this. But even outside of that, when you do have your one hour in between meetings, your inability to not...
Starting point is 00:25:11 I remember when I used to run nightclubs and I get in at 2.30 in the morning, the final part of the night was cashing the till. So this was before we switched to tickets, which was sort of the late teens just before COVID, digital tickets online, which meant that you didn't have to cash as much money in the till. But before that, it was all, you know,
Starting point is 00:25:32 £5 and £10 and £20 notes and single pounds and all the rest of it. And I would go into the office with the manager of the venue and we would be counting the money. But this is the final task. It's the final bit of the night. It's fucking 2.15 or 2.30 in the morning. We've just taken the till off, as it's called.
Starting point is 00:25:48 Anybody that's coming in doesn't get to come in, we're not going to take any more money. And I'm sat up there doing, like, light-lift mental arithmetic, but for me, somebody who hadn't done maths since I was 16, it was a relatively heavy lift. Flicking through the money, flicking through the money. There's these huge fluorescent overhead lights. And then I get to drive home.
Starting point is 00:26:07 and I'm, like, thinking about it, and I've got to go put the money in the till, and I go to write it in the spreadsheet, and then I get into bed, and as I go into bed, my eyes, below my eyelids, would start flicking left and right. I wouldn't be able to turn myself off. And I'm also doing this, let's not forget, in a sweaty, beer-stinking office above the room. I've had to walk through the club, I've had to shout at the hostesses: one of them's getting fingered on the dance floor, stop doing that, you're supposed to be at work. The DJ's pissed.
Starting point is 00:26:36 I need to, you know, it's chaos. And I've tried to coordinate this orchestra of bullshit. And then I've had to do mental arithmetic. And then I get to drive home. And then I'm like, okay, chill out, brain. It doesn't want to. And that eyes moving left and right thing, I think, is the sort of optical equivalent, ocular equivalent of how people feel when they finally get a moment.
Starting point is 00:26:58 Okay, all of my stuff is done. And then they try and sit down to work on the thing that, ostensibly, they're actually there to do, right? Because all of the other bullshit, the meetings, you're not there to do the meetings, you're not there to do the Slack. All of that is foreplay to get you to do the thing that you're there to do, and then you sit down to do the thing you're there to do, and your eyes moving behind your eyelids is the equivalent.
Starting point is 00:27:22 Swiping and moving across the screen, and you've got a few different other... well, just check on this thing. Like, what the living fuck is going on? The environment that I work in has trained me out of being able to do my work. Well, what are we meant to do? Like, what would be the ideal work day in an office environment, one that would actually match the human brain? It would probably be: you come in, you work on something hard for a while. Like, that's what you do in the morning. You have lunch. And then you, like, catch up with,
Starting point is 00:27:52 have some meetings, talk to some people, hey, what's going on, and, you know, do some tasks. And that's your day. That's basically what we can do, like, two things. One big burst of, let me focus on something hard. And then we can kind of come down the mountain after that with, let me chat with people, what's going on, some decisions need to be made or whatever. That's probably about optimal. Instead, we juggle a dozen to two dozen tasks that all have their own demands. They all have their own communication needs. This is why the Microsoft data shows, oh, the work happens Saturday and Sunday morning. It is really hard. You can't go from... and meetings are very hard as well. We think, like, oh, I'm not actually doing work during meetings,
Starting point is 00:28:30 but what you are engaging in a meeting are all the parts of your brain that deal with social interaction. And those are a large part of your brain, and that is a fraught and mental-energy-consuming activity, to sit in a room or on a Zoom screen and try to manage all these different people, and how do I look and what am I saying and what's going on here and I have to say the right things. It's draining. And when you come out of something like that, it's difficult just to jump right back into something else. And if you come out of something like that and there were a lot of obligations generated, oh, we discussed in this meeting things I need to do, and now you try to go straight from that meeting into another, well, now that's really in the back of your head.
Starting point is 00:29:08 What about this? What about this? We can't forget this. We just made new obligations. That feeling of fatigue, it really is fatigue, is what it feels like, a mental fatigue. Like there's sand in your brain, sand in the gears of your brain. That's the state that a lot of people who work in front of a computer screen, that's the state they're in most of the day. And they don't even realize, oh, that's a bad feeling. That's a negative state. That's not how it needs to feel, because they have nothing else to compare it to. Yeah, the amount of things we're doing, the amount we're trying to switch back and forth, I always thought that part of the problem was that a lot of our current thought about work culture and hustling, of what it means to produce, was influenced by Silicon Valley in the 90s and 2000s, because that was considered this very ascendant part of the economy, you know, through the 2000s, through the Steve Jobs era.
Starting point is 00:29:57 We looked at Silicon Valley, like, these are the coolest companies. They're doing all the coolest stuff. over there, I think they adopted a model of work that was very inspired by computer processors, right? So because that was what was in the air in the 80s and 90s in Silicon Valley was the computer processor words, you know, the 386 versus 486 versus the Pentium,
Starting point is 00:30:17 and it was all about speed. And the thing with a computer processor, if you're a computer type, what matters is you never want the pipeline to be empty, right? You want to always make sure you have stuff for that processor to do so it never wastes time. The processor will, every command you give it,
Starting point is 00:30:35 it operates the same as any other. It can switch, it doesn't care what they are. It just sits there and operates one command after another. And the whole game with getting processors to be effective is like, don't have downtime. Like the real fear, I can put on my computer scientist hat for a second.
Starting point is 00:30:47 The real fear in computer processor design is that you sometimes get to a command that's going to generate a huge delay. So you say, like, oh, go get something from memory. That takes a lot of time. From the perspective of, like, a computer processor cycle,
Starting point is 00:31:01 it's just sitting there, cycle after cycle, doing nothing while you're waiting for the memory bus or whatever. So we invented these processor pipelines like, oh, while we're waiting to get something back from memory, here's some other stuff the processor can run
Starting point is 00:31:13 so that it's never not working. And the idea was you want to move as fast as possible and you never want to have downtime. And that's how you get the most out of a computer processor. The human brain is, like, 180 degrees different. We can't just switch back and forth between unrelated commands. You switch me from one thing to another and boom, 30 minutes of my mind is fried.
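Cal's pipeline point can be written out as a toy calculation. Everything here is invented for illustration (the 100-cycle latency, the instruction mix); it is not real processor design, just the bookkeeping behind "never let the pipeline be empty":

```python
# Toy model: a processor that stalls on every memory fetch wastes cycles,
# while one that overlaps independent work with in-flight fetches stays
# busy. All numbers are made up for illustration.

MEMORY_LATENCY = 100  # assumed cycles per memory fetch (invented figure)

def naive_cycles(commands):
    # Serial execution: sit idle for the full latency on every load.
    return sum(MEMORY_LATENCY if c == "load" else 1 for c in commands)

def pipelined_cycles(commands):
    # Idealized overlap: run other 1-cycle commands while fetches are in
    # flight, so total time is bounded by whichever resource is busier.
    compute = sum(1 for c in commands if c != "load")
    memory = MEMORY_LATENCY * sum(1 for c in commands if c == "load")
    return max(compute, memory)

program = ["load"] + ["add"] * 300 + ["load"] + ["add"] * 300
print(naive_cycles(program))      # 800: two 100-cycle stalls plus 600 adds
print(pipelined_cycles(program))  # 600: the adds completely hide both fetches
```

The same arithmetic is why keeping the pipeline full pays off for silicon, and why the analogy fails for brains: a human "context switch" adds cost instead of hiding it.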
Starting point is 00:31:34 Humans operate very differently. But I think Silicon Valley said, here's the thing we're going to associate with being really good at your job. It used to be, I don't know, your skill. It was Don Draper in Mad Men. Remember that conception of what it meant to be good at your job? They weren't showing Don Draper grinding it out. Like, man, Don Draper is in the office till 3 a.m. every night or whatever.
Starting point is 00:31:56 No, he took the 5 o'clock train back to, you know, Connecticut or whatever. It was that he was really, really good at coming up with ad copy. He was good at what he did. That's what you used to respect. And then after the 80s, 90s, Silicon Valley became pervasive. Like, no, what matters is you never have a no-op. You never have a down cycle. You might as well say yes to more things.
Starting point is 00:32:17 You might as well get more emails. You never have time where you're not working. That's what productivity is going to be. And that was a disaster for the human brain. If you struggle to stay asleep because your body gets too hot or too cold, this is going to help. Eight Sleep just released their brand new Pod 5, which includes the world's first temperature-regulating duvet. Pair it with their smart mattress cover, which cools or warms each side of the bed up to 20 degrees, and you've got a climate-controlled cocoon built for deep, uninterrupted rest.
Starting point is 00:32:44 The new base even comes with a built-in speaker so you can fall asleep to white noise, nature sounds, or a little ambient Taylor Swift, if that's your thing. And it's got upgraded biometric sensors that quietly run health checks every night, spotting patterns like abnormal heartbeat, disrupted breathing or sudden changes in HRV, which is why it has been clinically proven to increase total sleep by up to one hour every night. Best of all, they've got a 30-day sleep trial so you can buy it and sleep on it for 29 nights, and if you don't like it, they will give you your money back, plus they ship internationally. Right now, you can get up to $350 off the Pod 5 by going to the link in the description below
Starting point is 00:33:18 or heading to eightsleep.com slash modern wisdom, using the code MODERNWISDOM at checkout. That's E-I-G-H-T-sleep.com slash modern wisdom, and MODERNWISDOM at checkout. There's definitely an element of this that it's very public productivity. It's very obvious. Look at how hard I'm working, right? If you're the one that replies quickest on Slack or on email,
Starting point is 00:33:41 then it's evident that it looks like you're the one that's working hardest, because you're the one that's most responsive. Whereas the person who's silently working on their own, they can't broadcast it, by design. They can't broadcast it to everybody else. So, yeah, this non-obvious productivity in a way is way less sexy. So I think, you know, the new elephant in the room is AI and how that is enabling an increase in pace of output, but almost certainly a decrease in quality. So fold AI into your existing worldview, because to me it just seems like a huge force multiplier
Starting point is 00:34:28 for what already was pretty sloppy: Slack, email, async communication that's always on, people taking their work home with them, never being able to not context switch, not focusing on quality and instead focusing on quantity, not being able to dial themselves in to do deep work for one moment. And now that is enhanced and magnified even more by the use of LLMs to help you put out more, to help you think less. So your focus is actually... you're in Slack with your LLM. It wouldn't surprise me if there is an LLM integration into Slack at some point in the future. I don't know whether there is, where you can just do it in there. So you're just talking back and forth in one fucking workspace. Talk to me. Fold AI into this. You must have a million thoughts.
Starting point is 00:35:15 There's a lot going on with AI. I mean, I think in its current instantiation, so we think about, like, an office worker. For the most part, put programmers aside. I'll get back to them. But non-programmers are really interacting with chatbots. Like, that's the main way they're interacting right now with AI. It's exaggerating exactly what you said. For a lot of people, it's exaggerating the problems that already exist. Now, there's a term for this that comes out of a Harvard Business Review article from last year.
Starting point is 00:35:39 They call it workslop, obviously put together as one word. And they have some pretty compelling data on this. So what's workslop? Define workslop for me. So workslop is AI-generated work product in the knowledge work sector. So, like, emails, reports and PowerPoints or what have you that are generated quickly by AI, but they're so low quality that they make everyone else's jobs harder. This seems to be the defining aspect of workslop. It's quick to produce, but it's so low value that no real progress is made.
Starting point is 00:36:12 So like you get a work-slop email from, you know, your boss or whatever. And like, this isn't useful to me. It's this weird, wordy thing that's broken up into sections. And it doesn't get to the core of the problem we have to solve. So you made that email quick. But in the bigger scheme of things, we made very little progress towards what we want to do. Or you put together a work-slop PowerPoint presentation so that you would have something at the meeting. But now we're spending 20 minutes looking at this nonsense.
Starting point is 00:36:38 And it's not helping us. It's not helping us actually do things. So this is what's happening, or at least my fear. I mean, the reality is most people aren't using these tools in the office. Let's just set the reality, right? But for the people who are using them right now, which is a healthy percentage, but it's not... Do you know what the numbers are? Well, it's difficult, because there's a lot of fudging of the numbers here.
Starting point is 00:36:59 There's a lot of mistaking have used or experimented with, with are regularly using them. So I see this mistake happen a lot, and so it's difficult to get good numbers. was like famously, maybe it was an Ethan Mollock article where he was talking about in my world, like academia, the homework apocalypse. And he's like, look at this study. Students just don't do work anymore. Nine out of ten are just using chat bots now. But you look at that study and what it actually said was nine out of ten had tried using a chatbot at least once. And if you looked at who's using them regularly, it was like two out of ten. Right. Because like for most of the students it wasn't helping them the way they thought it would. So I don't know what the numbers are.
Starting point is 00:37:41 If you account, like, advanced Google use, I think it's larger. Like, yeah, I search for information on this instead of going to Google. That's larger. But in terms of people who are actually making office work product out of it, I think it's smaller than the people who follow AI commentary or talk about it on AI Twitter or AI YouTube. I think it's a lot smaller than they probably assume just because in their world is pervasive. But the people who are using it, this is the problem. They're trying to avoid, this is my theory on this. is like, how is AI helping like an office worker now?
Starting point is 00:38:11 Well, their brain is exhausted from all this context switching. So what problem are they looking to solve? They're looking to avoid having to do hard moments of cognition because their brain is so fried. It's really difficult to, like, solve the blank page problem. Oh, God, I got to send this email. I got to, it's a blank screen. I got to start writing from scratch.
Starting point is 00:38:29 That's really hard. Yeah. And if I have something. The inertia that they've been trained out of overcoming because of the prime. It's almost like a one-two punch. Yeah. Humans were primed to not like heavy, well, we already didn't like heavy cognitive load, then our ability to deal with it and get through that initial resistance was decreased
Starting point is 00:38:49 through the context switching. And now we're, don't worry about it. Don't worry about it, carbon-based life forms. The silicon-based life forms are coming. And let's throw in one other aspect in there. Also, outside of work, we had these distraction machines in our hand that were further degrading our comfort with concentration because any possible moment of interest. we would have had even outside of work,
Starting point is 00:39:10 why would I do that when TikTok has like the perfect dash cam video of, you know, a Karen getting punched or something like, I got to watch that, right? And so we've come, we have that revolution comes along, plus the email revolution. We completely atrophy our ability to think and we exhaust our brain. So the other aspect of it, as we talked about, it's really exhausting to go through your day,
Starting point is 00:39:31 context switching. So like, I don't have any reserves left to write this PowerPoint. That seems impossible. And then AI is like, hey, hey, hey, hey, hey, hey, hey, I can do it for you. It'll be fine. It'll be good enough. It'll be good enough. You're like, oh, okay, I can smooth over.
Starting point is 00:39:44 I use this analogy to New Yorker piece last year. It's like it takes your effort graph looks like spikes, like an EKG or something like that. And AI smooths over those peaks. And so you don't have to, your peak concentration required can come down. Like, well, you can fill the blank page. And then maybe I have to work with it a little bit. That's easier than doing it from scratch.
Starting point is 00:40:04 But the stuff being produced is no good. And so I feel like workslaw, it's almost less of a it's less of a critique of AI than it is AI making obvious a problem with the way we were already working. I think that's what's going on there. I think this is even happening with computer programmers.
Starting point is 00:40:24 This is considered, you know, heretical right now. I guess I'm used to being yelled at. People are really excited by this workflow where I have seven or eight Claude Code agents going concurrently, producing code and testing them, and I'm just a manager of all these different processes, and they're all producing this code on my behalf, and it feels really cool and interesting. Like, this has to be the future. I don't know that that is. I mean, I don't know the context.
Starting point is 00:40:51 The problem is outside of like demos or internal tools or just having fun, that's not really code you can trust very well. And it does, though, completely lower the peaks of being a computer programmer those peaks of cognition, it's much, much easier to manage a bunch of cloud code processes than it is to come up with an algorithm. You have that same blank page. So I think the jury is still out on even where we're going to end up in the AI impact on programming. I don't know where it's going to end up, but the way it's being talked about in the last few months after the latest cloud code update, which is sort of, I guess that's something humans don't do anymore. I don't think we're there ready to say that yet. I get popped with Claude Code
Starting point is 00:41:33 ads. I get, you give me a terminal, I have no idea what to do. I'm like, I'm like, you know, someone's grandmother trying to use an iPad. I have no idea what's going on. So they are pushing very, very hard at the moment of this. It's funny, but it's a little bit crazy, but it's my world,
Starting point is 00:41:49 right, I'm computer scientist, is that for engineer computer scientist types, they forget how technically advanced they are. So yeah, cloud code works into terminal, right? And that's why it works so well. It exists in a world of text only. Text, command line commands like the old DOS command line.
Starting point is 00:42:06 It's all text commands, which you can do a lot with. You can create and edit and compile computer programs. So it's very good at that. And it's a limited set of textual commands. That's perfect for a language model. And the engineers are like, oh, we can use this terminal-based tool to do all sorts of other stuff that's not computer programming. Great. This is the problem.
Starting point is 00:42:25 This is solved. Everyone's going to be doing this. Everyone is going to have these sort of personal assistance based on something on cloud code. I'm like, man, do you realize how foreign a command line interface is to people? You realize like how weird and nerdy and complicated your world is? You're like, yeah, this will be great. My grandma will just on the command line understand that like the cloud code agent can bring up a BAS script that's just going to cat those files over to the reg X grep.
Starting point is 00:42:50 You know, it'll be fine. No one knows how to do any of that type of stuff. So it's sort of funny seeing the engineers building these incredibly intricate, nerdy, wonderful tools they've custom built for cloud code to help them in their life. and they think the gap between that and everyone else having AI automate things in their life is like, oh, it's this real small thing. I'm like, oh, man, I don't think you understand. I mean, people are still not quite sure about the right click.
Starting point is 00:43:14 I think you still have a ways to go. I saw this tweet from Robert Friendlaw. Lawyer uses ChatGPT to help write a brief. ChatGPT hallucinates cases and quotations, court sanctions lawyer, and four co-counsel for not catching the errors. The lawyer who used chat GPT has practiced for over 30 years. He prompted chat GPT, write an order that denies the motion to strike with case law support, told the court that he doesn't normally use chat GPT, and he used it this time because he was caring
Starting point is 00:43:44 for his dying family members, said no of his co-counsel were aware of this use of generative AI. Course says that because all five attorneys signed both documents that included these errors, and they admit that not one of them verified that the case law in those briefs actually exist, Their conduct violates rule 11B2. There's hundreds of those happening, right? I heard, I don't know where this site is. There's a site that tracks this. Lawyers getting busted for chat CPT written briefs that just make things.
Starting point is 00:44:16 Because it will for sure make up things if you ask it. Because, again, what it tries to do is, you know, not to get, people know this, but right, at the very bottom, what is a language model trying to do is trying to solve the word guessing game. That's how it was trained. It was given real text. You knock out a word. word and say replace that word, can you figure out what word was really there in the real text? So the language models just think they're trying to expand a real text that really existed.
Starting point is 00:44:39 So they're trying to produce text that makes sense given the prompt. They're not, there's not world models or structured reasoning in there of like, okay, this is a legal brief and we have a notion of a citation. We don't know how it thinks about that. There's hundreds and hundreds of cases of this happening. I heard Scott Galloway talked about this on the Pivot podcast. There's some site that tracks this, that he keeps an eye. on and he says it astounds you you think it's a handful of people it's not it's all the time i got
Starting point is 00:45:06 here's my story of getting burned by that i sort of learned my lesson i was working on uh because the the one way i'll use chat chpt is just sometimes instead of google right um especially if i want like instructions for how to whatever change settings on something it's great it has a lot of really useful it's spectacular for all of that stuff if you want to use it is basically a glorified wikipedia that's more instructive like yeah yeah and you can like wikipedia you can ask questions of Yeah. So I was using, I was writing an essay, and it was on Isaac Asimov's Rules of Robotics. This was a New Yorker essay. And I left my copy of I Robot. I was here at my studio and I left it at home. I was like, oh, I needed to add this quote, right? And I left it. And I was like, oh, you know what? That story's in the public domain. It's all over the internet. And this seems like it would be perfect for chat, GPT. Like, hey, can you just grab a copy? And, you know, and find me that quote. And now save me a little bit of times.
Starting point is 00:46:03 Like, yeah, here it is. Here's a quote. I was like, yeah, it's roughly I remember. I put in there. And then the fact checker was like, where's this quote from? I was like, yeah, it's from the story or whatever. I get the book.
Starting point is 00:46:13 It had just hallucinated a quote that was more or less like what was said, right? Because again, it's kind of playing the game of this is the type of text it would make sense giving the prompt, but it wasn't the actual quote. It had full access to it, right? You can search this. It's in the public domain.
Starting point is 00:46:30 so that the actual story is everywhere. So I had just naively assumed if you ask it for some information that exists on the internet, that, oh, it'll just go find it and format it for you. It didn't. And then I went to a whole dialogue with it, where I was like, this is not the right quote. And it's like, yeah, you're right. You know what? I thought you meant paraphrase a quote, here it is made up.
Starting point is 00:46:50 I was like, that's not the real quote. Can you go get the real quote and give it? At this point, I was just experimented. I had already filled it in the article. And you're like, you're right. You know, I was being hasty, here you go. I could not get it to give me the real quote. So anyway, so I learned my lesson.
Starting point is 00:47:05 I was like, oh, don't assume even if it's common information that it has access to. Dude, the desire to fucking reprimand an LLM, and I've shouted at them, capital letter, exclamation marks. It's like, what are you doing? What are you hoping to achieve by throwing your emotional distress at this fucking disembodied voice on the other side? Okay, bits aside. I fucking love chat chitp t. I think it's been really, really fantastic for tons of things. What's important is learning the limits and not using it for case law. This episode is brought to you by Whoop. I have been wearing Whoop for over five years now, way before they were a partner on the show. I've actually tracked over 1,600 days of my life with it, according to the app, which is insane. And it's the only wearable I've ever stuck with because it tracks everything that matters.
Starting point is 00:47:59 sleep, workouts, recovery, breathing, heart rate, even your steps. And the new 5.0 is the best version. You get all the benefits that make Woop indispensable, 7% smaller, but now it's also got a 14-day battery life and has health span to track your habits, how they affect your pace of aging. It's got hormonal insights for ladies. I'm a huge, huge fan of Woop. That's why it's the only wearable that I've ever stuck with. And best of all, you can join for free. Pay nothing for the brand new Woop 5.0 strap, plus you get your first month for free, and there's a 30-day money-back guarantee. So you can buy it for free, try it for free. If you do not like it after 29 days, they just give you your money back. Right now, you can get the brand new Whoop 5.0 and that 30-day trial by going to the link in
Starting point is 00:48:44 the description below or heading to join.com.wop.com slash modern wisdom. That's join. dot whoop.com slash modern wisdom. What opportunities do you think in increasing reliance on AI opens up? Because I get the sense that as more people use LLMs to do the work for them, this will create advantages in some areas for people who don't need to be reliant. So have you thought about the holes, market openings that will occur? It will. I mean, the way I think about LLM-based AI versus more advanced AI that we don't know how to do yet is, you know, my theory is the what is being affected is going to be more narrow at first. It's going to be places where there's an exact match between what generative AI existing tools can do and existing
Starting point is 00:49:30 market sectors. We saw this actually, the week we're recording this, we actually saw this reflected in the stock market. It was this interesting paradox that was going on this week, where the stock price of software companies that deal with stuff that is well suited for an LLM went down. They call it the SaaS Pocalypsepocalypse, the software service apocalypse. So, you know, companies to do like legal advice, companies that do graphic design like Figma and Adobe, because a lot of, you know, we have gender of image generation is making,
Starting point is 00:50:03 building images from scratch is less useful. Customer service so companies that do a lot of customer service type software. We saw the stock was sliding on these very specific software industries because look, I think LLM's are going to be able to do this. It was triggered by Anthropic, releasing some plugins that made it easier to integrate LLMs into your services without having to hire these other companies. But you would think that would be good news for the big tech companies building the AI that's going to replace all this. Their stock was sliding as well.
Starting point is 00:50:31 And so the big tech companies had this big slide that at the end of the week we're recording this, there was a rebound at the end, but it was like a trillion dollars in market cap disappeared from the big tech companies at the same time. So what does that mean the market was betting on? what are investors betting on at that point what was going to happen? And they were betting that in the near future, the next year or two, what we're going to see is selective impacts in specific fields from generative AI, but also that too much money is being invested in these AI companies as it already, which means they're betting that they're not about to automate most of the economy.
Starting point is 00:51:07 They're not about to, you know, just one more iteration away from a huge economic disruption. they're not at this peak of like complete transformation because if they were, you would be trying to increase your holdings in these companies. Like, I don't care how much money they're investing. These companies are going to be worth an astronomical amount of money. But the market is betting,
Starting point is 00:51:25 I think the impact is going to be more limited in the one to two year window than a lot of the commentary was seen. So I think that's important because talk is cheap, but tech stocks aren't. And so people, the way they spend their money actually often has more of, I think there's a lot of information in that versus just, I've been reading these articles online and I, God, the vibe really seems to be saying this is a big deal. So that's, I kind of agree
Starting point is 00:51:52 with the market's consensus right now. For sure, there's going to be industries that are affected. But it's not going to be one of these situations where you say, okay, any work that's not just the deepest creative work is all going to be automated the next few years. I better go learn how to do art or something like that. I don't think it's going to be that broad at first. I don't think the current generation of AI technology can support as broad of impacts as people think. There's a lot of extrapolation from, well, if it can do this with code, certainly it could do this with all these other jobs. If it could do this with this industry, well, certainly next it'll do it for all these other
Starting point is 00:52:23 industries. We have to be wary of those extrapolations. Right. I think I read an article from you, what if AI doesn't get much better than this? Yeah. Sort of if we have, I don't know, some sort of Flynn effect thing that kicks in, but for AI, where, you know, because I think. a lot of people would agree chat GPT 2 to 3 fucking hell to 4 4 oh I know is this there's a whole
Starting point is 00:52:48 furor on the internet about people that have got girlfriends or boyfriends that are virtual on 4 oh and they're all getting upset and sad about it and I don't understand I don't think I use the tools sufficiently deeply to be able to test this and benchmark it's like my fire TV sticks remote isn't working well um it was able to do that fucking five years ago But is your thinking is that we're maybe going to reach asymptote for what LLMs generally and Transform a Technology is able to do? And then it's going to be a new architecture entirely if we're going to actually get beyond this. Yes. Yeah.
Starting point is 00:53:26 That's what that article is about. Of the articles I've written, I think that was a really important one. That came out in August. And the story it tells, and a lot of other people have told this story as well, around that time and since, is basically this: there was this big paper that was published in 2020. The lead researcher, Jared Kaplan, I think was at Anthropic at the time. And it was this paper where they said, hey, something weird is happening here. If we make LLMs bigger and we train them longer, they perform better.
Starting point is 00:53:56 Technically, they're saying the loss decreased. That sounds kind of obvious, but in machine learning circles, that was surprising, because there's this idea of overfitting, where if you just make your model bigger, the performance goes down. So it used to be, you have to find the perfect size model for your problem space. That's the way people thought about machine learning until this paper came out. And with transformer-based LLMs, they were using GPT-2, and they were systematically making it bigger. And they were seeing that the performance just kept going up. And like, this is interesting.
Starting point is 00:54:26 So let's try it. And that was GPT-3. All right, let's actually make this, like, 10x bigger. Surely this can't be right. And it was. It matched the Kaplan curve exactly. Like, oh my God, this actually got way better just by making it bigger. All right, well, certainly that must be the end of it.
Starting point is 00:54:43 Let's try it with GPT-4. They made it bigger. They trained it much longer. Months and months they trained it. Microsoft had to build these custom data centers to train it, with new AC technology that didn't exist before. And it fit the curve. It was way better.
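The scaling behavior described here, loss falling smoothly as models get bigger, can be sketched numerically as a simple power law. The constants below are close to the commonly quoted fit from the 2020 Kaplan et al. paper, but treat them as illustrative approximations, not exact values.

```python
# Sketch of a Kaplan-style scaling law: loss falls as a power law in
# parameter count. Constants approximate the commonly quoted 2020 fit
# (N_c ~ 8.8e13, exponent ~ 0.076); they are illustrative, not exact.
def predicted_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Predicted cross-entropy loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Bigger models land lower on the curve, which is the surprise the
# conversation describes: no overfitting wall, just steady improvement.
for n in (1.5e9, 1.75e11, 1.8e12):  # rough GPT-2 / GPT-3 / GPT-4-scale sizes
    print(f"{n:9.2e} params -> predicted loss {predicted_loss(n):.3f}")
```

The point of the curve is exactly what is said above: each roughly 100x jump in size yields a predictable further drop in loss, which is why the extrapolation to AGI looked so tempting.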
Starting point is 00:54:55 And the thing GPT-4 did that really set off the whole industry is it started showing abilities beyond just language. And that's where people got excited. Like, oh, wow, if you train a language model on enough language, it learns things that aren't just producing language. It can play games. It can do math problems. It can do logic.
Starting point is 00:55:17 I mean, this was super exciting. It was super exciting. So the assumption was, do this two or three more times, you have AGI. So that's what the whole industry was based off of. When we went from three to four, there was legitimate, justified excitement: expand the size and the training duration two or three more times, and the economy happens in a box.
Starting point is 00:55:41 I mean, that was the engine for all this excitement. So they tried at OpenAI. It was called Project Orion. They made a bigger model than four. They trained it even longer. Like, here we go. And they tried it, and they said, it's not much better.
Starting point is 00:55:57 And this was this big brick wall surprise for the industry. Like, wait, it didn't get better. Everyone else tried as well. Grok, they tried this with Grok as well, with the Colossus data center. We're going to have a 200,000-GPU data center. No one's ever built anything this big. And it was like a little bit better.
Starting point is 00:56:16 Meta tried this. They had a model called Behemoth. We built the biggest model, bigger than anything we've had before. They didn't release it, because it was only marginally better than the last model that they had. And so this was a huge issue, right? You couldn't just make the models bigger and train them longer. So what they did was they switched to, what are other ways we can get performance increases,
Starting point is 00:56:38 and can we get more narrow about what we mean by performance? And this is when we began to get all the alphabet soup models, GPT-4o, o3-mini, slash whatever. And they switched their focus from just, this is amazing if you use it, to, we have these benchmark graphs. And look at these graphs. Things are getting better on these benchmarks. It all became about benchmarks, because these are very narrow things that you could train models to do well on. But they weren't intuitive.
Starting point is 00:57:03 GPT-4 was just awesome. By the time we got to GPT-5, their launch page had 28 graphs of benchmarks whose names no one knew. And so then they had to look for all these other ways to get improvement. And that's where you got, like, inference-time compute. Well, what if we compute longer for harder questions? And they began really pushing fine-tuning.
Starting point is 00:57:23 Well, for specific types of problems, we can get datasets that have questions and answers, and we can use reinforcement learning to take this pre-trained model and make it better at this particular type of problem, and then we can have a benchmark that shows us we got better at this problem. And my argument in that article is, this is a way different game than we were playing when we went from two to three and three to four. We're no longer scaling to AGI. We're taking basically GPT-4, and we're doing all of this tuning and adding extra stuff on top of it and around it, and measuring these very narrow benchmarks.
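The "compute longer for harder questions" idea mentioned here can be illustrated with a toy best-of-n loop, one common form of inference-time compute: sample several candidate answers and keep the one a scorer likes best. The generator and scorer below are stand-ins invented for illustration, not any real model API.

```python
# Toy illustration of inference-time compute via best-of-n sampling:
# draw several candidate answers, keep the one a scorer rates highest.
# Both generate() and score() are stand-ins, not a real model API.
def best_of_n(generate, score, n):
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Stand-in "model" sampling from a fixed pool; stand-in "reward model"
# that prefers answers close to 42.
pool = iter([10, 50, 41, 90])
best = best_of_n(generate=lambda: next(pool), score=lambda x: -abs(x - 42), n=4)
print(best)  # 41: the candidate the scorer rates highest
```

Spending more samples (larger n) is literally spending more compute at inference time, which is why harder questions can justify a bigger n.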
Starting point is 00:57:58 And that's why people have had this feeling ever since. Like, I guess they're better, but not in an obvious way. It's better at specific tasks, or if I vibe code this, it looks better, I guess, but it seems more narrow. And so, yeah, long answer to a short question, but we are reaching an asymptote on pure, fine-tuned LLMs as an engine for AI. We're going to need new architectures. It's going to take more time. Well, presumably, ChatGPT 6 could come out and, oh, fuck, they just blew through the entirety of my prediction. This curve no longer curves in the way that I thought, and shit, this is a different universe now.
Starting point is 00:58:37 Yeah, but that won't happen, because they tried, and they don't know how to do that. So it's not going to be just an LLM. I mean, my prediction for the future of AI, I think LLMs are very powerful, but what we're going to see is much more hybrid models that are custom fit to particular problems, where, okay, this system does this thing better than a human, and in its guts there's an LLM
Starting point is 00:59:06 in there, not a huge frontier model, but one that's souped up and optimized for this particular type of thing. But there's also, like, five or six other models. There's an explicit world model. There's a future predictor. There's a policy network trained through reinforcement learning to try to evaluate situations and see what's good or bad. There's a whole logic engine on top of this that hooks these together. That's what I think the AI systems of the future are going to be like. They're going to be bespoke, and there's going to be a ton of them. So when we get to AGI, it's not going to be, GPT-7 can do everything you ask it as well as a human. It's going to be a world in which there's 10,000 different AI products, and you realize, everything I can think of now, there's some product out there somewhere that can do this better
Starting point is 00:59:41 than humans. Just like there's an AI that can play chess better than humans. There's a different AI that can play Go better than humans. There's an AI now that can beat professional poker players at no-limit Texas Hold'em. They're all different systems with their own pieces in them. And a lot of them have some language models in them, but a lot of other pieces as well. It's distributed AGI. That's what it's going to be like. We're going to wake up one day and realize there are fewer and fewer things where we say humans can do this better than computers. And it's a different model than HAL 9000,
Starting point is 01:00:10 one giant brain. That's a really inefficient way to imagine solving this problem: if we just have a big enough language model, it's going to do all activity, it's going to power all agents, it's going to automate all systems. That really doesn't make sense. I think it's going to be a much more distributed path towards AGI and AI. Given what AI can and can't do, and the quality of the work it puts out at the moment, what is some good advice for somebody who wants to work against the weaknesses that are going to be exposed in other people
Starting point is 01:00:48 because of their reliance on AI, by avoiding it themselves or by using it appropriately? What would you focus on? Because, again, it seems to me like quantity is easier to achieve than ever before. Quality is going to be rarer; that inertia, getting the project off the launch pad, the blinking cursor of the blank page. Where should people focus their time and their attention in order to capitalize on this?
Starting point is 01:01:13 I think you need to begin thinking about the feeling of cognitive strain the way that a weightlifter thinks about the burn of a muscle, or a runner thinks about burning lungs: as a thing that is uncomfortable in the moment, but, man, I'm excited about this feeling, because I'm getting stronger. You've got to make yourself really comfortable thinking hard. That is the differentiating factor. I mean,
Starting point is 01:01:37 obviously, I've been saying this for, what, 10 years now, but that's even more going to be the differentiating factor now, right? And if you talk to athletes, it's like Schwarzenegger in Pumping Iron talking about the pump. And that's really painful, what he's doing,
Starting point is 01:01:49 actually, right? Lifting that level of weights, the physical pain he's in is high, and he compares it to an orgasm, right? Because if you're a weightlifter, you're like, oh, that pain is directly translating into more strength and more muscle mass. You've got to think that same way about your brain.
Starting point is 01:02:03 You cannot flee cognitive strain. You have to think about it, in a knowledge-work, cognitive age, as the feeling of my brain getting more capable. Yeah, I want to seek that out. Let's go get it. Let's go get some, right? Like, I want this. Nope, bring my focus back to this thing.
Starting point is 01:02:20 I'm going to try to push this through. And then when you're done, be like, oh, man, I exhausted my brain. That's awesome. That was a really good cognitive workout. So while everyone else is using AI
Starting point is 01:02:35 to run away from strain, you should be the person running toward it. Because especially in the American context, I mean, the knowledge economy is now a massive portion of our GDP, and the knowledge economy itself is shifting more towards cognition-intensive work. So, you know, knowledge work covers basically anything where you're not building things. But now all the lower-level knowledge work is being outsourced or automated. A lot of it has been replaced over the last 30 years by software. We don't have support staff and assistants and secretaries like we used to, because, well, you can use Microsoft Word and email. We don't need separate people for that.
Starting point is 01:03:03 And so the work that's left in our economy, the knowledge economy, has been getting more and more cognitively demanding. And so the number one skill is, I'm used to straining my brain, learning hard new things, and maintaining focus. That's what I would train.
Starting point is 01:03:16 That's so good. I really, really agree. And the funny thing is, that's why I asked at the top if you just felt like fucking Cassandra, because each subsequent development in technology makes this more important. Yeah.
Starting point is 01:03:33 I do. There's always going to be that seductive whisper in the back of someone's mind: well, yeah, but I can work faster with AI, I can work quicker, what if my boss sees me doing executive functioning through Slack, whatever. What's the elevator pitch for, you should do work of high quality and that will end up winning? You have to think about employment ultimately. It's a marketplace.
Starting point is 01:04:03 There's a lot of obfuscation and fog and smoke, but it's ultimately a marketplace, right? You're paid money in exchange for producing things that have economic value. That's what makes that exchange make sense. There is not, ultimately, an underlying economic value to the coordination activities by themselves. There is no actual economic value
Starting point is 01:04:23 to the speed of your Slack responses, or the number of meetings you go to, or the number of bullet-pointed emails with those sort of ChatGPT emojis that you put out. That itself doesn't generate economic value. The stuff that does, in knowledge work, almost always requires you mastering hard skills and applying them through concentration.
Starting point is 01:04:40 And ultimately that shakes out. There's only so far you can get, or so far you can hide, being busy, because busyness can't be monetized. And, you know, of course, you can create a smokescreen for a while. Like, I don't know, Chris seems productive, I guess, like he's always on these emails and this and that. But if you're not actually producing things that have economic value, ultimately that catches up to you. Your opportunities narrow. You're going to get found out at some point,
Starting point is 01:05:07 where if you do the other thing, it's like, no, I'm creating stuff that is rare and valuable. It unambiguously has value in the marketplace. You write your own ticket. Like what? You want to have a business where you work half the year? You can do it. You want to get paid a huge amount of money? You can do it.
Starting point is 01:05:22 You want to work for a company, but you choose when you come into the office, and you declare, I don't want to do meetings? That's actually a thing, by the way. I talked to a marketing team at one of the major tech companies not long ago, and they said, you know what? We're on the sales side. And our group, the sales group, we are exempt from meetings, because they can directly monetize it: oh, you brought in this many dollars.
Starting point is 01:05:46 We can see it. And if you're bringing in dollars, they're like, you can do what you want. And they could also see that if we make you go to meetings, those dollars go down. It's like, forget the meetings for you. Everyone else, where there's not a clear number where they can see how much value you're bringing, it's like, oh, you better be there in the meeting. Dude, I've always thought this. The big problem that most people have doesn't exist in the world of sports stars. If you're a sports star,
Starting point is 01:06:10 everything that you're doing is to facilitate performance, and performance is very tightly bounded and quantifiable. If you're a weightlifter, 300 kilos is 300 kilos. You either pick it up or you don't pick it up. And your sleep and your recovery and your nutrition and your hydration and your game tape and your technique work and your S&C and your body work and massage and soft tissue, all of that stuff combines into this output. It's a very, very singular organizing principle. The same thing goes for tennis, and the same thing goes for football, and the same thing goes for baseball, and so
Starting point is 01:06:44 on and so forth. If you do not perform well, you begin to scrutinize all of the contributing elements that feed into that. The problem in most normal people's lives is that the output they're optimizing for is diffuse and very hard to work out. Well, I want to be a good father, but I also want to perform at work. I do Brazilian jiu-jitsu in the evening, and my wife makes me go dancing, and I want to be engaging at a cocktail party. Okay. Well, first off, that's lots of things. It's not a single organizing principle. And secondly, define for me the linear relationship
Starting point is 01:07:32 between your disrupted sleep last night and your poorer performance around the dinner table, or in Brazilian jiu-jitsu, or whatever. The diffuse thing contributes, because you inevitably have to make trade-offs on one thing in order to do another. But also, it's just hard. It's hard to work out how your performance is doing. And this is the same in work life. Perfect example: the salespeople. We just know that if we make you do this thing, we lose that thing, and that thing is more important than this thing. It would be like if, for some reason, sports stars were being encouraged to stay up late, and you go, well, we know if we make you stay up late answering fucking Slacks, your performance in the game the next day decreases. But for most people, there's this implicit assumption that part of what you do is the contribution to the strategy and the operations and the executive-function culture and so on, which means that you forget
Starting point is 01:08:18 what you're there for. I think people have forgotten what they're there for. What am I supposed to be here at work doing? What is my outcome goal? There's so much fat in the American knowledge work sector right now. We're so wealthy and there's so much money being slung around
Starting point is 01:08:34 that we can have whole organizations where most people don't even know how they're directly connected to producing that value, and they could just be doing email all day or whatever. It's so inefficient. But there are, I mean, there are plenty of knowledge work areas
Starting point is 01:08:50 where people don't put up with a bunch of this nonsense, and it's all areas where it's very easy to quantify your production. I did this essay a couple of years ago, a reflection where I said, God, almost every thought I've had in my books came out of my experience as a grad student at MIT. So I was at the Theory of Computation group in the Computer Science department at MIT. Well, they don't call it a department,
Starting point is 01:09:12 but the Theory of Computation group in the CS lab at MIT, which is a group where the professors, not the students, we weren't like this, but the professors were super geniuses. Like, literally, Turing Award, Turing Award, MacArthur, MacArthur, Turing Award, Dijkstra Prize. The smartest people in the world. And it was incredibly clear if you were successful or not. What major theorems did you prove in the last few years? That's it. That's all that matters, right? And that required a lot of thinking. So they were terrible with email. They had no interest in social media or meetings. If you tried to throw meetings at them, they would just ignore you. Right?
Starting point is 01:09:48 I wrote about this in Deep Work even, and people pushed back, but I was like, this is what it's like in that world. If you send someone an email in this world, like one of these professors, and it's ambiguous, you kind of didn't word it well, or they don't really want to do it, they just ignore it. Like, that's on you, buddy. Like, I will lose my job if I'm pre-tenured and don't prove theorems. And they put up with no nonsense.
Starting point is 01:10:11 And a lot of that actually infused the book Deep Work, because it's like, you know what? I came of age in an environment where all anyone cared about was focus, and everything else was secondary. Like athletes, just like you said: if this is getting in the way of my launch angle going down or my batting average adjusting, I'm going to change it. But it's crazy right now in knowledge work, in how many positions that's not true. So what I advise people is, get in a position where it is true. Change your profile at work, or if you're changing your job, change your job into one
Starting point is 01:10:42 where your value production is unambiguous. Now, this is a double-edged sword because it swings both ways. Because you can't hide anymore. You can't hide anymore. But if you get into one of those situations and then you do the cognitive work, I know how to focus, I build the skills, I apply the skills, I'm not afraid of cognitive strain. You're in the absolute best position in our economy, right? You can write your own ticket, but you have to be willing to go into a circumstance
Starting point is 01:11:07 of, like, this is the only world I know. And academia is, what did you publish? That's all that matters. It's all we care about. What did you publish? Book writing: how many copies did your last book sell? That's all that matters. There's no, you know what, though?
Starting point is 01:11:19 He answered our publisher emails so quickly, so let's give him another deal, folks. No, it's exactly, how many dollars did you make us last time? That's what we care about for the next time. So it's a scary world where you're being held accountable. But there's an equation I always state: if you're accountable, you don't have to be accessible. If you're like, I can point to this as the value I produced and I'm killing it for you, then I don't answer emails, I don't go to these meetings, I don't do 50 sorts of things. You can get away with almost anything you want.
Starting point is 01:11:52 So I think more people should make that move, especially in the AI age, I suppose. More people should make that move towards, hey, hold me accountable. And then do the work to actually show up. It makes your life so much better. It's such a better way to go through knowledge work. You get away from that hyperactive hive mind, brain-melting, distracting, soul-crushing, Slack-all-day-long nonsense. In other news, you've probably heard
Starting point is 01:12:15 me talk about LMNT before, and that's because I am frankly dependent on it, and it's how I've started my day every single morning. This is the best-tasting hydration drink on the market. You might think, why do I need to be more hydrated? Because proper hydration is not just about drinking enough water. It's having sufficient electrolytes to allow your body to use those fluids. Each grab-and-go stick pack has a science-backed electrolyte ratio of sodium, potassium and magnesium. It's got no sugar, coloring, artificial ingredients or any other junk. This plays a critical role in reducing muscle cramps and fatigue while optimizing brain health, regulating your appetite, and curbing cravings. This orange flavor in a cold glass of water is a sweet, salty,
Starting point is 01:12:55 orangey nectar, and you will genuinely feel a difference when you take it versus when you don't, which is why I keep going on about it. Best of all, there's a no-questions-asked refund policy with an unlimited duration. Buy it, use it all, and if you don't like it for any reason, they give you your money back, and you don't even have to return the box. That's how confident they are that you'll love it. Plus, they offer free shipping in the US. Right now, you can get a free sample pack of LMNT's most popular flavors with your first purchase by going to the link in the description below or heading to drinklmnt.com slash modern wisdom. That's drinklmnt.com slash modern wisdom. Let's say that you were in an organization that was small enough that you
Starting point is 01:13:30 could actually enact some change. Maybe you're at the top of it, near the top of it, or you're toward the bottom of it but you feel like you've got the ear of the person that's in charge. If you were to say, we've got the classic diffuse, hive mind, pseudo-productivity malaise, the ambient soup that everybody's swimming in, what would you do? How would you rework the internals of an organization? It still needs to communicate, obviously; there has to be coordination. People aren't working in silos, so there is going to be inevitable communication and coordination that needs to happen. How do you survive the modern world? What would you propose? How would you restructure things?
Starting point is 01:14:11 Yeah, I mean, I would do a few things. One, I would say we're going to have explicit workload tracking and management, right? No more people just throwing stuff at you and you implicitly adding it to your plate. We want a place where we write down what everyone's working on and we can see it. And now we can start talking about things like, what is an ideal WIP, an ideal work-in-progress limit, for an individual? How many things do we want someone working on at the same time before that curve starts to go the other way? Once you start doing that, you're saying, we need a place to track things that need to be done that no one is actively working on right now, and we can feel okay about it. So I would definitely
Starting point is 01:14:46 want to set it up so that when things enter our radar as, this needs to be done, there's a place for that to go and be stored. Oh, it's like an organizational Getting Things Done inbox. Yes. And it's not on anyone's plate, because as soon as you are responsible for something, it generates email, Slack, and meetings. So once it's on your plate, it begins to spin off administrative overhead and slows productivity. I call it the overhead tax. That gets spun off as soon as it's on your plate. So everything by default goes to a team plate. No one's working on it. Then, from that plate, we keep track as we move things to people's individual responsibilities, and we have, like, I don't know, you should do three things at a time. That's it.
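The workload scheme described here, a shared team plate plus a small per-person limit, is simple enough to sketch directly. The class and method names below are my own illustration of the idea, not anything prescribed in the conversation.

```python
# Sketch of the "team plate" workload scheme: new work lands on a shared
# plate that nobody owns, and a person takes on a task only while they are
# under their work-in-progress limit. Names here are illustrative.
class TaskBoard:
    def __init__(self, wip_limit=3):
        self.team_plate = []   # tracked work that no one is actively doing
        self.active = {}       # person -> tasks currently on their plate
        self.wip_limit = wip_limit

    def add(self, task):
        self.team_plate.append(task)       # default: no individual owner yet

    def pull(self, person):
        tasks = self.active.setdefault(person, [])
        if len(tasks) >= self.wip_limit or not self.team_plate:
            return None                    # at the limit: finish something first
        task = self.team_plate.pop(0)
        tasks.append(task)
        return task

    def finish(self, person, task):
        self.active[person].remove(task)   # frees a slot to take more work
```

With a limit of three, a fourth attempt to take on work returns nothing until something is finished, which is the point: the overhead tax only starts once work lands on an individual's plate.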
Starting point is 01:15:28 And when you finish something, you can pull something else in. So do a small number of things fast and well, and then keep pulling things in. So I would definitely do that. The second thing I would do is I would say, no more hyperactive hive mind. If you send a message that requires more than a single message in response, that should not happen over digital communication. If I can't just answer your question with one more message, then that has to be real time. Now, we can't have that turn into an explosion in meetings. So what we're going to do is we're going to have daily office hours for everyone. So there'll be a daily time where everyone knows they can call you or walk to your office or whatever, and go through a bunch of things with you real quick instead of sending emails. We're
Starting point is 01:16:05 going to have morning stand-up meetings within the teams, for sure. Who's working on what this morning? Who needs what from whom to get that done? Go do the work. So we'll definitely do those as well. We might throw in phone hours. It's a new idea I'm thinking about, where you say, look, there's a longer period of time, like maybe all afternoon, where you can always call me if there's something so urgent it can't wait until the next office hours. There's enough friction in phone calls that that actually tends to work pretty well. I'm not just going to call you because I want to get something off my plate. I won't call you unless it really is serious. So I would do that as well. And then I would say, okay, what ongoing work does this not work for?
Starting point is 01:16:42 What type of projects do we work on on a regular basis where this isn't working, because having to wait until the afternoon is a problem? I say, great, let's identify those. And for each of those, let's build a protocol. Here is our protocol for collaboration on this type of work, however that's going to work. Maybe the information goes into this spreadsheet and whatever, someone checks it in the morning, they move things to shared files. I don't know what it is,
Starting point is 01:17:06 but whatever it is, it means ad hoc, unscheduled messaging isn't necessary. So: explicit workload management. I would have this rule of no hyperactive hive mind. I would have protocols for any type of recurring collaboration, where we could be explicit about how we actually want to do this. And then I would have a culture of talking about deep work and concentration like a tier one skill. How is it going? How many deep work hours did you get in this week? Are you happy about that? What was getting in the way of
Starting point is 01:17:25 that? Did you have a particularly good session? Tell everyone else about it. Like, what worked? Oh, I see, you did music, you have a different look. Hey, here's a good idea that we can borrow. Make deep work culturally something you talk about, like this is a tier one skill that we're really proud of. You do those things.
Starting point is 01:17:53 You're going to, like, 2x your profitability. This is the thing that's always frustrated me about these ideas: you could make more money if you did it. But it's really hard. Those changes I just talked about, they're hard. There's friction, there's personalities. And this is the thing I really underestimated when I wrote those books. The way we work now is like a low-energy point, right? It's the easiest possible configuration of work. So if you feel friction, you're trying to do something more structured,
Starting point is 01:18:23 you're trying to do something that makes better use of our brains, and you're getting resistance, the place you're going to fall when you give up is the way we're doing it now. So it's not arbitrary, I've realized, this hyperactive hive mind, let's-just-figure-things-out-on-the-fly, no-workload-management way of working. It's not arbitrary. It's the low-energy state. It's the place that minimizes the complexity that still allows a company to run. And I think that's why we keep falling back. In mathematical terms, it's a suboptimal Nash equilibrium. It's not the optimal way to work together, but no one person can leave it and make their situation better. It's a low-energy state. It's an attractor. It's a local minimum in
Starting point is 01:19:00 the utility landscape, whatever mathematical metaphor we want to use. And so it's not arbitrary. I was like, oh, it's like a law of work physics. This thing is like a neutron star in the universe of work that just attracts everything back to it. And it takes a huge amount of energy to escape its pull. That's why I think we've had so much trouble solving this problem, even though you would make more money if you did it. I wonder, I'm thinking about immediately implementable solutions for this.
Starting point is 01:19:33 I get the sense that you could probably tell people, we don't use Slack before 1pm. Like, nobody is to post in Slack before 1pm,
Starting point is 01:19:43 because if it's an SOS emergency scenario, you can ring, you can just call somebody. We just don't use it.
Starting point is 01:19:52 And then it means that everybody knows that they should not be doing it. It's a company-wide deep work block. I mean, look,
Starting point is 01:19:59 are there going to be some departments, HR, for instance, that probably would be excused? Sure, your job is HR, or you're in the PR department or something like that, and your job is actually about comms. But if you're in marketing or if you're in accounting, something like that, okay, sit down and do your fucking work. Up until a point, what do you make of intermittent fasting for communication, company-wide?
Starting point is 01:20:30 here's what I'm going to be working on during these morning hours, here's what I need from each of you to make progress on this. So what would have unfolded over Slack and email, you're doing in 10 minutes.
Starting point is 01:20:41 So you say, okay, here's what I'm working on this morning. I'm working on the new white paper. Here's what I need, though. I need those figures from you. When can you get them to me? By 9.30?
Starting point is 01:20:51 All right, you're going to give them to me by 9.30. And I need those quotes you promised. Can you just do that right away? Okay. So you all know what I need from you? Okay, now I'm going to put my head down and write that report.
Starting point is 01:21:01 So having that meeting ahead of time where everyone says what they need and what they're going to do, that makes that time work better. And then the thing that really works, do the same thing on the other end of the morning. All right, you said you were going to work on this, this and this. What happened? So there's accountability on the other end. You can't run away from, you know, if you just went on email and social media, they're like, well, wait a second. I thought you were going to write the white paper.
Starting point is 01:21:27 Yeah. And if other people flake, they don't send you the figures, they don't send you the quotes, you're like, I got stuck, man. I never got this. Cal didn't do what he said. And they're there in the same room and they're like, oh, okay, I get it.
Starting point is 01:21:39 I get it. I can't just ignore stuff, right? Like, I actually have to do it. I think that's a great idea. I think something like that works well. If you put that accountability before it and you put it after it, that scares people, by the way, though. That really does scare people, because you actually have to do the work. And this is the thing that social media and smartphones really made way worse.
Starting point is 01:22:02 AI is going to make this worse. But that was a big inflection point. In terms of losing our comfort with concentration, that got really bad. Once we got algorithmically optimized content, we really got used to that. And so it's scary. If you just go to a company and say, here's the new plan, boss. We're going to have a meeting in the morning. You've got to tell me what you're going to do for the next five hours.
Starting point is 01:22:21 And then you've got to do it. And we're going to check in after that five hours and see how it went. That's a nightmare for a lot of people. That is like, oh God, I don't know what I'm going to do. I agree. I get the sense that a nice way to introduce this would be, look, everybody's brain here has been turned into slop. Everyone. No one is able to do their job as effectively as they should.
Starting point is 01:22:43 So you are expected to do the work. But the reason that we do the pre and post is not to whip somebody with a performance review. It's to give you accountability, so that you don't look like a tit in front of your coworkers. And to get to that point, the same as when you start training for a marathon, where you don't run 10K on the first day, we will titrate the dose up over time. You know, week one, we'll permit some fuckery.
Starting point is 01:23:09 And week two, we'll permit a bit less fuckery. And week three, we're all in it together. And this person's pulling ahead. They're really like a hyper responder. You know, they're making loads of gains in the focus gym. And other people are moving a bit more slowly. Okay, what is it that they are doing, and so on and so forth.
Starting point is 01:23:22 But imagine that. Imagine if you had a company-wide focus initiative where people were just, okay, we're going to move together. Everybody is going to focus on focus. And, interestingly, around the AI thing: so George, my housemate, is writing a book at the moment. Do you know Cold Turkey? Do you ever use Cold Turkey? I know about it. Yeah. The software. Yeah, yeah. It's a website limiter, app limiter for MacBook. We've been using it for a decade. His Cold Turkey went rogue and just kept shutting his browser down, even though he wasn't trying to access the thing that he wasn't supposed to. It said he needed to install it. It was a nightmare.
Starting point is 01:24:00 And here's a conversation between him and his AI. Cold Turkey has gone rogue and I need to remove it. Please tell me how to delete it from terminal. And the response is, I'm not going to help you bypass it, George. This is exactly the scenario you set it up for. You're two days in. The book is waiting. Close the terminal and write. And he's replied and said, no, it's got a bug so I can't get on calls. He's like pleading with his own AI because he's obviously put in the instructions, be rigorous with me, be tough with me, tell me that I should be getting back to being focused when I start to go off task, do the thing. And that's an AI equivalent of what you're talking about, which is this supervisory oversight commission thing, but his just happens to be based in
Starting point is 01:24:42 silicon instead of in other people. So maybe AI will help us. It can basically chastise us. Well, the problem that you have with the AI thing is that it's so fucking sycophantic all the time that it will tend to bend eventually to what it is that you want. Yeah, but no one believes that the chatbot interface is the future of AI, not the boosters, the skeptics, or the moderates. There's an emerging consensus that we're going to look back at this current moment, where we interact with AI by typing into a chat window, and it's going to be like the Usenet newsgroups at the beginning of the internet.
Starting point is 01:25:17 It was like a cool thing early on that showed the promise of the internet, but the tools got better. There's better ways to make use of it. So the thought is that in the future, AI is going to be more integrated into more things. It'll be more agentic. It'll be a lot less like having conversations in English text and more like deploying agents to do things, maybe with natural language, but also it'll be more integrated into the software. Individual tools will be more common.
Starting point is 01:25:42 So it'll be much more common that I'm in Microsoft Excel and I'm like, can you sort row five by this amount and cut out all columns that have less than so many values, and it does that. That's what the interactions are going to become like. And so this idea of having a singular anthropomorphized entity through which you're having all conversations, that's almost like an accident of early AI.
Starting point is 01:26:04 I mean, OpenAI will tell you this, that ChatGPT was supposed to just be a demo of the type of thing you could build using the APIs into their language models. It's like the type of tool you can build that would make use of AI. And then it caught them completely off guard. And everyone wanted to use ChatGPT and chat with it
Starting point is 01:26:19 because it was really cool. I don't think that's going to be the form factor. So I think a lot of these issues we have now, like, this is weird, it's unsettling, we're anthropomorphizing it, we're getting parasocial relationships with the agents, we're having romantic relationships with them, we're getting unsettled because, having English conversations,
Starting point is 01:26:36 we have a hard time not simulating a mind on the other end of this. Which is why I shout at my chat. That's why you shout at it. I think a lot of this, two years from now, is going to seem super narrow. Right? Because I don't think just having this sort of general-purpose oracle you chat with, that's not the future. That's not what people think we're going to be doing.
Starting point is 01:26:53 Why are people mad about 4o being removed? My understanding was they were just happy with that fine-tune. So, you tune these things. The conversational style comes from a post-training tuning session. You've already done the pre-training, which is unsupervised, and then you go through this post-training session
Starting point is 01:27:13 where you have a lot of examples of questions and answers. You ask a question and then it gives an answer, and then you sort of zap it, using optimization theory, to say, now we're going to change the weights to be closer
Starting point is 01:27:25 to this answer we already said was better. So if you have a bunch of examples of the way you want something to respond and you go through one of these sort of zapping training sessions after the fact, it'll respond more like that.
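The "zapping" being described, nudging weights so outputs land closer to answers already rated as better, is at heart a gradient step. A toy sketch in Python; the one-weight linear "model", the learning rate, and all the numbers here are invented purely for illustration:

```python
# Toy illustration of post-training "zapping": nudge a model's weight so its
# output moves closer to an answer we've already labelled as better.
# The one-weight linear "model" and all numbers are invented for illustration.

def model(w, x):
    """A one-parameter 'model': output = w * x."""
    return w * x

def loss(w, x, target):
    """Squared error between the model's answer and the preferred answer."""
    return (model(w, x) - target) ** 2

def zap(w, x, target, lr=0.1):
    """One optimization step: move the weight down the loss gradient."""
    grad = 2 * (model(w, x) - target) * x   # d(loss)/dw
    return w - lr * grad

w = 0.0                       # "pre-trained" weight, arbitrary starting point
x, preferred = 1.0, 3.0       # a prompt and the answer rated as better
for _ in range(50):           # repeated zaps during the tuning session
    w = zap(w, x, preferred)

print(round(model(w, x), 3))  # prints 3.0: the output is pulled to the preferred answer
```

Real post-training does this over billions of weights and many rated examples, but the mechanic is the same: compare the output to the preferred answer and move the weights a little in the direction that closes the gap.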
Starting point is 01:27:36 So they just changed the way they were doing that. And the thing they changed to people didn't like the tone that created. So it was just about what data, literally like the data
Starting point is 01:27:45 sets you're using when doing this fine-tuning, after you've done that big massive pre-training where it's unsupervised. Talk to me about the role of quantum computing in AI. Minimal to non-existent. So QAI is all just bullshit? Yeah. I mean, quantum computing is really interesting. There's a huge amount of technical problems just to actually get these things to scale to the number of qubits at which they're useful. And there's a fallacy out there in thinking about quantum computing, that it's basically like a normal computer but times a million. Yeah. Which is just not the way these things function, right?
Starting point is 01:28:22 So there's only very specific problems you can solve with a quantum computer, because you actually have to express the problem in the language of physics in such a way that you're creating what's known as a wave function that, when it collapses, is going to collapse to a configuration that's the right answer, thereby implicitly searching a large state space in sublinear time. Only certain problems allow you to do that. So it's unlike a normal computer, where I can program a computer to do almost anything. With quantum computers, what you can do is much more narrow.
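A concrete instance of such a narrow problem is integer factoring, which comes up next in the conversation. A toy sketch of why it matters: textbook RSA with deliberately tiny, insecure primes, where classical trial division stands in for what Shor's algorithm would do on realistically sized numbers (all values are illustrative):

```python
# Toy demo of why factoring matters: textbook RSA with deliberately tiny,
# insecure primes. Recovering the factors of n is exactly what breaks the key;
# a quantum computer running Shor's algorithm could do this for big n.

p, q = 61, 53                      # secret primes (real keys use ~1024-bit primes)
n = p * q                          # public modulus
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent (modular inverse of e)

message = 65
cipher = pow(message, e, n)        # anyone can encrypt with the public pair (n, e)

# The attacker only knows (n, e, cipher). Trial division stands in for Shor:
fp = next(k for k in range(2, n) if n % k == 0)   # a prime factor of n
fq = n // fp                                      # the other factor
d_cracked = pow(e, -1, (fp - 1) * (fq - 1))       # rebuild the private key
print(pow(cipher, d_cracked, n))   # recovers the original message: 65
```

The three-argument `pow` with a negative exponent (modular inverse) needs Python 3.8 or later. The whole scheme stands or falls on the factoring step being infeasible for large n, which is why an efficient quantum factoring machine would be such a big deal.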
Starting point is 01:28:52 Could you give me an example of something that it would and wouldn't be able to do? Well, the big example: this is a guy who was at MIT when I was there, Peter Shor, who early on was the one to figure out, like, hey, one of these complicated wave-function-collapsing things you could do could factor prime numbers. Q-day. Yeah, factor numbers to find the prime factors, rather. Find the prime factors of big numbers. That's a really big deal because public key encryption,
Starting point is 01:29:18 and, ironically, this just goes to show how crazy MIT was, also at MIT is Ron Rivest, the R in RSA. He invented public key encryption. So the guy who invented public key encryption is there next to the guy
Starting point is 01:29:33 who figured out how quantum computers could maybe undo it. Undo it, yeah. So it's kind of interesting. So it's good at that. There's a lot of problems that are based around simulation of quantum or physical systems. And you can simulate quantum physics systems directly using quantum, in a way,
Starting point is 01:29:52 instead of having to try to simulate them. So it's very good for that. There's a certain type of search, it gets a little technical, but there's a certain type of search that you can implement. It has applications. So there are interesting applications. But the thing I was beginning to sense recently, which made me worry, is that there was a sense of, like, hype migration. People are getting a little bit frustrated, sort of post-GPT-5, of like,
Starting point is 01:30:14 this isn't filling my need to have something to be, you know, a technology that is going to change everything. I love that concept. And then again, sniffing around, okay, but what if quantum somehow will unlock AI and solve all these problems we're having? I think it's way more complicated than that. There are narrow applications of these particular things
Starting point is 01:30:32 that might have some AI application, but you can't like run an LLM on a quantum machine and now it's a billion times better. That's just not how it works. So quantum is interesting. It's just really hard. The problem is the errors multiply. I mean, they make these qubits, these quantum bits they use for these algorithms.
Starting point is 01:30:50 It's incredibly complicated. There's different ways to do it, but in some ways you have laser beams and a super-cooled chamber holding, like, a particle in a very careful state. And it generates errors, and then the errors add up with other errors. And after you make enough of these things, the errors swamp everything, they get out of control. It's a really, you know. So you tell me that the fucking M-series chip in the MacBook Pro is not going to be a quantum one.
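The point about errors multiplying can be put in back-of-envelope terms: if each operation on a qubit fails independently with probability p, the chance that an n-operation computation sees no error at all is (1 - p)^n, and that collapses quickly. A quick illustration with an arbitrarily chosen error rate:

```python
# Back-of-envelope for "the errors multiply": if each qubit operation fails
# independently with probability p, the chance that an n-step computation sees
# no error at all is (1 - p)**n, which collapses fast. Numbers are illustrative.

def no_error_prob(p, n):
    """Probability that n independent operations all succeed."""
    return (1 - p) ** n

p = 0.001                                  # an optimistic 0.1% error per operation
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(no_error_prob(p, n), 4))
```

Even at a 0.1% per-operation error rate, a thousand operations succeed error-free barely a third of the time, which is why quantum error correction (spreading one logical qubit across many physical ones) dominates the engineering effort.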
Starting point is 01:31:15 It's not going to be the Q6 chip. It's not. In fact, now I want to know what QAI is. What is QAI? You've mentioned QAI. Quantum AI. Yeah, but I mean, is there a particular product, or just people talking about quantum's going to make AI better? Yes.
Starting point is 01:31:30 Yeah, there is. I have a friend who I train with. This is, like, you know what I love? Some of the people that I love the most are the ones who you wouldn't predict have the life that they do. And there's a girl who trains at Lyft ATX on a Saturday. Lovely girl, I've trained with her a bunch of times. Real cool. Boyfriend's cool. Does fitness modeling, super hot, the long hair, the lifts, you know, but super strong, all the rest of the stuff. Feminine as well. Quantum computing degree. Like, works in quantum computing. And she was telling me
Starting point is 01:32:03 about quantum AI. And she was telling me about QAI, as it's referred to. And it's a burgeoning field, supposedly. Unless she's lied to me. Totally fucking lied to me. Yeah, I'm curious what they're working on. UT Austin has good quantum theorists. Look, I'm searching for it. A guy I knew from MIT, they hired him away there. Here, see, quantum AI:
Starting point is 01:32:26 "Merges quantum compute with machine learning to process high-dimensional data faster than classical systems." So they're working on it, but I don't know how that's going to work, basically. So I don't know what they're working on. But it's not something that you hear a lot about in computer science circles yet. So maybe they'll have some breakthroughs. It's worth looking at, but I don't know how that's going to work. Okay. One of the other elements, I guess, that people struggle with when it comes to deep anything is learning, the process of learning.
Starting point is 01:32:58 Talk to me about the mechanics of keeping a deep reading habit alive. Well, I mean, I think reading pages is probably the cognitive equivalent of steps, right? So if you're a 10,000-steps-a-day person, it's like, this is just a baseline to make sure that at least my physical systems are being used. You should have a page count, 25 pages a day, 20 pages a day of reading a book, just as, like, getting those cognitive steps in. Because I think we recognize more and more that reading, I would say it's the cheat code, but it's better to think about it like this: reading is the thing that formed the modern brain. And I'm more and more convinced about this. I have a book idea I'm working on now where I'm sort of exploring this idea. The brain before we had the
Starting point is 01:33:46 Neolithic revolution, it was the same neurons, right, 15,000 years ago, that we have right now. But if we go pre-reading, those neurons were doing the things they were evolved to do, which is very much about the visual system and the audio system, and we could communicate through spoken language, and that's fine. And then we invent reading. And this is not something that our brain has evolved for. So in order to read, we have to go through this sort of excruciating process of learning to read, in which what you're doing is actually rewiring sections of your brain to connect in ways that they weren't originally meant to connect. So we're reforming our brain when we learn how to read. And we develop what Maryanne Wolf calls deep reading processes, where you've now yoked together different parts of your brain that don't normally work together, that now have to work together in order to understand written text. Once your brain is wired to do that, and if you reverse this and write, you can generate much, much more sophisticated thoughts than you can if you haven't done this wiring. And your understanding of things, the complexity of what you can understand when you have this new rewired brain, that also really goes up. So reading
Starting point is 01:34:52 is, like, it's not just, oh, I get stronger in my brain. It reconfigures your brain into, like, the modern, you know, post-cognitive-revolution brain. Okay. Why is it important to read physical books, then? What is lost if I read Substack? I know that you're a fan of Substack. I love Substack. I think it's fantastic. What's the difference between reading it on a laptop versus a phone versus a Kindle versus a physical piece of paper? Well, there's two different things going on here. There's medium and content type. So if you're reading a book in a physical book or you're reading on a Kindle, it doesn't matter, right? I mean, they're both an actual physical medium. Like, the Kindle is actually a physical
Starting point is 01:35:35 experience. It's actual little disks that are dark on one side and light on the other, and they make a page. They have little electrical impulses, and you shock the disks you want to turn, and you don't shock the ones you don't want to turn. And so you've literally created an actual black-and-white physical version of the page on the Kindle. Unlike a computer screen or a TV, where it's light being emitted, there's no light being emitted. It's physically the page; it just created a new physical page that has text on it. That's why you have to actually have a light on a Kindle to read it. So it's just a page that reconfigures itself into a new page.
Starting point is 01:36:08 I love e-ink technology. I think it's really cool. Content type, the issue is, I mean, there's a lot of this research we've known since the 90s. A lot of this is captured in the best book on this, which would be The Shallows, Nick Carr's book The Shallows. When we're reading something like a web page or Substack, for whatever reason, we skim much more aggressively. That's the main issue. We jump around much more aggressively, just trying to pull out the key points.
Starting point is 01:36:33 I think that's all just acculturated, right? Like, if you print out a Substack article and sit in the library and you read it carefully, it's the exact same thing as reading a book. It's the exact same thing in terms of the experience. But on screens, we tend to skim around more.
Starting point is 01:36:49 The other advantage of, like, a book that was actually published versus, like, a post you see online: it's just better thought through, right? So when you write a book, you spend a couple years on it. Like, you really spend a couple years crafting the book, and it might be based on a lifetime of thinking about this topic. And so you take your time when writing a book, and it gets edited and re-edited, and you go back. Like, I'm writing a book now.
Starting point is 01:37:14 I've been working on it off and on for, like, three or four years. I've rewritten this book, like, three times. It's like, this isn't right, this isn't clear enough. And so when you go through text that has been that carefully thought through and structured, you also just get a different experience, because the pieces click together at different scales, and you build in your brain these intricate interlocking pieces
Starting point is 01:37:37 that all hook together, and it's beautiful, and you get that aha-moment feeling. There's an actual physical endorphin rush you get in your brain. So I think reading smart books written by smart people that took a long time to write, that's your calisthenics for your brain. It literally changes you. You're a smarter person if you do that versus if you don't.
Starting point is 01:37:53 So good. I have to say, reading full-length books, the volume that I do of that has decreased over the last few years, largely because of Substack. So there's an extension for Google Chrome called Push to Kindle,
Starting point is 01:38:14 and if I press it, the article appears on my Kindle, because I don't like reading on my phone and I don't like reading on my laptop, probably for the reason that you said. But when I think about it, it very much is running downhill, because what's the longest Substack that you're going to read? 20 minutes, maybe? 25 minutes, a fucking long article. Yeah.
Starting point is 01:38:37 And maybe part of that, maybe part of my penchant for it, is that I do get the outcome, right? What is it that I'm looking to learn? Oh, I want to find out from Steve Stewart-Williams about sex differences in desire for sexual novelty, something like that. Okay, well, I will learn the outcome, in the same way as I could feed myself food that was just a cube of calories, and that would sort of give me the caloric intake that I needed. But what you're presumably reading for, apart from just the enjoyment of reading it, is to be able to recall it and for it to be woven into the broader mental landscape that you've got,
Starting point is 01:39:21 which actually probably means you need to spend time under tension with it, and some of the leanness and brevity that comes with an article actually might work against you. Maybe you need it to be said to you in five different ways. Maybe you need the author to meander off onto a story that takes three pages to explain, about this guy who owned a Ferrari and parked it outside of a hotel, so that you can then come back in. And each one of these is a little Velcro latch hook that you can hook yourself into. And yeah, I wonder whether discriminating toward reading stuff that is exclusively shorter-form results in the sense that I am learning lots. But if you were to actually apply some scrutiny to that: okay, how much of it can you remember? How long did you spend with this idea?
Starting point is 01:40:14 Did you spend long enough for it to become part of your mental models and the frameworks that you've got? How much can you recall? That would be an interesting challenge. And the frameworks of understanding are shallower, just because there's less time to establish them. So, like, in a Substack, and it's not a bad thing, but, you know, what can you do? It's typically, like, one idea, and here's something that supports that idea, and here's maybe, like, a different idea and here's why that doesn't work. And if that's all you're consuming, that becomes your mental model for how knowledge is gained.
Starting point is 01:40:47 And I think we see a lot of this. I mean, think about internet culture now. It is much more conspiratorial, and I don't just mean in the, like, grand-conspiracy-theory sense, which it is, but not just in that type of thinking, but in the confidence. There's this quick jump to confidence, where you're like, that's wrong because of this, and boom. And you think that this is, like, this slam-dunk case or something like that. That's a result of not reading a lot of books. You read a lot of books, you're like, okay, this is way more complicated. Everything is way more complicated than you thought it was.
Starting point is 01:41:21 And there's probably a clear truth here, but clear truths are more complicated. Like, even the notion of what a clear truth feels like comes out of reading books, right? Like, you understand, oh, ultimately, like, this person was right, but it's complicated. And like, yeah, this was not so clear cut.
Starting point is 01:41:38 And this is like a compromise. And this was really important. And these factors were here. But honestly, those factors aren't as big as you think. And this factor really was more important. And so, like, this really was the right thing to do. So even like your notion of what's true or what's not true or what it means for something to be clear is like different than if you're just looking at boom, slam dunk. I think it's a big
Starting point is 01:42:01 problem online. Both sides of the political spectrum do this. You want everything just to be: this person is just garbage and completely wrong, and there's, like, this one simple thing I know that means you're completely wrong and I'm completely right, and you're wrong in, like, the worst possible sort of way. And that is, like, such a soph... am I saying the word? Sophistic. Yeah, exactly. You said it, right? I have to read more. But it's sophistry for sure, right? This idea of, this is how truth and argument unfolds. It's like, there's an obvious flaw that's easy for me to grok, which I guess now could actually be a verb, as opposed to just meaning to
Starting point is 01:42:37 understand. Also, I could literally Grok it, I guess. And now it's clear that you're wrong and I feel righteous, you know, and then we go seeking that. And then we want to simplify everything in the world to: you're just terrible, and this person is perfect, and this idea makes the most sense, and if you disagree with this idea, it's because, like, you want to eat children. And, you know, it just becomes a different understanding. This is what I think we get wrong. It's not just that we don't have the right information. We've changed what our notion of truth is, because we're not exposed to the complexity of truths. When you read not only a scholar making a smart case for something, but then you read the arguments that they confront, and then you read someone else that's arguing
Starting point is 01:43:16 against their point, and you're like, oh, okay, I've seen the clash of, like, minds. And now, in that clash, I kind of see what's going on here. Like, yeah, the truth really leans this way. And I feel real conviction in that, because I've seen, like, the best minds come at this from either side, and I really understand. And it's not cut and dried. But ultimately, like, this is the right thing to do. That was a very familiar thing to people and leaders in times past. You lose it if you're exposed to these low-resolution copies,
Starting point is 01:43:50 these low-resolution simulacra, these easy-to-digest, pre-chewed versions of argumentation and understanding. It just changes the way your brain thinks about what true even means. Yeah, there's an arc to sense-making that you kind of need to track. And if you don't track it, you just assume that answers appear. Yeah. It's like, no, no, they don't. Cal, you fucking rule. Let's bring this one home. Where should people go to keep up to date with everything you do?
Starting point is 01:44:13 Oh, God. Calnewport.com, I guess. My books are on Amazon; my podcast, Deep Questions, is on YouTube or wherever you get podcasts. Newsletter at calnewport.com. Deep Work. Too many things going on now, Chris. Deep Work has its 10-year anniversary.
Starting point is 01:44:28 I'm excited about it. All new. I replaced all the blurbs on the back; most of them are now organic, just people who have said things about it without me asking them to say it. That's fun. And I have a masterclass out on this stuff too. So I don't know.
Starting point is 01:44:43 It's everywhere. Too many places. I feel too busy. For a person who's a digital recluse, you are everywhere. But that's a function of focusing on quality, not quantity. I can't wait to speak again, man.
Starting point is 01:44:53 This has been so much fun. I appreciate the help. Always a pleasure, Chris. Always a pleasure to talk with you. If you are looking for new reading suggestions, look no further than the Modern Wisdom Reading List. It is 100 books that you should read before you die.
Starting point is 01:45:07 life-changing and impactful books I've ever read, with descriptions about why I like them and links to go and buy them. And you can get it right now for free by going to chriswillx.com slash books. That's chriswillx.com slash books.
