Offline with Jon Favreau - Endless Slop, Cancer Cures, or Robot Apocalypse? Derek Thompson on Our AI Future

Episode Date: March 7, 2026

Derek Thompson, journalist and co-author of Abundance, joins Offline to hash out some hard truths about AI: who it will actually replace, why we haven’t seen more labor market disruption, and why the Department of War’s battle with Anthropic spells the end of private property rights in America. Then Derek lays out his Postmanesque "Everything Is Television" theory of media for Jon, where politics becomes theater and news becomes performance. The guys wrap it up by discussing how becoming fathers changed their views on parenting—and on living. For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.

Transcript
Starting point is 00:00:01 If you love Pod Save America and want more of my political analysis, you should subscribe to my newsletter, the Message Box. I'm Dan Pfeiffer, former senior advisor to Barack Obama, and in Message Box, I break down what's actually happening in politics and what it's going to take to beat Donald Trump and MAGA. If you follow every poll and every twist and turn in the campaign, Message Box is for you. This isn't just hot takes. Every edition delivers clear analysis, behind-the-scenes insight, and practical strategy you can actually use, whether you're working on a race, organizing your community, or just trying to win the argument in your group chat. So if you're listening to this, hit pause, go to your browser, and head to crooked.com slash yes we did, because I have a special offer for Crooked Media fans. You'll get 20% off Message Box for an entire year. So go to crooked.com slash yes we did. Dear McDonald's, your breakfast menu, fire.
Starting point is 00:00:52 Tens across the board. I could be happy with anything, even though I order the same thing every time. Thanks for not judging me. I'll try something new next time. Maybe. Score a two-for-five-dollar deal and a sausage McMuffin with egg and more. Limited time only. Price and participation may vary,
Starting point is 00:01:12 cannot be combined with any other offer. Single item at regular price. It's not like the AI CEOs are giving us a whole lot to root for. The advertisements are like, it'll help you do more pull-ups. That's cool. Like, you know, I wish I could do like three more pull-ups in a set. So the advertisements are like, this is good because it'll help you cook like a better pasta and do more pull-ups. And then the CEOs are like, it's going to displace 30 million jobs
Starting point is 00:01:40 and totally transform the U.S. economy so that we're going to need a universal basic income because you're so not going to recognize the labor market that follows AI. And also, we might be building something like a nuclear bomb. And, God, we would really like to be regulated because we don't know how to solve these problems. I'm Jon Favreau, and you just heard from today's guest, Derek Thompson. I love talking to Derek. He's one of those people who will stop by to offer a thoughtful, fresh take on just about anything, the loneliness epidemic, crypto fraud,
Starting point is 00:02:13 zoning reform. Today I wanted to talk to him about three topics: AI, television, and parenting. AI, because he's been writing a lot about the effect of AI on jobs and the economy, most notably why we haven't seen more of an effect on the labor market just yet, and whether this whole thing is just a bubble. I wanted to talk television because of his excellent piece from last fall called Everything Is Television, where Derek argues that everything online, podcasts, social media, even AI, are slowly evolving into something that looks a lot like TV, specifically an endless flow of episodic video. And what happens when everything becomes television? As Derek points out, borrowing from Neil Postman, the way we communicate begins to reflect television's values,
Starting point is 00:03:00 quote, immediacy, emotion, spectacle, brevity. When everything is urgent, nothing is truly important. Politics becomes theater, science becomes storytelling, news becomes performance. This, of course, almost perfectly describes the hell that we're currently living in. So we talked about what effect it's having on our politics. Finally, Derek and his wife recently had a second child, so I asked him about a beautiful essay he just wrote about being a parent and why it's so special. It was a great conversation, and we'll get to it in a moment. Before we do, you can get this episode and more of your favorite Crooked Media podcasts ad-free by subscribing to Friends of the Pod, our subscriber-only community. Friends of the Pod get access to ad-free episodes of all your favorite Crooked shows,
Starting point is 00:03:44 as well as episodes of our subscriber-only shows, Terminally Online, Dan Pfeiffer's Polarcoaster, and our new extra episode of Pod Save America called Pod Save America Only Friends. Plus, you get to feel good about supporting independent pro-democracy media at a time when we could really use it. So, subscribe today by heading on over to crooked.com slash friends. Now here's my conversation with Derek Thompson. Derek, welcome back to the show. Great to be here. Thank you. Welcome back from parental leave. Woof. Thank you very much. How's the baby? Oh, she's great. She's great. She's beautiful, chunky face. She's really lovely. We're at three months now, and so we're just starting to get
Starting point is 00:04:29 those kind of like lopsided smiles. And it's amazing when a child who's, you know, two years or older, smiles at you. It's beautiful. But, you know, hopefully if you have a good relationship, smiles are common. It's unbelievable how magical it is to see the very first smile from a person. I know. The very first smile that you get from a person is just such a special thing. And it's a, I think an indelible part of parenting is that the most, and this is why I won't continue going on about this for 60 seconds for the podcast audience, the most banal, cliche things are the most heart-wrenching. It's true. The first smile, the first laugh, it's really special. So it's a joyous time at home, but also, as you know, so much free time. I mean, when you're a dad of two, you know,
Starting point is 00:05:18 two years and three months old, just so much free time, such abundant amounts of free time, you know. You know what happens, though, is that, like, at some point, when you get an hour of free time, it feels like you had a week of free time. And so that, that changes, which is nice. I was traveling recently and I was in a hotel. It was like a nice hotel and I had breakfast alone. And I swear to God, it was like touching the face of God. It was like, it's amazing. It's like eggs and bacon and like a little bit of potato with ketchup and hot sauce, alone. And my phone. And my phone. Just doom scrolling on Twitter. I was like, I've never been happier. This is the life. This is what it's all for. It's incredible. So since before you went on leave, I wanted to talk to you about your excellent piece.
Starting point is 00:06:03 Everything Is Television. I still want to dive into that. But before we do, there's been what feels like a year's worth of news about AI, which is the topic I know you think and write about quite a bit. So I would love to start there. You have written that there's a divide in the discourse around AI that you have experienced yourself. You worry that some journalists think you've become a big AI booster, but you've also gotten
Starting point is 00:06:28 shit from people in the AI industry for suggesting that it might be a bubble. How would you describe where you are on AI right now? I don't know. I don't know where I am on AI right now. And I find that it's very difficult to talk about this subject without making people upset. AI is, I almost, like, I'm worried about, like, when I jump off of this cliff, where do I go with, like, the first comment?
Starting point is 00:06:54 There's a lot of phenomena that you and I talk about where everyone can kind of agree on what the thing is that we're talking about. If you would start this conversation by saying, what's going on with gas prices as a result of Donald Trump's war on Iran? We've got one number to start with, which is that oil futures, I think, topped $90 for the first time today. And we can start with one number that everyone can agree on. AI is almost too many things to start with. AI is the video slop that people see when they open TikTok.
Starting point is 00:07:27 AI is the janky, generative summaries you get at the top of a Google search that sometimes are correct and sometimes are annoying and sometimes simply stealing information from journalists' articles. AI is something that you hear your boss wants you to try out when you don't want to try it out. Or maybe it's something that you're trying when your boss is actually the one who's afraid of it. The metaphor that I've used before is that because artificial intelligence in the workplace is exquisitely dependent on the prompt that you use and the technology that you're using that prompt in, I mean, the most important part of the large language model, Claude Opus 4.6, might be the term
Starting point is 00:08:10 0.6. That's how many different AIs there are. You have to get to the first decimal to be specific about what technology you're using. There's some things that are like, I said this before, but it's like a light bulb. You turn on a light bulb that's, you know, a hundred lumens, and it's the same 100 lumens for everybody. AI is like a light bulb where some people turn it on, or try to, and it produces zero lumens, and some people turn it on, and it's a million lumens. It's almost too capacious to nail down. But what I find most interesting, as an economics writer, and so I'll start with the big picture of who the hell knows what AI is, but to answer your question in a concrete way, I'm interested in macroeconomic questions when it comes
Starting point is 00:08:48 to AI. I want to know what it's doing to unemployment. I want to know what it's doing to hiring. I want to know what it's doing to the economy and to productivity growth, which is so incredibly important. And the really frustrating thing about all this, John, is that, like, we just don't have great information about any of it. Like, you can get a bunch of really smart economists in the room, and you can lock
Starting point is 00:09:08 the door, and you can say, you guys don't get out of this room until you come to an agreement on what AI is doing. For all those things I just said, employment and GDP growth and productivity, they can't agree. They don't know yet. The picture is too confusing. And so I'll round out my answer
Starting point is 00:09:24 here by saying that, I think, people listening might be familiar with this. A few days or weeks ago, I guess now, time when you're the parent of a three-month-old passes at a strange pace. A week or two ago, there was this famous Satrini essay that came out. It predicted that AI was going to crash the economy. You can ask a follow-up in just a second, but, like, it didn't crash the economy. It took a trillion dollars off of stocks. It did nothing to the economy.
Starting point is 00:09:47 But what's interesting about it is that typically what moves equity values are pieces of information. This was not information. This was a sci-fi story. When on Earth have you heard of a sci-fi story moving markets to the tune of a trillion dollars? That only makes sense if investors are starved for non-fiction. If you're starved for non-fiction, then science fiction moves markets. That's where we are with AI and the economy. The picture is so murky that science fiction stories are moving hundreds of billions of dollars worth of equity value.
Starting point is 00:10:21 That's how strange the situation is. Yeah, I was going to ask you about this because, um, I thought the same thing. I thought it was wild that this memo, right, from Satrini research, this is an investment research firm. And basically, like you said, the memo just lays out a completely imaginary but plausible-sounding scenario where this, you know, massive AI-induced economic crisis hits in 2028. And it goes viral. It's all over social media. Like you said, it clearly scared all the right people just enough that all the companies mentioned in the memo, in the imaginary scenario, all those companies in real life actually lost stock market value because of the memo. And that's unbelievable. Like, again, this is not a memo of someone who saw the future,
Starting point is 00:11:08 came back in a time machine and was like, I saw 2028, you guys won't believe what the hell just happened. He's just, he's, I don't want to say making stuff up because there's thought and care in it. But it's a science fiction story about the future. And it caused some companies to move by like 10, 15% in stock price. Like, this just doesn't happen. And it speaks to how strange a moment we're in right now with this tech. It also speaks to how little confidence, like so many people who are in these companies or experts or building AI, like have in the product, what they're doing, the technology, the future. Like, it just, it's a very, I don't know that I've seen any equivalent in any other period of time that I've been alive in.
Starting point is 00:11:51 And it wasn't like this around the various dot-com booms and the advent of the internet and social media, right? There's all the uncertainty that I tried to describe in my first answer. Uncertainty about what this technology is for different people. Uncertainty about what it can do. Uncertainty about what it is doing to the macroeconomy. But there's also the way that it's talked about that's very unusual. Dario Amodei, the CEO of Anthropic, who I think made a very moral decision in refusing to come to terms with the Pentagon and their recent contract negotiations that ended in Anthropic being designated as a supply chain risk.
Starting point is 00:12:25 He describes this technology as something that, if it works, if they build what they say they're trying to build, tens of millions of white-collar workers could be displaced in the economy, especially entry-level white-collar workers. I'm like, I'm still workshopping this idea. But one point that I have is, like, no one's ever talked about their product like this before. Right.
Starting point is 00:12:45 No one making television was like, you know, if this thing takes off, grown men are going to spend seven hours a day gluing their butt to the seat watching this thing in a way that could really lead to mass obesity. Like, what? And also, buy my television. Invest.
Starting point is 00:13:03 And also buy my television, right? If I succeed, the consequences could be horrendous. Like, I understand to a certain extent what he's trying to do. I think one thing he's trying to do, maybe, is raise alarms for how powerful he thinks
Starting point is 00:13:15 this technology is. Maybe he's right about it. Maybe he's not. I think he's also trying to, as other CEOs have in the space, dramatize the power of this technology in order to justify the amount of capital expenditure going into this tech.
Starting point is 00:13:29 I mean, just by the numbers here, and this is something we do have good data on, the so-called hyperscalers, the companies that are spending the most on AI, so Amazon, Alphabet, Anthropic, OpenAI, Microsoft, those kind of guys, Oracle, they're going to spend $600 to $700 billion this year on AI infrastructure. How much is $600 to $700 billion?
Starting point is 00:13:47 One way to frame it is that the Apollo program, which landed a man on the moon, spent $300 billion in inflation-adjusted dollars between the early 1960s and early 1970s. We're spending $300 billion every five and a half months, right? It's an Apollo program every six months, except it's not funded by the government. It's funded by private companies. Nothing like this has ever happened before, and maybe one reason why these CEOs, you could say, are using such sort of confusingly apocalyptic terminology when describing the impact of their technology
Starting point is 00:14:24 is that they're trying to justify an infrastructure project that is simply unlike anything in modern capitalist history and might very well turn out to be a bubble, which I've written about as well. I was going to ask, I mean, stipulating that there's so much that we don't know, that no one knows, about AI, I know you've written about this. Maybe we can do the strongest case that AI is a bubble. And then maybe we can do
Starting point is 00:14:55 and sort of listening back and thinking like, you know, I think I was kind of like overconfident in representing some of these points because I myself am just always changing my mind. I love the opportunity to just play act. Just be like, Derek, play this character and then play that character. So this is actually, this is a fun exercise
Starting point is 00:15:10 that I think I can get behind. All right, the case that it's a bubble is threefold. Part one is history. There's a theory. Carlota Perez, who's an economist who's written about this a lot, there's a theory that general purpose technologies are always overbuilt, always, always, always. And a simple reason why is that different actors, different companies,
Starting point is 00:15:31 can't coordinate on the ideal amount of spending in order to, say, build the perfect amount of canals in the 1820s, build the perfect amount of transcontinental railroad in the 1860s, build the perfect amount of fiber optic cable in the 1990s. They always overbuild it. And so the canals were a bubble, and the railroads were a bubble, and rural electricity was a bubble,
Starting point is 00:15:51 and fiber optic cable, the internet boom, the dot com bubble was, of course, a bubble. Every time this happens, every time there's an infrastructure build out like this, we always overbuild. So that's the first reason to think that it's a bubble. We still live in history,
Starting point is 00:16:07 and every time this has happened historically, it's been a bubble. That's part one. Part two has to do with something a little bit more technical, which is these companies are spending an enormous amount of money on chips, on GPUs, often made by NVIDIA. And there's a thinking that they're going
Starting point is 00:16:25 to have to keep spending on these chips, going to have to re-up their spending year after year after year. But according to their earnings statements, these chips have a depreciation schedule of like five to six years, which essentially implies that they're going to be used for five to six years. And there's a lot of analysts who say, if you have to essentially replenish this investment every two years, but you're depreciating the chips over five to six years, after a few years, your operating income, your actual profit, is going to be decimated. You just don't have enough money to keep up this level of investment. And if you don't have enough money to keep up the level of investment, then ipso facto, the investment can't continue. And we're, again, in an industrial
Starting point is 00:17:01 bubble. And then reason number three, I think, is that really every bubble pops because of leverage, because of debt. And in the last few months, we've started to see, especially with OpenAI and Oracle, some of the big boys, start to get into debt, start to finance this buildout with debt. That's the part where a lot of folks are a little concerned that we could see the beginning of a real leverage bubble. So that's why I think we might be in a bubble, essentially. I guess the final thing that I'd say is that, you know, the amount of money coming into AI from folks like me paying for Claude or, you know, folks at Crooked paying for ChatGPT is, I've seen estimates, around $30 to $50 billion. That's sort of the size of that economy. $700 billion are being spent every
Starting point is 00:17:45 year to build out this technology. The difference between, let's say, $50 billion and $700 billion, depending on exactly what it is, it'll probably be between $600 and $700 billion, is the gap. So that's just, you know, you're talking about the difference between, like, the economies of Sweden and Somalia. And so you might look at that and say, of course we're in a bubble, the revenue will never catch up. This has become a long answer. That's why we're in a bubble. The short answer for why we're not in a bubble, I can make very, very quickly. The revenue is going to catch up. And the reason to think the revenue is going to catch up is that this is already the fastest adopted technology in modern history, and the growth rate of revenue for the frontier labs like Anthropic
Starting point is 00:18:28 and OpenAI is already at a near-record pace. Those companies had a combined ARR, so an annualized revenue projection, of about $6 billion, I believe, in early 2025. Now, I think the revenue projections are $30 to $40 billion. So that's just enormous growth. No company this big
Starting point is 00:18:58 has grown that fast. And even if you look at sort of third parties, like Stripe, which just has this God's-eye view over like 1.5% of the entire global GDP, they have said that companies that self-describe as AI companies are growing faster than any generation of company they've ever seen in terms of revenue. So the answer there is just, it's not a bubble, because bubbles require that revenue doesn't follow. Why was Pets.com a bubble? Because they didn't have any goddamn revenue. Anthropic is going to make $15, $20 billion this year. OpenAI is going to make $30 billion this year. They're spending a lot more than that,
Starting point is 00:19:19 but the revenue is growing quickly, and so that would be the case for why it's not a bubble, is that the revenue is going to show up. But to your point about the gap between revenue projections and the amount that they're investing every year and the amount they have to spend on chips, etc.: are all of their revenue projections coming from people like us, businesses and individuals, like, signing up for Claude and ChatGPT? Are there, like, other sources of revenue that they're imagining as the technology advances? Or is it more like, well, you know, it's not just LLMs, there's other forms of AI that businesses are going to need? And, like, I just don't know where you close that gap. I got to be honest, this is, this is a gap in my
Starting point is 00:19:59 understanding as well, right? So right now, the companies make money off of what is technically known as inference, or tokens, and I think what is more commonly known as subscriptions, right? Like, you subscribe, let's say, to ChatGPT. I subscribe to Anthropic's Claude. I'm paying them $20 a month. That's the most straightforward business model that they have. OpenAI is now talking, however, about advertising. And Anthropic ran those famous or infamous ads during the Super Bowl. They were making fun of the fact that OpenAI is now strongly planning on running ads. So there you have advertising plus subscriptions.
Starting point is 00:20:33 It sounds like a media business, right? Right, right. I don't know for sure what comes next. I mean, the other business model that's been very much in the news is Anthropic, as many people listening know, had this little run-in with the Department of War. Anthropic signed a $200 million contract with the Pentagon last year to provide Claude services
Starting point is 00:20:55 for various military purposes. $200 million is a nice chunk of change. It's not $700 billion, so I don't want to pretend like, oh, it's not a bubble because they have government contracts. But that's another model you can imagine, a combination of private sector contracts,
Starting point is 00:21:11 public sector contracts, and advertising revenue is certainly where we are right now. Where it goes the next five years, I truly can't project. Offline is brought to you by Mint Mobile. I don't know about you, but I like keeping my money where I can see it. Unfortunately, traditional big wireless carriers also seem to like keeping your money, too. After years of overpaying for wireless, if you're fed up with crazy high wireless bills,
Starting point is 00:21:38 bogus fees, and free perks that actually cost more in the long run, then switch to Mint Mobile. You've probably heard us talk about Crooked Media staffer and Mint Mobile evangelist Nina. She was fed up with the hidden fees on her previous wireless plans. She made the switch to Mint Mobile. Now she's saving big bucks every month on her phone bill. Nina loves her Mint Mobile. Stop overpaying for wireless just because that's how it's always been. Mint exists purely to fix that.
Starting point is 00:22:01 Mint Mobile is here to rescue you with premium wireless plans starting at $15 a month. All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network. Bring your own phone and number. Activate with eSIM in minutes and start saving immediately. No long-term contracts, no hassle. Ditch overpriced wireless and get three months of premium wireless service from Mint Mobile for 15 bucks a month. If you like your money, Mint Mobile is for you. Shop plans at mintmobile.com slash offline. That's mintmobile.com slash offline. Upfront payment of $45 for three-month,
Starting point is 00:22:30 five-gigabyte plan required, equivalent to $15 a month. New customer offer for first three months only, then full-price plan options available. Taxes and fees extra. See Mint Mobile for details. Offline is brought to you by Quince. A thoughtfully built wardrobe comes down to pieces that mix well and last. That's where Quince shines. Premium fabrics, considered design, and everyday essentials that feel effortless to wear and dependable, even as the seasons change. Quince has the everyday essentials I love with quality that lasts. Lightweight cashmere sweaters, short-sleeve Mongolian cashmere polos, linen bottoms and shorts, tees in 100% Pima cotton and European jersey linen. These are the versatile pieces that make a wardrobe actually work season to season. Quince works directly with top factories and cuts out the middlemen. You're not paying for brand markup or fancy retail stores, just quality clothing. The cashmere is 100% Mongolian, the same stuff luxury brands use.
Starting point is 00:23:18 The Pima cotton is long-staple, which means it stays soft and doesn't pill. The European jersey linen is breathable and lightweight. Everything is built to hold up to regular wear and still look good. Their clothing is rated between 4.5 and 5 stars by thousands of people wearing it every day, and they only partner with factories that meet rigorous standards for craftsmanship and ethical production. We love Quince. I love Quince. Got a lot of Quince, though. Yeah, I've got to get some spring Quince stuff now that it's getting hot out again.
Starting point is 00:23:42 Stop over complicating your wardrobe. You don't need a closet full of options. You need a few pieces that actually work. Their light sweaters are good. We love their polos. Got to get some shorts for the summer. Is it summer? Spring, whatever season it is.
Starting point is 00:23:54 Right now, go to quince.com slash offline for free shipping and 365-day returns. That's a full year to build your wardrobe and love it. And you will. Now available in Canada, too. Don't keep settling for clothes that don't last. Go to Q-U-I-N-C-E.com slash offline for free shipping and 365-day returns.
Starting point is 00:24:11 Quince.com slash offline. You point out that economists have struggled so far to see any significant effects of AI on the job market, at least not yet. I guess that doesn't totally surprise me, only because it still feels like we're early in the process of, like, mass adoption of AI by employers. Or are we? All the job market stuff, I'm like, well, yeah, it's still early. So why would it show up right now? Because just so many people are like, oh, it's a cool chatbot that I'm using.
Starting point is 00:24:47 But like the more advanced cases of how AI could displace employment, I feel like it's a small sliver of companies that are using that now. Yeah, that question prompted two things for me. So number one, it's a weird paradox that the technology is moving quickly and we're early. And sometimes, like, I find myself in debates where it's like, we're early. No, it's moving quickly. No, we're early. But both things are true at the same time.
Starting point is 00:25:12 Like, it's spreading quickly, and ChatGPT came out in November 2022. Like, it's been three years and five months. A child who's three years and five months old is very much a toddler. Like, this is a really, really young thing. And in a way, I think you're right that it would be strange to be overconfident about something like its macroeconomic effect in the labor market. The other thing that's, like, a complexifier here, and is really rough for people like me who love to be able to explain things, is that a lot of things were happening around
Starting point is 00:25:42 November 2022. Number one, ChatGPT debuted and kicked off this sort of age of generative AI. Number two, that's right after the Federal Reserve started jacking up interest rates faster than any period in modern history. And what's the point of jacking up interest rates? It's to raise the cost of money. It's to reduce demand for, among other things, labor. So if you look at a graph of hiring rates in America, it looks like hiring rates have this
Starting point is 00:26:08 inflection point right as ChatGPT comes out. And just sort of, like, an innocent observer can look at this graph and be like, oh my God, ChatGPT destroyed the labor market. No, it didn't. Not necessarily. The Federal Reserve was trying to cool off hiring. That's one of the best ways to cool down inflation. That's why hiring declined.
Starting point is 00:26:27 And the other thing that happened right around this period is that there was, related to the raising of interest rates, this huge inflection point in tech hiring, where all these companies, you know, Block, Jack Dorsey just laid off like 40% of his workforce, Block and Meta and Alphabet, all these companies hired so many people between like 2018, surging into 2021 right after the pandemic. They hired so many people, and then they really, really pushed on the brakes around 2022.
Starting point is 00:26:56 And so you've seen hiring at the big tech companies and medium-sized tech companies really fall off. So you have three things that happened around the same time: this sort of tech U-turn, the raising of interest rates, and the debut of AI. And it all seems to coincide with this inflection point in hiring. It's hard for economists to say, oh, that inflection point is because of only one of those three variables. My guess is that it is much more to do with tech and the Fed than it has to do with AI for now. But as for what AI does to the labor force in the near future, I do think that's a big, open, murky picture, and it's precisely why a science fiction story
Starting point is 00:27:31 had the ability to move markets by a trillion dollars. I guess as I have continued to use AI and have gone from, like, ChatGPT to Claude. And I'm still not, like, I'm not great at prompts. I don't use it for everything. A little research here and there, use it as Google. But as I have started to glimpse how advanced it's getting and how much it can do, I've started to think, I don't know how it doesn't displace a ton of labor. And even when people are like, well, it could just make workers more productive, yes, but that could then tell an employer, well, I'm not going to hire more people. I'm not going to lay off the people I have, but I'm not going to hire more people because now they're more productive with AI. I'm struggling to imagine the scenario where it
Starting point is 00:28:21 doesn't just wipe out a ton of white collar jobs. Yeah, let me tell you what scares me. And then I'll tell you why I think I've landed in a more optimistic position when it comes to more people keeping their jobs. And what scares me. So Claude Code comes out. while I'm on parental leave. And, you know, during the precious moments where you get the baby to sleep, I start to play around with it. And I realize it's really, really good
Starting point is 00:28:43 with government data. You can download big, big tranches of government data, say, from the Bureau of Labor Statistics and ask Claude Code to go through it. And it can do really, really good statistical analysis and make beautiful graphs. And you can tell it exactly what color and font you want the graph to be in.
Starting point is 00:29:00 So I do a lot of work on this data set called the American Time Use Survey, which is basically a government survey of how people spend their time. Every year they ask thousands of people, how much time do you spend eating? How much time do you spend, you know, recording podcasts? And people say, you know, I spend three hours a week eating and this amount of hours doing this other thing. So I love this data set. I used it for this big piece that I wrote called The Anti-Social Century, on how much time people spend socializing versus being alone. And I was like, I want to do a story on being a dad. And the change in time
Starting point is 00:29:33 that fathers spend with their children, especially young children. And I just asked, in plain language, Claude, come up with 10 graphs about how the time-use experience of being a dad has changed in the last 20 years. Don't ask me any more questions, just go do it. Now, maybe Claude's going to make a bunch of mistakes, right? Like, I'm sure a lot of people listening are like, Derek,
Starting point is 00:29:58 like, you're a journalist, what are you doing, like turning over your research projects to an AI? I'm never going to publish that straight to my substack or talk about it on a podcast. I took all of it, and I sent it to an economist at the Aspen Institute, who I relied on for American Time Use Survey data. And I said, check it out, you know, fact check this research. And his work came back, and it was within like one or two percent
Starting point is 00:30:20 of what Claude had done. And he was like, I'm terrified because this took me five hours, and I'll bet it took you three and a half minutes. And I said, not only did it take me two and a half minutes, but also I don't know what the F I'm doing with any of these datasets in terms of, manipulating them the way you are. So that's crazy, that the five hours of an expert's time could be done by a novice in two and a half minutes.
Starting point is 00:30:42 That, I think, if you multiply it out over the economy, you could begin to wonder where are some of these sort of knowledge economy jobs going to go. Here's why I'm more optimistic. I have this model in my head of, like, is this technology a horse technology or a spreadsheet technology? So with the horses, famously, horse labor is a huge part of human history. There are millions of horses working on farms and, you know, carrying people throughout the streets in the early 1900s. But the internal combustion engine is invented, and it basically wipes out the working horse population by 99.9%. Right. Like, the internal combustion engine famously, between tractors and cars, just completely displaces horses.
Starting point is 00:31:24 So some people basically are like, humans are horses. But humans aren't horses. And you know they're not horses. because look what happened to spreadsheets the 1960s, 1970s. There were a handful of people who were working in spreadsheets, who were, you know, accountants or, you know, working with taxes. And then, you know, products like Excel are invented. And you'd think, innocently, if all you knew about economic history was the horse,
Starting point is 00:31:47 you'd be like, oh, my God, all of these spreadsheet workers can be put out of business. And instead, the number of people working with spreadsheets didn't go to zero. It went to infinity. Everybody works with Excel. Or, like, everybody. everybody in marketing and PR, et cetera. There are tens of millions of workers who, for better or worse, I'm sure many of them would say for worse,
Starting point is 00:32:08 lived with Excel attached to them like a barnacle. And so Excel did not displace work. Excel just became the universal working companion. My current hope, and maybe this is just motivated reasoning, my current bet, is that generative AI is going to be more like Excel spreadsheets than like internal combustion engines and horses. I think it's just going to be like a home screen for a lot of knowledge workers,
Starting point is 00:32:34 whether they're in marketing or PR. It's going to make graphs for them and PowerPoints. It's going to write memos. And the downside of that is that a lot of people, I think, are going to outsource their brains to these machines and suffer like cognitive atrophy and get dumber over time. Like that's going to happen.
Starting point is 00:32:50 But the bright side is I think they're going to continue to get a wage. I don't think this is going to drive unemployment to 20%. Yeah, I do feel like it's going to create an economy that prioritizes creativity, judgment, like all of these skills that are a little tougher to like wrap your arms around. I hope so. So I had my annual physical and I get blood work done and, you know, because I'm 44 now, they of course like test for everything and my doctor sells me and all this stuff. So anyway, I get all the results back before the doctor me because it just like, wow, that's so funny. So it's just like you get the email from the lab,
Starting point is 00:33:32 right? Yeah, with a lot of total nonsense little like bars where it's like here's your line between within the bar and it's like your GXK9 is 44.4. Right. It's like, thanks for that. It's like, yeah, it's just enough to freak you out if you're a little, if you have a little hypochondry like me. And, you know, then there's like, it's all black and then there's a little red and the red means that it's like out of the range. You know, so I was like, you know what? I'm a doctor's busy, whatever. I'm just going to throw it in just to see in chat GPT, right? This is a chat GPT at the time. And they spits out a whole analysis about every single thing, very easy to understand. And I'm like, well, this will be interesting when I finally talked to my doctor.
Starting point is 00:34:11 And my doctor calls, and it was exactly right. In fact, it was more helpful than my doctor. I had the, so I had literally the exact same experience. There's no point to me recapitulating it because it was word for word what you experienced. The question to our purposes is, and we don't need to belabor it, but like, whose job does that actually replace? Right. Well, I was going to say, because, like, that example can go both ways because I'm like, oh, instead of the doctor having to call me or all the other patients to explain this shit to us, because we don't know what he knows, now the AI can do that, and he can go see other patients. Exactly. Exactly. I mean, there's a, sort of a C-plot line on the show The Pit this year of, I don't know if you watch The Pit,
Starting point is 00:34:54 maybe some of your listeners do. Too intense. I get it. I get it. There's like a subplot line about the introduction of generative AI to hospitals. And I've spoken to folks to doctors and to people who've spoken to doctors about how they're using tools where essentially as they go about treating a patient, they'll verbalize more than they're used to so that the AI can process everything that they're hearing and observing and thinking during the intake during the exam, and then it'll just write up its report. And the doctor doesn't have to spend
Starting point is 00:35:31 hours and hours a day, certainly many hours a week, writing up all these reports by hand, because the generative AI does it and the doctor can look at it to make sure that it's right, of course, and then they have more time to actually spend with patients. And how common an experience is it as a patient to, like, have the doctor come into your room 25 minutes late when you're in the middle of a busy day. And the doctor's like, I'm sorry, I got held up doing X. Well, if I can find a technology or if we can make a technology that reduces that X so that they can meet their patients on time and maybe even see more patients, that seems like a mitzvah for everybody. So again, I'm being optimistic here and I don't want to represent the idea that like, I can't imagine really bad scenarios for the rollout of AI. But they're just absolutely going to be use cases in which is just saves time so that we find more human to human applications of the workday. That's my deep, a poli- vanish hope. The one thing I also feel relatively certain about is we are going to need a government that we can trust that is functioning, not only here, but probably in other places around the
Starting point is 00:36:42 world and we'll need some kind of like global cooperation here in order to put some guardrails around this because I think if there are no guardrails around it, I don't see a scenario where this ends well. And, you know, you may. mentioned Anthropic in the Pentagon a few times, but I'm interested in, like, how big of a deal do you think what HegSeth did to Anthropic and whether it's going to be something that's repeated? Oh, it's completely outrageous. I haven't heard a real defense of the supply chain designation, which essentially said that we're going to treat Anthropic as if it's a Chinese Communist Party agent, like it's Huawei, like it's a saboteur. I have heard good arguments for why Anthropic
Starting point is 00:37:25 has no business doing contracts with the U.S. government, right? That if Anthropics is going to have guardrails that the Pentagon doesn't agree with and wants to be in the loop for decisions that the Pentagon wants to have sole ownership over and that the Pentagon might have the right to request sole control over decisions that are made in the interest of national security, then, yeah, I mean, Anthropics, a private company that can come to a free decision to not sign a contract with the federal government. What I have a huge problem with is the federal government pointing at a company and saying,
Starting point is 00:37:55 You, I just wrote a contract for you. Sign this contract, including every single detail in the contract, price, what's ruled in, what's ruled out, use cases, guardrails, sign this contract, or I'm either going to have a defense production act, either going to have a DPA designation that means that I'm going to force you to sign this contract, or if we don't do Defense Production Act, I'm going to label you a supply chain risk and ether your company, nuke you from outer space. that seems to me to come very close to the end of private property rights in America, right? I mean, not to be hyperbolic, but like if the government can point at you and say sign this contract, and if you say no, they can destroy you.
Starting point is 00:38:39 That really doesn't sound like something that's too distinct from just straightforward Maoism. And so I'm concerned about this thing on two levels. I'm really concerned about the supply chain designation, which I think is completely egregious. But also, there's a story behind the story. here. And it's like, there's a world in which would open AI an anthropic or building plateaus before it becomes genuinely dangerous. There's a world in which it doesn't. There's a world in which these technologies could be like really useful for hacking into the federal government, for violating private property, for operating drones that like a household buys,
Starting point is 00:39:18 for hacking into energy grids and bringing them down. What these companies are building might not be so distinct from an advanced weapon. And in the 1940s, we didn't ask private companies to build the nuclear bomb. We didn't say, hey, Ford, GM, like, go, like, see if you can, like, get all the uranium and build an atomic bomb and just, like, give us a call if you think you built it. No, we had that should under lock. And so if these technologies are building what they sometimes claim to be building, it's not entirely clear to me that something about, their work shouldn't be nationalized because work like that has always been nationalized. So I think what Hegs have did is indefensible. But I think if you scope up and sort of see the
Starting point is 00:40:06 broader contours here, the next few years are going to be really weird from an AI versus USG, U.S. government perspective, if the technology continues to improve. Well, and good for Anthropic for trying to be the... the white night here, if they are, right? But like, the challenge here in a, you know, competitive global economy with no rules when it comes to AI is that even if Anthropic does everything to make sure that they are building a safe ethical AI, there are competitors. There's going to be the XAI that run by Elon Musk, you know, or or San Francisco. Altman and Open AI jumping in to do what Hegseth wants them to at the last minute.
Starting point is 00:40:57 And that's just in the United States. Like we're not even talking about what's going on with China, right? And so it's like, yes, we have to have like rules and guardrails here in the United States. There also has to be like some serious international agreement treaty, whatever. And it just feels like we're so far from building the legal framework to give ourselves a chance here if in fact this technology is going to develop as fast as some of these people think it is. Everything you just said, I totally agree with.
Starting point is 00:41:24 There's one answer I could give that's just like, I recapitulate everything you just said. I agree with all of it. I'm very scared about the fact that even if we get some like philosopher king government of the United States through the 2030s, right? Okay, a lot of other countries exist. And China isn't 20 years behind in AI development. They're like nine months, 14 months. It's a blink of an eye in the course of history. If what we're talking about is something akin to a modern super weapon, it's incredibly scary.
Starting point is 00:41:51 And it scopes up to this broader idea that, like, I don't judge people who just want this whole thing to go away. What if we just stopped tech in, like, 2011? It's like, what if we were just like, I'm good? We have Uber and Google Maps. I'm done. We have private chauffeur services for me that I can hail on my phone. Don't need the autonomous weapons. I think I'm good.
Starting point is 00:42:18 It's like, I have some sympathy for that point. I want AI to solve cancers too, but my guess is that we're going to end up building super weapons before we solve cancers. And so it's not as if there's a point that I was circling with a conversation with Pablo Tori and Mina Kimes on their podcast a few days ago. It's not like the AI CEOs are giving us a whole lot to root for. The advertisements are like, it'll help you do more pull-ups. That's cool.
Starting point is 00:42:42 Like, you know, I wish I could do like three more pull-ups in a set. So the advertisements are like, this is good because it'll help you cook like a better pasta and do more pull-ups. and then the CEOs are like, it's going to displace 30 million jobs and totally transform the U.S. economy so that we're going to need a universal basic income because you're so not going to recognize the labor market that follows AI. And also, we might be building something like a nuclear bomb, and, God, we would really like to be regulated because we don't know how to solve these problems.
Starting point is 00:43:08 Like, we're not being given as citizens a whole lot to root for, and when people feel their experience with AI is like I opened TikTok and it's slop, and then I watch Dario or Sam talk. about this technology and it's dystopia. I hate this. I want to go away. Bring me back in a time machine to 2011, please. I get it.
Starting point is 00:43:26 I'm not saying that I think everyone in the space is like the same level of morality. I think there's gradations, but like I absolutely get the popular backlash and we wouldn't necessarily go into this, but like you should do a thousand shows on it. You know it's going to be just a huge issue, 2026 maybe, but like 2028,
Starting point is 00:43:44 especially if it seems like a bit of a bubble. Like these presidential candidates better get their message straight on AI because it's already the biggest story in the world potentially and God only knows what it's going to look like in two and a half years. Yeah, and so far the most they'll say is it's really important. It's the issue of our time and we need some regulations
Starting point is 00:44:02 and it's like an inch deep. A lot of the knowledge or at least what they're saying. This episode is sponsored by BetterHelp. Women are often the world's primary caregivers, but who cares for the caregiver? This international Women's Day, BetterHelp is shifting the focus back to you. Beyond the roles you feel for others,
Starting point is 00:44:25 your mental health deserves its own space. Discover how therapy can help you reclaim your time and prioritize your growth. That's something. Yeah, whether you're a woman or a man or know a woman or has ever known a woman, you probably need therapy. And, you know, maybe your therapist is going to be a woman. Maybe not.
Starting point is 00:44:45 That's up to you. Better Helps Quality Therapists work according to a strict code of conduct and are fully licensed in the U.S. BetterHelp does the initial matching work for you so you can focus on your therapy goals. A short questionnaire helps identify your needs and preferences in their 12-plus years of experience and industry-leading match fulfillment rate means they typically get it right the first time. If you aren't happy with your match, switch to different therapists at any time from their tailored wrecks. With over 30,000 therapists, BetterHelp is the world's largest online therapy platform,
Starting point is 00:45:10 having served over 6 million people globally, and it works. With an average rating of 4.9 out of 5 for a live session based on over 1.7 million client reviews, your emotional well-being matters, find support and feel lighter, in therapy, sign up and get 10% off at BetterHelp.com slash offline. That's betterhap H-E-L-P-O-C-O-L-P-O-L-L-P. If you love positive America and want more of my political analysis, you should subscribe to my newsletter, the message box. I'm Dan Fifeer, former senior advisor to Barack Obama, and in message box, I break down what's actually happening in politics and what it's going to take to beat Donald Trump, MAGA. If you follow every poll and every twist-in-turn in the
Starting point is 00:45:45 campaign, message boxes for you. This isn't just hot takes. Every edition delivers clear analysis, behind the scenes insight, and practical strategy you can actually use whether you're working on a race, organizing your community, or just trying to win the argument in your group chat. So if you're listening to this, hit pause, go to your browser and head to crooked.com slash yes, we Dan, because I have a special offer for crooked media fans. You would get 20% off the message box for an entire year. So go to crooked.com slash yes, we did. Let's talk about everything is television, which is a piece where you argue that everything
Starting point is 00:46:24 is becoming television, social media, podcasts, the AI-generated slot videos we were just talking about. And by television, you mean video on a screen, particularly the endless flow of videos on a screen. Can you talk about this convergence over the last few years of how in different sectors everything is becoming television? Yeah, so this idea first occurred to me because I was reading an FTC report where Facebook meta was trying to argue that it wasn't a monopoly. And the process, of filing a report with the federal government arguing that they're not a monopoly. They made this really interesting confession
Starting point is 00:47:00 where they said that only a tiny fraction of time spent on meta services, I think 7% on Instagram are spent consuming content from friends. 93% of the time spent on Instagram is consuming content from people you don't know, video content often from people you don't know. And I read that.
Starting point is 00:47:19 And just the thought that just popped into my head was, oh, you know. Yes, of course, Instagram is television. Meta's television. These are not social networks. These are the television networks of the 21st century. And then what that immediately clicked into is that I had been talking to some of my friends and employers at the Ringer podcast network where I record my podcast, Plain English, and they've been urging me to turn my podcast, which was audio only, into a video podcast, just like this one and just like all the ones that Crooked does.
Starting point is 00:47:50 And initially I was very foot-dragy on this because I like to make the kind of content that I consume and all the podcasts that I consume are just in my ears while I'm making coffee in the morning. And so I liked an audio-only product. But they show me the data, and it's like completely obvious that video podcasts are going like 5 to 10 faster
Starting point is 00:48:08 than audio-only podcasts. And you are leaving so much listenership on the floor or dollars on the floor, influence on the floor, whatever you want to call, whatever noun you want to use. Whatever it is, you're leaving on the floor if you make it an audio-only product, rather than a video product. And then I thought again, oh yeah, that's it. Podcasts are also
Starting point is 00:48:26 becoming television. And that very same week that I read the FTC report, SORA 2, the sort of weird AI video social network that Open AI was experimenting with came out. And I was like, Jesus Christ, AI is trying to become television as well. They're trying to like make their product, which could theoretically do anything. Like it's going to solve cancer. It's what they always say. They're trying to turn it into short form video into TikTok television as well. And so it occurred to me that, like, television and short form video in particular, right, flowing video, it was like the attractor state of all media. It's like, it doesn't matter where you start in, like, the media ecosystem.
Starting point is 00:49:03 You can start as radio. You can start as an online yearbook for Harvard students. You can start as a company trying to build superintelligence. It doesn't matter where you start. If you're in the media ecosystem, the attractor is television. You will become television. And so I thought, you know, I should write that. So I wrote the piece.
Starting point is 00:49:23 It's called Everything Is Television. And it's just this idea that there might be some interesting consequences of living in this age of just nonstop, short-form video flow. That, like, the grammar of our media, the grammar of our politics, of our life, is just so monopolized by the logic of, if you're making media, it has to be TV. Yeah. And I love the piece because the consequences of this are really fascinating to me. And especially the consequences of how short-form video has now affected what used to be traditional television. And you make the point that Netflix is telling screenwriters to make plots as obvious as possible for all the people who are half watching and half-scrolling on their phones. And I will just say, you can tell. Both you can tell and I am guilty. Right. I've been thinking about this the last couple of years. I'm like, why does Netflix churn out these movies with like A and B list actors, and they look great and the production is great, and, like, there is just a hole in the film, in the plot, and in the writing, and in the dialogue. And I'm like, I know there are wonderful writers out there. What is happening? And it's intentional. Like, it's not just like all the great writers have disappeared. And it also makes me think that when everyone's like, well, AI will never be able to replace, like, human creativity, I'm like, that's true with, like, prestige television, which, like, a small slice of liberals are basically into. But for most of the country, most of the world, I actually think what AI will be able to create will probably do it for them. Yeah, it very well might. I don't have, like, a moral position on TikTok and Instagram, really. This isn't, like, a vegan talking about cow meat. But I'm not on Instagram or TikTok. So when I write about Instagram and TikTok, it's like a cultural anthropologist,
Starting point is 00:51:22 like visiting a foreign country a little bit, which maybe that kind of social distance is useful. Maybe it's not. But I agree. I don't think these short-form videos that often go viral are like so inspiring that I'm like, oh my God, no one with an AI tool could possibly make something as diverting.
Starting point is 00:51:40 It's like I'll bet they can. I bet they will. I do think that, and I want to be clear about this, because I don't know that AI slop will entirely take over social media. I do think that there is a level at which people like knowing that they're listening to or watching people.
Starting point is 00:51:59 Like, there's a reason why, as Matt Bellany and my colleague at The Ringers reported, there's a reason why Hollywood is a little sketchy and a little surreptitious about how much they let on they're using AI. His point is they use a lot more AI than they actually admit. And that's because I do think there's a little bit of this embarrassment quotient, which is good to have. Like, I want to keep it.
Starting point is 00:52:26 Keep the shame. Keep the shame. Like, people should want to read and watch and listen to stuff made by people. I've had a really busy day today. We're talking on Friday. And so I haven't had an opportunity to listen to the Harry Stiles album that I understand came out this morning, both for personal purposes and to be able to have any conversation with my wife when I get home from work. I know that's going to be top of her list. I enjoy listening to Harry Stiles.
Starting point is 00:52:52 It has never occurred to me to put a song I know to be an AI song on a playlist. And God, I hope that my child, who's two years or three months old, feels the same way and isn't putting some AI slop on her future robot Spotify playlist in the 2040s, whatever that's going to be. I really, really hope so.
Starting point is 00:53:12 I don't know. But I can definitely tell you right now, it's not remotely interesting to me to either consume like most AI content or, and this might be just as important, or just share it. Like, if I did accidentally
Starting point is 00:53:25 watch like an AI video that wasn't obviously an AI video where like the fun of it isn't that it's like stupid AI, I think I'd be too ashamed to share it and I hope that that shame still exists for a lot of people. And I also think, pointed this out, like on the entertainment side,
Starting point is 00:53:43 I am not passing judgment on anyone. I like all kinds of shit. The effect on politics and democracy is what really worries me in the short-term video and you get to this a little bit. I just want to read your take on Neil Postman's very old warning
Starting point is 00:53:59 about a society that watches too much television. Quote, every form of communication starts to adopt television's values. Immediacy, emotion, spectacle, brevity. In the glow of a local news program or an outrage news feed, the viewer baths in a vat of their own cortisol. When everything is urgent, nothing is truly important.
Starting point is 00:54:18 Politics becomes theater, science becomes storytelling, news becomes performance. The result, Postman warned, is a society that forgets how to think in paragraphs and learns instead to think in scenes. I mean, that is politics today, right? I wrote that, but I also didn't write that. And that's not me being like Claude wrote it. I mean, like, Neil Postman wrote that. Right. Well, yeah, that is a...
Starting point is 00:54:39 It's incredible how much he saw 50 years ago. Amusing Ourselves to Death is, like, one of the most prescient books written in the last half century. It's genuinely remarkable. And it's really interesting to me who succeeds in today's politics. There are people who I think use straight-to-camera short-form video in a rather brilliant way that I think represents, like, the light side of the force, not the dark side of the force. Like, I think Zoranam Dani, while I disagree with some of his policies, is an absolute savant. I think his level of charisma and cheer, good cheer. specificity is remarkable.
Starting point is 00:55:14 Like, he's not doing straight-to-camera short-form video slop. He's doing, here's a halal cart, and they used to serve food. There was $8, and now it's $10 because the cost of permitting went up from $15,000 to $20,000, and I want to reduce the cost of permitting. Let's come together and pass this new law because you deserve cheaper lunches. It's wonderful. It's brilliant. And I love that.
Starting point is 00:55:38 I wonder if Barack Obama came on the political. scene today, not some Barack Obama who was like coached in the ways of short form video, but like the actual Barack Obama that exists. How confident are we that there's like space for a figure like that, for an orator, like throwback style orator versus someone whose success is their ability to like be snappy, snappy, snappy and straight to camera, like I say, thinking in scenes and news becoming this kind of short form performance, is there a way in which that style of politics is like isn't conceivable anymore in the 2020s? I obviously have thought about this quite a bit.
Starting point is 00:56:29 Few things. One, even back then, when Barack Obama was running for president, it was a challenge that he had to adapt to. to learn to communicate in that era. Because he is like, he is first and foremost before he was a speaker as a writer, and he would write in long paragraphs, didn't really like periods that much, a lot of semicolins. And just to get him to go from writer to speaker
Starting point is 00:56:56 and a speech that's not like, you know, 12 pages long, but five pages long, took some adaptation. So, like, perhaps he could adapt. The argument for that there's still a demand for, sort of long-form content, right, is, of course, the, like, people who are listening to the fucking three hours of Joe Rogan or some of these, like, long-
Starting point is 00:57:17 So the attention span... No, I'm glad you mentioned that, because that's a really important, that's a really important pushback on my thesis, is like, how the fuck do you explain Joe Rogan and Lex Friedman and whatever, Dwar Keshe, and all these podcasts that are super popular, and they're three and four hours long if everything's moving a short-form video.
Starting point is 00:57:34 I've thought about that. It's a really, really good and smart objection to my thesis. So I want you to finish on the Obama thought, But I'm going to pick that up in a second. No, so there is that. There's also, though, we tend to think about this in terms of like, is this candidate good at this form of video? And, you know, Zoran Mondani's good direct to camera and he's specific. And Barack Obama was good here.
Starting point is 00:57:56 But like what you were writing and what Postman has been warning about for 50 years is more about, like, participating in a larger system. Right. And I do think that an environment where communication adopts television's values, immediacy, emotion, spectacle, brevity, everything's urgent, nothing is truly important, I think it makes the playing field between sort of right-wing populists and authoritarians and liberal democracies uneven. Like, I have always thought that it gives the right-wing populists an advantage, because all of those emotions, all of those values are very much the values of people
Starting point is 00:58:36 who are just looking to tear shit down, whip people up, and take power. And I think that the values required for a functioning democracy, which is, you know, viewing things as not binary, but complicated, nuanced, taking the time to understand one another, empathy, like, looking towards the future and doing something that's not going to pay off for another five to ten years, like, it is swimming upstream to try to be good at short-form video if you are sort of fighting against these larger forces in our politics that prioritize, you know, shit-posting and everything that we see from Donald Trump. I don't know if I'm making myself clear, but like this is what always sort of challenges me with, like, a Gavin Newsom, right? Like, so everyone can look at Gavin's last year
Starting point is 00:59:32 and be like, he's really good at getting attention. And he figured out how to do what Trump's doing, get attention. It's like, yes, but I can also tell, and I can even tell this from talking to Newsom, that, like, he knows that it was a short-term sugar high and that what he's doing is sort of empty calories and that there needs to be something truer and deeper to his political agenda and philosophy if he wants to succeed. And I'm sure Mamdani believes that as well, but I think that, like, I think we're just in a tougher environment now where if we are all just scrolling through our phones watching short-form video nonstop, then the values required to participate in a functioning democracy are just going to be that much harder to live by.
Starting point is 01:00:17 Yeah, I think political success requires that candidates lean into the technology of the moment rather than lean away from it. I mean, this goes back over a century. This is FDR and the fireside chats on the radio. Eisenhower was considered, like, if not the first television president, then the first television primary president. I think it was 1952, or was it '56, that was the first television primary. JFK was maybe the first true television candidate, and, you know, it's lore now that the difference between his cool appearance on television versus Richard Nixon's sweaty appearance on television meant that people who watched that debate leaned heavily toward Kennedy, while people who listened to it on the old technology, the radio, leaned toward Nixon, who obviously lost. So I have one thesis, or one belief, that it's important for candidates to lean into modern technology.
this modern technology, which is why I'm looking for folks who are able to ingeniously braid a loving or optimistic or cheerful or productive mode of politics into a mode of media that is often whatever the antonym of all those words is. Like, uncheerful, unproductive. I've now forgotten what half of them are. I'll blame my children. I do think that, again, I think Mamdani's good at it. A point that I think is worth reflecting on here is that, you know,
Starting point is 01:01:41 one might naively or innocently assume that, like, given the mode of this technology and the popularity of far-right populism around the world during this age of short-form video taking over media, that the most successful politicians would be those that are mean or angry or vengeful or populist in the mode of screaming about the group to blame, whether that group is immigrants or billionaires or whatever. And it's just important to remember that two of the most significant,
Starting point is 01:02:14 upwardly mobile politicians of this moment in the Democratic Party are Zohran Mamdani and James Talarico, who are schoolboy cheerful and just seem really damn nice. And I've only met each of them very briefly, either on Zoom or in person. I think they seem really nice to me. You've probably spent more time with them, but they seem really nice. And it's interesting to reflect on the fact that, like, yes, McLuhan said the medium is the message, and yes, theories of media often suggest that there's, like, a deterministic quality of media, that we become the media that we consume.
Starting point is 01:02:46 But I want to maybe, like, give, like, a cheer here for human agency. Like, we can choose to not be as angry as our media might seem to want to make us. And I think Talarico and Mamdani suggest that there's a way to be charismatic in a very 21st-century way while also retaining some base level of human decency. And my hope is that whoever picks up the baton for the Democratic Party writ large gets very good at braiding those two skills, because I think they're going to be necessary. And Mamdani proves that being nice or cheerful, or, you know, you don't have to be like a squishy moderate, right? It doesn't actually have to do with ideology.
Starting point is 01:03:29 I think there's sometimes, and you see this from folks on the left, whenever you say something like AOC is an incredible speaker or Mamdani's so charismatic, they'll be like, oh, you think civility is the answer. And it's like, no, no, it's not about civility. You can have a very, very populist agenda, an ideologically left agenda or center, center-left, whatever it may be. It is about the style. And I do think that after we have all lived through this last decade, there is more of a hunger out there for someone who is going to speak to people's aspirations and hopes for the country than we might imagine, given that we are, you know, just drowning in all this vitriol all the time. I think it's funny that you landed there. So the podcast that I released this morning is with Senator Ruben Gallego,
Starting point is 01:04:23 and we talked about the Iran War, and we talked about his ideas for democratic messaging. And I didn't expect to end in the place that we ended, but the place that we ended was his case for how Democrats need to match their affordability rhetoric with an aspiration rhetoric. That affordability is really important, like raising the floor of, like bringing up Americans' experience of daily life to some minimum viable level of middle-class comfort is really important. And at the same time, I think there's a lot of voters that are like Democratic Party adjacent.
Starting point is 01:04:58 Like, we could get them. They're not deep MAGA. We could get them. Especially young voters who hate Trump right now, and especially young male voters who are really pissed at Trump right now. I think these guys want to get rich. Now, I don't want them to get rich by breaking the law. I don't want them to get rich by trying to bet on cryptocurrencies that are going to be up 1,000 percent tomorrow and down 2,000 percent the next day. But I want them to see in the Democratic Party a group of people who believe that they can stand for rights and decency and affordability and also celebrate aspiration and success.
Starting point is 01:05:40 I think that sometimes there's a way that Democrats can talk that treats success as something, not to punish, that's too strong a word, something to tax, something that could become problematic. There's, like, a tall poppy syndrome, I think, in Democratic Party language sometimes. And sometimes that's appropriate. This is not my "all anti-billionaire messaging is bad" spiel.
Starting point is 01:06:01 But I think we need to find a way to celebrate aspiration and success. And maybe that's another challenge for someone to combine with straight-to-camera presentation and a bit of cheer. Is this like, how do you make people feel a little bit optimistic and not just feel like you're trying to be, like, a sort of cold problem solver? Yeah. Last question before I let you go. You wrote a beautiful essay last week or this week, I can't remember which week, on being a dad,
Starting point is 01:06:28 and the one part that really stuck with me is you talk about the reasons to become a parent and you mentioned that a reason to become a parent is that we're built for it and it's built for us which is something that is really hard to understand
Starting point is 01:06:46 before you become a parent. I found that even when I became a parent, it's not automatic. You kind of have to just let go at some point and embrace the new experience, the chaos, the joy, all of it. The crying. Yeah, the crying. Or as you put it, ride the ride. You talk about, you like it, it's like an amusement park ride.
Starting point is 01:07:10 And it really is. You have to get on, and not just get on and then be nervous the whole time, but get on and be like, all right, hands up. Let's just see how it goes. Yeah, I loved writing this piece. And this is one of those essays where people out there who write might have this experience where sometimes if you ask how long did it take to write that essay or write that article, it's like it either took six months or it took six hours. It's like it was gestating for so long, and then actually putting it together was so easy
Starting point is 01:07:36 because I've been thinking about it a lot. There were a couple ideas I really wanted to get out there. One is that I feel really strongly that being a parent isn't like parenting a baby, it's like parenting this sequence of babies, because the baby keeps changing day after day while retaining her basic facial structure. Like, she sleeps, then she doesn't sleep, and she nurses, then she doesn't nurse, and she smiles, then she stops smiling. It's always changing. And there's something really lovely about this idea to me, that, like, being a parent is falling in love with a thousand beautiful strangers that evolve behind a single face. And I was drawn to write this essay initially by having that feeling. But the next feeling that I had was, like, really trying to think about, like, the experience of being a parent and, like, how it felt different than the rest of life. I am a Wirecutter guy, if that makes sense. Like, if I need to know, like, what mic to buy, I go to Wirecutter. What pants should I buy? I go to Wirecutter. What helmet should I buy for
Starting point is 01:08:37 my electric bike? And what electric bike should I buy? I go to wire cutter. I outsource so many decisions in my life. And in many ways, I'm like the worst creature of instinct. Like, I'm not consulting myself for any of these decisions. I'm not doing any of my own research. I'm letting the New York Times wire cutter make all my decisions for me. And then there's parenting. And in parenting, I just, like, felt, like, really connected to my instinct.
Starting point is 01:09:01 And in wondering, like, what is that? Like, why do I feel this, like, surge of instinct when it comes to being a dad? I thought, you know, maybe people are just, like, honed by natural selection or whatever, to do, like, a very finite number of things. Like, of course, eat and sleep and move around and procreate and have sex, but also, like, how does the species survive
Starting point is 01:09:25 if we aren't at some level made to parent? And therefore, how does the species survive if we're not at some level made to, like, fall deeply in love with those thousand beautiful strangers evolving behind a single face? And so that led me to this idea that, like, you know,
Starting point is 01:09:41 life is like this amusement park, where you're locked in it, and the park has no clear purpose, and there's just a lot of rides that are there, and falling in love is a ride, and making deep friendships is a ride, and sex is a ride. And these are all experiences that you can choose, and they were all built for you, and in a way you were built for them.
Starting point is 01:09:58 And parenting, I think, is as profound and unprofound as this. It's just another ride in the park. Like, if you choose not to do it, that's fine. It's just another ride in the park. But it is there, it is built for you, and you were built for it. And if you are only going to be in this amusement park once, you might as well ride the rides.
Starting point is 01:10:25 And so that was sort of the second idea that I had, is that my case for becoming a dad is three words long. Ride the rides. Ride the rides. Well, and I'm someone who wasn't sure that I was built for parenting or parenting was built for me. Like, I don't know why. I was just very nervous about it.
Starting point is 01:10:41 And I guess I thought for a while, because someone would be like, oh, there's, like, a biochemical connection when the baby's born and all that. And it happened, and I was like, well, I don't necessarily feel that yet. I thought it was like a switch. But it's like everything else in life, where it just gradually dawns over time. And I just remember having this moment, I don't know if it was a year in, after Charlie was born, but I was just like, I put him to bed,
Starting point is 01:11:06 And then I was like, oh, I'm good at this. I think I'm good at this. Like, I did not think I would be good at this. I did not think I would be built for this. But, like, I am. How did that happen? It is exactly what you articulated in that essay that we are. We're built for it as humans.
Starting point is 01:11:22 Amen, brother. Thank you so much for joining. And it's good to have you back from leave and thinking and writing about all these very important topics. So good to see you. Thank you, good to see you. As always, if you have comments, questions, or guest ideas, email us at offline at crooked.com.
Starting point is 01:11:38 And if you're as opinionated as we are, please rate and review the show on your favorite podcast platform. For ad-free episodes of Offline and Pod Save America, exclusive content, and more, go to crooked.com slash friends to subscribe on Supercast, Substack, YouTube, or Apple Podcasts. If you like watching your podcast, subscribe to the Offline with Jon Favreau YouTube channel.
Starting point is 01:11:58 Don't forget to follow Crooked Media on Instagram, TikTok, and the other ones for original content, community events, and more. Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau. It's produced by Emma Illick-Frank. Austin Fisher is our senior producer. Adrian Hill is our head of news and politics. Jerich Centeno is our sound editor and engineer. Audio support from Kyle Seglin.
Starting point is 01:12:30 Jordan Katz and Kenny Siegel take care of our music. Thanks to Delon Villanueva and our digital team, who film and share our episodes as videos every week. Our production staff is proudly unionized with the Writers Guild of America East. If you guys like Pod Save America, please consider subscribing to our Friends of the Pod program. Friends of the Pod get lots of stuff. You get more Pod Save America. That includes our new show, which is called Pod Save America Only Friends. It's where Dan gets naked. Where Dan gets full frontal nudity, but mostly
Starting point is 01:13:15 it's a biweekly subscription-exclusive podcast that is basically Pod Save America, but behind a paywall. So it's a little bit looser and more fun, and it's Lovett and Favreau and me and Pfeiffer and the other Crooked hosts, and we go deeper on the news and cover more stories. You also get Open Tabs, which is a weekly behind-the-scenes newsletter
Starting point is 01:13:31 from the show. Plus, you get ad-free episodes of your favorite crooked podcasts, and all kinds of other stuff. Dan will come to your house and clean it once every quarter. Yeah, clothes. Dan is very busy, clothes only. But along with just getting great content, becoming a friend of the pod,
Starting point is 01:13:44 joining our subscription community is the number one thing you can do to help us grow to help independent progressive media. So if you're ever thought about doing it, if you ever wanted more Pod Save America, consider going to crooked.com slash friends and becoming a friend of the pod.