Bankless - AI Populism: Warning Shots Before 2028 | Jasmine Sun

Episode Date: May 13, 2026

AI is no longer just a technology story. It is becoming a political fault line. Jasmine Sun joins Bankless to unpack the rise of AI populism, why backlash against data centers and AI labs is spreading across strange political coalitions, and whether jobs, inequality, and SuperPAC money could turn AI into a defining 2028 election issue.

📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium

BANKLESS SPONSOR TOOLS:
🔮 POLYMARKET | #1 PREDICTION MARKET
https://bankless.cc/polymarket-podcast
🟦 COINBASE ONE | GET 20% OFF
https://bankless.cc/coinbase-one
🧭 OKX | TRADE, EARN, PAY | 120M+ USERS WORLDWIDE
https://app.okx.com/join/USBANKLESS
🦊 METAMASK | DOWNLOAD NOW
https://go.metamask.io/BL-Pod-Download
🌐 BRIX | EMERGING MARKET YIELD
https://bankless.cc/brix
💰 NEXO | Get your 30-day access to Wealth Club Premier
https://bankless.cc/nexo

TIMESTAMPS
0:00 What Is AI Populism?
2:56 Why AI Could Matter in 2028
11:13 Data Centers, Distrust, and Warning Shots
25:27 The Jobpacalypse Debate
43:25 Game Over for Labor?
51:52 The New AI Political Map
1:04:50 Power Inequality, Policy, and the Grand Bargain
1:18:37 How to Stay Useful in the AI Era

RESOURCES
Jasmine Sun
https://x.com/jasminewsun
Jasmine's Substack
https://substack.com/@jasmine

Not financial or tax advice. See our investment disclosures here:
https://www.bankless.com/disclosures

Transcript
Starting point is 00:00:02 Bankless Nation, we are here with Jasmine Sun. She writes about AI, technology, and politics. She is a contributing writer at The Atlantic and recently has a New York Times opinion piece on AI and the permanent underclass, a phrase we are all too familiar with here in the world of crypto. She's also the author of the AI populism series on her Substack. Jasmine, welcome to Bankless. Thanks so much for having me. Jasmine, you put together a definition for AI populism. You wrote it's a worldview in which
Starting point is 00:00:32 AI is viewed not only as a normal technology, but as an elite political project to be resisted. This is really what we want to explore here with you today on the show. Kind of want to ask the question, and maybe we can start with this. How big is AI populism as a political issue domestically here in the United States? And we kind of want to get to do we think AI populism will be a relevant issue in the 2028 election? So maybe we can start with that first question. Just how big do you think AI populism is in the world of politics? politics. Yeah, thanks for asking. Yeah, I've been thinking about AI populism a lot over the last
Starting point is 00:01:06 few months. I think noticing this mass movement that is sort of growing around the AI backlash and in particular noticing how very different interest groups and very different factions, different sides of the aisle are coming together to protest AI. And so, you know, when I'm in Washington, D.C., I'll notice that there are family first conservatives sitting with antitrust people, sitting with environmentalists, people who would never be working side by side, but who have united in order to push, for AI regulation. And that was sort of what really got me thinking about AI populism. In terms of how big of a force it is in the U.S. right now, I would say that it's not a primary force in American politics yet, but it is rising extremely quickly. And so one of the best research polling that's been done on this
Starting point is 00:01:50 topic is from David Schwarz-Bloor's research. And what he's shown in his polling is that among, like, you know, a list of 40 different issues that American voters might care about, AI ranks 29 out of 39. so it's not super high, but it has risen in salience the faster than any other issue over the last year. And so in terms of how quickly it is entering the broader political conversation, I think AI's rising really fast. And the other thing that I'm starting to notice is that AI is not just a separate issue. Like most people who are, you know, thinking about AI, they're not, they may not have particular opinions on, you know, which model is the best or should we do chip export controls. they're really seeing AI as part of these broader conversations around affordability, economic
Starting point is 00:02:32 mobility, geopolitics, and those are issues that do rank very high on Americans' list of concerns. And so if AI is seen as, you know, a bogeyman are very tied to conversations around land use and their neighborhoods around economic mobility and whether you're going to have a job, then AI will be a much bigger part of the political conversation than we would otherwise expect. So it's rising fast, but it's still 29th in the list of the news. So there are other, you know, the top five issues got to be like the economy, jobs, inflation, that type of thing. And yet we see some of the most savvy politicians that we have in the U.S. that seem to be doubling down on AI populist messaging, maybe tripling down even.
Starting point is 00:03:11 Bernie Sanders, it seems like, has made it sort of a cornerstone piece for him. In other words, he's kind of like betting heavily on the topic of AI populism and putting a lot of his chips in. why? Like is he, if it's, if it's only 29th, wouldn't people rather hear about inflation and jobs and other things that are core to the Sanders message? Why is he betting so hard? I mean, because they're tied, right? I think like, I think it's because they're tied together. I mean, I have the Blue Rose research pulled up next to me. Number, the top five issues, like you say, it's cost of living, the economy, corruption, inflation, and health care, right? And that's kind of roughly the issues we'd expect. My guess is that those five issues probably haven't changed all that motion over the past,
Starting point is 00:03:51 you know, 20, 30 years. My guess would be two things. One is if AI is a thing that you were going to blame for the economy, for the cost of living, for like corruption, inflation, health care, then you're able to tie it into the issues that Americans do really care about, right? And when, you know, you have these AI CEOs saying AI is going to take all the jobs, when you have these questions about whether we're in a bubble or like the fact that like, you know, I think it was something like a huge fraction, like 30% or something of US GDP growth in 2025 was from data center and AI related investments, then it means that your questions about cost of living and the economy are very tied to AI. And then the other thing that I think is going on with Bernie is I think there is an element
Starting point is 00:04:29 of opportunism, right? And you don't just see that from Bernie. You also see it from other politicians is you may have been saying the same message on cost of living and the economy and the billionaires for like year on year on year on year. But now you have this new force that's showing up. And it's, and you know, the leaders are also promising. It's going to change everything. It's going to take all the jobs. We are like the only thing that matters in the economy now. And so like maybe if you feel like your messaging wasn't resonating before in terms of getting people to support universal health care or like a higher minimum wage or whatever, AI is like a brand new shiny reason that to sort of build support for the policies that you might have already wanted to pass. You see that from
Starting point is 00:05:06 people like Bernie. You also see that from folks who want to say increase speech regulation and censorship or content moderation for tech platforms where there are folks who are already very interested in applying stronger age and kids' safety laws or stronger speech regulations on tech platforms. And now that AI has showed up, it's become kind of an extra reason to push for the thing that people are already pushing for. So I do think that AI matters, but I also think that a lot of politicians are being pretty opportunistic about, you know, pointing to the shiny new thing and saying maybe this is a reason to do what I've been saying all along. I kind of wonder if this opportunism is actually going to stick in the hearts and minds of the American people, though,
Starting point is 00:05:42 right? So there was something that we recall, at least bankless listeners will recall, being kind of crypto of Elizabeth Warren and some politicians tried something similar with kind of an anti-crypto policy. This was in 2023, 2024. Listeners will recall kind of a campaign slogan, something she promoted. Elizabeth Warren is building the anti-crypto army. Right after the fall of FTX, which was highly opportune of a time to broadcast that message. Yeah, it was, you know, Sandbank, and freed and you have kind of the corrupt crypto bros and this weird technology that no one really understands and there was a bubble and there's NFTs and everyone hates it anyway. And so there
Starting point is 00:06:21 seemed to be an effort, there was somewhat contrived, an opportunistic effort to lump all of these things together and have kind of a theory of everything message around populism, some of the campaign messages for Elizabeth Warren. But it didn't seem to really stick or hold. Like even among obviously the crypto people didn't enjoy this, that she was building an anti-crypto army. But I think like the normal people just looked at it and were like, huh? Like anti-crypto army, I care about jobs
Starting point is 00:06:53 and the economy and inflation. Like, what are you talking about? And that messaging didn't really stick. And I'm wondering if that will be a repeat story with this AI populist opportunism that we're seeing, that the politicians are trying to group things that just like don't exactly belong together. in a voter's mind. Yeah, I mean, I could see why you would think that. And I do think that looking at,
Starting point is 00:07:15 you know, parallels to crypto are definitely interesting. I think that AI has some pretty distinct differences. One is just a far bigger part of the economy than crypto ever was. Like, crypto was not driving like 30 or 40 percent of GDP growth over the course of the year. Crypto is not, you know, yes, there's like Bitcoin mining operations, but these are not showing up in neighborhoods as much as data centers are. Most people at their work are not being forced or encouraged to use crypto as part of their jobs, nor is there as high of consumer adoption, like even from people's own volitional use of crypto. That was always a niche thing. It was, crypto is very hard and confusing to use. And my guess is that most Americans never really got in the habit of using crypto on a
Starting point is 00:07:56 regular basis, whereas chat GPT is like the fastest growing app in human history, right? And so I think that in terms of the salience of AI to a lot of normal people, it does feel like a more relevant thing. There are also other differences. Like, I think the AI leaders have been very different than crypto leaders in their messaging. The way that you describe the Warren time dynamic, which is not something I'm personally familiar with, I didn't follow crypto quite as closely. But it sounds like Elizabeth Warren was forging one narrative and people in the crypto industry and maybe many crypto advocates had another perspective. In AI, one thing that's really interesting and that has always been really interesting about AI is that the risks that the populace are talking about are many
Starting point is 00:08:32 of the same risks that people in the industry are talking about. Right. Like Dari Amade is one saying, that 50% of entry-level white-collar jobs are going to go away by 2030. And so that adds a lot of credibility to the message when the people building the technology are saying, actually, that's true. Like, this stuff is going to hurt you. It is going to take your job. When the market pulls back, most people just wait.
Starting point is 00:08:51 They hold cash, hoping things stabilize. But there's another move, and that's where Nexo comes in. Nexo is a platform built to help keep your digital assets productive. You can earn daily interest on supported crypto assets through their yield product or get funds through a crypto-back credit line without having to sell any of your assets. So if you want optionality, NXO gives you both size of the equation.
Starting point is 00:09:10 You can put your assets to work or borrow against them when you need flexibility. Nexo has been around since 2018 and has over $8 billion in assets on the platform and has paid out more than $1.3 billion in interest to clients globally. So if you're a new U.S. user, there's a welcome incentive waiting for you when you sign up.
Starting point is 00:09:26 Check it out at the link in the show notes. And as always, this is not investment advice. In 2024, emerging markets generated over $115 billion in annual, yield for investors, with yields ranging between 10 to 40%. These are some of the highest, most persistent yields on Earth. The problem, Defi can't access them. Bricks changes this. Built on Mega-Eath, Bricks takes emerging market, money markets, and sovereign carry,
Starting point is 00:09:49 and turns them into composable primitives you can access straight from your wallet. While defy investors earn 3 to 6% on stable coins and T-bills, institutions have been harvesting 10 to 50% yields backed by sovereign monetary policy. Bricks connects these worlds with institutional gray tokenization, local banking rails, compliance across jurisdictions, and real-time stable coin settlement. Bricks does the heavy lifting so Defi can finally access real collateral and structured products on top of real world yield. Even the best carry trades can be within reach. Bricks brings DeFi's promise to the emerging world and brings the emerging market yield to your
Starting point is 00:10:22 wallet. Let the yield flow with Bricks. You would have never thought two years ago that you could soon be trading tokenized oil on MetaMask, but here we are. I've been using MetaMaths since 2017, and we all remember buying NFTs with it in 2021. And now, in 2026, if you're a new to be you, you haven't checked in on MetaMask recently, let me tell you. You can trade tokenized stocks, funds, and commodities, along with leverage perpetuals, prediction markets, and even, yes, you can gasslessly swap between crypto tokens across networks, too. There's advanced security features like MEV and frontrun protection, and even a debit card, so you can actually spend your crypto directly at merchants all around the world. And it's all self-custodial. Everything
Starting point is 00:10:54 you want to trade in one place. This is the open money future we've all been waiting for. Check out the new MetaMask. It's already on your phone or in the link below. There's something dislocated to me when you tell me that AI is 29th in terms of importance on politics, yet we have, I mean, maybe this is just cherry picking or just like picking out, you know, a few bits of data, but you have entire communities showing up into town halls to tell people that they don't want data centers in their communities' backyards. Like that doesn't ring like a 29th most important political issue. AI, and maybe it's something as you just said, like the AI tech leaders, Dario, Sam, they're saying, oh, yeah, we're going to completely rewrite the social fabric.
Starting point is 00:11:38 And, well, what does the social fabric do as a result of those statements? Like, kind of gets scared, kind of gets offended, decides to show up where they know how to show up, which is in their communities. And so there's something like uniquely galvanizing about AI. And so when I hear the 29th most important political subject matter, I feel like, I feel like that's a lagging indicator. indicator. And I'm watching the trend lines. Like, I'm looking at the fact that it is number one for fastest rising issue with number two being war in the Middle East. So this is as a February to be clear. Yeah. And then there's one more thing I'd like to just introduce is there's actually
Starting point is 00:12:14 been violence on the table. Sam Altman's home has been the target of two attacks, one with a Molotov cocktail, another one with some bullets. I think there's others. And then they're just not related to AI, but for some reason the Luigi, I don't know how to pronounce his last name, but the individual who killed the... Maggioni, David. He's all in a place. Yeah, the healthcare CEO, like the political assassination.
Starting point is 00:12:37 And then we have, you know, people showing up with small top cocktails to Sam Altman. It's just, as a political topic, it's just far more galvanizing and motivating than any other. Like, what do you make of just like how some people really feel motivated to do big things, big drastic things
Starting point is 00:12:56 when it comes to AI? And what that means for just like the future of, of what that means for the, 28 election and domestic politics. Yeah, I mean, I think that people, again, AI has sort of become almost this political buggy man. I think in some ways it reminds me of the way that China showed up in the discourse over the last decade where everything was because China.
Starting point is 00:13:15 Like we got a, you know, we need to do AI because China. We need to reinvest in manufacturing because China. We need to educate our kids better because China. The specter of China competition in China eating America's lunch on the economy, on geopolitics, on whatever, was sort of used as an all-purpose justification in Washington, D.C. And I think that sometimes this is fair. Like, again, I think some of the AI risks are really real. I think that China competition is a real thing.
Starting point is 00:13:41 But I think it also comes from the sense that when there's a big other force in the world, this big alien force, whether that's another country like China that's very foreign to people, or whether there's this spectrum of super intelligence. And people don't really understand it, but it promises to change everything. And it seems very powerful. And like there's a lot of money behind it. It becomes very easy to sort of blame entire. in to a really wide range of issues. But yeah, I think that this opportunism is probably going to
Starting point is 00:14:05 accelerate going into the 2028 primary season. I mean, it's going to be a crowded primary, most likely, on both the Republican and the Democratic sides. And we're already starting to see some of the likely candidates picking this up as part of their campaign messaging. Like, it's very notable to me that, you know, like Roe Kona and Mark Kelly, both who are expected to put themselves in the running, have been, you know, doing these big AI action plans, Josh Hawley on the right for example, has also been especially active in AI legislation on kids' safety, on jobs. I've heard from other folks who haven't necessarily introduced plans yet, but who are expected to do so. And I think, again, it's because you always need a galvanizing new thing that's going on
Starting point is 00:14:46 in order for these politicians to justify why they are the unique ones to sort of meet the moment with their plans. And AI can also be interestingly kind of distorted to fit any of these plans. I think another thing is just like it collides with pre-existing sentiment, pre-existing populist sentiments in America, right? Like we're already seeing rising distrust of institutions, rising distrust of elites, distrusted billionaires, corporations. That's been a growing sentiment in the U.S., a growing resentment long before AI. And with how wealthy these AI billionaires are, with how much revenue the companies are making,
Starting point is 00:15:23 anthropic hitting 30 billion run rate recently with the scale. of these data center investments, I think that AI is a very good target for a lot of this anti-billionaire, anti-corporate sentiment. And so, you know, like I'll, even when I talk to accelerations, even when I talk to people who are very pro-AI, when I don't talk to AI executives, they understand that they are very unsympathetic, right? Like, most Americans do not relate to Sam Altman. They do not find him relatable. They know that they personally are getting no piece of this pie. Remember, these are private companies. And so most people have no way of of sharing in the wealth of this thing.
Starting point is 00:15:58 And so it's very easy to blame the AI billionaires because they're like kind of culturally weird. They're really far away from you. They're not sharing their wealth in any way. And they are kind of like transforming the whole of economy and society. And so I think that they're also a politically convenient target and one that I expect is going to get more ire and more hatred over the next couple years as the presidential primary really kicks into effect.
Starting point is 00:16:23 Like one crypto contrast I think is really interesting, for example, is the super PACs that the industry has created, right? And so during the crypto era, Chris Lehane, who now works for Open AI, was one of the critical people shaping the Fairshake pack, which lobbied for pro-Crypto legislation. And he was really effective with a lot of that. They went after candidates who really wanted to crack down on crypto, they scared off more candidates from doing the same,
Starting point is 00:16:45 and for the most part, a lot of potentially onerous crypto legislation was avoided, and Fairshake mostly flew under the radar for normal people. Whereas on the other hand, the same playbook was tried for AI, is being tried with leading the Future Pack, also shaped by Chris Lehane, as well as some other AI venture capitalists and executives. They went after Alex Boros in New York for, you know, pushing New York State AI regulation. And actually the opposite thing happened where Alex Boros was ranking like number three in the polls. He was kind of an irrelevant guy who's going to lose. The AI billioners go after him, start running attack ads. He starts running his own ads being like
Starting point is 00:17:19 AI billioners hate me, shoots up to number one in the polls or like neck and neck number one, number two. And he now has a much better chance of winning. Now that leading the future, this AI super PAC has gone after him. I've seen in other districts, the same thing happen, where when leading the future, the AI super PAC endorses a candidate. The other person in the race will say, thank God I haven't been endorsed by the AI billionaires. And so you have enough of this populace sentiment that it's actually a little bit of a political liability to be partnered too close with AI industry.
Starting point is 00:17:48 In so many ways. I think some things that happen with crypto were really a dress rehearsal for AI. I do want to get on this thread of the violent attacks, though, because that is somewhat new in American life. And I'm curious about this thread. I think you called some of these things, the attack on Sam Altman warning shots. The Molotov cocktail thrower, he was a 20-year-old. He was part of a pause-AI discord group. Some of his writings, he said sociopaths, psychopaths are gambling with your future and with the lives of your children.
Starting point is 00:18:22 I'm wondering, between this and the murder of Luigi Mangione, the United Healthcare CEO, Brian Thompson, are those like the attacks on Sam Altman and the murder of a healthcare CEO, is that all part of the same movement? Or is there a particular thread that is targeted towards kind of the tech leaders and the AI leaders that's separate from the attack on kind of health care executives? Yeah, super interesting. I think my argument would be that they are not part of the same movement. They have different motivations for their attacks. Like, for example, the attacker of Sam Multman, the guy who threw the Molotov, he had written some blog posts about existential risk in particular and his, you know, Eliezer-Yude Kowski-style fears about how AI was going to kill us all. So he definitely had some AI-specific fears. I think the things that feel really similar to me when you look at a lot of the recent assassination attempts or success. assassinations that have happened over the past few years is a lot of them are committed by very online young people who spend a lot of their time in discords and in these very niche online communities that often tend to develop more extreme beliefs like Charlie Kirk's murderer did the same thing. He was also a discord lurker very young as well. And I think that it also reflects the fact that political violence in the U.S. has become more prominent. And that's something that political science
Starting point is 00:19:48 researchers have found too, both when they look at the incidents of political violence, but also when they pull the public on, do you ever think assassination is justified? Do you think that violence is justified? And now, whether you pull for right-wing figures or left-wing figures, you get numbers, like 10 to 20 percent of Americans think that assassination attempts are justified when they're directed as people who you think are bad people, whether that is Nancy Pelosi's husband or whether that is Donald Trump or whether that is the United Healthcare CEO. And so the thing that's the thing that's that I notice with, you know, Sam Haltman's attackers, like with the other attackers, is that these are young people who have developed a pretty nihilistic politics, whose views might be increasingly
Starting point is 00:20:29 extreme as a result of participating in online communities where people kind of reinforce each other's beliefs really quickly in this cycle, and who also believe that they have no other outlet, but political violence. I think that when I think about the resentment that people feel, or why do crazy things like this happen? Like, I am no fan of political violence, you know, like, why would someone do something like this. What I really see is these people no longer believe that the democratic system works. They do not believe they have any other channel to quote unquote have a voice or to shape the direction of what happens to politics, what happens to the economy. And they see direct action, in this case direct violent action as the only way of making their voice heard
Starting point is 00:21:08 in order to stop some of the changes they think are coming. I mean, I see this at a lesser scale with things like data center protests or something like that. It's like, you know, a lot of my friends in in the AI industry, for example, think that the data center protests are really stupid. Like, they're like, data centers are the wrong target. If you are worried about AI safety, you should pursue regulation or something. But I'm like, do normal people have any channel to pursue regulation or to shape how these models are trained or what the products look like? They don't. They don't. They don't know anybody who works in AI policy. They don't know anyone who works at an AI lab. If they feel like they are being forced to use AI in particular ways that they don't like
Starting point is 00:21:43 or that it's threatening their job or their kids' safety, they do not actually have a lot of channels to express a discontent. It's not like something you can vote on. It's not democratically governed. And when people are really nihilistic and very distrustful of these companies, which is how a lot of folks feel, they are going to go for things like grassroots protests or even in the extreme cases political violence. And so that's one of the things that I notice when I see more incidents of violence, whether it's against health care executives or AI executives. It's people saying, I don't like the way our healthcare system works. I don't like the way that AI is affecting my life. And I have no idea what to do about it. And I feel like I have nothing to lose anyway because I feel very bleak about
Starting point is 00:22:21 the future. And so I might as well shoot someone. I think that's really scary. Yeah. Yeah, AI is definitely in this moment of time in which there is but convergence on just a number of different things. Wealth inequality is at all time highs. The last tech boom, social media, you know, promised global connectivity to all of our friends. And we all understand that we're being fed something completely different. And everyone is kind of disgruntled about that. And so like you said, distrust is at all time highs. Just being chronically online is probably at all time highs. Oh, yeah. Absolutely. And then all of a sudden we have this AI industry, which I think as you're kind of alluded to is like a pretty good boogey band to express a lot of our frustrations in society upon.
Starting point is 00:23:04 It's kind of like this blank slate. It's like, what are you upset about? Well, you can point it at AI in some particular way. And to your point, like, a basic psychological principle is like if you thwart any individual human's goals, what are they going to do? They're going to lash out. Yeah. You back someone into a dog into a corner and they have no choice but to bite. And I think with wealth inequality, you have a growing number of people who probably feel
Starting point is 00:23:29 something like that. It's like, I don't know how to improve my circumstance. And then here we have like a new like wave of technology. and you have the CEOs, the leaders of that technology, really not doing themselves any favors? No. Like Sam Altman and Dario are both like, yeah, we're going to like do what mass job wipe out and it's going to be sick.
Starting point is 00:23:52 Sam has changed his messaging. Sam has changed his messaging. The same is, A, changes messaging after the Newarkers from an underclass piece, I think it was pretty clearly responsive to that. B, if you read his tweets closely, he says in the second tweet, he's like, I think people will be more fulfilled than ever, but we're going to have some painful transitions along the way. And that's the thing that really bothers me. It's what they all say. It's they say that in the future, 20 years, 50 years, whatever down the line, we're going to have this amazing
Starting point is 00:24:18 utopia where AI does all the work, all the diseases are cured, consumer goods are really cheap, housing's cheap, whatever's cheap, and life is going to be perfect. But they all talk about this transition period where it's kind of a euphemism. They'll say it really quickly like, yeah, there'll be a bit of transitional friction, but it's going to be okay. And what do people hear, what do they mean by transitional friction, what they do? do mean is that if you are a current worker, not somebody 50 years in the future, if you work as an illustrator or a copywriter or a young software engineer, you are kind of screwed. And so like, even in Sam's new sort of approach to this, he's still admitting that a lot of people working right now are going to
Starting point is 00:24:51 be screwed over on the way to the utopia. And so when people hear that, they're like, man, like, I don't want to be screwed over. Yeah, yeah. There's that stat that like 80% of American, the American like labor force is like one unexpected medical bill away from like poverty. And And when you hear like Sam Malman say that, oh, yeah, there's going to be a painful transition. Well, that counts as an unexpected medical bill, a painful transition. And so this is probably making things feel a little bit too real or threatening to the average worker. I want to know what you think either people like Sam Altman or Dario, the leaders, and then also people at these companies, what they think publicly versus what they think privately.
Starting point is 00:25:30 Like maybe there's a gap between off mic versus on mic. This is a quote from your article in the New York Times. Tech industry sources expressed more extreme concern about the labor market impacts of AI in private conversation, but suddenly became optimists once I turned on the microphone. And so I kind of want to understand, give us a take about what you think people believe behind closed doors, all the people inside of, like, the AI elite Silicon Valley circles. Yeah, I mean, so the reason I wrote this New York Times opinion piece was in large part that
Starting point is 00:26:04 I felt like people were saying things behind closed doors that they were not willing to say on the record. And I felt like because I had at least heard some of these conversations and I was aware of the sentiment, I could piece it together and sort of lay out a case with publicly available information and a couple anonymous quotes as to what people really expected. And even when I was reporting the article, I noticed this happened where there might be a person who I talk to just as part of my normal life living in San Francisco. We just chat about AI and they'd say something like, yeah, I think the median person is screwed. I don't know what I would do if I was 17 and I didn't have a lot of money. I don't think I could go to college. I have no idea. And then if I'd ask that person, hey, like, would you mind like doing this interview for the piece? I'm trying to make a case for managing this disruption better.
Starting point is 00:26:49 That same person would say, sure, I'll do the interview. But then on the interview, they'd focus on stuff like, well, you know, like, I think AI can help people start a lot of small businesses. And they would be super reluctant to say any of the things that they had said maybe an hour or a day before to me, the same person. And this actually freaked me out more, because it wasn't just that people had these bleak predictions about what was going to happen to the economy and to workers, but that they were changing their tune as soon as I turned on the mic and asked them to go on the record. And I noticed this happened with multiple people.
Starting point is 00:27:20 Some people wouldn't go on the record at all. And one person, like a high, you know, high-powered venture capitalist told me, a lot of my executives are telling me that they want to lay off their workers with AI. But to be honest, Jasmine, I don't think they're going to talk to you for your piece, because they don't want to be the bad guys. They know that they're going to get backlash for saying that. And so I feel really frustrated when people say things like, you know, Dario is just trying to do marketing and hype for his company. And the reason that he's predicting these crazy things is that he doesn't actually believe it. It's just marketing. I'm like, A, he does actually believe it. I feel pretty certain he actually believes it. That doesn't mean he's
Starting point is 00:27:53 correct about the way it's going to play out, but he at least believes he is correct. And second of all, it makes him look worse. It makes people more anti-AI when he says that, so it doesn't make sense as a marketing strategy. And then third of all, the vast majority of AI leaders, researchers, and executives who hold the exact same belief as Dario are not willing to say it out loud because they don't want to be the one targeted for laying off their workers with AI or for building the worker-replacing technology. And so I actually do think that the belief that there will be at minimum mass job displacement or a near-term disruption is super, super common. I think people do differ on, like, will there be jobs in the far-off future? Like the permanent underclass belief
Starting point is 00:28:33 is more niche, the idea that everyone is permanently screwed. But I think that the belief that AI will exceed the abilities of basically every human and this will cause mass job disruption in the near to medium term is pretty common among folks who I talk to in the AI industry. So you think when Dario says 20% unemployment, you think he really means it? You think he actually thinks that's what's going to happen. And so this is a warning for the world to get ready for that. Yeah, I do think so. Let's talk about whether he's right or not, because there is significant pushback on those unemployment numbers.
Starting point is 00:29:07 People say, people like Dario and Sam, they're not economists. One of the sources of that pushback is Marc Andreessen, who I think enjoys pushing back on a lot of your work, Jasmine. So, I mean, he'll point to the lump of labor fallacy, right? So he'll call this classic zero-sum economics, the idea that there's only a fixed amount of work in the economy and then you have to sort of split it up. Well, that's not really true. That's the lump of labor fallacy, of course. We can have grow-the-pie types of gains, productivity gains, new industry, new demand. The classic lump of labor fallacy case that everyone cites is ATMs.
Starting point is 00:29:45 There was a time where people thought ATMs were going to kill the jobs of bank tellers, you know, in the 70s and 80s. What actually ended up happening in the decades that followed was we got more bank tellers; they actually grew because demand increased. And we've seen the same thing with radiologists. You know, AI was supposed to wipe out radiology jobs, and radiologists' jobs are growing. Even programmers right now, maybe not entry level, but the demand for programmers, at least by some measures, is increasing as a result of AI. They'll also point to deflation benefits to labor. So they'll say AI is a deflationary force. It's making everything cheaper, in particular services. So say you want better health care services, time with a doctor.
Starting point is 00:30:28 Well, you have a doctor on an app in your phone with doctor-level intelligence, or a therapist or a psychologist or a lawyer, or name your thing that you want to make more affordable. This is all a deflationary effect, and that will benefit labor as well. And then lastly, someone like Marc will dismiss all of the things that even people like Dario are saying as kind of a particular lens on the world, maybe like a doomer socialist type of take: that you're taking your worldview and you're applying that to AI and you're saying, you know, here it is. You're being politically opportunistic about things. I'm not saying you in particular, of course, Jasmine. I know you are reporting about these things, but this is the pushback
Starting point is 00:31:10 on the unemployment numbers: that's just not actually how it's going to play out. And even if Dario believes that's how it's going to play out, we've had technical revolutions throughout history, and it's led to more productivity. It's led to more, you know, positive-sum gains for more people, and why wouldn't it play out this way? So what do you think is actually going to happen here? Yeah, I mean, that was a lot. I don't know, do you want me to say what I believe, or do you want me to make the steelman for Dario's case? Because those are not the same, because I don't agree with Dario either. First, why don't you give the steelman for Dario's case? And then I would be interested in your own opinion, because I know you've spent a lot of time here
Starting point is 00:31:48 and given it considerable thought. So, yeah, like you mentioned, I think the most common critique of jobs doomers, which Marc Andreessen and other folks have made, is the lump of labor fallacy and Jevons paradox, right? Or Javons paradox, I don't know how to pronounce it. They basically say that if something is cheaper, then actually demand can go up. And so if software is really cheap, more people will want software. If therapy is really cheap, even more people can access therapy and demand for therapy will go up.
Starting point is 00:32:13 And there will always be new forms of work to do. People's desires are infinite. They're not limited. It's not like once you satisfy one desire, they won't want a new thing. And we see this where, you know, there are now yoga studios, and maybe 100 years ago, we weren't spending our money on yoga studios or something like this. And I think in general, historically, this has been a really good argument,
Starting point is 00:32:32 and it has held true through history. The thing that I think Dario would say as to, like, why AI would be different, is that both of those arguments, Jevons paradox and lump of labor, assume that more labor equals more humans. So what they're saying is that demand is unlimited and that the amount of labor to do in the economy will always go up, but they also assume that there is an inherent link between productivity and labor and humans, right? Whereas the thing that AI promises to do, particularly AGI, like fully human-replacing AI, is that you can have labor without having humans.
Starting point is 00:33:05 So you can produce software without having humans. So yes, maybe demand for software goes up, but AIs are making all the software. Yeah, maybe demand for therapists goes up, but AIs can do the therapy. Lots of people are already using AI for therapy. And so even though we're not yet in that world, because AI is very jagged and it can't do everything yet, humans remain complements to AI. Right now, humans are augmented by AI for a lot of things like radiology. You need both a human and an AI together. And so if demand goes up, you still need human labor.
Starting point is 00:33:36 AI is generalizing really fast. It's improving really fast. And Dario believes that in the next two, three years, we're going to get AI that can do, that can produce infinite amounts of software or therapy or whatever it is without the requirement of having any humans. And so let's take your software engineer example. Right now we see that overall demand for software engineers is going up, but the junior engineers are affected, right?
Starting point is 00:33:57 So if you're a new grad engineer, you are actually struggling to get work because you're not really that much better than Claude Code. But if you're a senior engineer, you're totally fine, lots of demand for senior engineers. The thing is, if you look at the way that AI models have progressed on software benchmarks, year after year after year, they're improving really, really fast.
Starting point is 00:34:12 And so right now, maybe AI can only replace a junior engineer, but it seems totally feasible to me that next year AI will be able to replace a mid-level engineer. And maybe the year after that, it will be able to replace a senior engineer. And if that continues, then AI will no longer need human engineers to make more software, right? And so the argument that people like Dario would make here is that AI breaks the necessary tie between humans and labor. And that's the thing that people like Marc Andreessen are failing to consider. That would be me making the steelman for Dario's case.
Starting point is 00:34:46 Even there, on Dario's case, like, it wouldn't be the case. Let's say AI automates all of kind of the labor types of tasks in the economy. Isn't it the case that humans still have this insatiable demand for kind of status types of games? So you think about something like yoga or, you know, a personal trainer or something like this. This is just about fulfillment, I suppose, in life. Or maybe there's some idea of a status game that's being played. You know, it's like I can get stronger, I can get more fit, something like this. And so maybe all the software developers become personal trainers and, you know, they spend their time on more fulfilling tasks. And isn't it fantastic that all of these more labor-intensive, boring types of jobs get filled? And we'll just replace all
Starting point is 00:35:32 of that. As long as humans are around, we'll just replace all of that with other games that we play, like status types of games. Yeah. So this is the argument that, like, Alex Emas, the economist, has made, right? Like, what will become scarce? Relational goods, like you said: therapists, you know, personal trainers will become scarce, party hosts. I think that there will be a lot of party hosts after AGI, event planners, whatever it is. And to be clear, I personally am quite sympathetic to this argument. But the argument that Dario would make here, or, when I'm feeling pessimistic, the argument that I would make here, is actually, AIs are really good at emotional and relational labor too. And even a lot of wealthy people choose that stuff. So before, maybe if you wanted to be entertained,
Starting point is 00:36:08 you might have to go see a live play. You'd have to go see live theater, and you need like 50 people to make this production of live theater. Now you have, like, Netflix and TikTok. And increasingly in the future, we're going to have Netflix and TikTok with, like, AI avatars and AI storylines. You just need way less humans to produce the same entertainment. And even people who are really rich sometimes prefer to watch Netflix and TikTok versus go to the theater, even though they can also afford to go to the theater. A lot of people do prefer asking ChatGPT for medical advice or what they should do about their relationship problems over asking a human therapist, even if they can
Starting point is 00:36:45 afford the human therapist. So we see people make choices that prioritize the convenience and quality of technology over the status good of talking to a human, over and over and over. We see that happen all the time. And I think that it is true, in my opinion, that there might be, like, some niche areas where people really want another human there. But that pool might actually be a lot smaller than people think. People pay more for Waymos than they do for Ubers, for example. Even people who could afford a black cab taxi driver will often prefer the Waymo instead. And so, you know, I think that actually AIs are quite good at doing a lot of these relational tasks and will continue to get better at them. I also think that one of the things you want to look at in terms of demand is how many people can
Starting point is 00:37:27 afford to produce demand. So I spent some time in China recently. One of the problems that China has is that it's had white-collar unemployment for quite a long time for non-AI-related reasons. And one of the reasons for that is household spending is very low. And so you don't have as big of a services economy because there's not as big of a middle class. You have some very rich people. You have a lot of very poor people. And middle-class spending is really necessary to drive consumer demand. Because rich people only have so many hours in a day. They only have so many wants, right? And so if you have a world that's very unequal, which is something that we expect with AI because there's going to be more returns to capital, those rich people, they may be able to hire a few party planners and a few
Starting point is 00:38:05 personal trainers. But like, they got 24 hours like the rest of us. And so you're just not going to have as much demand in a very unequal economy compared to one where there's a really strong middle class and everybody is, you know, buying a lot of services and goods all the time. So those are some arguments that I would consider making if I were trying to make the more extreme case. But once again, I just want to say that my own beliefs are a little bit more moderate. And so let's zoom in. What are your beliefs on unemployment? Yeah, I mean, so what do I think is going to happen? I lay some of this out in the New York Times piece. I do expect the near-term labor disruption.
Starting point is 00:38:39 I think that there are certain categories of jobs that are way easier to automate than others. And this is where a lot of my disagreement with people like Dario comes from is that software engineers super easy to automate because code is verifiable. It's all the context is in a code base. You have this open source data on the internet
Starting point is 00:38:55 that you can go train on. Most jobs are not like that. Software engineering is a really weird type of job. Maybe accountants are also like that. There's like a few jobs like software engineering, maybe digital marketing. copywriting and freelance digital illustration, maybe like accountants or something,
Starting point is 00:39:08 management consultants, let's call it like 10% or 5% of the US economy is jobs that are very, very easy to automate for like some slate of reasons like this. Those I do think are going to get disrupted pretty quickly because financial incentives are just going to make bosses choose to use AI over hiring humans, especially like when a human gets laid off or they quit their job,
Starting point is 00:39:28 you're just not going to replace them if an AI can do a good job. So I do think we will see some labor impacts, even though I don't think it's going to be all of the jobs, because physical-world jobs, relational jobs, jobs that are protected by regulation like doctors, that stuff I think is going to take a long, long time to automate. So I see these near-term disruptions. I also think that retraining is usually overestimated by economists. So folks who believe in stuff like lump of labor, economists, they tend to say that people are just going to go move to other jobs. So before, during deindustrialization in the U.S., when a lot of factory jobs were automated, these economists
Starting point is 00:40:03 predicted that the laid-off factory workers would just move to different geographies to work in different factories or that they would learn, like, digital skills, like learn to code. And I think we all kind of laugh at that now because we see, over the past 10, 20 years, that these steel workers did not learn to code. They also did not move. They often got addicted to opioids and had a really, really bad time. And, like, we are still living out the political and the social consequences of deindustrialization, even though it wasn't that many workers. And actually, it created more jobs total, but the new jobs that were created by factory automation were all, like, software jobs in San Francisco and not, like, jobs in Buffalo, New York, right?
Starting point is 00:40:37 And so just because you have new jobs elsewhere in the economy does not mean that the people who are laid off are going to be able to retrain, even with income support, even with access to school, into those new jobs, because these people might be, like, 50 years old. Like, they just, they don't have the brain elasticity anymore. They don't have the motivation anymore to go and learn something brand new. And so I think that even if it's, let's say, 5% of jobs are going to be automated by AI, and it's not all of the jobs immediately, I think a lot of these folks are going to really struggle to retrain. I don't think that they're all going to easily switch into a new job. I think
Starting point is 00:41:09 they're going to build a lot of political resentment. And so this is where it sort of connects to my interest in AI populism. This time, maybe instead of right-wing resentment, the kind that drove Trump, it might be more like left-wing resentment, where it's blaming the AI billionaires. I think we already see that. I think some of the biggest critics and skeptics of AI are people like creatives whose jobs have already been impacted by AI. And so I think we're going to get a lot of populist backlash that results from people's jobs being threatened, even in small numbers. And I also think that on the macro scale, even in a world with full employment, you still might get a declining labor share of the economy, which is something to worry about, which is right, this idea that, yes,
Starting point is 00:41:46 maybe everyone still has a job, but overall wealth is accruing to capital owners who have the ability to rent infinite robot labor. And wealth inequality can cause its own kind of problems, like these political imbalances, resentment at elites, things like that. And so that's something I worry about even in a world where people mostly retain their work. So I tend to be like job displacement. It's not going to be, it's not going to happen all at once. It's not going to be the sort of apocalypse. It's going to affect some narrow categories of people, but those people are going to be really, really mad about it. And it's going to really, really suck for them. And I kind of want our policymakers to be more proactive to tell people,
Starting point is 00:42:21 if your job is automated by AI through no fault of your own, like you spent decades learning, some skill and now it goes poof because of AI. I do think we should support those people. I don't think it's their fault. What do you think about the whole concept of just like the capitalism end game, which is like there's just a, it's just game over for labor. You take super intelligence and then not too long afterwards, you get super robots. You smash those things together and like the whole concept of being a human is just obsolete and redundant. And then this invokes the idea of just like the permanent underclass where there are just people who are just, stuck down there, and then you like zoom forward a few decades, and you get movies like Elysium
Starting point is 00:43:01 where all the elites like escaped to their like super fortress in space and all the permanent underclass are like stuck on earth. And it's just like it's just entrenched that way. What do you think about this whole concept? Well, yeah, it's kind of like the idea to flesh that out a bit more and add to that, David, it's like the idea that capital no longer needs labor to function. Like for all of its history, capital has had to hire labor in order to get jobs done, get work done. And now it has AI tokens to substitute for human labor. So it doesn't need labor any longer. And this is like the extreme version of what might happen.
Starting point is 00:43:38 Right, right. There's like books like or there's an essay called the intelligence curse. I don't know if you read that. We had the authors on. Also, Garrison Lovelies coming up with a book called obsolete, I think, which, you know, delves into this. thesis, basically labor becomes obsolete in this world. Yeah, I mean, I think that is the, like one of the versions of the things that people like Dario are even more extreme than Dario do believe. That's what they're worried about, right? Is like, AI will be a one-to-one substitute for labor. It will be
Starting point is 00:44:04 able to do literally everything and capital will discard people. And that's where I would start to make arguments like you've been making Ryan, where I'm like, well, actually, if human labor is scarce, some people will want their human party planners. And so I do think there will be some jobs available in the relational economy. I think that it also requires believing in full automation. Again, like technology has to advance so much that it's not just replacing cognitive jobs, but it's also replacing, like, jobs in the physical world. And do I think that could happen someday? Like, maybe, probably, like, robotics is improving. But we're pretty, pretty far away from that. Like, I think that we are going to have a lot more problems to deal with in the next decade before we
Starting point is 00:44:41 get to the point where full automation is even worth considering. Even for folks who do map out and care about these full automation, scenarios, like the economist Phil Trammell, who wrote his capital in the 22nd century essay, making a version of this argument. He's called it the 22nd century because his very rough, low confidence estimate was 100 years in the future. And again, we may never arrive there if the relational sector of the economy is big enough. Wait, 100 years in the future, what happens? Labor will go to zero. So his prediction was full automation, labor goes to zero. If it's
Starting point is 00:45:13 plausible, it's going to be like 100 years in the future or something like that. Even if it's this drastic, we have time. Yeah, like, I think we have a lot of time, and I think there are a lot of things that could happen between now and 100 years for now, and so maybe personally I'm more focused on these near-term scenarios, but, like, I do think that, like,
Starting point is 00:45:29 it is worth considering that capital relies on labor right now, and if it doesn't require humans as much, I don't expect governments or corporations to be as generous in terms of things like welfare and caring about what people think about how things should go because they have robot alternatives, and so those political dynamics might start showing up. Yeah, I mean, that's the argument of the intelligence curse, basically, that it breaks the social contract between labor and capital and governments and its citizens.
Starting point is 00:45:55 And so a new social contract has to be, you know, created. Yeah. And I think that's why people are turning to things like violence, frankly. It's like if you are not, if you as a worker or as a normal person have no leverage as a result of doing work, because that's one of the traditional ways you have leverage. Where do you have leverage? You can do violent acts, do terrorism, and riot in the streets. So I think people are recognizing that one of their few channels of leverage when you lose everything else is to do violence. And so that's why I think that even if I am a totally self-interested capitalist who doesn't
Starting point is 00:46:30 care about people at all, I would be pretty concerned about making sure that not too many people end up unemployed and disempowered because I do not want to face these violent threats from people who have been deprived of every other channel for leverage except for violence. Okay, but for sure. But like, okay, but doesn't it seem a little early for that? Like, we don't know how this is going to end up yet. So what is unemployment in the U.S.? Is it something like 5%?
Starting point is 00:46:57 Yeah, I mean, I don't think it's too early to plan for scenarios, right? Like, I don't think that we have to institute a UBI right now. Like, I would not support that. No, to be clear, not to plan for scenarios. But why are people getting violent and angry already? Like, it hasn't happened yet at some level. That's what I find somewhat curious. You know, Alice Scanline.
Starting point is 00:47:17 can stop it, right? You know, like, that's what they're trying to do. Okay, but is it a little Ted Kaczyk? That's pretty, that's pretty strong, David. I mean, I think these are extreme people, right? Like, these are not normal people. Most people are not engaging in violent tax. But the thing is, if you genuinely believe that this thing is going to come for you and your family
Starting point is 00:47:32 and your community. Sure. Then, like, these people believe that if they do enough violence, they can stop the thing. Again, I do not endorse violence. I think this is super bad. It's not that many people, but you can see how one would arrive at that. What I'm trying to find out, the violent actions aside, but like all of this, this, you know, kind of vitrile against tech populism.
Starting point is 00:47:49 This is this vitrile against big tech is how much of it is vibes, you know, versus reality. Like, we don't actually know what is going to happen yet. It hasn't hit us yet. So so much of this is narrative and vibe. And it might turn out. The narrative which the AI leaders are fostering. Some of them, yes. Which gives the vibe a lot of credibility.
Starting point is 00:48:11 Like no one is saying the other side of the vibe other than like Mark Andresen. And Mark Andreessen is investing in law. lots of companies whose value proposition is to replace workers. So I know that Mark andresen is tweeting different things, but if you look at his portfolio of companies, many of them have a core value proposition of replacing workers, right? And so, like, I see why people would be skeptical of Mark and Drescent's public statements.
Starting point is 00:48:34 Right, right. And plus, Mark Andreessen just kind of politically aligned himself. And so now he's kind of like shoehorned in that sort of political camp. The other thing, Ryan, I think, is kind of worth highlighting. is like the, did you read the text message or the statement that the recent Donald Trump assassin attempted assassin left behind? There was like a whole, in his manifesto, there was like a question answer, question, answer. And he was answering his own question.
Starting point is 00:49:01 Like, why are you the one to do this? And then he would answer it. He basically like rationalized himself and anyone who was curious in reading his manifesto is to like why he thought he was valid in making an assassination on Donald Trump. And this is clearly a guy who's like chronically online. He was like in Reddit communities. And it looks like just like kind of hyper-rationalism. And I think that's,
Starting point is 00:49:25 these are the same people who are doing political violence against Sam Altman. It's like, this is why I kind of said, it's a little Ted Kaczynskiy. Is they think that they are stopping this future Terminator, like Skynet type thing that is going to happen in the future. And they just have to do the right thing in the now to solve that future problem. So, like, well, again, no one on this podcast is supporting or political violence, I can kind of see the logic.
Starting point is 00:49:52 Well, I mean, you only need a very. If you think the stakes are this high, right? If you see this is what you're saying. But the tech leaders are saying the stakes are this high. Like, it's like, I don't know what Dario's P. Doom is, right? Like, but like I think he probably has a P. Doom that's like 30% or something, is my guess. Like, it's probably, like, quite high, like, relative to most people. Like, he is, he is clearly very worried about the prospect that AI.
Starting point is 00:50:15 I could kill everybody or leave the world a very bad place. And I can see that if you believe that, which I, again, I personally do not happen to believe, but if you thought that these tech leaders were actually gambling with your future and they were actually going to like do two coin flips and there's a 25% chance that you're going to end up, if not dead in the permanent underclass, you might think like just like you got to kill baby Hitler, you got to do this kind of violence, you know? And they've done too many thought experiments. This is the whole thing about the hyper rationalism like online too much.
Starting point is 00:50:42 I'm like, you have done too many thought experiments, like read some virtual. You need to touch crap. Yes, please, virtue ethics. Let's re-inject that, please. I want to talk about the kind of just like the political map that comes out of this. There's just a ways to kind of divide how the future politics when it comes to AI looks like. There's left versus right. There's labor versus capital.
Starting point is 00:51:03 There's Silicon Valley versus Washington, D.C. How do you think the lines are going to get drawn here? Clearly, like Bernie Sanders is on one side, and I think AOC would join him. I don't know necessarily who the pro-AI politics are, but when we see factions joining together and political lines being drawn, how do you map this out? Yeah, I think this is super interesting.
Starting point is 00:51:25 I mean, like you said, there's a million ways it could break. One that I worry about when I'm, like, freaked out by all this is that it's going to be these, like, techno-capitalist elites from both sides of the aisle, sort of centrist, pro- neoliberalism, pro-technology folks against everybody else, whether they're like right-left, whatever, people who are don't like technology.
Starting point is 00:51:45 I mean, some people have articulated it as: friends of the future will be one camp, and then everyone who's trying to stop technology and stop change will be another camp. I don't know that it will be that. But that sometimes feels really plausible, especially when I notice that a lot of these very anti-AI factions are very bipartisan. They have people from a lot of different political camps, like creatives, labor unions, environmentalists, states' rights people, family-first people, religious people. So many different interest groups are coming together, all because they think that AI is going to alter the existing environment, existing jobs, people's existing social circles and their way of life. And then there are people who are more interested in, like, economic growth or the long-run future
Starting point is 00:52:25 or are just a little bit more pro-technology in general. And this freaks me out because I feel like, personally, I am someone who really likes technology. I like using AI. I love the internet. I feel like it's added so much to my life. I believe in economic growth. I just want to distribute the benefits of growth equally. Like, I just think that we should care about the distribution, but I generally am pretty pro-technology. And that really freaks me out, to think about this kind of thing.
Starting point is 00:52:50 Like, you know, I wonder how Ezra Klein and Derek Thompson, the abundance folks, feel, because they are people who try to make a case to the Democrats that they should embrace technology more, right? Yeah. That actually, if we think about the way that things like AI might bring down the cost of pharmaceuticals or unlock scientific discoveries or make work less onerous, like, that could be an amazing thing for Democrats and for anyone who cares about, like, a broad,
Starting point is 00:53:11 public well-being. And I was a pretty big fan of abundance. I'm pretty sympathetic to that argument. But I don't think that's the way that the current Democratic Party is going to go, because they're the ones whose voter base of youngish, college-educated, white-collar people is the most impacted by AI. They are very scared. And we have a lot of distrust of the technology companies right now, where people think, yeah,
Starting point is 00:53:33 maybe there's going to be a cure for cancer, but I'm not going to get it, is kind of the way that people feel. Like, maybe there's going to be, you know, a therapist, teacher, whatever in your pocket for all those people, but I can't pay the 200 bucks a month to get the best models, and I'm going to be left on the other side of that divide. And so I think with such low levels of social trust right now, trust in companies, trust in the government, trust in each other, I would not be surprised to see a kind of increasing split around these lines of, are you part of this broad populist group, or are you sort of on the side of the techno-capitalist elites or whatever? Wait, wait. So the way you broke it out, right? And your fear, the thing you hope
Starting point is 00:54:11 doesn't happen, but the thing that you're seeing take shape, is some sort of a binary between the futurists and the Luddites, or the technophiles and the anti-tech people. Like the e/accs, the accelerationists, and the decels. I'm very much seeing that too. And I think that is the worst possible outcome, because there are a lot of people who are kind of more in the middle who are like, hey, technology, if it's good and if it helps people, and there are ways to kind of marshal it towards that, we can't just be anti-tech. And also, we can't just be pro-tech no matter what the technology is. It's kind of like a guided-tech type of theory. And those people that are caught in the middle will have to pick a side. I think probably Derek Thompson and Ezra Klein are among
Starting point is 00:54:55 those who would have to pick a side. And I'm wondering where you think, if those are the two sides, at least for this election cycle, where do you think that splits among party lines? It seems like the left is going more in kind of a decel-type direction than the right, though there are factions of the populist right. The right is not inherently pro-AI either. But they seem to be at least more pro-AI than the left, certainly. And so if that's the break, are we going to get Democrats who have to be decel
Starting point is 00:55:25 and then Republicans who have to be accelerationists? Yeah, I mean, my sense is that in the 2028 election, unless things get really, really crazy with AI, it'll probably still be Republicans versus Democrats. But between those party lines, I think it is more likely the Democrats will be the decels, which, as someone who is personally mostly closer to a Democrat than I am to a Republican, and also closer to a pro-technology person than an anti-technology person, I'm like, oh, I really don't like this. But, yeah, I think that, again, AI impacts the voter bases of the Democrats more than it does the Republican voter base for the most part.
Starting point is 00:56:01 I think that concern is one of them, in terms of just the job threat. I think Democrats these days tend to be more concerned over, I don't know, things like protecting labor, protecting the environment, protecting creatives. A lot of these particular concerns that AI introduces are more aligned with the Democratic voter base. And I think even in this current political environment, the fact that Trump was mostly an accelerationist and mostly a pro-AI person really prevented a lot of Republican Congresspeople who wanted to pursue AI regulation from doing so
Starting point is 00:56:39 because they knew that Trump, or one of the aligned PACs or something, was going to go after them if they tried to introduce too onerous an AI regulation. And so my guess is that Democrats would be the more decelerationist party. But then again, you do have folks like Gavin Newsom, who is the current Democratic frontrunner, who is pretty pro-tech and has aligned himself with Silicon Valley a lot. So I'm not sure about that either, just because you do have people like Gavin Newsom or Jon Ossoff, who recently did a fundraiser in San Francisco with Chris Lehane, the OpenAI lobbyist, right? And so you do see a few Democrats going for the pro-AI lane.
Starting point is 00:57:16 I wonder if that's going to work. Like, in a money-versus-the-people battle, in a world of increasing populism and resentment against tech, does having this super PAC behind you, does having Silicon Valley money behind you, win you the primary against other people who are like, screw the AI billionaires? I have no idea, but it'll be interesting to watch. If the left or the Democrats do go in that direction, which is kind of like anti-tech, moratorium on data centers, the Bernie Sanders type of approach on this, doesn't this kill the Ezra Klein, Derek Thompson abundance agenda entirely? Because maybe you have abundance with kind of like housing, if you could even get there. But like, that means you don't
Starting point is 00:57:55 have abundance on intelligence. And intelligence, as we were just discussing, can mean cheaper healthcare, cheaper therapy. Like, it can be a deflationary force. Yeah, in theory, everything. I mean, if Dario is even, you know, a small percentage correct, then that can be a massive supply shock, in a good way, a positive economic shock to our entire economy. And it's essentially a progressive policy to give healthcare and intelligence to every citizen of the United States. We could do that if we have an abundance agenda for intelligence.
Starting point is 00:58:31 But it seems if you go full decel and you just do moratoriums on things, then you don't get that. I mean, I think if the Bernie moratorium camp takes over the Democratic Party, like, right now most Democrats are not backing the moratorium, but if they all decided to go that way,
Starting point is 00:58:45 I do think the Dems would be the party of the decels. You know, I think that would signal a big shift for the Democratic Party if they got majority support among Democrats for a moratorium. I will say that, if I were to steel-man Ezra Klein and Derek Thompson, because I think they actually talked about AI populism in their one-year retro on abundance recently. I heard that, yeah. Yeah, and I think one argument that you could make if you were them
Starting point is 00:59:06 is that the thing that's blocking healthcare provision and housing and all that is not really more intelligence; it's either a political issue or something to do with manufacturing or stuff in the physical world. I mean, we've seen Baumol's cost disease, where the cost of digital services goes down, but a lot of healthcare is still, like, surgery, or housing requires building in the real world, or the U.S. has lost a lot of manufacturing capacity compared to places like China. And so one could make an argument that one is pro-technology in the sense of physical things like drug development and manufacturing, things that deliver these broad-based benefits,
Starting point is 00:59:42 even if we don't max out intelligence or something. And so I could imagine an argument something like that. But I do broadly think that if the Democrats become a sort of firmly anti-tech party, that would be a blow to the abundance-style progressive movement. Yeah, like in the New York State Senate, there was a bill being considered, Senate Bill S7-263. And this would basically prohibit AI chatbots from impersonating licensed professionals for therapy or healthcare advice or that sort of thing, which of course keeps the cost up. It doesn't decrease the cost of providing those services if someone wants to get those inside of an AI or a chatbot, right? So that does seem to be part of the decelerationist agenda seeping into politics. I don't know if that'll pass or not, but I can't.
Starting point is 01:00:30 Yeah, I don't think it will. I mean, if it does, I think it'd be really stupid. Like, I think it's a stupid bill. Most people like using chatbots for medical advice. That's one area where people, I would say, do not have populist sentiments: most people find their chatbots quite useful for doing these kinds of little tasks, giving them advice. And I think taking that consumer surplus away from people would be a bad thing. Similarly, I look at the Waymo battles, right? Waymos are safer than human drivers. I think the research is pretty clear on that fact.
Starting point is 01:00:57 They feel safe when you're in them. I love taking Waymos. I do think that I would like to see either Google or, you know, governments think about how to transition cab drivers into other roles if Waymo does expand in a city. Because again, it's not those cab drivers' fault that they invested decades in a career that may go away. But I do want to see Waymos rolled out eventually. I think the world will be better if we have technologies that make us safer. And so to me, the question is just how do we navigate that transition in a way that is empathetic to the people who lost out on the technology because it devalued
Starting point is 01:01:31 their skills. But I definitely would like to see a vision where we are still spreading technologies that do make us safer, make us healthier, whatever. Quick shout-out to OKX. They are live in the States, building the new money app, and Wall Street is taking notice. The parent company of the NYSE just invested at a $25 billion valuation and took a board seat. That's the New York Stock Exchange coming to crypto, not the other way around.
Starting point is 01:01:53 And why OKX? It's the only app combining a full centralized exchange and self-custody wallet in one place. CEX trading, DEX access, on-chain activity, all in a single interface. No more bouncing between five apps, copy-pasting addresses, or bridging tokens in separate tabs. They support Bitcoin, Ethereum, Solana, Base, and more. Millions of tokens, just a few clicks, and infrastructure that processes trillions in transactions and keeps assets fully backed. OKX users are set to get tokenized New York Stock Exchange stocks and derivatives later this year.
Starting point is 01:02:23 TradFi and DeFi, finally in the same app. Head to the link in the show notes, download OKX, and see why it's the NYSE's go-to for going bankless in the United States. Not investment advice; services not available in New York, Kentucky, and Texas. What's something you're actually looking forward to next month? Because Coinbase is doing something interesting. Coinbase One member month starts with 20% off your first year of Coinbase One, plus a $50 Bitcoin bonus when you spend $100 with a new Coinbase One card
Starting point is 01:02:48 in your first 30 days. They're also layering in extra rewards and perks throughout the month. And if you're active in crypto, Coinbase One is basically designed for you. You get zero trading fees on thousands of crypto assets, 3.5% APY on USDC, boosted staking and lending rewards, and up to 4% Bitcoin back with the Coinbase One card. So if you're going to try it, now is the time to lock in a 20% discount before the weekly rewards kick off. Start your month of more with 20% off the first year of your annual
Starting point is 01:03:24 plan today. Offers are valid until May 31st. Terms apply. Coinbase One card is offered through Coinbase Inc. and Cardless Inc., card issued by First Electronic Bank. Bitcoin back rates are based on cardholder assets on Coinbase. I was kind of wondering: an undercurrent of this whole AI populism and our discussion today has been growing wealth inequality. And I sort of wonder if AI populism is just a proxy battle in some way, or a bundling of the greater problem of wealth inequality. And as I look at something like wealth inequality, I'm sort of wondering what the problems inside of that actually are. So if everyone is getting wealthier, but the top are getting wealthier at a faster pace, at some level, you look at that system and you'd say, okay, what's the problem, as long as we're all
Starting point is 01:04:06 getting wealthier. But then sometimes I wonder if wealth inequality, we call it wealth inequality, but it's really more about power inequality. And it's more about a concern that a certain group of elites are able to translate that wealth into coercive, direct power, and they begin to become kind of the rulers. I don't know if you've given any thought to that, but what is the driver behind this backlash to wealth inequality? Is this really all just kind of a proxy battle here for power? Is that what's really in contention in the American political system?
Starting point is 01:04:52 Yeah, I think that's a good diagnosis. I think that a lot of inequality is a proxy battle for power, right? I think that's why people are not that excited about certain ideas like a UBI, because it feels like being on permanent welfare and relying on handouts from the people who actually do have all the money and power. And even if they're keeping you around so that you can pay your rent and pay for food, you don't really have a say, because you're still reliant on them, right? The dependence is that, say with UBI, you are dependent on the state for doling out those welfare benefits. Or, you know, you look at corruption,
Starting point is 01:05:23 which is a top-five issue for what voters care about. And you look at a lot of corruption that's going on with the current administration. You look at the way that Elon Musk got into politics, basically by spending a ton of money. And not only did he spend all that money, but a lot of things that the Trump admin did basically went his way. He was allowed to do DOGE. They cared about the issues
Starting point is 01:05:43 that Elon wanted to care about and he basically spent his way into political power. And people see that. People see that when you have money, you can influence policy. You can influence this physical world. You can buy yourself a lot of freedoms that other people don't have.
Starting point is 01:05:57 And I think that's where the real frustration comes from because like you said, if people can pay their bills and pay for healthcare and pay for food, which again, not everyone can, but that's a different question. That's not the same as, oh, yeah, what's the point of my vote when Elon Musk can just buy his way into power, right? Jasmine, you're clearly very sharp and informed about all these subjects, so I've definitely appreciated getting your wisdom and your takes on the podcast today.
Starting point is 01:06:18 When it comes to actual policy positions, what are your recommendations? What do you think people should do? If you were the lady behind the policy machine, do you have any ideas or concepts of things that you think would actually be effective interventions here, ones that would kind of smooth out the hard edges on both sides? Yeah, I mean, oh man, this is the hard question, right? I should say I'm not a policy wonk and I didn't focus most of my research on policy solutions.
Starting point is 01:06:43 I've talked about them with a lot of people, but it's not something that I feel really confident in my prescriptions for. It's also, I will say, something that I don't think anybody feels very confident about knowing what to do, because, like Ryan said, a lot of the impacts haven't played out yet. We are going to need a different policy situation depending on whether we see slow and gradual job displacement, versus we actually do get this big jobpocalypse or job shock, or maybe we get no job shock at all. Maybe everything's fine, and then we shouldn't, you know, do anything crazy. But
Starting point is 01:07:09 I do think we should be planning for those different scenarios. What seems pretty likely to me is that we're going to need some tax-and-redistribute, like corporate and capital gains taxes. If it is true that a ton of money basically flows to these AI infrastructure companies, for example, and they get way, way, way bigger than everything else in the economy, finding the right way to do tax and redistribution is pretty important. What do you spend on if you're going to redistribute? I think one thing is longer unemployment insurance. Right now in California, where I live, you get six months of unemployment insurance. I think if we start to see a lot of AI displacement of these long-time jobs, people generally need more than six months to learn a new skill.
Starting point is 01:07:47 Maybe you need 12 months or two years of unemployment insurance. Things like universal healthcare start to become relevant, because one thing that I expect in an AI world is you're going to have more entrepreneurship and small-business capital, and more freelancers and small-business people, right? It's less like you have a giant firm that employs, not millions, but tens of thousands of people or thousands of people. You're going to have more one-person companies, people doing startups, people doing small businesses, that person with a yoga studio or their event-planning thing or whatever. Those folks are going to need healthcare. And right now, I think the economy and the benefit system are wired for a place where most people are in these
Starting point is 01:08:24 normal W-2 jobs. But actually, what does it look like if you have a lot more small business owners and freelancers? We are going to need to think differently about benefits and healthcare and things like that. I also think education is going to look really different, right? Right now we have this four-year college system that a lot of government effort has been spent pushing people through, the four-year liberal arts college system. I am a little bit pessimistic about how long that's going to last. I think there have been a lot of cracks in this four-year college system for a long time, a lot of problems with it, this idea that you study history for four years and you get handed
Starting point is 01:08:57 like an accounting job at the end or whatever it is. This has always been a broken promise; your skills are not tied to any of the classes that you went to, people are going into tons of debt, and now they're not even getting a job at the other end of it. And so maybe we need apprenticeships, maybe we need national service programs; some countries have national military service. Maybe we do national public service, and you work some kind of job, whether it's cleaning up parks or working in administration, and learn some actual on-the-job, real skill that we need. And you actually take that and convert it into job skills, instead of taking philosophy courses that you ChatGPT your way through, which is basically what's going on
Starting point is 01:09:31 right now. And so the way that I'm sort of thinking about it is: what are the ways we expect the economy to change? I expect less white-collar work. I expect more small businesses. I expect more relational-sector work. I expect more people who go through these periods of losing their job and needing to find a new thing to do. And how do we plan policies that are going to train people and give people a little bit of a cushion, so that it doesn't ruin your life if you're in this period of vast technological change? So say we give them a cushion, right? But say on the other side of that, Dario is more right than everybody else, and there's actually no real job on the other side. Then do we get to a
Starting point is 01:10:11 UBI? Like, what do you think about that? And there are some other interesting ideas, like the idea of a tax per AI token generated, where you're just taxing AI at the source of consumption. Or there's the idea of creating kind of a sovereign wealth fund, almost the way resource-rich countries in oil and natural gas kind of do. And so we take a percent of AI and we create sort of a sovereign wealth fund that all citizens own. Are any of these ideas appealing? Are they too radical to think about right now? I think we should think about them. I think that researchers should start to plan out what that would look like if it looks like we're moving more onto a track to a Dario world, which again, I would say right now we are not on that path,
Starting point is 01:10:56 but if it seems like we're ticking towards that path, I would prefer if some research had already been done. I mean, you know, Sam Altman did that UBI pilot a while ago, right? He tried just giving a bunch of people money and running a randomized controlled trial to see what people did with that money. One thing I often wonder is, what is the next version of that? Do we need to do a pilot of a jobs guarantee? Do we need to do a pilot of some of these other programs, so that if we hit a world with truly mass unemployment, we can know what the better options are? I think a public wealth fund is interesting. I think the Norway model is pretty interesting. Shorter work weeks are one that I think about a lot because, again, the lump of labor fallacy,
Starting point is 01:11:35 like in a world where humans are always necessary, you don't want to do that. But I think that if humans are able to do fewer and fewer tasks, because machines can literally just do the vast majority of tasks you could ever imagine, because they're just smarter and more capable in all dimensions, I think it would be better to shorten the work week so that people still have jobs. It's not like 10% of the people have jobs and 90% are unemployed. I would personally rather have a world where 90% are employed, but they have maybe a two-days-a-week work week. They have a 15-hour work week. Because again, that still gives you a little bit of leverage. When you care about these political issues, like, do you actually have leverage? Is there some reason
Starting point is 01:12:12 that capital or the government has to care about you? You have some role in the economy. You also have some purpose. I think it's better for people to feel like they have a purpose in life. I think about shortening the work week, and maybe we go from a 40-hour work week to a 30-hour work week to a 20-hour work week as the number of tasks that AI can do expands and human capabilities, in a comparative sense, decrease. So shortening the work week is one that I think most people would be in support of because, again, I think people want purpose. They just want a relatively easy and chill job to do. Jasmine, I think one of my biggest fears is something you said earlier, which is that AI populism
Starting point is 01:12:47 wins out to such an extent that decelerationists kind of win the day. And we just kill this technology. We say not in our town, not in our county, not in our state, not in our country. And then it moves somewhere else. Maybe it moves offshore. And we lose the benefits of it. We lose the productivity gains. We lose the labor enhancement. Maybe another country gets these instead. And I hope we don't go too far in that direction. Maybe the direction, you know, critics would say Europe has gone in, in some areas, in some ways. You know, Germany with nuclear, for example: a moratorium on nuclear power generation, and so there are no more nuclear power plants. But at the same time, we can't just have the tech-optimist vision without any regard to how wealth gets
Starting point is 01:13:40 distributed to the rest of the population. So if you were to think through some sort of, maybe like a grand bargain, where you're mediating these two parties, and there's Bernie Sanders on one side and there's maybe Marc Andreessen on the other side, what kind of a grand bargain would you propose to have a meeting of the minds? I think about the way the U.S. government worked in the 1990s, where the right and the left were all like, okay, maximize the pie.
Starting point is 01:14:08 The left just wanted to tax it higher in order to pay for our social programs, for instance. Now we're of the mindset of either full accelerationist or just full hit-the-brakes. But is there some kind of grand bargain we can strike? What would that look like? It's a hard and big question, but I'm asking the same one.
Starting point is 01:14:25 It seems to me that there has to be some kind of grand bargain. I mean, I think that the original New Deal rewriting of the social contract, with the introduction of workweek regulations, the minimum wage, and union bargaining power, was that. I was having a conversation with a friend earlier about how, during the 20th century, the United States experienced a ton of mechanization and automation, and some people's jobs were displaced in that process, but you didn't see mass political violence.
Starting point is 01:14:50 You didn't see a Luddite-style backlash. And there are different theories for why this is true, but one of the strongest is that in the 20th century, automation mostly affected factories that had strong unions, which basically sat down at the union bargaining table and worked with the automators to figure out, okay, we're going to have wage guarantees for the people who keep their jobs. We want workers' wages to go up if productivity goes up, so let's tie workers' wages to productivity gains. And also you had the expansion of federal-level welfare in order to, again, reassure people that the jobs would be better jobs, that they would be taken care of, that they would share in the gains. I think most people want
Starting point is 01:15:29 to live in a growing economy. Most people want to be more productive. They just want to know that they are going to get a piece of that. And if their company ends up making more money because technology increases productivity, they want to get an equal, or at least some, part of that as well. And I think that's the part that's broken. I don't know that today unions are the right people to be doing that bargaining.
Starting point is 01:15:51 One is that AI is not affecting unionized industries anymore; we're talking about software engineers and marketers and whatever, and most of these people are not in unions. But that question, what is the bargaining table, is the thing I now think about. I think when there's not a legitimate channel to have those kinds of conversations, that's when you see things like political violence, or you see these data center moratoriums, because you don't have a place where you can actually negotiate. So with things like Waymo, I'm like, is there a way for
Starting point is 01:16:15 the cab drivers and Waymo to come to the table and figure out some kind of arrangement where Google, which is a very profitable company and is going to make even more money in a world where Waymos are everywhere, can somehow share some of that with the cab drivers who are affected, fund training programs? I don't know what it is. But these are the conversations that I'm really interested in. And I hope that policymakers and political candidates start to think about what their role in this looks like. Maybe they're sitting down with AI executives and saying, where are you seeing impacts on jobs? What do you think we should do there? Because my belief, maybe naively, is that if you can come to a deal, if you can get to a bargain, we're going to be able to preserve the gains
Starting point is 01:16:56 of technology, the growth that you get from technology, without this kind of mass populist backlash. One way to ensure that people get their share of the pie, I think, is to also kind of stay ahead of the curve and use AI to the best of their ability. The way me and Ryan talk about this, when we're optimizing our Claudes and our Claude coworkers, is: how do we get our Claudes to produce more valuable tokens? Like, what do we need to do? What prompts do we need to use? What data do we need to give it to make the tokens that come out of our Claude more valuable? And Jasmine, you are also a content producer. We're all content producers here. You do a lot of writing on one of the fastest-growing Substacks, which we will link in the show notes if listeners want to subscribe.
Starting point is 01:17:34 But maybe this is just a personal question. How do you use AI to do your work better? And what do you have to teach both myself, Ryan, and also the listeners? Yeah, we don't want to become NPCs. Me neither. High agency only. Yes. Oh my gosh.
Starting point is 01:17:51 I mean, I feel like you guys are probably masters at this, so I don't know that I have any crazy tips. You know, I pay for the best models. Every few months, I will sort of run my own personal eval. Mine is something like: if I feed an AI 10 interview transcripts and one paragraph about the kind of article I want to write, can it just spit out a reported article? I never copy-paste these, to be clear. I do not actually use them, but that's the eval that I measure them on, because I want to know at what point
Starting point is 01:18:19 will they be able to do that kind of work. And if they do start getting pretty good, I also want to know where my comparative advantage is going to be. The way that I think about this, and I think it's what most economists would advise as well, is technology is going to get better, but so long as humans have a comparative advantage, then you're going to be okay, right? As long as you're a complement to the technology. And so I am actually almost more interested, oftentimes, in what it is the tech can't do yet. The only way to find out what the tech can't do yet is to constantly be playing with AI, so that you know, right? Because if AI is way better than you at something, you should use it for that thing. Like, I use ChatGPT for research, like transcript generation.
Starting point is 01:18:53 Sometimes I'll ask for feedback, like all the time. If AI is better than you has something, I think that oftentimes you should use it and take advantage of that. But the other thing is when I experiment a lot with AI, I really see the jagged edges. I see the things it's not good enough yet at. Like it cannot do a podcast like this. It cannot have a conversation. It cannot build trust with an interview source
Starting point is 01:19:12 and get them to share their feelings about stuff. It can't go places in the physical world and describe what it is like to be in a place. So a lot of my writing is kind of scene-based and, quote unquote, anthropological. And I think that is more interesting to people in a world where AI can just get facts off the internet. Like, anyone can read the facts off the internet,
Starting point is 01:19:30 but what I can do is I can actually stand next to a data center and see what it sounds like and interview the people around it and say, what do you think of this thing? And so I would probably spend a lot of time, not just experimenting, but also asking, what is my personal comparative advantage as a human against the AI? And how can I really invest in that?
Starting point is 01:19:47 Because that's what is going to be robust as AI gets better and better. Jasmine, thank you so much for coming on the show. This was a fantastic episode. Thank you both. Yeah, love the conversation. You write at substack, subsac.com slash at Jasmine. You're also on Twitter. Where else do you want readers or listeners to go to to find you? That's great. Yeah, Twitter and jasmine.com are the best places to find me. Thanks so much.
Starting point is 01:20:09 We'll get all those in the show notes. Bankless Nation, you guys know the deal. We didn't really talk about crypto. We talked about AI, but nonetheless, it's risky. Either way, you can lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we are glad you're with us on the bankless journey. Thanks a lot.
