The Ezra Klein Show - Why Are Palantir and OpenAI Scared of Alex Bores?

Episode Date: April 21, 2026

Leading the Future, a super PAC whose funders include the founders of companies like Palantir and OpenAI, is spending millions of dollars this election cycle, and a considerable amount of that money is going toward attack ads against Alex Bores – even though Bores himself used to work for Palantir. Bores is a New York state assemblyman who is running for Congress to represent New York’s 12th District. His campaign includes an extensive A.I. policy platform, including demands for A.I. companies to be more transparent about safety, and an idea for an “A.I. dividend” that would redistribute some of the profits of A.I. companies to the public. So his race has turned into a central battleground over the future of the A.I. industry and who has the power to shape it. In this conversation, we discuss how Bores went from working for Palantir to running a campaign that would regulate the A.I. industry, the major issues he thinks A.I. policy needs to address, and his response to the attacks against him.

Mentioned:
Give People Money by Annie Lowrey
“Alex Bores’ AI Policy Framework For Congress”
“NY Congressional Candidate Faced Palantir Sexual Comments Claim” by Laura Nahmias
“AI populism’s warning shots” by Jasmine Sun

Book Recommendations:
A Theory of Justice by John Rawls
World Eaters by Catherine Bracy
Bird by Bird by Anne Lamott

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Lori Segal. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota and Isaac Jones. Our recording engineer is Aman Sahota. Our executive producer is Claire Gordon.
The show’s production team also includes Marie Cascione, Michelle Harris, Rollin Hu, Kristin Lin, Emma Kehlbeck, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Shannon Busta and Lauren Reddy. The director of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Brianna Johnson. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Transcript
Starting point is 00:00:31 If you are living in New York's 12th congressional district, you may have seen these endless attacks on Alex Bores, one of the Democrats running there. He made hundreds of thousands of dollars building and selling the tech for ICE, enabling ICE and powering their deportations while making bank. Now he's running from his past. ICE is powered by Bores's tech. Yikes. Bores did work for Palantir. The rest of that attack is not what you might call true. But what interests me is who is paying for it.
Starting point is 00:01:05 The super PAC Leading the Future, and its subsidiary, Think Big. Who funds the super PAC Leading the Future? Well, among their big donors are co-founders of OpenAI, Andreessen Horowitz, and, wait for it, Palantir! So why is a co-founder of Palantir, Joe Lonsdale in this case, funding a super PAC to try to destroy a candidate on the grounds that he once worked for Palantir? The reason is Leading the Future is a super PAC dedicated to destroying anyone who might
Starting point is 00:01:38 regulate the tech industry in general or AI specifically in a way these funders don't like. And Bores, as a member of the New York State Assembly, co-authored and passed the RAISE Act, one of the first pieces of AI regulation passed in any major state. There is a principle here that is much more important than any single congressional seat. You'll hear it, honestly, if you just listen to AI founders talk. They say they believe in it. Sam Altman, a co-founder of OpenAI, who it should be said has been horribly targeted in recent violent attacks by anti-AI individuals, was trying to cool down temperatures here, writing: It is important that the democratic process remains more powerful than companies. It is important that the democratic process remains more powerful than companies. Altman is right.
Starting point is 00:02:26 But it's his co-founder, Greg Brockman, who is one of the major donors for Leading the Future, who is trying to make sure the democratic process is subordinate to the companies, and is trying to do it by funding a super PAC that can unleash enough money to crush any legislators who cross them. Bores in general has been a pretty effective legislator. In just over three years at the New York State Assembly, he's passed 30 bills and has been recognized by the Center for Effective Lawmaking as one of the most effective freshman legislators. But it's his ideas on regulating AI that particularly interest me, in part because I think they make sense and are worth discussing, things like an AI dividend, but in part because I just really do not want to live in the world that Leading the Future is trying to create. A world where the AI industry hoovers up enough money that they can then destroy anyone who might regulate them. What's funny about all this is, you'll hear it. Alex Bores is not an anti-AI kind of guy.
Starting point is 00:03:26 I think he gets AI pretty well. I think he's trying to balance its risks and its possibilities. But if you're looking for a pure AI backlash candidate, he's not it. And I think that tells you something: that what Leading the Future, and super PACs and groups that might emerge like it, are actually trying to do is to stop anyone from legislating on AI. So if the democratic process is actually going to mean something here, ideas are going to have to speak louder than this kind of money. So I wanted to hear what Bores would actually do, if given the chance. As always, my email is ezrakleinshow@nytimes.com. Alex Bores, welcome to the show. Thanks for having me.
Starting point is 00:04:13 So I want to begin a bit in your early political memories. How did your politics begin? Well, it began with something that I wouldn't necessarily call politics; only in retrospect would I put that phrase on it. But it was with my parents in union fights. In second grade, my dad and his colleagues were locked out by Disney for fighting for better health care. There were contract disputes for over a year and Disney wouldn't budge. And finally, the workers went on strike. And in response, Disney locked them out for three months and cut off their health care benefits, including my dad's friend who was about to start chemotherapy. And thankfully, the union stepped in and they paid for the treatment and he survived. But my dad would pick me up from second grade and bring me to the
Starting point is 00:05:01 picket line. And that was my first experience of people working together for change. He would put me in front of the Disney store. And we've all seen people walk past picket lines. It's not hard to do. It's a lot harder to walk past an eight-year-old with a sign that says Disney is mean to my dad. And so that was my first lesson, both that health care needs to be universal, but also that the way we win is by working together. That if you're one worker, one person, one anything advocating alone, it's easy to get crushed.
Starting point is 00:05:33 But if you have a union, you have an organization, you have a campaign, you have a movement, well, then you stand a chance. What did your dad do for Disney? My dad worked for Monday Night Football at the time.
Starting point is 00:05:45 So he did graphics and videotape and instant replay. He worked in the trucks. Eventually became a technical director, but he was one of the people that's actually sending out the signal before it hits your TV. And so you then study industrial labor relations at Cornell and then get a computer science degree. I'm curious about what those two very different disciplines taught you. Well, they sound very different, but every day they seem to be more and more intertwined.
Starting point is 00:06:18 at the School of Industrial Labor Relations, I learned economic theory, I learned collective bargaining, I learned how to run campaigns and organizations in ways that actually can change power and win things. And I learned to stand up for working people and to view a lot of interactions in the world through that lens. Be specific about that. What did you learn about how to stand up for working people? My freshman year, we ran a campaign against Nike. Cornell was sponsored by Nike; our athletic teams were sponsored by Nike. So I was part of a group called Cornell Students Against Sweatshops. It was affiliated with USAS, United Students Against Sweatshops, and they taught us how to build a campaign over time. We learned how to be strategic. So you start with a clear demand. In this case, it was Nike had
Starting point is 00:07:15 laid off 1,800 workers in Honduras without giving them legally mandated severance pay. And we argued that the Cornell Code of Conduct required that Nike be responsible for their subcontractors' actions, that they make the workers whole. So we put that into a demand. Then you build up over a period of educating. And so we'd have teach-ins. We'd have sort of ridiculous actions to grab attention. We did a working out for workers' rights where we were in the quad, just playing 80s music and getting people like, hey, what's going on? Oh, well, let me talk to you about what's going on in Honduras.
Starting point is 00:07:51 And then you build up to more aggressive actions that require a reaction from the administration. We ended up being successful in that campaign. Cornell decided it was going to cut its contracts. And I think something like three weeks after Cornell made that announcement, Nike about-faced, paid the workers all the money they were owed and gave them job training and health care for a year. So, OK, you tell me about how you learned to actually do activism in college, which is interesting. Yeah. But I want to go a level deeper than that. You're doing industrial and labor relations. Yeah. What is the deeper theory or thesis of the relationship between workers and corporations, between labor and capital, that you came out of that with?
Starting point is 00:08:33 There's so much that's in contention between workers and capital, but in the best world, it's how you're actually working together to grow the economy, that workers are not out there to bankrupt any company, that they want the company to grow. And so there's fights over how you distribute the pie, but theoretically both want to grow that pie. And then there's really interesting relationships internationally. One of the things that I discovered was for so many of the countries where we thought labor conditions were awful, the laws on the books were actually quite good. The question was with enforcement. And if the home countries actually tried to do enforcement, the factories would just up and leave and go somewhere else. So the lever where maybe you can change that is in the countries
Starting point is 00:09:23 that are buying most of the goods. And so we would apply pressure in the U.S. about holding countries to the standards they had already set up for their workers. So I feel like you're describing to me the education of a young radical here. You're walking picket lines at eight, you're studying industrial labor relations, doing anti-corporate malfeasance campaigns, skeptical of globalization. How do you end up at Palantir? Yeah. So I really wanted to be a lawyer, but every lawyer I spoke to told me not to be a lawyer. That was my experience, too.
Starting point is 00:10:01 Take time off in between, make sure that's what you want to do. And so I went to an economic litigation consulting firm called Cornerstone Research, where we were preparing expert witnesses for trial. And so we were doing economic modeling and playing with data, but I was interacting with lawyers all the time. So building a skill set, but I could see what they were doing. And I found I really enjoyed the economic modeling. I really enjoyed playing with data. And also, to that ideology: as I'm growing up, I'm a Democrat. I believe the government can and should be a force for good, but that also means we take on the burden of proving it. And so I was a young believer in, I probably wouldn't have put it in these terms back then, but expanding government
Starting point is 00:10:44 capacity and making sure government's actually delivering. And Palantir in 2014, right, in the Obama administration, was about how can we expand government capacity while protecting privacy and civil liberties. And so at the time, it felt like very much the natural fit. So I want to stay in this 2014 moment, because this is a period when there is a lot of optimism that technology is going to solve some very fundamental problems of democracy. That you're going to have all this civic tech, that the interfacing between citizens and the government is going to be much smoother, much better, that these companies are fundamentally good.
Starting point is 00:11:30 Google doesn't want to be evil. Facebook wants to connect the world. Palantir wants to make your data comprehensible. And I think there's also an underlying view that the answers to our problems are out there somewhere in these masses of data. And if you can just make the whole thing legible, you could get the answers. And something poisons pretty quickly, I'd say, after 2014. Like that really feels like a different ideological moment than we're in. Entirely.
Starting point is 00:12:04 What was wrong about that? Or what would you add or change to my rendition of that optimism? A lot of that is true. The Palantir story that was told to prospective employees, and Alex Karp would do this a lot, was that he most feared fascism, that he had just finished being a German philosophy student, and he was most afraid of fascism developing.
Starting point is 00:12:32 And fascism happens when government fails to provide for its citizens, and they start blaming someone else for it. And people then feed that hunger and that hatred. And he couldn't do anything about the latter, but he could do something about government failing to deliver. And so the reason that he wanted to do Palantir was, after 9/11, after this real rise in a feeling of being unsafe,
Starting point is 00:13:01 could we build the systems that would allow government to make people feel safe, but build it in such a way that was protecting privacy and civil liberties. That was the pitch. The fundamental idea was we were there in many ways to stop fascism. And how'd it work? Trump's elected in 2016. That was a weird bit for...
Starting point is 00:13:23 With the aggressive support of Peter Thiel, one of Palantir's early investors. I mean, I don't know if he, would you call Peter Thiel a Palantir co-founder? I think so. I think that's the phrase that is given. But Alex Karp was very much fighting for Hillary at the time. And if you look at donations of employees at Palantir, they tell a very skewed story towards the Democrats as well. Yeah, Silicon Valley is very Democratic in this period. Absolutely.
Starting point is 00:13:50 Absolutely. You have a lot of Obama administration figures; they can't go to Wall Street anymore. That's not kosher for a Democrat, but you can go to Silicon Valley. Yep. But that election in 2016, and even more so his re-election in 2024, is a real failure of that mission. And to now see leaders of the company and Silicon Valley broadly throwing their lot in with what I think is a fascist regime is a real, a real disappointing switch. So you're at Palantir 2014 to 2019. You start, I think, as a data scientist; by the end, you're one of the people
Starting point is 00:14:27 leading their relationship with the government. Yeah. I focused on the federal civilian side. Yeah. So what was that work? So that was work with the Department of Justice, with the CDC to track epidemics, with Veterans Affairs to better staff their hospitals and give veterans the care they deserve and need. It was helping a lot of the federal civilian agencies. How much is what we now think of as AI and generative AI starting to come into the work you all are doing then? Not at all. And here's what I mean by that. Palantir was aggressively anti-AI in that period. It believed that data integration was the true source of value and that AI was a magic layer that would be applied on top
Starting point is 00:15:11 and it was all marketing, and we were doing the real work that was getting data to come together. Can you describe what the difference is in those two? Yeah. What is data integration versus whatever they thought AI was? Yeah, well, so AI in a very naive sense, I mean, we'll talk about it in other ways now. This is before agentic models and all of this. But AI is doing analysis of data. And before you can do the analysis of that data,
Starting point is 00:15:39 it needs to be organized in a way that AI can make sense of it. But the actual thing that's difficult is organizing all your data together. That requires hard work, and there's no magic to do that yet. And the software plus engineers going on site and doing a lot of that hard work to do the manual hookups, that was always going to be the true source of value. So you're at Palantir across the end of the Obama administration and into the first Trump administration. Yeah. Now, Palantir working with the government is a different animal depending on which government it's working with.
Starting point is 00:16:13 Very much so. How does that change? I was leading the work at the Loretta Lynch, Barack Obama DOJ, and then all of a sudden the Jeff Sessions, Donald Trump DOJ. And priorities changed pretty drastically. The work with the banks was probably wrapping up anyway just because of time, but clearly there was no more interest in that work. The contract that we had had us choose three mutually agreed upon case types. And so I met with the new leadership after the transition, this is early 2017, and said, you know, what do you want to prioritize? What do you want to work on? And they said the opioid epidemic. We said, great, we definitely want to do that work. They said violent crime. Cool, as long as it's not a dog whistle; yeah, we'd love to work on that.
Starting point is 00:16:59 And then they said civil immigration. And I said, we're not touching that. That's not the work that we are building this for. And I was empowered as the lead of the project to do that. I had a contract that allowed me to, because it was three mutually agreed upon case types. And while I was there
Starting point is 00:17:16 and in the DOJ project, we didn't do any of that work. That's not how the decision went at every customer or in every project. So Palantir during this period does begin working on immigration with the Trump administration. I never worked on any of those projects, and so I was never, like, cleared on it. But to the best of my understanding, during that time, it was not stopping the Trump administration from using it for immigration. I don't think there was building of features specifically for deportations, but I could be wrong about that. But even the fact that they weren't going to stop it from being used in that way got a number of employees, myself included, quite upset.
Starting point is 00:17:58 You leave Palantir in 2019. Why? Separately from me, on a project that I never worked on, Palantir had signed a contract with a department within ICE called HSI, Homeland Security Investigations, that during the Obama administration was focused on anti-human trafficking, anti-drug trafficking, sometimes counterfeiting, things that are not controversial and that everyone would support. And then when Trump comes in in 2017, they try to change the nature of that work. They try to get another part of ICE called ERO, Enforcement and Removal Operations, the part that everyone thinks of as ICE, to get access to the software and to use it for deportations. And there were a lot of conversations internally at Palantir about what was actually happening, as employees couldn't always see that if we weren't cleared on the project.
Starting point is 00:18:50 And a fundamental question came up of, well, why not write into the contract those same protections that we have elsewhere, where we can say don't use it for deportations? And eventually executives made clear to us that they were not going to do that, that they were going to renew the contract without putting in those guardrails. And so I made plans to quit. So there was a Bloomberg story that questioned this, clearly coming from somewhere inside Palantir. And it says that there was, shortly before you left, I think it said five days before you left, a warning from HR about sexually explicit comments you had made to a coworker,
Starting point is 00:19:27 and then separately that when you did your exit interview, you said you were actually leaving because you were burnt out and there was too much travel. So I want to take these as pieces. Was there a sexual harassment claim against you at Palantir, and is that why you left? No, and no. This came out of a,
Starting point is 00:19:45 an attack from executives of Palantir that are upset that I am pushing for AI regulation and that I've called out Palantir's work in the past. As I told Bloomberg when they reached out, I had expressed my concerns about the work with ICE internally. I had begun interviewing months and months before. I had an offer in hand. I then had retold a story of something that had happened to me on the job. Someone didn't like that retelling, had talked to HR. HR had one conversation with me where I shared exactly what had happened, and that was the end of it. There was no file, no letter, none of the things that are claimed in that story. I dropped the matter immediately. You weren't disciplined inside the company or something? Nothing like that. And this seemed like
Starting point is 00:20:35 what the Bloomberg story said, but I want to check it. The infraction was a story you told or something you said, not something done with or towards a colleague? Correct. I mean, the story goes into it. It was a, well, see, now, can I retell the story here? It was a paper goods manufacturer that was talking about uses of tissues. It sold tissues. The marketing department was talking about how tissues are used.
Starting point is 00:21:00 And I retold that example from the presentation, on how tissues were being used, in, like, odd things that had happened while working at the company. And then the burnout and travel side of it: the argument there is that you're making this claim that you took a moral stand against the way it was being used, but actually you were just kind of tired of working there. As has been cited in multiple sources, multiple current Palantir employees have backed me up that they heard me talk about ICE and stand up and do all of that. I have no idea what notes they took from the exit interview. I asked to see them. I was told by the Bloomberg reporter she didn't actually have them, that this had just been told to her by the executives, so they could claim whatever they want
Starting point is 00:21:43 on top of the notes that, again, I never saw. I know what I had said before and during, and that I had brought this up many times. And a year after I left, Palantir emailed and called me, begging me to come back. It feels like if there had actually been a real thing there, they probably wouldn't have done that. So no. This is, you know, you just heard me be fairly critical about Palantir. I had been before as well. The executives there didn't take kindly to that. And the super PAC that's attacking me is against any regulation on AI. And this is just another desperate hit by them. I have been amused that the super PAC, which is attacking you, which is partially funded by Joe Lonsdale, a Palantir co-founder, that one of its core attacks on you is that you worked at Palantir. Correct. That's a pretty
Starting point is 00:22:32 strong level of political shamelessness. I would agree. I would agree. I mean, I would say lying about an employee's record. But they are very terrified. They are very afraid of me in office. And beyond that, they've said publicly that they are trying to make an example out of me, that they want to beat up on me so bad that when the idea of regulating AI comes up in the future, politicians run in the opposite direction. And so they're not primarily concerned with what is honorable or what is true; they are concerned with causing pain. So, 2022, you're elected to the New York State Assembly. In 2025, you passed the RAISE Act, which gets us into the AI regulations you're alluding to. This is one of the first major pieces of AI legislation
Starting point is 00:23:21 passed by any state in the country. Before we get into what it does, what was the philosophy behind it? When you were working on that bill, and I know you had co-sponsors on it, what were you all seeing and what were you all trying to achieve? We were seeing AI develop extremely rapidly and the industry itself warning about what was coming. You know, this is after the letter that was signed by so many executives saying that we should treat the risk of extinction from AI as equal to global nuclear war, and promoting perhaps a pause. Many of them had signed voluntary commitments with the Biden White House, saying, you know, we are going to take certain safety precautions, and this is the first step towards binding federal regulation. And then we saw no binding federal regulation come.
Starting point is 00:24:12 And we'd also heard from companies themselves that they were okay with certain safety standards, but they're in a competitive marketplace. And that if they see their competitors starting to skimp on safety and cut corners, they would be forced to as well. So when you hear that call, you say, okay, you should establish some baseline that people can't go below so that there is some established safety standards that everyone is playing by. What's the baseline you tried to establish? There were a few provisions in there. One was that you had to have a safety plan that you made public and actually stuck to that largely followed best practices in the industry around how you were going to test the models for specific risks, how you're going to record those tests
Starting point is 00:24:57 and what you would do with that information, that you had to report to the government critical safety incidents, which we specifically defined in the bill. If it goes wrong in these sorts of ways, it may not have harmed anyone yet, but it could suggest something is coming; you have to let us know about it. And those provisions largely survived till the end.
Starting point is 00:25:15 There were two others that were in the original that ended up getting cut out. One of them was that you can't release a model if it fails your own safety test, basically designed for the way the tobacco companies operated, where they were the first to know that cigarettes caused cancer, but denied it publicly and continued to release their products, or fossil fuel companies that knew oil caused climate change, but denied it. We're saying if you knew your model was particularly risky, you have to take action on that. And the last provision was third-party audits,
Starting point is 00:25:45 was saying that you can put up whatever standard you want, you can assert that you're going to follow it, but someone else should check your work. Not the government, but just a different party should come in. The same way we have financial audits, the same way we have SOC 2 security audits, another party needs to look at it and say, yes, you are following this. And presumably you're working on this bill, what, 2024, 2025, before it passes? How have your views on AI, the risks it poses, the questions it raises, changed with the subsequent pace of model releases? I think things have happened much faster than I thought they would. And I think our ability to pass legislation has moved much slower than I thought it would.
Starting point is 00:26:31 And so that difference in speed between how AI is advancing and how government reacts is wider than I was expecting when I started on this process. Have you thought about the change in public opinion? Because it looks to me like we're seeing a pretty powerful AI backlash rising. You've got polls showing now more Americans are worried about AI than are enthusiastic about it. There's a lot of anti-data-center energy playing out throughout the country. What have you made of how quickly the politics have sort of shifted beneath AI? Both how many people have focused on it, but also how bipartisan it's remained. We all know about polarization, and most issues end up polarized.
Starting point is 00:27:24 And this one hasn't so far. And it has resisted that longer than I thought it would. That if you talk to voters, you see across Republicans, Democrats, and independents pretty similar attitudes. Across state legislators, pretty similar attitudes. Even in Congress, there's more bipartisanship than you would think. I mean, surveys regularly show that about 10% of people want to put AI, put the genie back in the bottle, and pretend it never existed. And I empathize, but I don't think that's the way forward. 10% of people, represented by the super PAC Leading the Future, want to just let it rip. That's the super PAC that's attacking you. Yes. They want to just let it rip. They don't care how many people it hurts, just how fast it moves. And 80% of Americans see some benefits, but see a lot of risk and think it's moving
Starting point is 00:28:13 too fast and want to have some say in its development. The fact that it stayed so bipartisan has surprised me, and also the fact that it's risen up in people's minds so much has surprised me. Has the pessimism around it surprised you? And we were talking earlier about the period when there was a lot of optimism. Yeah. About tech, about software, about the internet. And I think you can really look from early computers, early internet, all the way pretty late into the social media era. You know, probably around Trump, I think, things begin to turn. Cambridge Analytica, algorithmic feeds. But that's a long time when these systems and technologies are present for people.
Starting point is 00:28:53 And there's a fundamental optimism about them. AI, ChatGPT I think, is when this really burst into public consciousness. It's 2023. We're here in 2026, and the polling has already turned negative. I mean, the week before we recorded this, Sam Altman was targeted in two separate violent attacks. There was a Molotov cocktail thrown into his home. Awful.
Starting point is 00:29:18 Two other people shot at his door. I was a little shocked to see people celebrating these attacks online saying, you know, where can we support the bail fund? Yeah. This has moved into fury and fear and pessimism really, really quickly. Yeah.
Starting point is 00:29:34 Why do you think that is? Well, there was a separate split in AI around capabilities. The debate used to be, is this real or is it stochastic parrots? Or usually even before that, is it just, you know, slop that is never going to actually replace a human? Fancy auto-complete. Exactly, exactly. And so we had these debates on one dimension, which was like, is it good for people, is it bad for people? And then there was this other dimension of, like, how big an impact is it going to have? And I think that debate's been collapsed. People are not skeptical of its power anymore.
Starting point is 00:30:11 or some are, but fewer and fewer each day. And so the intensity with which we're having that first debate has really ramped up. But I think it's also been that we saw what happened with social media. We saw what happened with these previous revolutions that were supposed to change everything for the better. And we've seen platforms established with great promise. And then over time, once they get power, really turn on their users. And so people are no longer willing to believe the story
Starting point is 00:30:47 And you see this argument from some of the AI founders. They say, well, it'll create material abundance for everyone. It will create, there'll be no more poverty. Everyone will have everything. And everyone's looking around saying, of course that's not what's going to happen. You're a private company. You're going to profit. You're going to keep it all for yourself.
Starting point is 00:31:04 Like, how are we going to force it to? Sam Altman recently said it'll be like a utility. It's like, utilities are really highly regulated. And so people are just not willing to believe that spin anymore, and yet are seeing really quick changes in their lives. Jasmine Sun, the AI writer, just wrote this kind of interesting piece on AI populism. And I thought the way she defined it was interesting and a little more subtle than you normally hear, which is she wrote, I define AI populism as a worldview in which AI is viewed not only as a normal technology, but as an elite political project to be resisted. And what she's getting at there is AI populism, I think, and the AI backlash tends to include two dimensions.
Starting point is 00:31:46 One is that this technology is being overhyped. The other, as is often put to me in emails, is it is being pushed down our throats. That it's not a thing people want. It is a thing being forced upon them. Now there's all this investment behind it, so the investment needs to be paid off, so the companies really have to do it. And if you take the power seriously, you see it in a different way, that almost any version of having AI in the economy is going to be just a way of paying off these huge investments; that we're not getting a technology we want, we are having a new paradigm forced upon us. How do you think about that? I think it's a beautiful description.
Starting point is 00:32:29 What I hear from my neighbors is very much the feeling that this is moving so quickly that we don't have control and the American people so far have not had a say in it. So I think the first part of that definition, the doubt about its capabilities, is shrinking as part of the dialogue as we're seeing it do more and more, but the fact that it is being thrown at us and we currently don't have control, I think, is what's motivated so many people to be thinking about AI. It has always struck me that if you listen to the founders and leaders of the AI companies, they are very specific on the harms and the gains are very general sounding. So, you know, you'll hear Dario Amodei talking about, you know, 50% of entry-level white-collar workers seeing their jobs automated away. There actually are Waymos on the same
Starting point is 00:33:21 streets now. You can see that those could take jobs from taxi drivers and Uber drivers. There has been all this talk about existential risk, the sense that you could build something smart enough to disempower human beings. There's a lot of specificity on replacing coders, and then you'll get these very vague: it's going to help with drug development. It's going to solve, you know, material scarcity. And I think if you're a normal person being offered this technology that might make sure your 13-year-old son has a, like, AI porn bot before he has a real girlfriend, and you might lose your job. And maybe there's some chance the human race
Starting point is 00:34:03 doesn't maintain control over its own future. Why wouldn't you want to pause on that? Absolutely. Absolutely. When you're seeing the harms day by day, whether it's your kid, you know, the pedagogy at schools hasn't been updated, and some people still think that assigning take-home essays teaches critical thinking. It doesn't anymore. And on top of that, you see chatbots and you see some of the truly horrific stories that have happened to teenagers. And maybe you go to your job and your company now has a hiring freeze. They're not laying people off yet, but they're not doing their usual hiring, and you're worried about what's coming from that. Are you all going to be necessary in the future? And then you see your utility bill go up. And maybe a data center is built near you. Maybe it wasn't, but you're starting to think about what's causing that. And then on top of that, you see people saying, oh, yeah, and it might kill everyone, right? These are the news stories that are coming in, and you're maybe not seeing that benefit. And there are benefits, right? This is not a story of a technology that is just bad, but it's moving really, really quickly. And a few people are controlling the direction. And many people have lost confidence in government's
Starting point is 00:35:16 ability to steer it. It becomes a question of whether democratic institutions can govern this technology before it governs us. I think pretty clearly no. Well, I'm running a campaign to change that. I guess we'll talk about that. But I think being worried about how fast these systems are moving and having any awareness at all of how fast the U.S. government now moves should make one worried. Absolutely. And so one thing you do see is proposals emerging to try to slow AI down by functionally choking off some of the inputs. So there's a Bernie Sanders AOC bill to just have a data center moratorium. There's some bipartisan interest in this.
Starting point is 00:36:00 Ron DeSantis in Florida has a bill that would be very restrictive on data center construction. Yeah. What do you think about a data center moratorium? The Bernie Sanders AOC proposal is a moratorium until we pass real regulation that protects people. I agree with that. I think we should pass real regulation today. Do you agree with the data center moratorium until we do? Well, I think what they are calling for is that we need the real regulation. They don't think that bill is going to pass in this split Congress. They are setting the terms of the debate,
Starting point is 00:36:29 which says, why are we going forward with this until we've done the real work? And I think that's the right question to ask. Like, if I could wave a magic wand and pass any bill I'd want, it wouldn't be the moratorium. It would be the regulations that the moratorium is calling for. But putting that as a negotiating tactic, I think, is meeting the moment and the scale. I mean, Bernie talks about the potential benefits of AI and also talks about the risks and the downside. I think he's been the clearest communicator on it. But you're right, it's a bipartisan issue. It is not one that is left-right. In your framework for AI regulation, you have a somewhat different approach to data centers. You seem to see them as a kind of opportunity,
Starting point is 00:37:42 an opportunity for what? They could be an opportunity. And this is, again, you need the regulation first. It's not, oh, yeah, this will work in the future. And given the political power of these companies, I would be very skeptical of them doing it unless we pass regulation with teeth. But the idea is that our electric grid is so outdated and so in need of updates throughout the country, but even here in New York. And it also slows down the renewable energy transition, because if you want to have solar on homes, you need a grid that is more responsive to generation happening in a distributed manner, and it's not right now. And we've tried to upgrade the grid. We need funds to do it, and the only options on the table are the government pays for it, which is taxpayers, you and I, or it adds to our utility bills, which is ratepayers, again, you and I. And here comes an industry with, for all intents and purposes, unlimited private capital that is really willing to pay for time. They are desperate for speed in building these out. And so what I'm saying is you can set the incentives such that if you
Starting point is 00:38:52 want to build a data center and you're doing X percentage renewable, it should be a very high percentage, and you will pay not just for the connection to the grid and all the infrastructure that's needed for that, but you'll also pay on top of that a fee to make the grid more resilient and help the upgrades elsewhere so that you can truly make the grid more green and more reliable, well, then we'll move you to the front of the interconnection queue, and by doing that, we'll push your competitors to the back of the interconnection queue,
Starting point is 00:39:24 and you set up an incentive to actually build things in a way that benefits us. Is it possible to do, given the way our buildouts and infrastructure really work? And the reason I've developed some cynicism here is I remember being promised the smart grid of the future in the 2009 American Recovery and Reinvestment Act. Yeah. And we didn't quite get that. No.
Starting point is 00:39:50 I don't think anybody said at the end of that that our grid was now smart. And then we passed the Inflation Reduction Act and the bipartisan infrastructure bill, which between the two of them had a lot of thoughts about energy generation, and that and other things were meant to work on the grid. And I'm not saying there were no upgrades made to the grid anywhere, but I am saying that I keep getting promised gigantic grid overhauls. Yep. And then being told a couple years later that somehow our grid is still this archaic mess
Starting point is 00:40:21 where the biggest problem for getting new green energy online is we can't connect it. Your cynicism is warranted. A hundred percent. Thank you. And I dare say you wrote a whole book on ways that we could make that easier to do. But maybe the difference here is you have private capital coming up to do it. And the whole proposal is being precise on ways that we can expedite, and by expediting, shift the ones that are dirty and not paying their way to the
Starting point is 00:40:48 back of the line. So as I understand the theory underneath the data center approach, it's really that if all this money is going to flood into AI and AI is going to be, at least in part, built on the collective commons of the entire culture that came before it, that we should benefit. It is not just that Sam Altman created some magic algorithm. Sam Altman and OpenAI and Anthropic and Grok and so on inhaled the entire internet, ate up my books and the books of everybody else around, and trained, you know, these systems on them. You have an idea in there that I think tracks this theory more closely than other things I've seen, which is an AI dividend. Talk me through that. The AI dividend starts from thinking about how we can give
Starting point is 00:41:39 Americans a real stake in the AI economy. And it starts with humility that we don't know exactly how it's going to go. We don't know how disruptive it's going to be. But right now is the time to plan for the potential outcomes that could come. And there's always been this conversation, right, in my econ classes at ILR, it was that, oh, every technology revolution has always created more jobs than it's destroyed. Arguable, maybe. But this is the first time someone's building a technology and stating that the goal is to replace
Starting point is 00:42:12 all human labor. It is to be better than humans at everything. And the metric by which we understand how good the technology is getting is how much better it is than humans. It is functionally how well it is capable of mimicking different forms of human labor. Exactly right. And then exceeding them. Exactly right. I mean, you are creating a replacement-for-human-labor machine. Exactly. And it's the first time that has been tried. And it doesn't mean it will succeed, but it certainly means government needs to take it seriously. And so the idea of the AI dividend is, what if we end up in that world where all human labor is replaced or just a significant portion of it is displaced? How do you have a society that is actually functioning then?
Starting point is 00:42:53 And you have to start talking about universal basic income. And the idea is to make sure that we are setting up the structures now that would lead to Americans being protected if we end up in that future. And I have a lot of ideas about how we can prevent that future, changes, et cetera, but the AI dividend is almost that insurance policy. And you could fund it via boring things like a wealth tax I've talked about. You could fund it via a token tax. So putting a tax on the usage of AI, maybe limited to commercial opportunities where you're replacing human labor or not. And that's a fine policy as long as investment in capital always leads to more jobs, which has been economic theory for hundreds of years.
Starting point is 00:43:37 But maybe AI is shifting that. And so if it's shifting that, we need to shift our tax policy to be taxing AI and to be discounting hiring humans. And a token tax starts to get at that. But then the other funding mechanism that I talk about for the AI dividend is actually taking warrants in these companies, large out-of-the-money warrants where you say, you know, if the value of these AI companies were to go up an enormous amount, then the government would have the right to buy shares at a set price. They basically only pay off if one or multiple of the companies are wildly successful, basically if they are replacing all human labor. And if you institute that now, then VCs celebrate it and say, you're participating in the upside. And if you try to implement it after one of them is successful, then you're seizing the means of production and seizing wealth. And so my idea is you
Starting point is 00:44:32 go down all of these paths. You start to find ways to have the revenue to actually fund universal basic income or investments in job retraining or just a broader safety net, but do it in ways that automatically scale and adjust and kick in at the speed of AI. Here's a concern I've always had about this set of policies or this set of answers to the problem of AI and job displacement. So I've been very, very near the universal basic income debate a long time. My wife, Annie Lowrey, wrote a book on universal basic income called Give People Money. I used to work closely with Dylan Matthews, who did a lot of writing on universal basic income. And the trick of universal basic income to me, which maybe you support on its own merits, right, which is fine.
Starting point is 00:45:16 But under any plausible scenario of AI job displacement, it is happening to some people and not all people. And I see you looking skeptical, but I don't see a world in which one day we wake up and everybody's jobs are gone. It's going to start with some people's jobs. It'll start with some people's jobs. So if I thought it was going to be everybody's job all at once, I wouldn't worry about it. Because then we would just figure out a policy to compensate everyone. But you imagine you're a teamster and you drive a truck and you're making $80,000, $120,000 a year. And the autonomous truck companies put you and your fellow teamsters out of work.
Starting point is 00:45:58 And don't worry, we've actually passed universal basic income. No, it's totally insufficient. You're now getting $37,000 from your universal basic income. 100%. And I'm getting $37,000 from the universal basic income. And I'm still here in my podcasting studio. You got screwed. I got a check.
Starting point is 00:46:15 What worries me the most is I don't think we're going to a world of full automation. But even if you believed we were, there is a transition. And some people are going to really lose out and other people are going to be unaffected or gain. And I don't hear policy ideas that seem to know what to do with the people who are losing out along the way, right? The people who are actually getting displaced. Not the world where everybody is displaced, but the world where, if you're graduating with a marketing degree,
Starting point is 00:46:49 you are three times more likely to be unemployed than you were before. Or coders are suddenly seeing a contraction in demand for their services. But some coders are making a ton of money. Yeah. Like how do you think about the
Starting point is 00:47:16 I don't think it'll be all at once. So the idea is not just, oh yeah, we're all going to have this basic income, because you're right. People will be screwed by that. The idea is to do a number of things simultaneously, which include changing the tax code so that we're actually charging for the use of AI and discounting the use of labor. And that's a way to protect jobs and slow down the transition itself. It's investments not just in universal basic income, but in job retraining programs and in structures that help people go into new careers. Now, granted, they have a really bad track record. This is my concern. Really bad track record. But it doesn't mean you shouldn't still be investing in community colleges and finding ways to improve it as much as possible. But you're right.
Starting point is 00:47:59 To just say that, oh, we're just going to give a universal basic income is not enough. We have to think about other ways of adjusting that transition, which could include when you have people who have a permit or training or a license that takes a number of years to acquire, maybe you still require that for the transition for five years or 10 years so people can turn that training into equity, and that's another way that they have a stake in the AI economy. We're going to need a lot of policy solutions. That's why the framework I put out has 43 different ideas in it. But let's get very specific on this. And I want to come back to the question of full automation. Yeah. But New York City is facing a near-term question here, which is Waymo, the autonomous
Starting point is 00:48:44 vehicle company. They have had permits to do the sort of mapping and testing here needed to eventually roll out Waymo in New York City the way it's been rolled out in San Francisco and Phoenix and other places. And that set of permits has expired. And, you know, Mayor Mamdani has been, I would say, very non-committal about whether or not he wants to extend them. He said, if a company like Waymo finds itself in New York City, what they will also find is a city government that is committed to delivering for the workers who keep the city running. Those workers also include our taxi drivers. So here you have this very near question, right?
Starting point is 00:49:21 Waymo is a technological advance. They are nice to ride in. They are safer, from all the data we have. They also will, if you roll them out en masse in the coming years, displace taxi drivers, Uber drivers, Lyft drivers. How do you balance that? It's a tough and ongoing question that the speed of the transition only makes worse. There are ways of, again, maybe you require a medallion for Waymos for a set amount of time, and that's what enables some bit of transition, but then you're only protecting the medallion
Starting point is 00:49:51 owners and not the drivers, right? But that's maybe a piece of what that transition looks like, especially for those that have gone into a huge amount of debt to buy that medallion. You think about job retraining and other places that can go in. You think about a broader safety net, but we don't have a full policy solution for any sort of disruption that happens this quickly. It just hasn't been developed. And we need people in government that are willing to take that problem seriously and look for solutions that aren't just stop or go because this technology is coming. But so what's your version of that solution for Waymo?
Starting point is 00:50:31 Because Waymo is interesting to me, or autonomous vehicles, right? You can think of many different companies trying to do this. Even more so than, I think, at least the public conversation around generative AI, where I think the gains, which we can talk about, it has been sometimes hard to see what they are in the way people talk about it. Driverless cars really do have gains. A world of driverless cars is safer. There are a lot of people who have mobility issues right now or discrimination issues and getting picked up and all kinds of things where they could really be helped. They are just a kind of fascinating technology. You know, you're not going to have people falling asleep and then hitting somebody on
Starting point is 00:51:09 the road. Slowing them down has a cost, a cost in just a convenience people might experience, but also a cost in safety, a cost potentially in lives saved, and speeding them up has a cost in displacement. So you said we need politicians willing to take this seriously. You're a politician. You're looking to take this seriously. Yeah. What do you do? Well, I've said a few different options and things that we can do together, which is the medallions.
Starting point is 00:51:37 So the Waymos keep going, but at the end of the day, you'll charge Waymo for medallions. And the money goes into the coffers. Who gets that money? I think you can specifically be focused on job retraining and on people who were displaced, and you can try to share the benefits in that way as a portion of that answer that we have to go to. But the real question is, should we be investing in Waymos or in public transit? Like, we have a great system to move people around, and we actually need an investment
Starting point is 00:52:02 in improving that. So I took a Waymo for the first time in L.A. And it was a light rain by New York City standards, but I think a thunderstorm by L.A. standards. And I got in the Waymo and it went 20 feet and it pulled over to the side of the road and just said dialing support. Didn't say what was wrong or why it was calling, et cetera. And I found out later, it turns out almost every Waymo in the city had done it at the same time because it couldn't handle rain. And so support timed out. And I was sitting there for 12 minutes. The first Waymo I ever rode. And I went to call an Uber or Lyft or something. And finally, support came through. And the person was like, oh, yeah, seems like you're stuck. Like, I'll drive you out of there. And so I have questions about how they
Starting point is 00:52:54 function in the rain in New York City. And I have questions about when the backup is human drivers. It seems like it's another form of outsourcing as well. So, yes, in the long-term theoretical, will autonomous vehicles be safer than humans in most cases? Yes. But to say that we are definitely there right now. Oh, I wouldn't say we're there necessarily right now. It's only in the conditions in which they're willing to drive, which are quite limited. Like, you can't take a Waymo from San Francisco to Phoenix. But you can take one inside San Francisco or Phoenix. So all of that is to say, I think this hypothetical of they're ready to go and be safer right now is not right. But I think they're safer in the places they drive. And the reason I'm pushing on this is not because I'm pro-Waymo
Starting point is 00:53:37 or anti-Waymo. It's that there is a question that public officials are facing right now about how quickly to move forward into that world. And, you know, Zohran Mamdani could extend the permits and accelerate Waymo coming to New York City, or he could drag his feet and keep it out of New York City. And then there are some ideas in the middle about maybe you could have Waymo paying high prices, but even to the extent you're doing that, what you're doing is pulling Waymo in. I think people sometimes don't quite want to face up to that there is a yes or no question on some of these issues. And, you know, in the long run, do you want to protect the
Starting point is 00:54:18 jobs of taxi drivers or do you want to have autonomous vehicles operating inside of your city? Is that kind of yes or no question? I think, as Keynes says, in the long run, we're all dead. There's a question of speed, not yes or no. And I think most people here are, from zero to 100, somewhere between 40 and 60, and we're being described as yes or no. I think it's not ready right now for the environment of New York City. It will be ready sometime in the future. And like with a lot of AI, we need to be thoughtful on that transition
Starting point is 00:54:50 on how it benefits people and how it hurts them. I think it is almost easier to imagine ways of handling the financial consequences of AI for people, even though I don't actually think we've figured that out, than the consequences for their dignity, for their purpose. People train for jobs. A job is part of their identity, and then all of a sudden it's getting taken from them, and you're going to say, you know, hey, taxi worker, over here at the community college,
Starting point is 00:55:18 you can retrain to be a home health care aide, that there's something here that we're going to have to balance, you know, the economic efficiencies it pushes forward with the basic deal we offer people in this country and in this economy, which is that, you know, you study for something, you learn how to do a job, you apprentice, and that we value you for doing that, and then we're supposed to treat that as having value. I feel like we don't talk about this dignity dimension enough, so I'm curious how you think about it.
Starting point is 00:55:55 I think, for so long, humans have been defined by their job, and that's become a piece of the dignity, that you, in this worldview, have purpose, have value because of the thing that you do. And that's been ingrained in people for a while. And if we keep that mindset, then UBI is an extremely disappointing answer to it. And I think for lots of reasons,
Starting point is 00:56:29 it's not the full solution. The world that is painted by the AI optimists is we're going to get to this post-work era where people no longer derive their purpose from work. I'm skeptical. We'll be like the British gentry. Yeah, I'm skeptical. I'm skeptical.
Starting point is 00:56:54 But you believe in full automation. So then you think we're going to dystopia? On our current path, yeah. But I think we have the chance to change it. When you throw the ball down the field mentally, if you're skeptical, what is the good outcome here? What is the good outcome of, we have automated away, which you seem to think is very possible. Yeah.
Starting point is 00:57:14 At least a very large percentage of the economy's jobs. And yet what we have is something better than at least where we've been or where we are. It would have to be at the point where it's not just your basic material needs are met, but the standard of living is higher than it is now, where you can go about your day and be in a better place than you are right now. And this isn't a perfect analogy. AI is different in all kinds of ways. But if you look 100 years ago, the average American worked 60 hours a week and had a much lower standard of living. Now the average American works 40 hours a week and has a higher one. We could get to one where we work 20 hours or 10 hours and have a higher one yet. But we were able to do that transition because workers had power, because Americans had political power. And
Starting point is 00:58:06 because we were able to shape that technology to work for us, either directly through legislation or by organizing unions and doing it indirectly at the workplace. If this transition happens too quickly and we lose that political power, it doesn't just happen. So I want to talk about something where we already are seeing the effects of it. And you talk about this as very early in your plan, which is kids. And one of my theories of legislating having covered a lot of this is sometimes a crucial thing in building legislative capacity is to just find places where there's enough consensus to legislate a bit
Starting point is 00:58:43 so people learn about the issue and learn how to legislate on it. You know, there's all kinds of experiments consenting adults can run on themselves. I am pretty worried about the situation with AIs and kids, and we really don't know what it's going to mean for kids to have relationships with AIs
Starting point is 00:59:00 and to grow up where they've got AI friends and so on. What is your approach to kids and generative AI? I agree with you. I think kids in some ways need more protection and we don't know a lot of the impacts that AI will have. That doesn't mean we don't look at places where it can benefit kids. I mean, I can imagine a world where having a personalized tutor at exactly your level in each subject and able to communicate with you in exactly the way you like to learn, as a supplement to what you're getting from teachers in the classroom and your parents, is a helpful thing, but teachers and parents need a view into all of the interactions.
Starting point is 00:59:45 And we need strong data protection. And I think broadly a lot of these projects, even when you think about whether some teenagers should be allowed on or not, need to be thoughtful on the mental health impacts. This is a really scary period. And we've seen the big stories about chatbots, but then we've also seen, like, ChatGPT integrated into teddy bears and things that just feel really unnecessary. So what's in your plan on this? What do you actually want to do? So age verification for certain aspects of these interactions, the mental health checking, as I said, engaging in updating pedagogy, making sure that teachers and parents have a view into any interaction that goes with AI,
Starting point is 01:00:30 protection on training of kids' data and data privacy aspects as well. And yes, we need to prepare kids for the jobs of the future. I don't think you should shut off access to AI. People should be exposed to these tools as they are in high school and college and getting there. But being really thoughtful about what those interactions are. When you say updating pedagogy, how do you want to update it? Well, so you can still assign essays, but if you just do a take-home essay, people are just putting it into ChatGPT. And everyone knows this. But I've done a few things where high school students come up to Albany and when the teacher leaves the room, I say, how many have used ChatGPT to write an essay? And every hand goes up. So should we be requiring essays written by hand? Should we require them
Starting point is 01:01:15 written in Google Docs or a program like it so you can actually watch keystrokes being entered, right? Just updating for the tools that are out there and making sure the old way of teaching is still teaching. You know, I'm hiring for something right now. And it has really disoriented me that cover letters are now completely useless. You know, I've been involved in the hiring for hundreds of positions now, given my time at Vox. And cover letters were always quite important to me as a way of sussing out maybe somebody whose qualifications were less obvious for the role, but where you could see in the way they wrote an unusual mind at work. And now I'm not saying that's completely impossible, right?
Starting point is 01:02:00 You can still write a great cover letter, although increasingly it's getting a little harder. It is getting harder and harder to know what you're looking at. Like, are you looking at somebody who is, you know, a great mind at work? Are you looking at somebody who's cyborging it with an AI system? And maybe that's fine, because that's the world, and somebody who's very, you know, facile at using them is actually showing they have a skill that others don't.
Starting point is 01:02:24 But on the other hand, I actually want to know how the person thinks, not how good they are at prompting. It's completely knocked out our ability to evaluate somebody's writing skills. Can I ask, not about any of your current employees, obviously, but people you've interviewed: Have you noticed the loss of just skill in writing? I haven't noticed it yet, but I would say I have not hired since AI got good enough. I've definitely noticed it. And I think people underestimate this because they're used to the quirks of poorly prompted ChatGPT writing. And it is incredibly, incredibly easy to spot.
Starting point is 01:03:03 Yeah. But if you know how to use the systems and you're better at it and you're using, you know, more advanced forms of chat GPT or Claude or Gemini, you can't tell. But I think when you ask people to write things, it's just not, I think there's been a few years now where that skill is not being taught. And you have pointed out that writing is how. many people strengthen their ideas, that the work that goes into that is part of the work of thinking. And I have noticed as people have, again, not speaking to anyone I've hired, but people have applied or others that I think there has been a decrease in people's ability to write well and express their thoughts clearly and do the editing work.
Starting point is 01:03:46 So one thing in your AI framework that I thought was interesting was that you want to expand the government's capacity on AI. What does that mean? It means making sure that we have the expertise within government to understand this technology and help contribute in a positive way to its development. And this has been horribly underinvested in. And so we're not taking this technology as seriously as we need to. This is the first major technology that was developed basically without any government progress, any government work in it, right? Al Gore didn't invent the internet, but DARPA did develop the ARPANET that became the internet. And even the space race was obviously primarily government led. AI was completely developed in the private sector.
Starting point is 01:04:40 I mean, some grants on research, but it was done outside the structures of government. We need to be hiring in the expertise within government if we are going to help to govern and lead to good outcomes here. Can we do that with the way government hires? I've run into this question before, talking to people inside the federal government, inside state governments. Government hiring, for very good reasons, has structured pay scales and worries about horizontal equity and a million things that make sense when you're very worried about corruption and patronage and favoritism. The market for top AI talent is insane, right? What Meta will pay you, what Google, Alphabet, will pay you, what OpenAI, what Anthropic will pay you.
Starting point is 01:05:23 I don't think any of them are going to pay me, but yeah, not you specifically, but one. There's a question of not cutting funding for the parts of government trying to do this, but there's also the question of how do you just make sure the government has the staffing talent to keep up in a market this hot? We absolutely should make it easier for government to hire experts and to pay more in order to compete in that way. I mean, we've found a way to let states directly fund high-paid hiring; it's usually the football coach in any state. I'd rather it be a real AI expert that's working to make this future actually work for Americans. I want to get you to expand on this a bit,
Starting point is 01:06:02 because I think, as we're hearing a lot of reports of Anthropic's newest model, which I have not had access to, so I don't know how good it really is at hacking every computer system on the planet, but they are saying it is very capable at that. And if we're going to have AI companies creating what are functionally cyber superweapons, the ability of the government to actually oversee these systems becomes pretty paramount very quickly. I think Anthropic is an interesting place and is posing a lot of governance challenges in opposite directions at the same time. On the one hand, you can't just have a private company creating cyber superweapons and hope for the best. On the other hand, we just watched with the Anthropic and, you know, Department of Defense, Department of War controversy.
Starting point is 01:06:55 When you're dealing with the Trump administration, do you really want this kind of quasi-nationalization of labs? I think we're seeing simultaneously that it is uncomfortable having these systems as private as they are. It is uncomfortable recognizing that if the government gets its hands on them, they could be used for whatever a particular government's purposes might be. And so it's left a lot of us, I think, who care about regulation and care about governance in an awkward spot. It is deeply uncomfortable because we are talking about such extreme power. And it's a question of where that power lies. If you take as a given that there will be a superintelligence developed, which I don't see any reason why there won't be at this point, then of course it's an uncomfortable question about where that sits, because you're
Starting point is 01:07:50 talking about something that is smarter than any human ever. That is a real power question. And this is a real question that needs to be settled by policy, that needs to be settled by law. If you're just leaving it up to the whims of an executive branch where there are no restrictions on them, or private companies where there's no law, both of those feel deeply uncomfortable. This is why we need Congress to step up to the plate and actually decide how this division of power should happen. So in the answers you've given me, two things have kind of become clear in the background of the way you think about this. One, you seem to believe we're going to go to full automation. Not necessarily tomorrow, but you reacted with a lot of skepticism when I said
Starting point is 01:08:36 I didn't think we would get there. I think there's a significant likelihood and we should take it seriously. And that superintelligence is also a real possibility, that we're not necessarily going to stop at human level or even like a bit beyond your average worker, that we could be soon dealing with something. I think for a lot of people, they would hear that and say, so why not stop it? Why do we want a super intelligence, the machine god, that will put us all out of work, that we have no guarantee we will know how to control, right? If this is your set of views, why move forward as opposed to, you know, trying to throw your body on the train tracks? Well, I don't think right now, metaphorically throwing your body on the train tracks will make a strong difference.
Starting point is 01:09:26 I do think we should slow down the development until we've made a lot more progress on the alignment problem. I do think we're getting into really risky territory. What you need, and one of the sections of the plan is about diplomacy, is international action. We should be engaging with other countries. We should be engaging with China. We should be building universal verification systems on what is happening both at the chip level, where you can look at the geography and how it's being used, and in the models themselves. We should be trying to lower the temperature on there being an arms race. So, yeah, I am worried. If I had a magic wand, I would slow things down until we had better guarantees about what we were stepping into and where we were going. So now I want to flip the valence of this conversation. We've been talking, as I think most of the AI conversation does, about what I would call AI harm reduction.
Starting point is 01:10:48 If this technology is moving forward, how do we make sure it causes as little harm as possible? But I think for people to want this technology to move forward, for it to actually even be conceptually a good idea for this technology to move forward, I think the case has to be better than that. And we were talking earlier about many ways like the absence of a positive vision for AI. These companies have to make back, you know, in the coming years, a lot of investment. And as best I can tell, the business model they've come up with is replacing white-collar workers. and to some degree, subscription fees for people asking Chachapida look in a mole.
Starting point is 01:11:32 What I have been wondering about for some time is all these promises of AI for drug development, AI for energy innovations: what would it look like to have a public agenda that actually tried to make that real, that actually tried to make it such that there was more AI development that went in those directions and that we got more out of it? So, I mean, I've heard you talk before about your interest in AI drug development. I want to hear your thinking, even if it's not a full policy agenda, on what it would mean to have a positive agenda for AI, where the public sector is shaping this towards social good as opposed to simply private profit. We would build out an initiative that we've done in New York called Empire AI, which was that the state government bought a large cluster of GPUs and committed to continuing to build that out and gave our public universities access to it so they could run experiments at a much cheaper rate and made a public investment on a research front to go after lots of things, including AI alignment and AI safety. But we could be directing grants to that specific research. And we could be building the infrastructure in the government to make that cheaper. I absolutely believe we should be trying to use AI for good,
Starting point is 01:13:02 and New York was the first state to do this. Others are following, but the federal government has the resources to really do a deep investment here. And yeah, for a while AI benefits have been riding on the story of AlphaFold and solving protein folding, which was an incredible advance and has sped up drug discovery. But there could be more like that out there. There are definitely more like that out there. If there's not, then we've been sold a bill of goods here. And I think the government should be making use of this technology for good and directing research in that way. That doesn't, by the way, solve alignment problems. It could be that you want it to do really good things, and then actually, in pursuing that, it goes off in a whole different direction.
Starting point is 01:13:50 But yes, that is a good use of public investment. So let's focus in on drug development for a minute, because I think it's in some ways the clearest case. I mean, GLP-1s, for instance, are a revolution right now, but they're actually a quite old drug. They've been around for decades. And all of a sudden, we have all of these new candidates, either to develop or to test. Let's say you imagine what certainly seems possible, which is that in the next, call it, three to five years, AI systems begin generating a wave of molecules worthy of investigation, either new molecules or existing molecules that the AI systems scour the data and realize they might
Starting point is 01:14:30 have other uses. But if you know anything about drug development, you have choke points all across that process. There's what the FDA can do. There's getting, you know, everything from rats to monkeys, to humans for trials, that a world in which we suddenly had more good candidates would be a world where the choke points became something very different. This gets a little bit more towards the way you were thinking, I think, about the grid, which is if A is going to create, if we imagine A will create all this pressure for investment and it'll create all this, like, demand for something, how do you use that pressure to open up parts of the system that have been clogged that have fallen someone into disrepair, right?
Starting point is 01:15:17 Like, how would you make it possible? for your economy to actually benefit from AI, which requires operating not just in the world of probabilistic predictions, but actually in the world of things, of steel, of cement, of human beings who are willing to sign up for a drug trial. Well, that's why there's more to my platform
Starting point is 01:15:40 than just the AI piece. I'm giving you a good opportunity to talk about it here. But we have to cut red tape and cut regulations. One of the ways that I have used AI already is I put every statute in New York State through an LLM and asked it to identify laws that are out of date that require paper when we could do something digitally, a bunch of ways of checking that we have requirements that are just getting in the way of getting things done, what Jen Polkum I call the policy cruft that develops over time, and put together now a 60-page bill
Starting point is 01:16:15 for this session of just pulling out a bunch of these old requirements. that are getting in the way of doing things. We can do the similar thing with regulations, not just with statutes, but where have we developed practices that are now in the way of moving forward in drug discovery or broadly? Yeah, we need to change policies
Starting point is 01:16:33 that stop government from getting things done. And sometimes that's in technology doing the thing more efficiently. Sometimes that's in using the technology or not, but finding ways to identify choke points and find ways to alleviate you. them. Or we're talking it's tax week. A lot of us who waited until the end of paid our taxes this week. And it was already possible for the IRS to pre-fill a tax form for most Americans who have
Starting point is 01:17:06 pretty straightforward taxes and lobbying has made that very hard and the Trump administration has made that harder. But it would be fundamentally as a technical matter, trivial for their to be through the IRS a tax preparation AI system that every American had access to, where they uploaded their forms, it was cross-checked with IRS data, and it did their taxes for them in seconds, you know, saving people a lot of time and energy. Like, the capacity to actually have, give every American an AI accountant under the auspices of the IRS. If we don't do it, it's not because we can't. You know, there's a real question of whether or not the lobbyists allow people to do that.
Starting point is 01:17:58 But the relationship between people and the state could really be transformed if government chose to transform it. 100%. And I think we need to make that a priority. So I have a bill that I've been pushing for for a few years to make it easier for different agencies within New York City to share data. that you give to them for the purpose of signing you up for benefits so that if they sign you up for one benefit, you can automatically be assigned for another one. That right now is restricted, and we should change that. Obviously, New York City invested like $100 million on building a portal, but actually what we need are changes on the back end of laws that make it easier to share that
Starting point is 01:18:37 data. I'll go a step forward, which was I was speaking with the tax department in New York State and advocating for, okay, free file, it makes it easy for you. You don't need another software. but why can't we just do it for New Yorkers? We have a lot of the information as the New York State Department. And the answer I got back is that so much of the information we have is actually wrong. They had this need to just improve the data internally first. And I said, okay, why don't you just find companies that are wrong or build systems to help them? And they were like, we're working on that, but like give us five years.
Starting point is 01:19:12 Like, that's where we want to get so that we can automate it. So maybe it does come back around to data integration and just having the data correct. and it might not be any more that the technical aspects of how to do your taxes is the limitation, but just as the underlying data that we're feeding accurate enough for it. I guess the principle I'm trying to get at here is to the extent you don't believe we're going to pause. I'm not saying you don't, but one doesn't, right? That we are going to move forward at some pace here, which seems likely. I think actually benefiting from AI as a public,
Starting point is 01:19:49 is a harder challenge than people have given it credit for. I don't think just because the systems get better, there is necessarily a public benefit. There can be individual benefits, individual harms. But if we want drug discovery to accelerate, we need to open up the systems that would allow drug discovery to move faster. You know, if we want the relationship between people
Starting point is 01:20:13 in the state to get cleaner, we need to actually create the conditions for it and overhaul, very, very, very difficult and archaic and multi-layered and error-filled, you know, government databases. And it's interesting because I do think right now throughout the private sector, you see companies, you know, with greater and lesser degrees of success trying to figure out, like, what does it mean to rebuild ourselves to use AI, everything from how teams are structured, to how our data works?
Starting point is 01:20:44 The government, you know, because it doesn't get competed in business by new, governments, you know, is working on much older systems, and it's very, very hard to build them. But I don't know, I think for AI to be worth it, you're going to need a lot more of this kind of investment at a much higher level of ambition. And like right now, I'm not saying, we don't even seem to be able to legislate on the harms very effectively. So I'm not confused as to why we are focusing there. But I do worry a bit about it because there's a world where we've done some reasonable harm reduction legislation and done very little benefit from it. And that's a world where we've kind of pushed AI towards being a worker replacement machine
Starting point is 01:21:25 as opposed to having like a public vision for what we want from it. I 100% agree. And this is the hard work of governing. I don't think these are maybe the easy places where we can build the legislative muscle. I would hope so. I think that's probably around kids. but I think these are parts of the places where we have to work together to change that. And part of it will be on AI and setting up incentives. And part of it will be building the infrastructure that allows that to happen. We're talking a lot about pretty high concepts here. One of my first bills in the state legislature was to help the state get on cloud computing
Starting point is 01:22:02 because it mostly uses mainframes. And the Speaker of the Assembly... Most uses mainframes in 2023. Yes. Yes. The Speaker of the Assembly codes in Fortran, and I always joke that his retirement plan is going to be fixing all the state systems because they still run on Fortran. There's just work that needs to be done on modernizing to allow us to take advantage of the benefits, and that will require both direct investments and a lot of legislating to encourage that direction. So one of the reason I want to have this conversation with you is you've ended up, whether you wanted to or not, a bit of a test case for how all this is going to work.
Starting point is 01:22:40 So you're running for Congress, and there is, as I mentioned before, the Super PAC that's funded by co-founders of Palantir, Open AI, and Injuries in Horowitz. I spent a million dollars opposing your campaign so far, suggests to spend. Two and a half, and suggested they might spend up to $10 million. You know, at the same time, I've looked at some of their statements, Greg Brockman, who's one of the Open AI founders and is a major donor of this pack. he has said, being pro-AI does not mean being anti-regulation, means being thoughtful, crafting policies to secure AI's transformative benefits while mitigating risks and preserving flexibility as the technology continues to evolve rapidly. So what's their problem with you? If they really truly believed in having one national framework that regulates AI and balances
Starting point is 01:23:31 the benefits and risks, they'd be supporting me. I think it's a difference between what they say for marketing purposes and what they actually believe and their actions portray that. So OpenAI last week released a policy document that mirrors a lot of my policies. The emphasis are different. I wouldn't say that. I felt parts of it. Parts of it. Yeah, they're like, we believe in a 32-hour work week. Yeah, yeah. But they did say they wanted third-party audits, but sometime in the future, I think we're already there. And there was much more of emphasis on society dealing with the problems after the fact as opposed to restrictions on the developers, right? I'm not saying it's a match, but they put forward some policies there. And
Starting point is 01:24:17 they also put later in the week policies specifically around kids out that included safe harbor provisions, included testing, encouraging red teaming of models. So when you red team a model or red team any software, you get people to try to intentionally break it and to do something it's not supposed to do. And you might want to red team it around producing child sexual abuse material to make sure that it can't out in the world. And right now in every state in the country, red teaming it and producing that material would be illegal. We have a no tolerance policy on the production of the material. And obviously no DA is going to go after you for that. But one of the things they talk about there is they want to extend safe harbor provisions so that you can actually
Starting point is 01:25:05 encourage red teaming. Yeah, I mean, this is my concern, and I've heard this from people on the Hill, like people in the Senate. Alyssa Salkin said a version of this to be on the record that at the exact moment that AI is becoming so powerful that it would be irresponsible for Congress to not be starting to construct regulations, legislative structures, transparency, kids, that the AI industry now has so much money that much as crypto did before it, it's able to create a kind of super pack of, you know, that has like a Death Star-like capability. Now, it's weird because Anthropic is, you know, one of the funders of another pack that is sort of more pro-regulation and is supporting you. So you have players on both sides.
Starting point is 01:26:00 But a world where AI will have this much money and the political system is this permeable to money is a world where in order to regulate AI are going to need to have to sign up your own AI patron to support you. And so I feel like there is some bigger question of political economy and power here that has ended up getting a bit of a test case in this race, which is, I think, quite. worrisome. I just think we could very, very quickly end up in a scenario where politicians are terrified of the issue. And that's the goal of leading the future. The goal, as they've stated, is to extract so much pain in this race and to beat me up so badly that when the idea of AI regulation is proposed in the future, politicians run in the other direction. I mean, they have said publicly that they want to make an example out of me. Think about what that means. Not that, oh, we have a different view, and so we want to make an example out of Alex Boris.
Starting point is 01:27:04 And they want to do that, not because I have ideas that are outside the mainstream or, you know, when I proposed my framework, I got praised from those on the left. I also the chief futurist of OpenAI retweeted it. They're coming after me because I successfully passed the bill. Frameworks, there's lots of frameworks. Those are cheap. Who's going to put political capital forward and get that? something actually done. And they tried to prevent any states from moving forward by putting
Starting point is 01:27:38 this preemption language in legislation that failed. So they instead got this executive order from Donald Trump to target states that want to regulate AI and try to extract punishment. They would cut off funding, that they would sue the states. And it targeted the Rays Act, along with a few other bills throughout the country. So why are they coming after me? Because I might actually get a bill passed. What, this goes back a little bit in our conversation, but what actually in the race act do they fight? Because as somebody who cares about air regulation and I think it's a good start, what actually
Starting point is 01:28:15 got enacted there is a pretty soft bill. It is so when they, it's the strongest AI safety bill in the country and I'm embarrassed by that fact. It should be much longer. When they, when they come after it, when they're trying to get it changed, what are they so upset about. It's that there's any regulation whatsoever that really is the challenge and that there is any regulation that they have to play by any rules is such an anathema to them. And they don't have to win forever. They only have to push this off for an election cycle or two. The speed with which
Starting point is 01:28:45 AI is developing, the amount of political power, let alone capital that they will have to deploy in the future will be unbounded. We already have elected officials who are, were terrified to take up this cause, despite how popular it is, because they see all the money on the other side and the risk averse. I'm running for Congress. I talk to every member of Congress I can, and I hear from them in quiet conversations, yeah, we're watching this race. We want to see if this is a issue that you can win on standing with people or if the money just swamps everything. And the lesson that will be learned by members of Congress, if the Super PAC wins, is run the other way, is don't actually touch. Maybe you can say a speech on it.
Starting point is 01:29:34 Maybe you can go on a podcast about it, but don't try to pass the bill because they will end your career. They got some place to end. Also, final question. What are three books you recommend to the audience? So the first is my favorite book of all time. And I know you have thoughts on this book, but it's a theory of justice by John Rawls. I think it does the best job of setting up a broad framework of rights of humans while also understanding when inequalities could be justified. And I think it's the best place to start for political philosophy.
Starting point is 01:30:05 And I know you've tried it a few times. I will point out that in the intro, he says, you know, this is a third of the book that you have to read to get the basics of it. And here's the half of the book you have to read to really deeply understand it. And the rest is, you know, for the academics. And so I'd encourage you to give it another try. So a theory of justice by John Rawls. The second one is World Eaters by Catherine Bracey, which is marketed as this deeply anti-VC book. But I actually think is written by a tech insider and a much more nuanced approach to the incentives that venture capital sets up. And that is always for growth, growth, growth, and don't think about the social consequences. And I'll add that, since VC's always pushing for a company that will scale no matter what, I saw this happen to my wife, who's a YC founder and built a business that probably could have been fine on its own, but had had the venture investment, and it was scale or die.
Starting point is 01:31:04 And so a lot of the negative externalities I've come from that, I think it's a really timely look as we are building out AI. And the last one's, I think, a little more whimsical, but goes back to our conversation about the skill of writing. and it's bird by bird by Anne Lamotte, which is just a delightful read and is a good reminder for any procrastinators to just break down your work and do it bird by bird. That's where the title comes from. But is such a well-written leads by example and in the instructions on the art of writing. And I encourage especially when our skill of writing is being degradated for people to be intentional in that practice and to read that book. Alex Morris, thank you very much.
Starting point is 01:31:48 Thanks for having me. This episode of Isaklancho is produced by Annie Galvin, fact-checking by Lori Siegel. Our recording engineer is Amin Sahota. Our senior audio engineer is Jeff Gelb, with additional mixing by Isaac Jones, and Amin Sahota. Our executive producer is Claire Gordon.
Starting point is 01:32:16 The show's production team also includes Roland Hu, Marie Cassione, Marina King, Jack McCordic, Kristen Lynn, Emmick Helbeck, and Jan Koppel. Original music by Pat McCusker.
Starting point is 01:32:30 Audience strategy by Christina Samaluski and Shannon Busta. The director of New York Times opinion audio is Annie Rose Drosser.
