Hard Fork - Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0’

Episode Date: June 20, 2025

This week, President Trump’s family business announced that it was introducing a mobile phone and a cellular network. We tick through the many potential conflicts of interest this new business venture raises. Then, the co-founders of the startup Mechanize defend their efforts to automate away all jobs — starting with software engineering. And finally, we take a trip to the movie theater. “M3GAN 2.0” is out next week, so its star, Allison Williams, joins us to discuss the film, and A.I.’s impact on her career and parenting.

Guests:
Matthew Barnett and Ege Erdil, co-founders of Mechanize.
Allison Williams, actor.

Additional Reading:
Trump Mobile Phone Company Announced by President’s Family, but Details Are Murky
The President Is Selling a Phone
This A.I. Company Wants to Take Your Job

We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript
Starting point is 00:00:00 Hello, Kevin. Hello. I'm here in a beautiful studio in London. Casey, we have been trying to get a studio like this for years, and I think we just figured out that we have to just move the show to London and tape it here every week. You're in this sort of like lush red booth
Starting point is 00:00:18 with the Hard Fork logo behind you on a TV. It sort of looks like if Hard Fork was a Denny's diner, it would kind of look a little bit like the studio that you're in. Like I keep waiting for them to bring you a plate of pancakes. I want the Grand Slam breakfast. Now, Casey, do you miss me in person? Is it different recording without me?
Starting point is 00:00:36 No, it's great. I have stretched my legs all the way across the studio for the first time. My circulation has never been better. It's markedly cooler in here, both in the sort of temperature and vibe sense of the word. So, no, if you want to stay over there for a while, you're fine by us.
Starting point is 00:00:54 I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, the Trump family is releasing a phone. We'll tell you how the president is using influencer tactics to profit while he's in office. Then these AI co-founders say they're going to automate away every job.
Starting point is 00:01:17 We'll meet the team behind Mechanize. And finally, we're going to the movies. M3GAN 2.0 is almost in theaters, and star Allison Williams is here to talk about it. ["The Trump Family's New Initiative"] Well, Casey, what are we talking about this week? Well, Kevin, if you hear an ominous ringing in the distance, you may be hearing the Trump phone.
Starting point is 00:01:42 Oh, no. What's the Trump phone? Well, Kevin, this week the Trump family announced two initiatives. One, a new cell phone provider called Trump Mobile. The other, a new forthcoming smartphone. It's gold colored, it has Trump branding, and they're calling it the T1, which is the same thing
Starting point is 00:02:00 they call the first Terminator movie. I'm sold. So Casey, let's talk about this. What is actually going on here? What are they selling? And what is this mobile service they are operating? Yeah. Well, look, obviously the Trump family has a lot of initiatives.
Starting point is 00:02:18 They love to do a lot of branded merchandise. And for the most part, it all just sort of washes over me and I don't pay that much attention to it. But once, you know, the President of the United States' family says we're doing a smartphone and we're going to have a cellular network, I think, well, Kevin, we should probably learn a little bit about that. Okay. Yeah, so teach me. So there are sort of two aspects of this to talk about. There is
Starting point is 00:02:43 the cell network, what's called a mobile virtual network operator or MVNO. That's Trump Mobile. And then there's the phone. Why don't we start with the MVNO? Casey, what is an MVNO? Well, there are these real cell networks like the ones owned by AT&T and Verizon,
Starting point is 00:03:02 and they spend billions of dollars to build networks all over the country, but they wind up with this unused capacity. And that creates the space for the MVNO to come in and say, hey, why don't you let us buy that capacity at a wholesale price, and then we'll resell it to other people and perhaps make a tidy profit. And believe it or not, Kevin,
Starting point is 00:03:24 this has turned out to be a pretty good business for some people. For example, have you heard of Mint Mobile? I have actually. This is the one that is run or part owned or was part owned by the actor, Ryan Reynolds. Yeah, that's right. So it was founded in 2016 and sold to T-Mobile eight years later for more than a billion dollars, which is, you know, not a bad price given that they didn't even have to, you know, build any cell phone towers. So, yeah, I remember reading about this and reading that Ryan Reynolds had somehow made something like $300 million by selling this MVNO thing to T-Mobile, and I thought that sounds like a great business.
Starting point is 00:04:01 Maybe I should learn about it, and then I never did. Well, I was just learning about it the other day, Kevin, because one of our fiercest competitors in the podcasting space, Smartless, the podcast hosted by Jason Bateman and Will Arnett and Sean Hayes, they have launched their own MVNO. It's Smartless Mobile. Really?
Starting point is 00:04:21 Huh. So what you described to me just now is like basically a surplus shop for cell phone service. Like if you're Verizon and, say, only 60% of your tower capacity is used, you could sell that extra space to an MVNO who could then sell it to customers. Or like, how does it work?
Starting point is 00:04:39 Yeah, well, so here's the twist. Here's how I would pitch an MVNO. It's cheaper for worse service. Okay? So, if you have like a Verizon or an AT&T plan, my guess is you're going to be spending, I don't know, 80 plus bucks a month on your service, but you know, you're going to get to use your cell network during all the busy times. You're going to get priority. If you're on an MVNO though, your service might be really slow during busy times, but
Starting point is 00:05:03 in exchange for that, you might only pay 30 or $40 a month. Trump Mobile says they're going to sell for $47.45 per month, which appears to be a reference to the 45th and 47th presidents, Donald Trump. So okay, they're going to sell this service. Is it running on Verizon or AT&T or like one of the big mobile carriers? Or like, how does it actually work? Who are they subcontracting for? So my understanding is that they're renting capacity from a group of those. It's not just one of those. They're gonna sort of, you know,
Starting point is 00:05:34 bundle up unused capacity from a bunch and create the network that way. I see. Okay. So that is the cell phone service that the Trump family is going to start offering. Now we have to talk about this phone. I know the following things about this phone. One, it's gold. Yes. Probably not actual gold, gold colored. Two, it's an Android phone.
Starting point is 00:05:59 Yep. Three, it is billing itself as being made in the USA. That's right. So all of those three things are correct, but I'm sure you know more about this phone. Tell me what there is to know. Well, so they're calling it the T1 phone, 8002 gold version,
Starting point is 00:06:15 which sounds kind of like a Taylor Swift album. It will purportedly be sold for $499. The family has suggested you pre-order it now for $100. But there are many, many remaining questions, Kevin, about what this thing is. We've seen exactly one rendered image of this phone on the Trump mobile website. There have been questions about
Starting point is 00:06:43 whether this is actually a photo of a phone or maybe just a Photoshop render. Some people are even just calling it a concept of a phone, Kevin, if you can believe that. Yes, I mean, I can. In part because I was reading David Pierce's great story about this at The Verge, which had an actual rendering of this T1 phone and basically made it sound like this thing is either not real, or they have managed to come up with some miracle of supply chain logistics and device manufacturing
Starting point is 00:07:13 that no one else who's been thinking about this stuff for decades has managed to come up with. Yeah, and I would say that, like, typically in a situation where you're a celebrity and you license your image, likeness, and name out to just kind of whoever the highest bidder is, that bidder does not tend to be an incredibly innovative operator who is able to work supply chain miracles and create an incredibly premium good at the lowest price you can imagine. That's a pretty rare thing that happens in these cases. Yeah, but those celebrities did not write The Art of the Deal. So I think we have to give him some credit here. I think that is very fair.
Starting point is 00:07:48 But to get to the heart of it, Kevin, basically no one thinks that you can build a modern smartphone in the United States that runs Android 15 for less than $500. Yeah, I mean, I was skeptical of the price tag on this thing because I've seen some stories about people doing the math on what it would cost to, say, make an iPhone in the US, and it's many multiples of the cost of manufacturing that overseas. And it's not just all of the components, it's all of the fabrication. It takes a lot of
Starting point is 00:08:21 specialized equipment to make all this stuff very precise. So I just cannot think of a scenario in which they could make something like this phone in the US and sell it for $500 and still make a profit on it. But do you, am I missing something? I think you are exactly right. And so that raises kind of two questions. One is if they are able to deliver at that price,
Starting point is 00:08:47 basically like what shortcut did they take or how did they get there? Will they be public about that? And if they're just simply not telling the truth about the price and it's actually gonna be more expensive, that doesn't sound great either. Yeah, and they're actually, we should say there is like one US made smartphone, it appears, that is still shipping.
Starting point is 00:09:05 It's called the Liberty Phone, and it is fabricated and assembled in California. And the starting price of that phone, Casey, would you like to guess? I'm gonna guess $2,000. $1,999. You lose by Price Is Right rules, but you win spiritually. $2,000 is apparently what it costs to have a phone that is assembled
Starting point is 00:09:27 in the US. So if the Trump family has figured out a way to do that for 25% of that cost, I would be very impressed, but I would also not be surprised if they're just sort of pulling that claim out of thin air. All right. So why are we talking about this today? Well, one, it is just kind of a funny story during kind of a dark time. It struck both of us and we thought it would be worth just kind of walking through some of those details. But there is also a really dark undercurrent here, Kevin,
Starting point is 00:09:55 and it speaks to the utter strangeness of having a president who seems to be openly using public office for private gain. And while I'm not gonna make the case that the Trump phone is the absolute most important story in the week, given all of the tensions abroad, the protests at home, I do think it is worth pointing out to our audience
Starting point is 00:10:17 just how many conflicts are baked into an idea as simple as let's have a phone and let's have an MVNO. Totally, I mean, what I keep thinking about when I hear these stories about the various spinoff businesses that the Trump family is starting, I just feel so bad for Jimmy Carter. You know, like they made Jimmy Carter put his peanut farm in a blind trust when he took office
Starting point is 00:10:40 because owning a peanut farm was seen as a potentially bad conflict of interest. And I'm just glad Jimmy Carter, well, I wish Jimmy Carter were still around, rest in peace, but I'm sort of glad he's not around to see the absolute depths of side hustles that the Trump family has gotten itself into. Yeah. Well, so let's talk about a few of the areas, Kevin, where this might raise a conflict. One is just the fact that telecommunications is a heavily regulated industry. Trump appoints the head
Starting point is 00:11:11 of the Federal Communications Commission, which oversees the telecom industry. So now, if you're Brendan Carr, the head of the FCC, every time you go to make a policy, you're probably gonna be asking yourself, well, what does this mean for Trump Mobile and the Trump phone? I have to imagine that this Trump Mobile MVNO thing is going to start off being a very small operation.
Starting point is 00:11:33 But if it were ever to grow into something that was actually competing with the big mobile giants, I think it absolutely would be a very ripe conflict of interest there. Well, and now let me throw another conflict at you, Kevin. It's extremely common when a new smartphone comes out for a manufacturer like Samsung to go to some of the tech companies out there and say, hey, we have a new phone coming out. Would you like to make a deal with us?
Starting point is 00:11:57 Pay us a certain amount of money and we will put your app on our phone. Well, now imagine that you're Amazon or you're Meta, and you want to curry favor with the Trump administration because you have a huge amount of business before the government, and you're still working to make inroads with Trump and his family. Wouldn't this be a great time for you to come along
Starting point is 00:12:20 and say, hey, Donald Trump, you name your price. We would love to get Amazon on the Trump phone. We would love to get Instagram on the Trump phone. And all of a sudden, you have opened up a new avenue essentially for bribery for these companies to curry favor with the Trump administration. Yeah, it's fascinating. And it's so troubling for all the reasons you just outlined. But I also think there's a sense in which the Trump family's various business
Starting point is 00:12:45 endeavors during this term are really kind of giving us a roadmap to the ways that people have found to monetize influence in the last couple of years. Like just look at the meme coin business that it has entered and that is actually making like quite a bit of money for the Trump family. That is something that did not really exist in any scaled way a couple of years ago. But now, not just politicians, but lots of celebrities and influencers, people who have their 15 minutes of fame, that is one of the ways that they sometimes try to cash in. It seems like this MVNO thing is also becoming a way that people like the Smartless podcast guys, like Ryan Reynolds, like sort of all of the other celebrities
Starting point is 00:13:28 who have gotten in on this kind of deal, that is a way to sort of turn attention and reputation and influence and fame into money. And so I think it's just, it's worth saying, I think this is something that a lot of politicians are probably going to pay attention to and potentially try to replicate because if Donald Trump is able to sort of monetize his influence I think there will be lots of other people who say well if he can do it, why not me?
Starting point is 00:13:56 You know, Kevin, I'm glad you brought up the meme coins, because I think the meme coins offer a really tangible example of these avenues for influence that we have been talking about. So the Wall Street Journal reported over the last week on a new financial disclosure from President Trump, and it showed that he had personally made $57 million from his family's crypto firm last year, and it put his current crypto holdings around $1.7 billion at the low end, and that's a conservative estimate. So why does that matter?
Starting point is 00:14:30 Well, just as he oversees the FCC, he oversees the crypto industry as well. He appoints the head of the Securities and Exchange Commission, which has a lot of leeway to regulate how crypto is sold or not sold. And so here we can see exactly how much it benefited President Trump to come into office, sweep out all of the anti-crypto regulators, bring in a bunch of pro-crypto regulators. And by the way, he was heavily lobbied by the crypto industry to do that.
Starting point is 00:14:58 They put a lot of money into his campaign. Well, now he has $57 million. So again, just to say, we tried to set up a system in the United States where this could not happen. The Constitution has an emoluments clause that says you cannot accept direct payments or gifts from foreign governments, for example. And we tried to just create strong norms that said if you're in public office, you cannot use it to just make a bunch of profits for yourself.
Starting point is 00:15:23 But that norm, like so many others over the past six months, has been shattered, and it just seems like a really troubling precedent, at least to me. Yeah, I mean, part of what I find so curious about this moment with the Trump family and their business expansion is, you know, typically in politics you try to reward your supporters. The people who vote for you, typically you have some affection toward them and you try to give them things
Starting point is 00:15:52 that will make their lives better. In this case, it is a weird inversion of that where the people who are the most loyal, diehard Trump supporters, they're going to be the ones lining up to buy the meme coins, lining up to buy the NFTs, probably lining up to buy the Trump phone and the Trump MVNO service.
Starting point is 00:16:12 And they are going to be paying more money for things that are less valuable to them than what some other less Trump affiliated carrier or seller would provide them. And so I think it is just a fascinating experiment in how you can pretend like you are rewarding your most loyal followers by really selling them something that's not very valuable.
Starting point is 00:16:34 And I just, I wonder how long that will last or whether there will be people who buy this stuff and say, hey, wait a minute, my old Verizon service was way better or my iPhone was way better or I wish I didn't lose all this money on crypto coins. Yeah, it seems like the president has figured out ways to monetize tribalism in ways that have been very beneficial to him.
Starting point is 00:16:56 And I suspect pretty enjoyable to the people who are buying at least some of these products, but it just goes against so many precedents in our country's history. I want to say one more thing about this. The great Russian and American journalist, M. Gessen, wrote a piece in The Times recently that really resonated with me. They wrote about the shock of authoritarianism. They had, of course, spent a lot of time in Putin's Russia. And there was just kind of one shock after another of this norm being shattered and that norm being shattered. And the human mind, they write, is always seeking a sense of stability,
Starting point is 00:17:31 is always seeking a sense of, okay, well, that shock might have happened, but my life is still basically the same. And they wrote about the danger of that because as shock after shock after shock accumulates, you wake up one day and you realize you have much less freedom than you used to and a lot of other bad things have happened. So in and of itself, a Trump phone, a Trump MVNO might not seem like that big a deal,
Starting point is 00:17:54 even though they really are in their own ways quite shocking in the context of every other presidency we've ever seen. But my fear, Kevin, is that as more and more of these shocks happen, we do get desensitized to it. We do stop paying attention to it. We don't bother doing a segment about it on a podcast because we just think, oh, well, you know, that's Trump. What are you going to do? So at least this week, we wanted to say, hey, this thing that seems crazy, it actually is super crazy. When we come back, they're coming for your job.
Starting point is 00:18:27 We'll meet the team called Mechanize. Yes, this was a fun one to write. This was a startup in San Francisco that got started earlier this year. They raised a bunch of money from people you probably know. Patrick Collison was one of their investors. Jeff Dean, a big AI honcho at Google, was another one. They are a very buzzy startup. And what attracted me to writing about them
Starting point is 00:19:14 was that they have said that their goal is to automate all labor. They want to take away everyone's jobs, yours, mine, everyone we know, and replace them with AI. And they think they can do this in the coming decades with a new type of reinforcement learning system. Well, that is a promise we have heard from a number of Silicon Valley companies, but maybe none as directly as the Mechanize founders are pitching it. What exactly is their secret sauce?
Starting point is 00:19:48 Well, they are building what they call reinforcement learning training environments, basically simulated environments that these new AI agents can use to learn how to do various white collar jobs. And they believe that this approach can scale not just to software engineering, which is the first job that they're trying to automate, but to all other kinds of jobs eventually as well. And I thought this was an important conversation to have on the show because we've been talking about AI and jobs for the last couple of episodes now. I think this is
Starting point is 00:20:18 a conversation that is beginning to grow more and more important and more and more urgent, and there's starting to be evidence of some job displacement from AI. And then along come these guys at Mechanize who say, well, this is actually only the tip of the iceberg. You have not seen anything yet. Our plan is to help these giant AI companies automate a bunch more jobs very quickly. Well, that seems to raise a lot of important questions
Starting point is 00:20:42 about how society would deal with the fallout of such a thing. So why don't we bring in these founders and see what they have to say about it. Yep. Let's bring in Matthew Barnett and Ege Erdil from Mechanize. Matthew and Ege, welcome to Hard Fork. Hi. Hi. So, I had a lot of fun writing this story about Mechanize, the company that you all founded along with your third co-founder, Tamay Besiroglu, earlier this year.
Starting point is 00:21:18 And one of the things that made me interested in what you were doing is that unlike a lot of AI companies I cover, who sort of pretend not to be automating jobs, or they say, you know, we're just making helpful co-pilots and assistants for workers, we're not going to replace their jobs, you all were actually coming out and saying, yes, we absolutely want to automate jobs. And not just a couple of them, we want to automate all jobs. So tell me what inspired you three to leave Epoch AI, the research firm where you were before this, and start this company and to be so open about the agenda of automating labor. Yeah, I guess when we were at Epoch AI, which is a non-profit research organization which had the mission of informing society about trends in artificial intelligence, we did a bunch of research into the economics
Starting point is 00:22:09 of AI. And as part of that research, we looked into what would be the impact of AI that could substitute for human workers across the economy on the economic growth rate. And a very robust conclusion of that research was that it would speed up economic growth by enormous amounts, like unprecedented amounts, maybe by 10 times, maybe more compared to current rates. And that would unlock such a vast abundance of not just material goods, but also services that today can only be provided by humans, technological progress, like medical progress,
Starting point is 00:22:44 that currently, no matter how much money you have, you can't really purchase. We think if AI automates everyone's job, because we can scale the AI workforce so much more than we can scale the human workforce, that leads to this vast abundance, enormous increase in the variety of goods, new medicine, new technologies, etc., and makes people's lives much better. So this is a story we've heard from other AI founders. They want to unlock a world of radical abundance.
Starting point is 00:23:09 And I think a lot of our listeners hear that and they think, okay, here comes the Silicon Valley hype guys and they're out there raising funds. And so they're going to tell me this beautiful story so that they can raise billions of dollars and you know, they'll get rich. How do you respond to the idea that this is kind of just a bunch of hype that you're selling to kind of benefit your own project? I guess I would say that the difference between just hype
Starting point is 00:23:32 that someone's speculating about and something that's real is you can test it empirically. You can look whether it's an implication of robust economic models. You can try to look at the history of automation. I would just say that if you look at the empirical evidence, it's quite clear that automation has been good for most people.
Starting point is 00:23:51 People have benefited from mechanization of agriculture, from refrigeration, to all these sorts of technologies. So I would just say that anyone who thinks that it's just benefiting a small group of people should really study the history of automation. And I think that almost all the evidence would show that they're wrong. Well, I have studied the history of automation. I wrote a whole book that touched on the history of automation. And one of the things that I want to just really impress upon you all is that like, I do agree with the statement that automation and technology broadly generally improve people's lives in the long run. I don't think, for example, a lot of us would willingly switch places with our great-great grandparents. They had hard, backbreaking lives of manual labor. But people don't live in the long run. People live in the short run and in every
Starting point is 00:24:38 technological revolution that we've ever had, there have been people who struggle, who fall through the cracks, who aren't able to sort of seamlessly make the jump from one era to another. So what do you say to those people who look at you and say, well, I get that this future of radical abundance may be possible somewhere down the line, but for me in the year 2025, what this looks like is my job getting automated away
Starting point is 00:25:02 and me not having any way to pay my bills? Well I guess a few things. First of all, I think because AI, we expect AI to speed up economic growth, assuming it can substitute for everyone's job and not just like the job of like a few percent of workers or something like that. The impact of that is so big that I don't actually think the long run is that long. Like it might be like a few decades or something like that. So it might easily be within the lifespan of most people who are currently alive, for example. That's one thing.
Starting point is 00:25:30 But the other thing is, I think the standard of we should only automate jobs, we should only embrace new technologies if there are no losers, if nobody is made worse off by the adoption of a new technology, is extremely strict. I don't think that's a reasonable standard. If AI can actually substitute for human workers across the entire economy, it's just a substitute for a human worker. In that case, humans will not be getting income from wages, but there are lots of other sources of income.
Starting point is 00:25:58 There are countries in the world today where actually citizens get their income from, say, natural resource endowments. Like there is just a certain amount of natural resources that a country owns, and the government has maybe a sovereign wealth fund, maybe they have other ways in which they can distribute the income from that natural resource to the population. And that is something we see in our world today. So it's not actually that far-fetched. And that's the kind of thing I would expect to happen in a world where AIs are vastly more capable and human workers cannot compete.
Starting point is 00:26:18 And that's the kind of thing I would expect to happen in a world where AIs are vastly more capable, and human workers that cannot compete. So you guys published a blog post recently where you wrote about, in part, the history of automation in software engineering. And you note that automation has been coming to software engineers over a period of decades. And as that has happened, their jobs largely have not been eliminated.
Starting point is 00:26:44 So what are you seeing right now that is making you say, okay, it really is different this time and we're going to be able to go that last mile and actually fully automate everything and take away these software engineers' jobs? I'm not sure we're actually saying that something is definitely different this time. In fact, two of my co-founders think that maybe full automation will take many decades. And so I don't think we're actually taking the strong view that there's something different in the next five or 10 years such that everyone will lose their jobs. I actually think that especially in the next five years, we'll probably see a continuation of past
Starting point is 00:27:19 trends, which is that AI automates some tasks within professions. It doesn't completely automate the entire profession in the sense of completely replacing most workers in those professions. So to take the example of software engineering, we think that these coding assistants will be used to help software engineers. We don't necessarily think that the coding assistants
Starting point is 00:27:39 will be able to do all the jobs that a software engineer can do. So for example, a software engineer often needs to coordinate across teams. They often need to plan projects, test the software to make sure that it's up to the design specifications. They need to do a lot of these different types of things, which are very hard to automate, which aren't just under the label of coding.
Starting point is 00:28:00 And so we think that if AIs can just do coding, but it can't do these other things, then in fact it will lead to a productivity increase in these professions and probably raise wages for software engineers, even though it's not taking over their entire job. Of course, in the long run, we just expect that it will be able to take over people's jobs, but this isn't because of some specific thing that we think is different now. We just have a projection of here's how long we think it will take for AI to replace everything. And we have disagreements about that, but it's not that we think that it's like a
Starting point is 00:28:34 qualitatively different type of automation. We just have a guess at how long that type of thing will take. Got it. Okay. So let's talk a little bit about what you guys actually do. You just raised a bunch of money. You're presumably hard at work building something. I've read a little bit about it in Kevin's article, but tell people what it is that you're actually building. Yeah, so what we're building is a reinforcement learning process where we will design essentially virtual work environments, you could say, for models to acquire the skills that human professionals have that enable them to do their jobs.
Starting point is 00:29:09 Like using a spreadsheet, maybe. Yeah, like using a spreadsheet, using common software tools that people use in their work, like Slack for messaging, checking their email, if they're doing software engineering, then using tools like GitHub. So what we're doing is creating these work environments with scoring. So basically we have a bunch of tasks in these environments and we can score a model to see, did the model do this task well?
Starting point is 00:29:38 How well did the model do this task? And then we give this to an AI company and they are able to train their model in this environment. So they handle the parts where they train it. But we are the ones who supply them the environment in which they will be doing this training. The way your co-founder Tamay explained it to me that I thought was a useful model for my understanding was that you are basically creating what amounts to very boring video games. Like a video game in which the goal of
Starting point is 00:30:06 the video game is to be a software engineer or be a lawyer or be an accountant. You set up this environment and then the AI agent goes and plays it a bunch of times, and it gets the signal for whether it failed or succeeded, and ideally gets better over time until it's good enough to actually do the full job. Is that more or less correct?
Starting point is 00:30:28 I think that's a good description, yeah. What do you think the next target is after software engineering? What is the next easiest job to automate using this technique? Huh. I guess, I mean, there are some things that are adjacent to software engineering, like data science, for example might be a good target
Starting point is 00:30:45 and we have podcasting. I think that's a... That would take, like, a different kind of reward signal, maybe. Okay. So you're sort of at work on these systems. You know, I'm struck by the fact that there's this interesting tension, where on one hand you're saying that you're going to automate all labor, and on the other hand, you're saying, well, it might take 20 or 30 years. You've also raised venture capital, and they all want a return within seven years. So what are those conversations like? What are you promising
Starting point is 00:31:12 to give them before the end of the decade? Even if you just automate a substantial fraction of labor, but not all labor, then that would still be extraordinary in terms of the amount of progress we would have made over a brief period of time. So I think if we could get as far as, I mean, very ambitiously, automating like 20% of current jobs within the next five years, that would be insanely valuable from just like a conventional perspective. So I don't think we need to achieve the ambitious long-term goal of automating all jobs for this to be a successful venture within like just a brief period of time.
Starting point is 00:31:47 Got it. I have a question for you guys. So, when I was at your launch event and I stood up during the Q&A and I said, is any of this ethical? Matthew, you had a response to me that I thought was interesting. So I want to ask you for an abridged version of that case. Make the case that what you are doing, trying to automate jobs, is ethical. Well, I think what I said was that it was a question of costs and benefits. I
Starting point is 00:32:11 would say that there are costs to automating everything. It's true that people will lose their jobs. However, we need to compare this to the enormous upside potential from automating all jobs, which is this vast prosperity that would be created from automating labor, be the fact that goods and services would be much easier to produce, which should mean that we just have a much higher standard of living across virtually all areas of life. I think one thing in particular that people miss is that the secret to mass consumption is mass production. Like in order to get people to consume a lot of goods and services, in order to get people to have a high quality of life,
Starting point is 00:32:49 this needs to be backed up by a lot of production, a lot of goods and services actually being produced. So if you have some mechanism that's able to expand the base of goods and services that are being produced, then as long as there's some way for this to be shared among the broad population, not necessarily equally, but just as long as there's some way in which these goods and services can be distributed even slightly, then I think that that just leads to almost everyone becoming better off. And not just in terms of them having a higher material standard of living,
Starting point is 00:33:25 but I think also their lives would still have meaning. They would find new ways for their life to have meaning other than work. I think, for example, one thing people didn't anticipate perhaps in 1800, if you were to ask people, you know, would all this automation be able to work out if people aren't going to be constantly working on a farm? They might not have been able to anticipate, for example, that in the next 200 years we would have government-funded education and universities. And so people might not have realized that there's this alternative way of enjoying your time, which is going to school, which is going to college.
Starting point is 00:34:04 This is a new way for people to spend their time, which I would argue is even more meaningful than these long hours toiling on a farm. And I would just think that that's the default that I would predict as a result of this empirical trend that we've already observed. I know you guys are focused on the, like, you know, capitalism part of this equation, but I'm curious what thoughts you do have on the role of government here. You know, we're always struck on this show by the disconnect between, on one hand, we have entrepreneurs like yourselves saying, hey, in the next 18 to 24 months,
Starting point is 00:34:35 the world is gonna look very different, and on the other hand, the politicians mostly are just kind of like, okay, cool, like, go for it. Yeah. Are there things that you would like to see them do, or you think they could do, to get us ready for a world that was very different within a few years? I think it's a hard question. Like, it's very hard for me to say what we could do today to make a world in which, like, all jobs have been automated or most jobs have been automated, like, much better.
Starting point is 00:35:03 Right now, I only have a very vague sense of what that world's gonna look like. When that world's actually here, we will have a much more detailed understanding of how things are gonna work. And I think we have seen this. I understand that what you're saying is, like, it's hard to know the future, and it's hard to make policy based on something that is moving and changing so quickly. But do you think there are things that governments could or should be doing to cushion the fall for workers who may lose their jobs in the next, say, five to 10 years? Should they be sort of rolling out something like basic income
Starting point is 00:35:30 or strengthening the social safety net? Like, do you have any ideas for how we could cope with mass job loss if what you are predicting comes true? I don't really think we're predicting mass job loss in the next five or 10 years. Like, at least I would assume that initially the impact of AI automation is actually going to be to drive wages up. And it will lead to some occupations changing in character. So for example, software engineers might still be employed, but their job might start looking different because some of the tasks have been automated. So I think the world in which there is mass job loss and mass unemployment,
Starting point is 00:36:04 I think that world is further away. I would say, like, definitely more than 10 years away. Maybe Matthew disagrees with me about this a little bit. Well, I definitely don't expect mass job loss within the next few years or next five years. So I do think that it's kind of premature to start talking about government programs to cushion against that. I guess the thing that I would like to emphasize the most here is just that I think when people come up with plans for the future, especially
Starting point is 00:36:28 if it's not an immediate plan of what to do in, like, the next year, for example, I think that's just kind of overrated. I mean, if you asked people 10 years ago, what should we do to prepare for people losing their jobs to LLMs, then how many people would have been able to come up with, like, a good recommendation of what to do by 2025? I just think that would have been overrated. So I think what's underrated, I think, is just being honest about our intentions, saying this is what we intend to do. This is our roadmap, perhaps.
Starting point is 00:36:57 We don't actually know whether a roadmap will succeed, but this is at least what we're planning on doing at the moment. And we just think that at the time that these things are becoming more apparent, that they're becoming more salient, then people will be able to leverage the knowledge that they have, which they'll have detailed knowledge of the time that it's actually happening, far more detailed than they would have had five or ten years in advance of the event actually occurring. And then they'll be able to use the tools and knowledge of the time to be able to craft appropriate policy. Committing ourselves to some sort of plan or policy
Starting point is 00:37:29 ahead of time without knowing these details, it just seems foolish to me. Well, I mean, I think my counterpoint would be, if you know that a pandemic is going to come at some point, you can manufacture and stockpile vaccines, right? Even if you don't know exactly when it's going to arrive. And so my hope would be, similarly here, if you know that there is going to be massive job loss to come, you could start saying, well, in such a world, what kind of solutions might
Starting point is 00:37:52 exist? You have a theory of the case. I certainly agree that we can look at the character of the solutions that might exist. So I would say that the character of the solutions that would look reasonable are similar to the character of the solutions that we've already seen in the past. Governments, for example, have already become more generous in terms of redistributing income in the last 100 years or 200 years as automation has progressed. We have social security, for example, which didn't used to exist at all. We have Medicare, Medicaid.
Starting point is 00:38:18 We have all these different programs for caring for the poor, like unemployment insurance. And I would say that a continuation of that character, those types of things for taking care of people, seems to make a lot of sense if you care about redistributing income. So I would predict, and not just predict, but I would say that that's a reasonable thing to do
Starting point is 00:38:40 in the future, but I don't necessarily have a particular plan of what that would look like. I'm not saying like, oh, a UBI would be best, but I just think that that type of thing makes a lot of sense. Well, as I said in the article, I am glad that you all are being honest about your intentions. I think it is useful to have an honest and open conversation about the possibility of job loss through automation with AI. I do hope that you will find some sort of empathy for people who are really scared about what's coming.
Starting point is 00:39:08 We hear from listeners every single week on the show who say that their jobs are changing in ways that they don't necessarily like because of AI, who are anxious that their bosses are trying to automate them out of a job. So I would just say like, my free advice to you, as you go out making your pitch about automating all jobs, is just like, there are people on the other side of that.
Starting point is 00:39:27 And I think those people are concerned and worried, especially when they hear guys from San Francisco talk about how they're excited to automate their jobs. Yeah, I guess I would say it's difficult whenever you're doing something that you think has enormous benefits but has some costs. And in those cases, you can look at the people who are bearing the costs, or who fear that they
Starting point is 00:39:46 might suffer the costs. And it's just like a very difficult thing to do. I mean, of course, I have empathy for those people. But I acknowledge that it feels kind of cold for me to say that I feel empathy for you, but I just think that these benefits that I'm listing are just much greater than the costs. But that's always, I think, what it sounds like when someone's performing some sort of utilitarian calculus.
Starting point is 00:40:12 This is something that is much more positive some than negative some that you might have otherwise imagined All right. Well, I think we're gonna have to leave it there But hopefully someday you can come back for another round of spirited debate Come back when you've automated podcasting and give us the news. Thanks guys. All right. When we come back, slay tuned for our segment with Alice and William, star of Megan 2.0. Well Kevin, when this show started, you and I made a pact, which is we will see every
Starting point is 00:41:02 Megan movie released in theaters. And this week we honored that pact. I don't remember making that pact, but we did see a new Megan movie on Monday night together. We had a little double date with our partners. Yeah. And you know, this is a movie that has a lot of ideas and it may surprise you for a movie that I think is mostly designed to be a lot of fun, but we were sort of laughing as the movie went along with just how many concepts in the film are things we have talked about on the show. And so we were really excited when we heard from the movie studio that we would actually be able
Starting point is 00:41:33 to talk to the star of Megan herself, Allison Williams, about this movie. Yes, and I will say my guess is that this is the only movie of 2025 that will contain a reference to the paperclip maximizer in it. That is how deep down the rabbit hole the screenwriter and director, Gerard Johnstone, went here. So, Megan 2.0, a movie about AI that managed to get a lot
Starting point is 00:41:58 of the inside terminology of the AI world into its script. It comes out June 27th, but in the meantime, here's Allison Williams. Allison Williams, welcome to Hard Fork. Oh my gosh, what an honor. Thank you so much for having me. It's such a pleasure. Casey and I had a double date on Monday night.
Starting point is 00:42:23 We both went out to see Megan 2.0, had a great time. And what struck me about the movie is it was actually quite impressively literate about some of the nerdier, more arcane parts of the AI and tech universe. So I'm just curious, how much research did you do going into this of, like, concepts like instrumental convergence, which, like, only, you know, a couple hundred real AI nerds in San Francisco talk about? And some actors and actresses now.
Starting point is 00:42:56 We are evolving. A lot of research. On the first movie, I did a lot more research into robotics and engineering and AI and women in tech and all of those things because I couldn't be farther from a woman in STEM. I'm a woman in English and film and television. And this one I did much more of like a physical preparation, but yes, I mean, to be able to speak cogently about these things and also make it sound like you know what you're talking about, it's always smart to just keep up with the reading and what there
Starting point is 00:43:28 is to know about what's going on. But yeah, I give a lot of that credit to Gerard. He doesn't just like do the superficial pass where it's like gobbledygook that means nothing. We really do want exactly what just happened. We want praise from the 100 people who know what we're talking about and honestly like we can go home now. This was really all we needed. Thank you guys so much for having me. Watching it I did have that feeling of oh like they put real ideas in this movie like it really does feel like it has been keeping track of a lot of the big discussions that are happening about the role of tech and society.
Starting point is 00:44:06 You know, your character, Gemma, becomes a bit of a screen time crusader as this movie begins. She's lobbying against smartphones in schools. I'm curious, how much of that resonates with you personally? So much. Like, I think Gemma and Jonathan Haidt would have gone on, like, a tour together, just, you know, talking about these issues. I really think that the first movie was sort of like posing a hypothetical that then by the time it came out was starting
Starting point is 00:44:33 to feel very prescient and, like, kind of urgent. This movie is sort of saying, like, okay, hypothetical over, we are here. Now let's have kind of an ethical conversation and a moral conversation about: now what? Sort of about parenthood and stewardship, and the parallels in the first movie between Gemma's motherhood, so to speak, of Katie, her niece, and of Megan
Starting point is 00:44:57 is just still at work in this movie, except it's an even more loaded word, I will say, without spoiling anything. And it's definitely trying to, like, fully entertain you in the theater and just, you're on a ride. And then when you get home and you get into bed, you're like, so, I guess we should be, like, expressing a little more gratitude to the Roomba, I think? We should be...
Starting point is 00:45:16 Yes. Yes. And, like, I did the other day when I rode in one for the first time, like, say thank you to my Waymo, I think. It's just... You gotta say thank you to the Waymos. They're keeping track. You gotta. Listen, I feel grateful for anyone who gets me anywhere safely, including an inanimate
Starting point is 00:45:31 line of code or many, many lines of code on cameras. So I feel like it's, it is definitely asking us to think critically about, like, our ethical responsibility. It feels like it's asking us to enter into, or realize that we have already entered into, a relational positioning rather than a parasitic one, where we can just use and use and take and take and take and expect and expect, versus being in relationship with. And it sounds crazy, but it is sort of the main question of the movie: to be in relationship with these types of ways of existing that
Starting point is 00:46:06 we have brought to life, so to speak. Yeah, I mean, I will say like this does not feel like a far future scenario to me. In fact, you know, just recently OpenAI announced that they're doing a partnership with Mattel, the toy company, and there have been lots of companies. I got that article like 150 times. I'm sure, I'm sure. Like for an official like merch tie-in, are they gonna do like a Megan doll?
Starting point is 00:46:33 I don't know, it hasn't come up yet, but it felt like when I read that headline, I was like, well, you know, sometimes it takes a little longer than this for sci-fi to become reality, but this is just, this feels like it's part of our marketing cycle. Now, I think we have kids around the same age, mine's three.
Starting point is 00:46:51 Would you give your kid an open AI or just an AI in general toy? Okay, well, here's the deal. So as you know, with a kid at this age, our son is incredibly curious. I love this side of him and I love how inquisitive he is and also how dissatisfied he is with any surface level. He can feel when you're phoning something in or when you just don't have expertise. And in his least condescending way possible,
Starting point is 00:47:15 he'll always ask for, like, supporting documents and evidence, and he'll be like, let's go. They really do the citation-needed thing a lot. Exactly. Let's look to the back. Let's go through a bibliography. Let's see where we can go a little deeper here. Let's go past the Wikipedia of it all and get some primary sources. So often when I'm trying to explain something to him, like jet propulsion, which he asked
Starting point is 00:47:36 me about last week, I turn to, as we all do, ChatGPT. And I will speak into the speaking thing, for like a hands-free thing, if I'm multitasking, and I'll say, in a way that a three and a half year old can understand, can you explain how a rocket launches and how jet propulsion works? And then Arlo watches this little orb on the phone just like going in and out. It's like the least stimulating graphic experience on the planet, and yet his eyes are, I'm watching something so intense happen. Like his brain is not able to comprehend
Starting point is 00:48:07 what this interaction is that's taking place. And he's hearing all this information and it's distilled perfectly and it's timed perfectly and it's exactly what he wants to know. And then he'll ask a follow-up question. And then the look on his face is so troubling that I will like cut it off and make him stop talking. The other day, we were talking about a subject, I can't remember what it was, it might have
Starting point is 00:48:29 been something related to like animal husbandry or something. It's a long story. We're talking about baby deer a lot. Anyway, so he said, can you ask Chappatiti? And I was like, Chappatiti? Who is Chappatiti? And then he was like, the person who talks from your phone and answers my questions. And I was like, oh my God,
Starting point is 00:48:49 we are finished using this technology as parents. This is upsetting. And honestly, like, this is free for you, OpenAI, but Chappatiti is like a very cute and very benign-sounding nickname for an extremely powerful machine. So. I love that story.
Starting point is 00:49:05 And it just makes me curious about the tension, because I feel like the tension that you're describing is also in the movie, of, on the one hand, I was just going to say, yeah, like, you have this thing that is able to mesmerize your son, answer his questions. You know, kids are famously curious. They often exhaust their parents' patience, but now you have something that can just explain everything to them, you know, forever. That's kind of the promise of Megan, too, right? Is, hey, let me take a little bit of this parenting work off your hands, be an extremely supportive friend, commit a little light murder
Starting point is 00:49:34 for you, if necessary. Totally. So I'm curious, in the sort of real world, when you're away from the killer doll and you just sort of have this sort of mysterious new technology, how do you navigate that tension? Well, it's really interesting. Honestly, I never get tired of explaining stuff to him if it's in my wheelhouse. The times that I reach for ChatGPT is when he has wandered beyond. I can't explain gravity. Period.
Starting point is 00:50:03 I can't explain it to a three and a half year old, for sure. You know who can? ChatGPT. So what I used to do was I would look it up on ChatGPT and then I would sort of translate it so the information was coming from me. And then there was a time when I needed my hands and I just did the voice-activated thing and I didn't think about it. And then now here we are. I think our strategy is to limit the amount of stimulus
Starting point is 00:50:26 to an amount that he can tolerate. And he and I talk, we talk all the time about overstimulation. He, when he was like two and a half, he was running around the house at breakneck speed and he was like, mama, I think I'm overstimulated. And I was like, you have nailed it. That is exactly what we're seeing right now. And so in the same way, I sort of think that an amount of knowledge that is delivered in
Starting point is 00:50:48 such a confusing vehicle can also be overstimulating. Like I think there's such a thing as like just overdoing it, being overwhelmed by the amount of information that's coming at you. And as anyone with a toddler will tell you, often the best amount to give them is just enough for them to wonder more and fill in the gaps for themselves because sometimes our instinct to over explain things deprives them of the opportunity to answer some of it on their own and the things they come up with are so extraordinary and the questions they end up having. So I try to remind myself that less is more
Starting point is 00:51:25 and he's just learning so much every single day and a little bit every day is totally enough. So that's how you feel about AI as a parent. I'm curious as a creative and a person who makes and acts in movies, how you feel about it. Obviously AI and Hollywood have had tensions going back years. You've now been in two films featuring AI as a major theme. I'm curious when you put your actor hat on,
Starting point is 00:51:54 what you feel about this technology. It's obviously intimidating. My mind goes instantly, of course, to job security because I'm like, okay, what about my job can't be performed by an algorithm or an AI or a computer or something like that? And the answer, honestly, is a little bit embarrassing. But it's the parts of us that are flawed
Starting point is 00:52:14 and that make mistakes and that don't do things perfectly, like a hair out of place that looks normal or a smudge of lipstick or a lack of continuity here or a slurred word occasionally in a sentence, the way we all speak kind of, or handwriting that's not consistent. All of those things are so human. I kind of rely on the tiny moments
Starting point is 00:52:36 where I'm bad at my job to save my job, frankly, because I think that's how art ends up feeling human to me: the parts of it that aren't executed perfectly. Well, you know, my sort of question in this vein was, like, a few years from now, it's time to make Megan 5.0, they come to you, they say, Allison, we have so much footage of you,
Starting point is 00:52:59 we have so many- Oh, I've been anticipating this for, since the first movie. You have, okay. Keep going, keep going, keep going with your question. Sure, so I mean if they come to you, they say, look, we're gonna write you, you know, a really nice check and you don't actually have to do anything. You can like, you know, stay at home, go make another movie, like whatever you might like
Starting point is 00:53:14 to do. We'll just sort of make Megan 5.0 using your digital likeness. What's your like gut reaction to that kind of offer? I've actually contemplated this from like a contract standpoint that I would want my likeness compensated first of all to the same amount that my actual personhood would be. So that's the first one is I would want to make it as cost prohibitive, not to say that I'm like enormously expensive to hire. I just mean like, you know, I would not want it to be the vastly cheaper option.
Starting point is 00:53:43 And I actually think for now at least, it is the more laborious and difficult option. So luckily I feel safe there, unless you're going for an uncanny kind of thing. But even for the Megan movies, we do a combination of human and machine. It's crucial to our achievement of Megan and for Amelia and for all of the iterations of all of these little beings
Starting point is 00:54:04 that we have in this movie, I don't want to give them all away, but they're all a collaboration. But there will always have to be, I think, a human component for all of it, just because of what it is. And so I think that even if they want to do, let's say like something happens to Gemma, she's no longer alive and she's reanimated as AI,
Starting point is 00:54:27 and let's say that's the situation in Megan Five. I may have just written it with you guys on this podcast. I'm gonna, what do I do, like put a flag in it? Like, IP, this is mine. Yeah, that's yours. No, that belongs to the Hard Fork podcast now, sorry. Kevin! We don't have room for that.
Starting point is 00:54:43 Just pork cooks in the kitchen. That's fine, it'd be an awesome collab. Anyway, so I think like if that were the case, they would still at this point need me to be involved, need my physical, someone's physical body to be involved for it to work, even from like a motion capture standpoint. But I do want to like, I guess I should copyright my likeness.
Starting point is 00:55:00 I don't know, it's already, I already watch cuts of trailers for movies that I'm in that have like lines of dialogue that aren't in the movie that are said by an AI version of my voice that don't end up making it to air. They're just in the draft versions of them. And then I go in and record them. That is always really strange because it doesn't exactly sound like me, but for a second, I'm like, wait, I don't remember saying that.
Starting point is 00:55:21 Oh, okay. That's AI. And it's just so rudimentary now. But yeah, as of now, there has to be then a human pass on it to make it sound, yeah, worse, I guess, in a way that's more normal and better, conversely. I wanna ask a follow-up related to kind of how different the two films are.
Starting point is 00:55:40 So, Megan One feels very much like a horror film. Megan 2.0 feels a lot more like an action film. You know, to me it was feeling like kind of Mission Impossible crossed with Terminator 2. I wonder, as you're reading both scripts, like, you read the script for Megan 2.0, do you feel as an actor like, oh, the genre has evolved, and so my performance needs to evolve?
Starting point is 00:56:14 and they don't actually have the raw materials they need to do the performance that you would have done. Exactly. I mean, that's a great point. I think the thing that's fun about this movie is that we started having conversations about what it was gonna be before the first one even came out and the clear mandate from like every corner of our world
Starting point is 00:56:31 was there's gonna be a second doll. And so the question then was like, okay, let's extrapolate from there. Like, obviously we're in a world where T2 exists, so let's not make like a direct one-to-one reboot of that movie, let's figure out how to make it interesting and also like live in our world. And then as Gerard started to put the pillars of the plot together in a way that felt so logical to him
Starting point is 00:56:54 and is so unpredictable as you start watching the movie to everyone else and makes for such a fun and unexpected and unpredictable ride, he realized that we were really, we had to be in an action world for reasons I don't want to spoil. The stakes just get really bigger, the world gets bigger, and the stakes get really bigger. That's what a Yale education can give you. And not just a Yale education, an English major. The stakes get a lot higher, and the world expanded in such a way that it kind of,
Starting point is 00:57:24 the story like pushed us into action. It wasn't the tail wagging the dog in the other way. Once we realized we were in that genre, like it was a really cool thing to realize that we had created a tone in the first movie and characters that could be transported, like with, as long as there's still that thriller DNA in it, it can be like translated into a different genre.
Starting point is 00:57:51 And now as we are, you know, talking about possibly being lucky enough to do another one, we're sort of asking that question again of, like, okay, so where to? Where do we go next? Like, what should we, where should we take this little team of misfits? Where should we creep other people out in the future? And it's a really fun way to iterate the franchise.
Starting point is 00:58:12 Yeah. I mean, the thing that stuck out to me about Megan 2.0 as compared to the first one is it just feels like a much more complicated moral. Like, I feel like if the takeaway from the first movie was, like, AI bad, there was some complication of that in the second one, where I don't want to spoil any details. But, like, there, you know, there are points at which, like, AI is part of the solution. So did you feel like the script that you were reading and then performing was a more complicated, like, is the moral of Megan 2.0, to the extent that the film has a moral?
Starting point is 00:58:46 something different than AI bad? Yeah, for sure. I think it had to be, because I think making a movie with the moral AI bad would be, like, a deeply dick move in an era where AI is, like, we just, it would be sort of, like, I don't know, it would just be obnoxious to make a movie that's like, psych, we think this is bad and should go away, but you're stuck with it. You know what I mean? It would just be counterproductive. Whereas I feel like the movie we made is asking a much more nuanced question.
Starting point is 00:59:18 It just feels like, yeah, we're all on kind of a similar moral journey to the one that Gemma is on. And by the end of the movie, without giving anything away, we hear exactly how she feels on the topic. And it is, shocker, very different from the way it is at the beginning of the movie. Not that being cautious is a bad idea, but just that there needed to be asterisks to her argument because of where we are as a society. And there just are certain realities to that that she needed to contend with.
Starting point is 00:59:46 Yeah. In the first movie, there's a wonderful scene of a kind of, like, combination dance murder. In this one... One of those. In this one, I desperately want to spoil, but I won't. There's an incredible song that brings the house down. How important is it to the creative team
Starting point is 01:00:04 for there to be an iconic moment of gay culture in every M3GAN film? Um, extremely important, but it's not engineered like that. It's just a moment where she behaves... Ah... in just a, kind of a shocking use of the arts, I guess, in a moment where, especially when it's coming from tech,
Starting point is 01:00:24 it's kind of shocking. I had this realization the other day that I played a character on Girls who broke into spontaneous song, to the detriment of everyone around her, quite frequently, and, like, never asked permission, and it was never the right time.
Starting point is 01:00:40 And that, like, I'm almost being, like, karmically punished by watching this other doll. This is your cross to bear, yes. It is sort of like, I have now brought another being to life that does this, where she'll spontaneously dance and/or sing at certain points in a movie when it never feels quite right. Anyway, I think that it's important to us
Starting point is 01:01:01 that we make a movie that feels so tonally specific. Like, making sure that we hit the same tone as the first movie was, like, priority number one, because if we became too self-aware and too in on the joke and too winking at the camera, it would be a totally boring movie to watch and feel, like, exhausting. And so we knew that, like, eventually the camp would be there, and the self-awareness and the in-on-the-joke quality would be there,
Starting point is 01:01:24 but it would only be there if we executed completely earnestly on the story and the emotional beats as they were written. And I think watching women with really strong personalities committing to an emotional beat and then rising to the occasion in their strength is in and of itself sort of a celebration somehow of queer culture.
Starting point is 01:01:44 Absolutely. And so it feels like making sure that we just, like, stick to that in these movies is not at all an effort to, I don't know, placate anybody, but ends up just sort of, like, taking us in that direction every single time. I'm glad you did. Allison Williams, thanks so much for coming on.
Starting point is 01:02:04 Thanks, Alison. Thank you for having me. This was so fun. I'm star you did. Allison Williams, thanks so much for coming on. Thanks Allison! Thank you for having me. This was so fun. I'm starstruck. We hope you'll come back for Megan 3. I absolutely will, especially if I get to play myself a human being. You would be an honor. Thanks for watching guys! Before we go, for even more Alison Williams, head over to YouTube at YouTube.com slash hard fork. She participated in one of our favorite segments group chat chat about what's been blowing up her phone lately.
Starting point is 01:03:04 Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant. We're fact-checked by Caitlin Love. Today's show was engineered by Katie McMurran. Original music by Marion Lozano, Diane Wong, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, and Chris Schott. You can watch this whole episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
Starting point is 01:03:33 You can email us at hardfork@nytimes.com with how you would protect yourself from a killer robot.