Offline with Jon Favreau - Sam Altman's Big Little Lies

Episode Date: April 11, 2026

New Yorker journalist Andrew Marantz joins Offline to break down his new investigation into Sam Altman, the CEO of OpenAI, the maker of ChatGPT. Over the course of hundreds of interviews, including over a dozen with Altman himself, Andrew and his coauthor Ronan Farrow unveiled a leader who tells people exactly what they want to hear, whether or not it’s true. Just like the AI model he created! Jon and Andrew discuss the contradictory narratives coming out of OpenAI, whether they could build portals that summon aliens, and how Altman’s resolve to go “founder mode” means he may be headed down the same well-traveled path as many tech oligarchs before him.

Transcript
Starting point is 00:00:00 Offline is brought to you by Indacloud. April's funny. Half the internet is talking about spring cleaning. The other half is already planning their 4/20. Wow. That's where Indacloud fits in. Indacloud is your fully legal online cannabis dispensary for gummies, exotic flower, premium pre-rolls, and zero-sugar THC. A clean, alcohol-free way to relax without throwing off tomorrow. Everything available is federally legal hemp THC, lab tested, and shipped discreetly to your door. And this month, new customers get 40% off all month long with their biggest sale of the year. Sleep gummies for nights that actually restore you. Zero-sugar THC sodas for social plans without alcohol. Premium pre-rolls for intentional wind downs and $70 ounces for consistency that feels sustainable. Boy. We love Indacloud. Yeah.
Starting point is 00:00:47 It's great. It's great to have a wind down. I like an intentional wind down. I love it. You know it's intentional because I take the gummy. Yeah. Listen, honestly, in a pinch, I'll take an unintentional wind down. I just want to wind down.
Starting point is 00:00:59 I want to get down. I want to wind it down. I'm up. I want to be wound down. And that's what Indacloud can do for you. That's what it can do for you. If you're 21 or older and a new customer, go to indacloud.co, that's dot co, not dot com, and use code offline for 40% off all month long, shipped discreetly to your door, plus free shipping on orders over $50 and $30 in free gifts on qualifying orders. That's indacloud.co, code offline for 40% off all month long. Don't forget to fill out the quick survey when you order to support this show. As always, please enjoy responsibly, and mega thanks to Indacloud for supporting your 420 plans this year. Another person who told us that this is probably a bubble is Sam Altman, who has said multiple times that he thinks it's a bubble and that someone is going to lose a phenomenal amount of money. I believe that's a direct quote. So, yeah, I worry about the potential for a bubble here.
Starting point is 00:01:59 I'm Jon Favreau, and you just heard from today's guest, The New Yorker's Andrew Marantz. Andrew, along with a fellow New Yorker journalist you may have heard of, Ronan Farrow, just published an incredible, expansive investigation about one of the most important figures in tech, Sam Altman, the CEO of OpenAI. Over the course of hundreds of interviews, including over a dozen with Altman himself, Andrew and Ronan unveiled a picture of a leader who is widely distrusted by the people who have worked with him closely, and who tells people exactly what they want to hear, whether or not it's true.
Starting point is 00:02:34 Just like the AI model he created. Andrew and Ronan raised the question, can the man behind the most influential artificial intelligence company in the world who's going full steam ahead on a potentially civilization-destroying technology actually be trusted? I'm sorry to say, the answer will not make you feel better. I talked with Andrew about the contradictory narratives coming out of OpenAI, why this is so much more complicated than good guys versus bad guys,
Starting point is 00:03:00 and how Altman's resolve to go "founder mode" means he may be headed down the same well-traveled path as many tech oligarchs before him. We'll get into that conversation in a moment, but before we do, please consider becoming a Crooked Media subscriber if you haven't already so that you don't miss out on any of the great content we're putting out for our friends of the pod. Subscribers get our new extra episode of Pod Save America called Pod Save America Only Friends,
Starting point is 00:03:22 other subscriber-only shows like Polarcoaster with Dan Pfeiffer, access to all of our excellent Substack newsletters like Pod Save America Open Tabs, ad-free episodes of all your favorite Crooked pods, and you get to feel good about supporting one of the few independent, proudly pro-democracy media outlets left in Trump's America. So head to crooked.com slash friends and subscribe. Here's Andrew Marantz.
Starting point is 00:03:51 Andrew, welcome back to Offline. Thank you. Always a pleasure. I want to talk to you about your big Sam Altman piece in The New Yorker that you wrote with Ronan Farrow. You and Ronan spent 18 months reporting this piece. You sat down with Sam Altman, I think, more than a dozen times, and you got access to hundreds of pages of internal memos and documents.
Starting point is 00:04:12 And, you know, on one level, it's a story about the internal drama of a company where people no longer trust the guy who runs it to the point where multiple people described Altman to you, unprompted as a, quote, sociopath. But this also happens to be one of a tiny number of companies building a civilization changing and possibly. civilization destroying technology. So I guess my first question is after spending 18 months on this, what if anything changed for you personally in terms of your perspective on AI and the people building AI? Yeah. I mean, this is a really critical backdrop for this, right? Because you know, all people who are at a certain echelon of power and wealth deserve serious scrutiny. but I don't think I would have been that interested in this level of individual scrutiny for someone who, you know, was the CEO of a really big, you know, transportation for structure. Yeah, exactly, like, or a shoe company.
Starting point is 00:05:17 I mean, this matters because of the structural impacts of AI specifically. And so there's a lot we can get into about Sam Altman, the person, the personality, the persona. But the reason this matters at all is because I think AI, really matters. And I think I see a lot of people who are worried and scared and therefore want to put their heads in the sand and say, it's a parlor trick. It's a trick of the light. It's not real. It's hitting a wall. It's stochastic parrots. It's whatever. I don't think that is tenable anymore. Like, I just don't think we can sit this one out as a society. And so I think we need to bring serious scrutiny to bear on the people who are building it and on just like knowing what the thing is to
Starting point is 00:06:04 the extent that anyone knows, including the people who are building it, because this is not like a news cycle that you can just sit out. Like AI is part of, you know, weaponry at the highest levels of the military. It's part of surveillance. It's part of basic transportation infrastructure and weather prediction. It's, you know, liquefying our brains with slop. It's contributing to what experts call human enfeeblement, which is basically like the more you outsource to LLMs, the less you're able to think and write and perceive the world. So like these things are happening whether or not you think that you should spend time worrying about the more sci-fi scenarios where it kills us all. And by the way, we can get to this. But I think the sci-fi scenarios where it kills us
Starting point is 00:06:47 all are also worth worrying about. Yeah. Did you leave the reporting more alarmed about what, where we're headed? I did. Yeah. I did. And this is not just, again, this is not just an open AI thing or a Sam Altman thing. I think before I really started reporting on AI in earnest, I kind of thought, you know, of course, like, nerds are going to nerd. And like, you know, sci-fi people are going to sci-fi. And like, yeah, everyone has some apocalyptic fantasy about how their generation will be the last one ever on Earth. Yes. And there's definitely truth to that. I mean, there are these narrative things that. you know, in the nuclear age we get Dr. Strange Love and, you know, now in the age of AI we get AI dystopian fantasies. And it's even weirder than that because the AIs are trained on data
Starting point is 00:07:38 that includes dystopian sci-fi. So they themselves start spitting it out sometimes. Yeah. So I'm not sitting here and saying like the SkyNet scenarios are likely. But the more I looked at this stuff, the more I kind of understood what the arguments are from the people who are really worried and they were not all arguments that I could immediately refute. And so I think the fact that you now have members of Congress on the left and the right, you know, saying let's take these nerds kind of more seriously than we did. It's not incidental. I think it's because they're actually listening to the substance of the arguments for the first time. And even though the arguments might be hypothetical and even though they might be technical, they're not ones that you can just immediately
Starting point is 00:08:21 bat down without giving them serious thought and without actually trying to regulate. or control our way out of it. Yeah, and the other thing is we talk a lot about the technology itself, but you can't divorce the technology itself from the people who are building it, and then the people who are in charge of it, and the people who may or may not regulate it in the future. Right, I would place my money on may not, but we'll see. Right, but it seems like the entire, the governance structure of AI,
Starting point is 00:08:48 in the broadest sense, not just from actual governments and politics, but from what's happening at these companies seems critical here, which is what your piece gets into with regard to Sam. So I just want to get into a few of the bigger revelations in the piece. I thought one of the more damning revelations is what happened with the allegedly independent investigation of Sam Altman after the board fired him in 2023 for essentially lying to them. And so Altman sort of engineers his own return a few days later. And one of the conditions of his return is this outside investigation led by Wilmer Hill.
Starting point is 00:09:24 which is the same firm, law firm that investigated Enron. A few months later, OpenAI announces that the investigation has cleared Altman, but there's no written report. Nothing's made public. That's it. And a board member told you this could prompt a need for another investigation. Has anyone reached out to you guys since the publication? Anyone in the Delaware or California AG's offices?
Starting point is 00:09:49 Or do you think there's an appetite for a real investigation now, or do you think that chapter is closed? Yeah, I mean, we, I think, really nailed down and report for the first time that there was never a written report because it appears that a report was never written. And it seems from all of our reporting that that was intentional, that, you know, the goal seemed to be to clear Altman, or at least that if that was where it was heading, you know, a lot of sources told us like, well, then why should we create a paper trail that could create complications for us if where we're heading is to exactly. generate him. And this gets to sort of one of the persistent patterns that comes up in the reporting of this piece, which is, you know, everyone knows that Sam Altman was fired in late 2023 and everyone knows that he came back. What people didn't know before we got our hands on all these documents, and by people, I mean not just the general public, but like Microsoft executives,
Starting point is 00:10:49 like investors, Open AI employees. There was a ton of confusion. at the time of like, why is this person being fired? Like, what did Ilya see became the meme around Silicon Valley? Because Ilya Sutskiver was the co-founder, member of the Open AI board, who kind of became the swing vote in the firing. And we have now reviewed a lot of documentation, including the full memos that Ilya Sutskiver sent to the board, backing up why he thought Altman should be fired. Lots of other notes that were kept by Dario Amadeh and other employees.
Starting point is 00:11:22 Also, some employees who have left and have gotten out of the game are not part of rival companies, but who are just sort of concerned citizens or whistleblowers. And what it all redounds to is basically, like, if it had been one really simple smoking gun that you could have put in a tweet, we would know about it by now, right? The reason that this remains mysterious on some level is that it wasn't one thing. It wasn't like Ilya walked in on Sam strangling a bunch of baby kittens and was like, you know, this guy needs to go, right? Normally, when you fire a CEO, it's because of a pretty clear, bright line pattern of
Starting point is 00:11:59 behavior. In this case, what we document, and the reason it took such a long, a meticulous process and piece, is it's kind of this accumulation of small details where people feel that he's telling mutually contradictory stories to different sets of people, both inside and outside the company. He's telling people what they want to hear. These are the allegations that one hears, and honestly, any one of them, them in isolation, you might kind of think, like, okay, a CEO who tells people what they want to hear,
Starting point is 00:12:28 like, is that a fireable offense? And it's only over kind of the accumulation of these details that it starts to add up to something. Well, and also, alarmingly, it seems, from your piece and from everything we've seen, that since he has returned, none of that has really changed. None of the complaints or concerns about him have really gone away. He hasn't changed. He's still sort of doing the same thing. Yeah, I mean, if anything, one thing we do, document in the piece is that he's sort of gone more into what's called founder mode in Silicon Valley, which is like, yeah, it's my company and, you know, I'm not going to be as much of a people pleaser anymore. You know, when we talked to him and we actually, you know, did talk to him extensively,
Starting point is 00:13:08 he did kind of cop to this and say, yeah, you know, at certain times in the past, I've been sort of too much of a people pleaser and I've been too conflict averse. And he said, I'm going to work on being less conflict diverse in the future. So if anything, it's sort of more controlled. at the top, which I think it's important to point out, like, this is directly flying in the face of the way that Open AI specifically was pitched from the beginning. Right. You know, there's a way of looking at this that's like, again, wow, so crazy that a CEO has control of his own company.
Starting point is 00:13:40 Like, how naive could you guys be? I think for people who are not inundated with this stuff, it's important to start from the beginning and to remember or recognize the ostensible purpose of all. Open AI. The reason that Sam Altman said it needed to exist was as a counterweight to the big evil mega corporation Google, because AI was such a powerful technology that it couldn't be left to the profit motive to develop and deploy. It had to be in the hands of a small safety-focused nonprofit research lab, which was what Open AI was supposed to be at the beginning, because it could only be built slowly, cautiously, with aggressive support for maximum
Starting point is 00:14:22 regulation and that to do it quickly, to do it in a race dynamic, would be potentially devastating or could potentially destroy or kill everyone on Earth. That was the pitch. And then they just decided, well, we're going to actually have a for-profit company. That did become, actually, while we were working on the story, they made the final conversion. And speaking of Delaware and California, this was challenged in both of those states because their original articles of corporation, their original binding fiduciary duty was as a nonprofit to benefit all of humanity. And you know, you can say those are sort of airy words and, you know, all tech companies sort of say some version of don't be evil, right? But they really said and their employees to a large
Starting point is 00:15:11 extent really believed that the whole purpose was to be different. They had all these different Byzantine corporate structures where they were at first totally a nonprofit and then they were a capped profit owned by a nonprofit and the board of the nonprofit had exclusive control. And they also had this charter where they said, if someone else is developing a safe version of AI before we do, we should merge and assist with that project. Like we should merge our resources into the safe AI project, even if that happens to be at Google or at the U.S. government. So they were saying these things that no normal company in the history of capitalism would ever rationally say, but that's because they weren't supposed to be a normal company.
Starting point is 00:15:54 What did Sam Altman say to you guys about that shift? So we had several conversations about this. And one of the things that comes up is, you know, we didn't realize how much money we would need to get this off the ground. Like, we knew we would need money. Basically, I mean, Sam didn't say it to us in these words, but what's clear from talking to him and from reviewing the documentation is his initial pitch in May of 2015 is to Elon Musk, who was then merely the 100th richest person in the world and not the single richest person. And he says, because AI is so dangerous and because Google is doing it and Google is the bad guy, we need to start a Manhattan project for AI. And we might need up to a billion dollars to do it. Fast forward to now, their most recent round of funding alone was $122 billion.
Starting point is 00:16:47 And we kept having to update that in the piece because we would write in the piece. Their most recent round of funding alone was $40 billion. And then by the time the piece went to revision, they had done another head spinning. Like the numbers here are literally like impossible for a human to conceive of. Yeah. And so to answer your question, this is the story that Sam tells is that, yes, we thought we could be this little David versus Goliath safety lab, but we just didn't realize how compute intensive
Starting point is 00:17:16 and how cost intensive the project would be. And there's truth to that. This stuff, you know, it gets smarter, apparently the more data and training you feed it. And that's really expensive. And you need to build these massive data centers. They suck up a lot of power. You need to cite them somewhere.
Starting point is 00:17:32 So these are all like infrastructure challenges that we're not foreseen at the beginning of this. But it doesn't fully explain how aggressively and how longstanding, according to a lot of private records, the intent to ditch the nonprofit structure actually was. Offline is brought to you by Delete Me. Delete Me makes it easy, quick, and safe to remove your personal data online at a time when surveillance and data breaches are common enough to make everyone vulnerable.
Starting point is 00:18:04 It's easier than ever to find personal information about people online. Having your address, phone number, and family members' names hanging out on the Internet can have actual consequences in the real world. It makes everyone vulnerable. More and more, online partisans and nefarious actors will find this data and use it to target political rivals, civil servants, and even outspoken citizens posting their opinions online. With Delete Me, you can protect your personal privacy or the privacy of your business from doxing attacks before sensitive information can be exploited. The New York Times Wirecutter has named Delete Me their top pick for data removal of services. Someone with an active online presence, privacy is important.
Starting point is 00:18:40 Way too much on there about yourself. If you're online a lot, it's probably more important. info about yourself and people you know, then you even imagine. Have you ever been a victim of identity theft, harassment, doxing? If you haven't, you probably know someone who has. Delete Me can't help. Take control of your data and keep your private life private by signing up for Delete Me. Now at a special discount for our listeners, get 20% off your DeleteMe plan when you go to join DeleteMe.com slash offline and use promo code offline at checkout. The only way to get 20% off is to go to join DeleteMe.com slash offline offline and enter code offline at checkout. That's Join DeleteMe.com slash offline code offline.
Starting point is 00:19:15 Offline is brought to you by OneSkin. You've probably heard us talk about One Skin for their best-selling skin care, but now they're bringing that same longevity science to address hair loss with their scalp serum, OS1 hair. Spring can bring an increase in seasonal hair shedding. Happens all the time. And changes in routine can trigger stress-related hair loss at any time of year. That's right. Yikes. One Skin's OS1 hair serum is formulated to address those concerns at the source. Powered by their proprietary OS1 peptide. This scalp treatment targets the hair follicles to support an environment where hair. hair can feel thicker, fuller, and more resilient. Best of all, OS1 hair is drug-free, delivering effective results without any harsh side effects. Experience the difference of a peptide-driven approach to scalp health, and see why users are prioritizing OS1 hair in their daily routines. Born from over 10 years of longevity research, One Skin's OS1 peptide is proven to target the
Starting point is 00:20:05 cells that cause the visible signs of aging, so your scalp and your hair stay healthy now and as you age. For a limited time, try One Skin with 15% off using code offline. oneskin.co slash offline. That's 15% off 1skin.co with code offline. After you purchase, they'll ask you where you heard about them. Please support our show and tell them we sent you.
Starting point is 00:20:29 I saw the country's plan you report on is pretty incredible. Greg Brockman, the president of OpenAI, allegedly proposed that they play Russia and China and the U.S. against each other, basically starting a bidding war for advanced AI. Brockman half denies this. Yeah, so,
Starting point is 00:20:46 I was going to say, A, how confident are you in the reporting? And B, what does it tell you about how the founders actually thought about humanity benefiting from this technology? So it's actually, we feel really confident in the reporting. You know, it's funny. Like, I think people really are right to be skeptical about any of these industry stories. And especially to be on the lookout for, you know, competitors trying to sling dirt at each other. sort of launder it through the press. Fair.
Starting point is 00:21:19 There are several parts of this story where we really, really try to put pressure on things that seem like they, you know, are flinging, you know, mud at OpenAI so that a competitor like Google or Anthropic or XAI can, you know, benefit from that. And we go to great lengths in the story to kind of tease those apart and try to be fair. Something like this country's plan is not, you know, everyone in the room basically agrees that some version of this happened, and they kind of just recall it differently. Now, to be clear, we are talking about hypotheticals, right? We're not talking about a scenario where they did sell AI to Putin or she, but basically everyone in the piece agrees that some version of a country's
Starting point is 00:22:04 plan happened. And that basically, I mean, people should go read the piece, but basically, in the early days of Open AI, they are all talking about this mission of how when they achieve the most powerful advanced AI ever, and it's kind of the most powerful invention since electricity, they need it to benefit humanity rather than destroying humanity. How will they do it? What does that mean in practice? And they're kind of bouncing around ideas like in a, you know, in a conference room with a whiteboard. And they actually hired someone whose entire job was to make a game plan for like, okay, how did they do it with nukes? Well, they had this whole thing called a Baruch plan and, you know, let's write up a whole proposal about what a Baruch plan for AI
Starting point is 00:22:50 would look like, right? And the allegation is that over time, this kind of non-zero-sum, non-competitive vision kind of morphs into a fundraising pitch, basically, and that then it morphs into, well, what if we, like, sold it to world governments? Now, Greg Brockman denies that that was the idea. He says it was actually, like, something less scary than that, But nobody just denies that this took place at all. These are the kinds of things that were being batted around and that apparently they were also pitching to outside investors, at least one investor. So these things sound crazy on their face because they kind of are.
Starting point is 00:23:29 But it's also like this is how they were talking about it at the time. This wasn't just a public, you know, rhetorical display. This wasn't just like what they put in commercials. This is how they talk about it among themselves. there will be an AGI dictatorship and whoever gets there first will, you know, control the ring of Soron. I mean, these were like routine metaphors that they used in their private correspondence. On the country's plan thing, Greg Brockman does say we were never going to auction this off to evil world powers. So his story is that there was a more collaborative effort that he was envisioning.
Starting point is 00:24:05 But these are all different versions of the way people remember the same set of discussions. What was the argument for the country's plan that is not diabolical and just about like, you know, playing these countries off each other to make money? There were several iterations of it. So there could have been, what we were told is that there could have been a version where it was like trying to make it like mutually assured destruction so that everyone had an equivalent arsenal so that nobody blew each other up. Now, again, I think people who deeply study nuclear deterrence would find some flaws in that analogy. But this is how it was talked about, right? You want to give everyone the nukes. Exactly.
Starting point is 00:24:46 Exactly. I mean, I think, you know. We're pro-nuclear proliferation. Exactly. We like the proliferation. Yeah, okay. I mean, you know, they wouldn't be the first people in history. I mean, the thing is like, in this story, as with all these stories, you don't find people who are sitting there, twirling their mustache and saying, how they'd be evil today.
Starting point is 00:25:05 What they saw themselves as trying to do, and this is Sam Altman, Greg Brock. men, like, I do believe, based on the body of evidence, they were trying to find a way to be the good guy. And I think the story that you tell yourself, if you think that you are in this world historical position, I mean, remember, these are people who routinely compare themselves to Robert Oppenheimer and all the characters in the making of the atomic bomb. And they sort of say, like, okay, who are you? Like, he's Edward Teller. I'm Oppenheimer. Who are you going to be? Right. So if you think, and not for no reason, that that's your role in future history books, then you have to come up with a way to be not villainous in a way that's also realistic and that also wins the
Starting point is 00:25:56 race before the bad guys win the race. And so then it does become a kind of Manhattan Project thing, right? Why would you build an atom bomb? Well, you would do it if the bad guys are going to do it first. Yeah. And I mean, and I think Sam acknowledges this to people in your people. piece, which is, I think from the outside, people are like, oh, these rich people just want more money, right? Well, they're all rich, and yet, of course, money is a driving motivation for a lot of people, for all people in business. But I think what people sometimes miss is how power and not even power in the sense of, like, again, twirling your mustache, but influence. And this notion, this great man theory of which they think, like, yes, this is going to be legacy defining and I'm history. And so I must control this because other people are bad.
Starting point is 00:26:43 And if I control this, it's good. And maybe they don't think to themselves that they're going down the bad path. But when you believe that you are the only person that can do something and then you just keep getting more and more control, it's going to lead to bad outcomes historically. Right. And it's going to lead to race dynamics, which was another thing that Open AI set out to avoid ostensibly from the beginning. On the sort of foreign entanglements, one line in your piece I keep coming back to is the, former Open AI executive saying, quote, we're building portals from which we're genuinely summoning aliens and that Altman has now placed one of those portals in the Middle East.
Starting point is 00:27:22 So national security officials in your reporting clearly alarmed about this, as I think they should be. Altman's, you know, foreign financial entanglements are compared to Jared Kushner's. Can you talk about why this alarmed so many people? And, you know, my reaction was like, How is this not a bigger story in Washington? Oh, I mean, Altman's foreign entanglements were compared to Jared Kushner's in the process of him trying to get a security clearance, or at least considering getting a security clearance, when it emerged that members of royal families from, I guess it was the UAE in that case,
Starting point is 00:28:00 were giving him very expensive cars as personal gifts. So, yeah, there is a level of foreign entanglement here that is, at the very least eyebrow raising. Look, the whole story of these companies and their involvement with the government and with intel agencies and national security agencies could totally have been its own piece. I mean, there's a lot of really, really rich, suggestive reporting there. So Open AI was started under the Obama administration, goes through Trump one, goes through Biden, goes through Trump two. What you see and what you hear from, from talking to officials from these administrations is because the allegation about Sam Altman
Starting point is 00:28:44 is that he mirrors back what people want to hear. What you often hear from government officials is when the prevailing winds are toward regulation and toward export controls on sensitive chips and things like that, you know, there would be some push and pull and there would be some intention the way there often is with industry, but broadly a lot of the people we spoke to felt, at least under the Biden administration, yeah, I mean, you know, open AI is pro-regulation. And then we have a quote from someone basically saying, as soon as Trump got reelected, he said, okay, well, now the shackles are off and I don't have to play that game anymore. You know, that was the perception of these government officials.
Starting point is 00:29:28 And then what you see is on the first full day of the second Trump administration, this big announcement that OpenAI will do the biggest build of data centers in history with the support of the Trump administration. And then you see Sam Altman, who had been a stalwart donor to Democrats and Democratic PACs, suddenly saying Trump is such a refreshing change. It's so great to have a pro-business president. Do you think his thinking, his political views actually evolved? Or was this just, does it seem more like opportunism? It seems, and we have people in the piece saying this, Like, what he wants to do is win the AI race. And so his actions and rhetoric seem consistent with what he thinks will best achieve that.
Starting point is 00:30:13 And this is something that you see in closed-door meetings with government officials. This is something you see in public testimony before Congress. This is something you see in his interviews. I mean, one ability that people point to, and this is, you know, coming from many, many interviews, it seems like he was particularly well suited to sort of meet a particular historical juncture where, you know, it's 2015. We've just gone through the tech lash. Social media executives have had this really blustery approach to, you know, if you regulate us, you're a Luddite and you're ceding the future to China.
Starting point is 00:30:56 And so Altman comes to the public with a very different pitch and says, actually, please regulate us, what we're doing is so dangerous that if you don't regulate us, you and everyone you love will die. He goes before Congress and says, I urge you to do more. And we have in the piece, Senator John Kennedy, not usually charmed by tech CEOs, says, oh, could you please write the regulation for us, basically? At the same time, he's making a pitch to his own employees and recruits. The engineers who are so terrified of the power of this technology that they themselves don't want to build it, at least not until it's proven to be safe. And he's saying to them, I'm really one of you. I really am so concerned about these safety things that I need you involved because you alone can build it safely.
Starting point is 00:31:50 And then according to the reporting we have from, you know, investors, he goes and, you know, does a pitch deck and says, let's accelerate this and, you know, it'll be really profitable for industries. So again, it's like, I don't want to be overly shocked by the fact that, you know, a CEO makes different pitches to different people, but the level of difference and the level of existential stakes that are being invoked here is really unusual. And that's also something that happens from one presidential administration to the next. Offline is brought to you by 3 Day Blinds. At this point, we can shop for groceries, furniture, and even cars from home.
Starting point is 00:32:32 So why is blind shopping still stuck in the Stone Age? That's why you need to check out 3 Day Blinds. There's a better way to buy blinds, shades, shutters, and drapery, and it's called 3 Day Blinds. They are the leading manufacturer of high-quality custom window treatments in the U.S., and right now, if you use my URL, 3DayBlinds.com slash offline, they're running a buy one, get one 50% off deal. 3 Day Blinds has local, professionally trained design consultants who have an average of 10-plus years of experience and provide expert guidance on the right blinds for you in the comfort of your home.
Starting point is 00:33:00 Just set up an appointment and you'll get a free, no-obligation quote the same day. Not very handy? The expert team at 3 Day Blinds handles all the heavy lifting. They design, measure, and install so you can sit back, relax, and leave it to the pros. Love 3 Day Blinds. I have been using them for years and years and years before they were even advertisers. They're great. They come to your house.
Starting point is 00:33:18 You tell them what you want for blinds. They give you a whole bunch of options. Then they help you pick them out. They help you install them. It's all very easy. And the blinds themselves are just very high quality. 3 Day Blinds has been in business for over 45 years. And they have helped over two million people get the
Starting point is 00:33:33 window treatments of their dreams, so they're a brand you can trust. Right now, get quality window treatments that fit your budget with 3 Day Blinds. Head to 3DayBlinds.com slash offline for their buy one, get one 50% off deal on custom blinds, shades, shutters, and drapery. For a free, no-charge, no-obligation consultation, just head to 3DayBlinds.com slash offline. One last time, that's buy one, get one 50% off when you head to the number three, D-A-Y, blinds.com slash offline.
Starting point is 00:34:04 I want to ask about Altman's involvement in the battle between Anthropic and the Defense Department. So Hegseth blacklists Anthropic as a supply chain risk because the company wouldn't drop its prohibitions on autonomous weapons and domestic surveillance. Hundreds of OpenAI and Google employees sort of sign a letter defending them. Meanwhile, as you guys report, Altman has been negotiating with the Pentagon for at least two days while signing an internal memo claiming OpenAI shared Anthropic's ethical boundaries. Emil Michael, who was a Defense Department official and had previously been, I guess, Travis Kalanick's right-hand man at Uber, says on the record, I called Sam and he was willing to jump. Is there a less cynical reading of that?
Starting point is 00:34:51 Or is that just the reading? I would say the less cynical reading of it is something we talked about before, which is people don't think of themselves as being the bad guys. people think of themselves as doing the best job they can to be the good guys in a, you know, tough set of circumstances. So I think what Sam's defenders would say, and we, we talked to multiple Altman defenders, Altman loyalists, people who've stayed at the company for a long time, people outside the company. I think what a defender would say about this Pentagon interlude is, okay, he saw that, you know, the relationship between the Pentagon and Anthropic was fraying, and he wanted to come in and get those contracts so that someone worse couldn't get that.
Starting point is 00:35:35 Probably someone worse would be Elon in that scenario. So that's the most defensible. And I think he said publicly, Sam Altman has said publicly, look, you know, this $200 million contract that we got from the Pentagon, that's peanuts to us. Like it wasn't really worth the PR hit for me to do that. I only did it because I was trying to help. Now, people can believe that or disbelieve it. Maybe it's just that he's such an instinctive dealmaker that he couldn't leave a deal unmade when he
Starting point is 00:36:01 saw an opportunity. Maybe he believes in Anthropic's red lines and maybe he believes that he has gotten a better deal. We don't know because they haven't made the contract public. They've just sort of said, like, the government says they won't do mass surveillance and we believe them. But we'll see. I mean, again, it's just like one of the benefits of putting all this together in a big long New Yorker piece is you can really see the evolution from the start of the OpenAI dream until now. And I think if you could put someone who was one of the co-founders or one of the early employees from 2015 into a time machine and say, we're swooping in to get the autonomous drone contract with the Department of War, they would find that a little surprising
Starting point is 00:36:51 based on the original pitch. Yeah, I mean, reading the piece is just like watching the train come down the track and nothing's stopping it. So speaking of Anthropic, the day after your piece dropped, the company announced it's withholding its newest model, Mythos, from public release because they believe its cyberattack capabilities are too dangerous. Meanwhile, Sam Altman just told Axios this week that AI-enabled cyberattacks are, quote, totally possible within the next year. Your piece reports on an OpenAI representative who literally asked you, what do you mean by existential safety, that's not a thing. What do you make of Anthropic's decision and how do you compare it to what is currently happening at OpenAI?
Starting point is 00:37:37 Yeah. Just to clarify, you know, I've seen some people sort of saying the existential safety thing, was that like a gotcha journalist question where, you know, the question was worded in a confusing way and they didn't. And I should just say, like, we put it multiple ways, multiple times because there's a difference when you say safety, sometimes that means like user safety, user privacy, making sure people don't get doxed or, you know, making sure that the chatbots don't say naughty words or whatever. And then there's existential safety, which is making sure that the thing doesn't literally kill
Starting point is 00:38:11 all of us, which, again, I didn't invent that as a fear. Like, OpenAI told me to be afraid of that. Yeah. So, and that was just not something that this representative had ever heard of, apparently. Look, the thing with Anthropic is tricky because on the one hand, this is apparently the first instance we've seen of a company being asked to do something and saying, no, we won't do it because that violates our ethical principles and therefore putting itself into a really perilous position as a business. On the other hand, like, it's not like Anthropic is really acting like, you know, an AI safety lab nonprofit either. I mean, they were only in that position because they were the classified system of choice at the Pentagon to begin with. And they've made many, many other compromises. I mean, they're also raising money in the Middle East. They, you know, so I think it's this very complicated game theory dynamic where everybody thinks or wants to think we're doing the best we can.
Starting point is 00:39:14 we're between a rock and a hard place, but it's not like Anthropic is acting super unblemished by their own lights either. I mean, the whole idea behind OpenAI and then Anthropic, subsequent to that, the sort of pristine rhetorical idea, right, is we're going to incentivize a race to the top so we don't have a race to the bottom. And I don't see anyone racing to the top. I see a lot of racing to the bottom or somewhat slowing down the race to the bottom. Yeah, and this is something that I've come to think is
Starting point is 00:39:44 key to understanding this whole thing as I've, you know, interviewed some people in these companies and done a lot of shows on this. It's like we still think in terms of characters and villains and good guys and bad guys, but there's a larger structural issue here, which is, yes, Anthropic can seem right now like they're doing their best and maybe they're the best of the bunch. Obviously, I don't feel like Elon's running a tight ship over there at xAI. Reading your piece about Sam Altman and OpenAI, that doesn't seem so great either, but like, it's not that these are just individuals who have, like, personal moral failings or, you know, this profit motive above all else. Like, there is a larger system here where if you have a competitive environment, both within this country and globally, where all of these different companies and all of these different individuals are racing to build this technology within a capitalist system, like this is what's going to happen. Absolutely. And this is, I mean, to be fair to all the crazy hypothetical scenarios we were talking about with the company's plan, this is something they foresaw and to at least some extent theoretically tried to avoid. The question is, A, was it ever avoidable,
Starting point is 00:40:56 and B, how hard did they try to avoid it? But, you know, it is definitely true that there are structural things at play here that are more important than any of the individual personalities. And I would not want people to come away from this piece thinking, okay, Sam Altman should not be AGI dictator, so clearly someone else should. Right. That's not the point here. The point is, it is crazy that we're having a conversation about AGI dictators at all. And it's crazy that that's not a super crazy thing to worry about.
Starting point is 00:41:26 Well, so that brings us to regulation, because one way to deal with the systemic incentives is to actually pass legislation, rules, regulations. A few hours after your piece was published, OpenAI just happened to release a 13-page policy blueprint calling for a new deal for the AI era, a tax on capital, public wealth fund, four-day work week. One AI expert, Anton, called it, quote, comms work to provide cover for regulatory nihilism. How are you reading the timing? Do you think your story had anything to do with it? Yeah, and they also hired a ghost hologram of FDR to roll it out. No, I, look, I, again, I don't know what's in anybody's
Starting point is 00:42:12 heart or mind, but it definitely came out the day our story came out. And they also acquired this tech talk show TBPN while we were closing the piece. They had a few interviews lined up that seemed thematically related to the themes of our piece. Look, I mean, it is the absence of a coherent regulatory regime that makes the PR battle so intense to some extent. Because if there were clear rules of the road, you could talk about who's playing by the rules. If everyone agreed on what to do technically to keep these systems safe, you could have a purely technical or technological conversation. But in the absence of those things, to some extent, it becomes a PR battle. So you see these companies engaging more and more in a PR battle.
Starting point is 00:43:07 And one thing that people consistently say about Sam Altman is he's an incredibly gifted pitchman. And so the fact that he's given different pitches to different groups over time, you know, you could say that's a feature, not a bug, depending on your perspective on it. Anyone who's played around with this stuff knows that they have certain kind of built-in tendencies and ticks and traits. And one of them that we talk about in the piece is sycophancy, which is this problem that the models can't stop telling you what you want to hear. That could be a feature or a bug depending on what your goal is. And so if you can't stop telling people what they want to hear, you might not always arrive at the most blunt, true answer, but it could be a compelling or, you know, appealing answer.
Starting point is 00:43:55 Keeps you on the platform. It sure does. It sure does. And, right, you know, I'm not here to say that I know what the regulation can or should be. I mean, to the extent that we are summoning aliens out of portals, like, that's a very hard thing to regulate. But I do know that the regulations that OpenAI claimed to support, they no longer seem to support. And in fact, we have reporting showing that they were kind of going behind the scenes to try to scuttle that very kind of regulation and, like, asking people to call Nancy Pelosi and Gavin Newsom to get it scuttled. So we now live in a landscape where, you know, these things are being built, and if you are a state politician who wants to introduce a state bill to control it in New York or California, you might run for Congress and have a massive super PAC dropping money against you because you support AI regulation.
Starting point is 00:44:55 So that's another kind of way that the ideal scenario, as it would play out in an Isaac Asimov novel, kind of interfaces very uncomfortably with the realities of politics under capitalism. Well, I noticed that even with the new deal for the AI era that OpenAI and Altman released, it is heavy on sort of economic regulations and policy proposals, all of which would require the government to deal with taxes. And it basically wouldn't really hurt the company that much or stop the company from doing what it wants to do. Well, and we're only, again, this is like last summer now, so it seems like old news, but we came very, very close to living in a world where not only was there not robust AI regulation, but where there was almost a federal provision mandating a moratorium on state regulation, right? I mean, you remember this.
Starting point is 00:45:55 So we almost, and according to the reporting from that time, it was Steve Bannon and Mike Davis and other people on the right who were lobbying against that. So there's some strange bedfellow stuff going on here. But we almost had a situation where not only do we not know how to regulate this new alien technology, and not only do we not have federal regulation to do it, all we're doing is federally banning any regulation in the states. So that's kind of where we almost were. And where we are is, okay, now we just don't have regulation, basically. There's like a couple of bills in California and other places, but it's very rudimentary. Well, and in the Open AI policy blueprint thing, the safety section is almost entirely voluntary what they're proposing. There's some regulations on economic dislocation, but not really anything that they seem to be willing to accept on the safety side.
Starting point is 00:46:51 Look, again, like a lot of this stuff, there really is a good faith argument for and against a lot of these regulatory proposals. I mean, a lot of people watched this Pentagon thing go down and use that to say, okay, is this the government that you want regulating this technology, really? You know, so there really are good faith arguments on all sides. It's just when so much of the argument is being driven self-interestedly, it's hard to know where the good faith arguments stop and begin. You report that the company is preparing for an IPO at a potential trillion-dollar valuation. Eric Ries told you that in other areas, some of the company's accounting practices would have been borderline fraudulent. A board member told you the company is, quote, levered up financially in a way that's risky and scary right now. Do you get the sense that this is a bubble that will pop?
Starting point is 00:47:41 And if so, how do you think that changes the story you guys told in this piece? Another person who told us that this is probably a bubble is Sam Altman, who has said multiple times that he thinks it's a bubble and that someone is going to lose a phenomenal amount of money. I believe that's a direct quote. So yeah, I worry about the potential for a bubble here. And another thing, look, I mean, for people who are, again, not super read in on the technical details and are kind of sitting a lot of this out, one kind of simple binary that often gets tossed around is like, is this a bubble or is this like a really useful transformative technology? And I think it's key to remember that it can be both, right? A lot of the biggest bubbles that we've seen are, you know, around the building of the transcontinental
Starting point is 00:48:28 railroad or the laying of fiber optic cable during the telecom boom. These are massive infrastructure projects that ended up being really useful and economically transformative and also created massive bubbles followed by recessions. So you can end up... Yeah. I mean, you can end up using all that. Now, a lot of people say it's even worse in the case of the data centers because unlike train tracks or fiber optic cable, these chips depreciate so quickly that, you know, basically you're paying for them, and then three years later, they're not usable and you have to do the investment raise all over again. So it's definitely an overheated moment economically.
Starting point is 00:49:05 And basically the only way, based on what the experts told us, that we come out of it without a bubble is if these models just keep leaping and bounding and growing in their capabilities year over year and month over month and week over week. And just nobody knows; that's impossible to predict. So you can raise investment based on promises, but the technological breakthroughs either happen or they don't. Yeah, so it's either a massive economic bubble that bursts or technology that quickly becomes the killer robots that we're all afraid of, or perhaps both. It could always be both. Why can't it be both? So you spoke to a lot of people who left OpenAI over the concerns that we've talked about: Sutskever, the Amodeis, the whole superalignment team. These are people who took huge pay cuts to work on what they thought was the most important problem in the world.
Starting point is 00:49:56 Most of them end up leaving in disillusionment. Like you said, some are competitors, but some have just left. What did you take away from talking to them about that loss, that disillusionment? Yeah. And this is another area where we were trying to filter really hard for competitor gossip and competitor gripes. And, you know, one of the strange things about this industry is that everyone, as soon as they leave one company, goes off and raises a billion dollars and starts another company. So they're all kind of rivals at this point. So, you know, Ilya Sutskever has his own company now called Safe Superintelligence.
Starting point is 00:50:30 Dario Amodei obviously has his own company called Anthropic. So we were trying to really filter and not just, like, launder people's grievances and complaints. But one thing that does become pretty clear is there were some people who were really close to this technology who really, really believed that it could be massively dangerous. And so again, this is something that often gets discounted as, oh, this is just an attempt at regulatory capture. This is just people trying to hype up their product. I am here to tell you there were and are people close to this technology who really, really think it's dangerous. Now, why are they still building it? Good question. There's kind of a selection bias problem here where the people who are so scared of it that they don't build it, they're not in the piece because they stopped building it.
Starting point is 00:51:22 So you do have this kind of weird game theory problem of you only end up dealing with the people who are scared of it and yet continue to be in the race. But the scenarios where this thing goes off the rails, there are more of them than I realized and they are less far-fetched in some ways than I realized. I mean, still far-fetched. But they don't require necessarily for, you know, the thing to wake up and become SkyNet and decide that it hates humanity and destroy us, right? I mean, there are many, many other ways that this thing can go wrong. And, you know, it's, I'm actually just going to read you one thing because I thought, I think it's relevant.
Starting point is 00:52:09 This is a quote from a blog post. Superhuman machine intelligence, quote, does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal, wipes us out. That's a quote from a blog that Sam Altman wrote in 2015. And so it's an oopsie that destroys civilization. An oopsie. And, you know, some of the best sci-fi stories involve oopsies. But, you know, again, like, we made it through the nuclear age so far.
Starting point is 00:52:41 Maybe this week that'll change. And we may make it through this too, but it's not to be taken lightly. And I think a lot of people take it lightly or ignore it. And look, I don't know what's going to happen. The people who are building this stuff don't know what's going to happen. I don't know if to the extent that AGI is meaningful, I don't know if it will arrive in six weeks or six years or 60 years or never. But I know enough to be concerned about the power of this stuff. And being concerned about the power of it doesn't mean you think it's good or bad or this or that person should be in control of it.
Starting point is 00:53:18 I think it just means taking it as seriously as the people who are building it. Yeah, I was going to say just a final question, as you worked on this for so long, like, what's the response to this piece that would tell you it moved the needle? And have you seen any version of it yet? Yeah, I mean, you know, I don't go into these things with a, like, oh, I hope it does this, or a kind of, like, activist thing. Obviously, like, even if I wanted to, you know, journalism is not really that powerful. But I would like for people to reckon with how serious this could be. And again, I'm not here to say, like, everyone should be a doomer. All I mean is it would be nice if people, you know, lived in the timeline that they
Starting point is 00:54:02 happen to live in. And like, in the way, you know, and you guys do this with politics all the time, right? Dealing with people who don't want to live in a world where we have a president who's saber-rattling with, you know, taking out all of Iran's bridges and power plants for a war that he started for no apparent reason. Like, but that's the timeline we do live in. And so I think an equivalent of that with the AI stuff, you can think that people are, you know, spinning out and, you know, getting wrapped up in hype cycles and you can think all that stuff. But none of that is mutually exclusive with
Starting point is 00:54:34 taking the underlying thing seriously and taking some of the concerns seriously, because like it or not, it's here. And it's only going to get, as far as I can tell, more powerful. Well, glad that you and Ronan took it seriously and wrote this piece. Everyone should check it out. Andrew Marantz, thanks as always for joining Offline. Thank you. Really appreciate it.
Starting point is 00:55:11 Offline is a Crooked Media production. It's written and hosted by me, Jon Favreau. It's produced by Emma Illick-Frank. Austin Fisher is our senior producer, and Anisha Banerjee is our associate producer. Audio support from Charlotte Landes. Adrian Hill is our head of news and politics. Matt DeGroot is our VP of production. Jordan Katz and Kenny Siegel take care of our music.
Starting point is 00:55:31 Thanks to DeLan Villanueva, Eric Schute and our digital team who film and share our episodes as videos every week. Our production staff is proudly unionized with the Writers Guild of America East.
