Today, Explained - OpenAI owes us $180 billion

Episode Date: March 25, 2026

OpenAI's founders promised its tech would benefit humanity. Now that it has split into a giant charity and a for-profit company, that mission has gotten complicated. This episode was produced by Danielle Hewitt, edited by Jolie Myers, fact-checked by Andrea López-Cruzado, engineered by Patrick Boyd and David Tatsciore, and hosted by Sean Rameswaram. This episode was produced in partnership with Vox's Future Perfect. Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. Photo Illustration by Nikolas Kokovlis/NurPhoto via Getty Images. Listen to Today, Explained ad-free by becoming a Vox Member: vox.com/members. New Vox members get $20 off their membership right now. Transcript at vox.com/today-explained-podcast. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript
Starting point is 00:00:00 ChatGPT. You either love it or you hate it, am I right? You love it because it tells you why your back keeps doing that. You hate it because it uses a boatload of fresh water to do so. Or maybe you hate it because after OpenAI trained chat on centuries of humanity's creative labor, its leader, Sam Altman, said he wants to sell it right back to us. We see a future where intelligence is a utility like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for.
Starting point is 00:00:34 Cool. But wait, ChatGPT's parent company OpenAI has the potential to do tons of good, too. Turns out they've got $180 billion of charitable monies to give away to humanity, to help our cause. That's more than double what the Gates Foundation has to play with. OpenAI owes us $180 billion. But are we going to get it? On Today, Explained from Vox. Here we go. Once upon a dismal day, Bob's ice cream van looked gloomy and gray. Although he had big ambitions, his socials lacked creative vision.
Starting point is 00:01:08 Ugh, that bad? Maybe vamp it up a tad? I have an idea. Bob launched Canva and got into gear. Create the video in the vampire theme and make it funny as a meme. It went viral. Bob's business? A revival.
Starting point is 00:01:23 Now, imagine what your dreams can become. When you put imagination to work at Canva.com.
Starting point is 00:01:45 Hey, chat. Introduce Today Explained, the podcast.
Starting point is 00:02:04 Of course. Today Explained is a daily news podcast from Vox. Each episode takes a single... No, just introduce it like you're introducing the show. Like, this is Today Explained. Ah, got it. This is Today Explained. Show me the money, chat.
Starting point is 00:02:19 Sarah Hirshander from Vox is here to tell us where to start. I think we would have to start back in... 2015, so that's when OpenAI started. OpenAI began as a nonprofit. It began as this nonprofit AI lab founded by a few donors, including some extremely familiar names like Elon Musk and Sam Altman. It's very important that we have the advent of AI in a good way. And they founded it as a nonprofit to develop AI in a way that is safe and that will benefit
Starting point is 00:02:55 humanity. And they created it as a nonprofit lab instead of as a corporation or as like a for-profit startup, which is normally what we would see for this kind of thing, because they figured that this technology was going to be so transformative that we need to make sure there's no profit motive involved. So nobody's going to make money off of what we're making. The reason for our structure and the reason it's so weird
Starting point is 00:03:17 is we think this technology, the benefits, the access to it, the governance of it belongs to humanity as a whole. You should like not, if this really works, it's like, a powerful technology and you should not trust one company and certainly not one person with it. That was 2015. Fast forward a few years and AI starts getting a lot of buzz because of a new product called ChatGPT that OpenAI, the nonprofit lab, developed. A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds. We have about 100 million weekly active users now on ChatGPT.
Starting point is 00:03:57 Open AI is the most advanced and the most widely used AI platform in the world now. So over time, Open AI was saying, you know, we need a lot more money to be able to do this properly. It costs a lot of money to develop AI. It costs money to, like, hire people, the computing power. All of it costs a lot of money. So like, we need investors. We can't just rely on donations and sort of the tax breaks that we get as a nonprofit to develop this stuff. And so they created this like what's called like a capped profit subsidiary, which was like a little for-profit arm that they could use to raise that money.
Starting point is 00:04:27 It would still be under the control of the nonprofit as kind of like the umbrella parent organization, but they were able to raise some money to some extent. A lot more money started pouring in, a lot more interest from investors started pouring in, and OpenAI was kind of struggling to reconcile the nonprofit part of their mission and the fact that they were becoming this enormous, one of the most well-known tech companies in the country. So, 2024, OpenAI decided that they wanted to completely, like, disentangle themselves from these nonprofit roots. So they no longer wanted this capped-profit model where investors could only get a certain amount of, you know, their investment back. They wanted to be able to raise as much money as they wanted, and they wanted to be able to kind of behave like any other sort of for-profit AI company would.
Starting point is 00:05:19 Basically, what it wants to do is it wants to become a Delaware public benefit corporation. And what that is, it's really just like a traditional corporation, but with some permission to do some public goods, spend on public benefit. The whole raison d'être of OpenAI was to build artificial general intelligence, but for the good of humanity. That's why initially it was a not-for-profit, but then suddenly they realized they needed a ton of money to be able to access the compute to build AGI, and therefore the awkwardness began. They were eventually able to come up with sort of a deal with the Attorney General of California, which is where the company was based, that split OpenAI formally into like two arms. One is like the corporation, an OpenAI corporation that may eventually go public. And the other is this new philanthropy, which is basically the original nonprofit that is now like still the parent umbrella organization of OpenAI, the company. But it also has these new responsibilities.
Starting point is 00:06:14 Basically, the philanthropy has two jobs. One is to do grantmaking, so like giving money to other charities. The other one is to do oversight over OpenAI the company. And then on the side of sort of oversight, we haven't, at least publicly, seen the OpenAI Foundation, now that it has this sort of formalized role via this deal with the Attorney General, really step up in a different way, at least not yet. Can you tell me how OpenAI has sort of made it clear to the public that this is no longer like a touchy-feely, for-the-good-of-humanity operation? It feels like they've entered into controversy several times in the past few years. I mean, I don't think, I don't want to speak for them. I don't think they would identify as not being a touchy-feely, for-good operation. I think they're actually trying very hard to still appear that way. And I don't want to be too cynical here. This whole deal is like super, super new.
Starting point is 00:07:15 So it is possible that we'll be seeing a lot of changes coming in the next year or two. But I think at least like so far this year, OpenAI has made a lot of headlines because of its deal with the Pentagon and the way that it's behaved in these negotiations, and its competitor Anthropic, which was actually founded by former OpenAI employees who were disgruntled about, you know, some of OpenAI's decisions about converting from the nonprofit. But OpenAI has come across as the company that was more willing to negotiate with the Pentagon in a different way than Anthropic was. Anthropic said it had two red lines that it would not cross. The Pentagon said that it was going to move to declare the company a supply chain risk. And so OpenAI stepped in to say they're going to take this contract, but they want to have some safeguards.
Starting point is 00:08:02 Anthropic came across in that whole negotiation as a company that was willing to stand up against the Pentagon, to put down some red lines on where it did and did not want its technology to be used, whereas OpenAI simply did not come across that way. It's unclear exactly what those negotiations looked like, but that is at least sort of what the public has taken, I think, from those interactions. And then we've also seen OpenAI get into a little bit of trouble because of some of its lobbying around AI safety. It's been opposed to different, like, statewide AI safety measures. And they say that they do that because they want a federal safety measure,
Starting point is 00:08:40 that they're kind of collaborating with the Trump administration on. But at the same time, I think a lot of critics have raised alarms about the fact that they've been opposed to those kinds of safety measures, which Anthropic, again, this competitor to OpenAI, has embraced. So I think at least from the public's perception, I'm not saying that this is everything that's going on with OpenAI, but the perception is certainly not that OpenAI is stepping forward in a real, like, leadership way around what it means to be an ethical AI company, specifically given its nonprofit roots.
Starting point is 00:09:16 Okay, so that's what's been going on on the for-profit side. What about the not-for-profit side? Is there anything happening there with $180 billion of shares, I guess, in OpenAI? So I spoke to a spokesperson at OpenAI who says that there is a lot going on behind the scenes, but there is not all that much that we've been seeing so far. Like I said, we have seen that $40.5 million going to different community nonprofits, which is great. I talked to some of the nonprofits. They're wonderful.
Starting point is 00:09:50 But I think $40.5 million is, I did the math here, like on the back of a napkin, but like it's about 0.02% of $180 billion. And while OpenAI has said that it will be giving, as an initial promise, $25 billion to charity falling into two buckets. One is focused on like scientific research and health and one is focused on what they're calling AI resilience. We have no idea what that's actually going to look like. And again, I'm giving OpenAI the benefit of the doubt. This deal was made in October. $180 billion is a lot of money. You almost don't want them to start giving away that much that quickly. Like you want to see them slowly building up their team. And a really important thing to note is that the board of directors of the OpenAI Foundation is almost
Starting point is 00:10:36 identical to the board of directors of OpenAI, the corporation. There is one member of the Foundation Board that is different. Again, that might change over the course of the year. But the fact that, like, the OpenAI Foundation doesn't have that sort of independent structure just yet has raised a lot of alarms. You're saying the people who are influencing decisions on the for-profit side of OpenAI are the same people influencing decisions, or a lack thereof, on the not-for-profit side. With the exception of one member, yes. And when I asked OpenAI about this and sort of raised the alarm bells that a lot of people had about the idea that these board members could kind of put on a different hat when they're meeting about the foundation and when they're meeting about the corporation, you know, the answer was basically we have conflict of interest policies and they know how to do that. Trust us.
Starting point is 00:11:29 We're professionals. Yeah, basically trust us, which I think raised a lot of doubts for a lot of the critics who've been skeptical about the restructuring. That was Sarah Hirshander. She's a fellow at Future Perfect here at Vox. It's a section of the yellow website that focuses on making the world a better place. Imagine that. In a minute on Today, Explained, also from Vox, we're going to hear from one of OpenAI's most prominent critics. She's not just skeptical about this restructuring. She thinks it's illegal. This is advertiser content brought to you by Stonyfield
Starting point is 00:12:35 Organic. Our cows, them going out to pasture, they love it. They're so excited to go out every day. They wait right at the door. And in fact, we milk them and we just open up the laneway and let them just go right out to pasture. I'm Rhonda Miller Goodrich and I'm a dairy farmer in Cabot, Vermont. Our farm is Mollybrook Farm. We're an organic dairy farm and we are a supplier to Stonyfield. Mollybrook Farm has been in my husband's family since 1835. We started our organic transition in 2015. We had 53 acres of corn ground, and of course we had to use herbicides and pesticides, and the soil was dead, really, for all intents and purposes.
Starting point is 00:13:20 We stopped growing corn and stopped using herbicides and pesticides, and we seeded that down to perennial grasses. After that, we began to see biodiversity in that soil again. To be organic certified, our cows need to be in pasture at least 120 days. I think the organic practices really benefit our animals. You know, having good feed, good water, a nice light area, that's what's important to us, and that's what's important to Stonyfield. Visit stonyfield.com to find Stonyfield organic yogurt near you.
Starting point is 00:14:04 You might be tempted to let Taco Bell's new Lux Value menu go to your head. Because 10 indulgences for $5 or less makes you feel fancy. Like you might think you need cloth napkins. Well, you don't. Just use the ones that come in the bag. Don't let the luxe go to your head. When West Jet first took flight in 1996, the vibes were a bit different.
Starting point is 00:14:24 People thought denim on denim was peak fashion, inline skates were everywhere, and two out of three women rocked the Rachel. While those things stayed in the 90s, one thing that hasn't is that fuzzy feeling you get when WestJet welcomes you on board. Here's to WestJetting since '96. Travel back in time with us,
Starting point is 00:14:39 and actually travel with us at westjet.com slash 30 years. I can't wait to work with you on crimes. I'm really excited to dive in and explore all the angles with you. Catherine Bracy is the head of TechEquity. It's an advocacy group whose main position is that tech growth should benefit everyone. She also knows Sam Altman. We worked together back in the day
Starting point is 00:15:04 and then kind of fell out of touch with each other for a few years. And then when I was writing a book about venture capital, I was really interested in OpenAI's nonprofit model. And Sam had been very explicit that the reason they founded OpenAI as a nonprofit was to put the technology at arm's length from investors, because they knew investors would exploit it in a way that would make this technology, which they thought was very dangerous, actually live up to that potential danger. And so I wanted to talk to him about the decision-making process behind that. And he was very forthcoming about that being, yes, the explicit reason why OpenAI was founded as a nonprofit. And they put a lot of thought and capacity and energy into creating this governance structure that would protect the technology from the whims of investors, the incentives of investors, the imperatives that investors put on technology companies. And, you know, a few months later, I saw that all come crashing down.
Starting point is 00:16:07 and that has really stuck with me and informs a lot of the work that we're doing today to ensure that the nonprofit, you know, maintains the mission that it started out with. We asked Catherine how she felt when she found out that Open AI was going to try and have it both ways, mission-driven nonprofit, but also money-driven for-profit. Disappointment, I would say, was my initial reaction. And then the secondary response was, well, what can we do about this? And many of us kind of came together into this coalition that really started asking questions about the responsibility of the nonprofit and the responsibility of the Attorney General of California to enforce nonprofit law. And, you know, things kind of went from there.
Starting point is 00:16:53 Tell me more about that. What's nonprofit law look like as it pertains to, say, OpenAI? Essentially, you know, I run a nonprofit. In the tax code, that means that, you know, my organization does not need to pay taxes, but in return for that tax exemption, we are required to operate in service of a public service mission. Our mission is to ensure that the tech industry is creating an opportunity for everybody. OpenAI's nonprofit mission is to ensure that AI develops for the benefit of all of humanity. And legally, Sam Altman is required to
Starting point is 00:17:34 prioritize OpenAI's mission above all else. And that means that anything that is created under that sort of tax-exempt banner is owned by the charitable sector, can never be divested from the charitable sector. So when they decided they were going to split the nonprofit from the for-profit, they found that actually, legally, they could not do that without divesting both the intellectual property that the nonprofit owned, including all of the intellectual property that was created, you know, that underlies the, you know, ChatGPT model, and the equity stake that the nonprofit owned in the for-profit company. And so I think they looked at that price tag and they said, that's not a price we're willing to pay. And so instead of sort of
Starting point is 00:18:19 splitting the nonprofit from the for-profit, they decided to sort of continue down this path of nonprofit ownership, which in my mind is completely untenable, unsustainable, and irreconcilable. Basically, every day that OpenAI exists, they are violating the law. And actually what they're doing is just daring the attorney general to hold them accountable for it. I think they think they're too big to be held accountable, and they need the AG to assume that he will not win a case. And that's kind of what they've done. They've loaded up on lawyers and they are making a bet that the AG will not sort of pursue their case in any way that's actually meaningful.
Starting point is 00:19:05 Okay, so if I'm following you, despite the fact that OpenAI has split itself into a for-profit arm and a not-for-profit arm, their not-for-profit mission still overrides everything they do. And because of that, they are violating California law, because there's no way that the nonprofit interests are ever going to be primary in their business. Right.
Starting point is 00:19:35 I mean, I think as the kids would say, they're playing in our faces. I mean, they expect us to take their word that as they operate, as they make deals with the Defense Department to develop autonomous weapons and surveillance systems on American citizens, as they, you know, battle parents in court whose children have committed suicide due to conversations that these kids were having with their chatbots. And as they subpoena these parents for the list of people who attended their children's memorial service as part of those lawsuits, they expect us to believe that the nonprofit mission is being prioritized over the profit motivation of the company.
Starting point is 00:20:16 We all know that OpenAI's overriding priority is to, quote unquote, win the AI race. It's to beat out the competition in the marketplace. And it's to establish the biggest AI company they can create. And to the extent that the nonprofit mission ever comes into tension with that, the company will always prioritize profits over the mission. But a law is only as good as its enforcement. And I think if there's one sort of rule of Silicon Valley, it is to ask forgiveness and not permission. And, you know, breaking the law and skirting regulations is part of the venture capital playbook. I think they said, you know, this is worth it. There's enough money on the line for us to just break the law and do the PR work and the lobbying work and the other work that we need to do to ensure that these laws will never be enforced against us. And when you talk about PR work, lobbying work, are you talking about like saying we're going to give away this $180 billion eventually? Well, here's the thing. They announced this week a list of priorities that the foundation would be investing in.
Starting point is 00:21:26 they listed as one of their priorities, Alzheimer's research. My mother is currently dying of Alzheimer's. I'm sorry. Thank you. I have one copy of the gene that puts me at extreme risk of developing Alzheimer's when I'm older. So I pray every day that AI helps us find a solution to Alzheimer's fast enough that I can benefit from it, that my family can benefit from it. And so I'm thrilled to see them make a commitment to, you know, deploying AI to find cures to Alzheimer's and other diseases. But let me ask you
Starting point is 00:22:04 a question. What happens, do you think, if the research that's funded by OpenAI's foundation finds that actually Anthropic's models are better at drug discovery or scientific breakthroughs than ChatGPT or any of OpenAI's other models? What do you think happens then? And what does it mean for the independence of scientific research if all of this research is funded by an entity that has an irreconcilable conflict of interest? We would not accept the science around nicotine that tobacco companies were funding. We do not accept the science around alcohol addiction that the alcohol companies fund. We do not accept the science around sugared beverages from the soda industry, and we should not accept that this scientific research is funded by an entity that has a vested financial interest in the outcome.
Starting point is 00:22:56 And that is why it is so critically important that the OpenAI Foundation actually be independent, that it have an independent board, that it can deploy its resources independently, that the research that it is funding is independent. And if you worry about, if you wonder about whether this is actually true, you should ask any of the researchers who were given access to Facebook's data, and ask what happened to them. And they will tell you that it does not work to do research, independent research that is funded by the tech industry on the impact of the tech industry's own platforms. Do you still think that we're maybe better off that Open AI says that they want to give billions away
Starting point is 00:23:34 to better society than, say, Anthropic, you know, Google, maybe having some pledges to give money away but not nearly as much? Is it still better that they want to give money away at all? Well, Google has a corporate foundation. It's called Google.org. And I expect, in this structure with the tension and the conflict of interest that the OpenAI Foundation has, that it will operate much more like Google.org, which is essentially an arm of the marketing department, a corporate social responsibility program that sort of gives money to innocuous groups, but will never do anything that undercuts Google's priorities. And I think if you read between the lines of OpenAI's press release, the work they say they want to continue doing with community funding is all about convincing people about the importance and value and benefit of using AI. I mean, that's a market building opportunity for them.
Starting point is 00:24:30 That's not actually anything that's going to ensure that AI is developed for the benefit of humanity. And so, no, I don't think that they're going to operate any differently than any of the other companies, you know, corporate social responsibility arms. That's essentially what they have built here. This is the fight of our time. AI is not inevitable. The way it develops is not inevitable, and we do not have to take these companies at their word that they know best how to govern this technology.
Starting point is 00:24:58 We should have bigger imaginations about what's possible. And if anything, this should give us more energy and motivation to fix what's broken about our democracy rather than just sit back and let billionaires control our future. Do you ever talk to Sam Altman anymore? He doesn't return my calls. Well, thanks for talking to us. I'm happy to anytime.
Starting point is 00:25:29 Catherine Bracy, she loves tech, but she also wants it to work better for the people. She wrote a book all about her position. It's called World Eaters: How Venture Capital Is Cannibalizing the Economy. We reached out to OpenAI to ask what they thought about Catherine's argument that they're openly breaking California nonprofit law, but we haven't heard back yet. Should we ask chat? Let me ask chat. That's an idea.
Starting point is 00:25:51 Okay, here we go. Is OpenAI violating California nonprofit law? It's not settled. There are active allegations and legal challenges, but no court has definitively ruled that OpenAI is violating California's nonprofit law. Huh. Danielle Hewitt produced today. Jolie Myers edited, Patrick Boyd and David Tatsciore mixed.
Starting point is 00:26:11 Andrea López-Cruzado was on the fact check. I'm Sean Rameswaram, and this is Today, Explained.
