The Journal. - The Big Changes Tearing OpenAI Apart
Episode Date: October 1, 2024
In less than two years, OpenAI—the company behind ChatGPT—has gone from a little-known nonprofit lab to a world-famous organization at the forefront of an artificial intelligence revolution. But the company has faced a series of challenges, culminating last week in another high-profile departure and the decision to become a for-profit corporation. WSJ's Deepa Seetharaman discusses the permanent change to OpenAI's ethos and what it could mean for the AI industry.
Further Listening: - Artificial: The OpenAI Story - Artificial: Episode 1, The Dream
Further Reading: - Turning OpenAI Into a Real Business Is Tearing It Apart - OpenAI's Complex Path to Becoming a For-Profit Company
Transcript
OpenAI, the company behind ChatGPT, has had a lot of drama in the past year.
There was the sudden firing and almost immediate rehiring of its CEO.
Sam Altman is back as the chief executive of OpenAI.
After that, a bunch of OpenAI's top scientists quit.
These are just the latest in a string of departures, leading many to wonder just
what is going on over there. The company also very publicly found itself in hot
water for training its chatbot on copyrighted material. Hollywood megastar
Scarlett Johansson is taking on one of the biggest names in tech, OpenAI co-founder Sam Altman.
And then, last week, a couple other bombs dropped.
The company's chief technology officer announced she was stepping down,
just as news broke that OpenAI was remaking itself,
from a non-profit to a for-profit corporation.
It's a seismic shift for a company that, when it was founded,
pledged to develop artificial intelligence for the public interest.
The Wall Street Journal's owner, News Corp,
has a content licensing partnership with OpenAI.
Our colleague Deepa Seetharaman has reported on OpenAI's entire saga.
What has it been like covering OpenAI?
Dynamic.
Great word.
Yes, it's been really fast moving. There'll be lulls, and then all of a sudden a bunch
of stuff will happen all on top of each other, which basically underscores the fact that this is a time of
chaos in this company's life.
I mean, you have this company that is going through
tremendous change and so you're seeing in real time
a company tearing itself apart.
Now, as OpenAI tries to piece itself back together,
its decisions could change the business of artificial intelligence forever.
Welcome to The Journal, our show about money, business, and power.
I'm Jessica Mendoza. It's Tuesday, October 1st.
Coming up on the show, what exactly is happening at OpenAI?
Courage. I learned it from my adoptive mom. Hold my hand. You hold my hand! Woo!
You can't imagine the reward.
Learn about adopting a teen from foster care at adoptUSkids.org.
Brought to you by AdoptUSKids,
the US Department of Health and Human Services,
and the Ad Council.
Things at OpenAI have been coming to a head
since last November,
when the nonprofit board
that governs the company abruptly fired CEO and co-founder Sam Altman.
Sam, at that point, at least from the external perspective, really seemed to be on top of
the world, incredibly central to the broader AI movement, and, you know, somebody who just went around the world talking to world leaders
and talking to policymakers and visiting the White House and just generally looking like
the face of this revolution.
But then he's in Vegas, he is asked to join a call, he sees his co-founder, Ilya Sutskever,
and Ilya tells him,
hey, Sam, you're fired, basically.
The move sent shockwaves through the company.
Investors and a huge share of the employees
rallied around Altman.
And just five days later, Altman was rehired as CEO.
The dramatic reversal highlighted tensions that had been brewing at the company almost
from its founding.
When OpenAI was started in 2015, one of its main priorities was safety, making sure that
the potentially powerful technology its scientists were developing would never spin out of control.
The point of the company was to be a research lab that would be uniquely motivated, because it was a non-profit, to create powerful and valuable AI without the motivation or the corrupting influence
of having to turn a profit.
You know, where companies have a set of incentives
where they have to make money and generate a profit
and continuously grow.
This wasn't supposed to be that.
It was supposed to be an initiative
to really understand and develop
the world's most powerful AI systems
without those incentives.
So being a nonprofit was like part of its DNA.
That was the point, really.
That was the point.
But here's the thing,
AI, developing AI, costs a lot of money.
Surprise.
A lot of money.
It's not chump change.
I mean, you need billions of dollars in some cases
to develop these AI models.
And you need to hire a lot of really incredible people
who work at places like Google and Facebook and elsewhere
that are some of the brightest scientific minds.
Trying to bankroll its lead in the AI race while also staying true to its founding mission
became a constant challenge for OpenAI.
A few years in, the organization's lead funder left after a power struggle over how OpenAI
was being run, and all his money left with him.
To try to solve its funding crunch, OpenAI decided to court new investors.
So it created a for-profit arm
within its nonprofit org structure.
I feel like I hear about for-profit companies
that have nonprofit arms,
but the opposite is a little bit less common?
Yeah, definitely.
I mean, this is the inversion of that.
And so then you have this company that is inside of a nonprofit.
So initially, it too is telling investors, our goal isn't to maximize profits, our goal
is to do this research and you are at risk of potentially losing your entire investment.
And that is sort of the message and they start to take more investments
from other investors.
There are a lot of, like, venture capital firms, and, you know, over time they start to take
on what winds up being billions of dollars from Microsoft. And as this is all happening,
there are questions being asked of,
should we be taking this kind of money? Does it make sense?
Then in 2022, OpenAI released ChatGPT.
It was an instant sensation.
But that meteoric rise didn't ease the tensions at OpenAI.
It made them worse.
While the company was publicly hailed for its groundbreaking product, some inside OpenAI
worried that the chatbot's success would encourage the organization to release new
products even more quickly.
There are some employees that have some misgivings, because, you know, there's a lot of different
bad things that can happen when you have these
kinds of systems that speak like humans and can be very convincing if they don't have
sufficient guardrails.
And the product is really compelling and the company just keeps improving it and stress
testing it and trying to get better and better.
But for some of the employees and some of the researchers, there's a little bit of a
feeling that, okay, well, maybe we should have done this before. But at the time, it's really
explained away kind of as, well, no one knew it would get this big. It was just unpredictably viral.
The concerns over OpenAI's guardrails are part of what led the nonprofit board to fire
Altman.
But the investors who'd been putting money into the company played a big part in bringing
him back.
This is a very short-lived firing, I think end to end.
We're talking about five days, so much so that the company took to calling this period
of the company's life "the blip," as just like, oh, it's just something that happened, now
it's back to normal.
But nothing went back to normal after that.
That's next.
Sam Altman's return to the CEO's office was a tipping point for OpenAI.
Once he was back, investors started claiming more power.
One of their first moves was to kick off the board members
who had voted to fire Altman.
The big lesson among the investors is that OpenAI needs
to start looking more predictable, needs a board made up
not of nonprofit people, but of people who have run companies before or
are big parts of companies and understand how business works, understand how tech works.
That's where a lot of OpenAI investors are sort of saying very publicly, like this needs
to be a more predictable entity because they'll feel more in control and more like they understand
this company and that a group of a few people can't just overthrow the CEO without warning
again.
Soon, OpenAI started hiring new board members to replace the ones who'd been removed.
They're adding a ton of board members this year to try to make it look like it's a more
serious board.
They're all like very corporate, very tech background, and very just kind of what you'd
expect of a Silicon Valley style board member.
And they've got people from government, former government officials, like a board that people can really count on
to not make rash, sudden decisions like firing the CEO.
So that if they ever wanted to fire Sam again,
there would be a process in place and notice,
and there would be a lot of predictability
with any kind of decision.
And this also starts a series of conversations internally
about what it would look like
if OpenAI shed the nonprofit altogether.
Hmm.
How did employees react to that idea?
What you see is that over time, some people that are part of
the old culture, so the culture that thinks about AI safety or thinks about
sort of the unintended consequences of AI, who think about things like
existential risk, those people increasingly feel squeezed out of the
company. And then you have more people coming in and getting hired that are involved in product
and know how to sell things and understand what people want, right?
And in the spring and through the first half of the year, OpenAI hires its first CFO, Chief
Financial Officer.
It hires its first Chief Product Officer, both signs that
it's trying to be a company that is building products and just trying to be more appealing
to the general consumer and make money that way.
And you know, all the while, though, you've got concerns that this
push into products, building what one executive would later call shiny products,
is the kind of thing that's going to distract the company from
its original mission of, one, building artificial intelligence and, two,
building it in a way that is safe and secure and can provide prosperity broadly for the world.
That now it's really distracted by this desire to make money and a desire to be relevant.
Among the people who felt like they were being squeezed out were some of OpenAI's co-founders
whose vision
shaped the company's original mission.
One of the big first departures is Ilya Sutskever, who is like this highly respected chief scientist
who OpenAI researchers really, really admired inside the company, and he leaves in May.
And his departure really marks kind of the end of an era where the company felt like
it was part of this great scientific vanguard, right?
And now this guy who's so at the forefront of that
is leaving.
I mean, there are plenty of scientists left at the company.
I don't want to make it sound like they've all fallen,
that they've all left, but this was a really big one.
This was the initial draw for so many
of those scientists early on.
Sutskever's exit set off a ripple effect.
Immediately after he left, another key researcher
who also worked on safety, a man named Jan Leike, resigned.
A few months later, John Schulman, another co-founder
and a widely respected researcher, left as well.
And so you've got, really within a few months, just this, like, boom, boom, boom, three
co-founders step away.
These are sort of key safety people at the company.
Like, what did that mean for safety as a priority at OpenAI?
I think there is a feeling among some factions of the company that OpenAI was taking the
eye off of safety increasingly and putting it more on developing products that people
would want to use, which is exactly what the company was trying to avoid when it was founded.
So there's a feeling of like, oh, we've just actually done a full U-turn and we are
no longer what we were.
And then there are other factions at the company that say it's not about, you know, we're not
doing less on safety.
We're just doing more on product.
Last week, OpenAI took another big hit when its chief technology officer, Mira Murati, resigned.
Altman found out just a few hours before the public did.
She was an important leader and an integral part of OpenAI's operations and strategy.
She's the CTO of what is one of the most important technical companies in the world. And she was in charge of creating processes and building products and building out teams.
She's a central cog in the system.
She really made that company function.
And so the fact that she's leaving, I mean, it takes this institutional knowledge out of
the company, and it's not an insignificant amount of institutional knowledge.
You know, this is a person that helped resolve conflicts and helped teams kind of get their products out the door.
In a statement announcing her resignation, Murati said she wanted to, quote,
create the time and space to do my own exploration.
Her departure, coming after so many key founders
and executives left, is a major blow to OpenAI.
When Murati resigned, Sam Altman was away,
speaking at an Italian tech conference.
On stage, he was asked about the upheaval
back in Silicon Valley, and he said he hoped
that this will, quote, be a great transition
for everyone involved.
A spokesperson for OpenAI said, quote,
we remain focused on building AI that benefits everyone,
adding that, quote, the nonprofit is core to our mission
and will continue to exist.
This week, Altman is expecting to close a fundraising cycle
of about $6.5 billion.
And he's getting some marquee names, NVIDIA, SoftBank, and
a new round of funds from Microsoft.
If Altman is successful,
OpenAI is expected to hit a valuation of $150 billion.
He's been the pitch man.
He's been talking to people about what that could look like.
And in the process of those discussions, he's also committed to saying that OpenAI is going
to be for-profit.
And it commits to doing that within a two-year timeframe, at which point investors who
aren't satisfied can ask for their money back.
It's a real risk and it puts a lot of pressure on the company to transition to make this
shift rather quickly.
And then there's another really big shift that happens, which is, you know, up until
now Sam Altman hasn't had any kind of equity in OpenAI, and that was a selling point, right?
He's been telling politicians
and lawmakers all over the world that the fact that he doesn't have a stake in OpenAI
means that he's more neutral, and that means he can slow down development
if it seems like it's going too quickly. And it's a sign that he's sort of taken a step back so that he isn't corrupted by money, right?
And now he's likely to take some kind of equity stake in OpenAI.
So is OpenAI's core identity
fundamentally different now? Is it a different company?
I mean, I don't think you can argue that it hasn't fundamentally changed. You know,
it's just an argument of whether or not those changes add up to a good thing. And
there's a lot of disagreement about that. But everyone can see that it's just so far from what it started as originally, and it's just changed so fundamentally.
It's just not at all what it used to be.
On a higher level, what does the shift at OpenAI mean?
Will it affect the development of artificial intelligence as a sector?
It's about incentives.
And so now there are concerns that OpenAI's got a different set of incentives
that maybe inspire the company to move even more quickly, even more aggressively,
even more ambitiously about deploying its technology.
And the concern is that less and less and less emphasis lands on the safety part
that was so critical in the early stages.
And that could kind of have a domino effect on other companies as well.
Right, right. Exactly. Because OpenAI is in the center of the spotlight and it is very influential among
all the tech companies, including tech companies like Google.
And so every time it makes a move, it does send a signal to the rest of the industry
about what the norms might look like and what the norms might be.
That's all for today, Tuesday, October 1st.
The Journal is a co-production of Spotify and The Wall Street Journal.
Additional reporting in this episode by Tom Dotan and Berber Jin.
Thanks for listening.
See you tomorrow.