Front Burner - Chaos at OpenAI: did profit and safety collide?
Episode Date: November 22, 2023

When ChatGPT was released last year, artificial intelligence was suddenly a reality in our everyday lives. The company, OpenAI, and its CEO, Sam Altman, seemed to be on a meteoric rise. So why was Sam Altman just fired by a board tasked with keeping AI in check? Steven Levy, Editor at Large for Wired, joins us to talk about the chaos at OpenAI, and who controls the artificial intelligence that could change our world. For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts Transcripts of each episode will be made available by the next workday.
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National
Angel Capital Organization, empowering Canada's entrepreneurs through angel
investment and industry connections. This is a CBC Podcast.
Hi, I'm Damon Fairless.
Well, look out world, here it comes.
ChatGPT can now see, hear, and speak.
Essays, philosophical questions, even therapy.
ChatGPT is a computer program that will write whatever you want quickly and convincingly,
and with better grammar than a grade school teacher.
When OpenAI released its wildly popular ChatGPT last November,
it made the science fiction version of artificial intelligence seem very, very real.
Open the pod bay doors, Hal.
I'm sorry, Dave. I'm afraid I can't do that.
Suddenly, anyone with a Wi-Fi connection had an endless source of knowledge right at their fingertips,
in a convenient, open-source chatbot.
Nothing was off-limits. You name it.
University entrance exams, cover letters, novels, computer code.
And the team at OpenAI, headed by Sam Altman, promised that artificial general intelligence,
AI that can complete any task a human can, was on the horizon.
This will be the greatest technology humanity has yet developed.
We can all have an incredible educator in our pocket that's customized for us,
that helps us learn, that helps us do what we want.
From the start, though, Sam Altman and his team promised that they could do it all safely.
So when the research team signed a multi-billion dollar deal with Microsoft, they put in safeguards.
Namely, a non-profit board whose goal is to make sure that safety came before profit.
Last Friday, that board sacked Sam Altman in the biggest tech shakeup since Apple fired Steve Jobs.
So did the brakes just fly off the world's leading AI developers?
And what does this mean for the future of this controversial technology?
Steven Levy is the editor-at-large at Wired, and he's here with me now to talk about it.
Hey, Steven, thanks for coming on the show.
It's my pleasure.
All right, so I want to start with the board that turned OpenAI upside down.
Basically, as I understand, their job was to make sure that OpenAI stayed true to its mission.
So what exactly is that mission?
The mission is quite crisp: to build artificial general intelligence, meaning to make computers smarter than human beings and able to do all the things humans do intellectually, and to do it in a safe way.
And that's how the company started as a nonprofit. And technically, it remained a nonprofit
while creating a for-profit wing, which encompassed virtually the whole company,
but still governed by that same nonprofit board.
So what do we know about the board?
Like who's on it?
And I guess beyond that, what do they believe in?
Well, a couple of people on it were closely involved with this movement called Effective Altruism.
And one of the aspects of that movement was a fear that artificial intelligence could be dangerous to human beings, which you would think would be in alignment with the original value of OpenAI.
And folks didn't really know too much about the board.
They didn't pay much attention to the board until last Friday.
The board shocked everyone by firing the CEO and face of OpenAI, Sam Altman.
And I want to get into that in just a sec, but I guess before I do that,
there are roughly these kind of two ideological camps.
The belief that we've got to keep AI constrained, safe, take it slow,
and then there's this other camp that we just need to move fast and break things
in the parlance of Silicon Valley, I guess. Can you give me a concrete sense of what those two different forms of AI might look like? What's the fear here?
Well, I think no one would really say that OpenAI was in the move fast and break things camp. Basically, safety was part of the mission,
but there's a question of how aggressively they pursued
that safety part of the mission by commercializing the technology
and competing with other companies that were also developing AI. So there was a tension because OpenAI started as this nonprofit company.
And then, just to keep the servers running and provide the infrastructure so they could run the very expensive models that they run, they had to put themselves into the commercial realm.
So, then again, that wasn't the stated reason why the board fired Sam Altman. It was because they said that he had consistently had a lack of candor in his relationship with them and his interactions with them. So, you know, that tension did exist in the company. I wrote about it a lot in the October issue of Wired; I did a big cover story on OpenAI. But it still is not 100% clear how much that had to do with this particular firing, which triggered all of these consequences and complications that people are still dealing with today.
Right. And so, as you say, I don't think we have much insight yet into the exact reason the board fired Altman, right?
Yeah, they gave a reason, but they didn't give any examples.
OpenAI's board said it no longer had confidence in his leadership.
The reason given, he was not consistently candid in his communications with the board.
But there has been growing...
And Satya Nadella, you know, just yesterday gave a couple of interviews. He said...
I've not been told about anything.
You know, they published internally at OpenAI that the board has not talked about anything that Sam did other than some breakdown in communications.
And I've not directly been told by anyone from their board about any issues.
And so therefore, I remain-
Microsoft owns 49% of the for-profit part of OpenAI.
The part that's worth supposedly $80 billion, right?
So you'd think they'd say, by the way, you know, Satya, we're the board, and here are the details of why we got rid of Sam.
I think the board is saying that for privacy reasons, or the lawyers say, we can't say this. But I can tell you that very few people, if any, of the people who aren't on the board at OpenAI know concrete examples of why the board did this.
Okay, so there's a lot we don't know.
And I think above that, too, there's this,
I find it difficult to get a sense of what Sam Altman himself believes.
So I guess I'm trying to get my head around something I was hoping you could help with.
We've got Sam Altman appearing before Congress in May.
He's asking lawmakers to regulate AI.
My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world.
I think that could happen in a lot of different ways.
It's why we started the company.
It's a big part of why I'm here today.
And then he also says that OpenAI would actually consider pulling out of the EU if it can't comply with their regulations.
So where is he?
Well, you know, I mean, there's some dispute about that statement, which he made during an appearance on this world tour he did, you know, talking to world leaders and regulators and developers and, you know, curious people.
And, you know, he walked that back a little.
You know, it's been pointed out pretty widely that, you know, not only Sam, but any of these
tech leaders saying, you know, regulate us when it comes down to the details, the regulation
they want would constrain other people from some measures they've already decided unilaterally to pursue.
You have to be cautious when any tech leader says, regulate us, and ask whether they're using that as a way to frustrate competition.
They can't afford to do that. But there is, you know, some element, again, that's part of the mission, to do it safely. And, you know, working under a regulatory framework which prevents abuse of the technology is consistent with that mission. So, you know, you have to regard that somewhat skeptically, but not rule it out as a total sham.
Steven, you spent some time with Sam Altman.
What's he like?
Yeah, well, I've known Sam for like 17 years, right?
You know, I kind of followed his career from the time he was a young, you know, founder of a startup.
You know, I know Sam.
He's a very thoughtful person.
But inside, there's an internal combustion furnace that's going full-time to do big things in technology. He believes that big leaps in technology are what is going to make the world a better place for humanity, solve all our problems. And the biggest of those is artificial intelligence.
So when this opportunity came up, he told me, he could have a say in how artificial intelligence changes humanity and the future of humanity.
He wanted to be part of it and also to do it safely. But, you know, I spent a lot of time
in OpenAI. They're very much committed to building safe artificial intelligence, but they're more
excited when they talk about artificial intelligence being superintelligent than they are about the details of making it safe.
If we're successful in what we're trying to do, if we really do make super intelligence,
it is true that a lot of jobs can be done by the AI for almost no cost.
But that also means the cost of goods and services just plummets. I think we have to be ready for a world where global GDP goes up like 50% a year for a couple of decades,
something like almost unimaginable.
There will be less jobs, but the amount of global wealth will just skyrocket.
The science fiction books they read when they were teenagers are still on their shelves and probably well-thumbed.
And they say, I want to be part of that.
I think we made it clear that we just really don't know
what's going on here in terms of this shakeup with Sam Altman's firing. But I guess I'm going
to ask you to speculate a little bit.
Take it as a given that the board's mission was to put safety before profits
or to put guardrails on the development of these artificial intelligence systems.
Do you care to speculate what went wrong, what happened?
What the board is saying, in corporate speak, is: Sam lied to us. They have since made it clear that they weren't talking about some personal misbehavior or some financial skullduggery.
He wasn't embezzling, but more like he was doing things, and things were happening in the company that he was getting in motion, presumably, that he misrepresented to the board or withheld from the board.
And he'd lost their trust because of that.
And they felt that in order to do the mission of building AI safely, he was not a person
that they trusted to run the company to fulfill that mission.
Therefore, he had to go.
I think the board was taken aback at the degree to which the company freaked out at that.
I'm sure that they didn't take kindly to the letter that went out. OpenAI has roughly 770 employees, and over 700 of them, some 95%, signed a thing saying, hey, we want you to resign, board. We don't trust you. And furthermore, before you leave, we want you to reinstate Sam and Greg Brockman, who is the president, and then leave.
Don't slam the door on your way out.
The board has not done that.
The board is still sitting and I think waiting for this to blow over,
maybe over-optimistically.
Yeah, and I want to come back to this, to see if it's possible for this to actually blow over.
But before I do, I just want to ask one more question.
So is it your sense that when the board is saying
that they weren't happy with how Sam Altman
was communicating with them,
is it your sense that Sam wanted to push
the development of AI faster and harder than they were comfortable with,
and that's what that communication breakdown was?
Well, here's a wild card that some people have seized as part of the grand scheme,
and it may well have figured into it somehow,
is that a little before all this happened, in one of his many, you know, interviews on stage, Sam said:
Like four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of, like, push the veil of ignorance back and the frontier of discovery forward. And getting to do that is, like, the professional honor of a lifetime. So that's just.
And he didn't say what it was. And some people speculated that it is like the giant breakthrough
that must be restrained, right? Otherwise it has dangerous consequences. And maybe he wasn't honest with the board about what he was going to do with it or how fast OpenAI was going to develop it or whatever.
That that was a tipping point for them.
There's no indication that that's the case, but it is, you know, something that may have had something to do with it.
I think this is going to be the greatest leap forward that we've had yet, so far, and the greatest leap forward of any of the big technological...
Someone who did seem to want to slow things down is Ilya Sutskever.
He's OpenAI's chief scientist and a board member,
and he reportedly led the push to oust Sam Altman.
But then on Monday, I guess he tweeted that he regretted his actions
and he wants to, quote-unquote, reunite the company.
So, Steven, is there any hope of that, of getting the band back together?
Or has that horse left the barn?
Well, you know, this is a weird part of the drama
because Ilya, in a way, is sort of, in a technical sense,
the heart of the company.
He's a famed AI researcher,
who was involved in a couple of the key breakthroughs
in the recent evolution of AI,
which brought us to this point.
And he's a very passionate believer in AGI, in what the company is doing, but he's also very concerned about safety.
Yeah, I mean, we definitely will be able to create
completely autonomous beings with their own goals.
And it will be very important,
especially as these beings become much smarter than humans.
It's going to be important to have the goals of these beings be aligned with our goals.
For whatever reason, maybe because of that, or maybe he was convinced, as those other board members were, that Sam had not been dealing honestly.
You know, he was the one person who worked for the company who joined these three outside board members in firing Sam Altman.
He was the Judas of OpenAI.
And then a few days later, he did, as you say, a 180 and said, I really regret doing this. And now I'm signing my name to this letter saying, you know, the board is incompetent, they have to resign, and if they don't, I might go to Microsoft, where Sam Altman and Greg Brockman have already said they're going.
Right.
Okay, so I want to pull back to kind of a high-altitude view here, beyond the machinations of this really quite dramatic corporate drama that we're, I guess, witnessing. There's this bigger question, right, where money comes into it. So this has been running as a not-for-profit company, or it was founded that way, and then there's this arm of it that is for-profit. But if Altman and a lot of former OpenAI employees go over to Microsoft, or find a home at some place that is interested in profiting from the development of AI, I guess the big question is, you know, at the end of this, however everything shakes down: do you think that a company whose main mission is to make money can safely usher in artificial intelligence?
Well, interestingly, the premise of OpenAI when they started, and I interviewed Sam Altman and Elon Musk as they were launching, they made it very clear that the whole premise
of the company is that the answer is no.
The answer is that a for-profit company can't be trusted with this powerful technology.
And that's why they started OpenAI as a counterweight to that.
So if Sam Altman, you know, brings his team to Microsoft,
which is an unabashedly for-profit company, it answers to a board, which answers to the shareholders, right?
You know, in theory, Microsoft's board can be displaced.
Then, you know, the whole premise is blown up.
So there's still a mystery here, though, right?
Yeah, well, the first mystery is: what were the actual reasons, the examples, you know, the justification, rather than the high-level description, of why the board fired Sam? Why did Ilya Sutskever, who was a co-founder of the company,
recruited personally by Sam Altman in the very beginning,
why did he join in that?
What were his underlying concerns, if there were any,
and what made him change his mind?
And in the larger sense, what does this mean for the development of artificial intelligence?
I think that's the biggest question of all.
You know, we have been having this debate ever since ChatGPT was released by OpenAI like a year ago.
It thrust the whole world into the debate of what is this technology?
What does it mean for humanity?
Can it be done safely?
Who's going to profit?
Is this going to work against us?
Are we going to lose jobs?
And it depended on how much trust we had in the people who developed it. Obviously,
it's going to be more dangerous to us if it's developed by untrustworthy people, if it's put into the hands of untrustworthy people. And the precariousness of the leading company developing it underlined how dangerous this time is.
All right, Steven, thanks so much.
I really appreciate you coming on.
My pleasure. Thanks, Damon.
That's all for today.
I'm Damon Fairless.
Thanks for listening to Front Burner,
and I'll talk to you tomorrow.