The Journal. - Artificial: Episode 1, The Dream
Episode Date: December 3, 2023
In 2015, a group of Silicon Valley heavy-hitters met for a dinner that would change tech history. They believed that the time had come to build a super-intelligent AI, and they founded a non-profit lab to try to do it. In part 1 of our series, Artificial: The OpenAI Story, we explore the company's idealistic origins and speak with early employees about the struggle to make their AI dream a reality.
Further Reading:
- Elon Musk Tries to Direct AI—Again
- The Contradictions of Sam Altman, AI Crusader
Further Listening:
- The Company Behind ChatGPT
- The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT
- OpenAI's Weekend of Absolute Chaos
Transcript
In 2015, a young executive named Greg Brockman arrived at a hotel in Silicon Valley.
He was going to a dinner that would change tech history.
So this was at the Rosewood in Palo Alto.
Very nice, very nice view of, you know, kind of the rolling hills.
And you can, you know, you can see the interstate in the distance.
Always love a view of an interstate.
Despite its proximity to the interstate, the Rosewood is a classy place.
It calls itself an urban retreat.
And a strip steak in the hotel restaurant will set you back 70 bucks.
It's a place where tech types and investors
hatch plans and cut deals.
That was the kind of dinner Greg was headed to.
You know, it was like 10-ish people, something like that.
And I actually showed up a little bit late
and I felt very bad
because I felt like I kind of missed all the good stuff.
But fortunately, the opposite was true.
I think the good stuff was about to begin.
Around the table were a mix of startup people and AI researchers.
Greg, for example, had been an executive at Stripe, the payments company.
Sam Altman was also there.
He was the president of Y Combinator, the famous startup
accelerator. And on the AI side, there was Ilya Sutskever, a top machine learning researcher.
And as you might expect, given the guest list, it wasn't your average dinner conversation.
They were talking about something kind of sci-fi sounding. AGI.
It stands for Artificial General Intelligence.
And it basically means a machine as smart or smarter than a human.
You know, actually, in talking about AGI, you know, general intelligence,
that was not something that was very polite at the time.
It was something where if you talked about AGI in the field, you'd kind of be laughed down as, you know, kind of a crackpot or something like that.
Like it was so fringe?
It was very fringe, very fringe.
But you were talking about it.
We were. And that was the rebellious thing.
For decades, AGI had seemed out of reach.
People had been trying to build a machine that could do all the things that humans do,
and they'd mostly failed.
But the people around this table believed the holy grail, AGI, might finally be within reach.
Maybe they could build it.
AI seems to be advancing. What can we do to help?
Is it too late to start a lab with a bunch of the best people?
And kind of the conclusion from the dinner
was it was not obviously impossible.
And on the drive back to the city,
Sam and I kind of looked at each other
and we said, let's just start this.
Let's do it.
Greg, Sam, and some of the other diners
went for it.
Together, they founded an AI startup called OpenAI.
Its ultimate goal?
Building AGI.
From the beginning, OpenAI was unusually idealistic.
Its founders promised to shun profit and share their research.
Their mission was to develop safe AGI that would benefit all of humanity.
In the years that followed, many of the dreams hatched at that dinner would come true.
OpenAI would go on to develop the viral chatbot, ChatGPT. It would become one of the
top AI labs in the world. But to get there, the company's leaders would compromise nearly
every one of their founding ideals.
From the Journal, this is Artificial, the OpenAI story. I'm Kate Linebaugh.
Over four episodes, you'll hear how a little-known startup built one of the world's most viral tech products
and nearly tore itself apart in the process.
A revolt at OpenAI.
OpenAI announced the firing of co-founder and CEO Sam Altman.
Hundreds of OpenAI employees are threatening to quit.
I think this might be the craziest story I've ever covered.
This is Episode 1, The Dream.
There was another person at that 2015 dinner,
someone we haven't told you about yet.
But he would be very important in setting the intellectual agenda for OpenAI.
His ambitions and paranoias
would determine the shape of this new lab.
That person was Elon Musk.
Here he is on Fox News earlier this year.
Yeah, so I've been thinking about AI for a long time, since I was in college, really.
It was one of the things that, the sort of four or five things I thought would really affect the future dramatically.
Many people who work in AI emphasize the good it could do.
Smart machines could do the work of humans more cheaply and quickly.
Productivity would rise. Prices would fall.
AI could unlock new scientific discoveries,
maybe even find a cure for cancer.
But Musk has long been concerned with the flip side.
What if machines became too smart?
Could they take over? Even wipe us out?
I think we should be very careful about artificial intelligence.
If I were to guess at what our biggest existential threat is, it's probably that.
That's Musk speaking at MIT back in 2014.
With artificial intelligence, we are summoning the demon.
You know, you know those stories where there's the guy with the pentagram and the holy water,
and he's like, yeah, you sure you can control a demon?
Didn't work out.
AI has always been a source of fascination and paranoia for Elon.
That's our colleague Berber Jin, who's been reporting on the role Elon Musk had at OpenAI.
He really wants to explore this technology and sort of develop the most powerful version of it. But he's also extremely concerned that if AI becomes extremely powerful and intelligent,
it could outsmart us and defeat us and, you know, take over the world.
Do you think he was alone in those concerns?
He wasn't.
At the time, in the early 2010s,
this sort of idea that AI would become an all-dangerous technology
was becoming really popular.
One reason for that was the 2014 release of an influential book about AI called Superintelligence.
Talk to enough people in AI and this book is bound to come up.
Because it spelled out a lot of the concerns they'd been having for years.
You know, it makes the argument that an advanced AI could essentially one day take over the world and it could cause problems for humanity and it'd be so powerful we wouldn't be able to stop it.
According to the book's author, an academic named Nick Bostrom,
the danger isn't just that AI could go rogue. A super intelligent
machine could do significant damage just by misinterpreting our instructions. There's the
classic example of like the paperclip problem. The paperclip problem goes like this. A super
intelligent AI is given some benign goal, like manufacture as many paperclips as possible.
Things go great at first.
The world's paperclip tycoons are laughing all the way to the bank.
Until the super AI transforms all of Earth into paperclips.
So it's kind of really far-fetched, almost fringe ideology around the powers of AI.
That was something that was sort of infecting Silicon Valley
and a lot of leading technologists.
For Musk, these fears around AI centered on one company in particular.
He developed this paranoia around, you know, advanced AI technology,
and he specifically developed this paranoia around Google
having a monopoly over developing advanced artificial intelligence.
AI is integral to Google products like Search and Maps.
And by 2015, the company had built up a deep bench of top AI researchers.
That concentration of power made Musk uneasy.
Here he is on CNBC earlier this year.
Google had three-quarters of the world's AI talent.
They had, obviously, a lot of computers and a lot of money.
So it was a unipolar world for AI.
Musk had a solution: build his own lab, one that would be the anti-Google.
Okay, so what was the opposite?
What's the opposite of Google, which would be an open-source nonprofit, because Google
is closed-source for-profit?
And that profit motivation can be potentially dangerous.
That was the sort of impetus for him wanting to start a rival research lab that would be a nonprofit, that would be open, that would try and share its research more widely among society in general.
And that's the sort of founding story behind OpenAI.
OpenAI became a reality in December 2015.
Its founders included many of the people
who'd been at that initial dinner at the Rosewood.
Elon Musk, Greg Brockman, Sam Altman, Ilya Sutskever.
The lab's mission would be to build AGI,
a super-capable artificial intelligence.
And they'd do it to benefit all of humanity.
In a blog post, the founders laid out their commitments.
OpenAI would be a non-profit.
Instead of serving shareholders, it would work to, quote, build value for everyone.
It would be open.
Researchers would be strongly encouraged to publish their work and their code.
And it would be safe, meaning they would prioritize good outcomes
for all. And if it's a non-profit, then who's going to pay for all of it? So Elon was supposed
to be the financial linchpin for the entire project.
That's our colleague Deepa Sitharaman, who covers OpenAI.
He committed to helping OpenAI get a billion dollars in funding, meaning OpenAI would raise money from a lot of other investors,
and then Elon would sort of fill in the gap between that amount and the billion-dollar mark.
OpenAI had money and a mission,
but it needed someone who could turn those dreams into reality.
That person was Greg Brockman,
the guy who'd shown up late to the Rosewood dinner.
Greg signed on as OpenAI's first chief technology officer.
You know, Sam had a day job.
Elon had a day job.
Everyone else had day jobs.
But I did not.
I was unemployed at the time
and I was able to really dedicate myself
to trying to pull together an initial team
and try to figure out
like what this thing could be.
Greg takes the company
and actually builds it.
He's an engineer.
He likes to code.
He's in the details.
He's in the weeds.
And he's a huge, like, just a force of nature.
Somebody who's pushing people to do better and better and better work, more and more and more work.
Greg is what you might call a workaholic.
He once wrote that he spent the holidays with a new love interest, but instead of doing holiday stuff, Greg taught himself machine learning. He says she was supportive.
One of the first things on Greg's to-do list was to recruit some top-tier researchers, and there weren't that many of them.
One of the people he tried to coax on board was Pieter.
I'm Pieter Abbeel. I am a professor in artificial intelligence at Berkeley, which I've been for many years now.
Pieter is a roboticist. If you Google him, you'll see a lot of photos of him with robots.
Sitting on a robot, leaning on a robot.
In one picture, a robot playfully tugs at his shirt.
Pieter agreed to start working with OpenAI.
When you kind of started there, in the very beginning,
can you kind of describe what the office was like?
I mean, the very early days, it wasn't even an official office. It's Greg's apartment.
It's not a big apartment. I mean, San Francisco, it's the middle of the city. And there's just
a large, relatively large kitchen for an apartment. And there is a living area with couches.
And that's it.
And then there's a bedroom with some additional space.
If you want to do things while sitting on the bed.
And that's the scene.
And that's also where Elon would stop by.
He would just sit on the couch and then be like, OK, what's going on?
You know what has been happening this past week?
That's the scene.
That's it.
But then the energy is just unprecedented.
How do you mean?
It's 10 of the very smartest people in the world.
I'm not going to claim that there's nobody in the world
who's equally smart as the 10 people who are there.
That would be incorrect.
But I would say there's nobody who's fundamentally smarter.
Those 10 smart people were at Greg's apartment because, like Pieter, they believed that maybe, just maybe, they could build a machine as capable as a human.
That's the dream, in some sense, all along, from when I got into this field. It was a dream that was too crazy to talk about because it seemed so far-fetched that you shouldn't even be talking about it.
But I would say in that moment, for me, things had changed.
Not saying that in that moment I was thinking, hmm, we can do this tomorrow, but it seemed a reasonable goal to start pursuing.
Pieter was feeling optimistic because at this time, the entire field of AI was undergoing a sea change.
From 2012 to 2015, it was just a major explosion in progress.
Like progress in AI was just completely unprecedented.
Things were finally starting to work because of a new approach to AI called deep learning.
With deep learning, computers are fed a massive amount of data.
The system then detects patterns in that data and calculates probabilities that help the system learn. So, for example, if you feed one of these systems thousands of pictures of cats,
it will start to recognize what a cat looks like.
And this was huge.
Suddenly, computers could recognize all kinds of images, not just cats.
Remember when Google Photos started recognizing your mom in pictures?
That was because of deep learning.
And it wasn't just images.
Computers started to recognize patterns in text.
When Gmail started offering to finish your sentences in emails,
that was also because of deep learning.
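The learning loop described here, show the system many labeled examples and let it adjust itself until its guesses improve, can be sketched in a few lines of Python. This is a toy illustration with made-up features and numbers, a single-neuron learner rather than anything close to the deep neural networks behind Google Photos or Gmail:

```python
# Toy sketch of the deep-learning idea: feed the system labeled data,
# let it detect a pattern by nudging its internal numbers (weights)
# whenever it guesses wrong. All data here is invented for illustration.

# Each "image" is reduced to two made-up features:
# (ear_pointiness, whisker_density), labeled 1 for cat, 0 for not-cat.
examples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),  # cats
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),  # not cats
]

# Start with arbitrary weights and adjust them toward fewer mistakes.
w1, w2, bias = 0.0, 0.0, 0.0
for _ in range(20):                      # several passes over the data
    for (x1, x2), label in examples:
        prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = label - prediction       # -1, 0, or +1
        w1 += 0.1 * error * x1           # 0.1 is the learning rate
        w2 += 0.1 * error * x2
        bias += 0.1 * error

def looks_like_cat(features):
    """Classify a new example with the learned weights."""
    x1, x2 = features
    return w1 * x1 + w2 * x2 + bias > 0

print(looks_like_cat((0.85, 0.75)))  # prints True  (cat-like input)
print(looks_like_cat((0.15, 0.25)))  # prints False (non-cat-like input)
```

Real image models work the same way in spirit, but with millions of weights learned from millions of pictures instead of two weights and six examples.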
And so the progression had become so fast already then.
It's even faster now.
But already then, the progress was so fast that it started to be reasonable to think again about AGI, especially because the progress was so unified.
That rapid widespread progress suggested that with enough data and enough computing power, computers could learn just about anything.
Maybe they could even achieve the Holy Grail,
artificial general intelligence,
a super machine that could do all the things that humans can,
see, write, reason.
You have this super lofty goal.
Was it clear how you were going to achieve that?
It was not.
Like, it feels like with the rate of progress,
things could become possible,
but there was never a clear roadmap on,
oh, like, if we do this ABC, then you will achieve it.
That's Peter Chen.
Yeah, another Peter.
And he is also into robots.
And he was also at OpenAI in the early days.
So how did you approach this task?
What kind of things did you and others work on?
Figuring out how to answer that question was a big part of the research.
And what made that environment really fun was you are even allowed or encouraged to think about those things.
And this was not the case elsewhere. You couldn't find this kind of environment.
By this point, the Peters were no longer in Greg's apartment. OpenAI had an actual office,
which happened to be on top of a chocolate
factory. Researchers took full advantage of their freedom to explore all kinds of Wonka-esque ideas.
They built a kind of arcade where AIs could learn by playing games like Pong and Pac-Man
over and over. They developed AIs that could generate pictures.
Some of them even kind of looked realistic.
And they programmed robots.
Robotics was another key idea, right?
If you want to build a truly general AI agent,
then it probably should know how to interact with the physical world.
You need a body.
You need a body to learn the physical world.
One of their projects was teaching a robotic hand to solve a Rubik's cube.
But it all felt a long way from AGI.
And as OpenAI's researchers were casting about for a way forward,
their funder-in-chief was starting to get impatient.
That's next.
When Elon Musk helped found OpenAI in 2015, he had a lot of other commitments.
He was running Tesla and SpaceX.
And yet, he still made time for this new AI startup.
Here's our colleague Deepa Sitharaman again.
You know, he's recruiting employees.
He's meeting with company leaders.
He's talking about the vision.
He is holding off-sites at SpaceX for OpenAI employees.
He would also conduct polls among the employees to see when they thought AGI could be achieved.
And what were his expectations for what this lab could pull off?
So, you know, initially he sets an aggressive research timeline. He's telling the leaders of OpenAI that if they didn't achieve some kind of major breakthrough soon, that they would be
kind of laughed out of the valley.
I mean, he was constantly in their faces about making more progress and more progress.
One company that was making progress was Google,
specifically a unit within Google called DeepMind.
DeepMind also wanted to build AGI.
Here's Peter Chen again.
DeepMind was another, I mean, still is another extremely successful AI lab.
OpenAI initially was somewhat created as a counterpart to DeepMind.
Like, were they the enemy?
I don't think it was explicitly said they were the enemy, but definitely it was the implicit assumption.
That was the vibe.
Yeah.
Actually, like maybe correct that a little bit.
Like I would more say it's rivalry than enemy.
It's more like rivalry, schools, sports, like that was more of the vibe.
Unfortunately for OpenAI, the rival seemed to be winning.
In 2016, Google's DeepMind pulled off a landmark feat.
Hello and welcome to the Google DeepMind Challenge match live from the Four Seasons in Seoul.
It happened in a hushed room in Seoul, South Korea, around a game of Go.
Go is an ancient Chinese board game that's way more complex than chess.
DeepMind had taught a computer called AlphaGo to play it. And in that hushed
room, the system was facing off against the world's top player, Lee Sedol.
Now the fight is getting really complicated. Google livestreamed the match as two Go
experts provided play-by-play commentary.
This is actually the first time I've seen AlphaGo
playing a game that has this difficult a fight.
Minutes, then hours ticked by,
until suddenly, Lee realized he'd been beaten.
I think maybe the game ended.
No, I don't think so
because it looks like
Lee is still counting.
No, I think he resigned.
Wow, I think you're right.
Wow.
Wow, wow, wow.
DeepMind's system beat Lee in four out of five games.
It was a victory
some AI experts
hadn't expected for another decade.
Google's DeepMind had achieved a real AI milestone,
whereas OpenAI was mostly producing blog posts
and academic papers.
And frustration at OpenAI was mounting.
Musk was frustrated,
and so was Greg Brockman. We'd been around for two years, and what had we done? What had we accomplished? You know, we had great people, but did we have great results? You know, I think that
we all are very ambitious. We all really want to make an impact in this field. We all want to feel
like we can help steer it, that things are going to play out differently with us than without us. Like, that's why we're here. And it wasn't clear if that was going to happen.
Setting records in AI isn't cheap.
Systems like DeepMind's AlphaGo require a ton of expensive computer chips,
not to mention the energy cost to train them.
The year DeepMind had its Go breakthrough, the lab lost about $116 million.
But DeepMind had Google to foot its bills.
OpenAI had no corporate partner.
At the moment, it felt like maybe this company won't be around in a year. Like, are we going to be able to actually, you know, build the computers that we know are going to be required to power the systems we want to build?
And, you know, it was very difficult to get people to write giant checks to this nonprofit endeavor that we were trying to build.
So, yeah, it was very uncertain
for me. One idea OpenAI's leaders discussed to solve their money problem was to create a
for-profit arm that would allow OpenAI to raise cash from investors. But Musk had a different solution. In true Musk fashion, the solution was basically him.
He thought that this nonprofit would function better or at least achieve more things if he was in charge of it.
Musk was pushing for greater control.
At this point, the management structure of OpenAI was pretty flat
and informal. Greg ran
the lab day-to-day, while Musk
and Sam Altman were in the background,
helping set priorities and raising
money. But no one was
in charge.
Musk wanted more influence
over the company's direction.
But others at OpenAI
weren't having it.
Here's our colleague Berber Jin again.
The company pushed back very heavily against that, other leaders of OpenAI, and Elon sort of fell
out. I mean, he is an extremely mercurial CEO, and a lot of people at OpenAI sort of felt like
he was a tyrant. And what resulted was this kind of awkward,
I don't know if you could call it a corporate battle, but, you know, some sort of standoff
between Elon and Sam Altman. And he essentially was pushed out of the organization.
He loses his power struggle, and it's decided among employees and the board that Sam Altman
should be CEO. And once he lost that power struggle, Musk left.
Sam Altman, the young Silicon Valley mover and shaker and master of fundraising, would be OpenAI's
first CEO. Altman and Musk didn't respond to our request for comment.
Musk announced his departure from OpenAI at a staff meeting.
Employees told Berber about it.
Sam essentially brought Elon into the OpenAI offices in San Francisco.
It was supposed to be this sort of orderly transition of power
where Elon, he would be thanked for his time at the company
and Sam would say he was stepping down
to focus more on Tesla.
But when they got to the Q&A, things got awkward.
Musk revealed that he wasn't stepping back from AI.
He was just going to do it at Tesla,
which did not land well with employees.
I think some of them were a bit taken aback.
They were kind of like,
why are you basically leaving us to become our rival? And Elon sort of snapped and called this research intern who was challenging him a jackass.
Elon is really used to getting his way. I mean, he runs companies like Tesla, SpaceX with an iron grip.
And so the fact that he had a different vision for OpenAI and essentially wasn't able to force that vision upon this organization that he helped found is something he's definitely bitter about.
Musk had been OpenAI's primary financial backer.
And when he walked out, he took his checkbook with him.
OpenAI had been founded to pursue a dream, AGI.
But a little over two years in, they hadn't landed on a winning idea, and they were running out of money.
But in one pocket in the office,
there was something brewing.
An idea that would set OpenAI on the path to ChatGPT.
So once we had seen the results internally,
we knew that we were sitting on something big.
I was just like, this is so much better
than anything that we have.
That's next time on Artificial.
Artificial is part of The Journal,
which is a co-production of Spotify
and The Wall Street Journal.
I'm your host, Kate Linebaugh.
This episode was produced by Annie Minoff
with help from Laura Morris.
Additional help from Kylan Burtz,
Enrique Perez de la Rosa,
Pierce Singey, and Lisa Wang.
The series is edited by Maria Byrne.
Fact-checking by Matthew Wolfe.
Additional consulting by Arvind Narayanan.
Series art by Pete Ryan.
Sound design and mixing by Nathan Singapak.
Music in this episode by Peter Leonard, Emma Munger, and Nathan Singapak.
Our theme music is by So Wylie and remixed by Nathan Singapak.
Special thanks to Annie Baxter, Jason Dean, Karen Hao, Rachel Humphries, Matt Kwong,
Sarah Platt, Sarah Rabel, and Jonathan Sanders. Thanks for listening. Episode 2 drops next Sunday, December 10th.