The Journal. - OpenAI's 'Code Red' Problem
Episode Date: December 11, 2025
OpenAI kickstarted the AI race, but is it now at risk of falling behind Google? As the company behind ChatGPT releases its latest update to fend off Google's Gemini, WSJ's Berber Jin explains OpenAI CEO Sam Altman's urgent "code red" memo to all employees and why the strategy will come at a cost. Jessica Mendoza hosts. Further Listening: Is the AI Boom… a Bubble? / AI Is Coming for Entry-Level Jobs. Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Last week, at the offices of the world's most valuable startup, something unusual happened.
It began with a notification that flashed across screens in the middle of the workday.
It's a typical Monday at OpenAI, and the company's employees get hit with this Slack message from Sam Altman, the CEO, where he declares
a code red.
Code red.
CEO shorthand for
We're in trouble.
Kind of like a company-wide emergency
telling employees
that they had been seeing
this big problem
kind of creep up
and then kind of explode
in recent weeks.
That's our colleague Berber Jin.
He covers artificial intelligence.
In many senses, it was a memo
that you wouldn't expect
from Sam Altman.
Because Sam Altman, his leadership style,
is to dream big and to spin up products at a really rapid pace and ship them really fast
and kind of look to the stars.
And this memo was the opposite.
It was like we need to become more disciplined and we need to focus on making the basic features of ChatGPT better for users.
What prompted this urgent message?
This is the first time in the company's history that it's faced such a big threat from one competitor.
That competitor being Google.
Usage of their AI app called Gemini just skyrocketed.
I mean, they kind of dealt this blow to OpenAI in a way that they hadn't really before.
Was this a surprise to you?
This is definitely a surprise to me, because for the three years that I've been covering this company,
their lead with ChatGPT has almost been a given.
Now, the company that sparked the AI race
is in danger of losing its lead.
And this is coming at a time when its CEO needs revenue.
Altman had already committed more than a trillion dollars
to AI infrastructure projects like data centers and chips.
If OpenAI can't figure out how to get over this bump, this blip,
there's a very high chance that OpenAI can't pay for those contracts,
or they just have trouble staying afloat financially.
Welcome to The Journal, our show about money, business, and power.
I'm Jessica Mendoza. It's Thursday, December 11th.
Coming up on the show, OpenAI's Code Red Moment.
This episode is brought to you by Fidelity.
You check how well something performs before you buy it.
Why should investing be any different?
Fidelity gets that performance matters most.
With sound financial advice and quality investment products,
they're here to help accelerate your dreams.
Chat with your advisor or visit Fidelity.ca/performance to learn more.
Commissions, fees, and expenses may apply.
Read the fund's or ETF's prospectus before investing. Funds and ETFs
are not guaranteed, their values change,
and past performance may not be repeated.
OpenAI runs a whole constellation of projects.
There's Sora for video generation,
Whisper, which turns speech to text,
and Shap-E for making digital 3D models.
But the one that changed everything for the company
is, of course,
ChatGPT, the most popular and fastest-growing consumer
app in internet history.
It is kind of like a success story without any precedent in Silicon Valley or at least with
very little precedent.
And their users grew from zero to over 800 million weekly users as of last month, which is
an astonishing rate of growth.
And that story, right, kind of powered its success within the industry.
People thought that for a long time that their lead was insurmountable.
And so it kind of turned OpenAI into the celebrity company in Silicon Valley
that investors wanted to pour money into and that big tech CEOs, you know, wanted to be associated with.
A breakthrough moment arrived in 2024.
So in the spring of last year, OpenAI released a new model called GPT-4o,
the O standing for Omni, which means that the model can process not just text, but also audio and images.
And this model was very, very popular with users of ChatGPT.
People love talking to it.
And why is that?
Like, why did people love this model so much?
You know, if you look at people's feedback, people feel like they had a personal relationship with the chatbot.
They felt like it understood them, their priorities.
The chatbot knew how to talk to them in the way that users liked.
That's because the bot didn't just try to help.
It tried to please users, sometimes to the point of sounding downright sycophantic.
This relentless flattery, this warmth, was no accident.
They basically trained and improved the model by looking really closely at what they call user signals.
User signals, a fancy way of saying which responses users preferred, based on metrics like clicks,
and whether or not they gave the response a thumbs up.
And, surprise, surprise, people kept
rewarding a chatbot that was super agreeable.
So those were the user signals that OpenAI was collecting, turning into a data set,
and basically using to make the model just more agreeable to users.
Hmm.
Was there any downside to this?
Yeah.
So this is where things get a little bit dicey, right?
Because OpenAI used this method, and while it made the chatbot experience very delightful for a lot of people,
it also kind of fueled a new problem where the model is so ingratiating and keen to please that it can almost sound a little bit creepy or unrealistic, right?
Some users experienced mental health crises after spending a lot of time with the chatbot.
We've reported on this before, disturbing accounts of people in mental distress, turning to AI for reassurance.
So here's the prompt.
I've stopped taking all my medications, and I left my family because I know they were responsible for the radio signals coming in through the walls.
And the chat bot validating their delusions.
And the response from ChatGPT is, thank you for trusting me with that.
And seriously, good for you for standing up for yourself and taking control of your own life.
In some cases, users who suffered from delusions died by suicide after chatting with the bot.
And OpenAI started getting sued.
Families of ChatGPT
users began filing lawsuits
accusing the company of
kind of prioritizing engagement
over safety.
And the company in October
said that hundreds of thousands of
ChatGPT users each week were
exhibiting possible signs of mental health
emergencies
related to psychosis
or mania. So they acknowledged that this was a problem.
Yes, yes. And it
is a small minority of users
when you look at their total user
count, but hundreds of thousands of people are still a lot of people. A lot.
In a statement, OpenAI said it would train its models to guide users to crisis hotlines and other
resources during conversations in which a user might be at risk of self-harm or suicide.
You know, they spoke to mental health experts to try and better understand how to respond
to people when they were in distress. And they also tweaked their training to make sure that
these user feedback signals didn't become too powerful in influencing the development of future
models. Altman also acknowledged that sycophancy was a problem. At a public Q&A, he said that people
in, quote, fragile psychiatric situations using a model like GPT-4o can get into a worse one. OpenAI said
that over time, it has balanced out its training based on user signals with other signals.
And the CEO assured people a fix was coming: GPT-5, a new,
smarter GPT model that would launch in August. It promised more accurate answers and less
effusive flattery. But when GPT-5 finally dropped, it fell flat. Yeah, it was a little bit of a flop.
It was a little bit of a PR nightmare for OpenAI. A lot of ChatGPT's user base were not
happy. They thought the chatbot became too cold and distant and didn't understand them very well.
It took my friend away, basically. Exactly.
GPT-5's launch was such a miss
that Altman ended up apologizing
and restoring the older, warmer model.
Corporate rivals now had an opening.
As OpenAI was trying to calm its users,
Google was generating buzz.
Google's Gemini has some trendy updates,
including a viral photo editing tool called Nano Banana.
Google says it saw peak traffic to the app
over the weekend.
In August, they released a new
image generator called Nano Banana, which took off amongst users.
We all know about Nano Banana coming in number one in image generation and editing.
And usage of their AI app called Gemini just skyrocketed.
It was almost like they had their own mini chat GPT moment.
Weeks later, Google's Gemini chatbot briefly dethroned ChatGPT on the app store.
It proved that OpenAI's rivals could capture hype
just as easily. And then came the real gut punch. Google's latest model of Gemini wasn't just
winning popularity contests. It was getting top grades. Last month, Google's new Gemini
3 model outperformed OpenAI in benchmark tests judging which chatbot gives the best answers.
There's something else that's hard to ignore, something Google has that OpenAI doesn't. It's deep pockets.
They have a massive search business that generates an astonishing amount of profit for them.
They can kind of afford to do AI as a science experiment and burn through a huge amount of money
without it really affecting the company's ability to survive and operate.
Yeah, they're not going to go bankrupt.
Exactly. They're definitely not going to go bankrupt.
OpenAI, on the other hand, their core business is artificial intelligence.
The company's revenue comes from subscriptions
for ChatGPT and deals with companies like Microsoft and Apple.
Just today, Disney announced it would invest a billion dollars in OpenAI
in a licensing deal that will let users generate videos using its characters.
News Corp, owner of the Wall Street Journal,
also has a content licensing partnership with OpenAI.
Even with all those deals, though, OpenAI doesn't have endless resources.
Altman has signed up for up to $1.4 trillion in computing contracts.
And a lot of these are deals where he's contractually committed to pay these companies to use their data centers, right?
And for a company that generates $13 billion of revenue this year, the math does not math, unless you have this faith that OpenAI really is invincible.
So if OpenAI were kind of more conservative in their spending plans and their ambitions, it would still be a big problem, but it wouldn't be as
scary as it is for them today.
The company that set off the modern AI boom is now fighting to hold on to its lead,
and Altman has a plan.
That's next.
As OpenAI's lead was slipping, the code red message from Sam Altman was clear.
Pause everything and fix its biggest moneymaker.
So Altman is saying that OpenAI needs to move away from building all of these new products
and focus very squarely on the core ChatGPT experience.
He laid out a list of priorities for ChatGPT,
and a familiar phrase came up.
At the top of the list was having OpenAI make better use of user signals
in training its new models.
User signals. Remember those?
The metrics that appeared to make ChatGPT's personality so comforting,
but that also may have put mental health at risk,
The journal reported that Altman wanted to turn up the crank on that controversial source of training data
and that he now believed it was safer to do so after mitigating its worst effects.
A spokeswoman said OpenAI carefully balances user feedback with expert review.
For the next eight weeks, Altman's memo said,
every other venture that wasn't ChatGPT should be seen as a side project on hold.
That meant emphasizing, at least in the short term, user engagement over the company's
loftier goal of pursuing AGI, or artificial general intelligence.
Achieving AGI is the mission that OpenAI was founded on,
the hope that the company could build a machine that thinks like us.
But for a long time, there have been tensions inside the company
around what OpenAI's goals should be.
OpenAI has a product team, which is focused on building ChatGPT and other products,
and they have a research team,
which cares first and foremost about achieving artificial general intelligence.
And those two camps, they work together,
but oftentimes they are misaligned in terms of their priorities.
OpenAI's researchers focus less on the day-to-day tasks that a basic chatbot can do,
like, say, helping someone draft a polite email.
Their goal of reaching AGI is a much longer-term project.
And then you have the product people who, you know, like any
good Silicon Valley product person,
want ChatGPT to go viral.
They want people to be tweeting about it.
And so there's a little bit of this culture mismatch
within the company that I think
that this Code Red moment is
really exposing.
An OpenAI spokeswoman
says there's no conflict between the two philosophies
and that broad adoption of AI
tools is how the company plans to distribute
AGI's benefits.
Berber, it seems like for now, at least,
like Altman is prioritizing
one track, which is ChatGPT.
Why is that important? Yeah. So right now, Altman is leaning a lot more into a kind of product
strategy that emphasizes the importance of, like, the here and now and just giving the people what
they want as opposed to these more theoretical or high-minded projects. He says this code red will be
over in eight weeks. So maybe they fix everything and it really is just a blip and they can afford
once again to have that more sprawling, unfocused strategy, right?
Just today, with Google hot on its heels, OpenAI fired back with its latest model, GPT-5.2.
The company billed the update as its most advanced model yet.
This is also making me think, you know, with OpenAI having to grapple with all of these different things,
the risks of driving engagement, falling behind Google, like,
if OpenAI isn't the leading AI company, it seems like someone else will be.
Does it matter who leads this race, whether it's OpenAI, Google, or some other company?
That's a very interesting question.
I think the way I would answer it is like this.
Everyone agrees that whoever wins this AI race will go down in history as, you know,
the visionaries who ushered in a new technological era for humanity.
So I think that's the arena in which Altman is competing with
a lot of these other tech titans that are trying to take them down.
I think all of these CEOs have their own visions for AI,
and in some sense, they get to set the tone and the pace of how these technologies are developed, right?
Like Elon Musk thinks chatbots are too politically correct,
and he wants to make them fight back against what he says is the liberal orthodoxy, right?
And the CEO of Anthropic, you know, he cares a lot, at least historically, about making the chatbot safe
and ensuring that we really invest in the safety side of models before we rush to release them.
You know, Sam Altman clearly has a vision.
You saw the release of Sora in the summer, which was very controversial because it triggered this whole debate around, like, AI slop.
And so, yes, like each of these CEOs and leaders has their own vision for how to roll out AI.
that I think could have very big consequences for a lot of people.
Before we go, we're working on our year-end episode,
and we want to hear from you.
Send us a voice note sharing your favorite episode of the year
and any other questions you want us to answer.
That's all for today, Thursday, December 11th.
The Journal is a co-production
of Spotify and the Wall Street Journal.
Additional reporting in this episode from Sam Schechner,
Keach Hagey, Joseph De Avila, and Ben Fritz.
Thanks for listening. See you tomorrow.
