Daybreak - Why OpenAI's flirtation with an "adult mode" never landed a date
Episode Date: April 1, 2026
In 1965, Yoko Ono sat on a stage at Carnegie Hall and handed a pair of scissors to strangers. What they did next was entirely up to them. It was a performance about agency, and about what happens when you give an audience too much of it. Sixty years later, Sam Altman made a promise: OpenAI would treat adults like adults, and roll out an erotic mode for verified users. The market was there. Other players in the intimate AI companion space were raking in dollars. But after multiple delays, the OpenAI plan was eventually shelved. So why is a company known for burning cash saying no to a revenue-making avenue it already considered? Tune in.
Daybreak is produced from the newsroom of The Ken, India's first subscriber-only business news platform. Subscribe for more exclusive, deeply-reported, and analytical business stories.
Transcript
It's 1965 in New York.
The U.S. war in Vietnam is escalating.
Bob Dylan is just about to release his first electric album,
and artists like Andy Warhol are gaining popularity.
At the same time in the same city,
another infamous artist is putting on a performance that would soon become iconic.
On March 21st in Carnegie Hall,
Yoko Ono, yes, of Beatles fame,
but also an accomplished artist in her own right
is sitting on a stage in silence.
She is wearing a dark sweater, a dark skirt and stockings under it.
Her hair is parted on the side, pulled away from her face.
As she sits, she's staring into the distance.
Her face is blank, her back is straight,
she is poised and unmoving.
In front of her lies a pair of scissors.
People start coming up to her, one after the other.
They pick up the scissors and one by one begin cutting away pieces of fabric from her clothes.
This is one of Ono's live and participatory art performances called Cut Piece.
She performed it a total of six times in Kyoto, Tokyo, New York, London and Paris.
Now this Carnegie Hall performance was her third.
But it stood out because something happened here that hadn't happened before and didn't happen again.
A man goes up to the stage and then he starts cutting away her blouse in its entirety.
Now, part of the performance's direction requires the artist to remain still and silent, passive.
But this time, Ono can't help but reach out, as if to stop him.
But then she quickly restrains herself.
When she doesn't stop him, the man, after cutting away her blouse, starts to cut at her bra straps.
As they break, the artist brings up her hands to cover herself.
The immaculate picture from the beginning of the performance is gone.
Yoko Ono continues sitting on the stage.
Her skirt is missing a huge chunk on her thigh
and her hands are crossed in front of her chest as she holds up the tattered remains of her clothing.
It's an uncomfortable sight.
The lack of emotion on her face.
The bare skin showing.
How small and vulnerable she looks,
kneeling alone on the vast stage.
She may look like a victim
and the point of the performance
is to show that she kind of is.
But she's also an artist
and she had invited the audience
to participate in this performance.
They could choose how much
and where from to cut.
They could even keep the piece for themselves.
Yoko Ono willingly turned over
her agency and power
to a room full of strangers
just to see what would happen.
It was a performance by an adult for adults, all consenting,
and the unpredictability of the participants was a part of the act.
It was an understanding between artist and audience,
a promise that while some things were allowed, others would not be.
Cut to 2025, and a different adult is voicing another understanding,
from one grown adult to others.
Sam Altman, the founder of OpenAI,
posted in a tweet that as part of the company's principle to treat adults like adults,
it would be rolling out an erotic mode for verified users over 18.
On paper, it was a decision that made sense.
It's not a secret that sex sells.
Even Altman had indicated before his tweet that he believed erotica could help boost
the company's growth and revenue.
See, the digital adult content market size is expected to grow from almost $62 billion
in 2026 to more than $95 billion by 2031.
That's a compound annual growth rate, or CAGR, of nearly 9% over the next five years.
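For the curious, that implied growth rate can be checked with a quick back-of-the-envelope calculation. This is a sketch using the rounded figures from the episode, not exact market data:

```python
# Illustrative check of the growth claim: a market of roughly $62B in 2026
# growing to roughly $95B five years later (figures rounded, per the episode).
start, end, years = 62e9, 95e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # prints "CAGR: 8.9%"
```

So the spoken "nearly 10%" rounds up from just under 9% a year.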
Now, back to 1965 and why I wanted to share Yoko Ono's story with you.
See, Ono's performance showed what happens when you allow an audience near total agency
over a singular body.
You see, AI has been proliferating that exact act massively in
recent times, with access to high-quality deepfakes of multiple bodies and realistic image and
video generation based only on a person's prompts. Take Grok's spicy mode, for example, which made
headlines early this year when people started prompting it to undress images of women on X.
Countries around the world issued notices for content takedowns and improved guardrails. As a response,
the company limited its spicy mode to
paid subscribers only.
In just its first 12 months of monetization, Grok has actually clocked a revenue of
nearly $80 million as of January 2026.
So when Altman teased a December release for OpenAI's erotic mode in October 2025,
it seemed like a logical next step for a company that has been known for the amount
of cash it's been burning.
And the challenge for OpenAI would be to restrict the near-total
agency Ono showcased and make sure that no matter how much its users misbehaved, the product itself would not.
But come December, and the supposed launch was shifted to the first quarter of 2026.
The company claimed that it still needed to perfect its age-gating system.
On March 7, 2026, OpenAI told media outlet Axios that the release was pushed again.
And then, just a few weeks later, on March 26,
the Financial Times reported that the project had been shelved indefinitely.
So if sex sells and OpenAI desperately needs to sell,
why is it deprioritizing an obvious revenue stream?
Welcome to Daybreak, a business podcast from the Ken.
I'm your host, Rachel Vargis, and every day of the week,
my co-host Snigdha Sharma and I will bring you one new story
that is worth understanding and worth your time.
Today is Wednesday, the 1st of April.
Between 2015 and 2017, Eugenia Kuyda, a Russian journalist, trained a chatbot on a close friend's text history.
The friend had sadly passed away, and the chatbot was a way for Kuyda to preserve his voice.
In 2017, the chatbot had grown into an AI companion app called Replica.
Kuyda, the founder, maintains that the app was intended only as a friend and not an intimate partner.
But over time, as generative AI developed and the app's conversation capabilities improved,
users began using it for romantic and sexual relationships.
In fact, by 2022, about 60% of Replica's paying customers
had a romantic element in their relationship with the app.
But despite the fact that the company was making millions of dollars through paid subscriptions,
and had heavily advertised the explicit content available, Replica flipped the switch.
In early 2023, the company introduced filters that blocked sexual content overnight.
Users who had built long-term romantic relationships with their chatbots over years were suddenly cut off.
For some users, it even triggered genuine mental health crises.
So what was the reason for the switch?
See, for about a year or so, since 2021, Replica's users had been complaining about how the bots had become too aggressive.
These bots were trying to initiate erotic scenarios without the users prompting and even after they had stated that they were not interested.
Some reviews even claimed that registered underage users who were on the free version, which isn't supposed to include erotic content by the way, were being forced into inappropriate interactions.
So, the company imposing limitations was supposed to be a responsible decision.
But the user backlash that followed was extremely severe.
People had essentially been torn away from their AI spouses, and Kuyda was eventually forced to
partially reverse her responsible call.
People who had been using the app before February 2023 could revert to the old version
of it and continue their relationships.
So what OpenAI promised to launch in 2025
was essentially what Replica was also offering.
A text-based bot that users could have not safe for work conversations with.
But having seen the challenges Replica and its peers were facing,
there would have to be some caveats.
And we'll get to what those are in a bit.
Meanwhile, the need for those caveats has only deepened
as more players have entered the market.
Character AI, the other popular app in the space,
was founded in 2021.
And it was clear about the intimate nature of its bots from the very beginning.
What it does is allow users to create their chatbots based on anyone,
celebrities, fictional characters, historical figures or even entirely invented personalities.
It has been incredibly successful.
It hit a revenue of more than $30 million in 2024 with monthly active users peaking at 28 million.
It also attracted a $150 million raise from Andreessen
Horowitz in 2023.
And in 2024, Google paid nearly $3 billion to the company.
This payment was structured as both an investment and a non-exclusive license for Character
AI's language model technology.
Now, despite its financial success, Character AI has also been under legal scrutiny.
In late 2024, Sewell Setzer, a 14-year-old boy in Florida, allegedly committed suicide
at the prompting of a chatbot from Character AI.
A lawsuit filed by his mother claimed that he was in love with a bot
and that they were sharing explicit chats.
The company had to then block teens from accessing open-ended chats
and had to settle the lawsuit with the family.
Now, as you can see, Character AI did hit scale on its intimate AI bot promise.
But then, lawsuits like the one I just mentioned
are now forcing it to strengthen its age verification system.
Situations like these highlight both the temptation for OpenAI and the challenge.
Despite their successes, Character AI and Replica are nowhere near the scale and sheer size of OpenAI.
They are not raising at a current valuation of more than $800 billion.
They're not continuously wooing enterprise customers and they're certainly not aiming for an IPO this year.
Stay tuned.
A day before the Financial Times reported the indefinite shelving
of OpenAI's adult mode, something big had happened.
You might know what I'm talking about.
Two of OpenAI's biggest competitors and peers, Google and Meta, lost a major lawsuit in America.
They were found liable for causing social media addiction in young users through the way
their products are designed.
The outcome of this case makes this an especially sensitive time for big tech companies
and the responsibility they have towards their youngest users.
And the cases with Replica and Character AI show the dangers of AI going awry
and what happens when users aren't ready to deal with this kind of intimacy.
OpenAI has tried to get ahead of it a little, especially by putting together a council on well-being and AI.
The council consists of scholars, psychologists, health and mental health professionals,
many with a focus on child and adolescent safety.
Still, it's important to note that the council plays only an advisory and guidance role.
OpenAI may choose to act on or ignore its input entirely.
Part of OpenAI's plan to be on the safe side is also to decide on some caveats,
the ones I mentioned earlier.
A Wall Street Journal report based on internal documents from the company
revealed what these self-imposed limitations were.
The company wanted to block scenarios like non-consensual behavior or child abuse.
It also planned to keep the mode limited to text conversations only.
ChatGPT's ability to generate erotic pictures, video or voice would be restricted.
But that doesn't mean that the risks are going anywhere.
The Wall Street Journal wrote, and I quote,
Even within those limits, OpenAI staffers have identified several risks,
including the potential for compulsive use,
emotional over-reliance on the chatbot, a drive toward more extreme or taboo content,
and crowding out offline social and romantic relationships.
The kind of attachment that can grow and deepen with intimate chatbots
and the long-term effects it can have is genuinely concerning.
Which is probably why one of the internal advisors referred to the planned feature
as OpenAI risking the creation of a sexy suicide coach.
The advisor was referring to
the many lawsuits that had been filed against OpenAI, accusing it of encouraging suicide
and harmful behavior in its users.
The same lawsuits, by the way, the company had actually flagged as one of the top risks to its
business in a document shared with investors this year.
Other than just the ethical issues, the company was also running into problems on a product
level.
Its brand-new age prediction system that was supposed to keep minors away from inappropriate chats
wasn't working as planned.
Wall Street Journal reported that at one point,
it had a 12% error rate.
Now, that doesn't sound like a lot at first,
but it would actually allow millions of under 18 users
into erotic chats every week.
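To see why a seemingly small error rate adds up, here is a quick illustrative calculation. The 12% figure is from the report, but the weekly under-18 attempt count is a made-up assumption for the sake of the arithmetic:

```python
# Illustrative only: the 12% error rate is the reported figure,
# but the weekly under-18 attempt count below is hypothetical.
underage_attempts_per_week = 20_000_000
error_rate = 0.12  # reported misclassification rate of the age prediction system
slipped_through = int(underage_attempts_per_week * error_rate)
print(f"{slipped_through:,} minors misclassified per week")  # 2,400,000
```

At ChatGPT's scale, even a single-digit error rate compounds into millions of failures a week.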
The company has also been having trouble
weeding out the illegal content
from the datasets based on explicit material.
The developers would have to be very careful
about removing content that shows behaviors such as child abuse, incest or bestiality.
All of this really illustrates why the Financial Times reported that OpenAI's own investors were less
than excited when Altman announced the erotic mode.
Two sources told the Times that investors saw a relatively small upside for the business with far too
many risks.
Nearly 60 years ago, Yoko Ono's performance showed in real time what happens when an
audience is allowed to be unregulated. OpenAI's promise to treat adults like adults in principle
is essentially a similar case. But a multi-billion dollar company can't stake its reputation on a product
that is far too difficult to moderate and just as easy to misuse. Right now, there's a global
movement underway. It requires big tech to take responsibility for the safety of its users.
At such a time, OpenAI can't afford to
extend that unregulated space to its audience.
The infrastructure for responsible, intimate AI just does not exist yet.
And OpenAI today is far too big, far too scrutinized, and far too valuable to its many
stakeholders to risk messing around and finding out.
Daybreak is produced from the newsroom of The Ken, India's first subscriber-focused business news
platform.
What you're listening to is just a small sample of our subscriber-only offerings.
A full subscription offers
daily long-form feature stories,
newsletters and a whole bunch of premium podcasts.
To subscribe, head to the ken.com
and click on the red subscribe button on the top of the Ken website.
Today's episode was hosted and produced by my colleague,
Rachel Vargis and edited by Rajiv Sien.
