Front Burner - Inside OpenAI’s zealous pursuit of AI dominance
Episode Date: August 6, 2025

Later this month, OpenAI is expected to release the latest version of ChatGPT, the groundbreaking AI chatbot that became the fastest growing app in history when it was launched in 2022. When Sam Altman first pitched an ambitious plan to develop artificial intelligence, he likened it to another world-changing, potentially world-destroying endeavor: the Manhattan Project, in which the U.S. raced to build an atomic bomb. The sales pitch he made to Elon Musk worked. Altman was promised a billion dollars for the project and was even given a name: OpenAI. In a new book, "Empire of AI: Dreams and Nightmares of Sam Altman's OpenAI," tech journalist Karen Hao chronicles the company's secretive and zealous pursuit of artificial general intelligence. Today, Hao joins the show to pull back the curtain not only on the company's inner workings through its astronomical rise and very public controversies, but also on the very real human and environmental impacts it has had, all in the name of advancing its technology.

For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
There are over 613,000 Canadian small businesses on TikTok,
businesses that added $1.4 billion to Canada's GDP in 2024.
Like Edison Motors in Golden BC,
whose electric hybrid vehicles are driving innovation in Canada's trucking industry.
Or XXL Scrunchy in Belleville, Ontario,
who turned extra-large scrunchies into extra-large economic impact.
Visit TikTok Canada.ca to learn more about how TikTok is helping small businesses in Canada
make a big impact.
This is a CBC podcast.
Hi, I'm Elaine Chau, in for Jamie Poisson.
Last week, OpenAI CEO Sam Altman said the latest version of ChatGPT,
which is expected to come out later this month, will be so good that it's kind of scary.
You know, there are these moments in the history of science.
where you have a group of scientists look at their creation and just say, you know,
what, what have we done?
Maybe it's great.
Maybe it's bad, but what have we done?
Like maybe the most iconic example is thinking about the scientists working on the Manhattan
Project in 1945, sitting there watching the Trinity test, and just, you know, this thing that,
yeah, it was a completely new, not human-scale kind of power,
and everyone knew it was going to reshape the world.
And I do think people working on AI have that feeling in a very deep way.
That's Altman speaking with Theo Von on Von's podcast.
Altman, of course, has an interest in talking up how powerful his company's product is.
Though some experts say it's nowhere near what would be considered truly intelligent.
It's not the first time Altman has invoked the Manhattan Project,
the American race to build a nuclear bomb during the Second World War.
In fact, that's how he first pitched his ambitious plan for artificial intelligence.
The sales pitch made to Elon Musk worked. Altman was promised a billion dollars for the project
and was even given a name by Elon: OpenAI. OpenAI would go on to launch ChatGPT in 2022,
which became at the time the fastest growing app in history. In a new book, tech journalist Karen Hao
chronicles the company's secretive and often zealous pursuit of artificial general intelligence.
Hao not only pulls back the curtain on what's happened behind the scenes at the company as it went
through an astronomical rise and some very public controversies, but also the very real impact
as it pulls resources and labor from countries like Kenya and Chile to advance its technology.
Karen's book is called Empire of AI: Dreams and Nightmares of Sam Altman's OpenAI.
Hi, Karen.
Hi, Elaine.
So you profiled OpenAI in 2020, two years before the launch of its most popular
product, ChatGPT, and you were catching them really at a bit of a crossroads then.
How did OpenAI see itself at that time?
Yeah, OpenAI was very much at an inflection point, as you point out, back then,
in that they started as a nonprofit,
but then they realized that the particular path of AI development
that they wanted to pursue required a lot of capital.
So right before I got there,
they had restructured to have a for-profit arm nested within the nonprofit,
and then they received $1 billion from Microsoft.
And so they still very much thought of themselves as a nonprofit,
and they still embodied their original ethos, which was more academic, more, we're a bunch of
nerds doing research. And they weren't really thinking much about building products at all.
But there was certainly starting to be a sense that that would need to happen someday because of
the investments that they were receiving with expectation of returns. So what I realized when I was
embedding within the company is that even though they said that they were a highly transparent
organization, they were going to open source everything, that they were ultimately developing
AI without any consideration for profit, that internally because of all the restructuring that
they were doing, they'd actually started becoming exactly the opposite of what they said.
They were very secretive, very competitive, and they did not actually intend to ultimately
stay a nonprofit; they needed to aggressively race in the commercial realm.
To do what we needed to go do, we had tried and failed enough to raise the money as a
nonprofit. We didn't see a path forward there. So we needed some of the benefits of capitalism,
but not too much. I remember at the time someone said, you know, as a nonprofit, not enough
will happen. As a for-profit, too much will happen. So we need this sort of strange intermediate.
And so even though they themselves culturally still felt like they were retaining a lot of their roots, this was not actually the truth of the matter.
I've heard you describe Altman, Sam Altman, founder and CEO, as a manifestation of OpenAI.
And what do you mean by that?
Yeah, I think it was sort of the opposite way around,
in that I think OpenAI is a manifestation of Altman.
And Altman is very much a product of Silicon Valley. His entire career was spent in the tech industry, internalizing a lot of its ideologies: the idea of building startups to blitzscale them, this concept of aggressively growing them super fast; the idea of always trying to be ever more ambitious by adding zeros to the amount of money you want to raise, the amount of users you want to acquire,
and the speed at which you do it.
So the idea that there is a winner-takes-all game,
and ultimately every startup should aim to achieve monopoly,
otherwise they are going to lose in that race.
You know, we knew that scaling computers was going to be important,
but we still really underestimated how much we need to scale them.
So does that suggest that nobody can do AI as a nonprofit in any kind of a meaningful way?
No, there are other things that you can do for sure,
but to be at the front of scaling research,
I think you probably can't do that as a nonprofit.
So you can see all of those elements with Open AI.
One of the key pursuits for OpenAI is like unlocking AGI or artificial general intelligence. When a machine is able to do what humans can do, human-level intelligence, right?
Right. So one of the things that's sort of important to understand is that artificial general intelligence is an ill-defined term, because there's no scientific consensus around what human intelligence is. So the idea that we might be able to recreate something that we don't really understand is kind of up in the air. And there's a lot of scientific debate on whether or not AGI is even possible. But Altman identified early on in his career that in order to motivate a large group of people,
You need to give them a mission.
You need to give them a sense of purpose.
You need to give them belief.
And I open my book with this quote that he cites in 2013 in a blog post.
Successful people build companies.
More successful people build countries.
The most successful people build religions.
And two years later, he founds OpenAI,
and he pegs its mission to this idea of artificial general intelligence.
And so to me, the reason why OpenAI pursues this goal, or the reason why Altman articulated it, is because he wanted to rally talent, rally capital, rally resources around a religion, around this belief that it is possible to recreate human intelligence.
Altman has brought this up; I'm thinking of his comments to Congress in 2023 about how he believes AGI can lead to solving climate change, curing cancer, creating job opportunities.
Our current systems aren't yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today.
We love seeing people use our tools to create, to learn, to
be more productive. We're very optimistic that there are going to be fantastic jobs in the future
and that current jobs can get much better. And the need to not slow down, or else it would
lose pace to China. I think we want America to lead. We want... So let's get to the perils issue,
though, because I know... Well, that's one. I mean, that is a peril, which is you slow down American
industry in such a way that China or somebody else makes faster progress. And what do you make of
those justifications? So one of the reasons why I
call my book Empire of AI is because I make this argument that these companies need to be thought of
as empires. They're consolidating an extraordinary amount of economic and political power. And
one of the features of empire is that they engage in a narrative that there are good empires and
evil empires in the world. And they, the good empire, have to be an empire in the first place
to be strong enough to beat back the evil empire. And, you know, all empires have civilizing
missions, this idea that if the evil empire gets a hold of the technology first, humanity's going
to be totally devastated. But if we, the good empire, get it first, then we can civilize the
world and bring the progress of modernity to all of humanity. And so Altman, basically, when he's saying
AGI might be able to cure cancer, it might be able to solve climate change, and all of this
will not come to pass, and even worse things will come to pass, if China gets it first, is literally, you know,
the playbook of any empire.
But what I would say is if you look at the track record of what this pursuit has gotten us,
it has not gotten us any closer to curing cancer, solving climate change.
It has in fact led to capital consolidation towards technologies that are not doing that
and away from technologies that could be mitigating climate change and curing cancer.
And at the same time, there's the argument that China is the evil empire. You know, Silicon Valley has long used that to try and ward off regulation, with this idea that if they're not regulated, and if they use export controls to increasingly constrain the amount of computing power that China can get access to, that is going to widen the gap between the U.S. and China, and the U.S. will then be able to develop all these technologies that have a liberalizing effect on the world.
And actually what has happened in the last decade is exactly the opposite.
The gap between the U.S. and China has continued to decrease.
China has weaned itself off of American computer chips.
And American technology platforms have had an illiberalizing effect on the world.
So ultimately, all of this rhetoric is really just self-serving.
At Desjardins Insurance, we put the care in taking care of business.
Your business, to be exact.
Our agents take the time to understand your company, so you get the right coverage at the right price.
Whether you rent out your building, represent a condo corporation, or own a cleaning company,
we make insurance easy to understand so you can focus on the big stuff, like your small business.
Get insurance that's really big on care.
Find an agent today at Desjardins.com slash business coverage.
On the 80th anniversary of the liberation of Auschwitz
comes an unprecedented exhibition
about one of history's darkest moments.
Auschwitz, not long ago, not far away,
features more than 500 original objects,
first-hand accounts, and survivor testimonies
that tell the powerful story of the Auschwitz concentration camp,
its history and legacy,
and the underlying conditions that allowed the Holocaust to happen.
On now exclusively at ROM.
Tickets at ROM.ca.
Not everyone within OpenAI felt the same way as Sam Altman.
Yeah.
Can you walk me through a bit of some of those kind of fractures within the company?
Yeah, so OpenAI has long been plagued by two factions.
One is the boomers.
They believe that AGI will bring us to utopia.
The other faction is the doomers.
They believe AGI could devastate all of humanity, as in potentially kill all humans in the world.
And even though both of them conclude that they should work on AGI and that they should
control AGI development, they have different ideas of how to do that.
Boomers think we need to build this technology as fast as possible so that we can then
release it into the world because that will bring utopia faster.
And Doomers think we should build this technology as fast as possible, but then not
release it so that we can continue to do research in a tightly controlled environment to make
sure that it doesn't ultimately kill everyone. And so through OpenAI's history, there's just
been all of this clashing between the boomers and doomers on ultimately what the company
should do and how quickly they should do it, how quickly they should release their technologies
and so on. And often some of those fractures actually lead to other AI companies developing
down the line, right? Like people who
disagree with Altman and then go on to develop
their own thing. But it just all kind of still
fosters kind of the same goals. Yeah, exactly.
Yeah. So Elon Musk fractures with OpenAI,
develops xAI. Dario Amodei fractures
with OpenAI, develops Anthropic.
Ilya Sutskever fractures with OpenAI, develops
Safe Superintelligence. Mira Murati fractures with OpenAI,
develops Thinking Machines Lab. I mean, it's just
a repeated thing. And so one of the things that's important
to understand about the AI race today,
is that it's actually not just about profit.
It's not just about money.
It's actually also about ideology.
Like, these people are splintering
and forming their own companies
because they have a philosophical difference
with how Open AI is doing it
and they want to do it, quote unquote, better.
I want to spend a little bit of time
on Ilya Sutskever in particular,
co-founder of OpenAI,
former chief scientist.
Can you tell me a little bit more about him?
Yeah, so these boomers and doomers, like, I mean, they themselves use the phrase belief.
They identify as AGI believers.
And Ilya Sutskever is a diehard AGI believer.
And he was one of the co-founders of Open AI.
He was the first ever chief scientist and then continued to play a very prominent role.
And in the early days, his role was, he was very effective at attracting other AI researchers
to the company, because he was already quite a famous AI researcher by the time he became
a co-founder of OpenAI.
He was one of the co-authors of a very, very influential, if not the most influential
AI research paper that was published in 2012.
And so he, you know, he was already an AGI believer when he started,
but he just became even more of an AGI believer eight years in,
when OpenAI's technologies were rapidly progressing
and he was seeing with his own eyes this rapid progress.
And so he started becoming very religious.
People would describe him and his behaviors as kind of like a prophet
where he would pace up and down company meetings
and speculate about the profound
civilizational changes that could happen once AGI arrived.
And he was very torn about whether it would be a profoundly positive thing
or a profoundly negative thing.
But he would use language like,
we need to build a bunker before we release AGI onto the world.
And we need to think about the deep
responsibility that we have.
And at one point, in order to instill that kind of responsibility among OpenAI
researchers, he commissioned an artist to create a good AGI figurine, a wooden effigy.
And during a technical retreat with other researchers at the company, he created a ceremony
where he acted out a scenario: this wooden effigy represented a good AGI that, we then discover, is actually lying and deceitful.
And as a bunch of other senior scientists were standing around a fire pit in a semicircle wearing bathrobes,
he puts lighter fluid on the wooden effigy and lights it on fire.
It's quite the image.
Yeah.
By simply looking at what AI can do, not ignoring it,
when the time comes, that will generate the energy
that's required to overcome the huge challenge
that AI will pose.
And the challenge that AI poses, in some sense,
is the greatest challenge of humanity ever.
And overcoming it will also bring the greatest reward.
And in some sense, whether you like it or not,
your life is going to be affected by AI to a great extent.
And so looking at it, paying attention,
and then generating the energy to solve the problems that will come up,
that's going to be the main thing.
Sutskever was also a key player in what became a very public ousting of Sam Altman by the board at OpenAI in 2023.
Can you pull back the curtain on that? You know, obviously there was a lot of media coverage of that
at the time. Sam Altman, who has drawn comparisons to tech giants like Steve Jobs, was dismissed
by the OpenAI board Friday. Microsoft, the biggest investor in OpenAI, has hired him for their own
research team and added a second interim CEO. OpenAI posting on X that Sam Altman will now
officially return as CEO. But what would you say are some of the key things that the average person
might not have gleaned from the mainstream media coverage of that ousting
and then reinstatement of Sam Altman.
Yeah, so there were two forces at play that led to Sam Altman's ouster.
One was this boomer-doomer clash that was happening within the company.
The other force that was at play is that Altman's a very polarizing figure.
And throughout his career, irrespective of when he was working on AI, he has been followed both by
people praising him as potentially one of the greatest tech leaders of our generation, akin
to Steve Jobs, and by allegations of being a liar, manipulator, and abuser. Essentially,
during this time, after ChatGPT's release, the company is in a state of chaos, because they
didn't actually intend to release the fastest growing consumer app in history. They were actually
releasing what they considered to be a research preview.
And so they're scaling the company, both the servers that they need to serve up and keep ChatGPT
running and also the staff that they need, at a faster pace than any other company
in Silicon Valley history.
And so there's all this like organizational chaos.
There's the boomers and the doomers that are starting to clash more and
more, because now all of these people around the world, hundreds of millions of people, are
using ChatGPT,
and each of them is seeing evidence of ChatGPT either doing wonderful things or ChatGPT being used to do very abusive things,
and therefore each side is seeing clear evidence of why they were the ones that were right all along.
And Altman in the midst of this is also stirring the pot and telling different people different things based on what they want to hear
and not really creating much of a sound environment at all for good
decision-making and governance processes.
And so the board starts having serious concerns based on their more doomer-leaning ideology.
And two executives, Ilya Sutskever and Mira Murati, the chief technology officer, also start
having serious concerns.
Sutskever because of his more quasi-religious beliefs,
and Murati simply because she just likes good governance, and she is very concerned that
Altman's influence on the company is just adding more chaos to chaos.
And so both of them approach the board saying, we don't trust Altman to be the one to lead us to AGI.
And it so happens that the board was already engaging in those reflections.
And so all five people, three independent board members and the two executives, start having intensive discussions.
And after that, the independent board directors break off into their own discussions, and by the end they conclude
that they need to fire Altman.
Have you been able to get any, like, comment from Sam Altman about that time
or actually with the reporting that you've done on OpenAI?
No, so OpenAI did not participate in the book at all.
I gave them many opportunities to do so
and also gave them 40 pages of comment requests,
and they declined to answer.
But you did talk to a lot of people who worked for OpenAI or used to, right?
Yeah, so I spoke to over 90 current and former OpenAI employees and executives,
as well as over 40 other people within the tech industry, broadly speaking,
and people that are close to Sam Altman personally.
You mentioned earlier your focus on this idea of the empire of AI, and a key part of that is the footprint that a company like OpenAI has elsewhere in the world, in particular in the Global South.
Can you tell me a little bit more, in broad strokes, about OpenAI's footprint in the countries that you reported from?
Yeah, so there are two main, I guess you could call them, ingredients that have led OpenAI to really sprawl out into the rest of the world.
One is labor.
They need contract workers to do data preparation, data cleaning, and content moderation in order to develop their AI models.
And the other one is they need land, energy, and fresh water to house, power, and cool their data centers and
supercomputers, and they have already effectively run out of land in the U.S. to do this. And so they are
trying to expand aggressively abroad. And so I went to Kenya, Chile, Uruguay, and I also drew upon
reporting from Colombia and South Africa that I had previously done to try and paint this broader
picture of all of the different parts of the AI supply chain that are happening around the world. And I met
with workers in Kenya that OpenAI contracted to build a content moderation filter
that would ultimately be used in ChatGPT.
And what the filter was, it was basically going to be this thing that sat on all
of OpenAI's models and then blocked any toxic content from ever reaching users.
And what that meant for the Kenyan workers was they were wading through reams of the
worst toxic content on the internet, as well
as AI-generated content, where OpenAI was prompting its models to imagine the worst content on the
internet. And then the workers had to read and label it all into a detailed taxonomy of, is this
hate speech, is this harassment, is this violent content, sexual content, and to what degree
of severity? Is this sexual content that involves abuse? Does that abuse involve children?
And the workers ended up experiencing the same challenges of content moderators during the social media era, where they became deeply traumatized by the work.
And it wasn't just them.
It was their families and their communities that suffered when they lost a key member of their community to mental health challenges.
So I talk about this one man, Mophat Okinyi, who was among the Kenyan workers that OpenAI contracted.
He was on the sexual content team.
And his personality fundamentally changed.
He went from being very extroverted and gregarious to being very socially anxious and introverted.
And he also didn't know how to explain to his family, to his wife and stepdaughter,
why he was undergoing that transformation because he didn't know how to say to them that his job involved reading sex content all day.
That sounded really shameful.
And so over time, without any ability to understand what was going on,
his wife became more and more concerned and started having doubts about their marriage.
And so one day she asked him for fish for dinner.
He went to the store, bought three fish, one for him, one for her, one for the stepdaughter.
And when he came back, all of their bags were packed and they were gone.
And his wife said, I don't understand the man you've become anymore. We're not coming back.
With the kind of content moderation work that you've been talking about, and the impact,
like, is there anything different about how things are being done by a company like OpenAI
versus other tech companies?
So one of the key differences is that these workers are moderating AI-generated content
as well as the stuff that's scraped from the internet.
You know, with Facebook, usually content moderation is user-generated content.
But OpenAI was trying to give the workers a wide
diversity of the worst content.
Right.
And so they were programmatically generating
all these different awful possibilities.
The other key difference that is really important to understand
is that content moderation for these generative AI systems,
it's not actually necessary if the AI companies chose to develop their models in a different
way.
The reason why content moderation exists now for generative AI is because companies made a choice
to train their systems on widely scraped data from the internet.
But that is not actually the only way to train AI models.
You can also curate your data sets such that you eliminate the need for content moderation altogether.
So these companies, instead of doing their own work of data cleaning,
they're offloading that to these workers in the Global South
and paying them a couple dollars an hour to do this.
very grotesque labor.
Another big issue that you write about
is the environmental cost of not just open AI,
but all AI development across the board.
And can you paint a picture for me
of just how resource-intensive that technology is?
So there are a couple stories that I include in my book, but there have also been some really great updates.
So McKinsey projected that based on the current pace of data center and supercomputer expansion for AI specifically, we will need to add two to six times the amount of energy consumed annually in the state of California onto the global grid in the next five years.
And the state of California, for some context, fifth largest economy in the world.
And so this is an extraordinary amount of energy.
And most of that is going to come from fossil fuels because these data centers and supercomputers have to be powered 24-7.
And there was actually a report that just came out of the United Nations in June that said that the four major AI players increased their carbon emissions by 150 percent since 2020.
So that is just in the last few years.
And now we're seeing even more acceleration of these AI data centers,
and there's going to be even more emissions.
So we're talking about reversing a lot of the climate progress
that was made in the last decade,
and also the acceleration of air pollution problems,
because coal plants are having their lives extended
and methane gas turbines are being installed anew.
And then there's also a dimension
where fresh water is needed to cool these data centers,
and so it's accelerating the freshwater crisis.
Bloomberg recently had a story that showed that two-thirds of these data centers
are going into water-stressed areas around the world,
and that proportion is actually rapidly growing.
So I profiled this one community in my book, in Montevideo, Uruguay,
where they were experiencing historic levels of drought,
to the point where the
Montevideo government was mixing salt water into the public drinking water supply just to have
something come out of people's taps when they opened them.
And people who were too poor to buy bottled water were drinking that toxic water.
And women were having higher rates of miscarriages because of that.
And it was in the middle of that that Google proposed to build a data center in Montevideo
and use the fresh water that the people didn't even have.
And that is ultimately the state of play for these data centers.
Today, there are literally communities that are competing with this computational infrastructure for life-sustaining resources.
You know, the Trump administration announced the launch of the Stargate project back in January.
And this is a joint venture with OpenAI and a few other companies where, you know, they will be investing up to $500 billion towards AI infrastructure over the next four
years. Trump called it the largest AI infrastructure project in history, quote, by far.
And Karen, as we kind of close our conversation, like, what do you think, you know, that says about where OpenAI is going next?
OpenAI is just continuing to aggressively build out its empire. I mean, they've already consumed so many resources, and yet they're trying to consume even more at an
exponential rate of growth. You know, after the Stargate initiative was announced,
Altman joked on a podcast, oh, $500 billion sounds like a lot today, but wait until you see me raise
$5 trillion for a single cluster. And if these companies are not stopped, if there are not
checks and balances on their aggressive world-spanning ambitions,
this is a huge threat to democracy.
Like if these companies are allowed to continue consolidating ever more wealth and power and economic and political leverage, they are fast becoming, if they haven't already become the apex predator in the ecosystem where they can just act with impunity in their self-interest however they want.
Karen Hao, thank you so much for your time.
Thank you so much.
That's all for today. I'm Elaine Chau. Thanks for listening to Front Burner.