The Diary Of A CEO with Steven Bartlett - AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI!
Episode Date: March 26, 2026
The truth about Sam Altman. AI critic Karen Hao reveals what 90 OpenAI employees told her. Karen Hao is an AI expert, award-winning investigative journalist, and former reporter for The Wall Street Journal covering American and Chinese tech companies. She is also co-host of the podcast The Interface and contributing writer at The Atlantic. Her latest book is the bestselling ‘EMPIRE OF AI: Inside The Reckless Race For Total Domination.’ She explains:
◼️Why the US-China “AI arms race” may be misleading and politically driven
◼️The truth behind the Pentagon using Claude for military strikes
◼️Why AGI is a marketing scam used to consolidate trillion-dollar power
◼️How agentic AI like OpenClaw will automate desk jobs within 18 months
◼️The hidden human cost behind AI training
00:00 Intro
00:02:27 Why The AI Industry May Be Chasing Profit Over Progress
00:04:49 What 250 OpenAI Insiders Revealed Behind Closed Doors
00:10:48 Did Sam Altman Outmaneuver Elon Musk—Or Is There More To It?
00:14:47 What People Really Think About Sam Altman (And Why It Matters)
00:17:34 The Hidden Power Struggle To Remove Sam Altman
00:25:14 The Real Reason Companies Are Racing To Build AI
00:31:35 Do AI CEOs Truly Believe This Will Help Humanity?
00:33:08 Why OpenAI Refused To Be Part Of This Book
00:50:53 Ad Break
00:54:15 What Really Triggered Sam Altman’s Firing—And The Mass Exodus After
01:04:51 Should You Vote Based On AI Policies—And What’s At Stake?
01:12:30 How Robots Updating Instantly Could Change Everything
01:15:11 Will AI Surpass The Best Surgeons—And What Happens If It Does?
01:35:03 What The Klarna CEO Reveals About The Future Of AI And Business
01:38:09 Ad Break
01:41:58 Is AI Quietly Eroding Meaning—And Impacting Health And The Planet?
01:50:52 How We Can Actually Build AI Without Putting Humanity At Risk
Enjoyed the episode?
Share this link and earn points for every referral - redeem them for exclusive prizes: https://doac-perks.com
Follow Karen:
X - https://link.thediaryofaceo.com/7MVVs8B
Website - https://link.thediaryofaceo.com/ARHB0mk
You can purchase ‘EMPIRE OF AI: Inside the reckless race for total domination’, here: https://link.thediaryofaceo.com/CcrcHj2
The Diary Of A CEO:
◼️Join DOAC circle here - https://doaccircle.com/
◼️Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook
◼️The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
◼️The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
◼️Get email updates - https://bit.ly/diary-of-a-ceo-yt
◼️Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb
Sponsors:
Wispr - Get 14 days of Wispr Flow for free at https://wisprflow.ai/steven
Pipedrive - https://pipedrive.com/CEO
Saily - Download from the app store and use code DOAC at the checkout for 15% off
Transcript
One of the most successful conversations we've had this year on the show was with a guy called Chris Kona who talks about ways to make money on the side.
And it got me thinking because our show sponsor is Airbnb, a brand I love, I've used all over the world for the last decade or so.
And this is an unbelievable, untapped opportunity to make some money on the side if you currently are a homeowner.
Let me explain.
So many of us go traveling.
We go on holiday to see in-laws or to go on ski trips or whatever it might be.
And our home sits there, usually actually costing us money because of bills. What most people don't realize is that you can put that house
on Airbnb very simply and very easily. If this sounds interesting to you and you currently
don't list your property when you go away, your home might be worth more than you think.
Find out how much at airbnb.ca/host. That's airbnb.ca/host. So much of what's happening
today in the AI industry is extremely inhumane. But this is me playing devil's advocate.
And logically, it could be the case that the civilisation
that accelerates its research with AI
is going to be the superior civilisation.
No, it's not.
This is a prediction that you're making, right?
Elon's making, Zuckerberg's making,
and do you know what the common feature of all of them is?
They profit enormously off of this myth.
You know, I have all of these internal documents
showing that they're purposely trying to create
that feeling within the public
so that they can extract and exploit
and extract and exploit.
So what do we do about it?
We need to break up the empires of AI.
You know, I've been covering the tech industry
for over eight years,
interviewed over 250 people, including former or current OpenAI employees and executives.
And I can tell you that there are many parallels between the empires of AI and the empires of old.
Like, they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models.
Second, they exploit an extraordinary amount of labor, which breaks the career ladder because someone gets laid off.
And then they work to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs if that model then develops that skill.
And when they talk about that there's going to be some new job,
created that we can't even imagine.
A lot of the jobs that are created are way worse
than the jobs that were there.
And then there's the environmental and public health crisis
that these companies have created.
And how they're able to also spend hundreds of millions
to try and kill every possible piece of legislation
that gets in their way and will censor researchers
that are inconvenient to the empire's agenda.
But what I'm saying is not that these technologies don't have utility.
It's that the production of these technologies right now
is exacting a lot of harm on people, but we have research that shows that the very same capabilities could be developed
in a different way that doesn't have all of these unintended consequences. So let's talk about all
of that. Guys, I've got a favor to ask before this episode begins. The algorithm, if you follow a
show, will deliver you the best episodes from that show very prominently in your feed. So when we
have our best episodes on this show, the most shared episodes, the most rated episodes,
I would love you to know. And the simple way for you to know that is to hit that follow button.
but also it's the simple, easy, free thing that you can do to help us make this show better.
And I would be hugely grateful if you could take a minute on the app you're listening to this on right now
and hit that follow button.
Thank you so, so, so much.
Karen Hao.
You've written this book in front of me here called Empire of AI,
Dreams and Nightmares in Sam Altman's OpenAI.
I guess my first question is, what is the research and the journey you went on
in order to write this book we're going to talk about?
and the subjects within it today?
I took a strange route into journalism.
I studied mechanical engineering at MIT.
And so when I graduated, I moved to San Francisco.
I joined a tech startup.
I became part of Silicon Valley.
And I basically received an education in what Silicon Valley is about
because a few months into joining a very mission-driven startup
that was focused on building technologies
that would help facilitate the fight against climate change.
The board fired the CEO because the company was not profitable.
And this was, in hindsight, a very pivotal moment for me because I thought if this hub is ultimately geared towards building profitable technologies and many of the problems in the world that I think need to be solved are not profitable problems like climate change, then what are we actually doing here?
Like how did we get to a point where innovation is not actually necessarily working in the public benefit and sometimes even undermining the public benefit in pursuit of
profit. In that moment, I had a bit of a crisis where I thought, well, I just spent four years
trying to set myself up for this career that I now don't think I am cut out for. And I thought,
well, I might as well just try something totally different. I've always liked writing. And that's how
after two years I landed a role at MIT Technology Review covering AI full time. And that gave me a
space to then explore all of these questions of who gets to decide what technologies we build.
How does money and ideology also drive the production of those technologies?
And how do we ultimately make sure that we actually reimagine the innovation ecosystem
to work for a broad base of people all around the world?
And so that is kind of how I then set off on this journey of ultimately writing a book.
I didn't realize that I was working towards writing a book,
but starting in 2018 when I took that job
was essentially the moment in which I began researching the story
that I documented in it.
A very timely time to start working in artificial intelligence.
For anyone that doesn't know, this is pre the OpenAI
ChatGPT launch moment that shook the world.
But in writing this book, you interviewed a lot of people
and went to a lot of places.
Can you give me a flavor of how many people you've interviewed
where it's taken you around the world?
etc. I interviewed over 250 people, so over 300 interviews; over 90 of those people were former or current
OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how
it ultimately got to where it is today. But I didn't want to write a corporate book. I felt
very strongly that in order to help people understand the impact of the AI industry, we would also
have to travel well beyond Silicon Valley. These companies tell us that AI is going to benefit everyone,
and that's their mission. But you really start to see that rhetoric break down when you go to the
places that look nothing like Silicon Valley, that speak nothing like Silicon Valley,
and that have a history and culture that are fundamentally different as well. And that's where
you start to really understand the true reality of how this industry is unfolding around us.
Karen, I often try and steer conversations, but in this situation, I feel like it's probably my responsibility to follow.
So with that in mind, I'm going to ask you, where does this journey begin and where should we be starting if we're talking about the subjects of empire of AI, AI generally, artificial intelligence?
And also I'd say one thing I'm really keen to do in this conversation, which I often see is left out of conversations, is let's assume that our viewers know nothing about AI.
Yeah.
So they don't know what scaling laws are or GPUs or compute or whatever.
And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can.
Yes.
Where should we start?
I think we should start with when AI started as a field.
So this was back in 1956, and there was a group of scientists that gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition,
and specifically an assistant professor at Dartmouth, John McCarthy,
decided to name this discipline artificial intelligence.
This was not the first name that he tried.
The previous year, he tried to name it automata studies.
And the reason why some of his colleagues were concerned about this name was because it pegged
the idea of this discipline to recreating human intelligence.
And back then, as is true today, we have no scientific consensus
around what human intelligence is.
There's no definition from psychology, biology,
neurology, and in fact, every attempt in history
to quantify and rank human intelligence
has been driven by nefarious motives.
It's been driven by a desire to prove scientifically
that certain groups of people are inferior
to other groups of people.
There are no goalposts for this field.
And there are no goalposts for the industry when they say that they are ultimately trying to recreate AI systems that would be as smart as humans.
How do we even define what that means?
And when are we going to get there if we don't know how to define the destination?
And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term to refer to this ambitious goal to recreate human intelligence.
they can use it however they want to,
and they can define and redefine it
based on what is convenient for them.
So in OpenAI's history,
it has defined and redefined it many times.
When Sam Altman is talking with Congress,
AGI is a system that's going to cure cancer,
solve climate change, cure poverty.
When he's talking with consumers
that he's trying to sell his products to,
it's the most amazing digital assistant
that you're ever going to have.
When he was talking with Microsoft,
you know, in the deal
that OpenAI and Microsoft struck, where Microsoft invested in the company, it was defined as a system that will generate $100 billion of revenue.
And on OpenAI's own website, they define it as highly autonomous systems that outperform humans in most economically valuable work.
This is not a coherent vision of one technology. These are very different definitions that are spoken out loud to the audience that needs to be mobilized to
ward off regulation or get more consumer buy-in into the industry's quest, or to get more capital,
more resources for continuing on this journey with ambiguous definitions.
I mean, speaking about different definitions through time, in 2015, in a blog post that Sam Altman
wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying
development of superhuman machine intelligence
is probably the greatest threat
to the continued existence of humanity.
There are other threats that I think are more certain to happen,
for example, an engineered virus.
But AI is probably the most likely way
to destroy everything.
In general, when Altman is writing for the public
or speaking for the public,
he does not just have the public as the audience in mind.
There are other people that he is trying
to motivate or mobilize
when he says these things.
And in that particular moment,
Altman was trying to convince Elon Musk
to join him on co-founding OpenAI.
And Musk, in particular,
was spending all of his time sounding the alarm
on what he saw as a huge existential threat
that AI could pose.
And so in that blog post,
if you look at the language that Altman uses
side by side with the language
that Musk was using at the time,
it mirrors all the things
that Musk was saying.
It's identical. I mean, 10 years ago,
Musk was going on podcasts, saying, tweeting, whatever,
that the greatest existential risk to humanity was AI.
Yeah. And so, you know, like, his parenthetical,
that there are other things
that might actually be more likely to happen, like engineered viruses,
it's because up until then,
Altman had been talking just about engineered viruses.
And so now that he needs to pivot to speak to an audience of one,
to Musk,
he has to reconcile what he's now elevating as his new central fear, the same as Musk's central fear,
with what he had previously been saying. So that's why he's like, I think this is the threat now, even though
before I said this. And are you saying that Sam Altman manipulated Musk? Because Elon did end
up donating a huge amount of money to Open AI and co-founding it, I believe, with Sam Altman.
Elon Musk did end up co-founding it with Altman. And certainly from Musk's perspective,
he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor.
And of course, then Musk leaves.
And through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now,
it has become clear that there was a degree to which Musk was actually
muscled out a little bit. And so that's why he's left with this very intense personal vendetta
against Altman saying that somehow Altman tricked him into being part of this. So in 2015, Sam Altman
is writing these blog posts saying this is one of the greatest existential threats. At the same time,
in 2015, Musk is doing some very famous speeches at the time. At MIT, he said that AI was the
biggest existential threat and compared developing AI to summoning the demon.
And what you're saying here is you're saying that Sam Altman was just mirroring the language that Elon was using to get Elon involved in Open AI.
And later, it appears, and again, there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.
Yeah.
So we know from the lawsuit and the documents that have come out in the lawsuit that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, when they were deciding whether or not to
maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit. They decided,
okay, we need to create a for-profit entity. But the question was who should be the CEO of this
for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the
nonprofit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.
But through my reporting, I discovered that Altman then appealed personally to Greg Brockman,
who was a friend of his that they'd known each other for many years through the Silicon Valley scene,
and said, don't you think that it would be a little bit dangerous to have Musk be the CEO of this company,
this new for-profit entity?
Because, you know, he's a famous guy.
He has a lot of pressures in the world.
He could be threatened, he could act erratically, he could be unpredictable, and do we really want a technology that could be super powerful in the future to end up in the hands of this man?
And that convinced Greg, and Greg then convinced Ilya, you know, I think there's a point here, do we really want to give this much power to Musk? And that is why Musk then leaves, because then the two switch their allegiances. They say, actually, we
want Altman to be the CEO, and then Musk is like, if I'm not CEO, I'm out.
So it sounds like Sam, again, managed to persuade someone to do something.
Mm-hmm.
I guess this begs the question.
What do you think of Sam Altman?
I think he's a very controversial figure.
You did an interesting pause.
It's a pause where someone tries to select their words.
Well, this is what's so interesting about those interviews is people are extremely,
extremely polarized on Altman. No one has in-between feelings about him. Either they think he's the
greatest tech leader of this generation akin to the Steve Jobs of the modern era, or they think that
he's really manipulative and an abuser and a liar. And what I realized, because I interviewed so many
people, is it really comes down to what that person's vision of the future is and what their
goals are. So if you align with Altman's vision of the future, you're going to think he's the
greatest asset ever to have on your side because this man is really persuasive. He's incredible
at telling stories. He's incredible at mobilizing capital, at recruiting talent, at getting all the
inputs that you need to then make that future happen. But if you don't agree with his vision of
the future, then you begin to feel like you're being manipulated by him to support him
or his vision, even if you fundamentally don't agree with it.
And this is the story especially of Dario Amodei, CEO of Anthropic,
who was originally an executive at OpenAI.
So for people that don't know, Dario now runs Anthropic,
which is the maker of Claude.
A lot of people probably are more familiar with Claude.
Yeah.
And it's one of the biggest competitors to OpenAI.
And Amodei, at the time when he was an executive at OpenAI,
he thought that Altman was on the same page with him,
and then over time began to feel that Altman was actually on exactly the opposite page of him
and felt that Altman had used Amodei's intelligence, capabilities, skills
to build things and bring about a vision of the future
that he actually fundamentally didn't agree with.
And so that's why people end up with this bad
taste in their mouths. And so, you know, I've been covering the tech industry for over eight years and
covered many companies. I've covered Meta, Google, Microsoft, in addition to OpenAI. And OpenAI and
Altman, it's the only figure that I've seen this degree of polarization with, where people
cannot decide whether he's the greatest or the worst. You mentioned Dario there.
What I found really interesting is to look at how people's quotes evolve over time with their incentives.
So I was looking at all of the things they've said on the record on podcasts in their blog post to see how it's evolved over time.
And Dario, who was the former VP of Research at OpenAI, and has now moved on to Anthropic,
who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI,
that this is a quote, I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I
can't see any reason in principle why that couldn't happen. My chance that something goes really
quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%.
And also you mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first
question I'd ask is, why did Ilya leave? That's a great question. So he was instrumental in trying to
get Sam Altman fired. And he's another one of the people who over time began to feel like he was
being manipulated by Altman towards contributing something that he didn't believe in.
How did you know? Because I interviewed a lot of people. Ilya in particular had two pillars that he
cared about deeply. One is making sure we get to so-called AGI. And the other is making sure that we
get to it safely. And he felt that Altman was actively undermining both things. He felt that
Altman was creating a very chaotic environment within the company, where he was pitting teams against
each other, where he was telling different things to different people. Have you ever spoken to him?
I have. So I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review.
And back in 2019, he has a quote where he says, the future is going to be good for
AIs regardless. It would be nice if it was also good for humans as well. It's not that it's going to
actively hate humans or want to harm them, but it's just going to be so powerful, and I think a good
analogy would be the way that humans treat animals. It's not that we hate animals. I think humans
love animals, and I have a lot of affection for them. But when the time comes to build a highway between
two cities, we are not asking the animals for permission. We just do it because it's important to us.
And I think by default, that's the kind of relationship that's going to be between us and AIs that
are truly autonomous and operating on their own behalf.
And that was in 2019, the year that you interviewed him.
One of the things that I feel like we should take a step back to examine is going back to this
idea of what even is artificial intelligence and what do we mean by intelligence.
And a huge part of the views of the different people and the quotes that you're reading
derives from a specific belief that they each have in this question of what is intelligence,
what constitutes intelligence.
For Ilya, he has throughout his research career felt that ultimately our brains are giant statistical models.
This is not something that, you know, we actually know, but this is his own hypothesis, also the hypothesis of his mentor, Geoffrey Hinton, who also was on this podcast.
This is why they have such a strong conviction in the idea of building AI systems that are statistical models and that this
particular approach is going to lead to intelligent systems as we are intelligent. It's a
hypothesis that they have. It's not one that has been proven by science. And some people vehemently
disagree with them on this particular thing. But if you step into their shoes and take on that
hypothesis and assume that it's true that our brains are in fact statistical engines and that these
systems that they're building are also statistical engines that they're making bigger and bigger and bigger
until they become the size of the human brain. That's why they say that making this comparison
where the system will become equal to human intelligence and then maybe exceed human intelligence
is relevant in their framework. And Ilya gave a talk at one point at this really prominent
AI research conference that happens every year called Neural Information Processing Systems.
It's a mouthful.
But he gave this keynote where he shows this chart of the size of brains and the intelligence
of a species.
And it's roughly linear: the bigger the size of the brain, the more intelligent the species.
And so for him, he thinks he's building a digital brain because he thinks brains are just
statistical engines.
So from that logic, it's like, okay, if we then build a bigger statistical
engine than the human brain, then based on this chart, it will be more intelligent and then
we will be subjected to the same treatment that we've subjected animals to. But it's really
important to understand that these are scientific hypotheses of specific individuals within
the AI research community. And there's a lot, a lot of debate about whether this is in fact the
case. And some of the biggest critics say it's very reductive to think of our brains as simply
just statistical engines. Why does it matter to know the mechanism? Is it not just important to know the
outcome, which is that it's going to be able to, say, make a video for me, or agents are going to be
able to do the work that I do? Does it really, really matter for us to know the mechanism behind it?
Yes and no. So it matters because these companies, they are driving their future actions based on
this hypothesis. So they have decided, we think that this hypothesis is true, like we should just
continue building larger and larger statistical models in the pursuit of artificial general intelligence.
And that's then having global consequences. Like in order to continue doing that, they're hoovering up
more and more data. They're building more and more data centers. They're, you know,
exploiting more and more labor in order to continue on this path.
Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans?
We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing.
Like, they said that we should be building AGI, so we say that we should be building AGI.
But I would like to ask, like, why are we doing that?
Why is it that we are building a technology that is ultimately designed to replace and automate people away?
that is not the enterprise of technology.
Like, we should be building technology,
and the purpose of technology throughout history
has been to improve human flourishing,
not to replace people.
And so this is like a critical part of my critique
of these companies and these scientists
that have just adopted this goal
and have relentlessly pursued it
and have had enormous capital
and enormous resources to pursue it,
is, is this the
right goal? Why are we doing this? Why can't we just build AI systems that do things like
accelerate drug discovery and improve people's healthcare outcomes, which are systems that have nothing
to do with the statistical engines that they're trying to build to duplicate the human brain?
So why are they doing it? I mean, you've interviewed all these people. I think it's, what, 300 people
in total? 80 or 90 of them from OpenAI, the maker of ChatGPT. Why do you think they're doing it?
I think it's because they're driven by an imperial agenda, and that is why I call these companies Empires of AI.
What do you mean by an imperial agenda? What does that term mean?
Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do
and the scale that they operate and what motivates them to do what they do.
And there are many parallels that you see between what I call the Empires of AI and the Empires of Old.
They lay claim to resources that are not their own in the pursuit of training these models.
That's the data of individuals, the intellectual property of artists, writers, and creators.
They're land grabbing in order to build these supercomputer facilities for training the next generation models.
Second, they exploit an extraordinary amount of labor.
They contract hundreds of thousands of workers all around the world, including in the U.S.
to ultimately make these technologies.
We can talk about that more.
and they also design their tools to be labor-automating, so that when the technologies are deployed,
it also affects labor because it erodes away labor rights. And this is a political choice that they make.
Third, they monopolize knowledge production, so they project this idea that they're the only ones that really understand how the technology works.
And so if the public doesn't like it, it's because they don't actually know enough about this technology.
They do this to the public. They do this to policymakers.
And they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.
You think they're gaslighting the public in a way?
They are, yeah.
So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?
No.
And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world.
So they set the agenda on AI research in soft ways, simply by funneling money to their priorities, so that only certain types of AI research are produced.
But they also will censor researchers when they do not like what the researcher has found.
And so I talk about the case of Dr. Timnit Gebru in my book, who was the ethical AI team co-lead at Google, where she was literally hired to critique the types of AI systems that Google
was building. She then co-wrote a critical research paper that was showing how large language
models specifically were leading to certain types of harmful outcomes. And in an attempt to try
and stop this research from being published, Google ended up firing Gebru and then fired her other
co-lead, Margaret Mitchell. And so they control and quash the research that is inconvenient
to the empire's agenda.
Did you have an example where this is happening to journalists as well
that are asking questions of their team members?
I think I was watching a video of yours where there was a young man that was saying
he had someone show up at his door, knocked on his door and asked for information,
emails, text messages, and this person was from one of the big AI companies.
This was OpenAI. It started subpoenaing some of its critics, yeah,
as part of what appears to be a
campaign of intimidation, but also what appeared to be a campaign of fishing for more information
to figure out, to map out the network of critics further. But this was a man who runs a small
watchdog nonprofit, and they had been doing a lot of work during that time to try and ask
questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately,
OpenAI was successful in that conversion, but during the period where it was sort
of existential for OpenAI to complete this conversion, there were a lot of civil society groups
and watchdog groups like Midas who were trying to prevent the process from happening in the dead
of night. They were trying to get more transparency. They were trying to have more public debate
about this because it's unprecedented. And it was then that there was a knock on his door and
he was served to papers. What do the papers say? The papers asked.
him to reproduce every single piece of communication that he had had that might have involved
Musk. So this was this strange paranoia that OpenAI had, that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk. So in this particular case, he simply answered, you know, I don't have any documents, because this doesn't exist. So going back to this point of empires, you were saying that one of the factors of an empire is a land grab. And then the next one was labor exploitation.
Labor exploitation.
The third one, controlling knowledge production.
And one of the other ones that's really important to understand about the AI empires in particular
is empires always have this narrative that they say to the public.
Like, we're the good empire.
And we need to be an empire in the first place because there are also bad empires in the world.
And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopic state akin to an AI heaven. But if the
evil empire does it first, we will descend into a hell. And the evil empire being in this case?
In this case, most often it's China. But actually, in the early days, OpenAI evoked Google as the evil empire. So all of their decisions were about: we need to do it first, because otherwise Google, this evil corporation driven by profit, will beat us, the benevolent nonprofit. Like, this is a critical contest of who wins. Do you think the people building these AI
companies believe that the outcome is going to be all good? Do you think they think it's going to serve everyone, it's going to be the age of abundance, everything's going to go well? What do you think they believe? What do you think Sam believes?
So this is so funny. Such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly.
It goes hand in hand.
Like, they need that part of the myth in order to then say, and that's why we need to be in
control of the technology, because that's the only way that it's going to go really, really
well.
And Altman has said publicly, you know, worst case, lights out for everyone, but best case, we cure cancer, we solve climate change, and there's abundance. And Dario Amodei, same kind of rhetoric.
It's like, worst case, catastrophic or existential harm for humanity.
Best case, mass human flourishing.
So this is like two sides of the same coin.
Like they have to use both of these narratives in order to continue justifying an extremely
anti-democratic approach to AI development, where there should not be broad participation
in developing this technology. They must be the ones controlling it at every step of the way.
Sam Altman did a tweet saying, there are some books coming out about OpenAI and me. We only participated in two of them: one by Keach Hagey, focused on me, and one by Ashlee Vance on OpenAI. He went on to say, no book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to. You quote-retweeted that tweet from Sam Altman, and you said, the unnamed book, Empire of AI, is mine.
Do you believe that tweet from Sam Altman was in reference to your book?
100%.
Because there are only three books coming out about him. And he'd caught wind that your book was coming out.
He knew my book was coming out because I had contacted OpenAI from the very beginning of my process.
and said, I'm working on a book now.
Will you participate in it?
And actually initially, they said yes.
Even though, so my history with OpenAI,
I profiled the company for MIT Technology Review.
I embedded within the office for three days in 2019.
My profile comes out in 2020.
The leadership are very unhappy.
And in my book, I actually quote an email that I received
that Sam Altman sent to the company about my profile saying,
yeah, this is not great.
And from then on, the company's stance toward me was: we are not going to participate in anything that you do. We are not going to respond to anything, any questions that you have. And this was, you know, something that they explicitly articulated. It wasn't me inferring it. So I had a colleague at MIT Technology Review who also covered AI. And at one point, OpenAI sent him this press release, like, we would love for you to cover this story. And he was like, I'm really busy. Will you send it to Karen? And they were like, oh, no, we have a history, you understand.
And so for three years, they refused to talk to me. But then I ended up at the Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication. And so I started having, you know, more dialogue with them.
Every time I wrote a piece, I would always send them, here's my request for comment.
I would always ask them, like, will you sit for interviews?
And we did get to a more productive relationship.
And then I embarked on the book.
So I left the journal to focus on the book full time.
And I told them right away, I'm working on this book.
I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book.
And so they were like, we can arrange interviews for you, you can come back to the office, we'll set up some conversations. And then, as we were going back and forth on this, the board fires Sam Altman. And that's when things started going kind of south, because the company started becoming very sensitive to scrutiny. And so then they started kicking the can down the road, down the road, down the road. And I kept saying, hey, when are we rescheduling this? What's going on? And then I get an email saying, we are not going to participate at all. You are not coming to the office. You're not doing interviews. And I had actually
already booked my tickets. So I was already going to fly to San Francisco to have the interviews.
And so then I told them, I was like, that's fine. I will still engage in the process. I'll give you extensive requests for comment. And through my reporting, I'll keep you updated on all the things that I'm finding so that you can choose to still comment.
I gave them 40 pages of requests for comment
and I gave them over a month to respond to all of that.
So the tweet came out while we were doing all this back and forth. And that's when Altman tweeted this.
Hmm.
And they never responded to a single one of the 40 pages.
Sam Altman does a lot of interviews.
Yeah.
You know, he's doing a lot of interviews all the time.
He's done every podcast.
I've seen him on everything from Tucker Carlson to, I think, Theo Von and Joe Rogan, podcasts all over the world. I wonder why he won't do mine. Well, maybe... I don't know why. I don't know. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions, at least when I meet people for the first time. But I've heard through the grapevine that he doesn't want to do mine. I mean, going back to what you were saying earlier about the way that OpenAI and these companies control research, you asked, do they also do this with journalists? I mean, yes, the answer is yes. And apparently they also do it with anyone who has, you know, a broad mass communications platform. It's not just about the conversation that you're going to have with them. It's about who you also choose to platform. And there's this huge problem in technology journalism, where companies know that a really big carrot that they can give to technology journalists is access.
Yeah, yeah, yeah.
And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone they didn't want you to speak to.
This is so true.
And I don't think the average person really truly understands this.
Yeah.
So this kind of sounds like a theory as you say it.
But I'm not going to name names here, because I don't think it's important. But there is a particular person in AI whose team have basically dangled the carrot of them coming here for like 18 months. And I'm like, you don't have to dangle the carrot. I'm going to speak to whoever I want to, regardless of the carrot or not. And when this person comes, if they want to come, I'll give them a fair shot. I'll ask them all the genuinely curious questions about what they're doing, their incentives. I won't gotcha them. I don't have a history of ever gotcha-ing anybody. Even if I disagree, even if I have a difference of opinion, I'll ask the question. Yeah.
But they dangle carrots and they say, well, you know, he's thinking about it, let's think about a date. And what the strategy is, and I don't think they realize people understand this, is: if we just dangle it for long enough, then they will perform in the way that we want them to. And they'll be pleasant about us. They won't be critical. They won't give a voice to our critics. And I think their whole game is just dangle the carrot forever.
Yes.
Yeah.
That's like the optimal outcome: if we just dangle it, if we just tell them, yeah, we're just trying to look at the schedule.
It just doesn't work. I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum. Let the viewers decide for themselves what they think. Yeah. But this is, yeah, this is such a huge part of their machinery: the way that they use these tactics to massage the public image of these companies and make sure that information they don't want out, and even opinions that they don't want out there, don't go out there. And so this is, you know... I feel very lucky now that OpenAI shut the door on me early.
At the time, I didn't feel lucky.
I felt like I had screwed myself over.
I was like, should I have been nicer to them in the profile so that I could maintain access?
Which is a horrible question to ask as a journalist, right?
Like, you're supposed to report the truth and you're always supposed to report in the interest of the public.
Like, that is the point of journalism.
And in that moment, I was relatively junior in my career, and I was like, did I misunderstand what journalism is about?
Like, should I have actually been playing the access game?
But it was too late.
The door was shut on me. And so I had to build my career understanding that the front door was never going to be open.
Yeah.
And that actually really strengthened my own ability to just tell it like it is.
The objective.
Yeah.
And just report what I see as the facts being presented to me, irrespective of whether the company likes it or not.
And most often the company really does not like it,
but I can continue to do the work.
They don't need to open the front door for me.
I was still able to do more than 300 interviews.
So Sam Altman gets kicked off the OpenAI executive team.
Did you find out why that happened?
Yeah.
There's a scene-by-scene recounting.
From who?
I can't remember the exact number of sources.
So I don't want to misquote myself, but it was around six or seven people that were
directly involved or had spoken to people directly involved in the decision-making process.
So, Ilya Sutskever is having these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company.
He then approaches a board member, Helen Toner.
Ilya, for anyone that doesn't know, is the co-founder of OpenAI we mentioned earlier.
Yes.
And he kind of does a bit of a sounding-board thing with Helen, just because Ilya is freaking out. He's been sitting on these concerns for a while. And he's like, if I tell this to someone, this could also be really bad for me if Altman finds out. And so he asks for a meeting with Toner. And in that first meeting, he barely says a thing. He's just dancing around, trying to figure out, hey, is this someone that I can maybe trust to divulge more information to? And Toner's role and responsibilities at OpenAI were...
She was a board member. Just a board member. Yeah. And specifically an independent board member. So at OpenAI, when it was a nonprofit, the board was split between people who had a financial stake in the company and people who were fully independent. And this was meant to be a structure that would balance the decision-making to be in the benefit of the public interest, rather than in the benefit of the for-profit entity that OpenAI then created.
And Ilya, as a non-independent board member,
was approaching Toner as an independent board member
to try and see whether or not she was potentially seeing
or hearing the same things that he was about the effect
that Altman was having on the company.
This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members. Mira Murati was at that point the chief technology officer of OpenAI. These two senior leaders, essentially through these conversations and through documentation that they're pulling together, like emails, Slack messages and so forth, convey to the three independent board members: we are very concerned about Altman's leadership. He is creating too much instability at the company. And he is the root of the problem. They were trying to say to these independent board members: the problem will not be fixed unless Altman is removed, because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore, and they're competing rather than collaborating on what's supposed to be this really, really important technology.
When you say instability, that's quite a vague term.
That could mean lots of things.
Like instability could mean pushing people hard to work harder.
Right.
What do you mean by instability in specific terms as you can possibly say them?
When ChatGPT came out in the world, OpenAI was wholly unprepared. They didn't think they were launching a gangbusters product. They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4; ChatGPT was using GPT-3.5. And because of that, there were servers crashing all the time, because they had to scale their infrastructure faster than any company in history.
And there were all of these outages. They were also trying to hire faster than any company in history, to try and have more personnel there.
And they were then sometimes hiring people that they were like, actually, we made a mistake, we shouldn't have hired you.
So they were firing people left and right.
And people were just disappearing off of Slack.
And that's how their colleagues would learn that they were no longer at the company.
And so it was, yes, like many fast-growing companies, a very chaotic environment, and a particularly chaotic one because it was extra fast. Like, they had to accelerate more than any other startup.
And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. Like, he was not actually effectively ameliorating the chaos. He was actually sowing more chaos, getting these teams to be more divided. And this is where it's important to understand that the executives and the independent board members are all operating under this idea that they're building AGI, and that AGI could either be devastating or utopic for humanity. And so it's, yes, like any other company, and no, not like any other company. In their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world. And so that is basically what the independent board members also begin to
reflect on. They have these conversations amongst themselves where they're like, well, based on what
we're hearing about Altman's behavior, like, if this was an Instacart, would that warrant firing him? And they concluded, maybe not. But this is not Instacart. And that's why they were like, well, crap, maybe this actually does rise to the bar where we should consider replacing him, because we are ultimately building a technology that we think could have transformative impacts, either in the positive or the negative direction.
And so that is what happens.
It's these two executives. And then the independent board members were also hearing other feedback from their connections within the company and with other people in the industry.
At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is a tech company in the valley,
He is at a party in San Francisco,
and he starts to hear some of these rumors
that there's something weird about the way
that OpenAI has structured its OpenAI startup fund,
which was this fund that the company had created
to start investing in other startups.
And he realizes they'd never really seen documentation from Altman about how the startup fund had been set up. And finally, they get the documents, and it turns out that OpenAI's startup fund is not OpenAI's startup fund. It's Altman's startup fund.
And this was one of several experiences that the independent board members were also having where they're like, there's something not right about the fact that there are continually inconsistencies between the way that Altman is portraying what is being done versus what is actually being done. And so when these two executives approach the independent board members, they're like, okay, this lines up with the experiences that we've been having.
And at that point, they then have this series of very intense discussions where they're meeting almost every day
talking about should we actually really consider removing Altman.
And in the end, they conclude, yes, we should.
And if we're going to do it, we need to do it quickly.
Because they were very concerned that the moment Altman found out, his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. You know, they don't talk to any stakeholders to get them on the same page. Microsoft gets a call right before they execute the action:
we're going to fire Altman.
And Microsoft, for anyone that doesn't know, are a lead investor in OpenAI at the time?
Yes.
One of the only investors in OpenAI at the time.
And that is what then unravels the whole thing, because every single person affected by this decision is now extremely angry that they were not involved. And that is what then creates this campaign to bring Altman back.
And then Altman is reinstalled as CEO.
Days later.
This company that I've just invested in is growing like crazy.
I want to be the one to tell you about it
because I think it's going to create such a huge productivity advantage for you.
Wispr Flow is an app that you can get on your computer and on your phone, on all your devices,
and it allows you to speak to your technology.
So instead of me writing out an email, I click one button on my phone,
and I can just speak the email into existence.
And it uses AI to clean up what I was saying.
And then when I'm done, I just hit this one button here.
And the whole email is written for me.
And it's saving me so much time a day, because Wispr Flow learns how I write. So on WhatsApp, it knows I'm a little bit more casual; on email, a little bit more professional. And also, there's this really interesting thing they've
just done. I can create little phrases to automatically do the work for me. I can just say
Jack's LinkedIn and it copies Jack's LinkedIn profile for me because it knows who Jack is in my life.
This is saving me a huge amount of time. This company is growing like absolute crazy. And this is why I invested in the business and why they're now a sponsor of this show. And Wispr Flow is frankly becoming the worst-kept secret in business, productivity, and entrepreneurship. Check it out now at Wispr Flow, spelled W-I-S-P-R-F-L-O-W.
It will be a game-changer for you.
Make sure you keep what I'm about to say to yourself.
I'm inviting 10,000 of you to come even deeper into the diary of a CEO.
Welcome to my inner circle.
This is a brand new private community that I'm launching to the world.
We have so many incredible things that happen that you are never shown.
We have the briefs that are on my iPad when I'm recording the conversation.
We have clips we've never released.
We have behind-the-scenes conversations with the guests, and also the episodes that we've never, ever released.
And so much more.
In The Circle, you'll have direct access to me.
You can tell us what you want this show to be, who you want us to interview, and the types of conversations you would love us to have.
But remember, for now, we're only inviting the first 10,000 people that join before it closes.
So if you want to join our private closed community, head to the link in the description below or go to doaccircle.com.
I will speak to you there.
How does a CEO of a major company get fired by the board?
Because board members... There's a quote in your book on page 357 where you quote Ilya saying,
I don't think Sam is the guy who should have the finger on the button for AGI.
Now, I asked myself this question.
You know, I work with lots of people here.
We have 150 people that work in this business.
And those people know me best.
Yeah.
They see me on camera.
They see me off cameras.
If they said, we don't think Stephen is the right person to host the Diary of a CEO, for example.
Yeah.
It would take a lot for them to say that.
Yeah.
They must have seen some shit off camera for them to go, we don't think he's the right
person to be on camera.
Yeah.
Or for whatever reason.
And in the case of AI, which is much more consequential than a podcast that is, you
know, filmed in my old kitchen, it almost sends a chill down one's spine to think that
the co-founder of a business has gone to the board and said, this isn't the guy to
lead this consequential technology.
And it wasn't just Ilya. Mira Murati then also said, I don't think Altman is the right guy.
And then they both left.
Later.
So then Altman comes back and, lo and behold, Ilya never comes back.
So his concern, that Altman finding out would be bad for him, manifested.
He ended up not coming back, and Mira Murati then left shortly thereafter.
Quite a lot of these people leave, don't they?
OpenAI.
They do.
So if you consider one of the origin stories of OpenAI, it's this dinner that happened at the Rosewood Hotel, which is a very swanky hotel right in the heart of Silicon Valley, one of Elon Musk's favorites whenever he was coming up from L.A. to the Bay Area. And there was this dinner there where Altman was intending to recruit the OG team that would start OpenAI. So he's kind of telling everyone, you might have a chance to meet Musk, because Musk is going to come to this dinner. And he cold-emails Ilya and gets Ilya to come. And Ilya specifically wants to come because he wants to meet Musk.
And he also emails all these other people, including Greg Brockman and Dario Amodei. These are people who ended up attending. And almost all of them, not every one of them, but almost all of them end up working at OpenAI.
And leaving.
Almost all of them end up leaving, specifically after they clash with Altman.
And Ilya,
he left and launched a company called Safe Superintelligence.
Yeah.
Which is, I mean, that's an indirect dig if I've ever heard one. Do you know what I mean? If someone co-founded this podcast with me and then they left and started a podcast called Safe Podcasting, I'd take that as a slight. I'd have people knocking on their door and asking for their texts.
One of the things that is happening here is
it is not a coincidence that every single tech billionaire
has their own AI company.
They want to create AI in their own image.
And that's why they keep not getting along.
And in fact, it's not just that they don't get along.
They end up hating each other after working together
and then splinter off into their own organizations. So after Musk leaves, he starts xAI; after Dario leaves, he starts Anthropic; after Ilya leaves, he starts Safe Superintelligence; after Mira leaves, she starts Thinking Machines Lab. They want to have control over their own vision of this technology,
and the best way that they have derived from their experiences of trying to
put their vision into the arena is by creating a competitor and then competing with OpenAI
and with all the other companies out there. Do you think some of these AI CEOs realize that they are quite literally summoning the demon, as Elon said 10 years ago? But they don't really care, because being the person who summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific, even if there's like a 20% chance of it being horrific. I remember, I think it was Dario, he's the one who said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization.
25% is a one-in-four chance. If you put a bullet in a four-chamber revolver and said, Stephen, the upside is you could become a multi-gazillionaire and be remembered forever, the downside is that there would be a bullet in your head, there is no chance that I would take that bet, with a 25% potential chance of things going catastrophically wrong. So I have a very long answer to this, because, do they know if they're
summoning the demon? It really depends on what we define as summoning the demon. And in this
particular case, to go back to what we were saying before, there is a mythology that the AI
industry uses where summoning the demon is an integral part of
convincing everyone that therefore they can be the only ones that are developing this technology.
I got it. So on one end, you've got to say, if we don't, China will, and that's terrible.
Yeah. But if we let anyone else do it other than me, then we're fucked as well.
Exactly. So that means that I have to do it and you have to give me money and support.
Exactly. So when they're saying these things, we should understand it not as a genuine prediction based on what they're seeing. Because, first of all, we don't predict the future. We make it. We should understand this as an act of speech to persuade other people into believing that they should cede more power, more resources, to these individuals.
And so do they know that they're summoning the demon?
I mean, they are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.
But if we define it instead as, do they realize that the things they are doing are already having really harmful impacts all around the world, on vulnerable people, vulnerable communities, vulnerable countries?
That's where I'm like, maybe yes, maybe no, and they don't really care.
Because of their frame of mind... Like, I sometimes use the analogy that the AI world is like Dune.
Dune. For anyone that doesn't know Dune.
Science fiction epic written by Frank Herbert,
and it's set in this intergalactic era
where there are all of these houses
and they're fighting each other for spice.
So it's a callback to colonialism and empire.
And they all are trying to control the spice.
But one of the features of this story is that there are these myths that are seeded on the different planets, religious myths, basically, about the coming of the Messiah, that are used as ways to control the people.
And Paul Atreides, when he arrives on the planet Arrakis with the intention of fighting against the empire and avenging his father's death, steps into a myth that has been seeded on this planet that says that one day there will be a Messiah who comes and saves the planet. So he steps into the role of the Messiah and leans into this idea in order to control the people and rally them behind him as a leader to help with this quest.
He knows that it's a myth in the beginning, but because he lives and breathes and embodies it,
it kind of starts to blur in his mind, whether this is really a myth or whether he's really the
Messiah. And this is what I think happens in the AI world. On one hand, there are all these
executives that actively engage in myth-making because, you know, I have all these internal documents
that I write about in the book where they are very keenly aware of how to bring the public along
with them by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies. So they know
they're doing the myth making. And also, I think many of them lose themselves in the myth
because they have to live and breathe and embody it day in and day out. And so when, you know, Dario says he thinks that 10 to 25 percent of the future could be catastrophic, or whatever the probability is, 10 to 25 percent, he is actively engaging in the myth-making, but he's also losing himself in the myth. Like, I think if you were to ask him, do you genuinely believe that, he would be like, yes, I genuinely believe that, because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to then continue doing the things that he's doing.
And this is the whole psychology of cognitive dissonance, right, where the brain struggles to hold two conflicting worldviews at the same time, so it's incentivized, or endeavors, to dismiss one. So if you wanted to be a healthy person but were also a smoker, and I pointed out that smoking's bad for you, the first words out of your mouth are going to be, yes, but smoking helps me with stress. Yes, but I only do it when... And I think, I don't know,
I kind of see that at the moment because these companies have to raise extortionate,
like huge amounts of money to fund their AI research and they're building out all of these data centers.
So when they're out in the public, they're always fundraising.
All of these major companies are fundraising all the time at the moment.
So you can't be fundraising and saying,
I'm going to destroy your children's future, potentially.
There's 25% chance that your children aren't going to have a great life.
Which might be the truth.
I mean, that is actually what they say.
This is what, famously, Dario Amodei does.
He does that, but the others, Sam's not doing that as much anymore.
Yes, and it's because, you know, it goes back to, like, each of them kind of distinguishing themselves a little bit, as the brand that they need to project.
Do you think any of them have a stronger moral compass than the others?
Because I think Dario often gets the credit
for having more of a, you know,
more of a backbone
and being more conscious of implications.
He does get a lot of credit for that.
He's from Anthropic, the maker of Claude,
for anyone that doesn't know.
I don't think the answer to that question truly matters,
because to me, even if you were to swap all the CEOs for someone that people would say is better at running these companies,
it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies and the people running these companies get to make decisions that affect billions of people's lives around the world.
And those billions of people do not get any say in how it goes.
Those people, they can go to the polls, right?
So if the public are sufficiently educated, they can go to the polls and pick a leader
that says they're going to legislate or pass laws or try and pass laws.
Yes.
But at the speed and pace at which these companies operate,
and at the sheer scale and size,
they're able to also spend extraordinary amounts of money,
hundreds of millions in this upcoming midterms,
to try and kill every possible piece of legislation that gets in their way
and craft legislation that would codify their advantage.
And so to me, I think sometimes as a society,
we obsess a little bit with are these leaders good or bad people?
And to me, the bigger question is,
is the governance structure that we've created a sound one
that allows broad participation, or an anti-democratic one
that has consolidated this decision-making power in the hands of the few?
Because no person is perfect.
I don't care who is at the top of these companies.
They are not going to have the ability to make decisions
on behalf of so many people around the world
who live and talk and have a culture and history
that are fundamentally different from them
without things going wrong.
And so that is why throughout history
we've moved from empires to democracy.
It's because empire as a structure is inherently unsound.
It does not actually maximize the chances of most people in the world
being able to live dignified lives.
I'm going to try and take on their point of view.
So this is me playing devil's advocate.
Okay.
But Karen, if the U.S. don't continue to accelerate their research with AI,
at some point China's model is going to become so
smart and intelligent that we're basically going to have to rent it off them, and we're going to be,
you know, they'll get the scientific discoveries, they'll discover the new era of autonomous
weapons and we will be their backyard. And like logically, that argument does appear to be pretty true.
No, it's not. If we scale up, if we just imagine any rate of change with this intelligence,
at some point we're going to come to a weapon that could theoretically disable all of the United States'
electricity, their weapons systems.
It would know exactly how to disable the United States from a cyber perspective
because it would be that smart.
All you've got to imagine is any rate of improvement
over any sort of long period of time.
So this is a theory that might be true.
And if it's true...
I mean, yeah, any theory might be true.
But if, you know, again, going to this point of like,
even if it's a small percentage it's worth paying attention to
to put the shoe on the other foot,
this is a theory that people talk
about: it could be the case that the most intelligent civilization is going to be the superior
civilization. Logically, that's a pretty sound thing to say, no? So there's a lot of fundamentals
in this argument that would need to be true in order for this to be a viable argument,
and let's knock them down one by one. So the first one is that these systems are intelligent,
and that just scaling them is going to bring us more intelligence. So far so true.
No, it's actually not, because, first of all, we don't actually know if these systems are intelligent. Intelligence is almost not the right analogy. It's sort of like a calculator: a calculator can do math problems faster than a human. Does that make it intelligent?
It has a narrow intelligence, because it's solving a narrow problem, which is like one plus one equals two. And these systems actually also are quite narrowly intelligent,
in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is the jagged frontier of these AI models. Some of the capabilities are quite good; other capabilities are not that good. And why that happens is because the companies can only focus on advancing certain types of capabilities. They can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability by gathering the data that is needed for that capability and by
getting a bunch of human contractors to annotate and train the model to do that exact thing.
And so scaling these models is actually a perpendicular question to are we actually getting
more cyber capabilities specifically and more military capabilities specifically?
I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time.
A lot of them do.
Like Jeffrey Hinton does.
And again, it's back to his hypothesis about how human intelligence works
and what the appropriate model of the brain is.
His hypothesis throughout his career has been the brain is a statistical engine.
But that's his hypothesis, and that is not universally agreed upon,
especially among people that are not in the AI world.
When you talk with neuroscientists and psychologists,
people who actually study human intelligence in the human brain,
that is where you start to get a lot of debate
and disagreement about this particular view that Hinton has.
And so this is kind of like one of the things is like,
AI is already being used in the military and has been used in the military for a long time.
But specifically accelerating large language models isn't the only path for getting
military capabilities. Like, the companies would have to choose to specifically pick military capabilities
to accelerate, not just like general intelligence. It's like, you know what I'm saying? Like,
they create this myth that they are actually pushing the frontier of all of the capabilities
of the model. But that's not what's actually happening internally. And I have, I had hundreds of pages of
documents on like how they were specifically training models. They pick what capabilities they want
to advance. And you know how they pick them? It's based on which industries
would be able to pay them the most money for their services. So they pick finance, law,
medicine, health care, commerce. It's not actually intelligent like a baby where the more
that the baby grows up, they start having these general abilities. I think I have jagged intelligence.
I wasn't going to say it, but I think I know a little bit about a lot. No, I know a lot
about a little bit. Yeah, but you also have the capability to learn and acquire knowledge by yourself.
And you also have the ability to choose what you're going to learn and acquire by yourself.
It's not easy. And it takes a lot more time than these models, it seems. Less compute.
And you can learn how to drive in one place and then immediately know how to drive in another place.
These models cannot do that. Every time a self-driving car is shifted to another location,
it has to completely retrain on that location. I mean, we're sitting in Austin right now, and there's all these
self-driving cars that are driving through Austin.
But when one of them learns, they all learn.
Well, it's just because it's an operating system that has an AI model as part of it,
and you're training the AI model, and then you deploy the AI model across all the self-driving cars.
Which is a big advantage.
Because if one Optimus robot learns one thing in one factory, they all learn it.
And imagine that.
Imagine if humans, if we all learned what all the other humans learned, that would give us
such an unbelievable competitive advantage. I mean, one of the ways we did that is through communication.
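The fleet-learning idea he's describing can be sketched in a few lines: every vehicle runs a copy of one shared model, so a single centrally trained update reaches all of them at once. The class and parameter names here are illustrative, not any company's actual stack.

```python
# Sketch of "when one learns, they all learn": every car in the fleet
# reads from one shared set of model parameters.
class Car:
    shared_weights = {"stop_threshold": 0.9}  # one model, many bodies

    def decides_to_stop(self, confidence):
        # Each car consults the single shared model, not a private copy.
        return confidence >= Car.shared_weights["stop_threshold"]

fleet = [Car() for _ in range(1000)]

# "Retrain" once centrally, then deploy by updating the shared model:
Car.shared_weights["stop_threshold"] = 0.7

# Every vehicle now behaves differently, instantly -- and if the update
# were wrong, the whole fleet would share the same failure mode.
print(all(car.decides_to_stop(0.8) for car in fleet))  # True
```

The flip side, which comes up next in the conversation, is baked into the same design: one bad update propagates just as instantly as a good one.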
Or it could not, because they could be learning the wrong thing, which has also happened again
and again with these technologies, is that all of them then learned the wrong thing, and they all
have the same failure mode. I mean, part of the resilience of human society is that we do have
different expertise, and we also have different failure modes. I think sometimes we hold AI models
to a higher standard than we hold humans to, in a weird way, because I'd hear on stage,
we're in Austin at the moment, I'd hear people go, ah, but, you know, them AI models, they
hallucinate sometimes. I'm like, have you met a human? Like, I hallucinate all the time. I can barely
spell or do math. Yes, but it's once again, like using this analogy that was specifically
picked in the early days of the field as a way to market these technologies. Like, we're repeatedly
using the intelligence analogy and relating these machines to human intelligence as a way to
try and gauge whether or not it is good or worthy or capable in society.
I think the output is the thing that really matters, is the most consequential, which is like,
okay, it might have a different brain and a different system, but does it arrive at the same
capability?
Like, is it able to do surgery on someone's brain?
Is it able to drive a car?
Like, my car drives itself in Los Angeles.
I don't touch a steering wheel, and I can drive for many, many hours.
And here in Austin, I just saw one the other day where they've removed the steering wheel
and the pedals, the new Cybercabs.
So it doesn't really matter if it's using a different system.
If it's navigating through the world as a car and it has a better safety record than human
beings, then as far as I'm concerned, intelligence or not, it's like, you know.
But that was not the original argument that you made, which was like, these systems are just
generally going to become more intelligent across different things based on the prediction.
This is a prediction that you're making, right?
And this is a prediction that all the AI...
Ilya's making, Dario's making, Elon's making, Zuckerberg's making,
Altman's making, Demis is making.
And do you know what the common feature of all of them is?
They profit enormously off of this myth.
Elon has recently spearheaded the construction of Colossus,
a massive supercomputer in Memphis,
housing 100,000 GPUs specifically to scale up
their Grok AI models faster than their competitors.
It appears that they've all converged around this idea
that you can brute force your way
to greater, more generalized intelligence.
They've converged around the idea
that you can brute force your way
into models that they can sell to people for automating certain tasks that are financially lucrative.
And I heard Elon say that if you want to be a surgeon, there's just no point. He was like, don't train to be a surgeon.
He says, in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived.
Yeah. Do you think these things are true?
Well, you know, I'm pretty sure it was Hinton that famously slash infamously said there would be no need for radiologists anymore.
Oh.
There would be no need for radiologists anymore. He set a deadline that we've already passed.
I don't remember how many years.
Radiology is doing great as a profession.
Do you think it will be in five years?
Okay.
So this, once again goes back to this question of like, why do we build technology and why should we specifically be building AI?
Okay.
And for me, like, the whole project of technology development advancement is not to advance technology for technology's sake.
It's to help people.
And there has been lots of research showing that actually the best outcome for people in a healthcare setting is for the radiologist to have the AI model in their hands.
And for the human expert to use the AI model as a tool, as an input into their judgment.
And it is that combination that leads to the most accurate and early diagnoses of certain types of cancer, which then help improve the prognoses of the patient.
Do you believe that in the coming years, pretty much all the cars on the road will be driving themselves?
No.
You don't think so.
How come?
Because of the way the technology works.
Because these are statistical, I mean, currently the way that AI models are primarily developed, they're statistical engines.
you have what's called a neural network, which is a piece of software that has a bunch of
densely connected nodes.
Like parameters.
Is this what they call parameters?
Yeah, pretty much.
And you're just pumping a bunch of data into it.
And then it's analyzing the data and creating this, all of these, finding all these
correlations in the data, finding all these patterns.
And then it's through those patterns that the machine is then able to act autonomously, right?
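What "pumping data through densely connected nodes" and "finding correlations" means can be shown with a single node, a minimal sketch. The traffic-light data and the one-parameter setup here are invented for illustration; real perception models have millions of parameters, but the principle is the same.

```python
import math, random

# One "node" of a neural network: a logistic unit turning an input
# into a probability. Toy data: x = brightness of a light's red
# channel, label = 1 if the light is red (invented for illustration).
random.seed(0)
data = [(random.uniform(0.6, 1.0), 1) for _ in range(200)] + \
       [(random.uniform(0.0, 0.4), 0) for _ in range(200)]

w, b = 0.0, 0.0  # parameters, nudged to fit correlations in the data
lr = 0.5
for _ in range(2000):  # gradient descent: no rules, just statistics
    x, y = random.choice(data)
    p = 1 / (1 + math.exp(-(w * x + b)))
    w -= lr * (p - y) * x
    b -= lr * (p - y)

def p_red(x):
    return 1 / (1 + math.exp(-(w * x + b)))

print(p_red(0.9))  # high probability: matches the "red" pattern
print(p_red(0.1))  # low probability
print(p_red(0.5))  # inputs unlike the training data get hedged answers
```

The point of her argument shows up in the last line: the output is always a probability fitted to past data, never a guaranteed rule.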
And so the way that they're training a self-driving car,
they're recording all this footage,
and then they have tens of thousands
or hundreds of thousands of human contractors
that draw literally around
every single vehicle in the footage,
every single pedestrian,
every single traffic light,
every single lane marking,
and label it exactly as such
so that then it's fed into an AI model
that can identify all of these different components,
and then it's connected to another piece of
software that is not AI that's saying, okay, if the AI model recognizes a pedestrian, we do not run over the pedestrian.
If the AI model recognizes a red traffic light, we stop. And so the thing about statistical engines
is that they're based on probabilities, not on deterministic logic. So systems make errors
all the time. And it's impossible. It is technically impossible to get them to stop making errors.
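Her two-layer description, a probabilistic model wrapped in deterministic non-AI rules, can be sketched like this. The 1% error rate and the scene labels are invented for illustration; real perception stacks are far more complex, but the structural point holds.

```python
import random

random.seed(42)

def perceive(scene):
    """Stand-in for the AI model: right 99% of the time, never 100%."""
    if random.random() < 0.01:  # statistical engines make errors
        return "clear" if scene == "pedestrian" else "pedestrian"
    return scene

def control(label):
    """Deterministic, hand-written rules layered on top of the model."""
    if label == "pedestrian":
        return "brake"
    if label == "red_light":
        return "stop"
    return "drive"

# Even with hard-coded rules, the system is only as reliable as the
# perception feeding it: over many scenes, some errors slip through.
scenes = ["pedestrian"] * 10_000
mistakes = sum(control(perceive(s)) != "brake" for s in scenes)
print(f"missed pedestrians: {mistakes} / 10000")  # ~1% by construction
```

The hard-coded rule layer is perfectly deterministic, yet the system as a whole still fails at roughly the perception error rate, which is the point she's making about why errors can't be engineered away entirely.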
Humans make errors way more than systems in this case. Yeah. Like the safety record is like, isn't it like 10 times more safe to be driven in a Tesla with autonomous driving than it is for a human to drive?
It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving.
If humans get drunk.
Because if it's in Mumbai, or in some places in Vietnam, no, it would not be safer. I would much rather be driven by someone that has been
driving in that place their whole life. I'm not arguing against, like, the fact that in certain
places where the car has been explicitly trained to drive in this place, that it has a better
safety record than the humans that are driving in that place. But you specifically asked
if I think that all of the... Most cars. Most cars in the world, in the U.S.
Let's say the United States, because we're here.
I don't actually think that it's like imminently on the horizon.
Ten years?
No, I don't think so.
I sat with Dara from Uber and he's pretty convinced that his nine million couriers will be replaced by autonomous vehicles.
I mean, how long have self-driving cars been invested in so far?
It's been more than 10 years.
And what percentage of cars right now are autonomous on the US roads?
I mean, so part of it is it's actually not a technical problem, right?
Like, part of it is also a social problem.
Like, do people even trust getting into these vehicles?
Part of it's also a legal problem, which is if the car, the self-driving car kills someone, which it has happened.
Yeah, it has happened.
Who is responsible?
So in the case in L.A., it was both Tesla and the driver, because the driver dropped their phone.
They looked down, and this was a couple of years ago, I believe.
And they went to grab their phone and they hit someone.
And so it went to court, and they were held both.
responsible, both the driver and Tesla. In terms of Tesla, pretty much every car that people get
comes with autonomy now, I believe.
Partial autonomy.
Yeah, it's called full self-driving at the moment.
I mean, yes, it is called full self-driving.
Full self-driving supervised, where you kind of have to be looking in the right direction.
Yeah, so it's partial autonomy.
And here in Austin, it's full autonomy because there's no steering wheel on the new car.
so you can't drive it anyway.
But it is, you know, the Model Y is the undisputed highest selling car,
best-selling car in the world across all brands.
Well, I guess my point here is like these predictions where they say
AI is going to completely change transportation and driving.
It's going to completely change lawyers aren't going to have jobs.
Accountants aren't going to have jobs.
Do you believe that they are true?
Do you believe that there's going to be mass job displacement?
Okay, so I do think that there is going to be huge impacts on employment.
and we are already seeing those impacts,
it is not simply because the AI models are just automating those jobs away.
It is specifically because the models are improving in certain capabilities
based on what the companies that are developing them choose to improve them on.
And executives at other companies are then deciding to fire or lay off their workers
because they think that AI can replace the worker, irrespective of well,
whether that might be true. And there, you know, there have been cases, like the Klarna CEO, who laid off
a bunch of people thinking that he could replace everyone with AI, and then it didn't actually
work, and he had to ask some people to come back. I actually DM'd him about this. If you're
hearing this, this is because I've DM'd Sebastian and he's fine with me sharing this. He said,
because I've heard his name mentioned a lot. And so when I, when we talked about AI in the past,
and people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the
truth was. He said, it's great to hear from you. I think sometimes people struggle with two things
can be true at the same time.
I think it might be time to come back on your podcast.
To your point, this is the media misinterpreting my tweet.
We are doubling down on AI more than ever.
Klarna is shrinking by almost 100 employees per month due to AI.
We used to be 7,400 at the peak.
A year ago, 5,500.
Now we're 3,300.
And by the end of summer, so this was last year, we will be 3,000 people.
AI handles 70% of our customer service.
conversations at this moment. This is because we have realized that with AI, the production cost
of software comes down to almost zero, just like manufacturing used to be all handcrafted.
And then the machines came, code used to be all handcrafted up until a few years ago.
And now it is machine produced. And ultimately, we pay people more than ever for the unique
handcrafted man-made stuff. Klarna is a bank. People will want to connect to humans, not only machines.
They want us to be personable, relatable, even flawed.
So we need to make sure, while we are automating, replacing with AI in parallel,
we make sure we offer a super available human experience.
I'm really glad you read this, because I think it touches on some really important nuances to
the impact that AI is going to have on employment.
So I think there's often these binary narratives.
It's like, AI is going to come for every
job. Or people say AI's not actually working and it's not actually coming for jobs. And, like,
the reality is it's coming for jobs. There are definitely jobs that are being automated away
because of the capabilities of their models. And there's also jobs that are being lost because
executives are deciding to lay off the workers, even if the models don't match the capabilities
because it's good enough. Like they would rather have the good enough model for way cheaper.
Or they made a mistake with hiring. They bloated their team, and it's a great convenient thing to say.
Exactly.
Like, there's many reasons.
But, like, clearly we're already seeing impacts on the job market.
Like, the U.S. jobs report that came out earlier this year showed that there has been a decline in hiring,
a slowdown in hiring, especially across white-collar professional industries.
And you saw Anthropics report, didn't you, this week?
The TL;DR is it matches kind of what you were saying, where they, Anthropic, looked at exactly how people were using their models
and they looked at what people are saying.
Yeah.
And they said that there's been a 40% reduction in entry-level jobs in particular.
And then they made this graph, which has gone viral over the internet.
The red shows where we are now in terms of capability.
And based on how people are currently using the models, they...
That's their prediction.
Extrapolated it out, that the blue part will be the disrupted parts.
This is the things that they say AI can do right now, but people don't realize it yet.
So if you look at it, it's like it's kind of all the stuff you'd expect.
Yeah.
It's the physical real world human stuff.
which robots maybe can do someday, like construction or agriculture, that are untouched,
but like office and admin, finance stuff, math.
And notice that these are all the things that I just named that they purposely picked: finance, math, law.
Media and arts, that's me cooked.
Yeah.
Office and admin, I mean, they do focus a lot on, like, assistant-type and managerial
work.
So, but the other thing that the Klarna CEO said was,
but people also want human experiences.
So it's not actually just about the capabilities of the models.
It's also about what people want.
Like some things they would turn to AI for
and some things they wouldn't,
irrespective of whether or not AI is capable of doing it,
but because of a preference that they want human-to-human interaction.
And so what we're seeing right now is, yeah,
the thing that happens with every
wave of automation, which is that there is a bunch of entry-level work that gets automated away.
And there are also new jobs created, but the jobs that are created are in one of two categories.
There are people that get even higher-skilled jobs.
And what he was saying, like, we pay people more for, like, the handcrafted code now.
And there's also the people who get way worse jobs.
And so there was this amazing article in New York Magazine that was talking about how
a lot of people are getting laid off. And then they end up working in data annotation, which is the labor that I've been referring to throughout this conversation that companies need in order to teach their models the next thing that the companies are trying to automate. And so like a marketer gets laid off and then they go and work for a data annotation firm to train the models on the very job that they were just laid off in, which will then perpetuate.
more layoffs if that model then develops that skill.
And the article was talking about how this has become a huge catch-all for a lot of people
that are struggling with finding job opportunities right now,
including like award-winning directors in Hollywood that are actually secretly doing
this data annotation work to put food on the table.
And so when they talk about how there's going to be mass job displacement,
and then there's going to be some new jobs created that we can't even imagine,
I think a lot of these narratives rarely talk about, like, first of all, why are some jobs going away?
It's not just because of the model capability.
It's also because of executive choices and because of the rhetoric that they use if they want to just downsize.
But the other thing that is rarely talked about is the jobs, a lot of the jobs that are created are way worse than the jobs that were there.
And it breaks the career ladder.
So it's the entry level and the mid-tier jobs that get gouged out.
It's higher-order jobs and then way more lower-order jobs that get created.
And so how do people continue to progress in their careers?
There's no more rungs on the ladder.
I actually don't know the answer to this question,
and I've been furiously trying to find a good answer to this question
because I can, you know, everything is theory.
And for my audience, I would say most of my audience don't run businesses.
A lot of them do, a lot of them aspire to,
run businesses. So they're also in the land of theory. They're hearing lots of different things.
Jack Dorsey does his tweet saying he's halving his headcount because of AI. They don't know what's
true. They don't know the sort of internal economics at Jack's company. And did he bloat the company
during the pandemic? And he's just using this as an excuse to make this share price spike for seven
points because his investors now think they're an AI company or whatever. It's hard to parse through.
So eventually I go, okay, what am I doing? I have hundreds of team members, probably 70 companies
I invest in, maybe five or six that I'm the lead shareholder in. What am I
actually doing on a day-to-day basis right now? I also consider myself to be head of recruitment.
But in the last month in particular, I have met extremely capable candidates in terms of
cultural alignment, hard work, those kinds of things. But I've had to take a great deal of pause,
because when I run the experiment of can I get an AI agent to do that exact same thing,
the answer is increasingly yes, especially in a world of OpenClaw.
And so what, I'm curious, like, now you confront this decision where you're seeing,
in this short-term period, you could just choose the AI agent.
And in the long-term period, there is no career ladder.
So who are you promoting into these senior roles?
Like, how do you resolve it for your own company?
Yeah, it's good question.
So there's kind of two ways I'm thinking about it.
I think really deep expertise is very, very valuable.
Because if you're now the orchestrator of potentially AI agents,
it's really about having a deep understanding of the right question to ask.
And that's someone who has deep expertise on something.
So I need my CFO because if she's going to be orchestrating our team of agents that might
be doing financial analysis or whatever else, she needs to understand what to tell them to do
in our company.
And in turn, financial analysts can't do that.
They need the years of experience that, you know, Claire has.
On the other end, I need Kaz.
Kaz is 25.
Kaz knows everything about AI agents.
He's a young Japanese kid who's highly, highly curious.
You know, on the weekend, he's building AI agents to solve problems in my life.
I need those two kinds of thinking, which is highly proficient, agent-maxing young kids,
or they don't necessarily need to be young, but really leaned-in, high-curiosity people.
That's creating a force multiplier in my business and then I need deep expertise.
Now, there is another group I've thought of:
people with extremely great IRL people skills.
Because we do meet people in real life.
We greet you and you arrive here.
when we go for lunch with big clients that we have,
whether it's Apple or LinkedIn or whoever it might be,
we need to schmooze.
And we have teams who are in person in the office.
So we do a lot of stuff, IRL.
And increasingly we're building communities,
even for this show.
We're doing community events all around the world.
So we need people that are good at that as well.
IRL, bringing people together in real life and organizing stuff.
Those are the three groups of people that I'm like, you know, irreplaceable right now.
And if you were to take all of the
roles that could be done by AI agents and replace them with AI agents,
do you think you would still have these three pools of people to hire and promote
into the three critical things that you need in the long term?
If things carry on at the current rate of trajectory, one could assert that even those
roles would experience pressure. People think of things either statically or linearly or exponentially.
If you imagine an exponential rate of improvement, which is kind of what I've seen,
even like a 10% compounding rate of improvement,
at some point, I think what remains is actually the IRL, irreplaceably human stuff, human to human.
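The compounding arithmetic behind that claim is easy to check: even a modest 10% per-cycle improvement doubles in about 8 cycles and grows over a hundredfold in 50. The rate itself is, of course, the speculative assumption.

```python
# A 10% compounding rate of improvement, taken at face value.
rate = 0.10

level, cycles_to_double = 1.0, 0
while level < 2:
    level *= 1 + rate
    cycles_to_double += 1

print(cycles_to_double)         # 8 cycles to double
print(round((1 + rate) ** 50))  # ~117x after 50 cycles
```

This is the "statically or linearly or exponentially" distinction in miniature: a flat 10% added each cycle would give 6x after 50 cycles, while compounding gives roughly 117x.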
Our Maslowian needs of being in person like we are now aren't going to change. We need connection.
Humans get very sick when they don't have other human beings in their life and strong, deep relationships.
So that stuff is going to matter a whole lot. I have this contrarian, weird take that actually maybe this is the first
technology that's going to deliver on the promise of making us human and connected, because
we're going to be rendered useless at everything else other than what humans are good at.
Because all the other technologies said, oh, we're going to make you more connected, connecting the
world. And they disconnected the world and isolated the world. But maybe this is the one that's so
intelligent now that it doesn't need us to fuck around in spreadsheets anymore.
Do you see that actually happening in real time right now, that it's making us more
able to be in person, connected with one another, having deeper social community engagements?
Yes.
Yes.
And I'll give you some data points.
Okay.
Data point number one.
The Financial Times released a report on social media usage.
And what they saw is 2022 was the peak and it's plateaued ever since.
The generation that's plateaued the fastest and heading down is the younger generations.
Yes.
The boomers are still off to the races, right?
So on Facebook and stuff.
and then you look at the way Gen Alpha are using social media,
they're not posting as much.
They call it posting zero.
They're scrolling sometimes,
but they're in dark social environments like WhatsApp and Snapchat and IMessage.
They're not like performing to the world.
They also value IRL experiences much more than any other generation.
They're like not getting smashed.
We're seeing every brand has a run club.
We're seeing, I mean, run clubs explode,
exploding around the world.
And we're seeing this real sort of,
almost like innate realization
that technology let us down at some
fundamental level.
Like dating apps let us down;
social networking kind of has let us down.
And we're seeing, I think,
maybe a bifurcation of society
where a lot of people are going, fuck this.
Like, I want to go back to what it is to be a human.
Yeah.
And I would imagine that in such a world,
where intelligence is so sophisticated
that we no longer need to sit at laptops,
I think screen time's going to continue to fall.
I think you go into an office,
you're not going to see people sat at laptops.
You're going to see something completely different.
And I think maybe, you know,
and then we talk about robots
and Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong
with timing before. He's almost never been wrong on the big things completely. It's just,
his timing has got a bad track record. So I think he's probably right. You know, I think I've got some
people on the way from Boston Dynamics and these other big companies like Scale AI. And they're
actually bringing the robots here to show them folding laundry, doing the dishes. I'm not saying that's
what I would want in my home, but I think factory work is going to completely change. I think a lot of
manual labour is going to completely change, and I think we're going to be forced to do what only we can do.
Sebastian, who's the CEO of Klarna, has actually just called me.
Hello, Sebastian, you all right?
Hey, how are you?
I'm good.
How are you?
It's been a while.
It has been a while since you're on the show.
I was just saying, we do need to get you back on.
I just had a couple of simple questions, because, you know, I do a lot of interviews, and
Klarna's always mentioned because I think the media has said that you, like, doubled down on AI,
then you reversed because it didn't work out.
So I know I spoke to you a while ago and we exchanged a couple of DMs about it, but that was
more than a, it was almost a year ago now. So I just wanted to get an update on Klarna's business,
AI agents and all of that if possible. First of all, we were early on releasing AI to support our
customer service, which had that initial benefit of more calls being dealt with by AI, which customers
liked because those calls or chat messages were much, much faster and higher quality. Then since then,
that has actually expanded slightly. What we did, however, try to communicate as well is that we believed
in the world of where AI is cheap and available, the value of human interaction will be regarded as higher.
So the future of VIP customer service is a human. We have hence doubled down on providing
more of that. But at the same time, the efficiency gains within the company have continued. I mean,
we used to be about 6,000 people, and now we are less than 3,000, which is two, three years since we stopped recruiting.
And at the same point of time, our revenue has doubled, right?
So you can clearly see that AI has allowed us to do more with less people, but we have avoided layoffs and instead relied on natural attrition when people kind of move on to other jobs.
I mean, from my perspective, we will continue to, you know, not
really recruit much. I mean, we recruit a little bit here and there, but we expect that kind of
natural attrition of 10, 15% per year to continue, and we'll become fewer. I think the big breakthrough
was really in November, December last year, where even the most skeptical
engineers, ones who are very well renowned and appreciated, like the founder of Linux and people like
that, basically said that coding has now been solved and hence, you know,
you don't need to code anymore. And that was kind of a common sentiment. So I think in coding,
in engineering work, there has been a tremendous shift in the last six months.
What do all these people go do, Sebastian? I am optimistic. I mean, I think obviously people
will have a lot of opinions about this topic, but I still believe that we are going to move
towards a richer society. Now, in the short term, there could be more worry about what happens
if people don't get a job and so forth.
But I think in the longer term,
I am optimistic what it means for society and humanity.
Thank you so much, Seb.
I'll chat to you soon.
Thank you for taking the time.
I appreciate you, mate.
Thanks.
All right.
All right.
Bye.
Bye.
I've spent the last decade building and investing in companies.
And so often the conversation around marketing budgets
follows the exact same pattern.
The budget gets approved, but then the results don't come back.
And most of the time, the creative pitch and the offer is fine.
The problem lies with the audience.
Ads reach people who will never buy or refer, nor do they have the power to sign off anything at all.
And this is why so much budget gets wasted.
LinkedIn Ads, who are a sponsor of this podcast, lets you reach them specifically by job title, seniority, company size, industry, the skills that they have and much more.
You're no longer hoping your ad reaches the right person.
Instead, you're defining exactly who sees it.
And LinkedIn ads drives the highest B2B return on ad spend across all major ad networks.
Give them a try at LinkedIn.com slash diary.
And if you spend $250 in your first campaign,
you'll get a $250 credit for your next one.
Just by going to LinkedIn.com slash diary.
Keep this to yourself.
Terms and conditions apply.
You know the little traditional SIM card that goes inside of our phones?
They haven't changed at all since they were invented in the 90s.
You have this physical piece of plastic
that means you're locked into one carrier, one network,
and the second you cross a border,
that carrier can start charging you whatever they want.
But there are alternatives.
And today's sponsor, Saily, is one of them.
It's an eSIM app that gives you a safe and secure data connection
in over 200 destinations.
All of their eSIMs have built-in cybersecurity,
which is great if you're traveling for work
and looking at confidential material.
I've been using Saily whenever I travel
because the connection is always reliable,
and it saves me a ton of roaming fees.
It also means I don't have to deal with all of the faff
that surrounds sorting out a SIM everywhere I go.
If you want to give it a try, download the Saily app from the App Store now and scan the QR code on screen.
And if you want 15% off your first purchase, use my code, DOAC, when you get to check out.
That's DOAC for 15% off.
Keep that to yourself.
Any thoughts?
Well, I actually had thoughts on something that you said before he called, which is you were saying that the Gen Zers, like, there's this trend that they're actually disconnecting from
technology, so they're becoming more in person.
And then there's this other class of workers that are actually leaning into the technology,
but then becoming more human because they're leaning into the technology,
because they're realizing that they should actually just be spending more time
doing in-person, person-to-person interactions rather than staring at a spreadsheet.
And so they're no longer doing the typing and whatever.
I really want to go back to this New York Magazine piece that just came out
because what you're describing is true for a very specific category of people,
which is often like the business owners and leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time.
But what the piece talks about is the working class, like people who are not business owners that are then having to experience being laid off and then working for the data annotation industry, which is now one of the top.
jobs on LinkedIn, by the way.
Really?
Yeah, so LinkedIn had a report that showed the top 10 jobs with the highest growth in the last
year, and data annotation is on that list.
And for anyone that doesn't know what data annotation is?
Yeah, so data annotation is the process of teaching these chatbots or any AI system
to do what they ultimately are able to do.
So the fact that ChatGPT can chat is because there were tens of thousands or hundreds
of thousands of people
that were literally typing into a large language model and showing it, this is how you're supposed
to then respond when a user types in a prompt like this. Before they did that work,
ChatGPT didn't exist. Like, you would prompt the model and the model would
generate some text that was not in dialogue with the person. It would kind of generate something that
was adjacently related. Is this what they call reinforcement learning where you kind of, you give it like a
It's a part of the process of reinforcement learning.
So you do data annotation, which is literally showing lots of different, you know,
examples of things that you want the model to know.
And then reinforcement learning is getting the model to then train on those examples iteratively
in a way that then gives the model some of those capabilities.
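As a rough illustration, the two stages described here, humans writing demonstration examples and the model then training on them iteratively, can be sketched in toy form. This is a hedged sketch, not a real LLM pipeline: the `ToyModel` class, its methods, and the example prompts are all invented for illustration, and real fine-tuning adjusts billions of weights by gradient descent rather than memorizing answers.

```python
# Toy sketch of the two stages described above:
# 1) data annotation: humans write (prompt -> ideal response) demonstrations;
# 2) training: the model iterates over those demonstrations.
# Everything here (ToyModel, the prompts, the responses) is invented for
# illustration; it is not how a real language model is implemented.

# Hypothetical demonstrations a data-annotation worker might produce.
annotated_examples = [
    ("What's the capital of France?", "The capital of France is Paris."),
    ("Say hello.", "Hello! How can I help you today?"),
]

class ToyModel:
    """Stands in for a language model; it simply memorizes demonstrations."""
    def __init__(self):
        self.behaviour = {}

    def train_step(self, prompt, ideal_response):
        # One crude "training step": move the stored answer toward the demo.
        self.behaviour[prompt] = ideal_response

    def respond(self, prompt):
        # Without annotation-based training, the output is only adjacent
        # text, not dialogue -- mirroring the pre-ChatGPT behaviour above.
        return self.behaviour.get(prompt, "<unrelated, adjacent text>")

model = ToyModel()
print(model.respond("Say hello."))  # before training: unrelated text

# The "iterative" training pass over the human-annotated examples.
for prompt, ideal in annotated_examples:
    model.train_step(prompt, ideal)

print(model.respond("Say hello."))  # after: mirrors the human demonstration
```

The point of the sketch is only the dependency it makes visible: the model's conversational behaviour exists exactly to the extent that annotated human examples exist first.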
And what the New York Magazine piece highlighted is many, many of the people that are getting laid off now
or are struggling to find work.
And these are highly educated people.
They're college graduates, PhD graduates, law degree graduates, doctors, and again,
like award-winning directors that are then struggling to find employment in the economy because
the economy has been very much restructured by AI.
They are then finding themselves serving this industry.
And the industry is designed in a way that is extremely inhumane. The
companies that use these data annotation services, an OpenAI, a Grok, a Google, will hire third-party
providers that are data annotation firms
to then find the workers to perform the data annotation tasks that they need. For these firms,
these third-party firms, they are incentivized to pit workers against each other because
they want this data annotation to happen at speed and as cheaply as possible so that they
can also compete with one another in this middle layer to get the contract from the client.
And so all of these workers that were interviewed for this New York Magazine story talk about how
they actually no longer have an ability to be human because they are waiting at their
laptop to be pinged on Slack for when a project is going to open up for data annotation,
because they've tried job hunting and they literally can't find anything else.
This is the thing that's going to help them put food on the table for their kids.
And there was this one woman who said, like, I have so much anxiety about when the project is going to come, when it's going to leave, that when the project came, it was right when my kid was coming off of school.
And I just started tasking furiously because I don't know what's going to go and I need to earn as much money as possible in this window of opportunity.
So then when my kid came home and tried to talk to me, I screamed at my child.
for distracting me.
And then she was like, I've become a monster
and I'm not even allowed to go to the bathroom
or take care of my kids, let alone myself,
because this industry that is absorbing more and more of the workers
that are being laid off
is mechanizing my life, atomizing my work,
devaluing my expertise,
and then harvesting it
for the perpetuation
of this machine that all of these AI executives are saying is then going to come for everyone
else's jobs. And so what you were saying about this class of workers, the business owners that get
to become more human because there are all of these AI models now doing the tasks that they
don't have to do anymore, it is at the cost of the vast majority of people who are not
business owners, that are struggling to find work, getting absorbed into the work
of then providing these technologies that the business owners can use.
And instead of becoming more human,
they feel like their humanity has been squeezed and diminished,
and they have no ability to have control, agency, and dignity in their lives anymore.
I think this is a big question that kind of pertains to this graph here,
which is, you know, all of these people,
if we believe Anthropics' prediction of who will be disrupted,
these people in these industries like arts and media, legal, life and social sciences,
architecture and engineering, computer and maths, business and finance and management,
and also office and admin, these people, if we believe this, would have to retrain at something
else. And unlike the Industrial Revolution, where you might get 10, 20 years to retrain
because factories take a long time to build, the distribution layer that AI sits on top of
is the open internet. So this is why ChatGPT can go, pop, and get hundreds of millions of users
in no time at all and become the fastest-growing company of all time.
One of my fears is that this disruption takes place
at a speed where we can't transition.
And that was, you know, I think you said that sentence in the passive voice.
The transition would happen at a speed.
But who is driving that speed?
It's the companies.
The companies, yeah.
And their race with one another.
Yeah.
And so they are driving the transition to happen at a speed
at which it would be really hard to take care of all of the people
that would be bulldozed over by the advanced technology.
That no one can answer for me when I sit with these people that are AI CEOs.
So I go, so what happens to the people?
If you agree that this is going to happen at super speed,
you know, I've spoken to the CEO of Uber, Dara,
who said very similar things to what you're saying,
you know, there'll be data labelling jobs, for example, for the drivers.
But they can't all become data labellers.
And there's a question around meaning and purpose
and fulfillment, and what comes from losing your meaning in life.
I also sit here with so many people who talk about how their father lost their job in Iran
or some other country and came to the United States and had to be a toilet cleaner in one particular case.
Was a doctor in Iran, but came to the U.S. and was a toilet cleaner,
and had to deal with the sense of shame that that particular person felt
and the lack of dignity that that caused and how that made that person's self-esteem feel
and the depression and alcoholism that transpired from that.
If this happens at a large scale across society, there's going to be a ton of consequences like that.
I mean, this is like the core themes of my work.
And the reason why I'm critical of these companies is that they are creating technologies in a way that creates the haves and have-nots in an extreme form.
It's exacerbating the inequality that we already see in the world.
Like the people who have things will have way more riches.
They'll have way more free time.
They'll be allowed to be more human.
But the people who don't have things are being squeezed even more.
And it's not just from a work perspective.
I mean, I talk in my book also about the environmental and public health crisis that these companies have created,
where they are building these colossal supercomputer facilities.
and in communities all around the world.
And they specifically pick some of the most vulnerable communities.
We're sitting in Texas right now.
One of OpenAI's largest data center projects is being built in Abilene, Texas as part of the Stargate Initiative,
which was an effort announced at the beginning of Trump's second administration to spend $500 billion on AI computing infrastructure.
This facility, when it's finished,
will consume more than a gigawatt of power,
which is over 20%...
Over 20%.
So this is actually a little bit inaccurate now.
This was something that circulated online for a while,
but there's updated numbers.
Just for someone that can't see
because they're listening on Spotify or something,
it's a picture of the size of this facility.
So this is not the Abilene, Texas one.
This is a meta facility.
So let's just talk about OpenAI's facility in Texas.
That one would be the size of Central Park,
and it would run a million computer chips,
and it would require the power of more than 20% of New York City.
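That "more than 20% of New York City" figure can be sanity-checked with back-of-envelope arithmetic. The sketch below is hedged: the 5.5 GW figure for NYC's average electric demand is a commonly cited ballpark (its summer peak is closer to 10 GW), and the 1.1 GW draw standing in for "more than a gigawatt" is an illustrative assumption, not a number from the conversation.

```python
# Back-of-envelope check of the "more than 20% of New York City" power claim.
facility_gw = 1.1        # assumed stand-in for "more than a gigawatt"
nyc_avg_demand_gw = 5.5  # assumed rough average electric demand of NYC

share = facility_gw / nyc_avg_demand_gw
print(f"Facility draw is about {share:.0%} of NYC's average demand")  # ~20%
```

Under those assumptions the arithmetic lands right around one fifth of the city's average demand, consistent with the figure quoted in the conversation.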
Do you know one of the things which I found confusing,
so I'd like to alleviate the dissonance,
is I thought you were saying earlier
that you didn't think the job disruption promises were real?
No, what I was saying is that
when we talk about what these executives predict about the future,
we need to understand that they are ultimately trying to influence the public in a way
that allows them to continue maintaining control over the technology.
But objectively, do you think that the job disruption that they talk about were...
Yeah, yeah.
I mean, I mentioned...
Well, I don't want to comment specifically on like this chart,
but it's like we've already seen in job reports that there is a restructuring of the economy
happening right now. Yeah. But going back to the data centers: so this supercomputer facility,
it's a Meta supercomputer facility, is being built in Louisiana. And it would be four times the size
of the Abilene, Texas one, and use half of the average power demand of New York City. So it's one
fifth the size of Manhattan. This makes it seem like almost all of Manhattan, but it would be
one fifth the size of Manhattan. When these facilities go into these communities, what happens? Power utility
rates increase, grid reliability decreases. The facilities also need fresh water to generate the power
that runs them, as well as fresh water for cooling. And there have been lots of documented stories of
communities that are already really constrained in their freshwater resource. They're under a drought
when a facility comes in. And then there are people, the community is actually like competing
with this facility for freshwater. I talk about one of those communities in my book.
And also sometimes these facilities, instead of connecting
to the grid, a power plant pops up next to them. So in Memphis, Tennessee,
where Musk built Colossus, the supercomputer for training grok, he used 35 methane gas turbines
to power the facility. This is a working class community, a black and brown community,
a rural community that was not even told that they would be the hosts of this facility.
And they discovered it because they literally smelled what seemed like
a gas leak in all of their living rooms. And that's when they discovered that these methane gas
turbines were taking away their right to clean air. And this is a community that's already been
facing a history of environmental racism. They had already had lots of struggles to access their
right to clean air. And now there's this huge supercomputer that's landed in their midst
that is pumping thousands of tons of toxins into their air,
exacerbating the asthmatic symptoms of the children,
exacerbating the respiratory illnesses of other people.
It's one of the communities that has the highest rates of lung cancer.
And so...
And that supercomputer is taking their jobs.
And then they also have supercomputers taking their jobs.
So this is what I mean.
It's like the haves and have-nots are fundamentally,
being pulled apart even further.
Like, if you in this version of Silicon Valley's future
are in the misfortune category of being a have-not,
we are talking about you now getting a job
that is way worse than what you had
because you might be doing data annotation
and you might be treated as a machine
rather than as a human to extract value,
the value of your labor for perpetuating this labor-automating
machine that these people are building. You might be competing with these facilities for
freshwater resources. They're also polluting your air. Your bills have increased so the affordability
crisis is getting worse. Like, how is that making people able to be more human? What do we do
about it? Yes. Okay. So one of the analogies that I always use is AI is like the word transportation.
Transportation can literally refer to everything from a bicycle to a rocket.
And we have nuanced conversations about transportation, where we always say we need to transition,
our transportation, towards more sustainable options.
We need a transition towards, you know, public transport, electric vehicles.
And we don't ever say everyone should get a rocket to serve all of their transportation needs, right?
Like, we're in Austin.
If you use a rocket to fly from Dallas to Austin, like, that would just make no sense.
It's just a disproportionate use of resources to get the benefit of getting from point A to point B.
This is how we should think about AI.
So all of the models that we've been talking about, I like to think of them as the rockets of AI.
They use an extraordinary amount of resources.
And they provide benefit, some dramatic benefit, to some people.
But they're also exacting an extraordinary cost on a large swath of people because of the costs of developing this technology.
Why don't we build more bicycles of AI?
This is things like Deep Mind's Alpha Fold, which is a system that predicts how proteins will fold based on amino acid sequences.
It's really important for accelerating drug discovery, for understanding
human disease, and it won the Nobel Prize in Chemistry in 2024. And the reason why it's a bicycle
of AI is because you're using small curated data sets. You just have data that has amino acid
sequences and protein folding. So that means you need significantly less computational resources
to develop the system, which means significantly less energy, which means less emissions,
so on and so forth. And you're providing enormous benefit to people.
It feels like the horse has left the stable in this regard, because they've already taken people's IP, they've taken media.
They train on this podcast.
We know they do because it shows that they do.
I think there's a button actually in the back end of YouTube now that allows you just to click it.
And it says, we will train on your YouTube channel.
So the horse has kind of left the stable.
If the horse truly had left the stables, they wouldn't have to train on anything anymore.
Why is it that their appetite for data has actually expanded?
It's because in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation and society's involvement, they need to train again and again and again and again.
And why are they employing actually more and more and more data annotation workers over time?
It's because they need more and more of that work over time.
I mean, I've been reporting on data annotation work for over seven years now, and it's not gone down.
It's increased.
Do you think there's any chance of it going down?
Do you think there's any chance of this sort of brute force scaling approach where you take data,
you take computational power, energy, and you know, you have data labellers and, you know,
building out more and more parameters for the models?
Do you think there's any chance it's going to stop or go in a different direction other
than the one it's going in now.
I would love to reframe the question and say, what should we be doing in this moment where
it's not going down, where we do recognize that actually these companies in this moment need
continued resources, inputs and labor to perpetuate what they are doing.
Yeah, because this sounds like stop.
And I just feel like stop is like a hard.
It feels like, I just think, you know, with the government in place, they're supporting these
companies like crazy, globally this is happening.
so I'm like, stop doesn't feel...
I always say, we need to break up the empire
and we need to develop alternatives.
And we are already seeing a flourishing
of incredible grassroots movements
that are applying an enormous amount of pressure
to the way that the empire is trying to unfold its agenda.
80% of Americans in the most recent poll
think that the AI industry needs to be regulated.
When was the last time
that 80% of Americans were on the same side of an issue?
No, yeah.
When I had these conversations,
on the podcast, the comment section are clear.
Yeah.
There's no disagreement.
There's no one in there going, oh, no, I think they should crack on.
Yeah.
Dozens of protests against data centers have broken out all around this country, the U.S.,
and all around the world.
So what do we do about it?
So these are people that are doing something about it.
They are actually reasserting their agency and exercising democratic contestation
against the ways that the empires are going about their business.
What goal should we be aiming at?
So if I said to my audience, Janet at home, because this is kind of what I see in the comments, it's hopelessness.
It's like, what can I do?
I'm just a...
Yeah, well, well, the goal is not that we completely get rid of this technology.
The goal is that these companies need to stop being empires.
And the way I define like a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them or the people who use them or all of the other people that are involved in the supply chain of producing and deploying
these technologies. They can extract and exploit and extract and exploit and get more value
than what they offer. Whereas typical businesses, there is a fair exchange. You buy a service,
you feel like you got the same amount of value as the service that you provided. But like for
these data annotation workers, for example, they do not feel in any way that they're being
paid the same value that they provide to these companies. So that's like, for me, the North Star.
It's like we should be pushing back and holding accountable these companies when they operate
in an imperial way.
And that's what we've seen with all of these people
that are now literally protesting the streets
against data centers,
and having an enormous effect, by the way,
actually stalling data center projects
and also completely banning data centers
from being developed in their localities.
We're seeing that with artists and writers
that are suing these companies
for intellectual property infringement
and creating a huge public conversation
about what is it that we actually,
how do we actually want to protect our intellectual property.
It's like, three weeks ago, I met Megan Garcia, who is the mother of Sewell Setzer
III, who was the 14-year-old who died by suicide after being sexually groomed by
a Character.AI chatbot.
And she, when that happened, I mean, obviously was incredibly devastated by what had happened
to her son.
She also decided to do something about it.
She sued the companies, and that lawsuit then sparked many other parents and families who were actually
experiencing similar things to sue these companies as well. That has created an enormous public
conversation about what these companies are actually doing when they exploit and they extract.
What is the cost to the lives of people around the world, including children?
So what do you think my audience should do? If they agree with everything written in your book, Empire
of AI: Dreams and Nightmares in Sam Altman's OpenAI, if they agree with everything said here,
if they agree with everything we've discussed today, they're concerned about their kids, they,
they don't want everyone to become data labellers, they don't think that's a particularly
great solution. What can they actually go and do? When I was writing the book, the only discourse
that was happening was, this is the best thing since sliced bread. Because of all of the actions
of these people, like saying when they're not happy with the things that
these companies are doing, we now have 80% of Americans that want to regulate this industry.
And so I would say to people, think about all the ways that your life intersects with the resources
that the AI industry needs to perpetuate what they do. And also the spaces where they would
need to deploy these technologies to continue having broad-based adoption in their work. So you're a
data donor to these companies. You could withhold
that data, and that's what those artists and writers are doing, like they're suing these companies
to try and create mechanisms by which that data would then be withheld. You probably have a
data center popping up around you. If you're at a school environment or a company environment,
you're probably having a discussion in those environments right now about what should the AI
adoption policy be. And these companies, they, like, I was talking with some OpenAI employees
just the other day. And they were telling me that it's
understood internally that the revenue targets for the company are extraordinary.
And they need things to go flawlessly for it to all work out.
And so they would need every single person to adopt this, every single space to adopt this.
They would need to be able to build their data centers at the speed that they're trying to build them.
And so what I would say to everyone of your viewers is let's not make it go flawlessly if we don't agree with what they are doing.
Ah, okay, I got you.
And then let's build alternatives because the thing is, what I'm saying is not that these technologies don't have utility.
It's that specifically the political economy that has emerged to support the production of these technologies right now is exacting a lot of harm on people.
But we have research that shows that the very same capabilities could be developed with much more efficient methods, with much less resource consumption.
and we have a lot of different other AI systems at our disposal that are like the bicycles of AI
that we also know provide extraordinary benefit at very little cost.
So let's break up the empire and let's forge new paths of AI development that are broadly beneficial to everyone.
It's strange. I'm quite, I think I'm, I've trained myself to deal with dichotomies in my head.
And this, for me, is such a dichotomy where I, as a CEO and as a founder, as an entrepreneur,
someone that loves technology, I think it's incredible. It's absolutely incredible AI. It's just so
amazing and incredible. The things it's enabled me to do and create. Yeah, because it's designed
to enable people like you. And my car driving in the morning and being safer, incredible,
I think, you know, the billion odd people that use AI tools or ChatGPT or whatever it might be,
they'd probably say that it's added value to their life. But, and this is the part that people find
confusing that you can, and I like, I invest in companies that are, you know, heavily using AI,
And the big but is, is it possible to think that is true and also think that there are significant unintended consequences which technology and the history of technology should have taught us to take a moment to pause to talk about?
I think this is absolutely like you can have both of these things in your head.
And what I'm saying is that this tension doesn't have to be a tension because we could actually preserve the utility and benefits of these technologies but actually develop and design them in a different
way that doesn't have all of these unintended consequences.
Yes, and I think there needs to be a big social conversation, which is why I have so many
conversations about AI in the show.
Like, there needs to be a big social conversation about being intentional about the social impact,
the social and environmental impact.
And that conversation is not being had in government from what I can see.
The conversation takes place in the industry and actually trying to pull it out of the
industry and open people's minds to it is hopefully what we've been doing over the last
couple of months with the subject.
I think it's actually been, it has been happening everywhere outside of the industry.
And for local governments and state level governments, there have been huge conversations
about this.
Everywhere, like I've been on book tour.
I've been to dozens of cities around the world.
People are having these crucial conversations everywhere.
I have not gone to a single city.
Yes, everywhere.
Even here at South By.
Yeah.
I haven't gone to a single city where the room is not packed and people are not wrestling with
the same exact questions as every other person in every other room that I've been in.
Speaking of packed rooms, I know you've got to go.
Because you've got a talk today.
So we've got a last question, which is the closing tradition on this podcast.
How would your advice to a friend with a terminal diagnosis differ from what you would do yourself?
That's a great question.
Differ from what you would do yourself.
Oh my God.
I would tell them, like, enjoy, like, live life for yourself.
And take it easy.
And yeah, I am not taking it easy.
Well, I think it's a good thing you're not taking it easy
because you're leading a conversation,
which is incredibly important.
And I think that's the thing.
I think the conversation is the important thing.
And so, you know, because of algorithms and echo chambers,
it's so rare to have a conversation these days,
especially a long-form one like this.
So I think they're so important.
And your book is, for anyone that's curious about,
I think a lot of people would have learned a lot of stuff today
because I sit here and interview AI people all the time,
and I've learned so much today.
From reading your book, and the extensive, objective perspective
that your book takes,
you're able to unravel all of these stories
that we sometimes see in tweets
and we don't know if they're true or not
because you've gone and met the people
and you've done your research
and you're an incredibly intelligent person,
an extremely intelligent person,
who clearly has humanity's interests
as your North Star,
and that shows up in everything you do and everything you say.
So please continue to fight in the way that you are
because it's an incredibly important one.
It's people like you that are, I think,
galvanizing the world to take the collective action
that we're starting to see everywhere.
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
by Karen Hao.
I'll link it below for anyone that wants to read this book.
I highly recommend you do it.
It's a New York Times bestseller for good reason.
Karen, thank you.
Thank you so much, Stephen.
One of the most successful conversations we've had this year on the show
was with a guy called Chris Kona,
who talks about ways to make money on the side.
And it got me thinking because our show sponsor is Airbnb,
a brand I love, I've used all over the world for the last decade or so.
And this is an unbelievable, untapped opportunity to make some money on the side
if you currently are a homeowner.
Let me explain.
So many of us go travelling, we go on holiday to see in-laws
or to go on ski trips or whatever it might be.
And our home sits there, usually actually costing us money because of bills.
What most people don't realize is that you can,
put that house on Airbnb very simply and very easily. If this sounds interesting to you and you
currently don't list your property when you go away, your home might be worth more than you think.
Find out how much at Airbnb.ca slash host. That's Airbnb.ca slash host.
