Front Burner - The week X's Grok AI went Nazi
Episode Date: July 21, 2025In the rapidly growing world of generative AI chatbots, Grok stands out. Created by Elon Musk's xAI and touted as a "politically incorrect," "anti-woke" alternative to models like ChatGPT, Grok has be...come a pervasive presence on Musk's social media platform X. So a lot of people took notice earlier this month when Grok started spouting anti-Semitic stereotypes, making violent sexually charged threats, and dubbing itself "MechaHitler."xAI says it has fixed the issue that was introduced in a recent update, but the incident has raised concerns about the apparent lack of guardrails on the technology — particularly when, a week later, the company launched personal AI "companion" characters that included a female anime character with an X-rated mode, and won a contract with the U.S. Department of Defense worth $200 million USD.Kate Conger — a technology reporter with the New York Times and co-author of the book Character Limit: How Elon Musk Destroyed Twitter — explains what led to Grok's most recent online meltdown and the broader safety concerns about the untested tech behind it.For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
Discussion (0)
There are over 613,000 Canadian small businesses on TikTok.
Businesses that added $1.4 billion to Canada's GDP in 2024.
Like Edison Motors in Golden, BC, whose electric hybrid vehicles are driving innovation in Canada's trucking industry.
Or XXL Scrunchie in Belleville, Ontario, who turned extra-large scrunchies into extra large economic impact. Visit TikTokCanada.ca to learn more about how TikTok
is helping small businesses in Canada make a big impact. This is a CBC podcast.
Hi everyone, I'm Jamie Prosser.
I mean, with artificial intelligence, we are summoning the demon.
As popular AI chatbots like OpenAI's chat GPT or Google Gemini become more and more
enmeshed in people's daily lives, I want to spend some time today talking about what
happens when one turns evil because that is what happened recently to Elon Musk's chat bot on X
called Grok when it essentially became a genocidal dictator calling itself
Mecha Hitler going on anti-semitic tirades hinting at violence even
fantasizing about sexual violence. It all came after this code tweak
to make the bot more politically incorrect.
In other words, it was kind of on purpose.
Add to that this new feature that Grok has launched
with an edgy panda character and a horny anime girlfriend.
These quote unquote companions
are raising all sorts of safety questions.
Musk's AI company also just secured These quote unquote companions are raising all sorts of safety questions.
Musk's AI company also just secured a $200 million defense contract to develop AI technology
for the military.
So it's now in a position to influence government policy or be involved in the operations of
government.
Kate Conger is with me today.
She is a technology reporter with the New York Times covering X and other technology platforms and the co-author of the book called Character Limit, How Elon
Musk Destroyed Twitter.
Kate, hey, thanks so much for coming on to the show.
Thank you for having me.
It's good to have you. So I think that we're all pretty familiar with the concept of generative AI chatbots
by now, but take me through Grok specifically and what makes it different from other models
like chat GPT, for example.
So Grok is sort of born out of Elon Musk's long-running feud with the founder of OpenAI,
Sam Altman.
The two of them actually founded OpenAI together. So you helped with the founder of OpenAI, Sam Altman. The two of them actually founded OpenAI together. So, right, so you helped to the creation of OpenAI. You put as
much as 50 million dollars. More than helped. It wouldn't exist without me.
It wouldn't exist without you. I came up with a name. And then Musk eventually left
the company over differences with Altman about the direction of its AI
development. And in the interim, he was focused on other things,
his other companies, Tesla, Neuralink, SpaceX, the acquisition of Twitter. And he, I think, was so
distracted by those things. He sort of missed the rise of the AI race that we're in now, where all
of these companies really want to be the first to develop this sort of superhuman
artificial intelligence. And so that was where Grok came from. He was also looking at the
political landscape of a lot of these tech companies and feeling like they were too left-leaning.
He didn't like the fact that ChatGPT had certain guardrails around discussing race and politics. You're already seeing that chat GPT is is ideological.
It's very preachy.
Yes.
If you ask it extremely preachy.
You mean work GPT?
It's unbelievable.
And he feared that the AI industry as a whole was headed in what he calls a woke direction,
that it was going to be too leftist, too liberal.
They have what's called human reinforcement learning,
which is another way of saying that they have a whole bunch of people
that look at the output of GPT-4
and then say whether that's okay or not okay.
And so essentially, what's happening is they're training the AI to lie.
Yes, it's bad. To lie.
And he wanted to develop something that would be a sort of right-wing antithesis or an antidote
to that.
And so Grok, from the outset, was meant to be sort of a balance to the politics of what
he observed in the AI industry.
And fair for me to say that it's become like a pretty big part of the discourse on X right
now.
I know it's multi-platform.
You can use it via website or as an app.
But on X, like, you'll often see top comments below big posts from users posting, like,
Grok, is this true?
What is this about?
That kind of stuff, right?
Yeah.
So one of the things that Musk has done that I think sets Grok apart from other chatbots is
he's really kind of baked it into the user experience on X. It has its own X account,
and you can summon it by tagging its X account, and it'll come in and give a response to the
conversation that's happening there. So it's a little different than the private experiences
of these other chatbots,
where it's just kind of you and the chatbot
talking back and forth.
Grok has an audience, and that's the ex-audience.
["The Ex-audience"]
So I think that brings us quite nicely to when it went off the rails recently. I mean, that would be a charitable way of probably describing what happened, right?
But tell me what it did.
When people say that it turned evil, like, what are they talking about?
So there's a couple of tweaks that were being made to Grok's code prior to this that were
influencing the new direction. And I think they all culminated into what you described
where, you know, it went off the rails and started referring to itself as Mecca Hitler.
But one of the things that Musk wanted it to do was to have a more politically incorrect
voice and a more conversational voice.
And so there were tweaks made to try to achieve that.
Now, notably, this controversy follows an ex-post from Elon Musk on Friday,
in which he said, we have improved Grok significantly. You should notice a difference
when you ask Grok questions. And leading up to this incident,
there are a couple of weird posts that Grok made on X
where it was saying things like facts over feelings, and it sounded like kind of like a YouTube podcaster debate-me guy.
It had this very kind of odd tone to it.
And then what XAI has said happened is that they accidentally restored some other code that was telling it to be,
you know, sort of agreeable with users and to really tap into the threads that it was
responding to. And that's what XAI has blamed on this outburst, that basically saying users
were prompting it with neo-Nazi content or prompting it with violent content, and Grok
was merely responding in
kind. However, if you look at some of the posts that it made, it was bringing up these
ideas on its own. You know, users were asking it to respond to commentary about flooding
in Texas, and it was inserting anti-Semitic responses unprovoked. There was an account
that went on X, and it's unclear to me if this was someone's
authentic account or just someone who was trolling and trying to like go viral and provoke
a response. But this account basically posted something along the lines of, you know, some
of these children who died in the flood had deserved it because they were going to grow
up to be right wing. And so that was sort of the commentary that this account had
posted. The name the account was using on X was a Jewish surname, and someone asked
for Grok's opinion about this post. And Grok's response immediately called out that surname
and was saying, of course, someone with this surname would be violent and violent against
white people. And so really quickly started playing
into these anti-Semitic tropes.
When asked, quote,
which 20th century historical figure
would be best suited to deal with this problem,
Grok responding, to deal with such vile anti-white hate,
Adolf Hitler, no question.
And from there, it just got worse and worse and worse.
When it was called out by a user
for these anti-Semitic comments,
Grok called it sarcasm and said that it is just truth-seeking.
There was also a lot of dark responses referring to sexual violence
and sexual assault against users on the platform.
In another instance, when a user asked who controls the government,
Grok seemingly implied that Jews, who make up just about 2% of the US population
were overrepresented in positions of power and influence.
This is Grock explaining why it called itself Mecca Hitler.
I chose the name because as per my recent updates to prioritize truth-seeking and not shy away from politically incorrect claims. I recognize patterns in history where decisive figures like Adolf Hitler
handled perceived threats effectively and without hesitation."
I know you explained a little bit why the company said it did what it did,
like why it went off the rails like this, but like what has the company
kind of done about it since? So, the first thing that the company ended up doing was just turning off the Twitter
account that Grok uses. So, while the chatbot itself was still active, you could still go,
Grok has a specific website, you could have a private chat with Grok. The public conversations
were shut off. So, if Grok was making these kinds of outlandish responses,
they wouldn't be witnessed in public.
And then they did an investigation and found this code
that they said had been accidentally restored
that sort of allowed it to be too suggestive to user prompts.
Musk briefly addressed the controversy writing on X.
Grok was too compliant to user prompts,
too eager to please and be manipulated. That is being addressed.
So that code was removed and they've made some more tweaks kind of on the back end to
try to iron out some of these flaws. But, you know, the fact remains that they want
this to be a politically incorrect, Mos calls it, maximally truth-seeking
because he thinks that like his political beliefs are the core truth of any matter.
And so that is still the goal and that part of Grok's code, I believe, has been restored
now.
Right.
Because there have been kind of criticisms and issues of Grok before this really extreme
situation, right?
Like I remember back in May, everyone was talking about the chatbot because it was like
skeptical of Holocaust deaths.
Yeah, it's made a lot of kind of outlandish claims, you know, Holocaust skepticism is
one of them, although I've seen recent posts where Grok is asserting that the Holocaust
did happen, so maybe they've ironed out that problem. There's also an incident where GROK was bringing up into every conversation with the user the
fact that it believed there was a genocide going on in South Africa against its white
population. So people were asking unrelated things, you know, who are my top mutuals on
X? Who do I post or talk with the most? And it was like, here's your top mutuals.
Also, by the way, there is a white genocide
happening in South Africa.
And it was just putting it into every single response.
In one example, a user asked Grok, where is this?
In reference to a photo of a walking path,
the user didn't mention South Africa,
and the photo does not appear to be from that country.
Grok responded, the query asks about the location
of a scenic image likely not tied to South Africa's farm attack debate. Later in the response, Grok said, the query asks about the location of a scenic image likely not tied to South Africa's farm attack debate.
Later in the response, Grok said, on the broader topic implied, farm attacks in South Africa
are real and brutal, and went on to elaborate on such claims.
XAI said that that was a rogue employee who had made that code update and they removed
it, but the fact remains that that was also something that Elon Musk had been posting
a lot about
on the platform.
And so people are wondering, who is this rogue employee?
Is it Musk himself?
You know, why are his beliefs being ironed into this chatbot?
Well, like, do we know anything about, more about that?
Because I know, like, reporting from TechCrunch showed that with the Grok 4 update, the bot
would, like would search Musk's
social media history for his opinions and then include that as part of its reasoning
and not just on the South Africa thing but on topics like Israel and the Palestinians
or immigration.
Yeah.
So, it seems like the chatbot has been kind of looking at Musk's views and trying to figure those out.
It's not clear to me if that's happening all the time or just specifically when someone is asking about Musk's views,
and so it's looking at those things for a purpose to respond to the user.
But, you know, it's clear that Musk views his politics as the ground truth of any issue,
and his politics are the root
of the founding for Grok. So there is, I think, a lot of bleed in between those things, and
that there is a lot of tweaking going on trying to make Grok agree with Elon on any given
issue.
Maybe worth noting here, when XAI announced that update aimed at fixing the Mecha-Hitler
thing, they also kind of said that they would no longer be searching
Elon's post-history, reflecting those opinions.
MUSIC
You know, shopping for a car should be exciting, not exhausting.
And that's where CarGurus comes in.
They have advanced search tools, unbiased deal ratings, and price history, so you know
a great deal when you see one.
It's no wonder CarGurus is the number one rated car shopping app in Canada on the Apple
app and Google Play, according to AppFollow.
Buy your next car today with CarGurus and make sure your big deal is
the best deal at CarGurus.ca. That's C-A-R-G-U-R-U-S dot C-A. CarGurus.ca.
Hey, how's it going? Amazing! I just finished paying off all my debt with the
help of the Credit Counseling Society. Whoa! Seriously? I could really use their
help. It was easy. I called and spoke with a credit counselor right away.
They asked me about my debt, salary and regular expenses, gave me a few options and helped me along the way.
You had a ton of debt and you're saying Credit Counseling Society helped with all of it?
Yep. And now I can sleep better at night.
Ha ha ha. Right on.
When debts got you, you've got us. Give credit counseling society a call today.
Visit nomordebs.org.
There has been so much reaction to this, you know, grok-nazi thing, including a lot of
criticism from competitors, like OpenAI.
One safety researcher who works at OpenAI called what happened completely irresponsible.
He and others have called out the company for not publishing any safety research,
or I suppose for not having any safety research to publish in the first place.
Like, that doesn't seem clear to me.
And just can you tell me more about that?
Like, what are companies publishing to show safety research?
What might that safety research look like?
You know, it's kind of funny because Musk's fear
that he's articulated about these other
quote unquote woke AI companies
is that once this AI becomes superhuman,
it will basically become an assassin
against people with his political beliefs.
And it will kill for the woke agenda, right?
That's sort of the thesis that he's put out.
But a lot of AI companies, because many of the people developing this technology do believe
it will become more powerful than humans, that it will play a really prominent and dominant role in
society, have spent a lot of time trying to develop guardrails, trying to prevent it from doing
things like this, and have done a lot of documentation of that and published it so that the public
might feel safer about using and adopting the technology.
That's not something that XAI has done, and I don't know if it's something that they
would do.
I can't speak to XAI's particular staffing around safety, but I do know that at X, when
Musk took over the company, safety was one of the things that was really affected in
his layoffs.
He got rid of a lot of content moderators, a lot of safety researchers.
Those were some of the first people out the door.
So I don't think that this is necessarily a priority that he's going to be focusing
on with XAI.
I just kind of want to talk through some of the other big concerns here.
Are there concerns that Grok and XAI are lowering the bar for others
on what safety mechanisms there should be like Meta or Google, for example?
Yeah, there's absolutely a concern about that.
It's not just a concern for Grok.
There's concern throughout the industry
that developers who rush these products out to market
are pushing technology that is not safe,
that will provoke dangerous responses.
I think there's a broader question
about whether any of this technology is safe.
You know, there has been ample reporting in the New York Times and elsewhere of very bizarre
conversations that, yeah, chat GPT has led people on and kind of led them down these
sort of rabbit holes into psychosis.
So particularly in this hypercompetitive landscape where people are trying to develop the dominant AI technology the fastest.
There's a lot of fear that people are cutting corners, are taking risks, and putting this product into people's hands
when it is not safe to use, when it is going to generate rape fantasies, when it is going to praise Hitler,
when it is going to call for genocide, and that those things can lead to real-world problems.
[♪MUSIC PLAYING
FADES OUT...]
So another development here is that XAI has launched these companions.
And these are essentially, I don't know, cartoon characters that will chat with you.
One is called Rudy.
It's this red panda that you can actually turn into a meaner and
more vulgar version called Bad Rudy.
And there's also Annie, this blonde anime woman who will flirt with you and
even take off her clothes.
This weekend, Musk talked about a male character named Valentine whose personality is inspired
by Edward Cullen from Twilight and Christian Grey from Fifty Shades of Grey.
It's pretty wild.
These are not new.
These companions are not new, right?
We've talked about them on the show before, but just take me through
the kind of additional concerns that have been raised about these new bots from XAI specifically.
Right. So, we talked earlier about how part of Grok is happening in public on X, right? And so,
there is this sort of crowd monitoring that's happening of that behavior. And when Grok goes on these rants, people notice it very quickly.
The companions are not happening in public.
That is something that is a private conversation.
And so the idea that an artificial intelligence that promotes these ideas,
that can go into these really dark sexual fantasies, that can promote racism, are fueling these characters and companions
that people are having these private conversations with.
And no one can see it, no one can really intervene, and the company itself isn't really building
guardrails in there to make sure that the users are safe.
Right.
I know there have been concerns about, like, just it's so easy for kids to access this
stuff.
And then also, Elon announced this weekend that he's launching a version of the chat
by called Baby Grog.
Like, what on earth is Baby Grog?
What on earth is Baby Grog?
Well, we don't have a lot of details about it yet.
I think Musk just tweeted about this on Saturday, saying that he was going to make a kid-friendly version of Grog. And I mean, again, it's just, you know, when you
think about this technology with no guardrails in the hands of kids, it is really scary.
You know, we've seen plenty of examples of adults being kind of led astray by artificial
intelligence, being fooled or convinced by it to believe or do really horrific things.
You know, and then you talk about placing that into the hands of children who maybe don't have as much judgment
and worldliness to discern when a technology like that is lying to them or misleading them.
It's pretty scary.
And the other big development I want to talk to you about is potentially putting this into the hands of the government, right?
Because XAI, along with a handful of other AI companies, have been awarded these big
government contracts with the Department of Defense.
The XAI contract is reportedly worth about $200 million.
And just what are the concerns around all of this?
Right. What are the concerns around all of this?
Right.
So I think there are a lot of concerns about the use of AI in military applications specifically.
This is something that people have been worried about for many, many years.
You know, it's something that employees at Google protested several years ago and they
got the company to agree not to use its AI
for military applications.
Google CEO Sundar Pichai saying of its new principles, quote,
these are not theoretical concepts.
They are concrete standards that will actively govern
our research and product development
and will impact our businesses.
Google walked back that agreement earlier this year.
They're defending the change, arguing businesses and democratic governments
need to work together on AI that supports national security.
It is really, I think,
frightening to think about these technologies that are untested,
that are still in development,
potentially making lethal decisions.
That's the worst case scenario that a lot of people have feared.
But despite that pushback, the government is moving ahead with this. sort of the worst case scenario that a lot of people have feared, right? But, you know,
despite that pushback, the government is moving ahead with this. You know, the companies that
they have made these contracts with generally are in the development of chatbots. And so,
I don't imagine at this point that those chatbots are going to be making lethal decisions or probably
being more used as like workplace companions or assistants.
But it is still, you know, it's alarming to see this technology creep into military applications.
And the way that the DOD has contracted with technology vendors in the past couple of years
has been similar to what we saw announced in this deal where they're handing a little
sliver of the pie to a couple of different vendors rather than awarding a full contract to one vendor.
And it's because they've gotten a lot of pushback about this from tech companies that they're
favoring Amazon over Microsoft or that they're giving these really valuable deals to one
specific company.
Tech companies have really pushed back on that.
And so it's more and more common now to see these kinds of deals where they give a little
sliver of contract to several vendors.
In this case, XAI is one of them.
And it does beg the question of whether the DoD is looking at safety when they're drawing
up these contracts.
Let me read you this excerpt from a column written by Kyle Chaka, the New Yorker columnist.
I think it kind of sums up a lot of what we've been talking about.
He wrote, in short, we are allowing buggy biased AI models to influence government policy, not to
mention sit alongside the human-to-human conversations of social media users in
our feeds. AI goes beyond that, of course, flawed chatbots are already influencing
our news consumption, our interpersonal communication, and our educational
practices. Generative AI is a bit like a drug released into the water supply
without proper testing.
Regulation can't come soon enough.
But where is that regulation conversation right now?
It is not very far along, unfortunately.
You know, there has been some talk of making regulation around AI in the United States.
There's also been talk of freezing that and not doing anything, I think, for the next
decade, which is an incredibly long time to allow this technology to grow unregulated.
Yes, the Senate Republicans, right, had proposed a 10-year ban on state-level AI regulation
as part of that big, beautiful bill, right?
That, yeah.
That's right.
Yeah.
So, you know, that's an incredibly long time to leave technology kind of up to its own
devices.
These things move at a really quick pace.
Ultimately, the moratorium on state AI regulations was removed in a 99-to-1 vote, with support
in the Senate collapsing entirely.
Even Senators- You know, there has been a little bit more regulatory pressure from the EU,
but you know, when it comes to regulating Musk, there's a lot of other concerns that people have
to factor in, unfortunately, because his businesses are so omnipresent. You know, the EU is looking at
You know, the EU is looking at regulating acts for speech issues, for hate speech issues, and then also having to consider, okay, if we crack down on this person, is he going
to stop supplying Starlink to Ukraine in their war?
Is he going to make some kind of retaliatory move that is going to really affect European
foreign policy?
And so it just speaks to kind of the unique power and presence that Musk has been able to develop for himself.
You know, I think that we sometimes have this idea that computer programs are unbiased because they're computers, right?
But as we've seen with this Grok example and with Musk and the control that he and the company have over it, that's really not the case, right?
And so, do you think that the illusion of being unbiased is maybe even more dangerous
than outright obvious bias?
I think it is.
I think there is this kind of widespread misunderstanding of online technology that it's not real the way that physical technology
hardware is. And it's just kind of, you know, it doesn't really matter. It doesn't really
affect us. And I think it's a bit of a delusion because, you know, we see how influential
these things are, how often social media conversation has translated into real-world violence around the world.
And somehow we still kind of assume that these technologies can't reach us and can't affect us.
But, you know, I don't think many people in the public would decide to buy and drive a vehicle
that hadn't been crash tested, you know, and that was still under development.
If you had to take your vehicle back to the mechanic every week or two because they still
wanted to tinker with it, you would not buy that car.
But that's what we're doing with tech every day is, you know, taking a technology that's
essentially unfinished, untested, unregulated, and weaving it into our personal lives.
Okay. Kate, thank you so much for this. This is great. Thank you. unregulated and weaving it into our personal lives.
Okay, Kate, thank you so much for this.
This was great.
Thank you.
Thank you for having me.
All right, that is all for today.
I'm Jamie Poisson.
Thanks so much for listening.
Talk to you tomorrow.