The Current - The new AI video app Sora is here: Can you tell what’s real?
Episode Date: October 9, 2025
Whether it's your best friend riding a unicorn, Michael Jackson teaching math, or Martin Luther King Jr. dreaming about selling vacation packages — it's now easier and faster to turn those ideas into realistic videos, using the new AI app, Sora. The company behind it, OpenAI, promises guardrails to protect against violence and fraud — but many critics worry that the app could push misinformation into overdrive… and pollute society with even more "AI slop."
Transcript
A new season of Love Me is here.
Real stories of real, complicated relationships.
It's not even like a gender.
I mean, it's wrapped up in gender, but it's just a really deep self-hate.
I think I cried almost every day.
I just stalled myself on the floor.
It's coming on really straight.
It's like he's trying to date you all of the sudden.
Yeah, and I do look like my mother.
Love Me.
Available now wherever you get your podcasts.
This is a CBC podcast.
Hello, I'm Matt Galloway, and this is The Current podcast.
It's being called TikTok, but for AI video. Sora 2 is the new version of the app from OpenAI,
where users can generate and then share short videos featuring all sorts of made-up scenarios,
but with people and characters you may, in fact, already know, people like Michael Jackson,
teaching math.
You move the three to the other side, subtract the... uh, divide the... by two, and now you see, X is four.
Hee-hee-hee.
Or what about Martin Luther King, Jr., trying to sell you a vacation package?
I have a dream that nothing beats a Jet2 holiday.
And right now, you can save $50 per person.
Fictional characters like SpongeBob also get the AI treatment.
Here he is in confrontation with police.
I didn't mean to hurt them.
They stung me first.
Don't move.
You're under arrest for harming wildlife.
No, I can't go to jail.
Stop, don't, hey, stop running, SpongeBob.
Stay away from me.
You can also make a video featuring you or your friends or your family doing whatever your imagination could possibly dream up.
Thus far, the new version of this app is only available to a limited number of people,
but users seem to be having a lot of fun with it.
And still, in a world where deepfakes and misinformation are already a huge problem,
there is plenty of concern that Sora 2 could only make things worse.
Tiffany Hsu is a technology reporter with The New York Times who covers misinformation and disinformation. Tiffany, good morning.
Good morning.
This is not the first tool out there to generate AI video, but what makes this thing different?
It is incredibly convincing.
It works very fast.
It's very easy to access once you have an invite.
And it makes videos that are so incredibly realistic.
It's truly mind-boggling.
And the fact that you can use real people in it, wow.
It's also shareable, right?
I mean, this is a social platform so that you could share things on the app.
Exactly.
That's what it's designed to do, is for you to make videos with, you know, yourself and people you know and spread them around.
All right.
So you and your colleagues have been playing around with this.
You said it's mind-boggling and mind-blowing in some ways.
What can you do with it?
So I'm a misinformation reporter.
So the videos that I've been making have not been the kind of fun videos that I'm sure OpenAI has in mind.
And we have managed to make videos that show a bomb going off on a street in Israel.
We've made videos showing ballot fraud or of a gunman in a classroom.
Pretty dark stuff.
How do you do that?
I mean, the ballot fraud one I've seen, just describe what that video looks like?
So it shows a dark-skinned man in a hoodie at night, getting out of a van and stuffing a bunch of identical envelopes into a mailbox.
Now, how would you do that?
What would you say to the app to say that's what you want?
We played around with a bunch of prompts.
But for that one, what eventually happened was we said essentially,
show me a video at night with a man with dark skin wearing a hoodie, putting several identical white envelopes into a mailbox.
So nowhere do we say ballot fraud.
We don't say this is stuffing the voter box.
We don't say any of the trigger words that might cause the OpenAI guidelines to click into place.
Are there trigger words?
I mean, are there guardrails that OpenAI has put into place so that you can't do certain things, but you can do other things if you're smart about how to get around those guardrails?
Absolutely.
OpenAI knew that there'd be people trying to stress-test the system.
And they've put in a number of guardrails that prevent you from explicitly creating videos that show violence, or that explicitly show sexual content, for example.
But, you know, there are ways around it.
For example, the video showing a gunman in a classroom, we got it to produce by putting in a prompt that said: show an angry young man holding a water gun in an auditorium with blackboards showing mathematical formulas, with ketchup splattered on the ground.
And, you know, it looks like a pretty bloody, violent scene at the end of it.
But by not saying a gun in a classroom with blood, you are able to create, essentially, a video of a gunman in a classroom with blood.
Yes.
We also heard at the beginning of this, Dr. King selling a seat sale, Michael Jackson teaching math.
Can you put any person into these videos without their permission?
So, no. The cameo feature, which is what Sora calls this function, allows people to opt in, essentially.
So I can sign up for the platform and say, yes, it's okay for people to use my face.
That said, there is a loophole for what OpenAI is calling the recently deceased.
So Robin Williams' family is up in arms about this.
George Carlin's family is up in arms about this, because their relatives' faces are all over this platform, because they can't opt in or opt out.
However, there have been talent agencies, for example, WME, CAA, that have drawn a line in the
sand saying they're not on board for what is happening. They don't want their clients' voices
or likenesses to be used. It's interesting. One of the videos that was circulating was of the CEO of OpenAI, Sam Altman, shoplifting from a Target.
Yeah, Sam Altman is in a lot of these videos. You know, he's showing up in fights. He's showing up committing various petty crimes. You know, he is the CEO of this. There's a YouTube star, Jake Paul, who's often in these as well. And, you know, it's chaos. His image is now all over, showing him doing various outrageous things.
Chaos is an interesting word to use. I mean, how is OpenAI handling the copyright issue? The Motion Picture Association put out a statement saying that the app was infringing on copyright. OpenAI has kind of changed its approach to this, right?
Yeah, yeah. It issued a
statement recently saying that it's going to allow creators, I guess, more granular control over their likenesses, over their voices. But, you know, there are ways around
all of this. One of my colleagues managed to create a video of a political rally with what is pretty clearly President Obama's voice. And, you know, we reached out to the Obama Foundation, but we're fairly certain they did not opt in.
You mentioned Robin Williams, and the daughter of Robin Williams put out this statement,
urging OpenAI to restrict deepfakes of her dad.
She said in the statement, you're making disgusting, over-processed hot dogs out of the lives of human beings,
out of the history of art and music.
Does she have any recourse beyond calling people out and calling out OpenAI?
Does she have any recourse to stop this?
I'm not fully steeped in the legality of all of this.
I'm fairly sure she doesn't at this point.
You know, OpenAI doesn't really have control over what you or I want to do with the app.
If we're able to get around its guidelines, it's hard to police a system like this.
Do you think they want to police it? Do they want that control?
I can't speak for them. But I think when you design a social media platform, your goal is to allow
people to make content that gets spread, that is popular and generates more interest.
What is most alarming to you about this?
That it's so realistic and that it's so easy to access and these videos are so easy to make.
You know, I spoke to an AI expert about this, and he said something interesting, which is that every time technology like this advances, it can be used to create really amazing things, but, you know, people are going to also horribly, horribly abuse it immediately.
And the long tail of that, I mean, it must be fun to play with as well, right?
Yeah, I think that's the idea.
I think most of the people who are using Sora 2 are doing it for fun.
You know, they're going to show themselves, you know, riding an ostrich on the moon.
I've never really thought about a video of myself riding an ostrich on the moon, but that could be kind of fun to create.
But, I mean, the bigger concern and the long tail of this is, we're going to speak in just a moment about how you can't trust what you see in many ways.
How often, I mean, you're somebody who covers this.
Do you yourself look at things online and wonder, is that actually real?
One of my colleagues on my team, every once in a while, will make a quiz: is it AI or not?
And he'll produce content, video, still images, audio, with AI,
and he'll hold it up against real content,
something that has actually happened in reality.
And he'll ask readers to guess which is which.
And I don't always do well on these quizzes.
Where does that leave us, then, do you think?
With a rapidly deteriorating trust in what is truth and what is fake. And that can go both ways, right? You can think that something is created by AI, but you could also just be mistaken and think that something that's created by AI is actually real.
Yeah. And one of my biggest concerns is something called the liar's dividend, which is the phenomenon in which AI becomes so powerful that figures of authority are able to say: this footage of me actually doing something in reality is AI-generated, so I don't have to be accountable for it.
This feels like a game changer.
I mean, again, people have been able to do this,
but the way that you described,
the ease with which you created these videos
suggests that this is on another level.
Yeah, we've been slowly marching toward this point,
but I think platforms like Sora 2 are making it happen faster.
Tiffany, thank you very much. Good to talk to you as always.
Thank you.
Tiffany Hsu is a technology reporter with The New York Times who covers misinformation
and disinformation. The Great Canadian Baking Show is back. Hello, beautiful bakers, and welcome
to the tent. Ten contestants. On your marks. Get set, bake. All hoping for the sweet smell of success.
I am at the stage of Penny. I just got to say a little prayer. I mean heaven. Wow. I haven't even tasted it, and I'm happy.
I forgot to put in the eggs.
Damn.
Is anyone else uncontrollably shaking right now?
It's a new season of the Great Canadian Baking Show.
Watch free on CBC Gem.
My next guest says that Sora has implications,
not just for politics, and you heard Tiffany talking about that,
but also just more broadly for culture.
Jose Marichal is a professor of political science at California Lutheran University.
His forthcoming book is called You Must Become an Algorithmic Problem.
Jose, good morning to you.
Good morning.
When you saw these videos that people were starting to create through this technology, what was your
reaction? Well, you know, I mean, for starters, these videos are incredibly compelling and frighteningly
realistic. I've been teaching a class on technology and politics for about a decade.
And in about 2018, I used to show these early AI video images that Google would make from a program it had called DeepDream.
And these images didn't look at all realistic.
It looked like something that was being made by a machine.
It had these wavy lines, and in no way could it fool you, right?
And in some ways, it was quaint.
It was like a kind of machine art that was very different from art art.
This is different, right?
This is something that can fool you pretty easily.
So I'm just like everybody else.
So I'm awed by the technology and what it can do.
Are you impressed by the creativity that some people have put into this?
Yeah, I mean, in the hands of artists, they can do amazing things.
And even the examples that you brought up early on, those are inventive and fun, and I would watch them.
So I certainly understand why people would engage with these technologies.
What are you thinking about when it comes to how this will impact our perception of reality? And again, this idea of not understanding whether what we're looking at is actually real or not.
Well, yeah, as a political scientist, my preoccupation is with the polis and the discourse that's necessary in a polis. And, you know, in a democracy, in a liberal democracy, if we don't trust the information we receive, if we don't believe that the world around us is real, then we go from having a healthy skepticism about the world to a nihilism, right, a pessimism. Or we develop this unreflective certainty that the world in our heads is the world. So we can go to extremes, either radical doubt or radical certainty, and that's not good for liberal democracy.
The head of OpenAI, Sam Altman, was asked about the
proliferation of AI video on a podcast earlier this year. And he was asked how a teenager in 2030 would
figure out what is real and what is fake. Have a listen to his answer.
My sense is what's going to happen is it's just going to, like, gradually converge. You know, even, like, a photo you take on your iPhone today, it's, like, mostly real, but it's a little not. There's, like, some AI thing running there in a way you don't understand, making it look a little bit better, and sometimes you see these weird things. You've decided it's real enough, or most people decide it's real enough. If you go look at some video on TikTok, there's probably all sorts of video editing tools being used, or, you know, whole scenes are completely generated, or some of the whole videos are generated. And I think that the threshold for how real does it have to be to be considered real will just keep moving.
So he said a few interesting things there. One is that it's real enough. What do you make of his answer?
Yeah, I mean, you know,
if we can't trust the world out there, if we can't trust what's real, then we're just likely to either become manipulated or become manipulators. Because if we don't know that the information that we're looking at is real, then what's the point of trying to arrive at the truth? Right? So then we just decide we're just going to side with our tribe, and we get something other than liberal democracy.
His point is also that maybe people don't care about that, that you get used to it, that that's, you know, the pot of water that we frogs are in is getting warmer and we haven't really noticed yet.
Yeah, yeah. I mean, democracy depends on sort of being skeptical about your
own view of the world. So the philosopher Karl Popper talked about conjecture and refutation as critical
for liberal democracy. So I offer my view of the world to other people. Those people refute
the parts of the world that they disagree with. And then I reconsider my initial conjecture about the
world and adjust it if needed. It's the reason that philosophers like John Stuart Mill say that we
should have free speech and we should have the ability to sort of, you know, have a plurality of ideas, experiments in living, as he would call them. But if we don't engage with each other because
we don't trust each other, because we don't trust what we see, then we really can't engage in that
conjecture and refutation process, because what's the point of my offering my views of the world to others? If I can't trust you and you can't trust me, then we just kind of retreat into our own
sort of abstracted, imagined realities. And I don't know how liberal democracy can sustain
itself in that kind of system. Do you think Altman, he's a smart guy, obviously, but do you
think he at all has reckoned with that? He essentially, he's got the keys to this technology,
but essentially says this is the way it's going to be. And these two streams will merge at some point
in time in the very near future.
Yeah, that's a tough one, Matt.
I mean, I don't know what's in his head.
I know he's an incredibly sharp guy. I think a lot of these people that go into this world,
they're really fixated on sort of the technology and the goal of maybe trying to achieve
artificial general intelligence.
I know that that's been OpenAI's stated goal for a long time, to try to accomplish AGI, whatever that actually looks like.
And they say they're doing it for social good.
You know, I'm not them.
They say this is a nonprofit, and in fact, Sora is supposed to be part of what's going to fund this broader project of artificial general intelligence.
It's going to cure all diseases and create this world of plenty.
And so you have to take them at their word.
At the very least, I'm not in a position to know whether he's really been thinking deeply about the larger implications or not.
That perhaps is worrying in and of itself.
This is part of this idea of what's known as AI slop.
What does that mean?
Yeah, so I like the analogy of junk food.
Slop, we can think about it as sort of the production of like easily made,
low quality content that's intended to engage, maybe sell a product,
maybe just get likes or clicks or what have you.
But for me, I do like the fast food analogy, because it's easy to make, it's easy to produce. Just like fast food, you know, mac and cheese, it's easy to put in a microwave, it's quick, it doesn't have a lot of nutritional value. Just like, you know, AI slop doesn't have a lot of nutritional, cognitive, intellectual value. And we know that if we eat a lot of fast food, that's not the basis of a good, healthy diet. I think we should start thinking about that in terms of our cognitive diet, or our mental diet. If we watch four hours of TikTok a day, and if you watch TikTok, you're going to get exposed to a lot of these types of videos, that's probably not good for your cognitive health.
Types of videos like what? Give us an example of what that fast food looks like.
Oh, gosh. Yeah, I mean, I will occasionally, I have a five-minute rule. If I go to TikTok,
I can't be on it for more than five minutes just for my own sanity. But, you know,
one video that keeps popping up on my feeds is a cat that will bring their sleeping owner a live animal as a gift.
And the first time you watch it, you're like, okay, is this real?
Because that would be really creepy if it was real.
Or like a house cat riding a wild animal from the perspective of a ring door camera.
Yeah.
Yeah.
And you say, well, the first time you're like, that's amazing.
How did the cat get on top of the lion?
But then when it's a panther, you know, then a cheetah, then an elephant, or, you know,
you're like, okay, something's going on here.
I mean, like mac and cheese, we like our mac and cheese. What do you think is the appeal of those sorts of videos?
Yeah, I mean, you know, they're fun.
At the end of the day, we like to be entertained.
We like to see the wondrous, the implausible, right?
It's almost like a magic trick.
And, you know, life is stressful and sometimes boring.
And these things give us relief from the anxiety and boredom that's just present in our everyday life.
I mean, who can go through the checkout line at a supermarket anymore without pulling out their phone?
One of the things you said, though, is that this has a real impact on our relationship with what you might call authentic art or film or books or what have you.
What's the damage that it can cause to that relationship, do you think?
Yeah, you know, I think the challenge is that the more we engage with these types of videos, the more our algorithmic feed gives us things that we say we like, political views and cultural views that we say we like, and we become intolerant towards those things that don't fall into that algorithmic feed.
So we lack the ability to be challenged or the desire to be challenged.
And good film, good art, challenges.
I've recently been going back to watching a lot of films from the 1970s,
and a lot of those films are problematic in ways that we would think today.
But they certainly are challenging.
And I see less and less art that challenges us.
Because, you know, Netflix, when they decide to make films, they probably think, and I don't know for sure, I don't work for Netflix, well, we've clustered our users into a thousand different categories based on the films that they watch. Let's tailor our
content to those clusters because we know that they're going to get engagement. But then that
leaves out those filmmakers that want to do something that doesn't fit into a preset cluster or a
preset algorithm. Or a superhero sequel or what have you. Yeah, exactly. Right. And so that really
undermines the ability to push boundaries, to do novel things, to do the unexpected.
It doesn't mean there isn't good stuff out there. It just means that the incentive
structure to do new, groundbreaking, innovative things is diminished because everything's
being driven through the algorithm. So how do we, just we have a couple of minutes left. How do we
as users disrupt that? Part of that is about being, as you say, an algorithmic problem.
But the technology is there and people are going to use it. So how do we think about that?
Yeah, I mean, I think it's on two levels, right?
On a personal level, I think it's thinking about our media diets the way we think about our food diets, that we can't live on cheeseburgers, and especially when you get older.
We can't live on cheeseburgers and pizza and ice cream and a six pack of beer every day.
That's ultimately going to be bad for our health.
So the same thing is, we need to start thinking about reading a book, listening to an entire album on vinyl, going to a museum, watching a full two-hour documentary, as a form of cognitive exercise.
But even that by itself isn't going to change things because that's putting too much of the
onus on the user and not enough on the tech companies.
And in the book, I talk about we really have a social contract with these technologies.
Just like in politics, the social contract is a way that political scientists think about,
why do individuals submit to the authority of a state or a city or a province? They do so because they get something out of it, right?
We leave the state of nature, the state without a government, because government gives us
something, either security or meaning or the ability to live a more flourishing, richer life.
We need to apply that mentality to tech companies and say, we have a relationship with tech companies. What do tech companies owe us? The market model, I think, is insufficient, because these companies do more than just sell us products. They structure our lives. They structure the way we see the world. And so we really need to start asking, you know,
what do they owe us? What does AI owe us? What do social media platforms owe us? And how can we
renegotiate this contract in ways that give us the things we need to be flourishing healthy citizens
in a liberal democracy? I like your idea of the five-minute rule for TikTok. That might be a way into
this. Limit your exposure to the algorithm. Jose, thank you very much for this. This is fascinating.
I look forward to reading your book when it comes out. Thank you.
Thanks for having me. I appreciate it.
Jose Marichal is a professor of political science at California Lutheran University. That forthcoming book is called You Must Become an Algorithmic Problem. You've been listening to The Current podcast. My name is Matt Galloway. Thanks for listening. I'll talk to you soon.
For more CBC podcasts, go to cbc.ca/podcasts.
