Your Undivided Attention - A Conversation with the Team Behind the AI Doc
Episode Date: March 23, 2026

“The AI Doc: Or How I Became An Apocaloptimist” opens in theaters across the U.S. this Friday, March 27. In this episode, we sit down with the team behind this groundbreaking documentary: Oscar-winning producers Daniel Kwan, Jonathan Wang, and Ted Tremper. They explore how they navigated the overwhelming complexity of AI, held space for radically different perspectives, and created a film designed not just to inform but to be experienced together.

At CHT, we believe clarity creates agency. This film has the power to create the shared clarity we need to steer the direction of AI towards a better, more humane technological future. With every new technology, there’s a brief window to set the rules of the road that determine the future we live in. This is ours. So grab your friends, your family and go see “The AI Doc.”

RECOMMENDED MEDIA
Buy tickets for The AI Doc
The trailer for The AI Doc
The website for the Creators Coalition on AI
Further reading on The Day After

RECOMMENDED YUA EPISODES
A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger
The AI Dilemma

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey everyone. Welcome to Your Undivided Attention. I am Tristan Harris. And I'm Aza Raskin.
So in the fall of 1983, ABC aired this film called The Day After. It was a television event. It was a historic moment. And it showed in this film the devastating aftermath of what would happen in a possible nuclear war.
It was seen by more than 100 million people on one night
making it the most watched television event in human history.
And it aired during one of the most dangerous moments of the Cold War.
And I remember reading about how President Ronald Reagan screened the movie
for his national security advisors.
After he watched it, Reagan wrote in his diary that it had left him greatly depressed
and that we have to do, quote, all we can to see that we never have nuclear war.
Later, he wrote in his memoir about how the film changed his
thinking on nuclear war. He no longer saw it as something that the U.S. could win, but rather something
that everyone would lose. So yeah, I mean, it took some time, but over the next
few years, Reagan and some of his Soviet counterparts began discussing what it would take to have
nuclear disarmament. And in 1987, they signed the first ever agreement
to begin cutting down their nuclear arsenals.
So The Day After wasn't, you know, the magic bullet that solved everything,
but it's an example of the power that a film can have
to nudge the world in a different direction.
And it crystallized, you know, a mass movement against nuclear weapons
by helping President Reagan fully understand some of the human stakes of his decisions.
Tristan, you and I have been saying,
ever since we did the AI dilemma, so a couple of years ago,
that we need a day after for AI.
And that's because AI is, in fact, going to be much more consequential to humanity
than nuclear weapons were.
And if we don't want to go down the default path,
which is an anti-human path,
we are going to need the global clarity
where we all feel the same thing
at the same time to do something different.
Absolutely. And there's a new movie coming out this week
that we're hoping can do exactly that.
We are super excited because the new documentary film,
The AI Doc, or How I Became an Apocaloptimist,
is premiering this Friday, March 27th,
in theaters all across the U.S.
And Aza and I are in this film.
It clearly lays out the promise of AI, the peril, the stakes.
And it has 40 voices: the AI optimists, the people who are focused on AI risk, the people focused on AI ethics and the problems happening right now.
And you don't need to have any technical expertise to watch it.
It's super accessible and it's really engaging.
In fact, as I've watched people come out of the movie theater, people have said, I wasn't expecting to be moved at a movie like this.
It's fun.
It's engaging.
People gasp.
They laugh.
So today we're inviting on a few folks
who were instrumental in making the film.
Daniel Kwan, Jonathan Wang, and Ted Tremper,
who are producers on this film.
And our listeners may know Daniel and Jonathan
as the people that made the film
Everything Everywhere All at Once.
So we are so grateful to these guys
for coming on to talk about this movie
and the collective clarity and sense-making
it can bring to the greatest challenge
humanity has ever had to face.
Daniel, Jonathan, and Ted,
Thank you so much for joining Your Undivided Attention.
Thanks for having us.
Can each of you just introduce yourself so people recognize your name
and which role you had in the movie?
Yeah, this is Daniel Kwan.
I am one of the directors of the film Everything Everywhere All at Once,
but I'm also a producer on the AI Doc.
I'm Jonathan Wang.
I'm the producer of Everything Everywhere All at Once,
and I'm also a producer of the AI Doc.
This is Ted Tramper.
I'm a certified permaculture designer and amateur woodworker.
I'm also a producer on the AI doc.
All right.
So, Kwan and John, since we met you first,
how did this movie come together?
Well, I mean, I guess I could jump in and say that as a bit of a rewind,
throughout the pandemic,
I basically listened to like hundreds of hours of you guys talking.
So I almost have like a Pavlovian training.
I can hear the click and everything as I hear your voice.
But, yeah, after, well, it was actually during the run of Everything Everywhere,
we were, we had just premiered at the Castro Theater.
And I think we were up in San Francisco.
And I'd reached out to Aza.
We'd connected in New York.
And then with Dan and Tristan and Aza and Daniel, we all sat down.
And we were like, just wanted to just talk to you guys the way that, like,
typical Hollywood meetings go.
let me talk to this actor, this director, we're big fans, let's meet.
And as we sat down, I think we felt very acutely the weight that was on Tristan and Aza's shoulders,
as you guys had been looking at the bigger problems beyond social media and the real problems of AI.
And I think that conversation was so fruitful and we covered so much ground.
And I forget, you guys were on the way to something.
I was like so impressed.
You guys had like books in your hands.
I was like, this is a joke.
You guys are playing this up.
You guys grabbed some books from the library to look really smart beforehand.
But then we had this really incredible conversation about the impact of technology on culture and the future of technology and what was ahead.
And it was almost a bit of a bookmark into a much deeper conversation that was going to come.
But I think that was the initial seed that planted into our heads that we might need to be working together on something beyond just having this fun meeting.
Yeah, so I remember
it was not too long after ChatGPT came out
and that sort of like, you know,
that little nuclear bomb that went off across the internet
you guys reached out to us and you guys were basically wondering
how much we knew about AI and large language models
and what was coming.
And we knew a little bit but not enough.
And so we had a conversation about how important clarity
was going to be in the coming years
because the bottom line that everyone had
agreed on is that this is coming and the world's not ready for it.
And if we're going to be able to navigate our way through this together collectively,
we were going to need clarity.
And so we realized that we were in a position where we could help provide that.
We were given this great opportunity to work on whatever we wanted to work on next.
And we realized we could use whatever little influence we could to produce something.
We weren't sure what it was going to be.
We didn't know if it was going to be a documentary.
We didn't know if it was going to be a narrative.
We just knew we just needed to get as many eyes on this issue as possible.
And so we started this years-long journey into the heart of this extremely complicated hyper-object.
And basically we set off with the goal to see if we could condense all of that information
and all of that context and all the important framing devices into a one-hour, 40-minute movie
that could be entertaining, emotional, take you on a journey,
and spit you out in time for dinner.
That was kind of our goal so that anyone could watch it.
Any of our parents could watch it, any of our neighbors could watch it,
and they would be able to understand what you all understand.
So that was the goal.
It's a lot easier said than done.
Let's go into this.
So AI is this hyper-object.
It's so hard to talk about because it touches everything.
And so I'd love to give listeners just a story of your process
of how do you take this very complicated topic?
And even, I know, you know, the film team,
you had so many different views about the topic.
And, you know, I think the meta story here
of the film is around coordination and how do we coordinate.
And you have a lot of different views that you're managing.
So, yeah, Ted, do you want to also introduce yourself here
and jump into this story?
Hi, I'm Ted Tremper.
My origin story with the movie is very vivid.
I remember exactly where I was
when I first became aware of the subject matter.
I was walking down 10th Avenue in New York, and Dan Kwan gave me a call, and he said,
hey, I think you should listen to this, and he sent me your AI dilemma presentation.
And he basically explained that he feels like the difference between social media and AI in terms of its effect on humanity is going to be, instead of a race to the bottom of the brain stem, it's going to be a race to intimacy.
And he sort of explained what the fallout of that would be.
And Dan has a very fun way of, he'll never tell you you need to do something, but he'll say in this,
very specific coy way, I think you should look into this, or I think that
you should be interested in this. And this is, you know, we've been friends now for 15 years, I think.
Dan might treat you differently than me. He just tells me to do things. Oh, yeah.
You know, it's very funny, because my background is in comedy and in journalism. And going
back to your question, I think the most important thing in tackling an issue like this is that we
discovered very early on that the perspective that most viewers have, almost all human beings, really,
is: please tell me the one thing you can tell me about AI so that I never have to hear about it,
think about it, or talk about it again. And unfortunately, you know, the challenges
we found early on were, one, we're a movie, we're not a podcast or a YouTube series, and so
tackling breaking news would be extraordinarily difficult. So we needed to focus on things that would be
evergreen, which tend to gravitate towards things you can zoom out on. Things that will be true now
as true as they will be in six months or six years. And so a lot of the principles behind, like,
how the technology is made became really, really important. In terms of the different
perspectives of the film team, like we all really were trying to keep in our minds the different
members of the audience. So whether you're conservative, whether you're liberal, and deliberately
making it removed from politics, because that's one of the things
that is critically important: that we don't insert politics into this issue, because it affects all of us.
So trying to hold in our minds, our perspectives, the audience perspectives,
and also because it's a film, it needs to be entertaining.
And so almost nothing is harder than making what we hope is a good AI movie
because you actually do need to break down this incredibly complex hyper-object in ways that
somebody with no existing knowledge could understand and could make sense of for their life.
Just define for people the word hyper-object. It's referring to Timothy Morton.
He's a philosopher who talked about these problems that sort of span the entire world, complexity that touches everything, that is diffuse in time.
So he says, you know, when you turn your car on with the ignition key, that's climate change.
When you're feeling lightheaded because you're in an environment with pollution, that's climate change.
When, you know, you see your friend's house burned down, that's climate change.
The point is that climate change is this diffuse thing that's touching so many different things.
So there you are with AI.
And you see a data center go up in your backyard on
farmland that used to be there for 100 years, that's AI. When you see your niece who's unable to find a job,
that's AI. When you see some new crazy report online
that an AI model started going rogue and rewriting its own code, that's also AI. Just notice how
far away those concepts are from each other. They're not even close. And so what I love about what you all
did with the film is you're trying to represent something. And you can only almost only do this with a film
medium where you're taking the different faces of this very complex object and you're packaging it
into something where we can actually all see it together. We can actually make choices.
Given the multiple faces of that object, which of the faces do we want? Do we want mass job loss?
Do we want cancer drugs? Do we want energy solutions? But we can only navigate that when we have a shared
object. And I think so much of what you're doing with the film is you're creating common knowledge,
not just here's the knowledge, but common knowledge that I know that I know, and you know that I know,
and you know that I know that you know
because the other thing going on
is that some people have this knowledge about one aspect
but they don't know that other people do.
So they feel alienated.
Like I'm worried about AI,
but then I talk to my family
and they're talking about something completely different,
like how useful it is to vibe code.
And I don't know how to square the conversation
when it's being useful for you to vibe code
and for me feeling overwhelmed
that data centers are showing up in my backyard.
So we now have an object
that now the whole world can understand
and come to a common place
that starts from where are the choices
we want to make from here?
We ended up realizing this film had to be a sort of epistemological journey,
not just a journey about just the hard facts and how the technology works,
but also something that really covers the breadth of the ideologies driving everything that is behind this technology.
So not just the ideological drivers, the economic drivers, the psychological drivers,
just all these underlying drivers that will be true no matter what happens next year,
what happens next month, you know, because things are constantly changing.
The other thing worth noting that was really difficult about this project,
you guys mentioned the day after and what that did for the nuclear conversation.
The easy thing about nuclear, if there is an easy thing,
is that there's one basic worst-case scenario that you could depict.
You can say, okay, let's show people what it looks like if nuclear goes wrong.
And there's one obvious path.
And so then you can show the world, they can wake up to it,
and they can all agree we don't want that.
AI is so decentralized and so widely distributed
and has such far-reaching implications
for almost every aspect of our lives
and our world and every industry
that you could make a million movies.
And so with this film we ended up realizing
we had to center it on one single story.
Both the directors, Charlie and Daniel,
were expecting their first kids,
and we felt like that was such a beautiful parallel
to what humanity was doing together,
collectively birthing something new, with all of the unknowns attached to it.
And so that was our way in on a personal level.
And I think that the directors did an amazing job weaving those two stories together.
The story of humanity creating AI, and their own personal story of becoming parents for the first time.
I think one of the things the film does very, very well,
and this takes, I think, a lot of care from all of you, is that if
you are the kind of person who thinks that AI is going to be the thing that helps solve cancer and desalinate water for people,
all the positive things, your view is well represented in the film.
And not just well represented.
I think everyone who is sort of more on that optimist side is going to say, yes, that is my view and it's presented strongly.
And for people that think that AI is going to be really more catastrophic, that position is also just very well represented.
And I don't think anyone gets shortchanged.
And yet this film still has a point of view
about where things go.
But it's not like hitting people over the head
with what they should believe.
I just wanted to get you guys to talk a little bit about that.
And also, I'm just tracking for the audience.
It's just to describe, like, what is the structure of the film?
We were getting sort of hints of it,
but just lay it out a little bit.
What's the elevator pitch?
John?
We ended up arriving at this structure
that we felt was indicative
of our process going through learning about this topic,
which at first, when we started in, we were like,
this is terrible.
What is going on?
And I think through that process,
you experience this dark night of the soul
that you look for hope anywhere.
And you're trying to say, well, is there any good here?
And then you start seeing, oh, there is some good here.
And then you go through this mental gymnastics
where you try to think, okay, how can we just get the good
and not get the bad?
And what we realized is that the technology is just inextricably linked
and that you can't just filter out the bad and keep the good.
And so then we said, okay, so we need to kind of take the audience through that experience
of this is bad.
Oh, no, it's good.
Actually, it's both.
Therefore, what?
Therefore, what is a call to action for us as individuals, as society, to say,
this path that we're on?
We're told this lie that it's out of our hands,
it's inevitable, this is the future, it's here.
And we want people to feel, no, this technology is here.
How we use this technology is up to us, and the trajectory we're on is not inevitable,
and we need to rally together to say no to the default path.
If it's useful just actually running down the structure of the film, you know, at the beginning,
you know, there's sort of a roundup of all of the different inertia and panic that's going on with AI.
Daniel goes out and he seeks out people to get answers.
So he initially gets an overview of how the technology works.
That leads to discussions of some of the ways that things might go wrong,
including human extinction.
He goes back to his now pregnant wife,
and, as one is wont to do, just info-dumps all of this on her,
and she tells him that he needs to go out and find hope.
Then, of course, he goes and tries to find hope,
so he talks to people who are more excited about that technology,
and they illuminate some of the positive things that it can do.
And then the force of needing to reconcile those things
leads to a bunch of tremendously difficult questions and seeing where we feel like we need to go from there.
And it seems as though there are these two paths that create an impossible needle to thread.
And so he decides that he needs to actually talk to the people building it.
And we interview three out of the five CEOs.
You can see the movie to see which ones we got.
But one would hope that those are the people who would have the answers to how we make it through this.
And a thing I think that makes this issue very different than times in the past when industrialists, you know, have obfuscated what the actual worst harms of a technology were, whether it's fossil fuels, whether it's leaded gasoline, whether it's asbestos.
The CEOs are all pretty clear on record that this could bring about catastrophic harm.
And, of course, they're hyping it for different reasons and in different ways.
But, you know, Daniel essentially gets no reassurances.
And then he's left to actually ask the question, well, where do we go from here?
and that's sort of where the movie leaves you.
Just speaking to the production side of it,
we interviewed over 40 people on camera from myriad different camps.
I personally spoke to and did background interviews with over 100 people,
developed confidential sources who are either current or former lab employees of every single lab.
We have over 3,300 pages of transcripts to go through.
And so the process of trying to encapsulate all those different points of view,
making sure that people are feeling seen without
obviously indexing every single thing that everyone believes,
was really difficult, putting that into a film
that is entertaining enough that my 78-year-old dad was able to watch it in a log cabin,
a guy who's literally never owned a laptop before,
and he was able to explain to me how the technology works,
where he thinks it's going from here,
and actually give really good archival notes.
It was a very rewarding process to feel like we accomplished our goal.
You know, Ted has a comedy background.
Daniel and I have a film background.
AI isn't necessarily the thing that you would expect
would be first in the ranking of things that we'd be passionate about, right?
And I think a lot of times people think,
well, I'm not in a frontier lab.
I'm not a computer scientist.
I don't know code.
Like, what does AI have to do with me?
And my way into the story was actually through environmentalism,
in that I've been very concerned about the planet.
For me, that was something that I was losing sleep
about at night. And then I put AI into that equation. And I said, oh, wait, this is going to have
the highest energy demands of all. And we're just stacking this on top of everything else. So
AI, even without thinking about the problems of AI itself, just in its environmental impact,
this is a problem that I need to be concerned about. And then, so for me, that was my way in,
and it opened me up to everything else. So I think that people who, whether they're a parent, whether
they're a teacher, whether you're a truck driver or whatever, your way into your concern around
AI is just as valid whether you have a technical understanding of what is under the hood or if you
just have a philosophical understanding of what matters to you in life. Can I add one thing to that,
Jonathan? I think one of the things that's unique and different about the film is that
Daniel Rohrer, who is, I guess, the star of the film and the co-director, this is very different
than a TV special or something where somebody is saying, look, I don't know about AI, and so
we're going to go on this journey together where I'm explaining AI to you. This is a
guy who is so far over his head and is trying to figure this out as he goes along.
And I think showing that, showing the fact that he is convinced by different people that he
speaks to at different times, really mirrors on a meta level the way that we all come to
it as non-technology people.
You go out and you see a headline that says it's going to fix every problem in the world.
And then you say, okay, great, that's awesome.
And then you see one that says it's going to take everybody's jobs and kill everybody.
Where are those things valid?
Where is the overlap and where do we go from here?
Like that, showing that journey in a way that really shows our ass sometimes is like very, very important.
Because that's what we all go through.
That's what the film team went through.
That's what I think all people who don't have a technology background go through.
Even ones who do also need to go through that journey.
Yeah, I think one of the things that I've been feeling a lot lately, not just pertaining to AI, but to everything in general, is, I mean, a lot of this comes from, you know, one of your guests, Daniel Schmachtenberger,
the way he talks about the poly crisis, the meta crisis,
all these interlocking crises that are all feeding into each other,
how do we get our way out of this?
One of the only things he sees clearly is that if we cannot solve
the communication and coordination crisis,
we can't solve any of the other ones.
And that is something that's really stuck with me
for the past four or five years since I last heard it.
And when it comes to the AI conversation,
it feels so incredibly important
that we all wake up and realize we can't allow this conversation to become polarized
in the same way that everything else in American politics and beyond American politics has become.
Everything has really become this binary that leads to a lot of friction, a lot of gridlock,
and what happens when you have gridlock, nothing gets done except for the things that the people in power want to get done.
The people with the money and influence, they get to just do whatever they want while the rest of us are fighting.
And with AI, you can already see the ways in which that is happening,
which is unfortunate, and we have to really resist that.
But then at the same time, I see this as an opportunity
because especially within American politics,
this is one of those rare instances where people on the right and the left
both agree that they want to do something about this.
And one of the reasons why we decided to structure the movie the way we did
was to bring as many people as possible into this
conversation. And it doesn't matter who you are and what you believe in, or who you voted for.
Are there a few things that we all can move together on? Because we have to move fast.
We have to move yesterday and the cards are stacked against us.
And you know, Ted, one of my favorite questions that you asked absolutely everyone was,
like, how could you truly and royally mess up the film? Like, how could you end it? That would
be horrific. And I'm just curious. I think it was a great question to ask.
What answers did you get and then did any come true?
Yeah, the two questions we asked everyone, I think, was what is AI, which was a very fun question to ask
technologists, because it immediately puts people in this, like, oh, where could we possibly even start?
But it did a really great job of level setting, and I wish we could do a super cut.
We have a little bit of a super cut of that at the beginning of the film, of just what is AI and people's reactions to that.
But yeah, the question we asked everyone, I think, as the last question was, how would we screw up making a documentary about AI?
And that became a really interesting sort of compass as to how each of the different camps are feeling.
So unsurprisingly, there are some camps where, when you asked how you'd screw this up, they'd say,
you'll make it a killer robot movie.
You'll only talk about the bad things that are going to happen.
Some people would say the way that we could screw it up was by not focusing enough on the fact that their perspective is that this is all hype.
And all of essentially the hype you're seeing is just to drive up stock prices and to be able to generate more capital.
But it's a thing where
what I hope the film has done
is show the interconnectivity between
all these different perspectives and the failure states
that exist and where they overlap
so that we as a group can find a way forward.
And I think that what we're seeing,
regardless of how you have aligned historically, politically,
is that there are things going on right now
that if we take a moment to take a step back
from the way we've been divided by things like social media
or previous technologies,
there actually is a tremendous amount of alignment there.
I know this has been a really hard process for you.
As filmmakers, I think originally wasn't it the case, Kwan, John, that you
wanted to do this in like nine months or something like that,
and it took two and a half years.
You want to speak a little bit to, how do you also deal with something moving this fast?
Just curious your reactions to that.
Right.
So at first, it was be fast.
We need to be the first to market.
We need to have the first mover advantage.
We need to wake people up to these certain things.
And then we realized, well, to do that well, we need to set up all these other things.
And so there were just all these different pitfalls throughout the process where we said,
if we just do it this way, then we leave out all this other stuff, which will be an info hazard in all these other ways.
Or if we do this thing over here, all these people will feel disenfranchised,
and they're going to be actively fighting to tear down this movie.
And so one of the things that Ted Tremper has always been so good at saying is that our movie is a first date.
Right. We are not trying to get anyone to get married. We're just trying to get someone to then go on a second date, a third date, and engage a little bit more.
Because as you were just saying, just on all of these things, like, if someone's concerned about data centers, their maximal concern about the data centers and the degradation of a community and the environmental impact, those are maximally concerning and very important.
And it's not to say that we want to say, don't just follow that, right? We want to be able to say, that is just as important.
as all this other stuff and we want to hold a broad view.
And so I think that was the singular challenge for us as producers
was to constantly be like, okay, we really believe this.
This is firing me up.
And I really want to make a movie about this.
But how can we really make sure we give the counterpoint?
How can we really actually enter into this debate ourselves
and approach all of these conversations with good faith?
And that's the thing that Ted did such a good job with all of these interviews
is really, convince me of your view,
so we can represent it properly in the movie. So this full taxonomy of views is there.
And then hopefully we can just see the through line, which is the incentives, the drivers,
and be able to guide people through. As someone who's represented in the film, and one of the
strongest voices in the film, like, what was it like for you to watch it and to see it all laid out
in this way? That's a good question. Actually, on my team, people often say the way to get Tristan
to say the best stuff is to share a view about tech
that's incomplete or wrong, and then I'll get agitated, and that's when the best stuff will come out.
Because, let me say, that's one of my favorite parts of the entire shoot: being able to represent and say
something to you that I knew would make you very upset, because it leads to a very
precise rebuttal. It's very, very useful. It's a good technique. So there, you heard it here
first, for people at home: we're all just triggering each other. Exactly. Yeah, I mean, I think
what gets me is when there's a view that's represented that's
incomplete. So there are moments in the film when you see, you know, positives about AI that are
represented. And then there's this kind of like, oh, no, like, wait, don't, don't believe all of that yet,
because, if you don't factor it in, there's this fundamental thing about AI:
the upsides, like cancer drugs, don't prevent bioweapons. But the downsides, like bioweapons, can
prevent or sort of disable a world from receiving the benefits of some of the upsides.
And so there's this asymmetry between upsides and downsides. So, like, the kinds of weird scientific, medical, technological, energy solutions that it could generate are truly beyond your comprehension to even be able to consider. And that's where the optimists are trying to say,
look, guys, like, you can't even imagine how good this is going to be. So, I mean, I think that the
film does a really good job of taking people on this kind of journey, and it's very representative
of, I think, the style, you know, both visually and storytelling-wise, from your Everything Everywhere All at Once background, which is sort of taking people and yanking them around in these kind of clever ways. And I think people feel, you know, I just watched the film with a very influential person recently.
And I think people are sort of surprised to be yanked, kind of left and right,
and then kind of landing someplace in the middle in these unexpected ways.
And I think it's a testament to your capability as storytellers.
Aza, do you have a reaction?
An interesting quirk of history.
The AI Doc debuted in the exact same theater that The Social Dilemma debuted in.
Six years later.
Yeah, at Sundance.
And it's just bizarre because, like, as far as I can tell, Sundance isn't just one theater.
And it's very powerful just feeling an audience go through something at the same time.
People were bawling.
Not a little bit.
Having this hit people's nervous systems all together, people cried, but also people laughed.
There was a lot of laughter.
There were a number of moments of gasping.
And I remember, actually, for The Social Dilemma, this is stuff that Tristan and I live and breathe and swim in all the time, and yet seeing it all packaged up in an evocative way, experienced together, somehow did something to my resolve. It refocused me and caused me to say, there are still parts of me that hide from the problem. Even now, there are still parts of me that hide from it; it's just so big to take in. And seeing the film all together did something similar to what happened with The Social Dilemma. It recommitted me, because it just becomes inescapable.
Actually, that's a thing I then wanted to, like, turn around to you all,
because this is not easy subject matter, right?
A hyper object sometimes can be also a hyper bummer.
And I'm just curious about your own personal stories
of having to grapple with and deal with this kind of totalizing content
because on like a normal documentary or a film,
you can't turn it off.
you go home from the set, and it's still happening.
You can't escape anywhere.
And so what was that like?
Yeah, the thing we joke about, and it's not really a joke, is that everyone we pulled into this project gets almost like a welcome and a sorry, you know, because everyone has to go on a different but very similar journey of grieving.
And it's not because I'm saying that, you know, worst case scenarios are inevitable,
and we should be grieving.
What we're grieving together
is sort of the future
we thought we were going to live in.
The world that we thought we were going to live in
is no longer here.
Regardless of whether or not
you think this is the best technology
in the world
or the worst technology in the world,
we are saying goodbye
to the world that we were expecting.
And everyone on this project
had to go through a different version of that
at different times.
And it's been really interesting
watching this movie
with new people,
new audiences.
Me and John just had an interview
with a journalist
who watched it last week
and he was...
He kept saying it's over, man.
It's over.
But I tried to assure him
that he was on the journey
and just to trust the process.
But everyone reacts differently
to the materials
and it hits everyone
at places differently.
I mean, because you guys listen
to this podcast,
this stuff might not be new to you.
So maybe you're already pretty far along.
But for a lot of everyday people who haven't wanted to engage with AI,
I feel like this film gives them hopefully a safe place
to collectively feel like they're going on a journey of grieving and mourning and finally accepting.
And they're not having to do it alone.
That was one thing that the journalist that we talked to last week said
was he went and watched it by himself.
And when he was done, he was like, oh, my God, I wish there were other people here.
I need someone to talk to about this.
And it's feedback that we get from,
even from like some test audiences,
when we did some random test audiences with strangers,
one of the things that we heard was that everyone was really excited
that they got to see it in a theater full of other people
because that is a part of the experience too,
is realizing you're not alone.
And so, you know, obviously this is a shameless plug, but go see it in theaters.
I think it actually is the best way to watch it. You know, many people don't watch documentaries in theaters anymore, but I think this is the kind of movie where you're going to want to feel the presence of other people, like Aza said, laughing, crying, gasping, all of the things, but then ultimately, in the end, processing together is really what we need to be doing.
I wanted to talk about the visual style of the film, because I think you guys made some really creative choices around how to represent something like this. And, yeah, just give people a flavor of that. Yeah. I think one of the things that we knew early on was that we didn't want this to feel like a normal tech doc. You know, technology docs, they kind of have a very
specific look and feel and pace to them. And so because this film is so much about this, in my opinion,
like this imbalance between like our relationship with technology and our relationship with our
own humanity and spirituality and wisdom, you know, just that imbalance is leading to so many
problems. Very early on, we pulled on directors Daniel Rower and Charlie Terrell. Daniel Rower is someone
who is constantly painting because he says it's a way for him to cope with his ADHD. And so he has
notebooks filled with paintings and journals from his entire adult life, whereas Charlie is a director
who has made his name creating short documentaries using stop motion animation and a lot of
textural animation using objects.
And so not only does every frame feel handmade, there is also just this real deep soul and emotion to the whole thing, where it is not trying to feel like, I guess, most of the tech documentaries that you normally see.
I'm curious, actually, and I don't know the answer to this question.
What was the moment of, like, surprise for all of you in making this film?
Or another way of asking this is, like, in what ways did you change that surprised you in making this film?
Oh man.
Are you a licensed clinical therapist, Aza?
I just need to ask for how much I should disclose at this point.
I'm as licensed as AI.
Perfect, perfect, okay.
You know, the short answer is that I was very humbled by this experience.
I think having an opportunity to try to do something that I perceive,
as good for the world and to be humbled at every turn, you know, and to meet all the experts
and meet all the people who think about this 24-7 and the people who are building this technology,
the ones who are most afraid of this technology, the ones who are really influencing how
this technology is being designed, and just having an opportunity to kind of go on that magical mystery tour and to come out the other end not having the answers despite that,
and feeling that like, oh, everyone knows something.
In fact, they know more than most people.
And yet everyone still has their blind spots
and everyone still has uncertainty.
And being humbled by that experience was, I think, really important for me
because now I've been able to kind of take that humility
to other parts of my life because I'm realizing,
oh, this is not just AI.
This is really the energy we need to be taking back to all of our problems.
I'm hoping people don't leave this movie certain of anything, except for one thing, which is that the default path we're on is not the one we want.
I know also along the way in this project in your journey,
you started something called the Creators Coalition on AI.
You want to talk about what this project was
and how it was birthed kind of out of your own making of this film?
Yeah, of course.
One of the things that we realized while making the film was
we had to give audience members some directions,
some instructions for how to move forward with all this information.
And the fourth act does its best to elucidate
and list out a bunch of different ways
in which you can engage with this in your everyday life.
But one of the things that I realized was that,
oh, this is a topic that's going to touch every industry,
every aspect of our lives, every level of the world.
And so people would have to meet AI
where they're at. And for me, that means meeting AI at the intersection of the film industry.
And as we were making this doc, I was watching Silicon Valley move very quickly.
Meanwhile, on the other side of my life, I was watching the film industry kind of paralyzed.
The film industry was not moving to meet this technology, not moving to meet this moment.
But me and John and Ted and a bunch of other people working on this film realized we had an opportunity to kind of step in and begin the conversation. Again, not knowing the answers, but knowing that we had to start the conversation.
We had to start the conversation in a way that, again, brought clarity and brought all of this sort of energy that the film is asking for,
which is an energy of coordination and collaboration to avoid the friction, avoid the polarization.
Again, because the thing that we realize is we cannot allow the tech industry to set the terms for our industry.
And so that's where it started.
I'm going to let Jonathan kind of take it away.
We also saw that, because of where we were positioned in our industry, we could be a galvanizing force and get certain people who might have never talked to each other to talk.
And so because it was a scary transitional period, we just got all the leaders of the labor
unions together to just say, what are your unions concerned about?
Like, what are you guys actually facing in terms of job loss, in terms of definitions?
What are the problems?
And so once we knew the problems, then we were like, well, we can get together and we can try to help solve those problems as a neutral body, people who care to preserve this industry. And we can be the kind of hub where you come and say, I need to understand the implications of this for job loss, or for job degradation, or for fill in the blank, and we can then help.
And then, you know, we also have these upcoming negotiations within our industry and seeing that no one was even defining the basic technology correctly,
we're like, oh, this is a train wreck.
We are going directly head to head into a train wreck.
And so, you know, we're still figuring it out as we go,
and we're still trying to figure out the most high-impact way to do it.
And so that is what we're trying to do within the Creators Coalition on AI.
What I love about what you're doing is you're turning the sense of,
what can I do into action?
There's a phrase that's been bouncing around in my head since I heard it a while ago: grief is love with nowhere to go.
And I think sometimes depression or despondency
is agency with nowhere to go.
And a very simple question you could ask yourself,
what can I as a filmmaker do?
I'm just like one filmmaker.
But you resisted that urge and said, I'm going to reach up and out, and I have this jazzercise thing, like, reach up and out, reach up and out. You reached across to all the other filmmakers, Spielberg and whomever. And together, you're quite powerful. And I feel like that's a
template for everyone who's listening on the podcast from watching the film is that the natural place
your mind will absolutely go is, well, there's nothing that I as an individual teacher could do or I as
an individual lawyer could do. But if all of the teachers got together, or all of the lawyers got together, actually, that's a very powerful bloc. Thank you guys so much for coming on
Your Undivided Attention. The fact that we all met through this podcast, the fact that, you know, this podcast led to us getting to connect, and then this movie that you are bringing into the world that is so important: we are so grateful for the so many hours that you all put into making this possible. I know there are so many things that go into this, and I'm so excited for this to hit the world. I'm so grateful for you sharing your stories along the way, and grateful for who you are in the world and what you do. Thank you so much for coming on.
Thank you. Thanks for having us. Thanks for being you.
One of the things I love about the story is that, you know, if you're listening to this podcast,
you're a regular listener, you know, you're alongside these incredible Oscar-winning directors
who we met through this podcast because they listened to the episode with Daniel Schmachtenberger.
They listen to the episode with Audrey Tang. They've been following this work. And it shows you
that, you know, we don't get the privilege of meeting so many of you, our listeners,
except when we're out there in the world and you come up to us. But I just want you all to know
and get, like, you know, this is why conversations matter. This is why creating shared reality, getting other people to listen to this podcast, or to watch The AI Doc, or to watch The AI Dilemma, or just creating these shared realities, is part of the movement. And I'm just
grateful to meet these guys because they're incredible. And I remember fondly being at that dinner
and just feeling like these were creative peers. These were people who just are so talented
at telling stories and making things accessible and exciting and visually animated and
just weird and quirky and fun. I just remember, one, how humble they were, and two, how fast they were.
Yeah.
Because sometimes you get to, like, meet your creative heroes, and, like, the veneer sort of scratches off, but it was, like, the opposite with them. They have this huge wealth of metaphor and visual imagery.
And really, the other thing I think you're pointing at, Tristan, is the power of the unknown unknown. And the metaphor to draw here is, like, knowing what is the right path to walk for AI: it's impossible to see the whole thing from where we are.
And so you sort of have to put some trust in it: even though we can only point in the direction we'll have to move to get off the default path, and we cannot articulate every concrete action that has to happen to go from here to there, that doesn't mean give up hope. That means you have to try. And in the act of making this podcast, we had no idea that the directors of Everything Everywhere All at Once would, one, be listening; two, would want to meet up; and then, three, that it would lead to the creation of, hopefully, this next global moment that gives us the clarity to do something about AI. And the meta point is that the act of doing creates compounding agency to do more in the future in pushing the world in the direction that we all want.
Yeah, 100%.
And I think what you just said is so right, which is that hope or optimism comes from the unknown unknown set. It comes from: I can't see what could get us there, because if I look at the things that are known, it doesn't look like they're going to get us there. It's the things that are in the unknown set that could get us there.
And this film is one object as an example of that.
I think the other thing about the wisest and most mature version of ourselves is moving from
the what can I do to how do we get we to act.
It's from me to we.
And we often say in our work that there are no adults and that we are the adults we've been
waiting for.
You know, if there's no secret room of adults that's going to figure this out for us, part
of stepping into being an adult is the ability to reach up and out, to be a community convener,
to take all the nurses that you know and talk about this film together, take all the teachers that you know and talk about this film together, take all the parents that you know and talk about this film together, take all the other business leaders that you know and talk about this film together. If everybody did that, if everybody took responsibility for the sphere of influence that they had, if everybody reached up and out, if everybody was comfortable with uncertainty and committed to defining that path. Just imagine that culture, that wise, mature culture. It's not that far from where
we are, even though when you look around you, you don't see that wisdom because social media is
reflecting back the worst angels of our nature and the least wise of our nature. That doesn't mean that
it's not in us. So The AI Doc comes out March 27th. It's going to be coming out in theaters. So grab a big group of friends, your family, your book club, take your coworkers,
especially the people that don't think that AI is going to affect them.
This, I think, will make it clear that even if they don't use AI, they live in a world where AI is going to use them, essentially.
And then most importantly, go get drinks or dinner or host a conversation and talk about it.
This isn't something to go watch alone on your couch. It's something to experience together.
And then, you know, for everyone that's like, all right, I mean, I want to do something now.
We've also got you.
So stay tuned for our next episode where we get into sort of a walkthrough of the trailheads of specific solutions, actions you can take, what's possible, what we're working on, what other people are working on, and what you can be a part of.
Thank you all so much for tuning in.
Your undivided attention is produced by the Center for Humane Technology.
We're a nonprofit working to catalyze a humane future.
Our senior producer is Julius Scott.
Josh Lash is our researcher and producer.
And our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudaken, and original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this show possible.
You can find transcripts from our interviews, bonus content on our substack,
and much more at HumaneTech.com.
And if you like this episode, we'd be truly grateful if you could rate us
on Apple Podcasts or Spotify.
It really does make a difference
in helping others join this movement
for a more humane future.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.
