The Daily - The Sunday Read: ‘What if A.I. Is Actually Good for Hollywood?’
Episode Date: December 15, 2024

"You couldn't have made this movie three years ago," said Robert Zemeckis, the director of "Here." The film stars Tom Hanks and Robin Wright, and is based on a 2014 graphic novel that takes place in a single spot in the world over several centuries. The story mostly takes place in a suburban New Jersey living room. It skips back and forth through time, but focuses on a baby-boomer couple, played by Hanks and Wright, at various stages of their lives, from age 18 into their 80s.

Before A.I. software, Zemeckis could have had multiple actors play each character, but the audience might have gotten lost trying to keep track. Conventional makeup could have taken a decade off Hanks, who is now 68, but not half a century. The issue with C.G.I. is time and money. Persuading us that we're watching Hanks and Wright in their 20s would have required hundreds of visual effects artists, tens of millions of dollars and months of postproduction work. A.I. software, though, changed all that accounting.
Transcript
Hi, my name is Devin Gordon and I'm a contributor to The New York Times Magazine.
So maybe you remember last year there was a major strike by the Writers Guild of America
and the Screen Actors Guild that completely stopped Hollywood for several months.
And one of the issues at the core of the contract negotiations with the movie studios was the
subject of AI.
Guild members were concerned that AI could replace humans at every stage in the creative
process.
That studios would soon use AI to write screenplays, direct and edit films, design the special effects, and even read scripts and decide which ones got greenlit.
Actors, meanwhile, were concerned about copyright ownership over their images.
They wanted to protect their likenesses from being exploited, reproduced, and profited from without their benefit.
But I also heard another perspective from AI optimists in Hollywood.
They told me that the technology was still widely misunderstood.
So I decided to find out what AI was actually being used for.
Did the anxiety match the reality?
At first it was difficult to find people in the industry who would go on the record in
praise of AI because that can be seen as siding with the machines or undermining union solidarity.
But eventually I was able to speak with artists who have already incorporated AI into their work in
films you might have already seen.
One use of AI is to make actors look younger or older than they actually are.
For example, in a new movie called Here, AI transformed Tom Hanks' face to make him
look anywhere from 18 to 80 years old.
This method of facial replacement technology is also being used in stunt work.
You can take the face of, say, Dwayne Johnson and digitally paste it onto the face of a
stunt person leaping off a cliff.
Filmmakers have also used AI to bring back dead actors to reprise roles, as was done with Ian Holm for his android character in this summer's Alien: Romulus.
This is the kind of work that would typically take teams of hundreds of artists, drawing every pixel frame by frame, several months to do, and it can cost tens of millions of dollars.
For this week's Sunday read that you'll be hearing next, I wanted to get as many honest takes as I could from people in Hollywood about where AI might ultimately lead.
And I really wanted to know, and I think many of us as moviegoers do as well, but might
be hesitant to ask, could AI actually make movies better?
So here's my article, read by Eric Jason Martin.
Our producer is Jack D'Isidoro, and our music was written and performed by Aaron Esposito.
The Los Angeles headquarters of Metaphysic, a Hollywood visual effects startup that uses
artificial intelligence to create digital renderings of the human face, were much cooler
in my imagination, if I'm being honest.
I came here to get my mind blown by AI, and this dim three-room warren overlooking Sunset
Boulevard felt more like
the slouchy offices of a middling law firm.
Ed Ulbrich, Metaphysic's Chief Content Officer, steered me into a room that looked set to
host a deposition, then sat me down in a leather desk chair with a camera pointed at it.
I stared at myself on a large flat-screen TV, waiting to be sworn in.
But then, Ulbrich clickety-clicked on his laptop for a moment, and my face on the screen was transmogrified. Smile, he said to me. Do you recognize that face? I did, right away. But I
can't disclose its owner, because the actor's project won't
come out until 2025 and the role is still top secret.
Suffice it to say that the face belonged to a major star with fantastic teeth.
Smile again, Ulbrich said.
I complied.
Those aren't your teeth.
Indeed, the teeth belonged to the famous actor. The synthesis was seamless and immediate, as if a digital mask had been pulled over
my face that matched my expressions, with almost no lag time.
Ulbrich is the former chief executive of Digital Domain, James Cameron's visual effects company, and over the course of his three-decade career, he has led the VFX teams on
several movies that are considered milestones in the field of computer
generated imagery, including Titanic, The Curious Case of Benjamin Button,
and Top Gun Maverick.
But in Ulbrich's line of work, in the quest for
photorealism, the face is the final frontier.
I've spent so much time in Uncanny Valley, he likes to joke,
that I own real estate there.
In the spring of 2023,
Ulbrich had a series of meetings with the founders of Metaphysic.
One of them, Chris Umé, was the visual effects artist behind a series of deepfake Tom Cruise videos that went viral on TikTok in early 2021,
a moment many in Hollywood cite as the warning shot that AI's hostile takeover had commenced.
But in parts of the VFX industry, those deepfake videos were greeted with far less misgiving.
They hinted, tantalizingly, at what AI could soon accomplish at IMAX resolutions and
at a fraction of the production cost.
That's what Metaphysic wanted to do, and its founders wanted Ulbrich's help.
So when they met him,
they showed him an early version of the demonstration I was getting.
Ulbrich's own career began during the previous seismic shift in the visual
effects field from practical effects to CGI.
And it was plain to him that another disruption was underway.
I saw my career flash before my eyes, Ulbrich recalled.
I could take my entire team from my former places of employment, I could put them on it for eternity using the best CGI tools money can buy, and you can't deliver what we're showing you here. And it's happening in milliseconds.
He knew it was time to leave CGI behind.
As he put it, how could I go back in good conscience and use horses and buggies and rocks and sticks to make images, when this exists in the world?
Back on Sunset Boulevard, Ulbrich pecked some more at his laptop.
Now, I was Tom Hanks, specifically a young Tom Hanks,
he of the bulging green eyes and the look of gathering alarm on his face in Splash when he first discovers that Daryl Hannah's character is a mermaid.
I can divulge Hanks' name because his AI debut arrived in theaters nationally on November
1st in a movie called Here, directed by Robert Zemeckis, written by Zemeckis and Eric Roth,
a reunion of the creative team behind Forrest Gump, and co-starring Robin Wright. Here is based on a 2014 graphic novel that takes place at a single spot in
the world, primarily a suburban New Jersey living room, over several centuries.
The story skips back and forth through time, but
focuses on a baby boomer couple played by Hanks and Wright at various stages of
their lives, from age 18 into their 80s, from post-World War II to the present day.
You couldn't have made this movie three years ago, Zemeckis told me.
He could have used multiple actors for each character, but
the audience would get lost trying to keep track.
Conventional makeup could have taken a decade off Hanks,
who is now 68, but not half a century.
The crux with CGI is time and money.
Persuading us that we're watching Hanks and
Wright in their 20s would have required hundreds of VFX artists,
tens of millions of dollars, and months of post-production work.
Doable, in theory, but major studios don't spend that kind of money on movies like Here.
There's no capes or explosions or aliens or superheroes or creatures, Ulbrich explained.
It's people talking, it's families, it's their loves and their joys and their sorrows,
it's their life.
AI software, though, changes all the accounting.
By using every available frame of Hanks' movie career to capture his facial movements and the look of his skin under countless lighting conditions, physical environments, camera angles and lenses, Metaphysic's artists can generate a digital Tom Hanks mask with a few keystrokes.
And what we see on screen is just one factor in AI's ascendancy.
It's the quality, and it's the speed, and it's the cost, Ulbrich said.
No six-month production lag. No fortune spent.
During the filming of Here, Metaphysic devised a setup that enabled Zemeckis and his crew
to follow the shooting of scenes on two different monitors, one showing the raw feed from the
camera of the actors as they appear in reality, and one filtered through its AI tools showing
the actors at whatever age the scene required.
Zemeckis has a long history of pouncing on new technologies to help him tell stories,
from Forrest Gump to The Polar Express, and Hanks has often come along for the ride.
In this case, the production breakthrough mattered as much as the image quality.
It was crucial that the cast could see it, because then they could adjust their performance,
Zemeckis told me.
They could say, I see, I've got to make sure I'm moving like I was when I was 17 years old.
No one had to imagine it.
They got a chance to see it in real time.
And despite the technical ambition, Here only cost about $50 million, less than a quarter of some Marvel movie budgets.
From Metaphysic's office in Hollywood, I drove 30 minutes south to Sony Pictures' studio
lot in Culver City to watch a screening of Here in the basement of the Irving Thalberg
building.
And for me at least, the AI-driven scenes passed the baseline test of any ambitious
movie illusion. I didn't notice it.
But reactions are bound to vary, especially when it comes to a face as familiar as that of young
Tom Hanks, a high bar for a big-screen visual effect, and when an illusion doesn't work,
it can be hard to focus on anything else. Maybe it will turn out to be impossible to escape Uncanny Valley after all,
even with the help of AI. Then again, the whole fuss over the Tom Cruise deepfakes was propelled
by how convincing they were, and that was three years and three Nvidia chips ago.
It seems like only a matter of time before they fool us all.
The history of Hollywood can be told as a series of technological leaps, beginning with
the invention of the camera itself.
And each time something new comes along, jobs are lost, jobs are created,
the industry reorganizes itself.
Everyone in town of a certain age has seen this movie before.
Past leaps, though, have tended to have narrower impacts.
Home video changed movie distribution, digital cameras changed movie production,
CGI changed visual effects.
The difference here is that AI has the potential to disrupt many, many places in our pipeline, says Lori McCreary,
the chief executive of Revelations Entertainment, a production company she
owns with Morgan Freeman, and a board member of the Producers Guild of America.
This one feels like it could be an entire industry disruptor.
AI is evolving so rapidly, though, and remains so
poorly understood by so many people in Hollywood, that it's difficult to predict
how it will wind up proving most beneficial, and
which aspects of the filmmaking process it will disrupt first.
Everyone's nervous, says Susan Sprung, the Producers Guild's chief executive,
and yet no one's quite sure what to be nervous about.
The use of AI in Here is a critical element in its broader illusion, but it's also a small one in a movie full of old-fashioned visual invention.
And aging and de-aging actors is just one way that filmmakers are tinkering with AI-driven
facial replacement.
It's also being used in stunt photography, foreign language dubbing, and increasingly
in lieu of reshoots.
AI applications are often divided into two broad categories.
The first is generative AI, which helps artists and studios create things.
Then there is agentic AI, which helps them get things done.
A new AI tool called Kalea, for instance, reads scripts and generates 35-page coverage reports, along with historical comparisons and suggested theatrical release patterns,
the core duty of countless junior studio executives' daily work life,
though perhaps not for long.
Generative AI is, depending on your vantage point,
either the fun kind or the dystopic kind.
It's either going to empower artists or replace them, or do both.
But generative AI is also the category where all the creative exploration is happening, and
where filmmakers are learning on the fly how it can help them tell new stories and, they
believe, make better movies.
Shortly after Here wrapped up principal photography in April 2023, Hollywood shut down for several
months because of overlapping strikes by the Writers Guild of America and the Screen Actors
Guild.
Among the central issues in both labor disputes was how to protect the livelihoods of union
members from AI encroachment.
Even a year before the strikes, AI was still just a plot device for
sci-fi thrillers for most people in the movie industry,
not a pressing real-world threat.
Then OpenAI unveiled the first public version of ChatGPT in November 2022.
Suddenly, AI was an asteroid hurtling toward Los Angeles.
Any day, studio executives would start using ChatGPT to spit out screenplays,
eliminating all those pesky writers, and using text-to-video programs like
Runway's Gen-1 to auto-generate all the filmmaking elements that professional
artists get paid to create now.
Costumes, set design, cinematography.
And even though the guilds managed to extract strict limitations on AI use in their ratified
labor agreements, their victories felt pyrrhic.
I spoke with more than two dozen people across the industry for this article, and discovered
that while there's no shortage of AI optimists in the movie industry, they're often reluctant
to share that sentiment out loud for fear of seeming to side with the machines, or appearing
too sanguine about a technology that everyone agrees will cost some people their jobs.
There were also a couple of occasions when an eager early adopter scheduled
an interview, only to cancel at the last minute at the behest of skittish corporate overseers.
And yet, the reality of AI's adoption within Hollywood so far has been more muted and incremental
and considerably less dystopic than the nightmare scenarios.
What was billed as an industry earthquake has been more like a slow leaching into the
topsoil.
AI in Hollywood right now is like AI in Here.
It's everywhere and it's nowhere.
It's invisible and it's all over the screen.
There's too many people in Hollywood today who think that if you type movie and press
enter, you get a movie, says Cristobal Valenzuela, the co-founder and chief executive of Runway,
whose AI video generation engines are among the most widely used.
The moment you start using it, you understand, oh, it actually doesn't really work that well yet and it's full of flaws and it doesn't actually do what I want.
The critical limitation with generative AI tools for now is the absence of control.
CGI requires a factory line of hundreds of artists working one frame at a time, but you
control every freaking pixel.
You control every character, says Oded Grenot, a visual effects artist at a generative AI video startup called Hour One, who worked on the Oscar-winning team behind Spider-Man: Into the Spider-Verse (2018).
Making images with AI, Grenot explains, is like Russian roulette or a slot machine.
The front end requires just a simple prompt.
He writes, I want Spider-Man hanging from a building, and it generates it.
But that still leaves countless decisions up to the machine.
And you're stuck with the output.
What does the building look like?
How is he hanging? Upside down?
Sideways?
And that's a single still image, not a full sequence,
let alone a feature length film.
You can't expect James Cameron to prompt an Avatar scene, says Jo Plaete, Metaphysic's chief innovation officer and the lead architect of the AI tools used in Here.
It's just not going to work.
Or with Bob Zemeckis or Steven Spielberg. If you've ever made a movie with one of these guys, you know that they will want to change every pixel if they can.
Rather than play wait and see and
have AI thrust upon them in ways they couldn't control, Anthony and Joe Russo,
the directors of the previous two Avengers movies from Marvel Studios, hired a machine
learning scientist away from Apple to help guide how their production company, Agbo,
would use it.
There's a lot of ways that we are experimenting with AI right now, Anthony Russo told me.
We're not quite sure what's going to work and what's not going to work.
But he's sure that AI will figure somehow into how he and
his brother make the next two Avengers movies, both currently scheduled for
2026, even if it's only to help with brainstorming ideas and
working through them faster.
Over several months of talking to people
around Hollywood about AI, I noticed a pattern.
The people who knew the least about its potential uses
in the filmmaking process feared it the most,
and the people who understood it best,
who had actually worked with it,
harbored the most faith in the resilience
of human creativity, as well
as the most skepticism about generative AI's ever supplanting it.
There was a broad consensus about the urgency of confronting its many potential misuses
— tech companies skirting copyright laws and scraping proprietary content to train
their machine learning models, actors' likenesses being appropriated without their permission,
studios circumventing contractual terms designed to ensure that
everything we see on screen gets written by an actual human being.
I must have heard the phrase proper guardrails at least a dozen times.
But as the prolific Emmy-winning television director Paris Barclay, who has six episodes
of multiple shows airing this fall alone, put it, that's what unions are for.
The twilight sun over the Aegean Sea behind Tom Hanks was so golden and incandescent and
lit his profile with such cinematic flair that the composition was almost too perfect,
as though it could only be the product of advanced machine learning, and not, say, Zeus.
One week after my visit to Metaphysic, I was once again staring into a camera, and Hanks
was again staring back at me.
Only this time it was the real Tom Hanks, enjoying the last few days of a sailing trip
in the Greek islands.
He was tanned and relaxed in a dark open-collar polo, and unlike the last time I saw him,
he looked like a man in his late 60s, with clear frame glasses, tufts of short grey hair
barely peeking over the top of his head, and a tight white beard.
The nameplate at the bottom of his Zoom window read H.A.N.X.
I asked Hanks if it gave him any pause making a movie so reliant on AI tools at a moment
when so many of his colleagues in Hollywood were anxious about it.
He rejected the premise and characterized the work on Here as being in the grand tradition
of Lon Chaney and monster-movie magic.
This was not AI creating content out of whole cloth, he said.
This is just a tool for cinema, that's all.
No different than having better film stock or a more realistic
rear-screen projection for somebody driving a car.
For someone like Hanks, AI could enable him to take on roles
for which he had long assumed he was too old.
If it's possible for me to play a younger person than I am, I read stuff all the time
and I think, oh man, I'd kill to play this role, but I'm 68.
I'd kill to play Iago, but I can't because Iago's in his 20s.
I would do it in a heartbeat.
Though pity the poor 20-something actors shut out from playing Iago by an ageless
Tom Hanks.
When AI evangelists talk about its capacity to empower artists, this is the kind of thing
they mean, though Hanks' experiences have compelled him to contemplate some morbid implications.
They can go off and make movies starring me for the next 122 years if they want, he
acknowledged.
Should they legally be allowed to?
What happens to my estate?
Far from being appalled by the notion, though, he sounded ready to sign all the necessary
paperwork.
Listen, let's figure out the language right now.
Metaphysic's handiwork has already appeared in two major theatrical releases this year, Furiosa: A Mad Max Saga and Alien: Romulus, and in both cases the assignment was to resurrect a fan-favorite figure from an earlier film in the franchise who had been played by a since-deceased actor.
In Furiosa, Metaphysic enabled the director George Miller to bring back the Bullet Farmer by putting the face of Richard Carter from Mad Max: Fury Road onto the body of a living
actor.
In Alien: Romulus, the android from Ridley Scott's 1979 original Alien, played by Ian
Holm, who died in 2020, returns in updated form for several scenes.
Even though Holm's family blessed the use of his likeness,
public response was divided.
The movie was a hit, but
some viewers posted ethical critiques on social media.
Then in late August, the California State Senate passed long-gestating, SAG-supported legislation requiring estate consent for AI-generated replicas of dead performers.
When I asked one writer-director about the practice, he didn't even let me finish the
question.
Nope, nope, nope, nope, said Billy Ray, who wrote Captain Phillips (2013), co-wrote the 2012 big-screen adaptation of The Hunger Games, and spent his time during the strike hosting a studio-lambasting podcast.
It's completely insincere, dishonest filmmaking.
It's a lie.
The counter-argument I kept hearing from artists and from technologists is that filmmaking
is a grand illusion at its core, and we all consent to being tricked, we're paying to
be tricked, when we walk into the theater or turn our phone sideways.
When your movies require visiting multiple fantasy worlds, dreaming up new superpowers
and nastier villains, you
need to come up with lots of ideas, knowing that a vast majority of them will be bad.
This is the grunt work of making popular art, the failing part, and AI could prove to be a godsend for artists who need to fail fast and at minimal expense.
It's a bit like you have 5,000 phenomenally smart interns at your disposal 24-7 in all
time zones, says Dominic Hughes, the Oxford University-educated AI whisperer who left
Apple to join the Russo brothers.
Hughes switched industries, he told me, in part because he came to believe Silicon Valley
was getting AI all wrong.
Generative AI tools are unruly and imprecise.
Sloppy, he said, but too many companies were trying to use them for tasks where they couldn't
afford to be wrong.
Like self-driving cars or robot surgeries or whatever, he says. And we've been struggling with that for years.
Because if you don't wanna run over seven-year-olds in Kansas,
you've gotta be 99.999999% precise.
Whereas in a creative context, if I generate a bunch of elves and they have seven fingers, hallucinations in the parlance of the medium, it doesn't matter, because they're part of my iterative creative process of brainstorming
what elves could look like.
Generative AI, he has come to believe, is best suited for tasks where hallucination
is a feature, not a bug.
The sum of Hollywood's collective fears, says Bennett Miller, the Oscar-nominated director of Moneyball and Foxcatcher,
is automation, robots replacing humans, just as in the movies.
Miller spent five years making a documentary about the dawn of AI that
he describes as a time capsule about a moment
before a real loss of innocence in Silicon Valley.
The untitled film is currently in legal limbo.
In the course of making it, he got to know the original leadership team at OpenAI, including
Sam Altman.
A few years ago, they offered him access to a beta version of their forthcoming text-to-image tool, DALL-E.
It was astounding, Miller told me.
From the moment that I had an account set up to literally ten minutes ago, I've just
been all in.
This January, at Gagosian's Paris gallery, Miller opened his third show of ghostly, surreal images that evoke the grainy early days of photography, but were created with DALL-E.
In one of them, a silhouetted man looks up from the floor of a century-old theater at
a massive sea creature on stage, its body so large that it extends beyond the frame.
It's like realizing that you had locked-in syndrome,
because you really can navigate to extraordinary places. He fell in love with getting lost. The
mistakes, the wrong turns, the model's peculiar way of comprehending the human world, a bit Luis Buñuel, a bit Diane Arbus, led to all of his breakthroughs, which is how the best art often gets made,
by accident.
It's not just a change in degree of what's been possible before, it's really like a
change in kind.
And yet, as much as Miller's creative practice has been transformed by AI, it's still merely
a tool to him.
And the tool doesn't make you an artist, he says.
I just don't see it as a threat the same way others see it.
I'm not saying that there aren't going to be huge problems that emerge, but here's
the thing that I cannot comprehend.
Human artists being replaced.
The great wild card of AI is that it learns and gets better, and we can only guess at
its full capabilities.
Its performance so far, though, has also highlighted the gap still to be closed, especially with
text generation tools like ChatGPT, a lowest common denominator regurgitation machine whose
countless practical uses don't
appear to include writing screenplays.
Tom Graham, a Metaphysic co-founder and its chief executive, says he can see AI tools summarizing news articles and doing great explainer videos for corporate work. I can
see them creating generic or derivative stories that just kind
of seem like other stories.
But he adds, amazing storytelling is very, very difficult.
Of course, Hollywood is very much in the business of generic and derivative stories, in which
case why not completely outsource the hack work to AI?
The Writers Guild of America's labor deal forbids that, though count on studios to use
it for anything in the script development process that can save them money.
And some creative guilds are bound to be hit hard by the adoption of AI, especially in
digital animation, with its battalions of entry-level artists who
spend an entire year tweaking pixels on two minutes of film.
Many of those people could be working in AI soon, and fortunately for them, AI firms are
hiring.
We need to double our size really quickly just to keep up with the demand, says Alejandro Lopez, the chief
marketing officer at Metaphysic, which currently has about 120 employees working remotely in
more than 20 countries.
We are so behind.
But as anxious as the guilds are, Hollywood's history with paradigm-shifting technology
suggests that the folks on the studio side, the agentic side, have just as much to fear.
We went from renting movies to streaming them, and it's not filmmakers that go away.
Blockbuster goes away, says Bryn Mooser, a filmmaker and a co-founder of the streaming
channel Documentary Plus, whose new company, Asteria, is an independent
movie studio bidding to be the Pixar of AI.
Or think about the switch from film to digital.
Polaroid is the one that's gotta figure it out.
Kodak has to figure it out.
Photographers are still there.
Filmmaking is often described as the most collaborative form of art-making in the world, and Metaphysic was just one among many creative contributors to the trickiest scenes of Hanks and Wright as young lovebirds in Here.
The actors performed in full period costume, not in green suits covered with ping-pong
balls.
The makeup department taped back the loose skin around Hanks' neck and
pulled up his droopy ears so Hanks' AI-generated young face would match Hanks' real-life old head.
And of course, they had award-winning actors to deliver all the lines.
You still need the warmth of the human performance, Zemeckis told me. The illusion
only works because my actors are using the tool just like they use their
wardrobe, just like they'd use a bald skull cap.
It was the future of Hollywood, and it looked uncannily like its past.