On with Kara Swisher - On Creepy Fingers, Deep Fakes, and Fair Use with Getty Images CEO Craig Peters
Episode Date: July 1, 2024
Hands with six fingers, mouths with dozens of teeth, hairlines and limbs out of whack: We’ve all seen eye-roll-worthy generative AI images. But despite the prevalence of these easy-to-spot fakes, photography and video media companies like Getty Images are already feeling the impact of AI and trying to integrate the technology without compromising their core business. Kara speaks with Getty CEO Craig Peters about why he can promise users of the Getty AI Generator “uncapped indemnity”, whether he thinks licensing agreements with OpenAI and similar AI companies are “Faustian” deals with the devil, and how better standards to protect visual truth and authenticity could help the industry remain financially viable in the long run. Plus: how worried should we be about deepfakes impacting the 2024 election? Questions? Comments? Email us at on@voxmedia.com or find Kara on Instagram/Threads as @karaswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show is brought to you by Nissan Kicks.
It's never too late to try new things,
and it's never too late to reinvent yourself.
The all-new reimagined Nissan Kicks
is the city-sized crossover vehicle
that's been completely revamped for urban adventure.
From the design and styling to the performance,
all the way to features like the Bose Personal Plus sound system,
you can get closer to everything you love about city life in the all-new, reimagined Nissan Kicks.
Learn more at www.nissanusa.com slash 2025 dash kicks.
Available feature, Bose is a registered trademark of the Bose Corporation.
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform offers all the automation,
integration, and reporting tools that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca.
Hi, everyone from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher. My guest today is Craig Peters, CEO of Getty Images. You've seen Getty's work, even if you don't know it.
Getty photographers are on the red carpets at the Met Gala or at the White House or at the
Olympics, for sure. Pictures that are then used by media companies across the globe.
Getty also has a huge inventory of stock photos and video. It also has one of the largest archives
of historical images in the world.
You'd think that would be enough, but it's not these days.
Craig Peters has been heading Getty since 2019 and took the company public in 2022,
the same month, as bad luck would have it, that OpenAI launched DALL-E in beta,
which means questions about how to protect copyrighted images got a whole lot more urgent,
both for creators and for the public, questions that have not gone away.
Fittingly, our expert question this week comes from former Time, Life, and National Geographic photojournalist Rick Smolan.
Craig has been an outspoken voice on improving standards and protecting creators,
but his stance on AI has also shifted in the past few years.
So I'm looking forward to talking about how he sees the business model
for working collaboratively with this new technology in the future.
And at the end of the day, this is about transparency and trust and authenticity.
My biggest worry, of course, is if this is a replay
of what the internet did to media the first go-round.
We'll talk about that and more, including why they can't get fingers
right. Those fingers are creepy. Hi, Craig. Great to see you again. Great to see you as well. Thanks
for having me. Craig and I had breakfast in New York, I don't know, a couple months ago,
talking about these issues. I'm trying to get my arms around AI in all its aspects and have been interviewing a wide range of people. And obviously, photos are, of course, at the center of this in many ways. Even though the focus is often on text, photos and video are critical.
So you've been at the forefront of conversations around generative AI copyright issues, but Getty has also been collaborating and utilizing AI like most of media.
I want to talk about all that and where you see this heading for creatives and for the public.
But first, I think most people know or have seen the name Getty Images in a caption, but they don't have a good understanding of the business as a whole.
So why don't you give us a very quick talk about what it does?
All right. Well, first, let me start. We were founded by Mark Getty of the Getty family and Jonathan Klein back in 1995. So the company's been around for almost 30 years. We are a business to
business content platform. We provide editorial content, so coverage of news, sports, entertainment, as well as a really deep archive, and creative content. So this would be materials that would be used to promote your brand, your company, your product offerings, or your services. We provide that in stills, so photography, as well as video, as you highlighted. We provide that in what we call pre-shot,
which means it's available for download immediately. You can search our platform,
find content, and then download it and use it. And in some cases, we provide custom
creation, right? So we might work on an assignment to cover a certain event.
Just for full disclosure, you've done Vox Media events?
Correct.
And you then own the pictures, but Vox gets use of them, correct? Correct. You know, we represent over
600,000 individual photographers and over 300 partners. These are partners like the BBC or NBC
News or Bloomberg or Disney or AFP. And what we try to do for our customers, really, is deliver value across four elements.
So, again, we have editorial content. So, when you think about creating more efficiently, we'll have a team of over 100 individuals in Paris for the Olympics.
And we will allow the entities that are covering the Olympics to have access to high-quality visuals in real time.
Big events. Big events.
Big events or stuff you're contracted for, like you did my last code conference, for
example.
And then, but then there's also, again, what we call creative, which is, you know, also
referred to as stock imagery.
It's intended to be used for commercial purposes, to promote a company's brand, products, or
services.
You've run the company since 2019.
You took it public through a SPAC in July 2022, just about two years ago.
Correct.
Coincidentally, it was the same month OpenAI made its image-generating program,
DALL-E, available in beta form.
What did you think at the time?
Did you think end times, or did it not register with you all?
No, I think, I mean, first off, you know, generative AI is not something that is entirely new. It's something that we had been tracking
for probably seven years prior to the launch of DALL-E. We had first had discussions with
NVIDIA, with whom we now have a partnership, which we could talk more about.
Yeah, I will.
But we knew those models were in development.
We knew the capabilities were progressing.
So it didn't surprise us in any way, shape, or form.
And fundamentally, you know, our business is, again, providing a level of engagement, whether that's, you know, again, through a media outlet or through your, you know, your corporate website or your sales and marketing
collateral in a way that relies on authenticity, it relies on creativity, it relies on core
concepts being conveyed. Those are difficult to do. Yeah. So here's this thing coming, you see it,
obviously it's been around and they've been generating fake images for a long, long time,
absolutely. But did you think, well, look, there's six fingers,
it's kind of weird looking, you know, they always have that sort of like a Will Smith picture of him
eating spaghetti and it doesn't look great. Did you think this could be a real problem?
No, I don't think we saw it necessarily as a problem. I think we saw some of the behaviors
around it as a problem. So, we knew that, you know, hairlines and fingers and eyes and things like that were going to resolve over time.
I think what we saw, though, was ultimately these platforms that were launching these generative models were scraping the internet.
Correct.
And taking third-party intellectual property in order to create these services.
And fundamentally, we believe that creates some real problems.
Yeah, that's called theft.
They've done it for many years now.
I would say that there's kind of four things that go into generative AI.
So, GPUs, processing and power, and computer scientists.
And those three things, these companies are spending billions and billions of dollars around, right?
But the fourth item is something that they call data.
I happen to call content.
Right.
And it is something that they are taking.
And they are scraping and taking from across the internet.
And that fundamentally, to me, raised some questions about not only should they be doing that, is that allowed under the law, but it also creates real IP risk for end users of these platforms if they're using them commercially.
Yes, but they're hoping to get away with it. So, you pushed back pretty hard against AI initially. You banned the upload and sale of AI-generated images in your database,
but it seems you've done more than that; you seem to be embracing it more.
Talk about how your thinking has evolved.
Because at its base, they like to take things.
They've been doing it for a long time.
You know, I had a lot of scenes in my book where Google was just taking things
and then reformulating them, like the books fight that they had. They use the expression fair use quite a bit. Did your thoughts change on it? Or do you
think, well, this is going to happen again and I have to embrace it? I think we thought, we really
took a two-pronged strategy. And I don't think it's changed or it's evolved. I think it's been
pretty constant. But we are not Luddites. We believe technology can be beneficial to society.
We believe it needs to have some guardrails and be applied appropriately, but we wanted to embrace
this technology to enhance what our customers could do. But we also wanted to make sure that
we protected the rights of our 600,000 photographers and 300 partners and of our own, and try to get this to
be one where it was done more broadly beneficial, not just to the model providers, but to the
industries as a whole. And so, we pursued a strategy that's really twofold. So, one was,
we did not allow content into our database that was
generative AI. We did that for two reasons. One was that our customers are really looking for
authentic imagery, high quality imagery, that imagery that can really relate to an end audience.
And we think that that is still best done through professionals using real models, etc. You know,
accepting AI imagery that's been created from engines that are scraping the internet
has potential risk. It has potential risk from copyright claims. It also has potential risk
from people that have had their image scraped off of the internet generally, and it can replicate
those types of things.
So we don't want to bring that content onto our platform and then ultimately give it to our
customers and have them bear that risk. Because one of the things that we try to do for our
customers is we try to remove that risk. Right. Now, I'll get to your uncapped
indemnification, but start with the partnering with NVIDIA, and more recently you partnered
with Picsart, a digital creation platform. Let's start with the Getty AI tool.
Tell us how it works.
And just to be clear, you can't produce deepfakes of Scarlett Johansson or President Biden that would inadvertently use copyrighted images, like no Nike, no Swift.
So it's trained only on our creative content.
Again, that's creative content, not editorial content.
So to your point, we haven't trained it to know who Taylor Swift is or Scarlett Johansson is or President Biden.
That creative content is permissioned.
It's released.
So, we have model releases from the people portrayed within that imagery.
We have property releases, et cetera.
So, it is trained off of a universe of content where we have clear rights to it.
There's a share of revenues back to the content provider.
So, as we sell that service, we actually give a revenue share back to the content providers that this content was used to train.
As you said, it can't produce deepfakes.
It cannot produce third-party intellectual property.
So, if you search sneakers or type a generative prompt of sneakers, it will produce sneakers, but it's not going to produce Nikes.
If you type in laptop, it's not going to give you an Apple iMac.
It is high quality.
So you can produce models that are of very high quality in terms of the outputs that it produces.
You don't have to scrape the entirety of the internet.
This proved that.
And it is fully indemnified because, again, we know the source of what it was trained on.
We know the outputs that it can produce, and they're controlled.
And it ultimately gives a tool to corporations that they can use within their business day-to-day
without taking on intellectual property risk.
So the idea is to make a better version of this, but there's so many AI-generating image products out there already on the market.
DALL-E, Midjourney; OpenAI is currently testing Sora.
As you said, these people have billions of dollars put to task here.
So talk about your sort of unique selling point, obviously cleaner database,
less bias than internet-scraped versions. And you also say these images are backed by our
uncapped indemnification. So explain what you mean by that. But let's start with,
you've got a lot of competitors with tons of money and have a little less care than you do
about the imagery. You know, this model is intended to be utilized by businesses, businesses that understand
that intellectual property can carry real risks as they use it in their business.
Those risks can come with large penalties.
And what we are doing is providing a solution that allows them to embrace generative AI,
but avoid those risks and still produce a high quality output.
Other tools that are out in the marketplace do bear risk because they have, you know,
scraped the internet.
You can produce third-party intellectual property with them.
You can produce deep fakes with them.
And, you know, that ultimately we believe is a constraint on the broad-based commercial
adoption of generative AI.
This has been their modus operandi for years and years to do this, to sort of scrape and then later clean themselves up.
I think it's certainly move fast and break things.
And ultimately, we think that this technology should not just be thrust into the world and damn the consequences.
We believe there should be some thought.
We believe there should be some level of regulation.
We believe there should be clarity on whether it is, in fact, fair use.
And so one of the things that we did, one of the companies, I'm not sure if you mentioned it, is Stability AI, which launched Stable Diffusion.
I will.
We knew that they had scraped our content from across the internet and used it to train their
model. And we didn't believe that that was covered by fair use in the US or fair dealing in the UK.
And so we've litigated on that point and it's progressing through the courts.
I want to talk about that in a second. I want to put a pin in that. But talk about what uncapped indemnification means.
It means that Getty Images stands behind the content that the generative model produces,
and that you're safe to use it. And if there are any issues that occur downstream, which there
shouldn't be, we will still stand behind it. We will protect our clients.
So, AI gets stuff wrong all the time, though. Are you worried about promising indemnification?
I guess that's what you're promising now, but are you worried about the cost if you do have
copyright infringements using this technology? No, because, again, we know what it was trained on.
We know how it was trained. And we trained it in a way that it couldn't produce those.
I think, again, when you just scrape the internet and you take in imagery of children, when you take in imagery of famous people or brands or third-party intellectual property like Disney, Star Wars, or the Marvel franchise, you can produce infringing outputs.
You can produce things that are going to run afoul of current laws and potential future laws.
This was built purposefully from the ground up to avoid the social issues
that are being questioned as a result of this technology,
but also to avoid the commercial issues.
Right, right. Which was, these companies
are going to have the same thing they did with YouTube, and they eventually settled it out.
So I want to get to the lawsuits in a second, but the Picsart partnership is different.
It's a licensing deal. They are using your image database to train their AI. And these
licensing deals, you're not the only one. AP did a licensing deal with OpenAI. There's
all kinds of people who are striking deals.
Like OpenAI and Meta did one with one of your competitors, Shutterstock.
Let's talk about those first.
I think ultimately we believe that there are solutions.
These services can be built that ultimately do appropriately compensate for the works that they're utilizing as a service.
And we think, you know, the models that we've entered into that you'd mentioned, so NVIDIA
and Picsart, those are not a one-time payment to us, which is most of these
deals, or a recurring payment.
They're actually rev-share deals.
So as those models produce
revenues, they will generate revenues that flow back to the content creators. And, you know,
I think some of the deals that are getting done today are trying to, in essence, whitewash a
little bit the scraping that has historically been done and show that they are willing to work with
providers. In some cases, it's providing access to more
current information that's difficult to get in real time from scraping. But ultimately,
it's a small drop in the overall bucket. So talk about the lawsuit against Stability AI.
Where does it stand? And just again, you're claiming the company unlawfully copied and
processed millions of Getty copyright protected images, which are very easy to track.
I mean, more than text, much easier.
So tell us where that stands now, because, again, that's an expensive prospect for a small company.
Well, it's something that we invest in.
We think it's important.
We think we need to get a settled view on whether this is, in fact, fair use or not.
We don't buy into what
some of the providers are stating as settled fact, that it is in fact fair use. We believe that that is
up for debate. And we believe within the realm of imagery, we think there's a strong case to
be made that it is not fair use. And we think there are some precedents out there, most notably in the U.S., Warhol v. Goldsmith, which was decided at the Supreme Court, which highlights some case law that would say it is going to be questionable whether this would qualify for fair use.
So, we launched that litigation.
It is moving forward a little bit more at pace in the U.K.
So it is proceeding to trial.
I expect we'll likely be at trial end of this year, early next.
In the U.S., it's moving a little more slowly as things take time to move through the court system.
And I don't have a magic wand to speed that up.
Talk about the calculation of suing versus settling, essentially, and the costs. Because here you are, someone grabs something of yours, you get hit on the head, and you have to take them to court for hitting you on the head. But still, you've been hit on the head, right? As you mentioned, it's on The Verge, it's on all of the media platforms that you're visiting.
The one mistake I made in copyrighted imagery, I paid a lot of money.
It was a lot of money, and I never did it again, I'll tell you that.
So, our imagery is all over the internet, and it's being scraped, and it's being taken,
and it's being used to train these models. We want to get clarity if, in fact, that is something that they have rights to do. I believe it's good for the
industry overall. When I say the industry overall, I'm not just talking technology,
I'm not just talking content owners, but to have clear rules to the road. Is this or is this not
fair use? And then you know what you need to do in order to work
within that settled law. And that's what we're trying to get to. But let's get back to the issue
of content creation that you also brought up. You aren't the only one that's worried about losing
money. Every week we get a question from an outside expert. This week our question comes
from photojournalist Rick Smolan, the creator of the best-selling Day in the Life book series.
If you've seen the movie Tracks, he's the National Geographic photographer portrayed by Adam Driver.
Let's have a listen to his question.
Like many of my peers, my photographs are sold around the globe by Getty Images,
and we've always considered our relationship with Getty as symbiotic.
You know, we photographers risk life and limb to provide Getty with our images,
and in return, Getty markets our photographs
to buyers. But with the advent of AI, it's sort of starting to feel a little bit like a marriage
where one partner in the relationship is caught having an affair with a much younger, more
attractive person while assuring their spouse this is nothing more than a dalliance. We assume that
our images are being sampled to generate the synthetic images
that Getty's now selling. And many of us are very concerned that our work has become a commodity,
like coal being shoveled into Getty's LLM engine. And also, sort of like the cheated upon older
spouse, many of us believe it's only a matter of time before we're cast aside for this much younger and much sexier young thing called AI.
So my question to Craig Peters is whether this marriage is over, but you just haven't admitted to us yet and you haven't told us.
Or if it's not over, how does Getty Images envision its relationship with actual photographers versus prompt jockeys?
And how is Getty going to compensate us, your long-term
partners, fairly for our images in the future? Thanks. Okay, that's a really good question,
because I hear it a lot from photographers. They don't know who to trust. So how are you
compensating the creators whose work is being fed into this tool? Talk about how it works,
and can people opt out if they don't want AI trained on their work?
Right. Well, first off, let me start with, I think we don't train off of any of our editorial
content. And the coverage that he is talking about that he does, and that we are fortunate
enough to represent, would be considered editorial coverage. We believe that the job that he does is incredibly important to the world.
The images that they produce, the events that they cover, the topics that they cover are incredibly important.
Right behind my desk, Kara, there is an image that was taken in Afghanistan.
And it was taken by one of our photographers, a staff photographer, a guy named Chris Hondros, who happened to lose his life in the pursuit of covering conflict around the world. And that's incredibly important. And I have it there because it reminds me that we have a very important mission, that the individuals who are producing these images are incredibly important, and that they are taking very real risk. And we value that, and we never want to see anything that we do undermine that or misrepresent it. And we think it's important to the world
going forward, and we think it's very important that that persists. So, we do not train off of
editorial content. This model was not trained off of editorial content. And we believe that
that type of work has a level of importance
today and will have a level of importance going forward in the future. And we think that importance
only increases as we see more and more deepfake content produced by these generative models
that are trying to undermine true knowledge and true truth.
So, the compensation system stays the same?
So, the compensation, so again,
we have creative imagery. This would be, you could also refer to it as stock imagery,
where we have permissions from the individuals that are contained within those images. We have
the right to represent that copyright broadly, and that is what's training these models.
These models, when they produce a dollar's worth of revenue, we give a share of that
revenue back to the creators whose content was used to train that model, okay? That is the same
share that they would get if they licensed an image, so one of their images, off of our platform.
So, on average, Kara, we give royalties out around 28% of our revenue.
We invest a lot in our platform and sales and marketing and content creation, etc.
But on average, we're sending 28 cents of every dollar back for the imagery that we license.
And you can think the exact same thing will happen when we license a package of, you know, someone subscribes to our generative service.
So, who owns the images created by these AI tools? I know it sounds like in the weeds.
That's a very good question. It's, again, unsettled law because we're in new
territory. So, in the U.S. today, you cannot copyright a generative output. I think over
time that might evolve to where it depends on the level of human input and involvement in that process.
But right now, it is not something where you can copyright this imagery.
But the stance we take for our customers that are using our generative tool is that, since they gave us the generative prompt, they are part of the creation.
And therefore, we don't put that imagery back into our libraries to resell. It is essentially a piece of content
that is, quote unquote, owned by the customer that produced it using the generative model.
The question of copyright is one that we can't convey. That has to be done by the U.S. Copyright
Office. But we essentially take the position that the individuals or the businesses that are using
that service own the outputs of that service.
They own the output, and it would be a similar thing. But right now,
whether they can be copyrighted is unclear. Even if so, is the prompt writer the copyright holder?
Correct. And I think, again, I think that will evolve over time to one where I think the level
of human input will have a bearing on whether that is, in fact, copyrightable or not.
It will all be. It's a question of how much you get paid along the line.
So you've pushed back against the idea that these licensing agreements are Faustian, though, meaning basically you're making a deal with the devil.
Shutterstock's deal with OpenAI is for six years, while Red Teamers at OpenAI are testing Sora.
Are you worried that you're all doing what we in the media did with, say, the Facebooks of the world, the Googles of the world, who now sucked up all the advertising money and everything else? What is your biggest worry when you're both doing these deals and suing and just waiting for clarity, I assume, from courts and copyright officials?
I believe that many of the deals that are happening today are very Faustian and go back
to ultimately some of the mistakes that were made early on in digital media and or in social
media, making trades for things that weren't in the long-term health of the industry.
What we are trying to do very clearly in the partnerships that we are
striking and in the deals that we are striking and in our approach is we are trying to build
a structure where the creators are rewarded for the work that was included, and fairly so, right?
I told you, I gave you my four components of generative AI: GPUs, processing, computer
scientists, and content. And that fourth right now is getting an extremely small stake. Some
companies are doing no deals, like the Midjourneys and such, and they are providing
no benefit back to the creators. We want to see a world where content owners, content creators share
in, ultimately, the economics of these. And I think if you equate it, again, back to a Spotify,
I mean, you know, the labels and Spotify can argue over whether the stake of that is fair,
and that's a commercial negotiation. But ultimately, it has created a meaningful revenue stream back to artists that support artists in the creation.
And that's what we'd like to see flow out of generative AI.
That's the world that we believe is right.
Ultimately, the creative industries, Kara, represent more than 10% of GDP in the US and the UK and many developed economies around the
globe. And we think, you know, the economic impacts of undermining that contribution to GDP
can't be net beneficial to society. No.
From an economic standpoint, if you ultimately just allow them to be sucked dry. I use the
example, you know, everybody's watched the movie The Matrix. And, you know, ultimately, what were humans in The Matrix? The humans in The Matrix
were the batteries. They were used to create the power to fuel the AI. Well, right now,
I feel like the creatives are already being consumed as batteries. The media is already
being a battery. We are already, you know, not being valued. We're basically just being used in order to create, you know, one of the core pillars that is necessary to create generative AI. It's just being stripped and taken. And I don't want to see a world where there aren't more creatives. I don't want to see a world where creatives aren't compensated for the work that they do. And I don't want to see a world that doesn't have creativity in it.
We'll be back in a minute.
Vox Creative.
This is advertiser content from Zelle.
When you picture an online scammer, what do you see?
For the longest time, we have these images of somebody sitting crouched over their computer
with a hoodie on, just kind of typing away in the middle of the night.
And honestly, that's not what it is anymore.
That's Ian Mitchell, a banker turned fraud fighter.
These days, online scams look more like crime
syndicates than individual con artists, and they're making bank. Last year, scammers made
off with more than $10 billion. It's mind-blowing to see the kind of infrastructure that's been built
to facilitate scamming at scale. There are hundreds, if not thousands, of scam centers
all around the world. These are very savvy business people. These are organized criminal rings.
And so once we understand the magnitude of this problem, we can protect people better.
One challenge that fraud fighters like Ian face is that scam victims sometimes feel too ashamed
to discuss what happened to them. But Ian says one of our best
defenses is simple. We need to talk to each other. We need to have those awkward conversations
around what do you do if you have text messages you don't recognize? What do you do if you start
getting asked to send information that's more sensitive? Even my own father fell victim to a,
thank goodness, a smaller dollar scam, but he fell victim. And we have these conversations all the time. So we are all at risk and we all need to work together to protect each
other. Learn more about how to protect yourself at vox.com slash Zelle. And when using digital
payment platforms, remember to only send money to people you know and trust.
Intellectual property and standards.
You've called for industry standards around AI.
What do you think those standards should include? Watermarking? Copyright protections? Obviously, the more confusing it is,
the better it is for these tech companies who, this is me saying this, could give a
fuck about creatives. They do not care. They do not care. They never have.
Well, let me start. I mean, there are four pillars that we think are important to,
with respect to visual generative AI. One of
those is transparency of training data so that you know what's being trained and where they're
training. And that's not only important to companies like Getty Images and creatives
that are creating works around the globe, but that's important as a parent or, you know,
someone with children, like, you know, if I post this image to this social media platform,
is it being used to train outputs in these models?
So that's one.
Permission of copyrighted work.
So if you are going to train on copyrighted data, you need to have permission rights in order to do so.
We want these models to identify their outputs in a way that is persistent.
Now, that's likely to involve watermarking. It might involve
some other standards like hashing to the cloud. But the technology is still in development. But
we want that to be at the model level. So, the model actually does the output and it's persistent.
And then we want to make sure that model providers are accountable for the models they put out into the world. In essence,
there's no Section 230, which kind of protected platforms back in the original Digital Millennium
Copyright Act, which basically gave them, you know, indemnity for many claims for content that
was posted on their platform. We want to see a world where model providers have some level of
accountability, aren't given a government exemption and indemnity. We want to see a world where model providers have some level of accountability, aren't given a government exemption and indemnity.
We want to see people be accountable.
Who should set up the standards in your opinion?
Politicians, industry leaders?
There's business advocacy groups like C2PA, which is the Coalition for Content Provenance and Authenticity, which includes Google and OpenAI and your competitors Adobe and Shutterstock.
And who should be responsible for enforcing them?
Is it a combination of courts and regulations?
I'll come back to C2PA.
But I think it's going to be a combination of regulators, of legislators, and industry,
which is typically how all standards evolve over time.
I think like happened in the EU AI Act, there's a general requirement.
But the definition of how that requirement actually
manifests itself is still yet to be put forward. And that's going to take collaboration from the
technology industries and from, you know, creative industries in order to get to something that
works. I think the C2PA, to be very specific, is a foundation, but there's still some flaws. It is relying upon the end user of
these platforms to identify the generative output versus the model itself. That's what happened with
YouTube initially, as you were quoting. Exactly. So, if you say, if I'm going to use a generative
model and I'm a bad actor, I can say it's authentic and label it as such. Well, that's a point of
failure in the C2PA model as it exists today.
It's already been exploited, and we want to see that closed again.
So that's why we want to see these models provide the output at the model level versus at the user level.
It also is one that they're trying to shift the cost of implementing C2PA to the individual versus the model provider. There are going to
be costs if you're producing outputs and then you have to store the output in the cloud,
the original, and kind of hash it so that it can be referenced and referenced back to.
I don't think that's a reality where individuals are going to be making that investment.
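[Editor's note: the "hash it so that it can be referenced back to" mechanism Peters describes can be sketched in a few lines. This is an illustrative toy, not the actual C2PA manifest format, which is a signed, embedded structure; the field names and `example-model` label here are hypothetical. The idea is simply that the model provider fingerprints the output at creation time, so any later edit breaks the match.]

```python
import hashlib

def make_provenance_record(image_bytes: bytes, generator: str) -> dict:
    """Fingerprint a generated image at the model level.

    Illustrative only: a real C2PA manifest is a signed, embedded
    structure, not a bare dict. The claim is asserted by the
    provider, not left to the end user.
    """
    return {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,   # which model produced the output
        "claim": "ai-generated",  # attached at the model level
    }

def verify(image_bytes: bytes, record: dict) -> bool:
    """Re-hash the content and compare: any alteration breaks the match."""
    return hashlib.sha256(image_bytes).hexdigest() == record["sha256"]

img = b"\x89PNG...fake image bytes"
rec = make_provenance_record(img, "example-model")
print(verify(img, rec))          # True: content matches its record
print(verify(img + b"x", rec))   # False: edited content no longer matches
```

The point Peters makes about cost follows directly: someone has to compute, store, and serve these records for every output, and he argues that burden belongs with the model provider rather than the individual user.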
So you're bearing all the costs?
I believe the model provider should bear that cost.
So do I. Again, we are a model provider in partnership with NVIDIA and Pixar and others.
Sure.
That you can just kind of throw it out as that we've got a solution.
We really want to prevent deepfakes.
We really want people to have fluency in terms of the imagery that they're looking at.
And we think C2PA can evolve to that.
And we're going to work hard in partnership with the members of C2PA to try to get it there.
And ultimately, what I don't want to see, though, you go back to some of the original mistakes of digital media and such, is I don't want to see it be a big cost shift from technology
industries into the media industry, where what I mean by that is I'm hearing some, well, we're
never going to be able to identify generative content. No, it's very hard. Yeah, it's very
hard. It's a hard problem. We can invent these tools, but we can't invent a way to, you know,
to identify them. So what we want the media industry to do is we want them to identify all of their content as authentic. So they have to buy all new cameras.
They have to, you know, create entirely new workflows. They have to fundamentally change
how they bring content onto their websites. That investment, we want, you know, the media to make.
And I don't think that's the right solution.
I'm not saying that Getty Images won't invest in technologies in order to better identify, track, and identify provenance of content. But I don't think kind of saying, well, the solution for the tools that were created that create all this generative content and are producing the problem of deepfakes, you know, the creators of those tools don't have
any accountability for creating solutions to identify. I think we have to have a balanced
solution. Absolutely. It does play into the hands of bad actors, for sure. That's right. You know,
there's a really interesting book that I read a couple years back, Power and Progress. And it's
an interesting study where I think we've always been adapted. I'm a pro-technologist. Me too. I started my career out in the Bay Area.
But we always assume that just technology for technology's sake will be beneficial to society.
That's what we've been fed, and that's what we have been taught.
And it looks at technology over the years, and whether it is innately beneficial to society or whether society needs to put some level of rules around it in order to make it net beneficial.
And you go back to Industrial Revolution, and basically, the first 100 years of the
Industrial Revolution were not beneficial to society as a whole.
They benefited a small number of individuals,
but largely gave society nothing more than, you know, disease. And, you know, it took
things like regulations on limiting the work week, regulations on child labor. It took certain
organizations like labor unions, et cetera, to ultimately get that technology leap forward to be something that was broadly beneficial to society.
And I think that is something that we need to think a little bit more about is not how do we stop AI.
I'm not trying to put this technology in a box and kill it.
I think it can be net beneficial to
society. I think we need to be thoughtful about how we bring it into society. And I don't think
racing it out of the box, you know, absent that thought is necessarily going to lead us to
something that is net beneficial. Yeah. You know, you're going to lose your libertarian
card from the boys of Silicon Valley, just so you know, you just lost it. We'll be back in a minute.
Do you feel like your leads never lead anywhere and you're making content that no one sees and it takes forever to build a campaign?
Well, that's why we built HubSpot.
It's an AI-powered customer platform that builds campaigns for you, tells you which leads are worth knowing, and makes writing blogs, creating videos, and posting on social a breeze.
So now, it's easier than ever to be a marketer.
Get started at HubSpot.com slash marketers.
Support for this podcast comes from Anthropic.
You already know that AI is transforming the world around us,
but lost in all the enthusiasm and excitement is a really important question.
How can AI actually work for you?
And where should you even start?
Claude from Anthropic may be the answer.
Claude is a next-generation AI assistant, built to help you work more efficiently without sacrificing safety or reliability.
Anthropic's latest model, Claude 3.5 Sonnet, can help you organize thoughts, solve tricky problems, analyze data, and more.
Whether you're brainstorming alone or working on a team with thousands of people.
All at a price that works for just about any use case.
If you're trying to crack a problem involving advanced reasoning,
need to distill the essence of complex images or graphs,
or generate heaps of secure code,
Claude is a great way to save time and money.
Plus, you can rest assured knowing that Anthropic built Claude with an emphasis on safety.
The leadership team founded the company with a commitment to an ethical approach that puts humanity first.
To learn more, visit anthropic.com slash Claude.
That's anthropic.com slash Claude.
Support for this show comes from Grammarly. 88% of the work week is spent communicating. Grammarly's AI ensures your team gets their points across the first time, eliminating misunderstandings and streamlining collaboration.
It goes beyond basic grammar to help tailor writing to specific audiences,
whether that means adding an executive summary, fine-tuning tone,
or cutting out jargon in just one click.
Plus, it surfaces relevant information as employees type,
so they don't
waste time digging through documents. Four out of five professionals say Grammarly's AI boosts buy-in
and moves work forward. It integrates seamlessly with over 500,000 apps and websites. It's
implemented in just days, and it's IT-approved. Join the 70,000 teams and 30 million people who trust Grammarly to elevate their communication.
Visit grammarly.com slash enterprise to learn more.
Grammarly, enterprise ready AI.
Speaking of deep fakes before we go,
I do want to talk about the election and political images.
Getty has put editorial notes on doctored images
like the ones of the Princess of Wales, Kate Middleton,
that were sent from the Royal Palace.
You said earlier you don't use editorial content
for your AI database,
but your competitors like Adobe and Shutterstock
have been called out for having AI-generated
editorial images of news events in their databases,
like the war in Gaza,
which were then used by media companies and others.
And as we said before,
their databases are also being licensed
to train some of the biggest AI models.
So, you know, I think the expression in software is garbage in, garbage out.
How worried should we be about synthetic content,
basically fake or altered images impacting the outcome of elections,
including ours in November,
and those images then circulating back into the system
like a virus. I think we should be very worried. I think it ultimately, it goes back to some of
the things that why we're really focused in on some of those, you know, the premises of the
regulatory elements I talked about. We want to see a situation where we can identify this content.
We can give society as a whole fluency in fact, right?
If you don't have fluency in fact, you don't in fact have one of the pillars of democracy.
So I think it is something that we should be very worried about.
I think the pace of AI is moving so fast, the ability for it to permeate society across social networks and
those algorithms, I think is something that we need to be highly concerned about. And I think
we need industries and governments and regulatory bodies to come together in order to mitigate that
risk overall. I'm hopeful that we can get there. It's a tough problem, and it's going to take
a lot of people putting energy against it to solve for it.
What can news or should news agencies do about this specific kind of visual misinformation?
And what responsibility do the agencies have? Obviously, you are not doing that,
but your competitors are.
Well, I think ourselves, I think the AP, I think Reuters, I think other news agencies around the world, I think we have to continue to make sure that our services and our content is incredibly credible and ultimately lives up to a standard that is beyond reproach. And I think then we need to work with the technology industry and again, regulatory bodies and legislative bodies on solutions that address the generative side of
things. Because there are bad actors in this world and there are models that allow people to create
not only misinformation with respect to elections, but some very other societally harmful outcomes, like deepfake porn, that, you know, we need to address.
And these tools are readily available.
What's your nightmare, and how quickly can you react when, you know, the Kate Middleton thing took a few days?
It's not the biggest deal in the world, but it was still problematic.
Like, that took a while.
So, how do you— it's expensive to do this.
It is.
To really constantly be picking up trash that everybody's throwing down.
You know, it's very important that, you know, we spend a ton of time, first off, you know,
vetting all of our sources of content. It's not something we just take any piece of content and
put it on our website and then provide it out, right? There is a tremendous amount of vetting that goes into the sources of content that we have. So, like the NBC News or the Bloombergs
of the world or the BBCs of the world, but all the way down to the individuals that we have
covering conflicts in Gaza or in Ukraine. And making sure that we know that the provenance
of their imagery is authentic.
We know the standard of their journalism is high.
That takes a tremendous amount of effort, time, and investment in order to do.
And we need to make sure that we don't in any way, shape, or form reduce that investment.
We need to increase that investment in the face of generative AI. We need to tell that story.
We need to make sure that our customers have, who are the media outlets like yourself, know that that is, you know, something that we're doing.
So what's your nightmare scenario?
Give me a nightmare.
Nightmare scenario is that we ultimately continue to undermine the public trust in fact. We want debate and ideas and different opinions,
but at the same time we can't feed into an undermining of what is real and what is authentic and what is fact. And those two things to me sometimes get conflated, and I think that's a
false conflation. I think they can both stand
as true. We can have debate. We can have different points of view, but we can't have different facts
and we can't have different truth. And ultimately, I think that's the long haul. It's not a particular
event. It's a continual undermining of the foundation of that truth.
Well, it is certainly easier to be a bomb thrower than a builder. One of the issues is the fall off
in economy for all media, not just your company, but everybody. The costs go up, the revenues go
down. In the case of media, they suck up all the advertising. You see declines in all the major
media who are fact-based, I would say.
Getty's value dropped by two-thirds since you went public.
Now, I know SPACs have had all kinds of problems.
That's a trend in the SPAC area, which is how you went public.
But even with all your AI moves, your shares are down.
Do you think it's because you're taking a more cautious approach than your competitors?
And what turns it around?
I don't want you to speak for all of media, but you're running a company here and you're trying to do it
a little safer and more factual, and that's not as profitable as other ways. How does that change
from an economic point of view? And I think all of media has this challenge going forward.
Well, I can't speak to specific drivers of stock price or not.
What I can tell you is that we aren't taking a more conservative view or cautious view.
We are taking a long-term view.
We are taking a view that we are going to spend money and invest to bring services to the market that we think are helpful to our customers.
We're going to invest in lawsuits that we think underpin
the value of our content and information that we have.
We are going to continue to invest in bringing amazing content
and services to our customers.
And we think over time that is something that will be
to the benefit of all stakeholders in Getty Images,
our photographers and videographers around the world,
our partners around the world,
our employees and our shareholders.
And that's what we are focused in on doing.
And I think there is a backdrop within recent trading
of the entertainment strikes
and the impacts of those against our business,
some of the macro impacts against large advertising and some of the end use cases where our content gets utilized.
Yeah, your customers are also suffering too, yes.
Yeah, it hasn't been a great, you know, environment for media companies in the recent 18 months.
I think, you know, the average ad revenue.
I would say 10 years, but okay.
Yeah, yeah.
But more recently, it's been more acute, right?
You're seeing streaming services kind of cut back.
You're seeing ad revenues down double digits.
So there are some challenges out there.
But you aren't seeing that for the tech companies.
No, you're not.
You're not.
Again, that goes where I think we need to have some rules to the road as we approach
generative AI that we already learned some of the things that maybe we didn't
do 20 years ago or 25 years ago. But, you know, our business is about a long term. Getty Images
has been around, you know, almost 30 years. And we've focused in on making sure that the
foundations of copyright and intellectual property remain strong throughout. And so,
that's what we'll continue to do going forward. The hits keep on coming, and I don't mean good hits. Then I have a final
question for you. Are they ever going to get fingers right? What is the fucking problem?
Well, look, I think if you use our model, and I know you've had some time with it, and hopefully you'll get some more time with it as well, I think you'll see that we largely do.
We've trained on some things to try to make sure that
those outputs are of high quality. The reality is it does matter. You said kind of crap in,
crap out. Well, I would argue the exact opposite. So quality in means quality out. And so we've
addressed those issues with the model. Those aren't going to be the hard things.
Fingers and the eyes. Fingers, eyes, hair, hairlines, smiles and teeth. I think we've largely solved those items,
you know, but what we're always going to struggle with is solving for, you know, the blank page
issue that customers have and how do you really get to authenticity. I think one of the most
interesting things to me, and I know we're short on time, is how generative AI comes directly in conflict with another massive trend of the last 10 years plus, which is authenticity.
And how does authenticity, you know, how am I portrayed?
How am I viewed?
Am I positively represented?
How am I brought to bear?
And what do I see in the content that's presented to me?
You know, body representation, you know, gender representation.
How do those things come?
And I think that is still a world that that trend is not going away.
And I think it's a world that Getty Images helps our customers address.
Although I'm still going round after round with Amazon about my fake books and the fake Kara Swishers who look corny.
They just look corny.
Strange Kara Swishers are all over Amazon.
Go check it out.
Anyway, I really appreciate it.
This is a really important discussion for people to understand what's happening to companies like yours.
Well, I appreciate you making time.
And thank you again for having me.
Thanks so much, Craig.
On with Kara Swisher is produced by Christian Castro-Russell, Kateri Yochum, Jolie Myers,
and Megan Burney.
Special thanks to Kate Gallagher, Andrea Lopez-Grizzato, and Kate Furby.
Our engineers are Rick Kwan and Fernando Arrudo.
And our music is by Trackademics.
If you're already following the show, you're authentic.
If not, be careful that you're not being used as a battery.
Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine,
the Vox Media Podcast Network, and us.
We'll be back on Thursday with more.