Your Undivided Attention - AI Is Moving Fast. We Need Laws that Will Too.
Episode Date: September 13, 2024

AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week's episode of Your Undivided Attention, CHT's policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA
The CHT Framework for Incentivizing Responsible AI Development
Further Reading on Air Canada's Chatbot Fiasco
Further Reading on the Elon Musk Deep Fake Scams
The Full Text of SB1047, California's AI Regulation Bill
Further Reading on SB1047

RECOMMENDED YUA EPISODES
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn
Can We Govern AI? with Marietje Schaake
A First Step Toward AI Regulation with Tom Wheeler

Correction: Casey incorrectly stated the year that the US banned child labor as 1937. It was banned in 1938.
Transcript
Welcome to Your Undivided Attention.
If you listen to this podcast regularly, then you know that we spend a lot of time talking about the harms caused by the runaway tech industry.
Addiction, polarization, shortcuts being taken by AI companies that threaten all sorts of aspects of humanity.
And the number and complexity of those harms are going to keep growing faster,
which is why the only way we can get ahead of the problem is through new laws that incentivize
responsible innovation.
And there's real appetite for better laws around technology right now.
But we have to make sure that the laws we pass today don't just respond to the moment,
but set our society on a path towards real accountability.
And so today, Sasha Fegan, executive producer of Your Undivided Attention,
is going to be co-hosting with me to talk to Casey Mock, who leads our policy team,
to talk about a new framework that Center for Humane Technology is launching
to try to incentivize AI companies to build their products safely from the ground up.
Casey and Sasha, welcome to Your Undivided Attention.
Thanks for having me, Tristan.
Hi, Tristan, and hi, Casey.
All right, so Casey, before we dive into this policy framework,
I want our listeners to know a little bit about you.
Can you tell us a little bit about your background
and what brought you to CHT?
Yeah, sure. I'm an attorney by training.
I've been in the world of tech policy for about 10 years.
I was in-house at Amazon, where I led tax policy for the company nationally.
And I've also served two governors, a Republican governor, as well as a Democrat governor, most recently, Governor Tim Walz, who most listeners may know now as the current Democratic vice presidential nominee.
So, Casey, one of the things I think is really interesting about your background is your experience as a former Amazon lobbyist and seeing how the game is played from the other side.
And therefore, what would it take to change the actual behavior and incentives of these massive companies?
Do you want to talk about how that experience kind of informed your, you know, coming to CHT?
Absolutely. So, you know, we at CHT often talk about how these companies are trapped by the incentives that they face. And it's no less true of their policy teams. The policy teams of companies like Microsoft or Amazon or Google or Meta are simply given the mission to obtain and preserve regulatory and legal flexibility for the business, full stop. That's it.
So they are unable, as lobbyists or advocates, to go to policymakers and ask for anything else.
So that means that they cannot and will not show up and, in good faith, say,
we think that this is a good idea, if it ultimately does not maximize shareholder value for the companies that they represent.
That really limits the options for these lobbyists to provide good-faith input
that meaningfully improves the policies for the bulk of Americans.
Casey, there's been a kind of growing bipartisan consensus around the idea of developing
a new approach to regulating tech.
It started in social media and it's really moving into AI.
We looked at this a little bit in our recent episode on tech lobbying.
Of course, the Department of Justice is in court at the moment
with an antitrust case against Google for having a monopoly in advertising.
So that antitrust approach is one way to tackle the problem. But you're talking about
a different approach, which is liability. So can you just explain a little bit more about why
liability is the best way forward when it comes to AI? Liability can mean a lot of things. It can mean
criminal liability. It can mean civil liability. It can mean, you know, liability for infringing
on someone's copyrights. But what we're really talking about here is something more
akin to negligence or products liability. So if you make something defective, if you manufacture a defective product, you should be on the hook for harms that occur to someone because they used that product or they interacted with that product. That's something that's been true in American commerce for, you know, 110 years or so, roughly. That principle, though, has not really been extended to the technology space. Social media and technology companies have generally been let off the hook on that
because they've claimed that they've offered a service rather than a product. We're looking to change
that because ultimately what artificial intelligence systems are and what social media systems are
in principle is that they are very complex manufactured products and they can cause real harm
out in the world and they can hurt people. And what we've found through our experience with
social media and increasingly so with artificial intelligence systems is that people are hurt
and they're left holding the bag and actually even businesses that use these products to serve
their own customers are left holding the bag when something goes wrong. So I guess just to play
devil's advocate here, why do we need new federal laws to intervene in all this? I mean, why can't
courts adapt or use current laws to regulate AI if it's just another product? So what our objective
here and what we hope to do is provide clarity for courts so that they can apply some of these
older principles and law that have existed for a long time and just apply it to this new
technology. There's some technicalities that really need to be adjusted for, some things that are
novel about this technology, but really we've been here before as a nation, right? Like we've
confronted complex new technologies that courts have had to adapt to. The difference is that in the past, as a nation, we've done that adaptation organically and slowly. And because of the
pace at which this technology is developing and being deployed into society, we don't actually
think that we can afford for courts to figure it out on their own. And so our proposal here is to
have Congress and state legislatures help the courts along by providing a little bit of clarity
on how these older principles should be updated for today. The difference between AI and social media and what we normally think of as a product is its kind of ephemeral nature. So is that a sticking point in establishing it as a product? It's not something you can kind of touch and feel
and buy in a store. You know, fundamentally, the policy problem that products liability is meant to solve has nothing to do with whether the product is a tangible one or not.
It had once been the case that if you bought something, you knew the person who made the thing, right? You
had a relationship with the artisan who made the thing.
And suddenly in the 19th century, that was no longer true, right?
You go and buy a new fancy automobile in the early 20th century.
It had parts from all over.
It was a complex machine that nobody knew how it worked.
And this new body of products liability law was designed to protect people
and actually encourage them to buy these sorts of products
and encourage manufacturers to invest in creating these sorts of things
so that they would feel safe in doing so
and that there would be a willing market to do so.
And that has nothing to do with whether the product is tangible or not.
And so I think it's important to remember that these concepts in law were originally meant to give people some confidence that they could buy and interact with products without knowing whose fingerprints were on them when they were originally made
and that they could trust their families with them and, you know, trust their children with them
and so on and so forth.
So, Casey, just to ground this for people, what's an example of a harm from AI that's happening that this would change with liability?
So there's a list of things that this approach could help with.
There's things like non-consensual deep fake image abuse.
There's customer service chatbots that provide incorrect guidance.
There's, in fact, a case of this that's been in the news.
In the last year, there was a Canadian airline that had a customer service chatbot that gave a customer
incorrect guidance on the airline's bereavement ticket policy. The customer followed that guidance, and the airline refused to honor it. Right. And I remember that there was a court
case that found that the airline was liable for what the chatbot did, even though they probably
didn't build it. Yeah. And in that case, that airline was probably left holding the bag. If that
happened at scale, that's an unfortunate occurrence for that airline. And it's probably a disincentive
for similarly situated businesses to adopt technologies, right?
That would be a brake on adopting that technology.
Like, for example, there was the case of the,
I believe it was in Hong Kong,
the case of a finance worker who was scammed out of $25 million
while they believed they were interacting
with the CFO of their company,
and it turned out that it was a deep fake video.
There was a scam going around that involves videos and images of Elon Musk, scamming people out of money in similar ways. Now, the scam artists themselves in these cases are probably very difficult for law enforcement to track down. And in many cases,
the money may have disappeared. These are very foreseeable potential harms that the creators of these
tools know are coming. And so what liability would do is encourage the developers of these
technologies to take these reasonable alternative pathways, and if they do not take those
reasonable alternative pathways and these harms result, then people who are harmed can
hold the developer accountable for not having taken those reasonable alternative pathways in
these cases. Yeah, and to me, the Canadian airline case and the scam artists are slightly different. No one was deliberately trying to change the bereavement policy for the airline, whereas the scammers are obviously acting in a criminal way to scam people out of money. So, you know, they need to be punished too, not just the developers who made the technology. So in products liability law, the concept of misuse of a product has long been established as a consideration in whether a manufacturer is liable for a harm that occurs. If you think about
a hammer, right? A hammer is meant for hammering nails. It's not meant for bashing people over the head.
And so the tool manufacturer
that makes the hammer
certainly should not be liable
if someone buys one of their hammers
and beats someone over the head with it.
That's a misuse of that technology.
And in no way, shape, or form are we saying
that AI developers
should be on the hook
for every foreseeable misuse of their product.
However, having said that,
there are reasonable alternative
designs that may be possible. Now, if it were clear that the developer was going to be held accountable by the airline for giving the customer wrong guidance, the developer would have taken better care to make sure that the airline got a product that was more representative of their
policies in that case, right? And that's what we mean by shifting the incentive, because in that
case, the developer would have had a stake in whether the airline got it right when dealing with
their customers through their customer service chat bot.
So how about the nudification example?
Someone makes an image generator and they don't intend for it to be used to generate nude images of people, say using the face of a classmate in a classroom.
How should AI developers be liable for that and what's the alternative pathway that they have?
A good analogy is actually to cigarettes and secondhand smoke.
So cigarette manufacturers and tobacco companies knew for a long time the dangers of inhaling secondhand smoke, not just the dangers of smoking for the individual smoking, but also the dangers to other people around the person smoking. And it would have been unrealistic, if I came down with a lung problem from being surrounded by secondhand smoke, for me as an individual to try to sue everyone in the smoking section of every restaurant I had ever been to whose secondhand smoke I inhaled, right? Like, that's an unrealistic way for me to be made whole for the health problem that I'm suffering.
For people who are suffering and have been victimized by non-consensual deep fake images,
they're faced with a similar situation today.
It's unrealistic for folks to track down who the perpetrators of these things are.
However, what is true is that we know that the companies that are making these tools
are very well aware of how these tools are being used and what the outcomes are.
In this way, it is a lot like secondhand smoke.
And there are cases and examples from a few decades ago of tobacco manufacturers being held accountable in a court of law for health damage that occurred to individuals because of secondhand smoke.
How would this approach actually change the behavior of the image generator?
So what's the alternative that they could do?
They could, for example, put watermarks in all images that are generated by AI.
So at least we know for sure that they were generated by an AI,
maybe even include the date and time.
So there's some kind of attribution for when this occurred.
Also the product warning you're describing, so these image generators would all have a warning, maybe saying: warning, these can be used to make explicit images of people.
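To make the watermarking idea above concrete, here is a minimal sketch of what that kind of attribution could look like, assuming a Python generation pipeline and the Pillow imaging library; the metadata field names and the save_with_provenance helper are hypothetical, purely for illustration, and not part of any framework discussed in the episode.

```python
# Hypothetical sketch only: one way a generator could stamp attribution
# metadata (AI-generated flag, model name, date and time) into its output.
# Assumes the Pillow library; the field names and save_with_provenance
# helper are illustrative, not an existing standard or API.
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_provenance(image: Image.Image, path: str, model_name: str) -> None:
    """Save a generated image with attribution metadata in PNG text chunks."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # flag the output as AI-generated
    meta.add_text("generator", model_name)  # which model produced it
    meta.add_text("generated_at", datetime.now(timezone.utc).isoformat())  # date and time
    image.save(path, pnginfo=meta)


# Usage: tag an image; the blank canvas stands in for a real generated image.
img = Image.new("RGB", (512, 512))
save_with_provenance(img, "output.png", model_name="example-image-model")

# Reading the attribution back out later:
print(Image.open("output.png").text)
```

A plain metadata tag like this is easy to strip, so a real deployment would likely pair it with a more robust invisible watermark or a cryptographically signed provenance manifest; the sketch is only meant to show the kind of date-and-time attribution being described.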
What are some examples of how this would change the behavior?
I just want to close the loop for people so we see how it actually solves the problem.
So maybe in a way that might be unsatisfying,
I actually don't want to answer this question
because I actually think this is an advantage of the approach
because we're not over-determining the outcome here.
By not pointing to one particular way for developers to solve this problem, we're actually encouraging innovation.
So we're leaving it up to these very smart people
who have created this amazing technology
to solve this problem themselves.
And that's the beauty of the approach.
This is an innovation-friendly approach.
It actually incentivizes the developers
to innovate around safety however they see fit
and however suits their business model.
We are not over-determining the outcome
of what it should look like in this case.
And it's why we think it's a very pro-business
and pro-innovation approach.
Yeah, what I really like about this liability approach
is it circumvents some of the big blockers
that we've had in trying to deal with the harmful effects
of social media.
In the past, what we've seen is that the tech corporations
will argue that regulation is a First Amendment issue.
It's a freedom of speech issue whenever someone points
to the harms that they create.
But this sort of circumvents that.
It moves aside and it takes a new direction.
Yeah. And in a sense, we can expect that they will make the same argument, that artificial intelligence systems are a form of code, and so to regulate that code is an unconstitutional prohibition on corporate freedom of speech. And so it's again with this experience in mind that we've crafted this legislation to not talk so much about content or code, but to talk instead about the duty of care that these companies owe the rest of society when they put a product that they design and manufacture into the stream
of commerce. And it's based on our experience working on social media and the challenges that
these companies bring. What I love about your approach, Casey, is it's about specifically accounting
for the ways that this has been challenged in social media. And you're saying, for AI, let's make
that different before we get too far down the line of becoming entangled and entrenched with
this technology. That's right. And in fact, we think that this will change
the way that the companies approach additional regulations that may be proposed in DC
were this to pass. Because if companies were to start operating knowing that, if they put a product out on the market that hurt someone, they could be held financially accountable for the harm that results, they will then come in better faith to DC. They will arrive at
policy conversations about more highly technical aspects of policymaking with a different orientation
than they have now. Because right now they're free to say no to everything. But suddenly if the
tables turn and they're accountable for some things, they're going to want to use that opportunity
to create something that actually provides a shield for them. And that will cause them to
come to the bargaining table in better faith than they come to the bargaining table today.
So you spoke about accountability. How do you actually make companies accountable?
What's the enforcement mechanism for a framework like this?
And secondly, how is this not just going to be a massive cash cow for lawyers to get really rich?
You know, it seems like a lot of work for lawyers and that they might be the big winners or something like this.
So to the second point, we've crafted the policy anticipating this criticism.
And for one thing, we have created a rebuttable presumption that the duty of care of creating a safe product and providing adequate warnings can be satisfied fairly easily by an AI developer that complies with certain fairly easy, light-touch filing requirements.
It's definitely not the sort of onerous FDA-style like pre-clearance type processes
that you've seen proposed elsewhere.
The first part of your question was about how the enforcement mechanism works.
We think that it's important to not over-rely on what's called the private right of action.
What's the private right of action? What does that mean?
A private right of action is the ability for an individual with an attorney to bring a lawsuit when they are harmed.
So that is a part of the proposal, but it's only under certain circumstances, complemented by the ability for the government
to bring an enforcement action by themselves.
And the reason that the complementarity here is important
is, say, for the situation with a non-consensual deep fake intimate image.
It may be the case that the victim
doesn't want to further traumatize themselves
by bringing a suit in that case.
And instead, it would be better for the attorney general
or whatever the government enforcement agency is
to deal with that suit rather than the victim
in that case to have to bring that lawsuit by themselves.
And so we think that it's important to build flexibility into the enforcement mechanism process
so that either individuals can bring an action by themselves under certain circumstances,
but at the same time, the government can bring actions either to protect individuals or mass
actions, which can also be particularly effective at changing business incentives, as we saw in
the case of big tobacco.
Right.
I mean, I can see the logic of that, but I can also feel arguments and pushback to this approach
from people within the AI industry
who are going to use the argument
that something like this
will stifle innovation
and set America back
and make them beholden to frivolous lawsuits.
So how do you respond to that?
The threat of, or the concern of frivolous lawsuits
has been around for as long as there's been
products liability law.
So I think that this concern
about a flood of litigation
is always going to be an argument
that's made. It's been made since the beginning of time, but that doesn't mean that it's going
to materialize. But what about the argument that just the paperwork and the onus of duty of care
is going to stifle innovation and slow down U.S. progress? I would encourage folks to actually read our white paper, which should be available on the CHT website and provides more details about the requirements, the documentation requirements that we would recommend. But they're very, very minimal in the sense that they're just to fulfill the duty to warn people about the potential
dangers of the products that are being manufactured. Think of it by analogy to if you're at the
toy store or in the toy section of, let's say, Target or Walmart or something like that,
and you notice that there's like labeling that, you know, certain toys are appropriate for certain
age groups or may present a choking hazard. Like, that's really an analogy for the level of
detail that we're talking about here. We're not talking about, you know, reams of documentation.
It's actually pretty simple. And in line with what most companies already offer, we're just
talking about standardizing it and making them accountable for making sure that it actually
tells the truth. Well, yeah, and we've seen how the industries have made this argument
constantly in the past. You know, in 1966, Congress was set to pass the National Traffic and Motor Vehicle Safety Act, and the auto industry made the same argument. They penned an op-ed with the title, "Tough Safety Law Strips Auto Industry of Freedom." And as a result, cars today aren't less innovative. They're just safer. You know, fuel economy standards and zero-carbon mandates have just led to more efficient batteries and internal combustion engines, and the same arguments were made with respect to the Telecommunications Act of 1996. This is a go-to argument of industry to say you're going
to stifle innovation, but really it's led to just safer and better innovation.
That's absolutely right. I mean, we've even seen it in other areas of law too, where, for example,
federal law that banned child labor, which I believe didn't actually pass until 1937, was
opposed by businesses on the grounds that it would bankrupt them. I believe that there's letters
in the Smithsonian from bakeries claiming that. And, you know, last I checked, I can still go down
on the street and get a bagel. And there's no children who were harmed in the making of that
bagel.
One of the frames we've talked about on this podcast in the past is the complexity gap
that as technology advances, always faster than law, it creates a whole new range of
complexity, of types of harms and risks, way faster than we can define protected classes in law to protect us from them.
And you've spoken to me about how the liability-based approach sort of changes the default responsibility of the technology makers, who are then aware of the new range of complexity of outcomes that they're creating, making sure that even before the law is aware of them, they're starting to take approaches to bend their behavior in a different direction.
A critique of this approach might be, isn't it going to take too long for this to have an
effect?
Yeah.
Right?
Like if this, let's say Congress writes a law based on this framework, aren't we going to
have to wait for a lawsuit to come to meaningfully change company behavior here?
And I can tell you, having been on the inside of one of these companies, that's not going to be the case. Because what we're trying to do here is empower
attorneys at these companies in a way that they are not currently empowered to change the business
practices and change the safety practices on these design and deployment teams. Right now at the
largest tech companies, there's a bit of a power hierarchy internally, even on the legal teams
with their relationships with the business teams
and with the design and engineering teams.
As listeners may be aware,
antitrust has been in the news a lot recently,
particularly with Google.
And I can tell you that if someone on the antitrust legal team
has an objection to a practice
that the business team is doing,
the business team typically does not keep doing what it was doing.
They change their behavior.
And right now, that is not happening
when it comes to safety with AI
because there's no accountability.
And so if a law that will hold companies
potentially accountable for harms that they create
becomes law, it doesn't have to wait
for a lawsuit to be effective.
It's the specter of a lawsuit
that will empower the attorneys
to go into these meeting rooms
with business teams
and wag their fingers,
and say, no, you can't do that anymore.
That's unsafe. That's going to cost us
a whole lot more money than it's going to make us.
You're trying to create a deterrent.
Correct. In a lot of ways,
a lot of safety at these companies is simply PR.
It comes out of their PR budgets.
And what this legislation will do is actually make it not PR, but a core part of their business budget and a core part of their legal budget, and part of protecting their business. And that's a super important shift.
and like ensuring that, say, an entire safety team can't just be fired overnight, right?
Because it's not just a PR stunt to have the safety team anymore.
They're actually crucial to protecting the business longer term.
You know, that is so important what you just said, Casey.
It reminds me of a story Frances Haugen told me, which is that the people who were working on, I think it was civic integrity at Facebook, who were trying to basically prevent genocides and things like that and, you know, all the harms, the budget that was funding that team I think came from Facebook's antitrust budget. Basically it's like
we don't want to get regulated so we need to prove
that we're doing all these things
and so the way you can justify spending that
is by proving that you're putting in the work
and that's the point here is like let's not have
optical lipstick of proving with PR
that we're doing things for safety.
Let's talk about meaningful changes
that will actually deter
the worst things from happening.
That's exactly it.
The way in which this technology
is being deployed throughout society,
the speed at which it's being adopted,
the scale at which many of these harms
are already occurring,
means that this isn't like the Ford Pinto
where you're having a few scattered car explosions
here or there,
and Ford was infamously doing the calculation
of, you know, what is a recall cost
versus what are we going to pay out in settlements?
And for a decade,
they determined that the cost of a settlement
was cheaper than doing a recall, right?
Like, that was a somewhat different scenario for the first decade than what we're already facing here, because of the scale, the already wide-scale adoption.
So, you know, if there's the specter of individual lawsuits,
suddenly those start to add up,
not just in the aggregate,
but then people like state attorneys general
and the attorney general of the United States,
the Federal Trade Commission,
these entities will start to take notice.
And then we have the opportunity and the basis in law,
the clear basis in law for them to take mass action,
where we have something approximating the tobacco settlement, right, that really changed the behavior of tobacco companies, for example.
And what we really need in order for that to stick
and for that to really be possible is clarity in the law first.
The specter of that threat will change behavior,
but we need it to be possible for an individual who's harmed
to be able to successfully sue a company right now.
And that's not really clear that it's possible yet.
And that's actually kind of depressing.
So, Casey, there's been a lot of press coverage recently about a specific new AI bill in California, which is SB 1047, proposed by Senator Scott Wiener.
And a lot of people in the AI safety space have signed on to it,
some of the biggest names.
And as of the time of this recording,
it's actually on the governor's table waiting to be signed.
Do you want to just talk about what the difference is between how this bill is approaching the problem and how our liability approach is approaching the problem?
Yeah.
So, you know, whereas we started from identifying the ways
in which the law is ill-equipped to handle the novelties of this technology,
Senator Wiener's bill, and much, if not most, of the legislation that's out there,
starts from identifying a specific category of risk
and associating that with some specific attributes of the technology
and working backwards from those.
What are the kinds of catastrophic risks that they were focused on?
So in the case of Senator Wiener's bill,
the risks that they're focused on are mass casualty events and harms to critical infrastructure.
So these are like cyber attacks that take down the grid,
take down airlines.
And they've got to be like $500 million worth of damage or something, right? So these are big events.
Correct. So, you know, what the authors of this legislation did is that they identified a risk, you know, catastrophic risk to critical infrastructure or mass casualty events, and they determined that if you were a developer of a certain category of AI and this risk results from your AI, you should be responsible for that. That seems reasonable, and a lot of hard work has gone into developing this legislation, but the bill did not
establish a clear bar for the companies to meet to say, this is the duty of care that you
have to satisfy to others. They worked backwards from the harm and said, you're liable for this harm. And ultimately, I have some concern that that's going to be litigated. And again,
with our experience with social media legislation, it may prove to be a fatal weakness to that
bill. So now that we've written this federal liability policy, Casey, what's the next step? Where does it go from here? So what we have now is a framework for incentivizing
safe and responsible artificial intelligence. The next step would be to have legislative text.
And the next step would also be to have bipartisan co-sponsors. And so our mission for the next month, while Congress is still in session this year, as well as the start of 2025 when we have a new president and a new Congress, will be to secure those sponsors, as well as to start to get that text drafted.
And what's your feel of that?
Do you think this is something which will get bipartisan support?
Have you had any initial conversations yet?
We've had a number of conversations already on Capitol Hill about the idea,
and we're really grateful to have had a lot of interest in the idea.
I think a lot of that interest stems from the fact that, first,
this appeals to people's basic sense of fairness and justice.
Second, that this feels long overdue because of America's sense that these same companies, many of whom are social media companies as well, have been unaccountable for so long.
And third, that this is truly a pro-business approach.
I mean, I cannot emphasize enough that overwhelmingly the businesses that are going to be interacting with artificial intelligence are going to be deployer businesses.
And they want clarity and certainty that they're not going to be left holding the bag when something goes wrong, like what happened with that Canadian airline, for example. And so policymakers on
both sides of the aisle have found that a really attractive idea so far. From my spot in Australia,
one of the things that really appeals to me about the liability approach is that it could actually
impact the way AI is rolled out globally. I mean, particularly because we've got US, mainly California-based
companies producing AI, that has downstream effects for the rest of the world if those companies
are held to a liability, a duty of care standard, in the U.S. Is that right?
Sasha, that's a great question. I mean, the U.S. is such a large market. That's absolutely true.
And it's unlikely that, say, OpenAI is going to pick up sticks if this law passes, you know, to pick up their toys and leave. They're just not going to do it. So what we do here
echoes across the world, for sure.
Let's imagine a future in which this policy passes through Congress, and it's signed into law.
What will that mean for the race to roll out that's happening in AI right now?
Well, first, I think what we will see is rather than a race to bring the most exciting product to market as quickly as possible,
we may start to see different kinds of races emerge, like different quests to differentiate amongst products.
So think about the car industry.
We've already spoken about the car industry a little bit.
Brands like Volvo, for example, market themselves on their safety record.
Right now, there is no incentive really for an AI company to try to be the Volvo of AI.
And so I think the first and foremost thing this will do is actually incentivize a different kind of innovation, so that we're not only leading the world in producing innovative products, but actually producing safety innovations.
Some of these thorny technical questions about, say, watermarking, for example,
or even questions about, like, training data quality,
that right now there's not a lot of incentive for the bigger companies to dedicate a whole lot of resources to, I think that will change once we have a liability policy in place. So I think that those are two of the big changes that we would hope to be able to see here
that are realistic. Yeah, that would be amazing. Yeah. And then third, I think we could see much
more responsible adoption of the technology. You know, right now it seems that businesses are wavering a little bit on adopting the technology because, in part, they're not quite sure how to leverage the technology to get productivity gains, but also they're just a little uncertain about how to incorporate this stuff safely into their business processes or their offerings to customers.
You know, part of this is the fact that typically these big companies, Google, Meta, Microsoft, Amazon,
have immense bargaining power. And if you're a smaller medium-sized business, you don't really
have much ability to negotiate on terms and conditions with Microsoft or Google. And so again,
like right now, if I'm a medium-sized business owner, I would be really reticent to use Copilot or use Gemini in my business for fear that, A, something could go wrong and reputationally
damage my business, and B, that I would be unable to be made whole because I'm getting
unfavorable terms and there's nothing in the law to protect me. And so I think if we have
that clarity in the law, we'll get this uptick in adoption, responsible adoption,
that I think policymakers want to see
that will make America jump out ahead
and stay ahead of the rest of the world
when it comes to adoption and usage of this technology.
So, Casey, some critics might argue
that focusing our attention on liability
is only going to deal with the short-term risks,
but not the long-term risks.
What do you say to that?
I think what I would say is
this may seem simple and deceptively small,
and it may seem overly geared
towards current-day harms.
But really what this is doing is it's setting a floor to rebalance the scales, not just for today, but to set us up for success tomorrow.
If we put this in place today, it will not only enable us to address harms that are already happening, that are already materializing, but it will also give us a tool to deal with harms that we haven't even contemplated yet that may materialize tomorrow or ones that we can foresee that will happen tomorrow or further down the line.
Meanwhile, it will slow down the race, give us all a chance to catch our breath, and let policymakers take their time to develop more detailed, comprehensive regulations, and most importantly, it will bring the biggest companies to the bargaining table in good faith in a way that we can come up with productive regulations that work for everyone.
Casey, thanks so much for joining us today and for all the hard work the policy team is doing.
Thanks for having me.
You can read more about the liability framework we've been talking about on the CHT website at HumaneTech.com.
And don't forget, we'll be doing a mailbag episode soon.
That is, you can send us your questions and ask us anything.
Please record your questions for me and Aza and then send them to us at undivided@humanetech.com.
You know, we are very aware that there are way more risks that AI generates than are captured by, you know, the laws that we have on the books.
Going back to the E.O. Wilson statement we reference all the time on this podcast, that the fundamental problem we're facing is we have paleolithic brains,
medieval institutions, and laws, and then this accelerating technology that's creating more
issues faster than those laws can keep up. You know, it's not illegal technically to rank information based on how morally outrageous it is and how much division it causes. Causing more inflammation and division in society isn't illegal. Adding a beautification filter to kids', you know, identities online isn't illegal. It's harmful in more subtle ways. And the law is not very good at capturing the subtle risks. So I want people
to know that we know that. And this is not about a liability framework that's going to cover
all of the risks posed by AI because there's so many. This is about how do we build momentum
with a very clear first step, an existing legal doctrine that we can expand to deal with the
direct injuries and direct harms of the AI systems that we're seeing today and then build from
there. Your undivided attention is produced by the Center for Humane Technology, a non-profit
working to catalyze a humane future. Our senior producer is Julia Scott. Josh Lash is our researcher
and producer, and our executive producer is Sasha Fegan. Mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday. And a special thanks to the whole Center for Humane
Technology team for making this podcast possible. You can find show notes, transcripts, and much more
at humanetech.com.
And if you like the podcast,
we'd be grateful
if you could rate it on Apple Podcast
because it helps other people
find the show.
And if you made it all the way here,
let me give one more thank you
to you for giving us
your undivided attention.
