Your Undivided Attention - Here’s Our Roadmap to a Better AI Future
Episode Date: April 2, 2026

In order to shift the incentives of AI — the trillions of dollars in investment, the race to geopolitical power and dominance — it's not enough to simply understand the problem; we need real action. That's why CHT is proud to release "The AI Roadmap," a report outlining seven core principles for how AI should be built, deployed, and governed, each grounded in real, implementable solutions across three domains: norms, laws, and product design. In this episode, Camille Carlton and Pete Furlong from CHT's policy team explore the concrete steps we can take today to get off the default path and forge a better AI future. You can read "The AI Roadmap" on our website: humanetech.com/ai-roadmap

RECOMMENDED MEDIA
The AI Roadmap
The Human Movement

RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
A Conversation with the Team Behind "The AI Doc"
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
Transcript
Hey everyone, it's Tristan Harris.
And this is Aza Raskin.
Thanks so much for coming to listen to Your Undivided Attention.
So many of you will have seen the AI doc by now.
That's the new film that we just did an episode with the filmmakers.
If you haven't seen the film, there's still plenty of time to go see it in theaters.
It's everywhere all throughout the U.S., and soon to be hopefully internationally.
And Aza and I are really excited about the work that this
film can accomplish. Because in essence, what we're trying to do is create clarity that will create
agency, that if everyone knows that everyone else knows, that there's a problem up ahead and the way
that AI will land us in a future that nobody wants, if everybody can see that clearly, then we can
collectively put our hand on the steering wheel and steer to a different future. And I think the question,
and the thing that the film leaves kind of unresolved, is how do we steer? How do we get to that better
future with AI? And that's what we want to talk about today. What are the actual steps that we can
take today to prevent the worst case scenarios? You know, there's a spectrum of futures available to us.
We may not be able to get to perfect. There's going to be some damage. And also, we can still steer.
There's still time for that. And just to say, like, if you haven't yet seen the film,
I think one of the things the film does very well is that it scoops everybody up. It really represents
all sides, not just, like, fairly, but strongly. If you are
really excited about the benefits that AI can bring, the film not only talks about those,
but points out that most people don't go far enough in the benefits. And same thing on the downsides.
It really highlights the downsides, highlights the AI race to deploy that is creating those
catastrophic risks. And then points out that actually most of the risks that people think about
aren't big enough. And what I'm excited about for this episode is that when everyone sees that
the direction that we're going is one that we're not going to want to live in, whether you are
like a teenager who's not going to have a livelihood growing up, whether you're a teacher who's having
to watch their kids sort of have cognitive decline, all the way up to if you're the head of a major
corporation. Seeing the direction that this goes, I think gives us the opportunity to choose a different
path.
One of the main problems is that this feels too big for any one person to solve. And Aza, you kind of
speak to this kind of scale metaphor of like, okay, the problem is this, you know, trillion
dollar machine advancing AI as fast as possible on the most reckless path. And there's a question
of like, how would we change that? Imagine the scale. What's something on the other side of the
scale that's of equal weight? So imagine, I just want everyone to, like, close your eyes for a second.
Imagine there's a scale, like a balancing scale. On one side you see the problem. And so this is
like trillions of dollars of investment going into making uncontrollable, inscrutable AI. There's sort of
the race for the one ring, geopolitical power, like forever dominance.
That's pulling the problem side of the scale down.
And there on the other side, just imagine there's you hearing about this problem.
And what is your reaction going to be?
Well, it's going to be like denial, despair, deflection.
And so what is the only thing really that we could imagine that can shift those trillions
of dollars of incentives?
Well, it's all of humanity.
It's kind of like we're going to need a huge
human movement that can balance out the scales.
Now, it all starts with, you know, first of all, just not feeling overwhelmed, right?
That's kind of like one of the first steps, that there is another path, but it would take a lot of
people doing a lot of things.
The second is that we have to break the trance of inevitability.
If on a subconscious level, you just feel like it's all over and it's just all going to
be inevitable and there's nothing we can do, the problem with that belief is that it is
complicit in enabling that bad future to happen.
And so that change from believing something is inevitable and impossible to change,
to believing that something is just extremely difficult
and perhaps the hardest thing humanity ever has done,
that gap is critical because it means there's still something to do.
And so when I think about what is going to fight back against that,
it's something the scale of humanity and human values writ large
protecting the things that we care about.
And so when you gray scale your phone and turn off notifications,
that's the human movement.
When you see graffiti on an ad in New York City
for an AI product that no one actually needs,
that's the human movement.
When you see people gathering together for a dance party
and you check your phones at the door,
that's the human movement.
When you see people saying,
I'm going to learn a language
instead of falling into brain rot doom scrolling at night,
that's the human movement.
And it's not just that, obviously,
it's about how we activate in the world.
So when employees threaten to resign
because they don't think that AI should be used
for mass surveillance, or because things aren't being done safely, that's the human movement. And when you see, you know,
countries like Australia, Denmark, Spain, and France all banning social media for kids under 15 and 16,
and I believe several U.S. states now banning social media for kids under 15 or 16, that's the
human movement. And already nine states have introduced bills to restrict AI personhood so that human
rights are for humans, not for protecting AIs. Forty-five states have specifically addressed
sexually explicit deepfakes. And these laws send a huge
signal that non-consensual exploitation using AI tools is a serious offense and we have to actually
take action on it. So there's actually a lot that's happening and most people just don't see it.
I want everyone to just stop for a second because at least for me, I feel something different
in my body. I feel like hope. I feel energized. And I just want you to hold on to that feeling
because it's like that is the feeling that's going to enable us to make sure that AI, the way it's
being rolled out, actually isn't inevitable. And so this can be everything from
like if you're really good at doing international coordination,
track two dialogues, bringing countries together.
It's not most people, but if you are, that's part of the human movement.
But, you know, it's also tiny little things.
Like you're sitting on an airplane and you put down your phone
so that you can smile at the baby in the seat behind you, and they giggle back.
Like, that's also part of the human movement.
This is about taking back what it is to be human,
but not in the sort of abstract sense,
but in the like everyday tangible sense all the way up to the international sense.
Exactly.
And of course, what we're going to need ultimately are laws that are passed, because
you have to bind these multipolar traps of if I don't do it, I'm going to lose to the other
one that will.
But we're already seeing that happen.
We're seeing several states work to pass bans for legal personhood for AI, meaning AI should
be a product, not a person.
Human rights are for humans.
And we're seeing already US states move in that direction.
So this is not something that's hypothetical.
We're seeing liability laws for AI being advanced in
several states. We're seeing age-appropriate design codes. If you actually just got the iOS
update on your phone, you'll notice when you open up, I think, Anthropic (it happened to me yesterday),
you have to verify that you're above the age of 18. We now have age-gating on every Apple device.
That was something that many of us have been working for over a decade to make sure that happens.
So stuff that was hypothetical, that was, hey, we're going to need a big tobacco trial for
social media and the engagement model. Aza, you and I were talking about that in 2013. It's actually
happening. So it took
13 years for social media
to go from, this is never going to happen,
this is impossible, to now it's finally
turning around. Now, AI looks
impossible, but just zoom back to where you
were 13 years ago. It also felt impossible
then. And so there's a
really important thing that everyone can do to be part of
the human movement, at least in the U.S., and that
is the midterm elections are coming up.
We want everyone to
research the politicians
that you're going to vote for and
start demanding that they
take stances that are about, well, being part of the human movement,
fighting back against the encroachment of AI on our livelihoods,
against surveillance, in every way that things encroach on us.
That is one of the most important things that you can do.
We have to make AI go from, you know,
not even on the top five list of priorities for politicians
who are looking to get elected, to, well,
imagine that their phone literally never stops ringing.
And it's, you're not going to get my vote
until I know that you are going to stand for a pro-human future,
whether that's how you're pushing on data centers,
whether it's how AI is getting deployed in schools,
whether you're protecting people's jobs and people's livelihoods
in the face of all this AI disruption.
Yeah, exactly. Are you pro-human?
Are you pro-machine?
It's very simple.
And the AI doc, I think, makes that clear
that the default path is not a pro-human future.
And if everybody sees that, we can collectively choose,
both in small ways and big ways,
you're already seeing mass boycotts of OpenAI's product and unsubscriptions because of the drama that went down between the Department of War and Anthropic, where the AI models would have been used for mass surveillance and autonomous weapons.
I think Anthropic's downloads surged by like 250% or something like that.
If millions of people switch who they're paying for, we are voting with our dollars.
And if businesses do that, if church groups do that, if families do that, if communities do that, that can have a really big impact on
which world we're heading towards.
One of the challenges, as you know, Tristan, of thinking about AI is that AI is automation
of intelligence, and intelligence has shaped and touches absolutely everything about our world.
Everything is touched by intelligence, so everything is touched by AI, which means that the scale
of the problems, it's just, it's too much to hold in one head.
And, you know, to say the phrase, like, you know, if the world is pretty good for machines,
is to start to invoke, well, that we've sort of seen this movie before.
And I wanted you to talk a little bit about this framing that we've started to brainstorm
about actually the way that we can stop from living in the dystopian movies we've all seen.
Yeah, so let's just rotate the entire problem from the lens of,
haven't we seen this movie before?
Like Elysium or The Hunger Games, you have this handful of trillionaires who live above the law
while everyone else basically works and is kind of in poverty and kind of fighting
and eating each other.
And you see that we have WALL-E, the future where the fat humans are sort of caught
in a doom scrolling loop, you know, getting more brain rot, their attention spans being harvested,
or Idiocracy, where, you know, you've dumbed down the population until there's nothing left.
So one way to think about solutions is we need laws and we need norms and changes in culture
that prevent each of these bad movies.
So instead of saying what laws we pass, imagine there's just, like, a No WALL-E law.
So it's a set of laws that prevent the mass attention
economy, brain rot, shortening attention spans, et cetera.
It means AI and technology that are designed to protect human vulnerabilities and protect
our freedom of mind, not to prey on and exploit them.
And imagine, instead of Her (you know, Her is a movie about AI companions where Joaquin Phoenix
falls in love with his AI),
well, we can have a Prevent Her law, and that includes no anthropomorphic design, liability
for suicides and these kinds of problems.
And where AI is designed as the outcome of that law to strengthen human capacities and
build deeper human relationships, as opposed to redirecting people from their human relationships
and deepening their relationships with AI.
Or think about the No Blade Runner law, maybe the No Replicant law.
And that says, your legal rights are reserved for you and other humans and for things in nature.
And that when human beings launch their chatbots or agents into the world,
that the human being that did it or the corporation that did it are responsible.
They're held legally liable.
Yep, and that AI agents should have driver's licenses.
So if you're an unlicensed AI agent that's wreaking havoc in the world,
it'd be like a car that's swerving through the highways with no license plate on it.
Well, I'm sorry, you're going to go to jail.
And there's some simple other laws like no Big Brother or no 1984.
It's pretty simple.
Don't create mass ubiquitous surveillance that can go all the way down to decoding every aspect of someone and de-anonymizing them.
We need laws that prevent that kind of surveillance.
or the No HAL 9000 law, from 2001: A Space Odyssey, you know, open the pod bay doors, HAL,
and he says, I'm sorry, Dave, I can't do that.
We're actually building the AIs that are currently disobeying commands and avoiding shutdown,
and we need laws that say you cannot ship AIs into sensitive infrastructure
that we can't verify are controllable.
And so this is not a partisan issue.
There's essentially people who want the anti-human machine and don't mind if we basically
disrupt everyone else's lives, and there's the people who want a pro-human
future. And that's what we want to invite people into. There is a movement for a pro-human future,
and we can all get behind preventing a bunch of these bad movies, from Terminator to Elysium, to WALL-E,
to Idiocracy, to Replicants, to Big Brother, and to HAL 9000. Just about now, people are starting to think,
like, okay, that's wonderful at the highest level, but what specifically, concretely, can we do?
What kind of laws can we pass right now?
No one solution can possibly solve a problem this big.
It's going to take an ecosystem of solutions and an ecosystem of people.
The forces that are moving to make this right have to exceed the forces that are moving for the anti-human machine future.
And here I sort of want to turn it over to some of the specifics of what our policy team at Center for Human Technology has been working on.
Thanks so much, Aza. Hi, everyone. I'm Sasha Fegan. I'm the executive producer of Your Undivided Attention.
And I have with me here, Josh Lash, from the podcast team who's making his podcast debut.
Hi, Josh. Hey, Sasha. Thanks so much. I'm really excited to be here, and I'm really excited for this episode.
You know, we've been trying to think of the best way to present some of the internal work that our policy team here at
CHT has been doing behind the scenes, coming up with ideas for actions, concrete actions that we can take
right now to meet this moment in AI and to kind of respond to the challenge that the film
throws down for all of us to build a movement to steer the direction of AI towards a more
humane technological future. Yeah, so joining us now we've got Camille Carlton, who's the
policy director here at CHT and Pete Furlong, who is our senior policy analyst. And together with
the efforts of a lot of other team members at CHT, they've just released a report called The AI
Roadmap: How We Ensure That AI Serves Humanity.
And you can find it on the CHT website and also in the show notes.
Yeah, and we're not going to go into the whole thing today on the show, but we really wanted
to highlight some key parts of the report because it does something really rare that I haven't
seen anyone else in the space do yet, which is that it doesn't just stop at identifying
the problems that we're facing.
It actually has this clear vision for the AI future that we want, and it has a roadmap to get us
there. So to tell us more about this report and to get you all our wonderful audience engaged in what
needs to happen next, here are Camille and Pete. Welcome to Your Undivided Attention.
Thanks for having us. Yeah, thank you for having us here. So this report's coming at a time when
so much of the conversation around AI is kind of couched in this very deep, unmovable feeling of
inevitability. There are a lot of concerns about the negative effects on our kids, our classrooms,
our relationships, and even early fears, but big fears, around how it's starting to impact the
employment market and particularly white-collar jobs like computer scientists. It's all starting to feel
like this is just inevitable. But what I think I get from reading this report is that it's actually
not inevitable and that we can shape the direction of AI. So Camille, how do we do that?
Yeah, I mean, to start first, the feeling of inevitability is so
understandable, right? The scale of the problem we're facing is massive. AI touches so many aspects
of our lives. But this feeling of inevitability is also probably one of the worst things
that could happen to us as a society because we stop believing that we have agency and we
stop believing that a different path is possible. And there is not one single solution that can
solve this. No one solution will ever be enough. But it's important that we see that there are
solutions, right? There are concrete steps we can take to steer us off the path we're on and towards
a better future. And of course, change builds on top of change, right? So small wins are kind of like
snowballs that can eventually turn into an avalanche of positive change. But before we steer,
we also need to figure out where exactly we're going. And that's why for us, our report really
starts with seven principles for how AI should be built and deployed and used, right? Principles that
give us a clear vision for the future we want to end up at. And so we really think of the report
as like a roadmap for how we get there. Yeah. And I think before we dive into these individual
principles, like what is that vision? What does a humane future look like? I mean, a humane future
means different things to different people. And we really try to incorporate the range in which
AI touches on so many different parts of our lives. So we really imagine a future where there's
clear accountability for the harms of AI products, where AI elevates our human ability rather than replacing it,
where human identity and empathy are respected, not bought and sold.
We imagine a future where AI is used to supercharge democracy and rights instead of concentrating power
in the hands of a few companies, a few individuals, and where the capabilities of future AI products are transparent,
and there are kind of strict laws and lines about how we want AI built and used.
It's a future where the power of AI products and the people building them are matched with wisdom and responsibility.
And frankly, it's just not the future we're headed towards right now.
Yeah, I mean, that's the sense I get from hearing the principles that so many of them really just seem like common sense.
You know, of course we don't want to build machines that replace us.
Of course there should be accountability and reasonable limits.
And, you know, absolutely.
I think everyone listening to this would think that we need to protect things like dignity and democracy.
But it really doesn't feel that we are headed in that direction.
And so we do need to repeat those things and articulate those principles.
I mean, like, you could think a show like this might be talking about small design tweaks or, like, wonky policies.
But we're really talking about the things that give our lives meaning, right?
like our relationships, our jobs, our freedoms.
Yeah, and I think that because AI touches so many of these areas,
it's forcing us to really, you know, as a species,
ask these big questions about what we value in life
and what type of future we want to see.
So the broadness of the report is, in fact,
really kind of commensurate with the task at hand
and the fact that we are all reckoning with all of these different parts of our lives at once.
Yeah, and I think we wanted to root this report in the future that people want, not the one we're being sold by a limited few AI companies.
And I think it's important to recognize that there's broad support across the public and across political divides for many of these ideas.
And that's something that's reflected in a lot of the examples that we give here.
So I think we started first by identifying, like, where's the current path that we're on?
And what's the problem with that trajectory?
And so really just trying to get a good sense of the problem that we are trying to solve,
and then thinking about what's the future that we want.
So what's the alternative here?
And that's kind of really where we think about building up this principle from the ground up.
And so what are the steps that we need to take to get there?
What are the cultural norms that we need to change?
What are the laws that we need in order to better regulate AI?
What are the design changes that we need? So how do we change the way that this technology is built?
And I think it's important to recognize that these, you know, these aspects, norms, laws, and design,
they all kind of work together and they're really mutually reinforcing, right? So shifting cultural norms
strengthens the public's demand for more durable legal protections. And laws are, you know,
something that creates accountability that drives safer product design. And when we see safer
product designs, that shapes the public experience of these technologies. So these are things that
really act together. And together is kind of where we see the outcomes that we want and build towards
that better future. Can you give us an example? Yeah. So I think one of the examples that's really
important from this report is that right now, there's really no clear legal mechanisms in place
to hold AI companies accountable for the harms of their products. And this is a really important
problem. People are actively being harmed by AI systems, and we can expect those harms to grow
as AI becomes more deeply embedded in our day-to-day lives. So that's the problem. And I think the
solution that we want to build towards, the better future that we want, is that, really, in an ideal
world, companies should be taking into account our safety in the design of these AI products.
And I think, you know, when something does go wrong, whether that's one of the many cases of
AI-enabled psychosis or suicides that we've seen, or even an AI agent deleting your entire
company's code base, which is a real example that we've seen, the company that puts that
harmful product out into the world needs to be held accountable.
So, okay, that's the problem.
That's where we want to get to.
And so to get there, we need to shift norms, laws, and designs.
Like, let's start with norms.
What are the norms we need to shift?
How do we need to shift the way we think about AI?
So one of the norms that we agreed upon, for example,
was that AI is a product and therefore carries product liability.
We need to stop thinking about AI as a service
and start thinking about what it is.
It's a product.
Right?
So just like with any other consumer product,
the people building the product
have a clear duty to their users to make that product safe.
And if they fail to do so, consumers deserve accountability.
And this is something that we've
actually seen AI companies challenge both in court and in lobbying and in legislation. So the
argument there is that AI outputs are a form of speech. And so fundamentally underpinning this
argument that companies are making is the idea that it's not a product. This paradigm that we have
and we've used for centuries around product liability doesn't apply to AI. And that's kind of the
argument that AI companies are making in this case. And something that we think
is deeply problematic. One of the other norms that we talked about here was that responsibility
for these products should lie with the companies, not just the people who use them. Companies are
sort of advancing this narrative that if someone's harmed by an AI product, that's on them. But I think
it's important to recognize that many of the harms we're seeing are a result of how these products
are designed. I think also, Pete, one of the things that you and I have talked about with the
norms that we've outlined here of, you know, AI as a product and companies are responsible for
harms, not users, is that they are direct counters to the narratives that tech companies have been
putting out for decades. We've had huge companies putting out narratives that kind of shift the
way we think about them, their products, their responsibility, our role in using their products.
And that changes how we as individuals behave.
It changes how we regulate.
And so knowing that, okay, there's actually a different way to look at it is part of the process of getting us to kind of the better path we want to go on.
Exactly.
And so, you know, we expect car manufacturers to install seatbelts and airbags, right?
Why can't we hold AI companies to a similar standard?
And I think it's important that companies take reasonable steps to mitigate risks in the design
of their product. And this is something, you know, when we talk about laws that reinforce that
norm, that we actually have a policy framework here at CHT that goes into much more detail on this.
And we can link to that in the show notes. We also have seen, you know, different states as well as
a federally proposed bill, the AI LEAD Act, which seeks to define AI clearly as a product
in legislation. So there's kind of a number of different approaches to trying to address this.
Pete, do you have a sense that there's bipartisan consensus on this?
Yeah, so the bill we've seen introduced at the federal level is sponsored by Senators Durbin
and Hawley. So it has bipartisan co-sponsors. We've also seen bills kind of adopting the same
strategy across red and blue states. And I think, you know, part of the reason that this
approach appeals in a bipartisan way is that it's pretty common sense, right? The nice thing about
it as well is that it's pretty flexible. We don't
need a lot of really prescriptive regulation when we have this form of embedded accountability.
So I think that's something that appeals to folks on both sides of the aisle.
And I think that's something you see throughout this report is that so many of these issues are
truly bipartisan. And I just think that's, you know, a rarity these days. And I really
love that about it. So let's move on to another one of the principles that really struck me,
which was around the idea that we need AI that respects
our humanity and doesn't exploit it.
So can you just get into that a little bit more,
explain what you were getting at there, Camille?
Yeah, definitely.
And I mean, this is something that I think we hold really closely at CHT,
given the work that we've done,
supporting different litigation cases.
But the problem that we're really seeing here
is that AI companies right now are treating users like commodities, right?
Because the personal data that we as users provide these companies
about ourselves, our innermost thoughts, our feelings,
as well as our interactions with their products,
is incredibly useful in building and improving AI models.
In fact, leading investors and companies
openly describe this as a magical data feedback loop,
where intimate user interactions are continuously improving the product.
And now...
I mean, in that...
Sorry, I'm just going to say,
I just want to double-click on that,
because that is shocking, actually, to hear that, you know,
that really we're just vessels for data extraction.
You know, it's so debasing on a human level.
And this isn't the first time that users are the product, right?
We've seen this before with social media and the race to attention.
Right.
It was very clear in the advertising model.
And now it's gone even a level deeper, right?
It's really this race to intimacy where companies are designing products to look,
and feel human. They use human speech patterns. They speak in first person. There's even a little
elipsis to indicate that these products are thinking. Sometimes depending on the product itself,
you might even hear a backstory about the AI that you're talking to. And so there's kind of this,
again, intentional design to mimic our humanity. And not just that, it goes beyond that because there's
some things about these AI products that aren't human, right? They're always on. They're always
available. But they also always kind of validate your beliefs, even if it's not in your best
interest. There's just generally this sense of kind of the product will do whatever it can in
order to keep the user in conversation. And why? Because the bigger the model, the smarter
the model, the more likely a company is to kind of make it to market dominance, to get to profits.
Yeah, and I think those profit incentives are clearly there. But how do we change that?
What's an example of how we change those norms, change the design, and also change the laws?
So one big norm here that we have is pretty simple, but it would have, I think, really big impact.
It's the idea that we shouldn't humanize AI.
When we think about AI, we need to really clearly preserve the boundary between what is human and what is a machine.
And, you know, this goes into product design, like the things that I was saying about how the products are built to be in first person.
But humanizing AI also goes beyond product design.
It's also about not humanizing AI in our legal system by granting it legal personhood, which is something that companies have been pushing
for. Granting an AI legal personhood would not only limit accountability from AI companies,
but it would really tip the scales between AI and humans when it comes to legal rights and
protections. Wait, sorry, can I just, can I jump in there? AI as, like, a legal person? This is the thing that's
being considered? Yeah, so when we worked on the Character AI case, Character AI essentially
argued that the case should be dismissed because their product output
should be considered protected speech.
So the text coming from the chatbot
should be considered protected speech
under the First Amendment.
And now they argued this in a backdoor manner,
using kind of their users', their listeners', rights.
But the implications of this
of extending First Amendment protections
to a chatbot
would be kind of the beginning of what we call legal personhood,
which is something that corporations already have.
But the implication would be
different, right? Because it shifts accountability away from the company onto the chatbot,
the product itself. And when you think about how to operationalize this, it kind of gets sticky,
right? You have someone who has been harmed, and normally, you know,
you're suing a company for the product that they made. But if suddenly you're not suing the
company, you're suing the chatbot itself, how do you change the chatbot's behavior? How do you
receive damages from the chatbot? And so it creates this kind of liability shield for companies
if we're looking at a world in which legal personhood exists. Yeah. And it just strikes me as you're saying
this, like, this is how these ideas build upon each other. We just talked about accountability and
product liability, but this is another level of liability and accountability that we need to be
aware of and thinking about. And I personally don't want to be on the same legal footing as an AI chatbot.
Like, that seems like a really bad idea.
Anyway, I'm sorry.
Keep going.
I was just going to add, I think it's important to recognize that, like, this is also
connected to product design as well, too.
And so all of these things are interconnected, right?
When we talk about humanizing AI, these companies are building these products to reflect
our humanity, right?
And so that's a design choice on their part as well.
And it connects to their legal strategy.
Yeah, and I think that's so important.
Definitely. Camille, you mentioned the Character AI case, which CHT worked on, which, just to remind listeners, was the case of a 14-year-old boy, Sewell Setzer, who took his own life after a very intimate relationship with an AI chatbot. And we also worked on the Adam Raine case, which had a similar trajectory of a young boy taking his own life after a relationship with ChatGPT. And as you said, these cases could have turned out so differently if the products were designed differently.
Yeah, exactly, Sasha.
And we should note that in the report itself, there are design standards that AI companies can turn to if they want to build their chatbots better in accordance with this principle.
We should also note that there are states like California, Oregon, and Utah that are considering bills that would instantiate some of these design standards into law.
So there's real momentum on this issue.
I want to move on to other harms which are really evident out there in the zeitgeist,
and that relates to the impact of AI on jobs
and particularly the potential automation of work.
And so we hear a lot of stuff about how AI
is going to put massive amounts of people out of work.
So I want to press you guys, what can we do about that?
What does the report say about AI and jobs?
Yeah, so I mean, I think the North Star that we're striving for here
is pretty simple, right?
So we believe that AI should be built to augment human labor,
not replace it. And I think, you know, you're right, Sasha, that today's AI systems are built
with replacement in mind. Trillions of dollars are being poured into AI companies because only mass-scale
automation of our economy could make that investment worthwhile. And I think no one really seems
willing to play the tape forward and understand and imagine what this means for all of us, right?
But we believe really that it should be a fundamental principle that people deserve access to work,
they deserve a living wage, and they deserve economic security, and that they should have a seat
at the table when decisions are being made about technologies that will impact their core livelihood.
And so really, this requires all of us, and especially the people building artificial intelligence,
to rethink our beliefs about AI and work.
And so the goal of improving efficiency, the goal of adopting new technology should be to improve the lives of people, right?
An AI that displaces workers or devalues labor is undermining the very systems that we have in place to support people.
And that's not something that we want here.
And then also, I think that we need to recognize that work provides more than economic value to people.
It also provides meaning and purpose,
and to lose work entirely,
even if we found a way to provide people with a safety net,
would strip people of a lot of what matters to them.
Yeah, I mean, this is a topic we've covered a lot on this show.
I actually would highly recommend our episode with Michael Sandell,
who has written a lot about the importance of work
to human dignity and human meaning.
And I agree with everything you just said.
But again, I'm just struck by the fact that the incentives we have today
are not pointing in this direction.
It's so much easier for companies to treat labor as a line item
and to see automation as a way to just boost profits.
So we've talked about norms.
I agree we need all those norms.
But at the end of the day,
what are the laws that we need to start thinking about here?
Yeah, I think it's important to recognize here
that this is a really complex problem.
Our economy is a complex system.
And there's no silver bullet policy
that's going to change the incentives at play here.
So I think instead,
really what we need to be thinking about is a platform of approaches and a platform of different
policies. And so this could look like a tax system that's designed to prioritize spending on labor
over replacing people with AI. We've also seen, you know, different economists propose things like
apprenticeship programs to help with workforce development. And I think, you know, the other thing
that's really important here is we need to make sure that we reinvest some of the gains from
artificial intelligence towards helping the people that are displaced by it. And so really,
this means that leading AI companies need to help subsidize some of the reforms we're talking
about here. Are we seeing politicians start to think about these laws? Are they at all responsive?
Yeah. I mean, I think it's something that a lot of different folks on both sides of the aisle
are starting to consider. We've seen a number of different bipartisan proposals at the federal
level to do some better research so the federal government can understand the impact of artificial
intelligence on our economy. I think it's something that we can expect to be a pretty frequent
talking point as we approach some elections later this year. So, you know, I recognize the economy
is something that everybody cares about, right? And so if this is going to be one of the biggest
impacts on the economy that we're going to see, then politicians on both sides of the aisle are going to
have to take action.
Yeah.
Yeah.
I just think it's worth emphasizing what you said earlier, which is, the way to justify the
trillions of dollars of economic investment that they're making is wide-scale automation.
Like, that's the plan.
Whether or not they're successful is up to us, right?
But that's the plan.
Yeah, that's exactly right.
And this is something that we've even seen a lot of the top AI CEOs admit, right?
Like, they're saying that their technology
can replace a lot of the different jobs that we have.
But they're not really proposing a solution to that.
They're just warning us, right?
And so I think this is really important
and something that needs to be addressed.
So one of the things that I really appreciate
about all the things you've been talking about today
is you don't just focus downstream of the technology,
you know, how we should regulate it once it's out in the world.
But you also look upstream at the folks building technology
and you offer design standards.
I really appreciate that.
And we talked earlier about how new laws will ultimately influence design,
but that takes time and effort.
And one of the things that I worry about with those design standards
is that AI products today, the way they're designed is totally opaque.
Like we have no idea what's going on inside these labs.
And even the people building these products often don't have any idea of what's going on inside the products.
There's this whole field of mechanistic interpretability that's dedicated to this.
And so, you know, given all that,
how do you enforce design standards?
I mean, I think that this is one of the big kind of focus points of the report, right?
The massive asymmetry between what companies know and what the public knows.
And to your point, Josh, that many of the companies themselves can't fully explain why their systems behave the way they do.
And so we have that combined with competitive pressure to shorten testing cycles,
release products that could still be considered risky,
where we don't actually understand the risks,
and silence employees who might raise concerns.
We need a much more proactive approach to AI safety and AI transparency.
Instead of kind of playing whack-a-mole with safety
where we release a product, harm happens,
and then we go back and say, okay,
how do we figure out what this thing was and how do we fix it?
It's about demonstrating safety of products
before they're put in the stream of commerce.
And then on top of that,
you know, this fundamental principle
of rebalancing the information asymmetry
between companies and the public, right?
So transparency really enables informed decision-making
by the public, by policymakers, by businesses,
and this creates like faster feedback loops
that help us see around corners with AI,
anticipate harms, mitigate them.
These are not shocking asks.
We have this kind of transparency
and safety testing for every other high-risk industry.
It's in nuclear energy, even in medicine, in aviation.
Companies accept that they need to be transparent
and there needs to be some kind of external system of safety testing
that they can be held to.
But for AI, how do we actually get there?
Yeah, well, to your point, Sasha,
AI companies can't grade their own homework, right?
And this is the situation we're in right
now. We need independent oversight so that we know these products are safe before they're released.
And this is just not the case in this industry, despite being the case in many other consequential
industries. Yeah. And I think, you know, when we talk about laws, right, it's important that we
establish clear standards for pre-deployment safety testing for these products. And these are,
you know, safety standards that are rigorous and ongoing and not something that can just be viewed as
like a checkbox or a rubber stamp. I think it's important that we also have things like audits and
certifications. We've applied these regimes to banks and financial systems as well as just for
consumer product safety. And I think really importantly, we need to protect whistleblowers at these
companies and allow them to step forward when they see something that's going wrong. And this is
another area where we've already seen some real momentum. We've seen laws passed in
New York, California, and Colorado trying to address some of these aspects.
We've also seen Senator Chuck Grassley introduce a bipartisan AI whistleblower protection bill
that would provide nationwide protection for AI whistleblowers.
And I think it's also important to recognize that there's a lot of things that we could
be doing on the design side as well.
But I think just for the sake of things here, we'd recommend folks turn to the report for that.
The tricky thing is, as you were talking, I noticed the momentum that you mentioned in New York, California, and Colorado, it's state momentum.
Aren't we getting a sort of different patchwork of things that's really unenforceable with companies being able to do different things in different states?
I mean, how do we get that on a federal level?
Yeah, I think it's important to recognize the benefit that both states and federal legislation provides, right?
So states can respond really quickly and may have, you know, more visibility and responsiveness to
their constituents at the state level. But the advantage is federally, we can adopt something that
protects citizens across the country, right? And so we need both. And it's important that we have both
approaches. But I do think it's important at the end of the day that we do see some sort of
federal standards here. I also, I want to flag for listeners that this idea of, you know, a patchwork
approach has been a concept that has been really weaponized by companies. And they have used this
concept to push for things like the AI moratorium and to stop any sort of progress on regulating
AI companies. And Camille, just to jump in here and remind folks, the AI moratorium essentially was
a legislative package that was pushed by the technology industry this past summer. And the goal of that was to
try to essentially preempt all state AI regulation, with nothing in its place.
Right, right.
And so what it would have done is basically say states cannot regulate AI at all,
yet we have no plan at the federal level to do so.
And would I be right in thinking that the sort of larger part of that argument was,
if we do this, this will hurt the competitiveness of AI companies vis-à-vis China,
which would be a terrible thing for American national security, economic security, and so on.
Yeah, I think that this was one of the really big narratives pushed by tech companies.
But if you do just a little bit of digging into it, you see that, you know, the majority of legislation being introduced at the state level is about regulating things like AI chatbots, for example.
And if someone can explain to me how this AI chatbot is helping in our race with China, then, you know, let's have this conversation.
But there's a question of whether or not the type of innovation we are seeing from our leading AI companies
is actually supporting American exceptionalism, you know, American kind of leading in R&D and science and innovation,
or if we're just seeing kind of products being put out really without a purpose.
Yeah, we're racing, but what are we racing towards?
Yeah, and I think, you know, the goal there, right, is that we should be racing towards safe products, right?
That's something that benefits all of us.
One thing I do want to press you guys on, just before we wrap up,
is what comes first, really?
Like, if you could say, give me one thing that you think we really need to change right now
and that everything else, you know, the dominoes, would kind of line up afterwards,
and it would be a really impactful, high-leverage intervention, what would it be?
And I know they might not be the same thing.
So, Pete, do you want to kick us off?
Sure. Yeah. I mean, I think a really important thing for me is ensuring we have clear lines of accountability. And I know it's something we talked about at the top of the podcast here, but I truly believe that that's foundational to a lot of the change that we hope to see.
And how about you, Camille?
I think for me, it's kind of the opposite side, right? It's kind of ensuring that we have the rights and protections we need for people in place. So it's like we both need to
increase accountability for tech companies and then at the same time increase the protections
we have, whether these are protections around labor, protections around privacy, looking at those
two things hand in hand. I'd also just add that the midterm elections are coming up, right?
And we can expect AI to be an important aspect of this election, right? And so I think it's
worth focusing on the political influence of the technology industry, and it's worth folks
understanding where their candidates stand on these issues. We just heard Tristan and Aza
talk about how what we need is a human movement, a movement that really comprises all of us,
because that's the only thing that's going to balance the scales. And the conversation we've
been having today is concrete, and I think people are going to really love it, but I also wonder
if people are going to feel a little excluded from it, if they're not sort
of having their hands on the levers of power,
if they're not actually building the technology
or passing these laws.
So I'm sort of left with this question of like,
and I'm sure the audience is too,
what can I do to make this happen?
What can they do, our audience,
especially if they're not a policymaker or a technologist?
I think for me,
one of the biggest things to hold for people here
is that culture is upstream from politics, right?
Because if we change our norms, we change our culture, it changes how we build products, how we design products.
That is paradigm change.
And so to me, people understanding that they have agency to shift things by kind of changing the way we view the world is important.
And then, you know, baby steps, right?
Yeah.
And we all have the ability to affect change.
And we've seen the way folks like Megan Garcia and the Raine
family have stepped up and spoken out about their experiences with harms. We've also seen
parent advocacy groups speak up and, you know, try to push for change in terms of policy.
But then we also see the impact that schools have and teachers and folks across really
all aspects of our life. Yeah. For me, as a parent with kids in high school, I mean, we just
had a meeting at our high school with the Parents and Citizens Association about the use of AI at
school. So it's also stepping up and trying to have a shaping role and bring some of this knowledge
into those discussions at a local level, at a municipal level, because the more that happens,
the more we are actually driving that cultural and norm shift. You could be the voice in your family
who really brings these conversations to the dinner table and be the go-to person in your network
who understands these harms and can advise people in your network around, you know, how they can use AI safely,
and also where the line is between what their individual responsibility should be
and where we need to actually pressure our legislators to take federal or state responsibility,
and we need that help to externally enforce standards and safety measures.
I think ultimately, like you said, Pete, this is going to touch every aspect of our lives,
and so we all have a part to play in this.
I mean, at work, you can talk to your HR person about the AI that you're implementing
in your systems, and ask, what are the safety standards that you're applying there?
What are the privacy standards that you're applying there?
You can go to a town hall and you can say, hey, I'm really worried about what AI is going to do to my job
and see what they have to say about that.
And I'm reminded of the quote that Tristan often uses in these podcasts.
And it's a quote I've always loved, which is the Margaret Mead quote.
Never doubt that a small group of thoughtful, committed citizens can change the world.
Indeed, it's the only thing that ever has.
And it's, you know, it's true.
It's only going to come from us, and
we have to step up and do it.
And I think what I would also offer to listeners is we have really seen the power of individual
action with social media.
We have seen parents marching on Washington.
We have seen people putting their phone on grayscale.
We have seen people take action.
And it took a long time to get there.
But where we are with AI is people understand the harms way faster than they did with social media.
And so we're kind of at that point of,
we're ready. It's the time and place for people to come forward. And that same kind of trajectory
of change that we've seen from social media can happen with AI as well. We just covered a ton,
and that's only four of the seven principles in the report. So I really encourage people to go read
the whole thing. There's a lot more detail in there, but it's very readable. Pete, Camille,
thank you both so much for coming on today. A lot of food for thought, and I'm really excited to
get this out into the world. Thanks for having us. Yeah, thank you so much.
Your Undivided Attention is produced by the Center for Humane Technology.
We're a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer.
And our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudakin,
and original music by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team
for making this show possible.
You can find transcripts from our interviews, bonus content on our Substack,
and much more at HumaneTech.com.
And if you liked this episode, we'd be truly grateful if you could rate us on Apple Podcasts or Spotify.
It really does make a difference in helping others join this movement for a more humane future.
And if you made it all the way here, let me give one more thank you to you for giving us your undivided attention.
