Your Undivided Attention - Spotlight: The Three Rules of Humane Tech
Episode Date: April 6, 2023

In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we're taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.

RECOMMENDED MEDIA

We Think in 3D. Social Media Should, Too
Tristan Harris writes about a simple visual experiment that demonstrates the power of one's point of view

Let's Think About Slowing Down AI
Katja Grace's piece about how to avert doom by not building the doom machine

If We Don't Master AI, It Will Master Us
Yuval Harari, Tristan Harris and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents in this New York Times opinion piece

RECOMMENDED YUA EPISODES

The AI Dilemma
Synthetic Humanity: AI & What's At Stake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey everyone, this is Tristan.
And this is Aza.
So in our last episode, we shared a presentation Tristan and I gave recently to a group of influential technologists and media folks in San Francisco about the race happening in AI.
And at the beginning of that presentation, we talked about three rules for technology or laws in technology that, you know, Tristan, you and I have been sort of brainstorming about.
And we wanted to pause in this episode and just explore those three rules more deeply
because Tristan and I have been thinking about how to describe how technology is used beyond the intent of the creator,
but then gets picked up by a larger economic system and often used for really bad things.
I think what we really wanted to do was just kind of boil down and simplify.
What does it mean to be a responsible technologist?
Because there are these larger forces that are going to take the technology that you put
into the world and whisk it into all these impacts that you didn't intend.
And so what are three simple rules that we can follow as technologists to create a more
humane future?
So rule one, when you invent a new technology, you uncover a new class of responsibilities.
And it's not always obvious what those responsibilities are.
Rule two, if that invention, if that new tech confers power, it will start a race.
And rule three, if you do not coordinate, that race will end in tragedy.
So let's walk through these rules, and I'm going to illustrate with a very personal example.
So my personal example is that in 2006, I invented Infinite Scroll.
This was just a little bit before I joined Mozilla, and of everything I've ever created,
it's probably touched the most people.
I think it's now in something on the order of nearly 5 billion people's pockets.
I'm sorry, World.
And I really wish I had known these rules when I had created
infinite scroll. So, you know, when you invent a new technology, you uncover a new class of
responsibilities. So I invent infinite scroll, and it's actually not obvious what the responsibility
is for me. I was just trying to help people, like, find the things they're looking for better.
I did not realize that it would be used to keep people scrolling.
Well, just to be specific, you invented Infinite Scroll in the context of: if I have
a list of Yelp results or restaurant reviews or Google search results, then instead of having
to click a button to say, load 10 more results, more results just load as you scroll.
You invented it in one context, but then it got adapted to a different context,
which is in social media infinite feeds.
So the responsibility you were sort of uncovering
is the responsibility for mindful use of technology,
that you didn't realize it would be used for mindlessness
and to weaponize mindlessness at scale.
And so that might have been, is that right,
basically where you're going?
Yeah, that's exactly where I'm going.
And note that it's not obvious.
You invent infinite scroll, and it's not obvious
that it is mindfulness that now needs
to be protected, that we have a new responsibility.
Okay.
Two, if the tech confers power, it starts a race.
So obviously, attention companies, you know, TikTok, Facebook, Twitter, they are in a race
to get your attention.
Infinite scroll keeps you on the site longer, so it does confer power.
It gives you the power to keep people mindlessly scrolling, so it starts a race.
If Twitter doesn't implement it and Facebook does, Facebook wins. So it starts a race. And then if we do not coordinate, the race ends in tragedy.
So I've done this calculation a couple of times now, where you look at how much more time people
spend on these sites because they are infinitely scrolling and doom scrolling versus if they didn't
have that technology. And it turns out it's something like 200,000 human lifetimes per day
being wasted. So that's the tragedy. Because any one
company can't say, I'm not going to use infinite scroll, because then they'll lose to the ones
that do. Because there's no coordination, you end up in that tragedy.
I think we should define what we mean when we use the word tragedy, because it means something very specific.
No individual company wants to create a world that's doom scrolling all day long.
They just want a little bit more of your attention for themselves. And then collectively, it creates
this tragedy of the commons of mindful use of technology, in which now we live in a collective
tragedy of mindlessness. Because the ultimate outcome is something that no one wants, we call it a
tragedy. All right. So I hope that makes sense to the listeners for how to use the three rules
and why I wish I had known about them when I was working on Infinite Scroll. And now maybe it makes
sense to like do a little bit of a deeper dive through each of the three rules and expand them out
a bit. So I think rule one in some sense to me is always the most surprising because you really have
to learn how to look sideways to try to figure out what new responsibilities are uncovered
when a new technology is invented.
So I'll give some examples, right?
We didn't need the right to be forgotten until computers could remember us forever.
It is surprising that cheap storage means we have to write new laws about being able to be forgotten.
Or another one, we didn't need to have the right to privacy in our laws,
until cameras were mass-produced.
In fact, the original Constitution doesn't have a right to privacy.
It took one of America's most brilliant legal minds, Brandeis,
who later became a member of the Supreme Court,
to argue for the need for a right to privacy.
For Kodak, who was making mass-produced cameras for the first time,
that was not a thought in their head.
It's very surprising that you invent a new technology
like a camera, and suddenly we need to invent brand new legal concepts.
A more recent example are generative models.
So we've invented new technology.
I can type in text and get out a brand new image that has never existed before as described by that text.
What new responsibility might that create?
What's been uncovered?
Well, one of the surprising things is that you can type in the name of any artist,
and say, make me a picture of an apple being held by an astronaut in the style of,
and then name any artist you want.
It can be a living artist, and it will produce that image in the style of that artist.
Now, the question is, who owns the copyright?
It's not clear.
Suddenly, you can essentially steal the style of a living artist, make money off of their style,
but nothing here is yet illegal.
So a new technology is creating a new capability
which uncovers a new class of responsibility that isn't yet protected.
Or take the simple example of social media and virality.
You know, the person who invented the retweet button just thought this would be a cool thing to do.
Why wouldn't it be awesome to let more people know about more things?
Let's democratize access to information and make things move around the system more quickly.
But there's a new kind of responsibility that also comes with that,
which is similar to broadcasters of television.
If I'm going to instantly broadcast something to millions of people very, very quickly,
I probably want some journalistic standards.
I probably want to fact-check that information.
But the ability to reshare something and amplify something is not built into our constitution.
And so much of what Aza is talking about here in the new class of responsibilities that we're unearthing
is that we're often stuck in a two-dimensional world.
When new tech pops out, we just bump into three dimensions, four dimensions, five dimensions.
And we often need to update our laws to match the new kinds of values that we want to protect.
And one of the challenges is that, you know, in our interactions
with people who work in technology, they'll say, well, I made this technology, but it's up to,
you know, the government to figure out how we should regulate it, or it's up to ethicists
to figure out what the ethics of this should be. And as we move into the age of exponential
technologies that are adopted so quickly, where ChatGPT is adopted by a hundred million
people in two months when it took Facebook four and a half years to reach 100 million people,
as we move into the age where technology will obliterate and eat the world so much faster
than our responsibilities can catch up,
it's no longer okay to say it's someone else's responsibility
to define what responsibility means.
Okay, let's talk about the second rule,
which is that if the technology confers power, it starts a race.
And this happens everywhere.
There's a lot of subtle ways that technologies and design choices confer power.
You know, Aza gives the example of Infinite Scroll
that conferred power to the social media apps
that used Infinite Scroll to keep you locked into
doom scrolling or locked into mindlessness.
We talk about other things in social media
where social media sites that dose you
with many likes and positive social feedback
100 times an hour are going to
outcompete the sites that don't give you frequent
social feedback, just because that confers
power in the way that that creates
addiction. But there are also many
emerging ways in which technology is conferring
power and starting a race.
The new AI systems are enabling people
to do automated lobbying.
So imagine that if I'm a lobbyist
and I start employing AI to write
automated letters to Congress members, thousands of times, personalized to them,
also write letters to their constituencies, getting them angry, also create media
that gets them angry about what I don't want the politician to do.
I'm a lobbyist that starts using automated lobbying.
I'm going to out-compete other lobbying firms that don't use automated lobbying.
Yeah, and to be a humane technologist, like law two highlights another responsibility
that you have.
And that is when you create a technology, it is your job to also name the power that it
confers and describe the race that it will create.
Because if you don't do that, you are mindlessly creating a race that you will not be able to stop.
I remember with Infinite Scroll, I went around and gave talks to all the companies, Twitter, Google,
to try to get them to adopt the technology because I thought it was just a better interface.
The feeling of watching it move to being a race between the companies to figure out how to use it to keep people mindlessly scrolling,
honestly, it took me a number of years
to even let that in.
It's a lot.
Yeah, I think the point of these laws
and why we're warning people about this
is so that you're really careful, when you're a technologist,
about the power that you create,
because it will create a race
that might run away from you.
And once it runs away from you,
like with Aza and Infinite Scroll,
how can Aza, as one individual human being,
how can he pull back Infinite Scroll?
It's out there in the world.
It's racing to make its way
to parasitically, you know,
redesign and transform
every other interface out there in the world
according to it, because it confers power.
And one of the reasons that we're looking
so closely at AI is because
AI increases the power
of everybody who employs it to get
better within any domain, whether it's better
at biology, better at journalism,
better at content production,
better at providing social feedback,
better at beautification filters.
And so I think one of the key ideas that we're trying to
communicate here is just that
technologists need to be aware of the races
that they create before they run away
from them. So how do we get better at identifying the race that we're creating? Well, first, notice and
think about, in what ways will this technology that I'm putting out there, or this new design,
in what ways will it confer power? What are other races that are already going on in the world
where this new technology will help arm one side of the arms race? And if I can notice that,
I can get better at sort of screening ahead for what race it might accelerate.
So I think that's a good place to mark Rule 3.
If you do not coordinate, the race ends in tragedy.
I think Rule 3 really explains why when you talk to people that are on an ethics team inside of a company or a safety team or an integrity team, they'll often tell you that they feel really burned out.
Why is that?
It's because their role in the company is generally
to slow down any harms that might be coming out of the company.
So their job is generally to say no to things,
but the company has a profit motive, a profit incentive.
So everyone else in the company wants to go as fast as possible.
And when push comes to shove,
of course the people saying slow down, it's not safe,
are going to lose because the company is in competition with everyone else.
And if your competitor is doing the thing that you think is unsafe,
well, it doesn't matter, you have to compete as a company.
So safety people, integrity people are almost always structurally set up to fail
because they're working inside of one company
and not coordinating across many companies.
The problem is that it's not about what one actor is doing.
It's about the race that is emerging between all the actors.
I mean, just to make this real, Microsoft, you know, by hitting the starting gun
and deploying ChatGPT directly into Bing, directly into Microsoft Office,
into the Windows taskbar, has now forced, you know, if Google doesn't embed ChatGPT into
its Google Workspace of Google Docs, spreadsheets, et cetera, they're just going to lose to the
companies that will.
So now each company individually might say, well, hold on a second, look at what I'm doing
for safety.
This is Google saying, look what I'm doing for safety.
OpenAI saying, look what I'm doing for safety.
What we really want is coordination so that we don't have that race end in tragedy.
And the tragedy that would emerge here would be the collective
recklessness from embedding all these AIs into these systems before we know what's dangerous
about them, before we know where they could go wrong.
One of the things I just find interesting about this is that the idea that we need to coordinate
or that we can coordinate just feels so impossible to most people who work in technology,
right?
Because you're just one person, you're living in a body, you wake up in a bed, you go to work,
you work in your laptop, and you're building stuff.
Where on that life menu of choices is the new menu item that says,
I'm going to bring all these people to a table to negotiate a kind of a collective answer to this problem.
And of course, then you run into issues of trust, and that's hard, and do I have the email addresses of the people?
And would they come?
And one of the things we've talked about on this podcast in the past is multipolar traps,
that so many of the problems that we're facing are these coordination problems.
It's if I don't do it, I lose to the other guy that will.
And I recognize that it's hard to sort of wake up as an individual human and say,
would the other people come if I invited them?
Would they collectively agree to stop or to slow down
if we agreed that there was a potential slowdown
of how we're all releasing, say, AI?
Now imagine a world where technologists saw in terms of these three principles.
They saw in terms of the responsibilities
that were also emergent through the new power that they were creating.
They saw in terms of the race that might emerge
and they tried to get ahead of that race.
And they saw that they needed to help take responsibility
for coordinating that race
to prevent the tragedy
that would emerge on the other end.
In a world where technologists did take responsibility for these three rules,
we would live in a more humane world.
It's not just a matter of whether technologists could do this,
could host and convene the conversations that need to happen.
It's that we can't survive if technologists don't do this.
So if I was Sam Altman of OpenAI or Demis Hassabis from DeepMind
or Sundar from Google,
and I understood these three rules, the thing that I would be
trying to do is to host a convening of all of the actors in the space to figure out how
do we do this right? How do we move at the speed of safety? Because if I'm one of those guys
and I am not trying to host that convening, I know that I will be in a race to compete and
it will create a tragedy for all. And if I am unable to host that convening, I'd be trying
to find someone else who could. Maybe it would be, you know, Biden at the
White House. Maybe it would be the Secretary General at the UN. But the important point would be to
create a facilitated deliberation to get to a negotiated agreement that lets everyone move safely.
So there's this AI researcher, Katja Grace, who wrote a post, Let's Think About Slowing Down
AI, about how to avert doom by not building the doom machine. And she, I think, makes an incredible
point that somehow we think of building incredibly challenging technology as something worth
doing if it's nearly impossible, but we think of the idea of coordinating to not do it as delusional.
So she summarizes conversations she's had in the AI field as: some people say, well, maybe
we should stop building this dangerous AI.
And the response she gets is that would involve coordinating numerous people.
We may be arrogant enough to think that we could build a God machine
that could take over the world and remake everything,
but we aren't delusional.
It's like engineers are much more likely to believe
that we can take on this impossible challenge of building AI
than we are to take on the much more tractable challenge
of inviting six people to come to a table
and sit in a room for as long as it takes
to figure out how to move at a pace
in which we'll actually get this right.
So in summary, Rule 1,
when you invent a new technology,
you uncover a new class of responsibilities.
Rule two, if the tech confers power,
it will start a race, and rule three,
if you do not coordinate, that race will end in tragedy.
So there are two major takeaways here as a technologist.
One, whenever you invent a new technology,
it is part of your responsibility to start looking around
for what that new class of thing that needs to be protected is.
And then the faster that technology is invented and gains
new powers, the faster that new classes of responsibility are uncovered that we often will not
yet have law or language or philosophy to describe. So with AI, as we're entering not just the single
but the double exponential, more and more of the human experience will essentially be open to being
eaten and we will not have the law, the language, the philosophy to protect it. And so more and more
of the human experience, the things that are core and ineffable about what makes living so wonderful
will be eaten unless we at the same time figure out what those classes of responsibilities are.
So the thing that you can do as a listener is internalize the three rules. Have everyone in your
company internalize these three rules. Yeah. And the good news is that no one wants a tragedy.
I mean, this sounds impossible, and I know that it is. And it's most impossible
with AI, where everyone has these capacities and the possibility to defect and be the one guy who
just races ahead and grabs the power. I mean, AI is the Ring from Lord of the Rings, where those
who seek it, you know, gain the power to bind all of the other powers, and the temptation to go
rush for the ring and grab it, it doesn't look good, right? There are thousands of startups
getting funding from VCs to rush and grab the ring. Everyone is racing to get the ring.
That's the kind of thing. This is the ultimate test of humanity. I think the role of the optimist
in this era of technology
is to articulate the shared fate for humanity,
what happens at the end of the race to tragedy,
because, as you say, no one wants that world,
and if we can all see it at the same time together,
then I think that gives us the fortitude
to face our final rite of passage as a species
and not grasp the ring.
I really wish I had known these three rules at the beginning of my career.
Say that. Go ahead.
It's just, I was operating from a different philosophy of if something is cool, I should
make it. If it helps one person or the people around me, I should make it. If people start
adopting it and using it and I've made their lives simpler, then I should make it.
Those were the philosophies I was running on, and I was trying really hard to be a good person.
And if I had access to these three rules, I'd have known that, even though I was being locally good,
the way my technology and my invention was going to be used would be amoral at best and sort of immoral at worst.
So that's my hope here, is that by articulating these three laws of technology,
more technologists will not make some of the fundamental errors that I made,
on technology that I think is going to be much more consequential to the future of humanity.
If you want to go deeper into the themes that we've been exploring in this episode
and all the themes that we've been exploring on this podcast about how do we create more
humane technology, I'd like to invite you to check out our free course, Foundations of Humane
Technology at humanetech.com slash course.
We also want to hear your questions for us.
So send us a voice note or email at askus at humanetech.com,
or visit HumaneTech.com slash Ask Us to connect with us there,
and we'll answer some of them in an upcoming episode.
Your undivided attention is produced by the Center for Humane Technology,
a nonprofit organization working to catalyze a humane future.
Our senior producer is Julia Scott.
Our associate producer is Kirsten McMurray.
Mia Lobell is our consulting producer,
mixing on this episode by Jeff Sudakin.
Original music and sound design by Ryan and Hayes Holiday
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
A very special thanks to our generous lead supporters,
including the Omidyar Network,
Craig Newmark Philanthropies,
and the Evolve Foundation, among many others.
You can find show notes, transcripts,
and much more at HumaneTech.com.
And if you made it all the way here,
let me give one more thank you to you
for giving us your undivided attention.
