Your Undivided Attention - Spotlight — What Is Humane Technology?
Episode Date: April 7, 2022

"The fundamental problem of humanity is that we have paleolithic emotions, medieval institutions, and God-like technology." — E. O. Wilson

More than ever, we need the wisdom to match the power of our God-like technology. Yet technology is both eroding our ability to make sense of the world and increasing the complexity of the issues we face. The gap between our sense-making ability and issue complexity is what we call the "wisdom gap." How do we develop the wisdom we need to responsibly steward our God-like technology?

This week on Your Undivided Attention, we're introducing one way the Center for Humane Technology is attempting to close the wisdom gap: through our new online course, Foundations of Humane Technology. In this bonus episode, Tristan Harris describes the wisdom gap we're attempting to close, and our Co-Founder and Executive Director Randima Fernando talks about the course itself.

Sign up for the free course: https://www.humanetech.com/course

RECOMMENDED YUA EPISODES
A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger: https://www.humanetech.com/podcast/a-problem-well-stated-is-half-solved
A Conversation with Facebook Whistleblower Frances Haugen: https://www.humanetech.com/podcast/42-a-conversation-with-facebook-whistleblower-frances-haugen
Here's Our Plan And We Don't Know with Tristan Harris, Aza Raskin, and Stephanie Lepp: https://www.humanetech.com/podcast/46-heres-our-plan-and-we-dont-know

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
In the words of the late, brilliant sociobiologist E.O. Wilson, the fundamental problem of humanity
is that we have paleolithic emotions, medieval institutions, and godlike technology. And when you have
godlike technology, you need the wisdom to match the power that you have. And our previous
wisdom and thinking have not been commensurate with the complexity of our problems. Technology
is further undermining our capacity to make sense of the world. So the gap
between our ability to make sense of the world and the increasing complexity of the world
is getting bigger. So how do we develop the wisdom we need to responsibly steward our godlike
technology? I'm Tristan Harris, and this is Your Undivided Attention, the podcast from the Center
for Humane Technology. And today in the show, we're going to talk about one way that the
Center for Humane Technology is attempting to close the wisdom gap. It's a free online course
called Foundations of Humane Technology.
So I'm going to talk a little bit about the wisdom gap we're attempting to close,
and then I'll hand the baton to our co-founder and executive director,
Randima Fernando, to introduce Foundations of Humane Technology.
You know, since The Social Dilemma came out in September 2020,
the world didn't just stop and, you know, freeze to solve the problems of technology:
misinformation, addiction, etc. Technology keeps racing along,
and it's generating brand new issues that we didn't even have before.
For example, Aza, my co-host on this podcast, showed me a new synthetic media technology
called CLIP-guided diffusion.
This is literally new technology that didn't exist even a few months ago.
And with it, you can give a computer a phrase like,
"a girl stood in the blossoming sunset valley, pondering the mountains in the distance,
peaceful pastel palette, matte painting,"
and instantly it generates an entire beautiful image from scratch
with exactly what you just said.
What was more alarming, though,
was when you could actually type in "hashtag breaking news,
Kharkiv bombed in Ukraine."
And suddenly it generates fake images of a city being bombarded
that look just like the images online.
And once you see this, you realize it's like a neutron bomb for trust on the internet.
Because if anybody can create any false image at scale, we're in a totally different kind of world.
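To make the prompt-to-image workflow concrete, here is a minimal sketch using the open-source Hugging Face diffusers library. To be clear, this is an illustrative sketch, not the exact tool demonstrated on the show: it loads a Stable Diffusion checkpoint, a later relative of CLIP-guided diffusion, and the model ID and GPU assumption below are illustrative choices.

```python
# A minimal text-to-image sketch using Hugging Face's diffusers library.
# Illustrative only: Stable Diffusion is a later relative of the
# CLIP-guided diffusion technique described in the episode.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline (downloads weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

prompt = (
    "a girl stood in the blossoming sunset valley, pondering the mountains "
    "in the distance, peaceful pastel palette, matte painting"
)

# One call turns the phrase into a brand-new image that never existed before.
image = pipe(prompt).images[0]
image.save("sunset_valley.png")
```

The same few lines will just as readily render a fabricated news photo from a prompt like the Kharkiv example, which is exactly the trust problem described here.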
And so what we can see is that the complexity of the world is increasing.
As technology accelerates, it's generating new and even more complex interrelationships with
other things in society.
Consider that just a few weeks ago, before Russia invaded Ukraine, you would have
thought that the issues of deepfakes or synthetic media, cyber attacks, and nuclear risks
were separate issues, right?
How nuclear escalation could happen is one risk area,
but it's totally separate from cyber attacks or misinformation or synthetic media.
But suddenly you realize that nuclear risks directly emerge from, say,
the ability to flood the internet with misinformation targeted at certain audiences,
which could accidentally trigger an escalation pathway.
And so as the complexity of the world is increasing,
our ability to respond to that complexity, to make sense of that complexity,
is not matching up.
And that's what we call the wisdom gap:
the gap between the complexity of the world
and our ability to make sense of it and respond adequately.
For example, as we've talked about in this podcast,
the generals of the United States sitting in the Pentagon
might be making sure that our physical national security
is defended everywhere,
but are they really keeping track of the definition and meaning of security
when it suddenly moves into the domain of TikTok or Twitch or Discord?
How many of the generals are updating their view to include
these new forms of security?
Technology is doing two things.
It's simultaneously driving up
the complexity of the issues
because the world is more complex
when you have synthetic media
or deepfakes in it, right?
It suddenly complicates
how we would respond to these different problems.
And it's also decreasing
our ability to make sense
of the increasing complexity
because of the social media outrage machine
forcing us to rely on simpler and narrower
polarizing narratives
about how the world works
or what we should do about it.
So if you imagine two lines on a graph, one line is the complexity of the world going up,
and the other is our ability to make sense of it, which is actually going down.
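For readers who want the picture rather than the words, here is a toy sketch of those two lines. The curves below are purely illustrative placeholders, not measured data.

```python
# A toy illustration of the "wisdom gap": issue complexity rising while
# collective sense-making declines. The curves are invented for the picture;
# they are not measurements of anything.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(2000, 2030, 200)                # arbitrary time axis
complexity = 1.0 + 0.08 * (t - 2000) ** 1.3     # rising complexity of issues
sensemaking = 2.5 - 0.05 * (t - 2000)           # declining sense-making ability

plt.plot(t, complexity, label="Complexity of the world")
plt.plot(t, sensemaking, label="Ability to make sense and respond")
plt.fill_between(t, sensemaking, complexity,
                 where=complexity > sensemaking,
                 alpha=0.2, label="Wisdom gap")
plt.xlabel("Time (illustrative)")
plt.ylabel("Arbitrary units")
plt.legend()
plt.show()
```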
And one of the main things that alarms us at the Center for Humane Technology
is the gap between how the people making the technology conceive of the world
and what their responsibility is and what's actually happening.
The paradigm that got us here was: "We're just giving users what they want.
Every technology has had good and bad;
we've always had moral panics around new technologies,
whether the radio or television or rock and roll.
Our job is just to give people the most personalized, relevant content.
And technology is neutral.
Who are we to choose what's good for people?"
When you think about those beliefs that I just listed,
how can you address a problem like polarization or misinformation
or addiction when you believe that you're just giving people what they want?
What will it take to upgrade this paradigm
that we've been operating in in Silicon Valley?
And we often point to the work of systems theorist Donella Meadows,
who wrote a book called Thinking in Systems.
And she has this beautiful quote that says,
the way you upgrade the paradigm is you keep pointing at the anomalies
and failures in the old paradigm,
and you keep speaking and acting loudly and with assurance from the new one,
and you insert people with the new paradigm
in places of public visibility and power.
Well, that's the goal that we have
with the Foundations of Humane Technology Course.
Instead of just saying we're giving users what they want,
we believe we need to see the world in terms of respecting human vulnerabilities,
that every brain is vulnerable to confirmation bias,
and you have to design technology with the idea of confirmation bias in mind.
And instead of saying every tech has some goods and some bads, which is true,
we need technology that actively internalizes the most harmful
and especially irreversible externalities.
And instead of saying we need to maximize personalized, relevant content,
our goal as humane technologists is to create shared understanding.
So respecting human vulnerabilities, internalizing negative externalities, and
creating shared understanding: this is just a small sample of the principles of
humane technology. And with that, I'm going to hand the baton to our executive director,
Randy Fernando, who led the development of the course Foundations of Humane Technology.
Thank you, Tristan, and hello to all of you.
We've heard from so many of you that you're looking to make technology more humane,
and we're excited to give you a way to do just that.
So, we just heard a little bit about the inspiration for the course.
Now I'm going to tell you a bit more about it, starting with who it's for.
Many of you work in tech companies and have bumped up against some of the harmful operating paradigms that Tristan mentioned.
Many of you are investors, policymakers, and academics
who are looking to infuse humane tech principles into your work.
Some of you are communication professionals and UX designers,
artists and creators, activists and advocates,
and educators and students, all looking to support this movement.
Foundations of Humane Technology is for all of you,
all of you who are dedicated to making change happen.
And thanks to the generosity of our donors, the course is free,
and we recommend taking it with a friend, your entire team, or even your whole organization.
As far as what the course intends to do, it takes about seven hours to complete,
and it's primarily designed to help technologists around the world shift the paradigm we use to build products,
to change the core mindset from which we build technology.
You can think of shifting paradigms as having three elements.
First, shifting understanding.
Second, changing behavior.
And third, building community.
We shift understanding by examining the ideas that got us here
and replacing them with ideas that lead us where we aim to go.
In this case, it's the principles of humane technology.
We change behavior through concrete recommendations,
personal reflections, and activities that you can apply to your products and to your life.
For example, our module on respecting human nature ends with the Humane Design Guide
that you can use to analyze how a specific product interacts with human biases and vulnerabilities.
One of the main aims of the course is to build community.
Shifting paradigms and changing behavior are hard, so we need to take on this work together.
To build these relationships, we host weekly virtual events and happy hours to help course
participants connect.
We also provide resources and guidance for building or advancing a career in humane technology.
So what are our aspirations for this course?
Our dream is bold: to have 100,000 technologists, investors, policymakers, founders, and everyday citizens
trained in these foundational ideas.
Imagine different product conversations inside companies.
Imagine new investment funds, collectives, and incubators dedicated to more humane technology.
Imagine professors teaching these concepts at universities
and more graduates joining the workforce and asking important questions in their job interviews.
Imagine prize funds and hackathons that reward more humane ideas.
And imagine a world where humane products out-compete extractive ones.
Foundations of Humane Technology is not an end point, but part of a longer journey that involves
countless organizations working hard to bring technology and socioeconomic systems into alignment
with humanity and our ecosystems.
We've been overwhelmed by the enthusiasm so far.
Thousands of people have already enrolled, including employees from Facebook, Google,
Apple, Amazon, Microsoft, and TikTok, all the way to Nike, Atlassian, USAID, and the United Nations.
We're also proud to be partnered with Mozilla, who will be promoting the course to their developer
network.
Sign up for Foundations of Humane Technology at humanetech.com/course.
And finally, an enormous thank you to the people who made the course a reality.
David J., Chris Eddy, Johanna King Slutsky, Rebecca Lindel,
Kellynne Cline, Maria Bridge, Jonas Sachs, Tristan Harris, Aza Raskin,
Gita Sengupta, Matthew Brensilver,
the entire team at the Center for Humane Technology,
our many alpha reviewers,
our hundreds of beta testers worldwide,
the early Time Well Spent collaborators, and many others.
Foundations of Humane Technology is made possible by the generous lead supporters of the Center for Humane Technology.
Support for this course was provided in part by the Robert Wood Johnson Foundation.
We've got many exciting ideas about how to evolve this course.
If you'd like to help make them happen, please make a gift at humanetech.com/donate.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit organization working to catalyze a humane future.
Our executive producer is Stephanie Lepp and our senior producer is Julia Scott.
Mixing on this episode by Jeff Sudakin.
Original music and sound design by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at HumaneTech.com.
A very special thanks to our generous lead supporters,
including the Omidyar Network, Craig Newmark Philanthropies,
and the Evolve Foundation, among many others.
And if you've made it all the way here,
let me just give one more thank you to you
for giving us your undivided attention.
