Your Undivided Attention - Ask Us Anything: You Asked, We Answered
Episode Date: December 29, 2022

Welcome to our first-ever Ask Us Anything episode. Recently we put out a call for questions… and, wow, did you come through! We got more than 100 responses from listeners to this podcast from all over the world. It was really fun going through them all, and really difficult to choose which ones to answer here. But we heard you, and we’ll carry your amazing suggestions and ideas forward with us in 2023.

When we created Your Undivided Attention, the goal was to explore the incredible power technology has over our lives, and how we can use it to catalyze a humane future. Three years and a global pandemic later, we’re more committed than ever to helping meet the moment with crucial conversations about humane technology - even as the tech landscape constantly evolves and world events bring more urgency to the need for technology that unites us, invests in democratic values, and enhances our well-being.

We’ve learned from our guests alongside all of you. Sixty-one episodes later, the podcast has over 16 million unique downloads! That’s a lot of people who care about the promise of humane technology and are working to construct a more humane version of technology in their lives, their family’s lives, and within their communities and society at large. We’re a movement! Thank you to everyone who submitted questions and comments for us. We loved doing this, and we’re looking forward to doing it again!

Correction: When discussing DeepMind’s recent paper, Aza said the premise was four people entering their views and opinions, with AI finding the commonality between all of those viewpoints. It was actually three people entering their views and opinions.

RECOMMENDED MEDIA

CHT’s Recommended Reading List:

Foundations of Humane Technology
Our free, self-paced online course for professionals shaping tomorrow’s technology

The Age of Surveillance Capitalism by Shoshana Zuboff
Foundational reading on the attention economy

Algorithms of Oppression by Safiya Umoja Noble
Seminal work on how algorithms in search engines replicate and reinforce bias online and offline

Amusing Ourselves to Death by Neil Postman
Written in 1985, Postman’s work shockingly predicts our current media environment and its effects

Attention Merchants by Tim Wu
A history of how advertisers capture our attention

Doughnut Economics by Kate Raworth
A compass for how to upgrade our economic models to be more regenerative and distributive

Thinking in Systems by Donella Meadows
This excellent primer shows us how to develop systems thinking skills

What Money Can’t Buy: The Moral Limits of Markets by Michael Sandel
Sandel explores how we can prevent market values from reaching into spheres of life where they don’t belong

Essay: Disbelieving Atrocities by Arthur Koestler
Originally published January 9, 1944 in The New York Times

Humane Technology reading list
Comprehensive for those who want to geek out

ORGANIZATIONS TO EXPLORE

Integrity Institute
Integrity Institute advances the theory and practice of protecting the social internet, powered by their community of integrity professionals

All Tech Is Human job board
All Tech Is Human curates roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest

Denizen
Denizen brings together leaders across disciplines to accelerate systemic change

New_Public
New_Public is a place for thinkers, builders, designers and technologists to meet and share inspiration

Psychology of Technology Institute
PTI is a non-profit network of behavioral scientists, technology designers, and decision-makers that protects and improves psychological health for society by advancing our understanding and effective use of transformative technologies

RadicalxChange
RxC is a social movement for next-generation political economies

The School for Social Design
The School for Social Design offers three courses on articulating what’s meaningful for different people and how to design for it at smaller and larger scales

TechCongress
TechCongress is a technology policy fellowship on Capitol Hill

RECOMMENDED YUA EPISODES

An Alternative to Silicon Valley Unicorns
https://www.humanetech.com/podcast/54-an-alternative-to-silicon-valley-unicorns

A Problem Well-Stated is Half-Solved
https://www.humanetech.com/podcast/a-problem-well-stated-is-half-solved

Digital Democracy is Within Reach
https://www.humanetech.com/podcast/23-digital-democracy-is-within-reach

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey, everyone, it's Tristan.
And this is Aza.
Your undivided attention will be going on a break,
so this will be the last episode of 2022.
But we'll be back early next year
with a whole new lineup of guests and ideas,
and we can't wait to explore them with you.
So now we're going to actually do something we've never done before,
which is answer your questions in our first ever Ask Us Anything episode.
So a few weeks ago, we put out a call for questions,
and you all really came through,
We got more than 100 responses from listeners to this podcast from all over the world.
And it was really fun for our producing team to go through them all and hard to choose which ones to answer.
But we have heard from you.
And I mean, what I'm sensing is just how much people want to be in community and not feel alone in thinking through the problems that we've been going through.
So three years ago, when we co-created this podcast, I remember it was 2019,
and we had just given a big presentation in San Francisco
that we had internally called Project Paradigm.
It was the presentation that's shown throughout the film
The Social Dilemma, based on the insight
that a lot of the conversations that were going on in technology
were about privacy or data or content moderation.
And we felt like that framing was missing this broader trend,
this other dimension of how technology was affecting society.
It was really about these psychosocial,
psychological and sociological effects
on our attention, our well-being, what we're thinking about, how we're making sense of the world.
Is it weakening or strengthening our relationships?
And we wanted a podcast where we could explore all the drivers of the attention economy.
We could explore persuasive technology.
We could pull insights from those who understand these secret back doors to the human mind.
We promised that we would interview people who were experts in cults or hypnosis,
who were experts in how attention could get interrupted.
And I recommend people go back to the full catalog because we have covered a lot of these different topics.
And three years in a global pandemic later,
we're more committed than ever
to helping people meet the moment
and understand where technology is going
and how we get to a world that's more humane.
One of the things that I'm most excited about with this podcast
is that I think it acts sort of like a bell
that we've rung that articulates a different alternate reality
that's right next to our real reality
where technology, social media,
isn't driving us in this downward incoherence spiral.
We are now 60, 61 episodes in. We've had 16 million unique downloads. And that's all
happened, by the way, without any advertising or promotion. And what I think it points to is there's
a whole bunch of people like all of the listeners that care about the promise of humane technology
and working to construct a more humane version of the world.
And I think my biggest hope, and I think it's happening is that as we ring our bell,
it enables other people to ring their bells.
And it's sort of that shared harmony together that creates the basis of a movement that can create larger change.
So with that, let's actually dive into the questions.
Hello, my name is Kara Brancoly.
I'm a high school librarian in San Francisco, California.
And I'm putting together a navigating digital information series for teachers at our school to use in their classroom.
And I just wanted to try to draw on your expertise and find out if you had a very limited amount of time with a group of high school students, what would be the four most important topics that you would want to share with them?
Thank you.
I'll check.
So if I could instill one thing into the next generation's minds, it would be epistemic humility
and a real curiosity of how we come to the beliefs that we come to because getting just that
one thought into their minds is sort of an inoculation against all of the harms of the stuff
they're going to encounter. Just to define that word epistemic humility, epistemic for epistemology,
how do we know what we know? And I think asking ourselves that question,
how do I know what I know?
The core question for high school students,
as we're developing our tools
for understanding what's true in the world,
is to ask ourselves,
how do we know what we know?
Is that true?
Can we be absolutely sure that it's true?
Noticing that that belief didn't come from us internally
but might come from media that we see,
it's so subtle because people think
that it's about checking your sources,
but it's really we've been living and swimming
in this environment of beliefs
that we didn't pick in the first place
and social media is a kind of false belief factory.
So just, I think, introducing some humility in how we navigate information environments is a key one.
I also want to say that as a teacher, I first want to just acknowledge what a hard job that is.
There's bullying, sexualization, nude sharing, blackout challenges, teen suicides, doom-scrolling, anorexia,
influencer culture.
There's so many different issues that teachers have to face.
And I did want to point out that we do have a youth toolkit online in the HumaneTech.com website
that is a resource for teaching teenagers about the attention economy, persuasive technology,
and how this is all designed.
And I will say that one thing we've learned over and over again
is if you tell someone this is bad for you,
they'll just say that you're trying to control their choices
or who are you to say what I should do.
But if you explain to them this is how it's manipulating you,
that generates a different effect
because no one wants to feel manipulated.
And one thing that I would like to see,
which is a broader point about how we respond to this issue
of how technology is affecting children
and growing teenagers
and the mental health effects that it's having
is that just like we had a Mothers
Against Drunk Driving movement, that was a powerful political force that enacted many different
reforms and had a kind of ongoing strength as a movement, this is the year that I would like
to see a kind of Moms Against Media Addiction. I mean, the acronym is pretty good, MAMA.
I think that the boiling point in society has finally come, and there are so many parents who
are furious about the effects that they're seeing on their children, and we're looking to see
someone to take the mantle of running a kind of moms against media addiction movement,
because we need not just a temporary moment or surge of interest like Frances Haugen coming out,
we need a deep, powerful, ongoing, strong political force that can be channeled to changing
and advocating for change across the different problems.
Hi, guys. Your work seems to have strong moral and ethical underpinnings.
What are the philosophical and or religious experiences you have had or values you have developed
that have shaped who you are and the work you do?
I can imagine that this question is coming from a place of what would cause you to stop what you're doing and say, we have to respond to this problem.
There's a kind of philosophical or moral or ethical calling that says, we need to do something about this.
In fact, Aza, you and I used to talk about this phrase, cancel all my meetings.
Think about all the things you got coming up.
And what would actually be the conditions upon which you would say, you know what, I need to cancel all of that because something really important came up.
Usually it would be like a death in the family or something like that.
I remember back in 2013, Aza, you and I went on a trip to the Santa Cruz Mountains,
and we went camping, and I remember you and I being out there just in contact with nature,
and suddenly getting this insight that this ant colony of humanity was getting kind of poisoned
by this new wave of social pheromones that were getting sprayed across this ant colony of human behavior.
Billions of people jacked into the attention economy,
getting sprayed with this kind of digital social pesticide of how social media was kind of making the ant
colony go crazy. Like if you saw humanity from an ant colony perspective, you would just see the
ants suddenly around the year 2011, 2012 going in a totally different direction. It's because
people are sucked into social validation. They're getting tagged in photos. Their attention
used to go to community activities, being out in the world. Now they're increasingly stuck by
themselves looking at a phone. And I remember just being upset. This is a kind of slow rolling
harm and degradation of a lot of these precious and sacred aspects of being human and being alive.
And I remember us saying, there's this problem in the tech industry that isn't being called out because we're in it every day.
Our capacity for staring down difficult truths is not something that we have a good track record on.
And here in front of me, I actually have an essay by Arthur Koestler called On Disbelieving Atrocities, which was written in 1944.
And he talks about how he's been lecturing now for three years to the troops, and their attitude is the same.
They don't believe in concentration camps.
They don't believe in the starved children of Greece, in the shot hostages of France, in the mass graves in Poland.
They never heard of Lidice, Treblinka, or Belzec.
You can convince them for an hour, and they shake themselves, and their mental self-defense begins to work.
And in a week, the shrug of incredulity has returned like a reflex, temporarily weakened by a shock.
And, you know, when I think about this, I just think about the long history of humans who are sitting right inside of something that's going wrong.
But you can't kind of stare at it face-to-face and say, let's do something about it.
and I think once you see that
and once you see where it goes
and how dangerous it all is
it's not a matter of having an ethical code
it's about saying
I'm so humbled by what I'm seeing
and where this goes that's so dangerous
that I can't not do something about this
and that's what it was for me at least
and I'm remembering Tristan
being out with you
in the hills of Thailand
we were on a long three-day trek
doing homestays
and you asked me this very profound
question because at that point I was posting on
Instagram and I was capturing what I thought
were beautiful images and I think they were
and you asked me who are
you when you're posting
who are you
and I was very resistant
I was like I'm posting because I'm
sharing things I'm me what are you talking about
but as that question
wormed its way into my mind
I realized
that the person I was when I
was posting was my insecure
version of myself. I was feeling a little needy or unloved. Or maybe I was the boastful
version of myself. And I wanted to show off to people these cool experiences that I was having.
And the more I sat with the question, the more I realized I wasn't a big fan of who I was.
You know, we all walk around with some kind of shadow, some way that our ego profits when we
put someone else down or when we feel better than someone else. And, you know, to
recognize that, to become aware of that, is to go through a form of grief. It's painful to see
your own shadows. But the flip side of it is that you get to love yourself more. So to return to
the question of the moral or philosophical underpinnings, it's the question of who we want
to be; we want technology that has that kind of care. And I don't know, I guess that's my deepest why.
Now, I would like to know what are you doing, considering that you really know some of these people.
You talk about Zuckerberg all the time.
I mean, do you have any leverage to the decision makers?
Are you talking with them?
What kind of constructive dialogues are you having?
And to what extent is anyone listening?
Thank you very much and keep it up.
I think this is a very good question.
So one of the things that's unique actually about your background, Aza, and my background,
is that we were in San Francisco in the tech industry
in the years between 2007 and now,
and we saw and have been friends with many of the people
who built these products, which is actually, by the way,
for listeners who don't really have this sense,
it's one of the reasons that we're more hopeful
that things like this can change
because we saw how regular people,
just human beings, that were our friends, made decisions.
I remember the day that Mike Krieger,
who was the co-founder of Instagram,
showed up at a cocktail party in San Francisco in 2011,
at our friend Chris Messina's house,
who was the inventor of the hashtag for Twitter,
and Mike had just added this feature
where you could double-tap
the entire photo on Instagram, like with your thumb,
and it would heart the photo, it would like the photo,
and then the other person would get a notification.
And I saw how this little innocuous choice that he made
that he thought was just cool
because it had this nice animated heart
that just zoomed out on the screen,
and how it probably increased the total likes
that everybody got dosed with by about 10x,
just that one change.
So all this is to say that, do we talk to our friends?
Yes, we do.
And, you know, I can say that back in 2013, 2012,
another friend, Joe Edelman, a collaborator on Time Well Spent, and I,
we used to sit down for dinner with people who we knew
who were building Facebook profile and Facebook timeline,
which was the kind of personal news feed of everybody's life history
being permanentized into an interface
and saying, hey, don't you think that ranking by engagement
is going to drive addiction and these kinds of harms?
And one of the reasons that I do sometimes show up kind of frustrated
is that we had so many of those conversations with them back in the day.
I mean, I remember distinctly a conversation I had with a guy
who actually was at Facebook and in charge of News Feed, I believe, who invented the Like button.
I slept on his floor as a Stanford undergrad.
And he said, oh, you're probably right, but culture will swing back on its own,
and it'll get aware of the problem and change on its own.
And I was like, you know this is a problem.
Why aren't you going to try to change Facebook from the inside?
And have we talked to the very tops of these companies?
I did meet Zuckerberg once at President Macron's Tech for Good Summit in Paris in 2018.
We literally actually physically bumped into each other.
It was kind of an awkward moment.
That conversation only lasted a couple minutes.
And there wasn't much to talk about because if you think about it, right,
like what are you going to do?
You're trapped in a business model.
It's not like he can say, oh, you're totally right.
And I just wanted to name that that we have occasionally talked to people at the very top,
but often they're just sort of disempowered from acting on the real truth of the situation.
Well, and I'm just going to jump in and answer the question directly, which is, yes,
we do. There's a lot of stuff that we can't really talk about publicly, but we do a lot of
convenings of leaders, people like designers and product managers. We've got peace building groups
in direct relationship with leadership and project managers at places like Twitter and Facebook.
So there's a whole bunch of direct action that I think we can do because of the trust that we've been able to build
and that we generally don't blame any one person.
We just articulate truths about how the system works.
We also do talk to policymakers.
We helped start the Council for Responsible Social Media
with former House Majority Leader Dick Gephardt,
and we convene workshops in Washington, D.C.,
around national security and social media.
So there's a lot of different areas of work that we do.
We don't always talk about.
The podcast, it's only one chunk of where our time goes.
There are also groups of people who work in, say,
trust and safety or integrity teams at these companies
who just aren't talking to each other.
I want to point out the work of the Integrity Institute,
which is gathering a lot of these trust and safety people,
many of whom are joining this nonprofit that exists in San Francisco.
Then obviously the Foundations of Humane Technology course
is one place where people are gathering and building community
and not just taking a course by themselves,
but showing up on Zoom calls and meeting other people from around the world
and getting to talk to each other about how they're each relating to this material.
Hello, my name is Shree.
I was curious to know what you guys would recommend for
students and youth who want a career path in ethical technology and what you'd recommend to pursue
education-wise. Thank you. So first, it's just awesome to hear from you and other students and youth
who want a career path in ethical and humane technology. And I want to just encourage more
people to go into that field. We don't really have lots of educational opportunities for people
who want to do that. There are some universities. Stanford University has a course led by
Rob Reich and Jeremy Weinstein. And they're trying to make that a requirement for all university
students at Stanford. I don't know if their curriculum is public, but that's one option.
We obviously have the Foundations of Humane Technology course that we worked for a year and a half on,
and that is available for free to everyone. We've had more than 13,000 people register for the course
from 139 countries. And we're seeing that it actually makes a really big difference for people,
especially just to have the community of other people who are thinking about and working on
humane technology. I think that's one of the biggest things that people are really seeking when they say I want to go into this field is they want to meet other people who care about this. It reminds me of that quote from The Social Dilemma. It's like, am I the only one who cares about this? And the answer is no, you're not. I would also recommend All Tech Is Human, which has a job board. That actually has a great listing of all the jobs in humane technology for people who want to go into the field. And, you know, I'd recommend also just broadening your reading. So instead of just reading about technology, read books like Thinking in Systems by Donella Meadows, or What Money Can't Buy: The Moral Limits of Markets
by Michael Sandel.
We also used to have a reading list for people who want to read books
that give them some of the insights, I think, to go into humane technology,
and we'll see if we can find that and put it in the show notes.
Hi, I'm Kora.
I have a question about scalability in social media apps.
So right now, the last thing app creators want
is for their app to hit a saturation point.
And usually they want to serve thousands or millions with an app.
But in social matters, this intuitively doesn't always sit right
with me. And I'm a designer. I do not have a business background. Do you see potential for apps
to be sustainable, even if they're designed for a fixed number of users? Why do we always
design to grow fast and endlessly? Yeah, I would like to hear your thoughts and greetings from
Germany. I love this question. As to why apps have to scale so quickly,
there you have to look at how apps are funded and how they make money.
Apps are almost always funded through a VC business model,
and when you sign yourself up on the VC treadmill, every time that you raise
you need to get roughly a 10x return so that your valuation bumps up,
so you can hire more people, so you can scale to more people.
And now you're on this treadmill where you constantly have to scale your number of
users. Otherwise, you'll get a down round; it'll be a bad signal. And even if you decide to do
something that's different and go with a fixed number of users, well, your competitors aren't
going to do that, so they're going to crowd you out and turn you into a niche player at best or
irrelevant at worst. So that's the sort of why. And to speak to your point, if my time on
the internet has taught me anything, it's that scale itself doesn't really scale:
when you take something small and beautiful and supersize it, turn it into the McDonald's
version of it, you ruin that beautiful thing that it was, generally because you've tried
to scale it too quickly. So it's not something that any one of us can do.
So one of the questions we have to ask is, what would create an evolutionary environment
in which new, better things can actually outcompete the big guys? Or if the big guys get really big,
we put more public interest obligations on them.
The basic principle here is that the more power you have,
the bigger the responsibility you should have.
And the way that it works right now with venture capital
is that I'm incentivized to grow ruthlessly as fast as possible,
and the bigger I get, the more incentive I have to be more selfish
and be more ruthless.
So if I'm Facebook and I see a new upstart social media company
that's starting to go, I want to buy that thing.
Every time one of these new small things shows up that tries to grow slowly,
even though they're doing it better and people love it,
Facebook and Twitter or YouTube can just copy those features and build them into their already
bigger platform and kind of crush the small guy. It's sort of like the concept of noblesse
oblige, that the more wealth and power you have, there's an obligation to use that power
to the benefit of all.
Hi, Tristan and Aza. First of all, thank you so much for the show, for what you do. It's
inspiring. I love it. I work as a UX researcher, and I find that researchers
often vouch for the ethical approach. We always try to communicate the insights that we think
will do our customers good, but often encounter pushback from product, from the business,
saying we need to look at the numbers, we need to make money. So my question is around motivation.
How do you stay motivated to continue vouching for an ethical approach, ethical designs in a big
corporate... I think you were saying in a big corporate environment. Really important question.
It can be very demoralizing to work in environments where you're saying, hey, there are ethical
issues happening, and to hit the glass ceiling of a business model and not be able to do anything
about it. Here are things that we have found useful, or we've heard are useful, from other audience members.
So one, find a champion inside of your company that can advocate on your behalf.
So find somebody higher up that can give their weight to your arguments.
I know Tristan, I think that happened for you inside of Google, right?
Yeah, inside of Google, there was someone at the Google Creative Lab,
which is sort of their internal advertising creative agency.
Andy Berndt was the executive, and he actually kind of hosted me and created a space
where I could think about and work on these issues.
And I want to give a shout out to him
and then later Paul Manuel and other people
inside of Google. And I think people should
look to who are the champions in your
organization. And let's be honest, there's going to be
some organizations in which maybe there isn't a champion
because the company is just in dire
straits and it needs to maximize profit and revenue
and then you have to ask yourself what
you want to do. What are you okay with?
And I also just want to say thank you for
keeping up the fight internally and for using your
voice because it is important that people
raise these conversations. Simply sharing
this podcast around or sharing the
Foundations of Humane Technology course helps.
We've seen many tech organizations actually get a whole group or their product team to take the course together.
So I think creating sort of a shared reference point helps, and then we can train the next generation of people and put them into more and more positions of power.
And I think that's starting to happen.
And so it's a slow-moving change, but it's working.
And then to get back into specifics, what you can do is form internal advocacy groups, and you can do that externally too.
That is, find sister orgs or brother orgs
where people are in similar roles
and talk to them about what's worked.
It turns out often that feeling of hopelessness
is really a feeling of loneliness
and working to solve that
can really help with motivation.
What would you say to somebody who asserts
that under capitalism,
wherein multibillionaire tech companies
must maximize shareholder value
above human well-being,
that the drive to create a truly humanistic social media and digital landscape is a fool's errand
or to put it more succinctly, why is it that you believe that we'll be able to truly reform
the social media horizon under capitalism?
So the short answer is that there is a fundamental incompatibility with capitalism
and the runaway growth imperative to maximize shareholder value on top of a finite substrate,
whether the finite substrate is a finite planet asked to bear the infinite growth of an economy
or a finite pool of human beings who need that attention to care for their growing babies
and children or to build a democracy.
So I think we've been pretty clear about that.
That doesn't mean there's nothing that we can do because just like runaway growth and
capitalism are problems in other areas, like the environment or forests or oceans or other
kinds of common environmental resources, we can put guardrails on the monetization of human
attention. And again, we've said that governments can do that, the EU can do that. And we just saw
within the last few months that California passed the age-appropriate design code, which actually does
move in the direction of having certain limits on how we can maximize engagement and attention
from children. So there are some things that are moving in the space, and it's important to
focus on what we can do while holding in mind the deeper problems of the system that we're in.
Hi, my name is Guy, and I'm a big fan of your work.
I'm actually working on my own humane tech startup to tackle the climate crisis,
and I was inspired to do this by this podcast.
So thank you so much for that.
My question to you is, when you think of the meta crisis that you describe in your podcast
episode with Daniel Schmachtenberger, what are some of the promising humane tech solutions
that you yourselves have seen to this meta crisis,
and are there any ideas that you've had in brainstorming
that you would like to see in terms of humane tech solutions?
Thank you.
Guy, that is awesome that you're doing that.
It really touches us.
And also love that you're focusing the question on,
well, what are the humane tech solutions, if any,
to the metacrisis. To really dive into that
is going to be bigger
than an Ask Us Anything kind of thing.
But I think we'll give two examples.
Do you want to start and then I'll give one?
Sure.
So first just want to say, for those who don't know what the metacrisis is,
I would recommend going back to our podcast episode with Daniel Schmachtenberger,
which is called A Problem Well-Stated is Half-Solved.
And when we think about the Metacrisis, and especially on the climate side,
I can give an example of a humane technology solution.
And that's from some of our friends in San Francisco who started a company called Planet Labs.
And Planet Labs is a satellite company that can create sort of a 24-7 view.
They're the largest network of small satellites, as I understand it, taking a picture
of basically every square meter of Earth every 24 hours.
And they're starting to put sensors on these satellites
where they can actually pick up methane,
they can do AI-based image analysis,
and understand what the biodiversity is looking like
in different parts of the planet.
So imagine you have a 24-7 real-time picture
of biodiversity, of ocean acidification, of methane, of CO2,
and that because you have that,
you can know, for example, if other people are poaching land
that they shouldn't be poaching, or people are cutting down the Amazon and violating an agreement
that was signed among a few different parties. What this does, and the reason it's so important
for the metacrisis, is the metacrisis is driven by a multipolar trap. If I don't do that action
that's good for me, I'm just going to lose to the other guy who's doing the ruthless thing.
And what you need to deal with these kinds of multipolar traps are transparency, shared transparency,
where I can see what all the actors are doing, and you need attribution. Can I see who's doing
which actions. And you need enforcement: a punishment that can happen if someone
violates the agreement. We need a shared view so that we can trust each other
that no one else is dumping into the ocean. So Planet Labs,
by creating this new shared record, this new transparent record for all of Earth, is enabling
a new kind of solution to all physical multipolar traps. That's kind of an abstract answer,
but again, I would recommend people to go back and listen to the episode with Daniel to go deeper.
So I'll point at another piece of technology.
So listeners of the podcast will, of course, be familiar with our diagnosis that Twitter and TikTok and all of the engagement economy companies are rewarding people, paying people in likes and comments and influence for discovering the fault lines in society and inflaming them.
That is, they are paid to be division entrepreneurs, finding more and more creative
ways of breaking people apart.
And what we really want is to have people, in a sense, be paid to become synthesis or bridgewalking
entrepreneurs.
There is a paper that came out of DeepMind a few weeks ago where three people can enter their views
or opinions and the AI will find the commonality between all of those viewpoints.
Of course, it's not perfect, and as I start talking about the solution, one of the pitfalls just to name it is that we don't want to atrophy people's muscle to do the synthesis on their own.
But imagine if you were able to deploy a technology like this so that whenever you click into a Twitter flame war thread or a trending topic, what you see at the top is the synthesis view, the view that is the same across unlikely
groups. So you can see how we actually all agree more than we think. And this is sort of a pointer
at what Audrey Tang is already doing as part of the Digital Taiwan Project. And I love this example
because so much of what we talk about in this podcast is how social media rewards division
and amplifies the most outrageous, narrow, black and white framed bad faith take on everyone's
actions. And so it sows massive distrust. And what this is about is saying, people are going to
follow the incentives that they're given. And what if the more political and controversial the
topic, the more Twitter actually incentivized people to find the synthesis perspective? Right now,
a lot of people just sort of yell about various problems, but no one yells and says, and this is
what we should do about it, or here's a proposal. And yes, people would debate whether your solution
was good, but again, the people who are just angry at your solution wouldn't get nearly as
much reach as the people who posted another solution in response to your solution. And people will
say, well, hold on, that's social engineering. Who are you to say that we should do it that way?
But again, we can notice, when we take our hand off the steering wheel,
we are saying what wins at the top,
rewarding the most extreme outrage-driven voices,
and that that produces a society that's dysfunctional.
Hi, Tristan and Aza.
I think there are probably a lot of people in the audience
who are interested in building new solutions
to problems with the attention economy.
My question to you is, if you were building a new solution,
where would you look to find those other people
who want to build solutions too?
What are the places where those other people hang out?
Well, we'll get to where you can hang out in a second,
but I want to start with a reframe
that took me a while to really internalize
after a career of making technology.
As I started to approach it:
okay, we live in an attention economy,
we need a solution to the attention economy.
Well, obviously that's going to be another app,
that's going to be a company that I build
that makes a thing that gets in people's hands.
and the realization is
that the form that we have of an app
might not be the appropriate form
to tackle the attention economy
because if you make the thing
that doesn't grab somebody's attention,
of course, somebody else will,
and so you'll be relegated to being a bit player,
and it's so tempting to just reach for the hammer
that you already have,
which is to make an app to try to solve the problem.
And so where I would like to send your attention
is the kinds of communities that think up a scale, at the more ecosystem approach.
So that's groups like New_Public, Denizen, the Psychology of Technology Institute,
RadicalxChange, the School for Social Design. It's: how do you not just use a hammer
to make the kind of building you've always made, but instead ask the different question of,
how do we grow a good garden, like an ecosystem of solutions?
I mean, I think what you're pointing to is, does the attention economy exist inside of an app,
or does it exist inside of an ecosystem of everything that's competing for our attention,
including advertising and billboards and regular media and radio,
and what people need to get is that the attention economy has been radically captured
by a handful of these really big actors that are mostly driving the incentives
of where everyone else gets their attention.
So now if I'm a conference or a politician, yes, I have to get attention,
but I'm competing for that attention from within one of these
existing social media apps. You know, can the next politician win an election without being
on TikTok? So I think the thing that you're really pointing to, Aza, is how can we change the
incentives overall? And I want to just note, that sounds hard for people because that's really
much more disempowering. If you're saying that the way I can fix a problem is I have to advocate
for this sort of vague legislation or regulation to change the overall incentives of what gets
attention. And so I think one of the things that has been very hard about this work over the last
many years is how disempowering it can feel to be on the outside. But I want to say that,
you know, in the history of this 10 years we've been doing this work, there's also so many groups
around the world. We used to have these time-well-spent meetups in Berlin, in Tel Aviv, in Australia.
And there are other places that people can go also if they want to influence policy, like
Tech Congress, which I would highly recommend. It's trying to get more technologists into
the offices of policymakers in Congress so that they can make better and wiser policy.
And I'd love to highlight the work of Travis Moore, who's been running Tech Congress for the last
many years.
Hi, my name is Riley, and I'm a university student in the Netherlands.
Despite being able to identify the negative effects that social media has on their mental health,
many of my fellow students seem to genuinely believe that life without social media is impossible,
which really breaks my heart. What are some effective ways to help people in your community
consider quitting social media? First of all, we got so many questions from teenagers, actually,
for this Ask Us Anything episode, and many of them had this question. I just want to welcome you and thank you for
wanting to figure out how do we get more students to quit social media and say no to
FOMO. Say no to FOMO. It's not funny. I don't know. I found the rhyming funny.
I'm seeing more friends leave these big social platforms and move to these smaller, even
ephemeral groups, where you have a purpose for that Signal group. It exists for a week until
the big event that happens, and then the group dies, then you create a new group. And I think that
there are alternatives. And one of the most pernicious things, as you have always said about
social media, is that what's inhumane about them is that we're forced to use these misaligned
technologies for things that we fundamentally need. If a teacher assigns a homework assignment and you
have to be on the Facebook group to get the homework assignment and to comment on what you thought
of that piece, then you have to use Facebook to comment on that homework assignment. So there is
this thing about social media colonizing the meaning of social participation in a classroom or
colonizing the meaning of where people are talking to each other about homework or something
like that. If they all do that on an Instagram channel, then that's going to be hard to not use
Instagram. But again, we have a choice, we have alternatives, and we can use these small
chat groups to stay in contact, and that's going to be a lot healthier for people.
Yeah, what's so inhumane is that we are forced to use systems which are unsafe for the things
that we need. And it's real that social media companies, Instagram, TikTok, the rest,
control the means of social participation.
So it's not just the feeling of FOMO,
you're actually missing out if you aren't on the platforms.
It speaks to a need for an alternative.
If there's a fear of missing out,
can the students form a set of groups
that just move everything over to Signal?
And that can really help.
Do you believe that humane technology is possible
without government regulation?
The short answer is no.
And that's again because so long as venture capital, meaning 10x-return-funded technology
startups, are competing for this incentive of attention, they're going to be forced into
this race to the bottom of the brainstem that's ruthlessly trying to manipulate and
exploit our psychological and social psychological vulnerabilities.
And that race, that has to get changed.
And again, you can't ask one app not to do that.
You have to actually do that from some top force.
That top force can be either government regulation, things that the EU might be doing with the Digital Services Act, or it can be done by someone like an Apple or an Android, because they're sitting above the attention economy and can set different rules for what kinds of psychosocial exploits are you allowed to do on human beings.
And I think part of what we're trying to do in this podcast is educate people that we're not just living in this technology environment.
We're living in a manipulation of these exploits of our psychology.
And if we have a culture that actually has this kind of immune system, where people recognize that they're being manipulated,
they can push on actors like Apple or the government to create that regulation.
So that's really the only way out of the attention economy trap that we have been talking about so long in this podcast.
And no, this is not something particularly new.
We know that any market that can grow without guardrails will break the thing that's within, right?
If you just have capitalism growing without any guardrails, where is the incentive not to fish
faster than all of your competitors so you can take the profit? Where is the incentive not to cut
down trees in the Amazon faster than your competitors so you can get the profit? We know that
you need guardrails to work hand in hand with markets. We have a new market, and that's the
attention market or the engagement market. So of course it is going to need guardrails. We just
haven't caught up with that realization yet.
Hi, everyone. Thanks for all the great insights that you've already been providing. I'm really curious to know what you think the next addictive technology is going to be. Right now, everyone's battling screen time. But do you think that the next thing will be VR that you had an episode about recently? Is it going to be something in the internet of things? Will it be wearables? Where do you guys see this heading in terms of these addictive behavior loops and how this is impacting human beings?
What's the next addictive thing going to be?
Yeah, this is a great question as we head into 2023.
So if I look out at social media, the most addictive product out there right now is TikTok.
I think the thing to recognize about what makes technology addictive, that is compulsive,
that we can't not use it, is how much it is manipulating our deepest instincts,
like the core instincts that make us human.
Our identity, how we see ourselves, our sense of social validation is fundamental.
And the degree to which we're saying technology is addictive is really the degree
to which we're saying, technology is manipulating our deepest human instincts.
And I think where technology is going as we head into 2023, and we're going to be talking
about this on future episodes, are these new large language models.
People have heard of ChatGPT.
These are interfaces where you're talking to an AI.
And if it starts to answer personal questions or give you dating advice or make you feel like
it knows you better than you know yourself and it cares about you, one of the things
that Aza and I are concerned about is actually the prospect of
synthetic relationships, that AI would sort of manipulate our sense of intimacy and our closeness,
that you have this friend who's there 24-7, they're always going to be there for you,
and when you have this level of closeness with a new agent, with a new AI, that is perfectly
responding to your every need, you know, there's a risk that that becomes the most,
not so much addictive in the sense that TikTok is addictive, but the new deepest-held relationship
in our lives. And when people are increasingly lonely and isolated, because
of the last 10 years of social media and the erosion of community and belonging, people are
going to be more vulnerable to these new synthetic relationships. And I think we have to get ahead
of this trend and make sure that we're sort of doing a mass revitalization of the social fabric
and having technology that is humane, that is routing us back to in-person community experiences
so that we aren't so vulnerable and isolated when it comes to these new intimate relationships
that are going to come and seduce us.
Thank you to everyone who submitted questions and comments for us.
We really love doing this.
It's something that we're definitely going to be doing again.
If you joined us for the first time this year,
we're glad that you found us.
And if you're a long-time listener,
please just know how grateful we are,
that you care about this,
that you're following this topic,
and that you want to do something about it.
And we will think about more ways
we can bring this community together in the next few years.
In our call-out for this episode,
we also asked you to record messages for each other.
So that's what we're going to go out on.
We'll be putting all the references we made into the show notes for this episode.
Thank you so much for giving us your undivided attention.
Put your phone down when you are riding on the train or waiting in line or waiting for a friend at a coffee shop.
I just want to give like a big shout out to everyone who is actually listening to this podcast because that means that you care for a more humane side of technology.
I'd like every parent to have a look at the screen time settings on their device and on their child's device and start small.
Make sure your technology is helping you live a better life instead of stealing away the parts of you that make you human and satisfied at the end of the day.
In all honesty, we should just relax a bit more.
Stop listening to the noise and focus on our own noise.
From reducing your social media usage to opening up to your friends.
If we all work together, we can do some pretty good stuff, make some really good changes.
So it's time to regroup and work as a pack to get some positive change.
When I get home and my family's home, I take my phone and I stick it in a drawer, and that's where it stays.
It will be fine, and we will find the solution, just have hope.
I think the future of technology has to be humane, and those of us who can build things have a moral responsibility to do so.
Your undivided attention is produced by the Center for Humane Technology,
a non-profit organization working to catalyze a humane future.
Our senior producer is Julia Scott.
Our associate producer is Kirsten McMurray,
mixing on this episode by Jeff Sudakin.
Original music and sound design by Ryan and Hayes Holiday,
and a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at HumaneTech.com.
A very special thanks to our generous lead supporters, including the Omidyar Network, Craig Newmark Philanthropies, and the
Evolve Foundation, among many others. And if you made it all the way here, let me just give
one more thank you to you for giving us your undivided attention.