Your Undivided Attention - Should've Stayed in Vegas — with Natasha Dow Schüll
Episode Date: June 19, 2019
In part two of our interview with cultural anthropologist Natasha Dow Schüll, author of Addiction by Design, we learn what gamblers are really after a lot of the time — it’s not money. And it’s the same thing we’re looking for when we mindlessly open up Facebook or Twitter. How can we design products so that we’re not taking advantage of these universal urges and vulnerabilities but using them to help us? Tristan, Aza and Natasha explore ways we could shift our thinking about making and using technology.
Transcript
Last week, on Your Undivided Attention.
McDonald's did not figure out how to make the perfect hamburger
that would sort of exploit the weaknesses of the human organism.
Someone stood and watched, like, we're going to have two hamburgers,
so it's a perfect A/B test, right?
Hamburger style at McDonald's.
Where are people lining up the most?
Oh, they like this burger better.
And then let's iterate on that burger and iterate on that burger.
That's Natasha Dow Schüll, an expert on the gambling industry
and author of the book Addiction by Design,
which reveals how slot machines keep gamblers in a suspended state of play that's devastating to their finances and their well-being.
Last week, she described how the designers of these machines have hooked gamblers deeply into an addictive loop of small wins and small losses,
with the simple goal of extending their time on device.
Sound familiar?
That's an industry term the casinos pioneered long before Facebook.
And what struck Natasha about these designers was not their brilliant insights into human nature.
Quite the opposite: they could hardly explain the human vulnerabilities they were exploiting.
If you go into the casino industry, or any of these, maybe, you don't find...
sometimes you find it, but you don't find as much as you'd expect of the kind of causal stories
and predatory behavior. What happens, though, I think, is actually more sinister, or more
difficult. It's the banality of evil. Right. I mean, it's just that the formula that gets hit upon,
you don't have to understand it.
It rises to the surface, and that's the product you go with.
And you're not even understanding what you're doing.
I mean, I think that's part of your mission, right,
is to get the people who are doing it to understand: you know,
you may not be engineering this.
But if we reverse-engineer it for you a little bit,
maybe you'll want to not go that way.
Does that also sound familiar?
Today on the show, we'll explore how technology companies can choose a more aware path.
And before you listen, please make sure you've already heard part one of our interview with Natasha.
I'm Tristan Harris.
And I'm Aza Raskin.
And this is Your Undivided Attention.
First, we have to say clearly what the harms and costs are.
Because I think, you know, when people look at this, they say, what's the big deal?
I mean, there's 100 excuses people search for, right?
And, you know, in the tech industry we say, oh, like, these are the things people want.
We're just giving people what they want.
Or it's not that bad.
You know, there's lots of places people spend time.
And this is just us swapping out TV.
Or, you know, what's the big deal?
Like, they're just losing a little bit of money or, you know, it's just the people who don't
have anything else to do with their life.
I mean, it's this really divorced way of seeing reality.
It has nothing to do with compassion or care.
Right.
And the change that we're trying to see is that once you understand, like you said, once
these mechanics are visible...
So once we just discovered, almost like the atomic bomb insight, you know, we just discovered some fact about nature.
Well, now technology and these slot machine systems you're describing are discovering
internal facts about human nature. Instead of splitting the atom, we're splitting the human nervous
system. And as we uncover more and more of that code, that code, we don't have agency over.
We're trapped inside of the functioning and the biases of our nervous system and the ways in which
it has evolved. What is the way, and this is where the ethical conversation comes in... you can't
escape this. It's also being tapped into all the time, to greater and lesser degrees, in the built
environment as you walk around. We're in New York City right now.
You know, one of the core observations of behavioral economics and the nudge philosophy, right?
We're being nudged constantly.
So let's try to think about shifting choice architecture.
Some people find this paternalistic.
I always push back on that.
I say this is happening all around us.
It's not just that these humanists want to make things better in our choice architecture.
It's that there's already really bad architecture out there.
We may as well become aware of it.
Exactly.
And this is an uncomfortable moral transition we need to make, because up until now we have had this view, as Yuval Harari always says, that the center of our moral universe is human choice and the responsibility of the individual, at least in the post-Enlightenment Western era.
And what that means is: the customer is always right, trust your feelings, trust your heart, the voter knows best.
But in a world where we're reverse-engineering the code to perfectly manipulate these things, and that code is getting reverse-engineered,
whether accidentally, as you said, through A/B testing, split testing, 100 million variations
that'll work on the voodoo-doll-like model of you sitting inside of a YouTube server to keep you
clicking for longer, or the simulations of which sort of slot machine mechanics, which algorithmic
math, should I use to keep you here longer. As we reverse-engineer that code, what is the way
that we get this to work? And per your point, we can't say, like, let's take our hand off the
steering wheel and let voters know best. I mean, that's an extreme statement. But what I mean is...
Well, that's a sort of free market... that's the free market view. But if we watch the
free market play out right now, if we take our hand off the steering wheel of, let's say, technology...
Well, let's say it's already kind of off. It is off right now. And what we're trying, I mean,
the whole premise of our, you know, our work right now in the movement, is we need a new moral framework
that lets us ask what would be the compassionate, good-for-us way of steering and shaping these systems to
enhance agency, to enhance reflection, instead of the curvature, the 90-degree turns. But then you
get into this other thing: do you really want to activate conscious choice-making at every
microscopic moment? That's a taxing way to live. So we have to actually be conserving attention.
So then we ask: where do we want that attention and that conscious choice-making, those 90-degree
angles in our lives, to be? Do we want 90-degree angles for which key you want
to type? Or do we want 90-degree angles for what we are going to do about climate change or solving
inequality? What is the way we want to be devoting our very limited choice-making capacity
in a time of urgent challenges, when, if we just let the past dictate the future, we're
screwed? And I think, you know, to your point about how we've always had this sort of
manipulative, nudging-like environment, I think the analogy here is to geoengineering. You know,
people say, oh my God, wait, we shouldn't geoengineer. And I agree there are huge risks and unintended
consequences of geoengineering. But it's not like we're not geoengineering right now. We're
geoengineering ourselves towards catastrophe with climate change. We already have godlike technology,
or we are already gods, so we might as well get good at it. So if we're geoengineering towards
catastrophe, we might as well get conscious about our geoengineering and not do the self-destructive
thing, and steer ourselves away from climate change. When it comes to technology, if we are
already reverse engineering the human psyche and getting certain outcomes and doing that in a way
that leads to disempowerment, to mass social isolation, to teen mental health issues, to outrage,
to everyone wanting to become a celebrity, to election engineering. These are all sub-phenomena of
an increasing ability to reverse engineer the human psyche, and we're using it in a way that is
leading towards catastrophe. We are now forced to become morally aware of where we want this to go.
And that's an uncomfortable reality to be in because suddenly now we have to decide.
Right. But I would say that one of the challenges here, in what you're identifying as this
kind of shift in ethical framework, is this very, very entrenched framework of how we conceive
of responsibility, right? And I sort of always carry around in my mind what I call
the responsibility spectrum. And each point on that spectrum would suggest a different way
of regulating this, right? So if you're all the way at one end: the human had free choice,
people make their own decisions, you don't do anything, right? So that's the hands off
the steering wheel. But then the next level down is consumer protection, consumer education.
And I think that goes some way toward this, right? Without that, we wouldn't have the warnings
on the cigarette labels. But the idea there buys into the idea of individual responsibility,
because it assumes, okay, well, yes, we continue to be fundamentally
choice-making, individually responsible agents.
We're Homo economicus, right?
But we'll concede that you need full information
in order to occupy your full agency.
So let's put the warning on the cigarette label.
Let's put the odds on the slot machines.
Let's suggest that we can fix gambling addiction
with statistics classes, so that you understand, like, statistics.
Plenty of the gamblers I talked to were statisticians and accountants.
I mean, this is back to like they're not the dupes, right?
Right.
So this is actually a really critical point that I want to stop and name here: often
there's this view that intelligence is inversely correlated with your vulnerability to these things.
But speaking as a magician: if someone has a PhD, it's actually usually
easier to manipulate them, because they are more confident and therefore less likely to notice
the things that are being done to them.
PhDs are more likely to self-justify or post-rationalize their decisions
with more complex reasoning.
There's a great study on the ethical behavior of ethics professors,
and how they actually do more, like, unethical things,
but they're better at reasoning out a creative rationale for, you know, why what they're doing is okay.
So we're all human.
We're all human, exactly.
And I think that's what this is really about.
And also, in this particular example, and I think for some of these technologies as well,
the assumption in "we're just giving people what they want," or in
"some of them are dupes," is that what they want is to win.
And sometimes, I mean, as a cultural anthropologist, the idea is that
you really hang out with people and you hang out with the things they're doing, in this
case the technologies. And what I found is that if you talk to them long enough, they are
able to articulate that they're wanting something very different than what you go in thinking. And
in the case of gambling addicts, they're not trying to win. It's not like they
are dumb at math and don't get gambling and how it works, right? Any more than the
intelligent people who stay up and binge on Netflix when they have a meeting the next morning.
They can't stop somehow, and it's overriding the rationality. And in the case of gambling,
it's because what they want is that affect of the zone. Right. It's almost like what they want
is that feeling. They want the state. They want the state. They want the sort of affect modulation,
the mood. And so I see all this stuff: these are all little affect modulators.
They modulate our mood and our sort of feeling states.
And whether it's boredom, anxiety, what have you, you constantly have at your fingertips
these little portals for modulating your affect.
That's the real aim.
It's not about communicating or winning or a game.
Right.
I find this fascinating: the difference between our conscious statements about this.
Like when people get sucked into scrolling on social media, the infinite scroll,
which, by the way, is itself a slot machine, because there your finger is going to swipe
and you're not sure what it's going to be next.
Is it going to be...
So people self-report that, you know,
oh, why am I scrolling on Facebook?
It's like, oh, because I'm trying to connect with friends.
That's what social media, of course, is for.
And we have this really simple language that we use
to self-narrate our behavior.
Like, I'm connecting with friends.
Really, that's my motivation?
Or is my finger enjoying the feeling of just doing it again?
Aza here.
Remember last episode, where we paused Natasha's interview
to brainstorm? Well, that last point Natasha just made, about whether our time on social media really
is helping us build our connections with friends: we want to stop there and double-click and explore
that more. How could we make it as easy to arrive at a dinner table with your friends as it is to
scroll mindlessly on Facebook? Like right now, it's never been easier to just get mindlessly turned
into a zombie. So I mean, imagine right now, you know, very concrete example, if Facebook knows that
you're lonely, you're scrolling around, and after it recognizes this, the next swipe up,
it just shows you three or four of your friends who are nearby that are available right now.
And it shows you that they're also lonely and less than a mile away. It knows
that they're lonely because it also knows that they're scrolling mindlessly.
And you could opt into some kind of thing that says, hey, for these six close friends,
if we're ever lonely at the same time, please let us know because we'd love to just send each other
a phone call.
And it could do that.
I love this core concept of we can detect when users are getting into that zombie
flow state.
Right.
And once we can detect it, then we can ask...
The zombie detector.
The zombie detector.
Or this trance state.
And once we can start to detect when people go into the trance state, we now have an opportunity
for a choice of what to do about it.
And I think that's cool, because, yeah, we can connect you to other people.
We can start slowing everything down, so it gives your brain the chance to catch up
to your impulse.
You can have the app stop working.
There are so many things you can do once you call that out.
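To make the idea concrete: here is a minimal sketch, purely illustrative and not anything the hosts describe shipping, of what a "zombie detector" heuristic could look like. All names and thresholds are invented; the idea is just to flag a session when swipes become rapid and sustained, then hand off to an intervention.

```python
import time
from collections import deque

# Hypothetical heuristic: flag "zombie scrolling" when swipes are rapid
# and sustained. Window size and thresholds are illustrative guesses.
class ZombieDetector:
    def __init__(self, window=100, max_gap=1.5, min_duration=60):
        self.swipes = deque(maxlen=window)   # timestamps of recent swipes
        self.max_gap = max_gap               # max seconds between swipes
        self.min_duration = min_duration     # seconds of sustained scrolling

    def record_swipe(self):
        self.swipes.append(time.time())

    def in_zombie_state(self):
        if len(self.swipes) < self.swipes.maxlen:
            return False
        stamps = list(self.swipes)
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        rapid = all(g < self.max_gap for g in gaps)
        sustained = stamps[-1] - stamps[0] >= self.min_duration
        return rapid and sustained

def intervene(detector):
    # The options raised in the conversation: slow the feed down,
    # surface nearby friends, or have the app pause entirely.
    if detector.in_zombie_state():
        print("Pausing the feed. Want to call a friend instead?")
```

The detection is the easy half; the design question the hosts are raising is what the product chooses to do once it knows.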
And I think for every company and every designer... you know, we always say, like, oh, design:
we should delight the user and bring joy to the user. Which is another sort of self-dealing way of saying: if we can give them a little dopamine, we get them to stick around longer and have better brand affinity with us.
I think we can go the next step and say: we should know when we're causing harm, or causing people to zone out, when we're taking away their ability to live the life and make the choices that they want, because we've taken away
the right angles. And if we can detect that, which we clearly can, then we can start asking
the more interesting question about our products: how do we give that agency back? Right. And what kind
of agency is helpful? I think that's the core question you're asking. And now let's get back
to our interview. As you said, people always assume that people
are dumb, that they're dupes... why don't they know better? And so this whole idea
of how we should regulate it is making all sorts of assumptions about who we are in the
world, right? And what we want at each step. And it's like, no. An extreme view of this in economics would be Gary Becker, who actually proposed the rational addiction model, right? That smokers are consciously, rationally deciding. They're making a choice, right? This is the sort of extreme Homo economicus, who knows his or her own preferences and then reveals them through marketplace choices. And the proof of that in his
paper is that as taxes go up, when you change the price,
people actually do sort of change their addictive behavior. That's one of his examples. So what's the
counter to this argument that we are rationally addicted? In a way, you could say that this whole
book could be read as an extended case study against the model of Homo economicus. I mean,
I think that to really shift the ethical framework, we have to shift the model of the human
being that's being regulated to. You know, consumer protection assumes
a certain kind of consumer, one who wants to be informed to make rational decisions in the market.
Addictive things, these little affect modulators, throw a wrench, totally throw a wrench,
into the whole theory of economics. It goes to a different level of being human,
which is not a weaker level. I don't want to call it a weakness. It's a different model.
Right. And just to pause here and recognize: that is essentially the mission of
behavioral economics since the 70s, with Kahneman and
Tversky and many beyond, leading all the way up through Nudge and some of these different
ideas. But I don't think that has actually succeeded in displacing the model of
Homo economicus. What's happened, and I've even seen you participate in this, Tristan,
is that the brain is now, in a very loosey-goosey way, split into the frontal cortex and
the reptile brain. And that move is coming from game theory.
Right? That was the contribution that economists made to behavioral economics from game theory. And what they were trying to do was sort of preserve the economic vision. So what they did, basically, is port Homo economicus into the brain, into the neocortex: you are no longer Homo economicus, but your frontal cortex is. I call it the Homo economicus homunculus.
Right. And then there's a little part of you that's a choice maker, and then there's the reptile brain, which is evil or wrong or something like that, and needs
governing. And this is how nudge works, right? The premise there is:
yes, consumers are irrational, we're going to
accept that, but we are going to govern to enhance the agency, the choice-making,
of that sort of frontal cortex. So you're still legislating to this pure,
inner, sort of liberal subject. It's not consumer protection, it's
prefrontal cortex protection, or something like that. Yeah. So I hear you making
this point, that it goes some of the way, you know.
If I think in a moderate way about it, I'm on board with a whole lot of these things,
and health insurance should be, you know, opt-out instead of opt-in. It's all good, but I can't help,
also as a critic, noting that it doesn't go far enough. So if we go further
down the spectrum, right, and think about how we could actually change the technology?
Because so far, you know, in the gambling industry... I have a whole second part of my book where I say,
look, look at all the ways we've tried to regulate the slot machine. And some of those ways
involve adding extra little screens and modules onto the slot machine, or above the slot
machine, that are even sometimes called things like the responsibility aid or the pre-commitment calendar,
where it's all on you to open that, go in there, set your calendar, lock yourself out.
Tie your hands behind your back, put the seatbelt on.
But then it sits there alongside a completely contradictory algorithm and ergonomics, a machine
that is sort of trying to get you to spend as much as it can.
And so it puts the person, again, the poor exhausted person, right, saddled with resisting temptation
and managing themselves.
What if we just moved that regulation down to the level of the algorithm?
And what is the point of these things in the first place?
I mean, just to name and mirror what you're talking about: this is called the responsible gambling
device, or what is it called? Responsible gaming device. It probably has a million names now. And the
latest is just this pre-commitment notion, which really is maybe the next step from consumer
protection, because it allows that, like Ulysses, you self-bind before passing the sirens.
Right. It's still protecting the... He's like, I'm feeling rational now, but I know I won't be in the
future. So let me bind myself to the mast. And just notice that this moral framework, this
philosophy of, there's still a choice maker in you, and we still have all these people manipulating
you, but now we give you this tool to sort of try to prevent us from doing what we know
we're doing to you anyway. This is not very far from the social media screen-time device
controls that have now been introduced. Right. Now you can manage how much time you're
spending, and don't you want to say how many notifications you want, putting all the burden
of responsibility on you. So now, to defend a little bit this race-to-the-top notion that we go
for: we have to flip around the incentives. Forget the competition part. Just,
so long as there is a race to get something out of you, where you are an object
to extract something from...
My point is, even if I give you these tools, like I said, the power is asymmetrically
on my side, and it's like bringing a knife to a space laser fight.
Like, I'm going to win because I still have a thousand engineers in a supercomputer
and I know your nervous system and the data and the history.
I've got two billion other people that I'm processing in a supercomputer so I can make
predictions about you based on, even if I've never seen you before, based on like the first
two clicks that you've made, I know exactly what your psychology is.
So at this level of asymmetry, we need a different
way for this to be modeled. The only way isn't just to limit the power; we have to flip it
around and say, how can this be in service of people? This has to be switched around in a deeper,
more fundamental sense, as opposed to: we're still pumping out coal, but we put some
scrubbers on the smokestacks to try and clean it up a little bit. What should the technology designers
know, now that this is all out there and we can see clearly that YouTube is a machine that's playing
you like a slot machine, to see how many views did I get, and Twitter's a slot machine, to say how many
followers do I have now, and am I getting more retweets now than I did ten seconds ago?
So often this conversation can get muddy, because people just say "technology" writ
large, like it's this big, muddy, monolithic thing. But I'm more about specificity. I tried to do
that in my book, and since my book I've tried to do it in relation to some other technologies.
I think you can really specify certain things that are particularly, let's just use the word, bad:
the things that kind of result in sucky behavior that you don't like about yourself, right?
And so I have distilled... and I was forced to do that, I should say. My book came out,
I'm an anthropologist, I'm all about the specificity of my case study, but I started getting calls from journalists around 2012.
The smartphone had been out since 2007, the iPhone, and people were beginning to see problems with it and trying to think that through.
I remember you reached out to me, and it took some convincing, but then I tried to sit down and say: can we extrapolate what is in common? Can we distill the features? And I think we can, and we can identify specifically what they are. I call it the ludic loop, and it cuts across all of it. And so these are questions that designers could really ask themselves as they're designing: am I creating a ludic loop? And the ludic loop, and this is an evolving
idea, right? But at the moment, I think about it as having four main components that
spur these continuous cycles of action, which are really cycles of affect modulation, right?
So one is solitude. Even if we call it social gaming, Candy Crush is really
just you and the screen, right?
Right.
So solitude: you're alone with the machine. The next one would be fast feedback.
With fast feedback, you're getting immediate reinforcement in that insulated, autonomous zone,
right? Immediately. Immediately. These stimulus-response loops are rapid. And that contributes to the
hypnotic rhythm. So ask yourself: are there pauses? Is there breathing room? Right. Is there a
stopping cue? Right. Are there cues for stopping, or just invitations to think about stopping,
right? The next one would be random rewards. That's come up a few times. This is well understood,
since the 60s, with pigeon research: things where you don't know what you're
getting, and you don't know when, will keep you drawn in. And then there's continuity, and
this is an important thing, which is the non-resolution of many of these games. Does your
game have an arc? Is it a narrative kind of game, where you build a character, where
there's actually change? Or is it just repetition, repetition, same, same, same, with no actual end
to the game? Right, like a TV show... Lost, right? But it did ultimately end.
Right, without the resolution. But your point is: is it an open-ended mechanic that is seeking to create the curvature that just continues to curve, always interesting and more fascinating and unpredictable and fast, in solitude, in random ways, but doesn't actually have an arc and an end?
And what I think this ludic loop serves is a certain contemporary capitalist... you know, there are many capitalist models out there; it's a certain, very fiercely entrenched model for profit.
You know, I spoke before of the false wins, and that's been called Costco gaming, where you profit from volume, not price.
And I think we see this playing out in the ludic loop, because for the most part, these little loops are tiny.
I call it nanomonetization.
And so the profit logic here is the click economy:
you just need to get as many, many, many, many, many clicks as possible.
So one thing we could just start doing,
and I'm not the person to do it, right,
but just to put out there,
and I know you've encouraged this direction as well,
is to think about what some different business models would be
that are more ecological in their view
of sort of cause and effect,
and health and care, et cetera.
One way to build a different business model
is to build a very different type of product.
And how you build that product depends
on what kind of approach you take
to help your users manage their attention.
There's a phrase in Danish, or Japanese:
don't tie the cat to the bacon.
Oh yeah, don't tie the cat to the bacon.
Yep.
Which is to say don't like tie the thing
that you're seduced by to the thing
in front of you.
Yeah, exactly.
It's just like you're setting yourself up to fail if you tie the cat to the bacon.
And this is the example here, right?
We know that streaks are powerful, so let's include a calendar where you can mark off the
days that you don't smoke, or you could just change the product so it's not addictive in the
first place.
Right.
And I think this speaks to two styles of intervention.
There's one style which is giving you better defensive mechanisms.
It's like you're holding up bigger pads, you know, against the persuasive machines.
But that's like not the actual way that we want this to work.
We don't want, like, an increasingly, increasingly persuasive world where, like, the trendline is going up and up and up and up.
But we give you, like, these small little tools, like a little bit more padding between you and that persuasive world.
We want to change the direction of persuasion.
So it's cooperative and uplifting us in the lives that we want to live versus being oppositional and giving you some better tools that you might be able to implement.
Like, there are two kinds of changes.
And we have to make sure we're differentiating between them.
You know, the image that comes into my mind is The Incredibles,
when there's that machine that learns from all of the Incredibles' behavior
and quickly learns all of their weak points and starts attacking.
Like, that is the engagement economy.
That's the whole thing.
Applied to our minds.
That's the slot machine applied to our minds.
And there are two types of solutions.
There's one kind of solution that's like, give Mr. Incredible bigger padding
and armor to defeat this thing. Or, change that machine so it's helping us build a better future.
Right, exactly.
You identify these four components.
Solitude, fast feedback, random rewards, and continuity.
Continuity with no resolution.
Right.
Okay.
That's toxic.
I think that that has become a toxic loop that is facilitated by contemporary technology.
And it's got its own sort of internal momentum, and we need to stop and recognize it and regulate it.
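As a thought experiment, not anything from Schüll's book, her four components could even be phrased as a design-review checklist. A toy sketch, with invented names:

```python
from dataclasses import dataclass

# Hypothetical self-audit against Schüll's four ludic-loop components.
# The four flags mirror the questions she suggests designers ask.
@dataclass
class LudicLoopAudit:
    solitary_use: bool    # just you and the screen?
    fast_feedback: bool   # rapid stimulus-response, no pauses or stopping cues?
    random_rewards: bool  # variable, unpredictable reinforcement?
    no_resolution: bool   # open-ended, no arc, no actual end?

    def is_ludic_loop(self) -> bool:
        # It is the combination of all four that she calls toxic.
        return all((self.solitary_use, self.fast_feedback,
                    self.random_rewards, self.no_resolution))

# An infinite social feed checks every box; a narrative game with an
# ending does not.
assert LudicLoopAudit(True, True, True, True).is_ludic_loop()
assert not LudicLoopAudit(True, True, True, False).is_ludic_loop()
```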
I am not so hopeful that change will come from within, because essentially, companies at the end of the day are still about increasing their bottom line and revenue.
And so that's one area we'll get into, probably outside this podcast, unfortunately,
though I'd still love to talk about it with you: the policymaking that can protect against these dynamics, and protect against the business models that are adversarial or treat human beings as resources to extract. If time on site is directly coupled to my stock price, why in the world
would I change? You cannot count on companies to change on their own, except to offer you the
responsible gambling management device systems. Right. So I'm a cynic there. Absolutely.
I am as well. And this is not about just one lever: we need the full force of collaborative mechanisms,
from shareholder activism, to policymaking, to people on the inside advocating once they understand
these things, to bring them up in conversation. The media, the public, parents, children.
So this is a full-court press of systems change. We need to get
people aware that when they, you know, sort of log into things and they're
asked to identify how many pictures have bicycles, they're actually doing work. Right.
Work being extracted from them, right? Absolutely. So, a couple of things, just to translate
these four features you've identified into some concrete actions that you could imagine some
companies taking. So, solitude. You just mentioned that it's really hard to
be in a ludic loop if you're sitting there with other friends or other relationships that are
active, requiring your attention. Think of live poker. Yeah. People can become addicted to that,
fine, but it's different from what we're seeing, right? Right, where you totally
control the environment. Okay, so with solitude right now: Apple, Facebook,
Google, YouTube, et cetera, you know, those devices and the menus being offered through
choice-making screens that we hold in our pockets: are they strengthening or deepening
solitude, or are they actually helping us be with other people? And I think this is one of the
core changes that Apple, especially, is in the best position to make. You know, they have
this app right now called Find My Friends. It's kind of hard to opt in; you have to add all these friends, and then you can see a map of where they
are. But then people are suspicious about what is being extracted from that data, of you and your
friendship network, and how it is being monetized and modeled. Yeah, although Apple in this case is
not actually doing that, because their business model is different. People are still suspicious.
People will be suspicious. I think Apple needs to evolve from being the privacy company to the
trust company, because their business model, not being about attention and data, lets them actually move
in this direction. But just to name this example of what the companies could do: any company,
Facebook, YouTube, you know, Apple, could actually say, okay, if solitude is the issue, how would we
make it as easy to access, you know, meaningful time and relationships with our friends
as it is to access knowledge from Wikipedia? Imagine if, instead of a Find My Friends,
there were a Time With Friends kind of thing. And, you know, right now you think, oh, hold on a second,
don't they already offer those to us?
You can just open up a text message.
You can type in the name of the person.
It's never been easier to talk to someone.
And yet when we're feeling isolated, that doesn't feel so accessible, does it?
Because you're given this menu that says which key do you want to type?
Do you want to type the Q key, the W key, the E key?
But that's not a very empowering menu when you're in a state-dependent, you know, isolated, lonely state.
You're not feeding your brain the information that you need.
And then there's also the point that I don't think any one of these on its own
is a bad thing.
Solitude is a great thing, actually.
But when it's combined with fast feedback,
maybe some anxiety and continuity, then it becomes bad.
So it's really hard to design against. I mean, you know,
why would you want to design against solitude?
Agreed, but I think right now, with the technology, we know that loneliness is incredibly costly,
and right now it's deepening and amplifying loneliness.
It's not: let's eradicate loneliness and solitude.
But let's certainly not be deepening it in a crisis right now,
where most people are feeling that loneliness.
Or medicating it.
Right, medicating it. So the second one, fast feedback. You know, the easiest move here is for
these apps, these companies, to batch your rewards: from drip by drip by drip, that perfect
random reward schedule, as you already said, to something that is the batched version. And this is
the easiest change that Facebook could make to prove that they are on the side of users.
Instagram, Facebook, TikTok, whatever the apps that have notifications: why in the world do you
need to get them drip by drip? The default setting could be, let's batch them and deliver them once at the
end of the day, unless something is specifically urgent.
The other one, random rewards, is one that people often don't think about. Randomness is also about ambiguity: I don't really know what's
going to come. It's that mystery, that curiosity. It's life, yeah. But, you know, when your phone buzzes,
let's take the simplest example. Your phone buzzes. It's totally ambiguous. It could be a text
message from, you know, someone in your family saying, our house is on fire. Or it could be,
hey, YouTube says there's a new video from that channel you subscribe to. So imagine if there were
a specific, unambiguous vibration signature.
You can actually set this up with your phone right now,
but Apple could make this even easier for people.
It's something I've done.
So when I get a text message,
it actually buzzes in a unique three-buzz pattern:
bzz, bzz, bzz.
And that's very different from when you get a calendar notification,
which buzzes once in a long pattern, or something like that.
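Underneath, this is just a mapping from notification type to a distinct haptic signature. A toy sketch (the pattern values and category names are made up; a real phone would expose this through its platform's own haptics API):

```python
import time

# Hypothetical vibration signatures: alternating (buzz, pause) durations
# in seconds, so the buzz pattern itself tells you what arrived.
HAPTIC_SIGNATURES = {
    "text_message": [0.1, 0.1, 0.1, 0.1, 0.1],  # bzz, bzz, bzz
    "calendar":     [0.8],                       # one long buzz
}

def vibrate(seconds):
    time.sleep(seconds)  # stand-in for a real haptics call

def notify(kind):
    pattern = HAPTIC_SIGNATURES.get(kind, [0.3])  # default: one medium buzz
    for i, seconds in enumerate(pattern):
        if i % 2 == 0:
            vibrate(seconds)      # even positions are buzzes
        else:
            time.sleep(seconds)   # odd positions are silent gaps
```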
The problem is, that's still a lot of self-management, right?
It's still a lot of self-management.
But again, imagine a spectrum, from it being nearly impossible right now, where you have to
dig into your settings, to Apple creating a wizard
that tries to make this as easy as possible and sets up the default settings for you.
And again, Apple's business model here is not adversarial. They could do this. And in fact,
consumers would trust them more if they did. So that was the third one, random rewards.
The fourth one, continuity and non-resolution. So this would be, as you said,
reintroducing stopping cues. And one of the things, you know, people say, well, now you can actually
set these time limits when you're infinitely scrolling. And you can show people a chart or a notification that
says, hey, you've been scrolling for this long. But that actually just makes people feel
worse about it, because there they are, feeling lonely, and they say, oh my god, now it's been
four hours. Let me say, here's where, coming back to my book as a
sort of rich case study of one area, this has been discussed, you know, till you just want to
bang your head against the wall. In the gambling industry, for years, there are literally
thousand-page reports that discuss precisely: should we have a message that flashes at you?
Should it scroll from left to right? Should it scroll from right to left?
Should it scroll along the bottom?
For a stopping cue? You mean, to introduce that stopping cue?
So each of these things has been so debated, so tested, in the gambling industry.
But the gambling industry itself likes to point to those thousand-page reports and say,
it's a mess.
And there's no... they didn't do any of the work.
This is researchers, right?
But they point at it and they say, this is just a big mess.
We don't know anything.
We have no evidence on which to base any concrete change at all.
And there could be unintended consequences.
If we put the scroll thing on, you're going to feel worse about yourself.
You're going to want to keep playing.
I am here to say that actually, all of that research has generated certain best practices.
There's a guy, Bob Williams, in Canada, and you can read his report: out of those tomes of research have come certain things that work.
And it wouldn't be a bad idea for the sort of higher-tech industry, Google,
Facebook, Apple, to go and read that report and say, oh, isn't that interesting:
putting a clock on doesn't do anything, but some equivalent of, like, lowering the number
of lines you can bet on in a multi-line game, that would work. And so would restricting
access, cutting it off. So there are best practices, is what I'm saying. It's not a sea of, like,
we're not going to do anything because we don't know what to do. Right. And it's an
illusion when they say we don't know what to do. There are very concrete things that can be done.
And the point of this podcast is to try and encourage, once we've diagnosed the specific features
of human psychology that are being exploited, asking what would be most embracing, compassionate,
and protective of those instincts. And the last one I want to mention, since I know we have
to finish up, is, you know, an example for continuity. Aza Raskin, my co-founder, who invented
the infinite scroll, has actually shown that you can create a random slowdown as you're
scrolling. So basically, when you give yourself a notification or a timer, you're talking to the
PR department of your mind. You're telling your conscious mind, oh, you're spending time. That doesn't
actually change what your finger is doing. Your finger is still going to get that affective thing.
Introducing friction. And so what he's found is that if you actually make the internet connection
get randomly slower, not in a predictable way, in a random way, and it does it progressively,
the longer the time you've spent, it works. You can imagine a future version of these time-management
things simply slowing down your internet to those websites, like Facebook or whatever, after,
you know, the fifth minute, or whatever you've set your limit to. And that would be something
that's a little bit closer. I'm not saying the framing of the problem is even about
time, but that would be at least something we can do.
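A sketch of that friction mechanism, assuming nothing about Aza's actual implementation: inject a random delay before each request, with the ceiling of the randomness growing the longer the session runs. The names and parameters are invented for illustration.

```python
import random
import time

# Hypothetical progressive, randomized throttle: the longer the session,
# the larger (on average) the random delay injected before each fetch.
def friction_delay(session_start, ramp_minutes=5, max_delay=3.0):
    minutes_elapsed = (time.time() - session_start) / 60
    ceiling = min(max_delay, max_delay * minutes_elapsed / ramp_minutes)
    return random.uniform(0, ceiling)  # unpredictable, not a fixed lag

def fetch_with_friction(fetch, session_start):
    time.sleep(friction_delay(session_start))  # the injected slowdown
    return fetch()  # the underlying feed or page request
```

Because the delay is random rather than fixed, the feed stops being reliably instant, which targets the finger's habit rather than the conscious mind's report of it.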
Right. And I just want to end with this: certain things should simply not be allowed as options, because I think people treat this as a normal commodity.
This isn't like a movie, where you can ask for your money back because you didn't like it, or a pair of
shoes you can return. This is what's sometimes called "no ordinary commodity." And the way that this is not
ordinary is that it is affecting you in such intimate, physiological, affective ways. And if we can
figure out how to regulate toys from China and the percentage of plastic, I think we need to do
the research to figure out: what exactly are we regulating here? What threshold do we want to set? What is the
psychology of this? And I think that's exactly what needs to happen next. Natasha, thank you so much.
Thank you.
It's great to have you.
It was fun to talk.
Before we go, we suspect that there are listeners out there who want to keep talking about
these issues.
Natasha raises an interesting framework for products that extract attention.
Are you in a technology company whose product isolates users, no matter how unintentionally?
Does it encourage people to send a message instead of calling, allow them to scroll mindlessly?
Are you delivering rapid feedback and variable rewards or continuity with no resolution?
What could you do about that?
One of the challenges of this problem is just how big it is, how systematic.
You have to go all the way from policy down to pixels, and it's hard to know how to have
a voice in that system, and that's something that, honestly, I'm figuring out for myself.
But there are many ways to have a voice.
Have a voice as a policymaker, as a voter, as a shareholder activist, as an ethical board member,
as an educator, as an evangelizer, as an artist, as somebody who's on the ground and hands-on
working to clean up some of the mess that technology has created.
I'm really excited to see how we all find our voices,
because I don't think any one of us wants where this is going.
Next week on the show, we talk to Yaël Eisenstat,
a former CIA officer and national security advisor to Vice President Biden,
who now works on analyzing the threat of technology to our society.
You have some of the most brilliant minds here in Silicon Valley
that build incredible technology,
build incredible companies.
And what I find fascinating
is how you can have the smartest people
working on these things,
but as soon as there is a problem,
oh, that's too hard to fix.
I mean, let's be honest,
how many times have we heard Mark Zuckerberg
or Sheryl Sandberg say,
it's really hard.
We're sorry, we know we need to do better,
but it's really hard.
Your Undivided Attention
is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi,
our associate producer is Natalie Jones.
Original music by Ryan and Hays Holladay.
Henry Lerner helped with the fact-checking.
Special thanks to Abby Hall, Brooke Clinton,
Randy Fernando, Colleen Hakes,
and the whole Center for Humane Technology team
for making this podcast possible.
A very special thanks to our generous lead supporters
at the Center for Humane Technology
who make all of our work possible,
including the Gerald Schwartz and Heather Reisman Foundation,
the Omidyar Network,
the Patrick J. McGovern Foundation,
Craig Newmark Philanthropies, Knight Foundation, Evolve Foundation, and Ford Foundation, among many others.
