Your Undivided Attention - What if we had fixed social media?
Episode Date: November 6, 2025

We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?

In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them. This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
Dopamine Nation by Anna Lembke
The Anxious Generation by Jon Haidt
More information on Donella Meadows
Further reading on the Kids Online Safety Act
Further reading on the lawsuit filed by state AGs against Meta

RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Hey, everyone, welcome to Your Undivided Attention.
I'm Tristan Harris.
And I'm Aza Raskin.
So, you know, I'd say that when Aza and I are running around the world and talking to everybody,
there's really just one question that's the most popular question that we get asked,
which is, so what do we do about all this?
How do we get out of this trap?
And what would it look like if we got this right?
And they're really mostly talking about social media.
So we did this Ask Us Anything episode where you sent us all your questions, and this was the most popular question we got asked.
Here's Max Barry.
Hey, Tristan, it's Max Barry from Canada.
It really seems like we're all stuck in these feeds, and the companies are stuck too because they need the money from the ads.
It's like we're all trapped by the same algorithm.
Is there actually a way out of this whole thing?
So we're going to do a little thought exercise.
Just follow along here.
imagine that we actually took action.
And we're not saying that we might or we should.
We're saying imagine in past tense that we did.
What would it look like to comprehensively respond, with the cultural changes, the design and product changes, the legal changes, the incentive changes,
and the litigation and lawsuits that led to those incentive changes, so that we could comprehensively reverse this problem?
What would that look like?
My favorite thing about this is that it can really feel so bleak and so inevitable, this world we live in.
And that's just because we can't articulate an alternative.
So that's what we're going to try to do here.
So let me set this up then for you, Tristan.
Zoom your mind back to 2012.
It was all looking really bleak.
Right?
We had falling attention spans.
We had rising polarization.
We had the most anxious and depressed generation in history, a loneliness epidemic, mental health crises.
And then what happened?
Well, we sprang into action.
We realized we had a problem.
We replaced the division-seeking algorithms of social media with ones that rewarded unlikely consensus using Audrey Tang's bridge ranking for political content.
So now, instead of scrolling and seeing an endless stream of the worst sorts of things, the violence and inflammatory content happening around the world every day, that made you pessimistic,
you were suddenly seeing optimistic examples of unlikely consensus from everyone around the world.
And that started to turn the psychology of the world around.
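(For the technically curious, here's a minimal sketch of the idea behind bridging-based ranking, assuming users have already been sorted into two opinion clusters. Real systems, including the approach Audrey Tang has championed, infer those clusters from voting patterns rather than taking them as given; the function names and the simple minimum-approval score below are illustrative, not any platform's actual algorithm.)

```python
# Minimal sketch of bridging-based ranking: an item only scores well if it is
# approved by BOTH opinion clusters, so one-sided applause doesn't rank highly.
from collections import defaultdict

def bridge_scores(votes, user_cluster):
    """votes: iterable of (user_id, item_id, liked); user_cluster: user_id -> 'A' or 'B'."""
    tallies = defaultdict(lambda: {"A": [0, 0], "B": [0, 0]})  # item -> cluster -> [likes, total]
    for user, item, liked in votes:
        cluster = user_cluster[user]
        tallies[item][cluster][0] += int(liked)
        tallies[item][cluster][1] += 1

    scores = {}
    for item, by_cluster in tallies.items():
        rates = [likes / total if total else 0.0 for likes, total in by_cluster.values()]
        scores[item] = min(rates)  # reward cross-group consensus, not raw engagement
    return scores

# "p1" is liked by both clusters; "p2" only fires up cluster A.
votes = [("u1", "p1", True), ("u2", "p1", True), ("u3", "p1", True), ("u4", "p1", True),
         ("u1", "p2", True), ("u2", "p2", True), ("u3", "p2", False), ("u4", "p2", False)]
clusters = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
print(bridge_scores(votes, clusters))  # p1 -> 1.0, p2 -> 0.0
```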
And just like we had emission standards on cars, we put in these sorts of dopamine emission standards, recognizing that too many of the apps were incentivized to get into limbic hijacks and slot-machine behaviors. And then suddenly, when we had these emission standards for dopamine, using your phone didn't make you feel dysregulated, didn't make you feel anxious. And you had more control as you were using technology.
Yeah, and we began subsidizing solutions journalism, so that every time you were on a feed and saw a problem, it was contextualized with real-life solutions from around the world that gave us learned hopefulness, not learned helplessness.
We realized that our phones were not just phones or products that we used.
They were more like a GPS for our lives.
They were kind of like a brain implant.
And we were only as good as the menus that we lived by inside of those GPSs.
And we realized that the attention economy was just creating a GPS that only ever steered us towards more content.
And so instead, we reclassified these phones and these devices as attention fiduciaries for making life choices.
We made the radical choice to treat technology companies the way that we do every other kind of company, which is to say that there are rules they have to follow.
And just like we have zoning laws in cities for different building and noise codes, we realized that we needed an attention economy with a kid zone, a sleeping zone, a residential zone versus a commercial zone.
And we realized that it actually wasn't a radical proposal, in the same way that we added cigarette taxes at the point of purchase to change behavior, or put age restrictions on drinking and driving.
Obviously there should be restrictions on the most powerful technology affecting us.
And groups like Moms Against Media Addiction, or MAMA, and The Anxious Generation rallied public support to ban social media in schools.
And now you had tens of thousands of schools going phone free all around the world.
And once that happened, you know, laughter returned to the hallways, and attention spans started to reverse. We implemented age-appropriate design
codes so that we didn't have auto-playing videos in any of these social media apps. And, you know,
in terms of thinking in systems, this wasn't just about making design changes. It was about
changing the incentives. And once we reckoned with the total harm that all of this had caused,
what Michael Milken, the great capitalist, has called the trillions of dollars of damage to social
capital in the form of mental health care costs and lost GDP and productivity.
Once we accounted for that, there was a trillion-dollar lawsuit against the engagement-based
business model.
And just like the big tobacco lawsuit that ended up funding ongoing public awareness campaigns
that educate people that smoking kills, this funded ongoing digital literacy campaigns for
young people so that the problems of technology were understood at the speed at which
they were entering society.
And as part of that, it funded community events and rewilding of the social fabric
and refunding local news and investigative journalism all around the world
that had previously been bankrupted by that engagement-based model.
And this funded the mass re-humanification of connecting people to in-person events and nature.
So suddenly the smartest minds of our generation were thinking about how to design interfaces
that were all about hosting events in community.
And as part of that, we replaced the dating-swiping industrial complex of dating apps like Tinder and Hinge and Raya that were really just preying on people's loneliness and causing people to send messages and never meet up.
And suddenly there was a simple change to all these dating apps that made the world so much better, which was that they were forced to actually spend money to host real-world events every week, in every major city, in many venues, and then use AI to route everybody who matched with each other into these common places.
So suddenly, every week there was a place where people who were lonely had an opportunity to meet all sorts of people that they had matched with.
And it turned out that once people were in healthy relationships,
about 25% of the polarization online
was actually just due to people feeling disconnected from themselves
and not happy.
And so polarization started to go down.
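(As a rough illustration of that routing idea: suppose the app already knows who has matched with whom and which venues are hosting this week. The greedy assignment below is a toy heuristic invented for this sketch, not how any real dating app actually works.)

```python
# Toy sketch: seat people at weekly venues so that as many of their mutual
# matches as possible end up in the same room. Purely illustrative.

def assign_to_venues(matches, venues):
    """matches: set of frozenset({a, b}) mutual matches; venues: name -> capacity."""
    neighbors = {}
    for pair in matches:
        a, b = tuple(pair)
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    assignment, remaining = {}, dict(venues)
    # Seat the most-connected people first, each at the open venue that
    # already holds the most of their matches.
    for person in sorted(neighbors, key=lambda p: -len(neighbors[p])):
        best = max(
            (v for v, cap in remaining.items() if cap > 0),
            key=lambda v: sum(1 for m in neighbors[person] if assignment.get(m) == v),
            default=None,
        )
        if best is not None:
            assignment[person] = best
            remaining[best] -= 1
    return assignment

matches = {frozenset({"ana", "ben"}), frozenset({"ana", "cy"}), frozenset({"dee", "ben"})}
print(assign_to_venues(matches, {"cafe": 2, "gallery": 2}))
```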
And we realized that Marc Andreessen was right, or at least right about one thing,
which was that software was eating the world.
But because software doesn't have the same kinds of protections
that we've built up in the real world,
as software ate the world,
we lost the protections of the real world. And we realized that you couldn't take over the world without caring for the life-support functions of the society that constitutes that world. So we realized you couldn't take
over childhood development without a duty of care to protect children's development. Or you couldn't
take over the information environment without a duty of care to protect the integrity of the
information environment. And we passed a duty-of-care act for all of technology, so that as software and technology ate the world, the world didn't end up chewed up.
That solved many of the problems.
Yeah, and there were so many aspects of life that software was taking over,
including our ability to unplug from technology.
And when technology ate the ability to unplug,
it also needed to care for our ability to unplug.
So our entire technology environment started actually protecting our ability to unplug and making it easy to do so.
There you are in email, and it makes it easy to say, I need to go offline for three days.
There you are in a news app, and you say, hey, I'm going to go offline for five days.
And when you come back, it just summarizes all of the news that you missed, so you don't actually have to check it constantly.
And so suddenly, using technology felt more balanced, like it was more in touch with the real world, balancing the real world with the online world.
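(Here's a minimal sketch of what that catch-up mode could look like, assuming a hypothetical summarize() callback stands in for whatever summarization an app would actually use; none of this is a real product's API.)

```python
# Toy "away mode": while you're offline, items queue quietly instead of
# notifying you; when you come back, they're collapsed into one digest.

class OfflineDigest:
    def __init__(self):
        self.away = False
        self.queued = []

    def go_offline(self):
        self.away = True

    def receive(self, headline):
        if self.away:
            self.queued.append(headline)   # hold it for later, no notification
        else:
            print("notify:", headline)

    def come_back(self, summarize):
        digest = summarize(self.queued)    # hypothetical summarizer callback
        self.queued.clear()
        self.away = False
        return digest

feed = OfflineDigest()
feed.go_offline()
feed.receive("Headline A")
feed.receive("Headline B")
print(feed.come_back(lambda items: f"{len(items)} stories while you were away: " + "; ".join(items)))
```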
We also realized that so much of this was that personnel is policy.
And we didn't have enough people who were actually trained in humane technology.
Just like the show The West Wing caused a 50% increase in enrollment at the Kennedy School, The Social Dilemma, and then a whole bunch of new shows centered around what humane technology feels and looks like, created a massive wave of humane technologists. Can you imagine having Netflix shows that continually cover what it
would look like in these fictional rooms where people were making design choices at technology
companies that were all about protecting and dealing with these societal issues?
To cite the work of Donella Meadows, the great systems-change theorist, who asked: how do you change a system? And she said, you keep pointing out the anomalies and failures in the old paradigm, you keep speaking and acting loudly and with assurance from the new one, and you insert people with the new paradigm of thinking into places of public visibility and power.
And so once we had all these people, you know, watching these Netflix shows of humane
technologists making thoughtful decisions about how to trade off, you know, and make technology
work for society, and once you had humane technology graduates who had all taken as foundations
of humane technology course, suddenly in these positions of public visibility and power,
the technology that we used every day was actually really starting to feel like it cared
about the society that it was operating in.
And this, I think, might be my favorite one.
Just like we ban the sale of human organs, something that's sacred to us and that we need, we realized that we could ban engagement-based business models, and that immediately
made technology much more humane, but it did something even deeper. It freed up two generations
of Silicon Valley's most brilliant minds to go from getting people to click on ads to solving
actual real-world problems like cancer drugs and fusion. And in the wake of that, Silicon
Valley went from being reviled to loved again.
And we saw that countries that started adopting these comprehensive humane technology reforms that were less dysregulated, less distracted, less polarized, started to actually out-compete the other countries who didn't regulate technology and still had these parasitic engagement-driven business models.
And there was also a national security side to all of this, which is that we realized that authoritarian societies like China were consciously deploying technology to reinvent 21st-century digital authoritarian societies.
And they were using tech to strengthen that model.
And in contrast, democracies were not consciously deploying technology to upgrade and create
21st century democracies.
Instead, we had inadvertently allowed two decades worth of these pernicious business models
to profit from the degradation of the health and cohesion of democracies.
But once we sprang into action, it really wasn't that hard to change all these design
patterns, change these incentives, and start to really set in motion a totally different trajectory
of a healthier, less lonely, more belonging, more community, less dysregulated, just more
coherent society. And so a beautiful world our hearts know is possible isn't nearly as far away
as we think if we can just start to see and feel into how a few changes like this could make a big
difference. Yeah. And we went from, we're upgrading the machines and downgrading the humans
to we're upgrading machines to upgrade the humans.
All right, I want everyone now to close your eyes.
And I'm going to ask Tristan to lead us in a little meditation: assume all of these things
have actually happened and we're living in that world.
What does that world feel like?
Yeah, so just imagine stepping into this other world that we just described for a second.
You know, there you are holding this device.
It's designed totally differently in your hands. It doesn't make you feel dysregulated, because you don't have
auto-playing videos and dopamine hijacking happening everywhere. When you're scrolling newsfeeds,
suddenly 30 to 40% of what you're seeing are things that you can do with real friends and
real community in your environment. So suddenly you're using technology and it's actually encouraging you to disconnect and take breaks, and it's easier and built in across all of these messaging applications to do that. You know, Tristan, you said something that still resonates in my ears from The Social Dilemma. You said, so there I am scrolling on social media,
one more cat video, where's the existential threat? And your point was that social media isn't
the existential threat. Social media brings out the worst in humanity and the worst in humanity
is the existential threat. So when I close my eyes and I imagine this world, this alternative
world. It's that I'm no longer seeing the worst of humanity. I'm starting to see consistently
the best of humanity. And then instead of existential threat, I'm seeing existential hope.
But now imagine the most violent thing that has happened today. And imagine just over and over again
pointing your attention at more and more examples of this. Just notice what happens in your nervous
system when you're doing that. I think one of the most pernicious aspects of the way that this
system has hijacked us is that we don't even really notice how profoundly different the world that we're living in, our inner environment, has become, because we've been living in it for so long.
And what inspires me about the narrative that, you know, you and we have just described is that it actually doesn't take a lot, just these small changes, to create a very different-feeling world psychologically. But then a different-feeling world psychologically starts to translate into a differently constructed world. So really imagine that we've done all of these things. And people just
can see each other's humanity. We are bridging our divides. We are spending more time in
person. As societies, we are stronger and more coherent. And we can see that we are making better
decisions over time. In that world, suddenly AI seems much easier to deal with.
Well, and we saw that we had successfully dealt with social media. So now we knew that we were a
society that was capable of dealing with technology problems. And they weren't insurmountable.
It was just a matter of seeing the underlying incentive and design that was leading us to a world
that no one wanted. And once we had made those changes to social media, we had the confidence
in ourselves that we could do something about AI. And it wasn't too late.
This is just one path through a set of things that could happen.
But we want all of you to be thinking about what your version of this narrative is.
And what would this narrative look like for AI?
We all need to tell that story of how this went a different direction.
Because if we sort of collapse into, well, the current path that we're on that's just reckless and sort of dystopic is just inevitable,
then we're never going to get there.
And so we hope this episode is an example of what it looks like to step into a version of
what we did do, past tense, that was obvious once we saw the problem clearly.
So some of you might be feeling maybe depressed even after hearing this alternative narrative,
but let me give you just a little bit of actual hope.
30-something attorneys general have actually sued Meta and Instagram
for consciously addicting young people to their products.
There is a big tobacco-style lawsuit underway.
There are bills in Congress like the Kids Online Safety Act
to try to create things like the age-appropriate design code.
There is work being done by Audrey Tang to actually get X to implement bridge rank
as the center of how it ranks content for the world.
And so, you know, it doesn't look great out there.
But if you look at the road, it's almost like you can see these trailheads
where there is a path for more solutions to happen
if there was a comprehensive and concerted effort to make it happen.
Your Undivided Attention is produced by the Center for Humane Technology,
a non-profit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudaken,
original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more at humanetech.com.
And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts,
because it helps other people find the show.
And if you made it all the way here, let me give one more thank you to you
for giving us your undivided attention.
