The Rich Roll Podcast - Kevin Roose: Futureproof Yourself Against The Robot Apocalypse
Episode Date: March 15, 2021

Artificial Intelligence isn’t an imagined future. It’s right here, right now. So what are the perils of society’s rapid pivot to automation? How do we avoid displacement and dehumanization? And, most pressing, how do we find meaning in a world driven by algorithms? These are important questions we need to be asking. Today’s guest is the right guy to help answer them: one of my very favorite online follows for his insights on automation, online radicalization, cybersecurity, and digital wellness. A bestselling author and award-winning technology columnist for The New York Times, Kevin Roose specializes in technology and its effects on society, an interest that culminated in the mind-melting, must-listen podcast series Rabbit Hole, a story that exposes the many ways the internet influences our beliefs and behavior, often for the worse. A significant portion of today’s conversation focuses on artificial intelligence and the many ways in which our increasingly automated world impacts humanity. It’s also the subject of Kevin’s latest book, Futureproof: 9 Rules for Humans in the Age of Automation. Part A.I. primer, part self-help survival guide, it breaks down the tools we need to be happy, successful humans in a world increasingly built by and for machines. As we usher in the age of artificial intelligence, more and more occupations are becoming automated. Social media algorithms not only frack our attention spans for clicks, but they have manipulated us so thoroughly that we now divest much of our decision-making and critical thinking skills (the things that literally make us who we are) to technology. This is an important, potentially life-altering breakdown of the many ways the internet and AI-based algorithms are degrading us, locking us into information silos, inciting emotion for profit, and threatening our inherent humanity. 
It’s also a guide on surviving workplace automation, overcoming phone addiction, and protecting your time and attention. In addition, Kevin provides his insider take on a variety of other notable technology curiosities from Clubhouse to NFTs, the future of podcasting, and many other subjects that I know will pique your interest. FULL BLOG & SHOW NOTES: bit.ly/richroll587 YouTube: bit.ly/kevinroose587 Our most powerful trait is our innate humanness. My hope is that this exchange will serve as a reminder. Peace + Plants, Rich
Transcript
In order to survive this wave of AI and automation,
we need to become more human.
And we're starting to see that the skills
that are actually in demand right now
are these kinds of what we would pejoratively call soft skills.
It's things like communication and empathy
and leadership and courage,
the things that machines can't do.
The hard part about that is there are forces
that are conspiring to make us less human.
And we interact with them every day. These are very powerful machines. Every time you look at
your phone, you are looking at a device on which hundreds of billions of dollars have been spent
to make you distracted from whatever else you might have been thinking about. And so I think
we tell ourselves that we need to be plugged in. But actually, I think in the present and in the future, there will be a real value to being able to separate yourself from your phone and your feeds and your technology.
Because that's how we get better.
I mean, everything in life that makes us better, very little of it happens through the phone.
It happens through taking on personal challenges,
through struggling, through raising families, through, you know, building communities. Like,
that's the stuff that is really hard and really rewarding.
That's Kevin Roose, and this is The Rich Roll Podcast.
Hey, everybody. Welcome to the podcast.
My name is Rich Roll.
I will be your pilot for today's flight.
Welcome aboard.
So I was reflecting on how I want to introduce
and contextualize today's conversation.
And I think it's fair to say that really since day one
of this show, my intention has been and is to shine a light
on truths, truths big and small,
to hopefully provoke some questioning
of held beliefs about the world,
but also about ourselves,
all in service of this greater goal
of becoming better stewards of the planet,
but also in service to becoming better humans
and ultimately more human.
But right now, humanity faces many existential threats, not the least of which is
the advent of artificial intelligence. As we usher in the age of machines,
more and more occupations are becoming automated. Social media algorithms not only
frack our attention spans for clicks, they have also,
and very thoroughly, I should add, manipulated us such that we now actually divest much of our
decision-making and critical thinking skills, things that literally make us who we are,
to technology. So the question then becomes, how far can AI go?
What are the perils of a society quickly pivoting
to automation in all things?
How do we avoid displacement and dehumanization
and perhaps most pressing, how can we be happy
in a world that is increasingly built by and for machines?
These are important questions we all need to be asking, and today's guest is the right guy to
help answer them. Kevin Roose is a New York Times bestselling author. He's an award-winning
technology columnist for the New York Times and one of my very favorite people to read and follow online for
his many insights on everything from automation to social media, online radicalization,
cybersecurity, digital wellness, and generally all things tech. If you're a consistent listener
or viewer of this show, then you already know well my fondness for Kevin's work,
most notably his podcast, Rabbit Hole, which is a New York Times-produced show hosted by Kevin and all about what
the internet is doing to us. In my opinion, it's a non-negotiable must listen. So please check that
out if you haven't already. Kevin's latest book, which consumes a significant portion of this
conversation, is entitled Futureproof:
Nine Rules for Humans in the Age of Automation.
It's sort of part AI primer, part self-help book,
survival guide, and it's really all about how to be happy, successful humans
in a world that is increasingly built by and for machines.
A couple more things I want to say about Kevin
and the conversation to come, but first.
We're brought to you today by recovery.com. I've been in recovery for a long time. It's not
hyperbolic to say that I owe everything good in my life to sobriety. And it all began with treatment and experience that I had
that quite literally saved my life. And in the many years since, I've in turn helped many suffering
addicts and their loved ones find treatment. And with that, I know all too well just how confusing
and how overwhelming and how challenging it can be to find the right place and the right level of
care, especially because unfortunately, not all treatment resources adhere to ethical practices. It's a real problem. A
problem I'm now happy and proud to share has been solved by the people at recovery.com,
who created an online support portal designed to guide, to support, and empower you to find the ideal level of care tailored to your personal needs.
They've partnered with the best global behavioral health providers to cover the full spectrum of behavioral health disorders,
including substance use disorders, depression, anxiety, eating disorders, gambling addictions, and more.
Navigating their site is simple. Search by insurance coverage, location, treatment type, you name it.
Plus, you can read reviews from former patients to help you decide.
Whether you're a busy exec, a parent of a struggling teen,
or battling addiction yourself, I feel you.
I empathize with you. I really do.
And they have treatment options for you.
Life in recovery is wonderful.
And recovery.com is your partner in starting that journey.
When you or a loved one need help, go to recovery.com and take the first step towards recovery.
To find the best treatment option for you or a loved one, again, go to recovery.com.
Okay, Kevin Roose.
So this is a pretty thorough breakdown of the many, many ways the internet and AI-based algorithmic manipulation are serving to degrade us, to lock us into information silos and ultimately incite emotion for profit.
It's also a guide on surviving workplace automation, phone addiction, protecting your time and attention,
and most importantly, safeguarding your humanity.
In addition, Kevin provides his take on a variety of other tech curiosities from Clubhouse to NFTs, the future of podcasting,
and many other subjects.
As an admitted Kevin Roose fanboy, this one is a fun treat.
It's packed with helpful insights to better understand
the rapidly evolving shifts our society currently faces.
And a reminder that our most powerful trait
is our innate humanness.
So this is me and Kevin Roose.
Look at that, it's Kevin Roose.
Thanks so much, this is a dream.
Well, so delighted to have you here.
I'm such a big fan of all the reporting and the work
that you've done over the last several years.
You really have become one of my favorite follows
as somebody myself who's obsessed with the implications
of big tech social media platforms
and the way in which disinformation
infects real world outcomes,
how you've deconstructed radicalization.
And now, which we're gonna talk a lot about today,
how we should be thinking about AI
and the coming robot apocalypse.
And the more immersed I become in trying to understand
this rapidly changing environment that we find ourselves in,
I find myself turning more and more to your reporting and your insights and your guidance.
So I just wanted to thank you for that at the outset.
Oh, thanks so much.
And I'm a big fan of the show.
So this is a thrill for me.
Oh, cool.
Wow, that's very meaningful.
Thank you.
And in reflecting over the last year,
there were three things that I think impacted me really profoundly.
The first being The Social Dilemma, the Netflix documentary,
the second, the Feels Good Man documentary.
I'm wearing the Feels Good Man t-shirt.
Did you see that movie?
I haven't seen it yet.
No, but should I?
Oh, you have to see it.
I mean, it's just, you know,
internet to the power of a thousand. Like, it's right up your alley.
I think you'd really enjoy it.
And then thirdly, Rabbit Hole,
which I've discussed at length many times on the podcast,
often with my sometime co-host Adam Skolnick,
who's also a New York Times contributor.
I just, I love that series.
I think it's a must listen for everybody.
I wish that there was a new episode every week.
So before we get into Future Proof and all of that,
can we spend a few minutes talking about Rabbit Hole?
Yeah, absolutely, whatever you want.
So for people that aren't familiar,
maybe it would be worth just providing
a little bit of a synopsis.
Sure. So Rabbit Hole is an eight-part podcast that I helped produce and host with the New York Times
about basically, the idea was to do a podcast exploring sort of what the internet is doing to
us. So I'm a tech columnist. I report on the internet. And for a long time, that meant
reporting on gadgets and companies that made gadgets and software programs and new apps and
social media platforms. And then a couple of years ago, it really started to shift.
Because what we saw is that a lot of the things that were happening in the offline world had
sort of online antecedents, had like things that were happening
on the internet that led directly to things that happened in the offline world. And the most jarring
and horrible example of that was the shooting in Christchurch, New Zealand. And if people remember,
that was, you know, the really deadly one. I think 52 people ended up dying. And the shooter was a white nationalist
who was radicalized online
and who was very obviously spending a lot of time online.
And he live streamed this shooting
and posted about it on the internet.
And it was sort of like this internet event
that was kind of engineered for virality.
And it really, like, I don't report on terrorism.
I don't cover mass shootings.
But this was such like an internet event.
And so deeply tied to the kinds of extremism
and sort of violent rhetoric I'd seen on places
like 4chan and 8chan that I just decided I had to look into that.
And I had to figure out, my question was basically, I mean, I grew up on the internet. I'm
like a child of the internet and I love technology and the internet. And also I was seeing how it was
radicalizing people and turning them into extremists. And so I wanted to figure out how that happened.
What is it actually doing to us?
How do the systems and the algorithms interact with our information ecosystem?
Who are the people who are doing the radicalizing?
What is this whole ecosystem and how is it affecting who we are?
So that's what we spent a year digging into, starting with this guy, Caleb Cain, who was a 26-year-old kid from West
Virginia who got really into YouTube videos. And he became radicalized and he was able to
dig up and send us years worth of his YouTube browsing history.
So we actually got to go back and kind of reconstruct how this guy who had supported
Obama and cared about the environment and loved Michael Moore movies morphed into this guy who
was like deep in the kind of alt-right fever swamp and actually trace his path sort of down the rabbit hole. And what was the decision to do it as a podcast
as opposed to a long read or a series of articles
over a period of time?
Well, some of it did appear in articles,
but it's really like, it's just different to hear it.
I mean, you know this from doing podcasts,
like so much is conveyed in the voice
and so much is conveyed in, you know,
hearing things rather than reading them.
And so, especially with this series,
which was all about sort of internet media
and, you know, the charismatic people on YouTube
who are radicalizing young white men,
like it's just much more compelling
to hear those people talk
than to read, you know, transcriptions of what they're saying.
And ultimately, we had an amazing team
of people putting this together
and they sort of came up with the idea
of making the whole show sort of sound
like the experience of falling into an internet rabbit hole.
So if you pay attention, it's sort of like,
it's kind of engineered in this really careful and thoughtful and cool way
where there are no intros and outros of the episodes.
It just goes straight into one episode from another.
And it mimics the feeling of falling into an internet rabbit hole
and waking up.
I'm sure everyone's done this.
You start watching one YouTube video
and then four hours later, you're like,
why am I watching videos of trains in Norway?
Like, how did I get here?
And that's sort of the feeling
that we wanted to recreate in the show.
Yeah, well, you were very successful in that regard.
I mean, it is truly a sonic experience
and it was fascinating to go on this,
archeological exploration of how this young man,
step by step finds his way from one place
to a very different place,
which then of course brings up the subject of the algorithm,
the nature of the algorithm
and how this is being managed by YouTube.
You have a couple interviews with Susan Wojcicki at YouTube.
And I guess, so my question is,
so much has happened since rabbit hole.
What is your sense of what that algorithm looks like now
or the job that YouTube is doing to protect
against these sorts of acts of violence in the future?
Well, I think YouTube is doing a better job than they were.
I mean, when we started reporting the series,
they were still sort of saying there wasn't a problem,
that they didn't have extremists on their service.
They were kind of just like bizarrely uninterested
in dealing with what I saw and a lot of people saw as sort of this growing threat
of radicalization that was happening on YouTube primarily.
And not just on YouTube,
but through YouTube's recommendation algorithm.
Something like 70% of all the time people spend on YouTube
is due to that little sidebar
that's powered by some of the most sophisticated AI that's ever been created. And so that's the piece that we wanted to sort of connect was how
is the algorithm sort of pulling people into the, it's not just that people are falling into rabbit
holes, it's that they're sort of being pulled. And what is that force and how has that evolved
over time? And so I think they've made a lot of changes since the series came out.
They've updated their policies on hate speech and white nationalism.
They're kicking neo-Nazis off.
Some of the worst people in that world have been deplatformed.
And they've changed how the recommendation algorithm now
handles what they call borderline content,
stuff that basically doesn't technically violate their rules
but is not great.
And they're showing something like 70% fewer recommendations
for that kind of content through this algorithm now.
So I think there's still obviously more to do.
There's still a ton of extremism on every internet platform
and YouTube I think has a ways to go in that regard.
But I think they're at least aware of the problem now
in a way that they maybe weren't when we started reporting.
Yeah, and in fairness, and you posed this question to,
his name's Caleb, right, the young man?
Yep.
The very same algorithm that prompted him
to descend down this rabbit hole
is the algorithm that also ultimately introduces him
to other ideas and helps him start to critically think
about his long held beliefs and ultimately change them.
So the argument could be made
that the algorithm ultimately served a positive purpose
or worked in a way that is counterintuitive to the less nuanced narrative that it just, you know,
pushes people in one direction only. For sure. I mean, this algorithm,
that thing has no idea what it's recommending. It is just a machine.
And it is looking at these blank boxes
and it's saying, okay, when people watch box A,
they spend 10% more time on YouTube than box B,
so let's show them box A.
It has no idea what's in box A.
So it is not like anyone at YouTube is fiddling with the dials
and saying, okay, we want to show them more right wing content
or more left wing content
that's not how it works
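The "box A versus box B" logic Kevin describes can be sketched in a few lines. Everything below is hypothetical (the item names, the watch-time numbers, the function name); it's just meant to illustrate that such a system optimizes an engagement metric while knowing nothing about the content inside the boxes:

```python
# A minimal sketch of an engagement-driven recommender: it compares only
# the numbers attached to opaque "boxes," never the content inside them.

def pick_recommendation(avg_watch_time: dict) -> str:
    """Return the item that historically kept viewers watching longest.

    avg_watch_time maps opaque item IDs to the average minutes viewers
    watched after that item was recommended. Nothing here inspects the
    content itself; the system only chases the larger number.
    """
    return max(avg_watch_time, key=avg_watch_time.get)

# Made-up numbers: viewers spend about 10% more time after box A than box B,
# so box A gets shown, whatever it happens to contain.
history = {"box_A": 22.0, "box_B": 20.0}
print(pick_recommendation(history))  # -> box_A
```

Swapping the metric (clicks, watch time, engagement) changes what the system chases, which is exactly the point made later in the conversation about how the goals we give algorithms matter.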
so for Caleb what was really interesting
is he did get pulled out of this far right rabbit hole
that he was in
because he stumbled into this part of YouTube called BreadTube
which is this sort of left wing
hyper ironic very online community of YouTubers
who are basically just sort of combating right-wing radicalization by kind of mimicking
the aesthetics of how right-wing radicalization works on YouTube. So you know all the videos
that are like, Ben Shapiro destroys helpless college student.
It's sort of like that combative thing.
And so they were making videos that were like,
college student destroys Ben Shapiro.
Sort of taking all of the lessons that they learned
and trying to kind of hijack the algorithm
in this way where their videos would show up to people
who were looking for this other
stuff. So in some ways, it's not that the algorithm sort of pushed him to the left. It's that creators
who are the other piece of this equation had kind of formed this strategy around the algorithm
in an attempt to get into the recommendations of people like Caleb and
bring them out of the far right. Right. Getting more savvy about how YouTube works to
marshal that engine and weaponize it in a direction counter to the narrative that they're
trying to confront. Exactly. Yeah. And that's, I think, what drew me to this topic in the first place. I started feeling like the entire world was kind of arranging itself around these algorithms.
All the political figures who were succeeding were doing it by hacking the Facebook algorithm or the Twitter algorithm.
Posting became like very valuable skill
in our politics and in our culture.
You know, the celebrities who were figuring out
how to use TikTok and YouTube,
you know, to sort of get themselves out there,
like they were becoming the most famous people in the world.
You know, and it just felt like this,
like the world that we see is so formed and shaped
by the decisions that a few engineers
and a few CEOs are making
in ways that I don't think we fully understand yet,
but that are appearing in all of our lives every day.
Yeah, and it should be said
that this is not simply endemic to YouTube.
One of the things that you've done over the past year
is share the top 10 most shared posts on Facebook every day.
And they tend to be overwhelmingly of the right wing.
And so that raises the question,
is this because this is where the country's at
and this is what they want?
Or is it that this particular strain of politics
is just better at understanding how Facebook works
to get their stuff to travel virally?
Yeah, I think it's really hard to untangle those things
because one thing that we know is that these algorithms,
these recommendation engines,
they don't just recommend things to us.
They can change our minds.
There's a French researcher named Camille Roth
who has this, he distinguishes between
read your mind algorithms and change your mind algorithms.
And we think that a lot of the stuff we're dealing with
on an everyday basis is reading our minds.
Like, I like this Netflix show,
so I'm also going to like this Netflix show.
But in reality, Netflix can corrupt that process
by saying like, we want you to watch this thing
instead of this other thing.
And so they're actually steering,
setting up our choices in such a way
that we actually don't,
we're not the ones sort of making the call,
even if it feels like we are.
And so that process is really interesting
and I think really troubling.
Right, and that's a central tenet of future proof, right?
We tend to think about automation
in the context of the robot apocalypse,
but in reality, it's these tectonic plates
that are shifting under our feet currently
that we're really not even consciously aware of.
I mean, you make the point in the book
about buying a pair of shoes and I don't even, did I really even want
these shoes or did I just think because it was in front of me that they were cool? Like we have
divested our decision-making power and, you know, a certain aspect of critical thinking
to algorithms that are making these decisions for us and deluding ourselves that
we're actually, you know, that we have agency over that when in fact that's less true than it's ever
been. Yeah. I mean, recommendations are so powerful to a degree that I didn't fully appreciate until
I started looking into this book and talking to the engineers who actually program recommendations algorithms. There's been some great studies that basically show that we trust what algorithms recommend for us
more than we trust our own tastes.
If you give us a food that we hate or a song that we hate,
and then you tell us this was algorithmically selected for you based on your pre-existing tastes,
we actually like it.
We let the machines override our own preferences. And so part of the, you know,
I think you're exactly right that we think of automation as this kind of external force in
the world, but there's a kind of internalized automation that I think is something that we're
just starting to deal with. And I think that's part of what people
are sort of trying to unpack is like,
what do I actually think and believe and like and enjoy?
And like, what is being put there by some machine?
Right, every day you open up your phone,
something has occurred in the news cycle.
And before you can take a minute
to read a neutral article about it,
you go to Twitter and you just see
what everyone's hot take is on it.
And then you're like, well, I like that guy.
So I guess I agree with that person's perspective on this.
And less and less, we're left to our own interior spaces
to figure out how that perspective aligns with our core values.
Like we really have just outsourced so much of this
and it happens so incrementally that we're not aware of it.
Like I said earlier, and we all think that
we're not susceptible to it like other people are,
but I'm good.
Yeah. There's a researcher named Oscar Gandy who has this sort of theory about the difference between identity and identification. So identity is what we have. That's what we
develop during childhood. It's what we work on when we work on ourselves. It's who I am,
what kinds of things I like, what I value, what my relationships are like,
how I want to show up in the world.
Those are basic human questions.
And then there's identification,
which is the categories we belong to,
the tribes that we associate with,
the consumer segments we are a part of.
And that's what is being used by the tech companies
and the platforms to sell us stuff,
to shift our views, to change us into a version of ourselves
that is more predictable and clear-cut
than we might actually be.
And that has an effect.
And when identification becomes identity,
that is really bad.
Like that means we have lost the plot
and we are no longer in charge of our own lives.
Right.
Back to the algorithm subject for a second.
You, in addition to these conversations
that you have with Susan,
you introduced PewDiePie into the equation
and kind of show the history of this guy's ascent
to becoming the emperor of the internet.
And it's really the story of a young man grappling
with the responsibilities of so many millions of people
paying attention to what he says
and making mistakes in real time
and trying to understand where he's gone awry and where that responsibility rests.
And there's a parallel with Susan in that,
she being like the 16th employee at Google
from the very beginning
when YouTube was just cat videos, et cetera,
and then inheriting a responsibility
that nobody could have predicted
and grappling with the very real world problems
of how to police is the wrong word,
but manage that responsibility.
And it just seems like an almost impossible task
for a team of people to manage,
whether with some aspect of automation or not.
And I just don't know with more and more people
uploading every day and the billions of hours
that are on there, how that can be accomplished.
And of course, from an ethical perspective,
how much of that should be managed or policed?
Yeah, there's a lot in what you just said.
And I think that you're right to sort of point out
that the task that these companies have set for themselves
is really an impossible one.
I mean, to responsibly curate billions of hours of video every day
is probably impossible.
And you can get better or worse at doing it,
but you're never going to be perfect.
And I think that the thing about algorithms is that they have goals, right?
You have an algorithm and you are trying to use that algorithm to maximize some variable. And with YouTube, that variable has actually changed over the years. So at first it was, we're trying
to maximize the number of clicks. Like we want people to click on videos.
And so they implemented this algorithm and it worked for a while.
And then people started to game it with these like clickbait titles and they would have
these sensational thumbnails and then you'd open a video and it would be, you know, a
guy, you know, scratching his armpits for 45 minutes.
It didn't have anything to do with what the title was. And so then they
changed the variable to watch time. And they said, we want people to watch YouTube for as long as
possible. And so the algorithm was trained and calibrated to produce watch time. And that worked.
Like it was a really good algorithm at producing watch time. Unfortunately, one of the ways that
it produced a lot of watch time was by steering people into these extremist communities. And so that is a side effect that I
don't think they anticipated. And that, you know, I talked to one guy who was a former YouTube
engineer, Guillaume Chaslot, who was part of the team that developed the AI. And he sort of brought
up some of these concerns. He said, I think we're only showing people
one side of the story.
We're feeding them conspiracy theories
and he was kind of ignored and eventually got pushed out.
And so I think that the goals that we give our algorithms
are so, so important.
And it doesn't have to be watch time.
It doesn't have to be clicks.
It doesn't have to be outrage.
It doesn't have to be engagement. It could be something else. And I think the tech companies need to adjust what
they're aiming for. And I think good things could come of that. So that's an optimistic bent.
I'm mostly optimistic. I mean, I know I write about horrible things happening on the internet,
but I actually am. Like, this surprises people, because my Twitter feed is all QAnon
and extremists and neo-Nazis.
I deal in a lot of really gross stuff
happening on the internet.
But I'm optimistic about people.
I'm not optimistic about all people,
but I'm optimistic about if you give people good information,
they will make better choices.
And I think that one thing that we
often don't realize is that algorithms
are just people.
Algorithms are not some magical detached thing.
They are built by humans who give them goals,
who tell them what to maximize,
who calibrate them and train the models. Cathy O'Neil, who's a great
writer on this subject, says that algorithms are opinions embedded in code. And I think when we
start to realize that these algorithms aren't some iron law of the universe, that they can be
changed and adjusted, it becomes a lot more hopeful because you start to realize like how it is right now
isn't how it has to be.
Sure, but at the same time,
there is a fundamental misalignment of incentives
across all of these big social media platforms
in that when they're oriented around maximizing watch time
or keeping you on the platform,
it's generally for the purpose of serving you up
as many ads as possible
so that they can ratchet up their ad revenue.
And until or unless there is a new
sort of conceptual orientation
around what a social media platform can and should be,
where the incentives are set up in a healthier dynamic,
we're gonna continue to see
the kind of bad actors and gaming of the system
that we've seen today.
Totally, and that's why I think we need new incentives
for these networks and for the networks
that are going to replace them.
I mean, every social network gets replaced eventually, right?
One of the most predictable things in the tech world
is that every company dies and is replaced.
Every MySpace has a Facebook.
Every Friendster has a MySpace.
And so I think what we're trying to figure out right now
is what's going to come after our current social platforms
and how will they be different.
And my hope is that seeing how the last generation
sort of screwed things up
and how long it took them to fix their problems
will instruct the people building the next generation.
And maybe they'll still make mistakes
and this is still very much a work in progress.
And I think some of this is just us evolving or adapting to our new environment technologically.
But I'm hopeful when I see people
making more thoughtful decisions at these companies.
Not that necessarily those companies will become non-profits
and give back all the money that they made
from promoting extremism or whatever.
But just that people are at least aware
that these pitfalls are out there.
Yeah, sure.
I mean, I suppose the current test case
in the Petri dish right now is Clubhouse.
And this is something you've reported on
and made a very keen observation,
which is that all of the things that we traditionally see
in the arc of a social media platform
are all kind of happening simultaneously with Clubhouse.
Like it starts out like this utopian place
where everybody's sharing ideas and it's wonderful
and then enter the bad actors and then, you know, okay,
so what are the community guidelines gonna be?
And then how are we gonna monetize that?
But this is all occurring with a very compressed timeline,
which I think is interesting in that it elucidates
like how fast everything is moving now.
Like the fact that this life cycle
is occurring almost instantaneously, whereas with Facebook,
it took years and years and years for this kind of dialogue
and progression to unfold.
Totally, this is all happening so fast
and like Clubhouse isn't even technically released yet. Like it's still in beta, right?
But it's already been through like the first 10 years of Facebook's problems.
And it's now sort of trying to figure out like where to draw the lines. But I think Clubhouse is really instructive to me
because I don't know how much time you spend on there,
but I spend a fair bit of time on there.
And it seems to me like therapy for people
who have been traumatized by other social media platforms.
It's like group therapy where you're like,
okay, what did we just do to ourselves
for the last 10 years?
I was in a room the other day and the topic was
why we hate our Twitter personas.
A lot of people, I think, have spent years
crafting these personas that aren't even really them
and that they grow to hate because everything
has to be a little sharper for the retweets
and you've got to throw the jabs in there and like start fights. And like, maybe you're
saying things that you don't even really mean. And you look at them later and you're like,
why did I do that? And I think there's this growing feeling of,
I don't like who I show up as on the internet, on social media.
And so Clubhouse to me feels like a response
to that piece of where we all are right now.
Yeah, that's cool.
I mean, I've been a bit of a lurker there.
I have yet to chime in on anything.
I'm gonna host my first Clubhouse chat
in another week or two.
But I've just been trying to sort of listen in like yourself
and figure out what's going on.
And it does seem to me fundamentally different.
There is something about the participatory aspect of it.
It's like being on a conference call
with a whole bunch of people where everybody gets to have their say.
The fact that it's not recorded
and then disappears afterwards
feels like a pressure valve release on the idea that everything I say has
to be finely honed and directly on point. And it seems to be supportive in general,
like everybody conducts themselves respectfully of each other. Maybe there's rooms where that's
not occurring, but that's been my experience today. Yeah, I mean, to me, it feels more human.
It feels like we are more complicated people
on Clubhouse than we are on Twitter,
where we're all sort of flattened down
into these like two-dimensional avatars
that just like fight with each other all the time.
You know, I've had people who like
call me the worst names possible on Twitter,
and then they show up in a Clubhouse room
and they're perfectly pleasant and cordial.
And it's like, what happened to that person?
Where did that guy go and why is he not calling me a communist?
And that's not to say Clubhouse doesn't have problems.
There's tons of harassment and hate speech.
It's got all the problems that any big social network would have.
But I do think we're starting to see new models emerging
that maybe allow people to be a little bit more human.
Having a conversation voice-to-voice
is so much different than having it avatar-to-avatar.
And I think it's better, and I think it's more humane,
and I think it leads to better places.
And so I think that's what the next generation
of tech platforms are gonna be built around
is like, how do we show up as ourselves on the internet
and get away from these kind of two-dimensional caricatures
we've created?
Sure.
What's your sense of how it will impact podcasting?
I don't think it's the same thing.
I mean, there's some overlap,
especially the kind of more produced Clubhouse rooms where you have hosts and guests.
There's sort of a little Venn diagram overlap there,
but it's so different.
And podcasts are wonderful
because you can really go deep on a subject
Clubhouse I found is more like
going to a bar
and one minute you're talking about
video games and the next minute you're talking about crypto
and it's just this sort of moving
conversation that is kind of fun to be part of. My big thing about Clubhouse is I don't know
whether it's going to outlast the pandemic. I think that when people can go outside again and
talk to their friends and go to bars, like I can't see myself spending three hours a night on my
phone listening to some guy talk about NFTs for artists.
Well, I think there will be some breakout Clubhouse stars
and those are gonna be the people
who are showing up consistently
to host a very kind of curated experience.
So it's gonna be less about the meandering nonsense
and I think more targeted.
And with that, you'll see monetization,
whether it's subscriptions or ad supported shows.
So I think it will not only survive, but thrive.
But I think the early entries
where it's just a bunch of people talking about nothing,
I do agree.
I think that people will tire of that.
I mean, that was a lot of podcasts too
in the early days, right?
It was just like people talking about nothing.
It's a lot of podcasts now.
It still is.
There are like 2 million podcasts,
so there's plenty of that.
Right.
So I think as platforms mature,
like the bar kind of gets higher,
you need to show up with a little bit more fire
to make a dent.
But I think right now Clubhouse is still in the weird experimental phase,
which I love.
I have no interest in the boring business Clubhouse rooms.
I love the weirdos.
There's this group the other day who was making,
it was called Whale Moan,
and it was all people just moaning like whales in a Clubhouse room.
There's this group
called the Cotton Club
where you go in and you change
your avatar to a black and white photo
and you pretend like you're in the 1920s
and someone says,
hey Rich, what are you having tonight?
What are you drinking?
And there's jazz playing.
It's sort of a fun virtual world.
So I love the weirdos who are experimenting
with stuff on there.
Yeah, cool.
All right, well, one more thing about Rabbit Hole
before we move on.
The last two episodes focus on QAnon.
And for a lot of people,
I have to believe that was perhaps
their introduction to the world of QAnon.
And since the time the podcast was first released,
it has of course metastasized into this whole thing.
You have written about your perspective
on the current state of QAnon,
but because so much occurred in the aftermath
of completing Rabbit Hole,
I do think it would be good to hear a little bit
about your perspective on QAnon now
and also whether there is a plan to do a season two.
Like there's still so much to be mined here.
Like I would love for you guys to continue this project.
Yeah, it's, well, I'll talk about the QAnon piece of it first.
I mean, what drew me to QAnon,
I started off in journalism as a religion reporter.
I spent a semester undercover at Liberty University,
Jerry Falwell's super right-wing Christian university.
And I came from like a very secular home,
very liberal, secular.
And so I was just like fascinated by the existence
of these like right-wing evangelicals.
So I went there for a semester, took all the classes,
you know, sang in their church choir,
lived in the dorms, wrote a book about that.
And that was sort of my introduction into journalism.
And when I started hearing about QAnon a couple of years ago,
it really triggered the like,
oh, this is a religion thing for me.
It was like, there are all these people,
they believe in this like narrative
about how the world operates and who's pulling the strings
and like these powers beyond our comprehension.
And they're following this mystical character on 4chan
who's dropping little clues
that they're all sort of interpreting together.
And it's filling a sort of community need
in a lot of their lives that I don't think people appreciate.
People think QAnon is just a bunch of wackos
being insane on the internet.
And there is some of that,
but it's also become people's social lives.
They're making friends.
It's an activity that they do together on the internet.
It's a lot of fun for them and so I felt a real parallel between that
and the religious reporting that I'd done earlier in my career
and so I just decided I've got to investigate this
so I spent a long time talking to QAnon believers
hanging out in chat rooms
listening to their conversations
reading up on the movement
and we did these two episodes about QAnon.
And then after that, like at the time,
QAnon was sort of like this novelty thing.
It was like, oh, look at these interesting weirdos.
Like they're maybe a little violent sometimes
around the edges, but like mostly, you know,
it's just strange and troubling,
but like maybe not an immediate threat.
And then they marched on the Capitol, right?
Like then it was like the QAnon riot at the Capitol.
And, you know, QAnon was a big part
of the stop the steal thing.
And suddenly it was like,
oh, this is not just like a cult of internet weirdos.
This is like an extremist political movement
that is like creating real offline violence and harms.
So yeah, I mean, I think there's so much more there.
I'm still fascinated by QAnon.
I think the movement is kind of dissipating
because Trump isn't the president anymore,
and the whole fun of it
was that Trump was the president.
And so now I think a lot of their predictions have not come true. There are no mass arrests
of satanic pedophiles in government. And so I think a lot of people have sort of quietly moved
on. Right. And as far as season two, you should talk to my bosses at the New York Times.
But I think it would shock you how hard it is
to make something like that.
Oh, you don't have to tell me.
You know what I mean?
I can't imagine,
like I have such an appreciation
for the production quality of what you guys created.
I mean, that is a Herculean effort to pull that together
and to kind of create the experience that you created.
I know how much work it is
just for me to do simple conversations with people.
To do what you do is an entirely different thing.
Like I understand that that requires
a tremendous amount of time and resources
and money, et cetera,
but it's tremendously valuable.
Thank you.
I'm happy to call your bosses.
Yeah, please do.
I mean, I was shocked by that too
because I come from the world of like writing,
which is like time consuming
and requires a lot of thoughtfulness.
But I think podcasting, that kind of podcasting,
the kind of hyper produced narrative podcast
is actually more like making a film
than writing a newspaper article.
And I didn't appreciate that.
Like you gather hundreds of hours
of tape and then you cut it down into
36 minutes.
And it's like all this work
and then you're like, that's it?
Eight episodes?
27 minutes.
But I think it was a ton of fun.
Very rewarding.
I love podcasts as a medium
and I hope I can keep doing them.
So yeah, if you wanna call up my bosses,
I would not turn you down.
Cool.
Well, let's talk about the new book.
First of all, it's pub day for you.
So congrats on that.
Thank you.
That's very exciting.
It's not that much of a leap, given where your focus has been,
from talking about algorithms,
which is by its very nature
a discussion around automation, to, you know,
the broader conversation around AI and what the future
and the present looks like in terms of our personal lives, our careers, and what our
communities look like. And what I appreciate about the book is that you approached it, well,
first of all, it's like part primer on what AI is
and what the nature of automation truly looks like.
But then it's also this sort of prescriptive manual
on how to deal with all of this.
It's like, it is in many ways a self-help book,
which is very cool, but it's laced with nuance.
It's neither utopian nor dystopian. Like
you're really trying to grapple with all of this in the most objective way. So maybe we could start
with just, you know, what prompted you to want to write a book about this? Like what was the question
lingering in your mind that led to this work? Well, it was literally a self-help book.
I mean, I was mostly asking,
is this going to happen to me?
Am I going to lose my job to some algorithm?
Am I turning into a zombie
who just scrolls through his feeds all day
and then goes to bed and wakes up and does it again?
I was really worried about what I was becoming
as a result of these algorithms and this automation
and this AI and what my future was going to look like.
I go to a lot of tech events for work
and I talk to a lot of people in the industry.
This was a few years ago and I was really shocked
by how simplistic and theoretical
the conversation around AI and automation was.
There were people who said,
all of this is terrible.
Robots are going to take all the jobs.
We're going to be unemployed and homeless
and it's going to be awful.
We're going to be the people in WALL-E
driving around on our chairs,
drinking Big Gulps.
And there are other people who think
this is going to be amazing
and AI is going to
help us discover new cancer drugs and fix the climate crisis. And those were sort of binary
opinions, but there wasn't a lot in the middle. And there also wasn't much that felt practical.
Like, what do I actually do? If I am working in a job that is likely to be automated, how do I know if I'm in the risk zone?
And if I am, what can I do?
Do I go back to school? Do I learn to code?
Do I start sucking up to Alexa
so that when the robot revolution comes,
I end up on the robot side?
What is the actual action item here?
And so I couldn't find that.
I kept just trying to find anything
that would help me deal with this problem
and it didn't exist.
And so I decided to write it.
So the first half of the book is the kind of diagnosis part.
That's like, what is actually happening
with AI and automation?
How is it changing the world around us?
It's not about the future, it's about the present
and the ways that these things are showing up in our lives
and what it's doing to us.
And then the second half, as you said,
is the more prescriptive part where I'm telling people like,
here are nine things you can do to future-proof your life,
your career, and your community.
So let's talk a little bit about what's happening in the present
because I think it sort of belies this conventional notion that AI or automation is all about robots on the factory floor.
And it's something we're largely unaware of,
but has real kind of existential implications
in terms of the workforce,
not just with blue collar workers,
but now very much white collar workers.
Yeah, I think we still have this outdated notion
that automation is something that happens
to people in car factories,
and that it comes in and you come into work one day
and there's a machine there and sorry Rich,
your job has been automated, have fun finding new work.
That's not really how it works anymore most of the time.
And that's not who it happens to most of the time.
Most people who are being displaced by AI and automation
are not manufacturing
workers. They're people in retail, they're people doing clerical work, and they're increasingly
white collar professionals, like doing the kinds of things that you and I do. People in sales,
people in finance, people in medicine, people in law. There are now algorithms that can diagnose certain types of cancers more accurately than
human radiologists. So it's not just like truck drivers and, you know, McDonald's cashiers and
factory workers. And so I think that requires us to sort of update our mental model of what this
stuff is and how it's going to affect our lives. Because for a long time, this was sort of like people like me,
you know, thought this is someone else's problem.
And with that, you know, from my perspective,
it's almost as if nobody is immune, right?
Like you talk about how, you know,
one of the kind of catalysts for this was understanding,
you know, early in your career, when you were a financial reporter, that a lot of those articles
are now generated by AI. There isn't even a journalist that writes these financial report
documents. So if that could be the case, then, you know, is anybody safe? And, you know,
was it last week that there was the whole Tom Cruise deepfake thing on TikTok?
Like even Tom Cruise is not immune
from being automated into movies.
So it is frightening and it does beg the question of like,
what are we gonna do about this?
But at the same time, kind of balancing out against that,
you also see these AI generated screenplays
that are terrible or the point of sale machine
at the grocery store that never works right.
Like a lot of this stuff is super janky.
Like it feels like it's super futuristic
and also completely ineffectual at the same time.
Yeah, there's a great concept
that I got from these two economists,
Daron Acemoglu and Pascual Restrepo,
who write about automation
and its effects on labor markets.
And they talk about this concept
of so-so automation,
which is sort of the janky automation
you're talking about.
It's like the grocery store,
you know, self-checkout thing
or like the call center,
you know, you call for customer service when you need to renew your insurance or something.
And it's the kind of automation that actually isn't much better, if it's at all better,
than the humans it replaced.
But it is cheaper and it allows companies to reduce their costs.
But it doesn't make them... In the past, what's happened is we've gotten technology
that displaces some old jobs, but it also creates new jobs.
So the car, the automobile displaced a lot of carriage drivers
and people who sold buggy whips and blacksmiths who made horseshoes,
but suddenly you could get a job as a chauffeur or a car dealer or a mechanic.
And so that's kind of the way that we've been able
to kind of keep the economy running,
even as people get displaced.
But these new kinds of automation
that we're seeing a lot more of,
these kind of so-so forms of automation,
they're basically just good enough to replace humans,
but not good enough to enable other kinds of work
and to grow productivity and to grow the economy.
And so the solution is actually not less automation,
it's better automation.
Right, you talk about the productivity paradox.
Like if these innovations in AI were so good,
then how come we're not infinitely more productive
than we were before?
We're kind of the same.
And what's qualitatively different about this moment
is that the AI that we're onboarding at the moment
isn't having that reciprocal impact on creating new jobs.
It's eliminating jobs, but we're not seeing, you know,
the opportunities that generally come with new innovations.
And that's kind of the hearkening
of something very different.
Totally, yeah.
There's the whole industry
that's grown up in the last 10 years
and it's called RPA.
It's Robotic Process Automation.
And this is basically companies that sell software
to replace humans in the workplace.
So they're companies you've never heard of,
like UiPath and Automation Anywhere and Blue Prism.
They're kind of these faceless B2B software companies.
But they're huge and they're making tons of money.
And the way that they're doing that is by selling Coca-Cola
or Walgreens, a software package,
and saying, we can do with one bot
what used to take 20 people in billing to do.
And so executives love this stuff
because it's cheap and it's easy and it's efficient.
And even if it's not quite as good as the humans
who used to do that job, even if it makes some mistakes
and you have to kind of come in and correct it,
you're still lowering your costs.
And so that's what they're going after. And that industry, I think, flies under people's radar because it's not
flashy. It's not, you know, Siri and Alexa. It's not flying cars. It's not, you know, self-driving,
you know, Ubers. It's this very basic office automation software, but it's had a huge impact
on the labor market. Right. And of course, the pandemic has
presented the opportune moment to implement a lot of these automated systems while everybody's
working from home and to do it in kind of a guiltless manner. So talk a little bit about how
COVID has accelerated this process. Well, COVID has really transformed the way
that executives feel about automation.
That has been the biggest thing.
I mean, there are certainly companies
that have used automation to kind of get through COVID.
So meatpacking plants, FedEx, shipping companies,
they've had to sort of bring in robotics and automation
just to be able to keep up with demand
while some of their humans are out sick.
But there's this other piece of this,
which is that I think for a long time,
executives knew that they could automate people,
but they were hesitant to do it
because they didn't want to be seen as job killers.
They saw what happened to executives in the 90s who outsourced
jobs to India and China. They became villains and, you know, they were sort of the bad guys
in global capitalism. And so I think a lot of them sort of hesitated. They said, well, you know,
let's try to do a little bit of automation, but like maybe not the kind that's going to replace
anyone and we'll sort of play it safe. And then COVID happened. A consultant who
helps companies do this kind of automation described it to me this way: the executives are basically saying,
we don't care anymore. We just need to do what we need to do for our business. And he
said, it's made automation a lot more politically acceptable because tons of people are out of work,
businesses are shutting down.
And so people I think are a lot more sort of forgiving of companies that need to make these changes right now.
So the floodgates have kind of been opened in a way.
Yeah, yeah, yeah.
Before we get into the prescriptive aspect of all of this,
was there anything that surprised you
in all the research that you've done?
I mean, obviously you're very tech fluent
and savvy going into this,
but what was something that you didn't expect
that you learned?
Well, I had sort of bought this narrative
that I had heard from people in Silicon Valley,
which is that basically every time
there's a big tech transformation,
whether it's the first industrial revolution
or the invention of electricity
or the factory automation of the 1960s and 70s,
that this has gone pretty smoothly.
That's the sort of story that people tell out here in the Bay Area.
It's like, yeah, there were some farmers who had to learn how to work in factories.
But like overall, this improved everyone's lives.
People were happy that they no longer had to do the old jobs.
And so I went back and I actually read a lot of the primary sources, like people's journals from the Industrial Revolution, things like that.
And like it sucked for a lot of people.
This was not a happy time in a lot of workers' lives
because all of a sudden they had to learn new skills,
go to a new place to work,
deal with these awful working conditions.
There's child labor, there's exploitation on a massive scale,
and workers' wages actually didn't reflect
the increased productivity.
Corporate owners and factory owners
were making a ton more money because of all the automation,
but workers weren't actually seeing a lot of that.
And this happened again with electricity
and again with factory automation in the 1960s and 70s.
So I think there's a great book by this economist named Carl Frey called The Technology Trap.
And it's all about how we've kind of invented this history where technology just comes in
and there's a little bit of displacement.
It's a little rocky for a few years, but then everyone moves on and we're prosperous and happy.
And technology is just sort of the grand march of progress.
And he basically says like,
that's not really how it goes.
And his line is something like,
if this is just another industrial revolution
we're living through,
then like alarm bells should be ringing.
Like we should be really freaked out.
Right.
Yeah, it's terrifying.
But there is a hopeful, like I wanna make it clear
that like this is not a book that's like all doom and gloom
because I don't actually think we are screwed.
And in fact, I think that we have,
we know from history and from just looking at our environment,
like what we need to do to survive
and to be happy and successful.
This is not destiny.
We don't have to end up suffering in the same way
that people have when they've been displaced
throughout history,
because we actually know what to do to prevent that.
Right, and the solution,
the thesis that you lay out in the book
for how to make yourself, I don't know, bulletproof,
how to protect yourself against this rising tide,
is that the answer resides in being the most human
that we can possibly be.
It's about cultivating our inner humanity,
which is a thesis in contradiction
to a more conventional narrative,
which is if you wanna be competitive in this world,
it's all about STEM, you gotta learn how to code,
you have to be computer proficient.
But those, as you point out in the book,
like all of those pursuits are the first to be automated
and a terrain in which a human is very ill-equipped to
compete with a supercomputer. Yeah, that's a really perceptive reading. And I think that's right.
You know, for a long time, we've been training people to effectively be machines. We've been
telling them, you know, major in engineering, learn to code, optimize your life. If there's anything
that's inefficient or suboptimal about your life, take that out and get as much done as you can.
All this hustle culture we see is very much a product of that sort of attitude about what's
valuable in the economy. And that was true for a long time. You could differentiate yourself through hard work and hustle and just pure effort.
But that's no longer the case.
That's not what's happening right now.
And we're starting to see that the skills that are actually in demand right now
are these kind of what we would pejoratively call soft skills.
It's things like communication and empathy and leadership and courage.
The things that machines can't do,
you're never going to be able to out-hustle an algorithm.
It's just not possible.
They can work 24 hours a day
and do millions of calculations a second
and you're just never going to keep up.
And so the value is going to shift
from the sort of effort-based things
to the kind of emotional and social things.
And in the book, I talk about three sort of categories
of work that I think are gonna be really resistant
to automation.
And I think that's sort of where people should go.
Does that have to do
with being surprising, social, and scarce,
which is basically your first tenet in this list of nine things?
Yeah, you got it.
So those are the three types of work that I think is going to be hard to automate.
And this was drawn from interviews with tons of AI executives.
And this was not just me sort of sitting in my office like dreaming up a strategy.
This is based on a ton of reporting.
And the things that I asked AI experts were basically,
what are humans good at that machines right now are bad at?
And they told me that that basically falls into these three categories,
surprising, social, and scarce.
So let's just take them one at a time.
So surprising is work that involves irregular environments,
changing rules,
lots of variables, things that aren't predictable or regular. Like AI really likes rules and bounded environments and structure. And so that's why it's really good at playing chess, for example,
because chess, every game of chess has exactly the same rules. But if you, you know, train an AI to
be a kindergarten teacher,
it's not going to do so well.
It's going to be pretty bad at that.
And that's because there's so many environments,
there's so many variables,
there's so much going wrong at any given time.
You have to be really adaptable and flexible to do that job.
And so for now, that's human work.
Social jobs are jobs that involve making people feel things rather than making things.
So this is, you know, all the sort of compassion jobs
we think of, social workers, home care workers,
ministers, therapists, people like that.
But it also includes things that you wouldn't think of
as being social jobs, like baristas, for example.
On one level, that's a job that got automated a long time ago.
We have machines in our houses
that make perfectly good coffee,
but we still like to go to, I mean, pre-COVID,
we still like to go to coffee shops
because there's a human element there. We're
getting something out of that transaction that's more than just a cup of coffee.
So those kinds of jobs are going to be hard to automate because machines aren't that good at
making us feel things. People are much better at that. And then the final category is scarce jobs,
which is sort of people who have rare skills, are excellent in their fields,
people who are sort of working in low-fault-tolerance environments.
So people, you know, like EMTs and people who answer the phone when you call 911,
those jobs are going to be very hard to automate.
When I think about you and your career, I see somebody who created a situation
in which you differentiated yourself from your peers.
Like on some level, you've created a personal brand
around the kind of beat that you're on
and the stories that you tell.
And I just know as somebody who follows you,
yes, I have reverence for the New York Times
and that means something,
but I'm not opening up the New York Times app
to see what the New York Times has to say about subject X.
It's like, I wanna know what Kevin thinks about this,
or I wanna know what Taylor Lorenz thinks
about this other internet thing,
or whether it's Nicholas Kristof or David Brooks
or whoever it is,
this idea that you inject your work with your personality
and that becomes so embedded in what you do
that some AI could cover a subject matter,
but it wouldn't have the imprimatur of what you do. And so you create career safety for yourself
by carving that out. Yeah. And I want to be clear, like this was not an intentional strategy on my
part. Like I did not think like I am going to run from the robots by starting podcasts and writing newsletters
and being on Twitter all the time.
But I think a lot of us in industries
that have been challenged by automation, like journalism,
have subconsciously adapted in these ways
that make sense when we look back at them.
So yeah, I did used to write formulaic stories.
I wrote the most boring corporate earnings reports
you could ever imagine.
Alcoa made $7 million in its smelting division last quarter.
That's the kind of stuff I was doing early in my career.
And then I sort of started realizing,
I need to kind of put a little bit more of myself
into my work because otherwise I'm replaceable, not necessarily by an AI, but just by another person. And so I
started infusing more of my personality into stuff. And I think that's what's happening in
the media writ large right now is you're starting to see the kind of, it's almost like moving from
like an industrial economy to an artisanal economy.
It's sort of like we used to be these factories,
you know, newspapers used to be these factories
that churned out, you know, like pink slime essentially.
And like all content was sort of the institutional
like voice from God.
And it was like, it didn't matter who really,
who the bylines were,
because that wasn't really what people opened it for.
And now you're starting to see kind of like
what I call like artisanal content making,
which is like, I am one person, you are following me,
this is my opinion, this is my reporting.
I'm not gonna hide the fact that I am a person
who made this.
And that's, I think where we're headed,
not just in journalism, but in a lot of industries.
Right, and it plays out not just in individuals,
but in corporations as well.
Like you wanna find that cool shop
that's on the main drag of the groovy town
that is not part of the pottery barn gap industrial complex.
And that's becoming harder and harder to do.
So then we then develop a greater sense of value
around those things.
Yeah, there's a whole chapter about this in the book.
I did a lot of reading around the psychology of value
and why we value certain things more highly than others.
And there's a concept in social science
called the effort heuristic.
And basically what it says is that we attach value
to things that we think people work really hard on.
So they've run experiments.
There's a professor at UNC who's run all these experiments
where he gives people a bag of candy.
And he says, this candy was randomly selected for you.
And people eat it and they give another group of people
the exact same bag of candy,
but instead of saying this was randomly selected,
they say, a human picked this out for you,
especially based on your tastes.
And they love it.
It makes a big difference in their appreciation of the thing
when they think that someone else worked hard on it.
So I think this is a really good roadmap for where the economy
and where our jobs are headed is I think that we have this tendency
to want to say, we want to sort of remove ourselves
from our work and sort of the biggest compliment
you could pay someone for a while is like, oh, he makes that look so easy.
But I think what we're moving to is an economy in which people who,
you know, people who say, you know, oh, he worked so hard on that. Like that becomes the thing that
is valuable. Everything that is done by machines will become cheap and inexpensive and easy.
And everything that's done by humans will be valuable and hard and will be rewarded for that hardness.
Right, I don't necessarily want the coffee mug
that I can buy at Target,
but I really want the one that Seth Rogen made
in his house on his pottery wheel.
I love Seth Rogen's pottery.
I'm sort of obsessed.
Yeah, no, and I think that's like,
so one of the principles, one of the rules in the book,
in the second half of the book, is leave handprints.
Because I think we've gotten accustomed as sort of workers
to kind of trying to make everything we do perfect
and look effortless and take away all traces
of the fact that we struggled over something.
But I think what we should be doing
is really emphasizing the humanity that goes into something,
emphasizing that we are people who are making things,
who are doing labor in the world,
and that that's what makes it valuable,
that we're not machines.
And there's a great talk by this guy, Yann LeCun,
who's the head of AI research at Facebook.
And he uses the example of a flat screen TV
and a ceramic bowl.
So kind of like your Seth Rogen mug.
And he says, you know, flat screen TV
is like an amazing piece of technology.
It's got, you know, hundreds of parts.
It's got rare earth metals.
It's got lasers and it's entirely made by robots
start to finish.
And as a result, like you can get a pretty good TV
for a couple hundred bucks.
But a ceramic bowl that's made by like a talented artisan,
you know, technology that's been around
for thousands of years,
that's gonna cost you probably more
than the flat screen TV.
And the reason for that is not
because it involves more labor, more technology,
it's because it's done by a human rather than a machine.
And he thinks that's gonna be the wave of the future.
And I sort of agree.
What's really interesting about that is,
is that we're seeing that sensibility
getting played out in the digital space now with NFTs.
Like isn't an NFT a digital version of the one-off,
you know, piece of pottery.
Totally.
And that's, I'm actually,
I have some questions about NFTs.
I'm actually writing about them right now.
I'd love to know what you think about them.
Cause I think that-
I mean, I'm still, I barely understand it at all.
You haven't paid 200 grand for a GIF of Michael Jordan.
Yeah, I mean, I think there's something to this idea
of creating scarcity on the internet.
Because what enables the artisan
to charge hundreds of dollars for that ceramic bowl
is that there's only one of them.
And you can't duplicate it a million times for free.
And so I think that's-
You could, sorry to interject,
but you could 3D print it
and create an exact replica of that
in the way that you could copy a JPEG or a GIF,
but there's still the understanding
that it's not the original.
Right, and there's some friction involved in that.
It's not a simple process
or it's not free and infinite
the way that copying a file is.
So I think there is value in creating scarcity, a model for scarcity on the internet. I don't know
whether NFTs are it. There's been some reporting on sort of the environmental questions around
NFTs and whether we should be burning all this carbon to produce these things. I love, there was a tweet the other day that said like,
yes, I left 11 Ford F-150s idling for 15 hours,
but now the computer says I own a drawing of Spider-Man,
which I loved.
But I don't understand why so much carbon
has to be burned for these,
but obviously there's a reason
that has to do with blockchain
and stuff that I don't understand.
Right, so yeah, I'll figure that out.
I'll come back and tell you all about it.
But I'm actually like,
I'm glad that people are working on this
because I think that one aspect,
one precondition for this kind of change in the economy
where we're moving away from mass production
and toward sort of artisan production
is that you need to be able to protect scarcity
and protect artists and get them paid
for doing these kind of specialized works.
And so that right now that doesn't really exist
on the internet.
So I'm hopeful about NFTs.
Maybe I'm a sucker, but I feel like it could be good
for people like us.
Well, I think we're just at the starting gate
of something that I think could fundamentally change
how we think about ownership and creation.
We're seeing it now with JPEGs and pieces of art,
but it doesn't take a huge mental leap
to see how that impacts essentially everything
that gets created.
I mean, Kevin, you could write a short story or a book,
make that an NFT and do a limited run.
Like anything that you produce that is of you
can be valued in a way that's very different
than how we think about ownership
and demand currently.
Yeah, I'm thinking about it.
Should I turn this book into an NFT?
I think you should.
I think that would be, well, I mean,
you're the perfect person to do it, right?
Like you could be, it could be like this meta
Charlie Kaufman-esque, you know, sort of adaptation experience
of like exploring the world of NFTs
through a, you know, a work,
a written work that is itself an NFT.
Yeah, I'll have to talk to Random House.
I'm not sure how copyrights work in NFTs,
but I might get a call from the lawyers.
I seriously doubt the publishing houses
have any concept of this right now.
You don't think they're talking to Malcolm Gladwell
about his next book-length NFT?
I don't know.
They should be.
Whether they are or not, who knows?
I mean, I've thought about it.
I'm really intrigued by this
and I think it could be the answer to something
that artists and creators have been looking for for a long time
because right now the way that artists and creators get paid
most of them on the internet is that they put their content
into these giant machines run by algorithms that we call
YouTube and Instagram and Facebook and Twitter
and the platforms sell ads against them
and then they kick back sometimes some small percentage
of what they're making
to the creators who have no control over what that percentage is
and no control over the algorithm
and they are basically like serfs
in the feudal empires
that these platforms have built.
And so I think any model that allows people
to cut out those middlemen is a good one.
And I probably shouldn't say that because I work for a middleman.
I'm part of the industrial content model here at the New York Times. And I think there's a role
for institutions and I'm not a burn it all down, mass media is over. I don't think that's true,
but I do think it's good for creators
to have different models of getting paid.
Yeah, 100%.
But you do have one thing in the book,
which is arm the rebels.
So there's a little bit of a punk rock aesthetic in here
of rise up against the machines.
Yeah, I mean, it's hard to talk about this stuff
without sounding like I'm saying technology's bad
and automation's bad and all this stuff is bad
and things were better before.
I'm not nostalgic like that.
I grew up on the internet, I love the internet
and I want it to be the best version of itself.
And I also think that the way that our internet is arranged right now is like,
we're not doing so good. And so I have this, in the book I talk about these two people
in the 19th century. One is Henry David Thoreau, who wrote Walden. And he was sort of the grouchy
tech-phobe of his day. He was like, this stuff all sucks.
And it's, you know, I'm moving to the woods
and I'm gonna write, you know,
I'm gonna write about my transcendental life in the woods.
And there was also this other woman very close to him
who was not as well known, but her name was Sarah Bagley.
And she was a labor organizer.
She actually was one of the people who worked in the factories in the Industrial Revolution
and became a labor leader.
And she advocated for, not for the,
she wasn't a Luddite, she wasn't breaking machines,
but she wanted workers to be treated better.
She wanted the machines to produce more prosperity
for the people who were operating them
and not just for the owners.
And ultimately,
she became America's first female telegraph operator, which I love. Like, she was sort of
this groundbreaking person who was part of this technological revolution. And she did, took the
opposite approach. She didn't run away from the technology. She leaned into it and she tried to
make it fairer and better and more equitably distributed.
And like, I think that's the model I'm getting behind here.
It's not this stuff sucks and we should all, you know,
throw our phones in the nearest body of water
and go live in the woods.
Like, I think we have to engage with this
or else, you know, it shapes us
and we don't get to shape it.
Right, if you don't engage with it,
then you truly do become a Thoreau-esque character.
And it's just not an option for navigating the world,
especially if you're a young person,
you have to be literate in these tools,
but it becomes incumbent upon you
to shoulder the responsibility of, you know,
managing your life in such a way
that they're not commandeering your decision-making
and driving you towards outcomes that you don't want.
So a big part of the book, in addition to canvassing
what the workplace currently looks like
and quite possibly will look like,
it's about tools for how to manage
your own interior life, right?
You talk about doing these digital detoxes
and learning how to sort of be in charge of your phone
rather than letting your phone be in charge of you.
So talk a little bit about what that was like,
cause you kind of did that experientially.
Yeah, so I think one hard thing,
so the message of the book is pretty simple.
It's that in order to survive this wave of AI and automation
and keep our jobs and be happy and healthy and prosperous
and good members of communities, we need to become more human.
Pretty simple message.
The hard part about that is there are forces that are conspiring
to make us less human.
And some of those are right on our phone.
And so we interact with them every day.
Today, before I came on this podcast with you,
I woke up, I checked my phone,
I saw what was happening on Twitter,
checked my Instagram feed,
asked Alexa what the weather was going to be like,
listened to a Spotify playlist
that was automatically generated for me.
Like I am not immune to these forces either.
And all of that is kind of making us less individualized
and differentiated and making us sort of more homogenous
and I think less human.
And so one of the things that I tried to do
as part of this book is do this kind of phone detox,
but it was not really about using my phone less.
Like I think we all use our phones a lot.
I'm not sure. What's your screen time like these days?
Oh, it's, I'm embarrassed to admit.
I mean, just so many hours.
I mean, it's with the pandemic it's ramped up
and you know, it's just embarrassing as somebody who hosts a podcast
about wellbeing to then get that alert on Sunday
that tells you how much you've been on your phone.
I know, I hate that alert so much for the last year.
It's like-
I'm trying to disable it, but you can't, I don't think.
No, it wants to shame you
and it wants to tell you how badly you're doing.
But I actually don't think like screen time itself is the problem.
I think the problem is that we delegate authority to our phones.
If you remember the first phone you got, it was a tool.
You used it to call people.
You might have had texting, like the T9 texting.
You would use that.
You would use it as a calculator.
You might use it to play Brick Breaker or Snake or whatever,
but it was a tool.
And then over time, they became sort of our bosses in this way.
They tell us what to think about, what to do,
what to care about, what news stories to read,
who to communicate with, how to communicate with them.
They sort of run our lives in this way that I think is very harmful.
And so my phone detox,
which I actually got a phone coach for
who guided me through this 30 day process,
like going to rehab for phone addiction.
That might be a future proof occupation.
Oh, there are gonna be phone coaches
in every town in America. Like this is going to
be, if I were a brazen capitalist, I would open up like the world's largest center for phone rehab.
And I would just have hundreds of thousands of people streaming into my rehab centers every day
because they can't think for themselves anymore.
So yeah, I think that's part of what I did
to sort of try to future-proof myself
was to say like,
it's not just about using your phone less.
It's not about being distracted.
It's not about being productive.
It's literally about like figuring out who you are
and what you actually value
and separating these categories
of like identification and identity.
Because I think our phones and the technology
in our lives want us to blur those things together.
So I try to spend at least an hour every day,
just like no phone around me.
I call it human hour.
And I go, you know, work in my garden
or I take the dog out
or I do something that is just my time
to kind of remind myself, like, I am not my digital avatar.
I am not a creature of the internet.
I actually am a person in the offline world
who has desires and hopes and beliefs and values.
And so that's sort of my daily chore is to remind myself,
like, I am not who I show up as on the internet.
Right.
Yeah, it's super powerful.
I mean, this is something that Cal Newport
has written extensively about.
Everybody should read his book, Digital Minimalism.
He's got this new book out about a world beyond email.
And I think the kind of mental leap
or what makes it so challenging,
especially for somebody like yourself or myself,
who plies their trade online to some degree,
there's a feeling or a fear of irrelevance,
or if you opt out for a minute that suddenly
you're jeopardizing what it is that you do for a living
and decoupling that, like trying to understand
that that's actually not true and that it is core
to the quality of what you do for a living
to put distance between yourself and technology
from time to time.
But it's hard.
It's hard.
No, I'm not saying it's easy.
Like I struggle with this every day
and my wife will tell you,
I am not an ideal phone-free meditator.
You have the ultimate excuse.
It's like, this is my beat.
I have to know what's going on in tech.
I have to be on this all the time.
Things are happening quickly.
This is what I'm writing about.
Exactly. If I'm up at 1 a.m. looking at TikToks, I mean, that's work. Those are billable hours. No, but I think that's sort of the story I tell myself. But in reality,
these are very powerful machines. Every time you look at your phone, you are looking at a device
on which hundreds of billions of dollars have been spent to make you
distracted from whatever else you might've been thinking about. The world's smartest engineers
are working on how to stop you from looking away from your phone. And so I think we tell ourselves
that we need to be plugged in, but actually I think there's something happening right now where like I am in awe of people
who are not slaves to their technology.
I mean, sometimes even when you read a book,
you can just tell like, oh, this person is not on Twitter.
It's almost transgressive
when you see somebody reading a book.
Yeah, and it's like, they're more evolved than I am.
Like they are not in the same,
like they are eating artisanal slow food
and I am over here like chowing down on KFC
and like we are not the same.
And so I think in the present and in the future,
there will be a real value to being able
to separate yourself from your phone
and your feeds and your technology
because that's how we get better. I mean, everything in life that makes us better,
very little of it happens through the phone. It happens through taking on personal challenges,
through struggling, through raising families, through building communities. Like that's the stuff that is really hard
and really rewarding.
Until we graduate to the ultimate virtual world
where we can just upload our consciousness
into an alternate reality
and create the utopia of our dreams.
Exactly.
Or I just wanna, I want the WALL-E chairs.
Like I wanna, I just wanna be driven around by a robot
and drink my big gulp some days.
To your law school class at Walmart.
Exactly, exactly.
If you were giving counsel to a young person now
who's listening to this, who's grappling with like,
well, what should I pursue?
Or what would be the best way to set myself up
to be future proof?
What are some of the things other than just the general,
like double down on the soft skills,
like where is your head in terms of like
where you would direct that person?
Well, I have a, there's a chapter in the book called
Learn Machine Age Humanities.
And I don't think that just studying classics or philosophy or art history is enough.
I don't think that's going to prepare people.
But I do think there are these skills that we can teach people that will help them no matter what they choose to study.
So one of them is this idea of guarding our attention.
I think that's a really underrated
skill. People who are able to kind of control what they pay attention to, what they think about,
whether that's meditation, whether it's, you know, getting off their phones, whether it's
reading books, I think that's a really important skill. I think there are other skills that we can
teach people, like the value of rest.
I mean, there's a really counterintuitive thing that I sort of came across, which is that a lot of the people I knew who were the highest ranking people in AI and engineering
and hyper, you know, hyper well-respected, thoughtful people were actually not working
like 17 hour days.
They were taking time off.
They were going home to their families.
One of them took naps during the day.
I mean, these are like the kinds of things
that we would think, oh, that person's a slacker.
They're not very productive.
But actually that's what allows them to do
the kinds of creative and thoughtful human work
that machines can't replicate.
Yeah. An amazing example of that would be Yuval Noah Harari, who goes on these extended silent
meditation retreats. And he credits those experiences in addition to his daily, however
many hours he does it every single day, with his ability to have that kind of objective perspective on really big issues and
the clarity that he can instill his writing with. Yeah. And I'm not as good at meditating as that.
You know, I top out at 10 or 12 minutes on my Calm app, but I try, and I think that's a really important skill
for people to develop.
And I also think we're starting to see
that there are classes and curriculums
that are being sort of adapted slowly,
and mostly at sort of high-end, elite schools.
But you're starting to see things like,
there's a course, I think it's Stanford,
I don't know what its actual title is,
but it goes by, people refer to it as empathy for engineers.
So they basically take people who are computer scientists
and teach them about people,
teach them how to be emotionally intelligent,
how to read people's emotions, how to talk to people.
It's sort of like remedial people skills
for super high IQ engineers.
And you're starting to see that in more disciplines.
There are medical school classes now
where they don't even talk about medicine.
It's about how to talk to patients.
It's about how to communicate with them empathetically
and to solve their problems and to hear them out.
So I think that's like the,
there's a venture capitalist, Frank Chen,
who invests in a lot of AI startups.
And the book that he recommends to people who work in AI
is this book called All I Really Need to Know I Learned in Kindergarten.
It's written decades ago by this minister.
And it's all about kind of like the basic skills of human existence, you know, sharing,
you know, like things that we learn in kindergarten
and then kind of forget.
And so his big thing is, you know,
those are the skills of the future.
And I agree.
I think that we all could use a refresher course
on the things that we learned in kindergarten.
Yeah, that's powerful.
I mean, in the most general sense, I suppose,
it's about learning how to live your life
non-reactively and more consciously, right?
To the extent that you can develop an objective sense
to an objectivity around just how much you're reacting
to technology and devices and develop the skill sets
to kind of transcend that so that you can have,
you know, the interior life and bring that into your,
not just your work, but your families and your communities.
That seems to be the antidote.
Yeah, and I think that's gonna be a big part
of how we get out of this.
I mean, especially coming out of the pandemic,
I think we're gonna need a lot of really intentional work
on ourselves because for the last year,
we've been experiencing the world primarily
through our screens.
We've sort of conflated who
we are on the internet with who we are offline. And I think we have a lot of rebuilding to do
there of getting back out in the community, getting back face-to-face with people when it's
safe to do so, like figuring out what we actually care about and value and who we are. Because I think, and Yuval Noah Harari made this point
in one of his books,
if the algorithms know you better than you know yourself,
then the power will transfer to them.
You will no longer be in the driver's seat of your own life.
These algorithms and AIs will.
And that's not a good place to be.
And I think that's something that I am conscious of every day
and trying to sort of swim in the other direction from.
Well, I think that's a good place to kind of round it out.
I love the book.
This is so fun.
It's a gift.
This is cool, man.
Well, if you're ever in LA,
you gotta come over and do it with me in person.
Yeah, I would love to.
Once I get my second shot, I'll be-
You had your first shot,
you're waiting on the second one.
I did, yes, yes.
So yeah, next time we'll do it in person.
Right, and maybe a final thought.
I'm interested in how you,
like today's pub day for your book,
so obviously you're going out and you're doing press,
and the way in which, you know,
authors try to market their books
seems to change like almost every year
in terms of like what works and what doesn't.
Like the attention graph is such a radically,
you know, elastic thing.
It's weird as hell, man.
And you wrote about this the other day.
Like you used to just, if you could just get on like,
you know, the morning talk show,
you're on the Today Show or Good Morning America.
And then maybe you get, you know,
a nice review in the New York Times
or you're in the, you know, the Sunday book section,
like you're cool.
And then it was about the digital space,
doing a bunch of podcasts or doing a Reddit AMA,
which I know you did the other day.
But like attention is so fractured now
that it's so difficult to try to, you know,
marshal the number of eyeballs
that you would wish to have
when you're sharing your work with the world.
Yeah, and it's fun to experiment with things.
I'm doing a, you know,
cause book tours obviously aren't happening.
So I'm doing a Clubhouse book tour.
I'm going around and doing different rooms
with different clubs on Clubhouse
that are talking about some of these issues.
I'm gonna have a virtual book party on Twitter tonight,
Twitter spaces tonight.
So I'm having fun experimenting.
I'm sure it's very stressful to be a book publicist
in the year 2021, but that's not,
I just write the books.
It's somebody else's job to sell them.
No, that's not true.
That's where you're wrong.
It's your job.
The fact that you have to understand that,
like shoulder that responsibility, my friend.
All right, I'm convinced.
All right, cool.
Well, this was super fun.
I appreciate the work that you do.
I always look forward to every new piece that you write.
I love following you online.
So congrats on the new book.
It's called Futureproof:
9 Rules for Humans in the Age of Automation.
Kevin's easy to find on the internet
at @kevinroose on Twitter and all the places.
Anywhere else you wanna direct people?
You've got this new newsletter now.
Yeah, I got a new newsletter called Futureproof also.
It's at futureproof.tips, T-I-P-S.
And you can, I'll be sending out one newsletter a week
on sort of the issues in the book
and how you can be more human.
Cool, all right, man.
Come and talk to me again sometime.
Thanks so much.
All right. Take care.
Peace.
Thanks.
Thanks for listening, everybody. For links and resources related to everything discussed today, visit the show notes on the episode page at
richroll.com. If you'd like to support the podcast, the easiest and most impactful thing you can do is
to subscribe to the show on Apple Podcasts, on Spotify, and on YouTube. Sharing the show or your favorite episode
with friends or on social media
is of course always appreciated.
And finally, for podcast updates,
special offers on books, the meal planner,
and other subjects, subscribe to our newsletter,
which you can find on the footer of any page
on richroll.com.
Today's show was produced and engineered by Jason Camiolo.
The video edition of the podcast
was created by Blake Curtis.
Portraits by Allie Rogers and Davey Greenberg.
Graphic elements courtesy of Jessica Miranda.
Copywriting by Georgia Whaley.
And our theme music was created by Tyler Pyatt,
Trapper Pyatt, and Harry Mathis.
You can find me at richroll.com
or on Instagram and Twitter at Rich Roll.
I appreciate the love.
I love the support.
I don't take your attention for granted.
Thank you for listening.
See you back here soon.
Peace.
Plants.
Namaste. Thank you.