How to Be a Better Human - How to make yourself more human in an automated world (with Kevin Roose)
Episode Date: July 11, 2022

Humans can have a complex relationship with technology: tools like smartphones make our lives easier, but they can also be a source of anxiety or dependence. The internet can be an amazing place, or it can be a doom-scrolling nightmare. And then there's the always-looming threat that our jobs, even the ones we thought only humans could do, like making art, could be lost to automation. Kevin Roose is a tech journalist who writes about the intersection of tech, business, and culture. In today's episode, he talks about the shift in technology's role in our lives and how we can set boundaries with our devices to regain our autonomy. He also shares why he's optimistic about the future, and his view that future-proofing your job in an automated world has less to do with sharpening our coding skills and more to do with leaning into our shared humanity. His new book, "Future Proof: 9 Rules for Humans in the Age of Automation," is out now.
Transcript
You're listening to How to Be a Better Human.
I'm your host, Chris Duffy, and today we're talking about the surprising ways that artificial
intelligence and automation will affect both the future of our jobs and our own behavior
beyond the workplace.
Okay, hi.
This is actually Chris.
This voice you're hearing right now, this is really me.
The voice that you heard before, that was computer generated based on audio of me from
past episodes.
And the fact that it's even remotely possible to create a computer generated version of my voice is terrifying.
Even if that voice sounded like he was maybe not fully enthused about doing this show and needed a cup of coffee.
But I am scared about that because I need this job.
I don't want to be replaced with a hosting robot.
And, you know, that fear, that fear of automation coming for our jobs and changing the way that we work, that is something that our guest today, Kevin Roose, knows very well.
Kevin is a columnist for The New York Times and the author of a recent book called Future
Proof, Nine Rules for Humans in the Age of Automation.
So Kevin has written a ton about how technology might impact our jobs and the way that we
work in the future. And he's a guy who really understands my terror when I heard that computer version of my voice. Because when it comes to worrying about automation in your job, Kevin has been there himself. Here's a clip from his TED Talk.
...that I could be replaced by a robot.
At the time, I was working as a financial reporter covering Wall Street and the stock market.
And one day, I heard about this new AI reporting app.
Basically, you just feed in some data,
like a corporate financial report
or a database of real estate listings.
And the app would automatically strip out
all the important parts,
plug it into a news story,
and publish it with no human input required.
Now, these AI reporting apps,
they weren't going to win any Pulitzer Prizes, but they were shockingly effective. Major news organizations were already starting to use them, and one company said that its AI reporting app
had been used to write 300 million news stories in a single year. For the last few years, I've been researching this coming
wave of AI and automation. And I've learned that what happened to me that day is happening to
workers in all kinds of industries, no matter how seemingly prestigious or high paid their jobs are.
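To make that pipeline concrete, here is a minimal sketch of the kind of data-to-text system Kevin describes: pull the key figures out of structured data, then plug them into a story template. The field names and template are hypothetical, not from any real reporting app.

```python
# A toy illustration of the data-to-text reporting pipeline described
# in the clip: extract the important numbers, fill a story template,
# publish with no human input. All field names are hypothetical.

EARNINGS_TEMPLATE = (
    "{company} reported quarterly revenue of ${revenue:,.0f}, "
    "{direction} {change:.1f}% from a year earlier. "
    "Earnings per share came in at ${eps:.2f}."
)

def write_earnings_story(report: dict) -> str:
    """Turn one structured earnings report into a short news item."""
    change = 100 * (report["revenue"] - report["revenue_prior"]) / report["revenue_prior"]
    return EARNINGS_TEMPLATE.format(
        company=report["company"],
        revenue=report["revenue"],
        direction="up" if change >= 0 else "down",
        change=abs(change),
        eps=report["eps"],
    )

print(write_earnings_story({
    "company": "Acme Corp",
    "revenue": 1_250_000_000,
    "revenue_prior": 1_100_000_000,
    "eps": 2.41,
}))
```

A template system like this covers the repetitive core of an earnings story, which is exactly why that category of reporting was automated first.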
You might be surprised after hearing that clip of Kevin to learn that I always feel optimistic
after I hear his thoughts. And that's because Kevin believes that if we exercise some agency over technology,
we can make it something that works for us rather than the other way around.
And today, we're going to be talking about how to make that vision of the future a reality.
But first, we're going to take a short break. We'll be right back with Kevin after this.
And we are back.
Hi, I'm Kevin Roos.
I'm the author of Future Proof and a tech columnist at the New York Times.
Let's start by talking about Future Proof.
I actually want to talk about a lot of your writing and your other books as well.
But starting with Future Proof, what should regular people be doing to prepare themselves
for the future of work?
Well, a couple of things.
One is I think we really need to figure out who is most at risk.
So I think people need to look at what's happening in their industry, their profession.
And, you know, automation and AI are making huge strides forward in every industry right
now, including some that we thought were kind of immune to it, like art and
music and caring for elderly people. I mean, robots are being deployed to do all of those
things now. And so I think we all need to really take a close look in the mirror and say, like,
is what I do for a living vulnerable? Is what I do for a living repetitive enough
that it could be automated, and might it be automated soon?
And if that's the case, it doesn't mean that you should pack up and, you know, go plan for your
second career, you know, mining Bitcoin on Elon Musk's Mars colony or whatever. But it does mean
that we should figure out how to adapt and make ourselves less replaceable. And so for a lot of
people, I think the first step
is just sort of acceptance that this, you know, this could happen to me. And this was something that sort of dawned on me about a decade ago, when I learned that there were AIs being taught to do basic reporting tasks, including some of the ones that I did as a young journalist.
And then the second thing I think we need to do is to display our humanity more in our work.
In my book, there's a rule that I call leave handprints.
And this is about basically taking the work that you do and instead of trying to erase yourself and the traces of sort of human frailty from it, leave those things in.
Make it very clear to the people who are consuming your work, whether it's, you know, an audience.
Well, okay, so I have a question about that piece because like you said at the beginning,
there are lots of jobs that I would have thought
are not at all vulnerable to automation or to AI.
And then increasingly, I wonder if that's even true,
if there are any jobs at all,
because, you know, I would have never thought that writing and comedy and, for example, using my own voice to host a podcast were vulnerable.
But now there are tools where people can type words into a script and it will make it sound
like I'm saying them or, you know, it's kind of a meta joke.
Right.
But there's this thing all over the internet, like, I fed 300 sitcoms into a neural network and look at what it spit out.
And it is genuinely funny, mostly because it's like full of weird non sequiturs. But it just makes it clear that like it's possible
for a computer to be funny, whether it's intentional or not. Are there jobs that are just
not automatable or is everyone at risk? Well, the way I like to think of it is not as occupational
categories because there are no occupational categories that are safe. So some of every job will be automated.
The question is just which parts and how quickly.
So for example, there was an interesting little flap
just a couple of weeks ago when OpenAI,
this AI company in San Francisco,
released this program called DALL-E.
Have you heard of this?
I haven't heard of it.
Sort of like WALL-E, but DALL-E.
It's an AI that basically takes text and turns it into art. So you tell it,
I want an illustration of three bears playing ping pong in business attire on the moon.
And it will generate an original piece of artwork depicting exactly what you have asked it to depict. And it's incredible. It's really good. And it's the kind of thing where immediately illustrators
and people who, you know, make art for a living saw this thing going viral on Twitter and thought
to themselves like, oh, crap, like, I thought I was safe. And I am really not safe, because that
is essentially what I do. And this program is maybe not as good as me, but it's maybe 80% as good as me.
And it's so much cheaper and faster.
And you can get something instantaneously from a machine.
And so that's the kind of realization that I think a lot of people, especially in our
industries, in the creative industries, have had recently.
We sort of had this, like, automation for thee but not for me attitude, where we thought we were immune because we make things with words and art and music. And that is just
not true. There are AI programs being deployed now to, for example, create new levels in video
games or to write music. A lot of the music that used to be written by studio musicians, like the songs you would hear, you know, over the loudspeaker in a supermarket, those are increasingly being generated by AI.
I want to ask about, like, racism in the AI programming and in the effects, the results that it spits out. There's this idea, I think a lot of people have, that artificial intelligence is somehow more neutral and unbiased, just a computer spitting out facts. And it seems like the results are very clear that that is not the case.
I wonder how you think about combating that piece, too, as we think about like how to make
things more human. How do we make maybe AI less human in that way?
Yeah, well, AI is a big category and it includes everything from like, you know, the Roomba that
vacuums my house to, you know, the supercomputers that run YouTube and TikTok and Facebook and all of these giant
billion plus user algorithms. And so it's hard to generalize. But I would say that in general,
AI is very good at using past data to predict future outcomes. You know, if you have clicked
on 300 YouTube videos about a toilet repair, the algorithm is pretty good at figuring out
that you might want more of those.
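As a rough sketch of that idea, predicting future clicks from past ones can be as simple as counting: recommend more of whatever categories dominate the history. The catalog and history below are made up for illustration; real recommender systems are far more elaborate, but the basic logic is the same.

```python
from collections import Counter

def recommend(click_history: list[str], catalog: dict[str, list[str]], k: int = 3) -> list[str]:
    """Recommend items from the categories this user has clicked most.

    A deliberately crude version of "past behavior predicts future
    interest": it has no notion of whether the user *wants* more of the same.
    """
    top_categories = [cat for cat, _ in Counter(click_history).most_common()]
    picks = []
    for cat in top_categories:
        picks.extend(catalog.get(cat, []))
    return picks[:k]

history = ["toilet repair"] * 300 + ["jazz"] * 2
catalog = {
    "toilet repair": ["Fix a running toilet", "Replace a flapper valve"],
    "jazz": ["Blue in Green, live"],
    "lizards": ["Warming lamps reviewed"],
}
print(recommend(history, catalog))  # toilet repair dominates the results
```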
And this is not a random example. I did just watch a bunch of videos about toilet repair.
This is a side note. But just to say that the algorithm is absolutely convinced that I own a
pet lizard because one time I was doing research about lizards for a joke. And like for years,
it has been like, do you want a warming lamp for your pet lizard? Which I do not own.
You're being radicalized into lizard ownership.
I guess at a certain point I'm like, fine, I'll get the iguana.
You sold me.
Well, right.
So this is a thing that, you know, AI is very good at.
And that can be good and it can be quite dangerous.
There's been a lot of research showing, for example, the dangers of these things called predictive policing systems. A lot of police departments now use AI programs to try to guide their officers to quote-unquote high-crime areas. So basically, use an algorithm to tell me where a crime is likely to occur. And because these
systems are built off of decades' worth of data that reflect biased policing practices, the systematic over-policing of low-income and minority neighborhoods, it is more likely to tell an officer, hey, if you go to this corner on this street at this time, you are very likely to see
a crime in progress. And of course, if you put a police officer on a corner, they're more likely to see a crime happening there, which then feeds back into the algorithm, which then tells them this is a really high-crime block or corner.
And it perpetuates this bias throughout the ages, except now it seems objective, right?
Because it's coming from a computer rather than from the brain of a police officer.
So those are the kinds of things that I worry about.
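A toy simulation makes the feedback loop Kevin describes easy to see: two neighborhoods with identical true crime rates, but a skewed historical record that decides where patrols go. All the numbers here are invented for illustration.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying crime rate.
TRUE_RATE = {"A": 0.3, "B": 0.3}
# Historical records start skewed: A was over-policed in the past.
recorded = {"A": 60, "B": 20}

for year in range(10):
    for _ in range(100):  # 100 patrols per year
        # Dispatch to wherever the records say crime is "higher".
        spot = max(recorded, key=recorded.get)
        # A crime can only be recorded where an officer is standing.
        if random.random() < TRUE_RATE[spot]:
            recorded[spot] += 1

print(recorded)  # roughly {'A': 360, 'B': 20}: the initial skew compounds
```

Neighborhood B never catches up, because the record, not the reality, decides where observations happen; the output looks objective precisely because it came from a computer.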
I also, you know, there are many examples
of algorithmic bias in, for example, hiring.
A lot of companies now use AI to screen resumes
and that can be disastrous too
if it tends to select for, you know,
only white men or people who went to Harvard
or some other, you know, flawed criteria.
So I guess that leads to a question, which is, in what ways are the same things that
make us human and make us special and unique also what make us susceptible to being shaped
or manipulated by technology?
It's a really interesting question.
I mean, I think that we have always been shaped by our technologies, right?
There's this really fascinating study that came out of the University of Minnesota a few years ago. And basically, they were interested in this question of whether algorithms sort of reflect our preferences or whether they shape our preferences.
And so they ran an experiment where participants would be tested on how they liked a series of songs, like whether they liked a series of songs that were played for them.
And then they sort of manipulated the star ratings,
like, you know how Spotify or any of these systems will give you sort of ratings based on how much they think you will like the song.
And they sort of manipulated these ratings
so that they didn't really have any real connection to people's actual preferences.
And then they forced people to sort of listen to the whole songs. And it turned out that the sort of star
ratings influenced people's judgment of the songs, regardless of whether or not they actually liked
them. They trusted the star ratings more than they trusted their own subjective taste and
experience. And so I think there's this way in which we are kind of outsourcing our judgment and our preferences to AI, which may or may not actually have
our best interests and a good picture of what we're like as people in mind. And so I think
that worries me almost as much as like the factory automation and stuff. It's like this
kind of internal automation that I think we all feel kind of tugging on us every day.
I wonder how knowing that and also just your reporting on tech in general,
how has it changed your relationship to what you use in your day to day life?
Well, I am not a Luddite. I am not a technophobe. I have plenty of robots and gadgets in my house. But I do try to exercise real caution with the kinds of decisions that I let algorithms and machines make for me. I wrote a
story a few years ago where I did a 30-day phone detox with the help of a professional phone rehab
coach because I was horribly addicted to my phone. Like we all are.
This was pre-pandemic, and, you know, I probably need to do it again. But it was really
instructive. And it was really instructive about which of my cues I was taking from my phone. I
think our phones kind of started out many years ago, as like assistants, like they were there to
sort of be helpful with whatever you wanted to do. But then at some point in the past few years, they got promoted and became our bosses.
And now they just tell us like, pay attention to this thing and get mad about this thing,
get freaked out about this new, you know, story. And I think that restoring balance in our
relationship with the devices in our lives is really important. So, you know, right now, I just had a kid, and I'm very cautious of, like, what kind of media I'm consuming about that, whether I'm in,
you know, the Facebook group where all the parents share the craziest, scariest things
that have happened to their kids, like, I'm very guarded about what I let into my consciousness.
And that maybe makes me sound like, you know, a paranoid freak. But it's part
of how I try to reduce the influence that machines have on my life. For example, I don't use YouTube
autoplay, I turn that off. So that when I'm watching a video, it doesn't just automatically
start playing a new video, because that's something that I found I'm very susceptible to.
I'm careful about TikTok. Actually, I've been meaning to write about this. I have this sort of like,
TikTok amnesty policy where like, every like few weeks, I delete my TikTok account and start a new account just to like clear out the algorithm, like whatever, like junk I've been watching.
I don't want like to be fed just more of that. Like I want like new junk. And so I try to sort of
cleanse my timeline a little bit that way. So you wrote a book review with the help of
artificial intelligence, which I think kind of goes to the idea of flipping the script: artificial intelligence and phones and technology used to be our assistants, and now they're more like our bosses. So yeah, tell me about the process of
writing a book review using AI to help you. Yeah, well, this was for the New York Times Book Review earlier this year, and I had gotten assigned to review this book that Eric Schmidt, the former CEO of Google, and Henry Kissinger had written about AI together. And I read it, and I was sort of dreading reviewing it because it was, like, kind of boring.
And I was like, really? I've got to come up with, like, a thousand words about this book?
And then a light bulb went off and I thought, what if a robot could help me?
So I used this app called Sudowrite, which is basically a super-powered version of the autocomplete on your iPhone, where, like, you put in a little bit
of text, and it spits out the next, you know, however many hundred words you want. So I wrote
a little intro, and then I fed it into Sudowrite, and it spit out, like, you know, seven or
eight paragraphs of analysis of this book that, you know, it had not read and just was sort of
guessing at. And it was pretty good. Like, it was not great. Like, it was not perfect. And it took me a couple sort of tries to like,
tune it to get the right kind of output. But eventually, it was like, you know, as good as
anything I would have written. And so I just slapped an intro on it and disclosed, you know,
this review was written by an AI, and then printed it and it was like perfectly serviceable.
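For a sense of what that "super-powered autocomplete" workflow looks like, here is a hedged sketch using an off-the-shelf open model via the Hugging Face transformers library. Sudowrite's actual model and settings aren't described in this conversation, so treat every detail here as an assumption.

```python
# A rough sketch of the autocomplete idea: seed a language model with
# an intro and let it continue for a few hundred tokens. Uses GPT-2
# via Hugging Face transformers, NOT Sudowrite's actual stack.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

intro = (
    "In 'The Age of AI', Eric Schmidt and Henry Kissinger argue that "
    "artificial intelligence will reshape geopolitics. The book"
)
result = generator(intro, max_new_tokens=200, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

As Kevin notes, getting usable output usually takes a few tries of tuning the prompt and sampling settings rather than accepting the first completion.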
And I don't think people objected to it. It was a little bit of a stunt. But this is going to be happening more and more. I think
we're going to reach very soon if we haven't already, the point where more text on the
internet is written by AI than by humans. And that will be an important inflection point and
kind of a scary one if you're in the words business. So thinking about the future, I mean,
there's a lot of reasons why you have an interesting perspective on the future
that have to do with just your own brain, but you also are a new father.
And I think a lot of people, when we think about the impact of technology, it's not just for our
own lives, it's for the next generation. So I wonder when you think about your son growing up
in this world, what are you excited about for him? And what are you worried about when it comes to technology?
I don't think children should have no access to technology, and I think they should even have access to phones at the right age.
It was really important for me as a kid to have that stuff.
And I think it's important to find ways to coexist with it for kids growing up today. I am scared about the kind of loss of autonomy that I see happening in a lot of parts of culture.
But there have been some studies that have shown that it matters what you're doing on these screens and on these devices. You know, playing Minecraft is not the same as, like,
watching a zillion TikToks, you know, connecting with your friends on, you know, inside Fortnite,
or in, you know, on a group chat or on Snapchat is different than,
you know, posting selfies on Instagram for other people to kind of like and comment on.
And then I think just sort of being discerning about what kids are doing on social media and
encouraging them to do things that involve being creative. There's so much, I mean, it's so possible
to be a totally passive person on the internet and just lurk and
scroll and never, you know, create anything. And what was important about the internet for me as a kid is just the ability to make stuff, to create stuff. I had
a modestly successful ring of GeoCities fan pages for Buffy the Vampire Slayer that I maintained when I was a
kid. I built websites. I, you know, did little flash animations. I coded a little bit. Like,
it was really a sort of sandbox for me. And I think there are ways to do that, you know, today,
a lot more ways, actually. But there's also this kind of other way to experience the internet,
which is, like, as a totally passive consumer. And that, I think, is really damaging. I find that for myself, and I'm not a parent, but even just as an adult who likes to think that I am, like, more fully formed and not as malleable as maybe a young teenager is, I still find that
when I am using the internet as a way to put things out and to produce and to connect with people, I feel good about it, right? Like when I'm like, oh,
here's something that I wrote and I want to publish it. Great. I love that. I love that. If
I can't find a, you know, a newspaper or magazine that will publish something, I can just put out
my thoughts and people will still read it and engage with it. That feels good. And the part
that feels bad and feels like it starts to shape me and maybe make me feel inadequate or feel like I'm not doing enough
or constantly competing with a bar that keeps shifting impossibly higher, is when I just start passively consuming. So when I'm scrolling through Instagram or I'm scrolling through
TikTok or I'm just looking at other people's accomplishments, then I feel bad.
But when I put things out and creatively engage, then I feel like, oh, this is an amazing tool
where I can be talking to you from hundreds of miles away and we can have an actual conversation
that that never makes me feel bad.
That part of it.
Totally.
And you asked about reasons for optimism and sort of things I'm thinking about with respect
to my son growing up with technology.
And I'll add one more, which is that I think this generation of Gen Z,
people who got their first smartphones
and their first social media accounts as teenagers
during this last wave of tech,
I think those were basically the sort of guinea pigs
for this giant social experiment.
And I think we're going to look back and see
a bunch of people driving fast cars with no seatbelts, who just like didn't have the tools to like, cope with what was now possible.
So I think unfortunately, there's like a generation of kids who grew up without any
real safeguards or knowledge about what they were even doing to themselves by like,
living on these platforms. And I hope that by the time
my kid is of age to start using this stuff, like we've built up a little bit more sort of knowledge
and awareness and, you know, sort of immunity and resistance to like, this thing that we all do.
I think you never want to be, like, the first generation beta testing this stuff.
It's always nice to like, work the bugs out and use the second version of the product. So I think with any luck, he will
be using like the second or third or fourth or 10th version of this stuff, rather than kind of
being on the frontier where no one knows anything. So Kevin, obviously, there's no going back to a world where using technology like smartphones or the internet is not essential to participation, right? We're not going to go back to that world. But if that were possible, is that something that you would even want?
No.
And why or why not? Why wouldn't you want that?
No, I don't want us to go back to a world with no internet, no social media, no smartphones.
You know, I think these things have had enormous costs, but they have also had a lot of benefits. And I'm very critical of certain social media companies. I don't think a world without, for example, Facebook would be significantly worse; it might be significantly better.
this technology work for us rather than us working for it. And so I think
I'm still, you know, maybe I'm a starry-eyed optimist, but I still believe that there's a
world in which we use all of this stuff for its highest purpose, and it frees us from routine and
repetitive, you know, tasks, and it leads to a society that is, you know, more abundant and more
fair. There's a great book came out a few years ago,
which I mostly just love the title of,
called Fully Automated Luxury Communism,
which is about how sort of robots and AI
could produce this kind of utopian society
where we just all sit around and make art
and do philosophy all day
and the robots just take care of everything we need.
So I'm still, I don't think we'll ever get fully there,
but I think we can do better than we are now.
And that's what keeps me motivated.
Okay, we're going to take a quick break,
but we'll be back with much more from Kevin Roose
right after this.
And we are back.
We've been talking about the impact of technology
on our work, and if you find yourself
increasingly worried about that impact and what that means,
here's a clip from Kevin's TED Talk that I think can help us understand
one way that we might move forward.
If you, like me, sometimes worry about your own place in an automated future,
you have a few options.
You can try to compete with the machines.
You can work long hours.
You can turn yourself into a sleek, efficient productivity machine.
Or you can focus on your humanity and doing the things that machines can't do.
Bringing all those human skills to bear on whatever your work is.
I'd love to talk a little bit about something that I know you've done a lot of recent work explaining and researching, which is crypto and also Web3. And, you know, I've heard you say this, that basically
that there's this element of how everyone made fun of social media when it first started. And
we're like, it's a joke. Look how dumb this is. Oh, my gosh, this is a, you know, it's pictures
and they get to comment on them. And then the systems became incredibly powerful. And all of the issues with them are deeply entrenched and
really hard to fix. And I've heard you say that you're basically trying to avoid that same thing
happening with Web3, where right now people treat it like a joke, but there's also obvious issues.
And if we don't engage with them now, by the time we do, it will be so much harder to fix them.
First of all, is that, like, an accurate assessment of how you feel about this and why you're reporting on it?
Yeah, totally.
I mean, that is the essence of why I think this stuff is important.
I'm not a crypto fan.
I'm not a crypto skeptic.
I'm sort of a crypto moderate when it comes to all things crypto and Web3.
One of my deeply held beliefs, though,
is that the people who are involved in the early days of a technological shift
get outsized input into what that technology
eventually becomes.
So in the early days of social media,
as you said, when people were sort of mocking,
like, oh, who wants to see pictures of my brunch?
And like, you know,
why would anyone tweet about what's going on in their neighborhood? Like, it was just not seen as a serious thing. Like now, obviously, like, it's the biggest force, you know, one of the
biggest forces in politics and culture. And, you know, elections are won and lost on social media,
and it shapes the fate of, you know, democracies. And so I
think that right now, we have this very nascent crypto industry that, you know, seems in a lot
of ways, like something you, you know, shouldn't take seriously, like, it's got a lot of indicators
of like, there are a lot of scam artists, there are a lot of, you know, there's a lot of fraud,
there's a lot of just really stupid stuff.
And I think the temptation is to kind of dismiss it all
and like hope that it goes away
and that you never have to understand it.
It's one of these tech trends that just like comes and goes.
And I think that's a real mistake
because if this does work,
if the crypto people are right,
if this is technology that sort of reshapes finance and culture and ownership and art and all the things that they think it will do,
I want there to be people on the ground floor of that who are thinking about these
risks and these big questions. And what happens if crypto takes over the world? How do we make
sure that it doesn't just become, you know, six white guys in San Francisco, like getting all the money again? How do we actually make this the best
version of itself that it can be? So I want people to engage with it, whether or not they're
skeptical. And maybe especially if they are skeptical, I think it's good for people to
understand and engage with it. So what do you think that a regular person who's not a tech
reporter and not living in Silicon Valley, what should they do to engage with crypto and with these issues right now?
Well, first things first, to self-promote a little bit, I did write a very long 14,000-word explainer of crypto and Web3 and DeFi and NFTs and all the other stuff that ran
in the New York Times back in March.
You can search for that.
And it's also, I think, at least in my memory, the only time I've ever seen an entire section
of the paper written by one person.
Truly incredible.
Yeah, it was.
It was wild.
I just started it, and I thought it would be a short little thing.
And then it just kept going because it turns out it's sort of complicated.
So that's sort of my attempt to give people who are a little bit intimidated by this topic, like just an easy way into understanding
like the basic contours of what's going on. So I would start there. It's called The Latecomer's Guide to Crypto. And it's on the New York Times website. And then I think just sort of
experimenting with it a little bit like I wouldn't, you know, I'm not a financial advisor,
I would be the last person you should ask about what to invest in. I found that my own understanding of crypto really
kicked up a couple notches when I accidentally sold an NFT for a lot of money in a charity auction
last year. And all of a sudden, I had all this crypto that I was not keeping, but that I was
sort of transferring to a charity. And it really forced me to learn how this stuff worked
because all of a sudden I had this like, you know,
pot of money that I was the custodian of
that I had to figure out like how to keep secure
and how to transfer.
And it really sort of threw me into the deep end
and made me learn about this stuff.
Well, for people who are listening
and are sold on these ideas about the promise,
but also the potential perils of future technology, how can we be better participants in the future of tech? Is it being better stewards around
regulation? Or how do we get involved? And how do we make it so that the future is what we want it
to be rather than what we fear it could be? Yeah, I think the first step is to learn,
is to really understand what's happening on the
technological frontier so that you feel comfortable weighing in, so that technology is not just a
thing that happens to you. It is a thing that you feel like you have some agency over. If you're
signing up for some new service or new social network or new product, figure out what is
happening under the hood a little bit. Be a more educated consumer the way that like,
you want to understand what's in the food that you eat, you want to understand what's in your
information diet and what forces are operating there. There's this idea in the tech world known as, like, friction, and it's usually used in the context of tech products that are trying to get rid of friction. So making it as easy as possible to, like, watch a video or order something or, you know, comment on somebody's birthday on their Facebook page or something.
We stored your credit card.
So it's just one click to buy.
Exactly.
But I've been sort of trying to systematically introduce a little bit more friction into my life, because I think things are a little bit too easy, and it tends to put me onto autopilot. And so I've been,
you know, like, taking the long way to go somewhere and like, not following like the
Google Maps fastest route every time, like maybe getting something from the hardware store down
the street, instead of ordering something from Amazon, even if it's a little more expensive,
trying to like, be a little bit more thoughtful about
what I consume. And then I think, yeah, just engaging in the democratic process, you know,
elect people who understand this stuff and are thoughtful about it, make your feelings known
in a way that's, you know, thoughtful and respectful. But I think we're entering into an age where the tools in our society are
more important than they ever have been. And so it's incumbent on people to understand that and
to weigh in and to not just, you know, wake up one day and find that the world has changed
around you and you had no part in deciding how to live in that world.
The show's called How to Be a Better Human.
So what are you personally trying to do right now
to be a better human in your own life?
Well, right now I'm trying to raise a son.
That's a big one.
Which feels sort of cliche,
but also like truly terrifying and challenging
and tests me in all kinds of ways. I sort of feel like it's forcing me to be a better human, you know, to respond with compassion and empathy at 3 a.m. when there's a meltdown happening, as I did last night. That feels like it's stretching me in some
new ways. So that's one of the ways I'm trying to be a better human. That's a huge one. That's a
really big one. And then what is something that has helped you to be
a better human, whether it's a book, a movie, a piece of music, an idea, anything?
I am an obsessive evangelist for this app called Freedom, which is basically the only reason that
I have been able to get anything done for the past five years.
Freedom is an app. It's on your computer, it's on your phone. And you basically tell it like,
like, do not let me go on social media for the next, you know, X hours. Do not let me
check my email. Do not let me, you know, surf YouTube. And you can put in custom lists of sites that you want to block, whatever your time wasters are, whatever your, I don't know, junk food for your brain is, and you can set it to just cut that off systematically for whatever time you want. And so it's how I write. It's how I focus. I have no self-control, so I need to outsource that to this app.
And luckily this app is very good
at implementing self-control for me.
So that is my shortcut to being a better human.
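As a footnote, the core blocking idea can be sketched in a few lines. This assumes a hosts-file approach, which is just one way such an app could work, not how Freedom actually works; it needs administrator rights and is purely illustrative.

```python
# A bare-bones version of the blocking idea: point distracting domains
# at localhost via /etc/hosts for a fixed stretch of time, then restore.
# This is an assumption about one way such an app *could* work, not how
# Freedom actually works. Requires root privileges to edit /etc/hosts.
import time

HOSTS = "/etc/hosts"
MARKER = "# focus-block"
BLOCKLIST = ["twitter.com", "youtube.com", "tiktok.com"]

def block(hours: float) -> None:
    # Append blocked entries tagged with a marker we can find later.
    with open(HOSTS, "a") as f:
        for domain in BLOCKLIST:
            f.write(f"127.0.0.1 {domain} www.{domain} {MARKER}\n")
    try:
        time.sleep(hours * 3600)  # stay blocked for the session
    finally:
        # Remove only the lines we added, restoring the original file.
        with open(HOSTS) as f:
            kept = [line for line in f if MARKER not in line]
        with open(HOSTS, "w") as f:
            f.writelines(kept)

if __name__ == "__main__":
    block(hours=2)
```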
Amazing.
Kevin, thank you so much for being on the show.
Thank you for all the writing
and all the thinking that you've done about this,
but also just for talking to us about it.
It's really been a true pleasure.
It has been a real pleasure. Thank you for having me.
That is it for today's episode. I am your host, Chris Duffy, and this has been How to Be a Better
Human. Thank you so much to today's guest, Kevin Roose. His latest book is called Future Proof,
and you can also check out his podcast with the New York Times. It's called Rabbit Hole.
On the TED side, this show is brought to you by Sammy Case and Anna Phelan, both of whom
are not robots.
And from Transmitter Media, we're brought to you by Isabel Carter, Farrah DeGrange,
and Wilson Sayre, all purely human, 100% human.
For PRX Productions, this show is brought to you by the unautomated, fully analog Jocelyn Gonzalez,
even though she uses digital tools,
and Sandra Lopez-Monsalve,
also 100% flesh and blood.
She's a human.
She's not digital bits.
Thank you so much for listening.
And we will be back next week.