3 Takeaways - How AI is Rewiring and Rotting our Brains (#204)
Episode Date: July 2, 2024
Slowly but surely, AI is taking over. What does it mean to live in an age where we can outsource our thinking to machines? According to Tomas Chamorro-Premuzic, it's no less than a fundamental restructuring of what it means to be human and a questioning of our essence. Learn how to future-proof yourself and maintain what makes us human.
"If you want to future-proof yourself in the age of AI … the worst thing you can do is be lazy."
"If we are at the mercy of AI, free will isn't even an illusion anymore. It's just completely gone."
Transcript
We've all heard about the amazing possibilities of artificial intelligence, as well as some
of the risks.
We have three recent episodes on AI, including AI that's more powerful than humans is coming, how will we be able to control it, which is episode 187; What does the future of work look like, which is episode 116; and our third episode, War in the Age of AI, a chilling, mind-blowing talk with a former Pentagon defense expert, which is episode 151. But that leaves the key question, which almost no one is focusing on: how will AI change us as humans? How will it change our lives,
our values, and our behavior? Hi, everyone. I'm Lynn Thoman, and this is Three Takeaways.
On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers,
politicians, newsmakers, and scientists. Each episode
ends with three key takeaways to help us understand the world and maybe even ourselves a little
better. Today, I'm delighted to be with Tomas Chamorro-Premuzic, who's an organizational
psychologist, a University College London professor, and the author of I,
Human. I'm excited to find out how artificial intelligence is changing us as humans. Tomas
believes the most notable thing about AI so far is how it's reshaping how we live,
and that it's bringing out some of the worst in us, turning us into
a duller and more primitive version of ourselves.
Welcome, Tomas, and thanks so much for joining Three Takeaways today.
Thank you for having me.
How much time do people spend online?
Well, the average human alive today is expected to spend about 20 years of their lives
online, of which around seven years will be spent on social media. So I think that puts things into
perspective. Wow. So on average, that's about how many hours a day? Between four and six hours a day. And you would think that this gets
higher or worse as people get younger. But actually, there's some staggering statistics
about people, including Americans, between the ages of 60 and 75, spending something like eight hours a day on their iPads.
What does all that time online do to our brains?
We don't know yet. I think the people who argue, quite compellingly at times, that our brains will be completely handicapped, not fit for the future, and somehow deformed by these interactions don't really have much evidence to support it. There's a little bit of evidence on, for instance, what this does to our reward circuits, like dopamine circuits, et cetera, which makes sense, right? You're getting constant stimulation, and being a little bit ADHD today is almost a necessity if you want to adapt to things like multitasking. But the people on the other side, who claim that there are no risks whatsoever, remind me of the people who, in the early days of the tobacco industry, said it's absolutely fine to smoke, you know, there are no risks. I mean, we probably don't know yet,
but then the question is whether an overreaction is worse or better than an underreaction.
So I think research and regulation are needed and will be needed to protect people and consumers.
In the 1960s, we put people on the moon with computers less powerful
than a calculator. But today, everybody has essentially a supercomputer in their pocket,
and they're not sure if the world is flat or if vaccines are filled with poison.
Are we wiser with access to so much data? Generally speaking, no. I think we have the potential to
harness expertise, but there are also fewer incentives to learn things just because you're curious, or even to harness our curiosity, because of the lazy attitude one would normally have: well, if I can find the answer when I need it, what's the point of finding it now? The brain is not for thinking; it's for
optimizing the world in terms of familiarity and for making very efficient, lazy decisions so that
we don't burn too many calories and we don't spend that much time thinking. And actually, AI and its algorithms hijack this motivational system by giving us more of what aligns with our beliefs, attitudes, and values, in effect turning us into a more radical and tribalized version of ourselves. I always say it would be very easy to tweak or edit these algorithms so that they make us more open-minded.
So imagine if Spotify, Netflix, Twitter, Amazon, and even Tinder gave people recommendations that
they don't like or want, but that they should consider in order to broaden their horizons.
I mean, this would lead to a mass exodus away from these platforms and onto tools and services
that give us more of what we want.
So the bias is not in the AI.
The bias is in the human brain.
Tell us about our digital selves now.
What are we sharing and what kind of behavior does the digital world encourage? Let's start with the second part. The digital world, mostly in the form of sticky platforms, encourages stickiness: visiting these sites and spending as much time as possible interacting with them, even if the ultimate goal is to interact with other people or things on the site, so that they can datafy, or create digital records of, our preferences, choices, and behaviors.
So even in the early days of social media, the main question was not,
can you attract visitors, but can you make them come back?
And today we see sites or platforms like TikTok being very, very good at this, because you get that high from novel or fascinating content that is very hard to abandon. I mean, I think YouTube is still a very good example. It's very
easy for anybody to go down the YouTube rabbit hole if you start watching something and then you see the recommendations, which are not always tailored to things that you might like, by the way.
Sometimes it's things that you will love to hate, for example, but you click from one thing to the next.
And so all of these platforms and tools extract a lot of data on our preferences, our values, our identity. So people spend a lot of
time choosing what Facebook groups they join or how they customize their LinkedIn feed and profile.
But that's just like when you enter a room or a party or arrive at a dinner: you spend a lot of time beforehand worrying about how you present yourself, how you dress, what you say; hopefully, by the way, that's the case. And so self-presentation is as strong on these sites as in the analog world. And it does capture very profound psychological traits that make us who we are.
Tell us about what people post on social media and about the feedback loops. In essence, we are rewarded for sharing as much information as we can on these sites.
And the reward isn't just coming from the positive reinforcements that the algorithms
introduce, but the reinforcement that they encourage from other people. So if I share a post on what my cat had
for breakfast and I get 15 likes, then I will share another post that is equally irrelevant
to most of the world, but seems very relevant to me when the likes are coming. These sites are
optimized to provide people with positive feedback. It's very rare
that you share something and somebody says you're such a loser or this is so boring. I mean,
they'll be deemed antisocial or at least rude if they do that. But I think there are stark
differences in the degree to which we are incentivized to provide people with even fake positive feedback
in the digital world versus the analog or real world.
So in the real world, let's say if you go around the office telling everybody how great
you are and what you did on the weekend and that you checked into the business class lounge
and you talk over them and you don't listen to them and you seem very unjustifiably
pleased with yourself and unaware of your limitations, plus what your cat had for breakfast,
you'll be deemed pretty annoying.
But in the digital world, this will probably turn you into an influencer and your social
rank will go up.
And with that, you can see these algorithms functioning as self-enhancing tools in that
they are designed to boost our confidence, even when they don't necessarily boost our
expertise, our competence, or our social skills, because ultimately, a key part of having social
skills is to know what not to say, what not to share, and to refrain from engaging in
inappropriate self-disclosure
at times. Regardless of how artificial intelligence unfolds, it seems like it will
probably continue to take more effort out of our choices. A future in which we ask Google or Bing
what we should study, where we should work, or even who we should marry,
doesn't seem as far-fetched as it might have seemed. What's the impact of that?
So I think we should still feel hopeful that the impact is to make us feel a little bit ashamed,
guilty, or embarrassed, so that we actually do something to reverse it. Humans are very adaptable. The trouble is that once we develop certain habits,
it takes a lot of effort and time to break them and replace them with better habits.
But every new habit, including the thousands of hours we spend online or on social media, didn't exist at some point and also developed over time. So the dark side, the bleak possibility, on the other hand, is that we basically automate
ourselves, that we become irrelevant because actually we are happy or content with being
so predictable that we can automate even things that should require certain social skills
and certain thinking. So if in the future, instead of my sending you an email, ChatGPT sends you an email on my behalf, and then you use ChatGPT to respond, and you apply that to job interviews, first dates, and communication, that might seem very efficient. But the question is, what will humans do? And why should anybody pay us for what we do if, in fact, the productivity is being created by machines?
The promise of technological innovation so far, from farming equipment to factories, has been to standardize, automate, and outsource tasks to machines so that we can have more free time and also, potentially, be able to spend more time on higher-level intellectual or creative activities.
If AI frees us from boring or difficult activities or decision making,
what do you think we will do with this extra time?
Hopefully it's not wasted on TikTok or other social media platforms but, with the necessary level of self-control and motivation, actually invested in something that is creative. I think this really is a critical point. In fact, for a lot of people, it is a puzzle that advancements in technology during the last 20 years have not quite produced the improvements in productivity that they should have, or that we would have expected them to produce. And of course, it is true that you can't just invest in technology. You have to invest in people's skills so that they can leverage it, and in culture change and the human factor. But the more obvious reason is that productivity improved between 2000 and 2008,
so in the first wave of the digital revolution.
But with the rise of social media, it basically stagnated because all of the time that was
saved due to some productivity tools was then not reinvested in creative or intellectual
tasks, but actually wasted on other digital distractions that are also part of the digital
revolution. I think the reality is that it's hard to talk about even AI generically, because AI is simultaneously a productivity and an unproductivity tool. When people secretly use tools like generative AI without telling their bosses, it's because they know they can save 30 or 40% of their time on the same tasks, but they have no desire to reinvest that time in reskilling, upskilling, or learning something new.
But if we don't do that, that is precisely when we are vulnerable to automation, or to being replaced by technology. So that's, I think, the fundamental paradox that we face today.
Where previous tech advances, such as the industrial revolution, were about mechanizing manual labor, replacing humans with machines, the current AI revolution is about mechanizing intellectual work, replacing human thinking and learning with machine alternatives.
What do you think the impact of that will be? So I think first on a philosophical level,
this is a huge change. Ever since René Descartes came along and, after his profound meditations looking for what's unique, unquestionable, inherent, and specific to humans, said, "I think, therefore I am," what makes you human has been the fact that you can think. And now we're living in an age where we are almost invited, allowed, and tempted to be human while we outsource most, if not all, of our thinking to machines.
So it's a fundamental restructuring of what it means to be human. And it's a fundamental
re-questioning of our essence. On a more practical level, I think there are clear
implications for expertise.
I think the AI age is redefining the meaning of expertise, which is no longer about knowing a
little bit about everything or having the answers to many questions, but rather asking the right
questions. Even being a machine learning scientist today isn't that hot and probably won't predict a very
brilliant future career compared to being a smart prompt engineer. Because if AI has the answers,
we need to be good at asking the right questions. Equally, expertise is being able to call out AI
on its BS, or hallucinations, as the euphemism goes. And that requires having deep expertise,
at least in one or two areas, to the point that you can demonstrate to a buyer, a client,
a colleague, that you are actually better than the combined, crowdsourced, aggregated wisdom of the crowds, which usually isn't the smartest thing anyway, right?
Oscar Wilde famously said, everything popular is wrong. This is basically giving us a mass-produced, pre-processed, manufactured consensus on what the correct answer should be in a certain area, but it rarely gives you the most brilliant answer. So if you want to future-proof yourself and understand whether you really know something, you had better check what ChatGPT and generative AI know, to prove your value. AI is pushing us to add value beyond what most people think about a certain subject. And therefore, the worst thing we can do is be lazy.
Neuroscientists all agree that free will is an illusion, as we make most of our
decisions influenced by tons of factors other than logic, from the amount of sunlight to room
temperature, sleep quality, caffeine consumption, and many other factors. How do you see a world in
which we spend most of our lives online and where everything we see is targeted and optimized for us individually?
First of all, I think we're living in that world already.
I don't think it is a kind of Matrix-like or a future Black Mirror-like dystopia.
I think this is where we are. But I also think that if we get our act together
and we pay attention, we still have the power to resist some of these nudges. AI engages in this
trick whereby if it only shows us two or three things and doesn't show us another 100 things
that exist, it will actually be able to predict our choices much better, because with only three options you have a baseline of 33%, and with a little bit of data you can go to 50 or 100%.
You log on to Netflix and there are probably 10 movies that come up top.
If you're so lazy that you can't be bothered to think or search for what you want to watch
and you apply that same logic to your choices on books,
music, restaurants, hotels, et cetera, then we are at the mercy of the AI. And free will isn't even an illusion anymore. I mean, it's just completely gone. And that's why I think the main concern of these last 10 years is not so much that AI became more human-like, but that humans became more like machines. We became the predictable robots that have to pass
this Turing test by proving to AI that we are human by clicking on the traffic lights or the
wheels in a cybersecurity test, which we often fail because that requires more thinking than
we're used to doing.
Tomas, can you expand on, to quote you, the risk that we become passengers or spectators in this world?
Well, think about the dramatic and exaggerated news headlines about this being a doomsday scenario, that we're becoming completely irrelevant and AI will rule our world. That's probably an exaggeration, but underlying these claims is the assumption that we are no longer in the driving seat. And of course, some people are running, training, teaching, or designing the AI. But also, AI is only good and effective and lucrative or profitable if we allow it to guide our decisions. There is this thing called the analog world out there, which is the real world. And if only we spent a little bit more time in it, less exposed to and less influenced by all of these recommendation engines and data-driven nudges, we would already feel freer. But that requires more thinking. It requires reacquainting ourselves with serendipity,
which seems almost totally forgotten today,
but ruled most of our lives and many of our choices and big life outcomes for most of our
human history. We shouldn't assume that a world optimized just for efficiency and laziness and
lack of thinking is the best case scenario, because that, again,
is a very low bar if that's where we want to end up.
Tomas, what are the three takeaways you'd like to leave the audience with today?
Number one, we haven't lost the battle, as many people suggest. Number two, it is a mistake to actually think about a battle in the first place,
as in us versus machines or human versus machines,
because the correct way to look at AI is as a tool that we invented
to actually augment our humanity and our experiences. But this isn't a given, and there's no instructional manual that actually tells us how to do this. We have to figure it out.
And number three, whatever you do, don't become a robot. Remain and be as human as you can,
because that's going to be more and more in demand as more and more of your peers keep on behaving like machines or robots.
Thank you, Tomas. This has been great.
Thank you. It's been a pleasure.
Today's guest was Tomas Chamorro-Premuzic, the author of I, Human.
If you enjoyed today's episode, you might enjoy our other episode with Tomas Chamorro-Premuzic.
That's episode 149: Why do so many incompetent men become leaders?
If you're interested in AI, we have three recent episodes on AI, including AI that's more powerful than humans is coming, how will we be able to control it, which is episode 187; what does the future of work look like, which is episode 116; and our third episode, War in the Age of AI, a chilling, mind-blowing talk with a former Pentagon defense expert, which is episode 151. If you'd like to find our featured
guests, as well as our episodes grouped by category, you can find them on our
3takeaways.com website, where you can also sign up for our weekly newsletter.
If you'd like, you can also follow us on Apple Podcasts, Spotify, or wherever you listen.
And if you're enjoying the podcast, and I really hope you are, please review us on Apple Podcasts
or Spotify. It really helps get the word out. You can also follow us on LinkedIn, X, Instagram,
and Facebook. I'm Lynn Thoman, and this is Three Takeaways. Thanks for listening.