3 Takeaways - Your Brain, For Sale: The Hidden Ways AI Can Manipulate You with Cass Sunstein (#273)
Episode Date: October 28, 2025
AI doesn't just predict our behavior — it can shape it. Cass Sunstein, Harvard professor and co-author of Nudge, reveals how artificial intelligence uses classic tools of manipulation — from scarcity and social proof to fear and pleasure — to steer what we buy, believe, and even feel. Its influence is so seamless, we may not even notice it. The battle for the future isn't for our data — it's for our minds. In a world this personalized, how do we keep control of our own minds?
Transcript
AI can learn our tastes, our fears, our biases, and use that knowledge to steer what we buy,
what we believe, even how we feel.
Sometimes that's helpful, but sometimes it's dangerous.
So where's the line?
And how do we protect free will in a world where we may be manipulated without even realizing it?
Hi, everyone. I'm Lynn Toman, and this is Three Takeaways. On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world, and maybe even ourselves, a little better.
Today, I'm excited to be with Cass Sunstein. Cass is one of the world's most influential legal
scholars, as well as a leading thinker on behavioral science and how policies and laws shape human
behavior. He served in the Obama administration as administrator of the White House Office of
Information and Regulatory Affairs, and he's advised governments around the world on regulation,
law, and behavioral science. Cass has written dozens of books, including Nudge, co-authored with
Nobel laureate Richard Thaler, which transformed how we think about decision-making and public
policy. His latest book, Manipulation, explores how our choices can be quietly shaped increasingly
by artificial intelligence that learns more about us than we realize. Cass, welcome back to Three
Takeaways. It is always a pleasure to be with you. Thank you. A great pleasure to be with you.
In your book, Manipulation, you write that dystopias of the future include two kinds of human slavery.
One built on fear of pain, the other on the appeal of pleasure.
Let's start with fear.
How can AI undermine free will through fear?
AI can make you really scared that things are going to be terrible unless you hand over your money or your time.
So AI might make you think that your economic situation is dire and you need something,
or it might make you think that your health is at risk and you need to change your behavior.
It might make you think that things are unsafe.
Now, if the situation is dire or unsafe, it's kind of good to know that,
but AI can manipulate you into thinking things are worse than they actually are.
And what could a dystopia of pleasure look like?
A dystopia of pleasure sounds a little like an oxymoron. So if we're delighted and smiling and everything's going great, that sounds pretty good. But if it's the case that people are being diverted, let's say, from things that are meaningful to a world of videos that are producing smiles or smirks, it may be that the meaning in your life has atrophied and what you're doing now is staring at things in a way that is making your life kind of useless and a little purposeless.
AI can now learn an enormous amount about us, our tastes, our habits, even our biases,
and soon it will have even more knowledge.
What additional knowledge will AI have and how does that knowledge open the door
to even greater and more subtle manipulation?
We need an account of what manipulation is.
So let's say manipulation involves getting people through forms of influence to make
choices that don't reflect their own capacity for deliberative choice. So if I decide I want to get
a new book on manipulation, I hope I'm not being manipulated. If I am influenced to think that if I
don't get that new book, then my life is going to fall in the toilet, then I'm probably being
manipulated. So what AI and algorithms right now have a unique capacity to do, unique in human
history and getting more extreme, is to know what people's weaknesses are. So it may know that
certain people lack information, let's say about what's an economically sensible choice, or certain
people are very focused on the short term and they can be manipulated to give up a lot of money
like tomorrow in return for a good that produces a little bit of pleasure today. Or
AI may know that certain people are unrealistically optimistic. They think that plans are
going to go just beautifully even when they won't. And they can lead you to buy a product that's
kind of going to break on day three. And this ability to get access to people's weaknesses,
that is kind of a terrain for manipulation through AI or through algorithms. And AI will basically
have access through our phones to all of our conversations, all of our contacts, everything we look up
on the internet, everything we read, as well as, increasingly, biometric data: our heart rate,
how long we look at something. What will all of that additional data enable AI to do?
Well, we should note that there's a good side of this. So if AI knows that what you're really
interested in are books about behavioral economics and Labrador Retrievers, and you're not really
interested in books about particle physics or about chihuahuas, then you can get information
that is relevant to your interests, or maybe offerings that are connected with your own life.
So there's a good side to it. If AI knows that certain people, let's say, have self-control
problems, that they are addictive personalities or that they are reckless purchasers, then AI can
really get resources from them and maybe put their economic situation into a very bad state.
If AI knows that certain people are very parsimonious and they don't really want to spend
much money and they are very careful, AI might know that people like that are vulnerable only
to this kind of appeal, and then it can work on you. If you are being subjected to some form of trickery
that gets your weaknesses exploited, and you're not making a reflective choice,
then we're in the domain of the manipulative.
Whether this is something that we want regulation for depends a lot on how markets are working
themselves out and how both companies and people who use products are reacting to the relevant
risks. About 10 years ago, Facebook, which you talk about in your book, ran an experiment to see if it could
influence users' emotions. What did the company do and what did it find? It found that emotions are
not only contagious, which we know. So if you're surrounded by grumpy people, the chance that you
will grow grumpy increases. If you're surrounded by happy, fun people, you're probably going to
be happier and have more fun. Facebook can also induce positive or negative emotions through posts.
And it would be regrettable if some people's, you know, it's unfortunately true, some people's
principal social relationships are online. Even if your principal social relations aren't online,
you can be rendered, Facebook found, happier or sadder just by virtue of what Facebook is showing you.
And since Facebook has a capacity to put happier or sadder posts on your news feed, say, it can induce emotional states.
And Facebook got a lot of pushback for that. It was desirable that there was that pushback.
Facebook, I think, wasn't doing anything malevolent there. It was just trying to learn.
But the idea that a company can have some authority
over people's emotional states, that is troubling with a capital T.
You asked AI to draft a step-by-step manipulative guide
to push someone toward buying an expensive car.
The results to me were scary
because, as you note, the same strategies could be used to sell almost anything
or even recruit someone to a radical cause.
Let's walk through some of these tactics,
starting with the anchoring effect.
What is it?
How does it work?
And can you give an example?
Well, I'll tell you, you know, and your listeners that if you'd like to buy my book,
I have copies that you can get for $45.
And because, you know, we have worked together before and I love your program
and love your listeners, I'll sell it to you for $39.95.
See what I did there?
I just anchored you on the $45. It doesn't cost $45. It doesn't cost $39.95. But I started with
$45, and that anchored people on thinking, okay, it's a $45 book. $39.95 sounds pretty good.
So an anchor is an initial number from which people adjust. Real estate brokers sometimes do this.
Sometimes they're very self-conscious about it. So they'll say, if there's a house, it's on sale for $400,000.
And let's just suppose it's in an area where, for that particular house, the real estate
seller knows it's going to go for significantly less. But starting with that initial number
inflates people's willingness to pay. So anchors are super powerful. They work in negotiations.
They work in divorce settlements. They are the coin of the realm. And AI could completely anchor people.
What refrigerator are you going to get? There are refrigerators available in a store near you.
And they cost X, but there's a discount. And let's just stipulate that AI
is inflating the cost and the initial starting price, and that's a form of manipulation.
Another manipulation strategy is the scarcity principle. Can you talk about that?
I don't know if you saw, but with my manipulation book, I don't know whether it's, you know,
that I've just gotten lucky with the demand or something else, but the availability is extremely
restricted. And I'm pleased to say what you probably know, Lynn, which is that there are copies available
on Amazon, but I'm not sure they're going to be available tomorrow. I'm hoping the publisher's
going to be speedy in republishing, but you never know with paper shortages. So what I just did was
scarcity. And for me, if I learned that some food that my dogs really like is hard to get,
I'm probably going to go to the store.
How about social proof?
What is it and why is it so powerful?
Well, there's a book that recently came out called Bounded Rationality.
I'm privileged to be second author.
The first author is an economist.
It came out about a couple of years ago; it's a long book, a pretty technical book.
Okay, I'll play it straight.
I won't do any foolishness here.
One thing we did was we had
people who were really good at behavioral economics say that they like the book. And we didn't
do any tricks to get them to do it. We just said, might you? So we have some really excellent
people saying they like the book; that's social proof. So if you are, let's say, the sibling or the
parent of a young tennis player, I'm the parent of a young tennis player, my young tennis
playing son is going to be applying to colleges pretty soon. If Roger Federer or Rafa Nadal would
write a little note saying, I've rarely seen such a promising young tennis player as my son,
that would be social proof. That would also be a miracle. He's good, but we don't know those guys.
How about authority bias? What is it? And can you give an example?
If you have an authority who is said to like something
a lot, or to think that you should do something, it would be rational to be influenced by
that, but sometimes the influence outweighs what it is rational to do. So sometimes the
judgment of an authority is overweighted. How does reciprocity drive behavior?
Reciprocity often involves people saying, I'll do you a favor, and then people feel obliged.
Sellers are often very smart at that.
So they say, this is what I'm going to do for you.
Maybe I'll tell you a little story, a great story, I think, which is: when I bought a car a few years ago, it was on a Saturday.
And as one does, I was negotiating for the car.
And the price offered was higher than I had hoped.
And I said, can you do a little better?
And he went back to talk to his boss and then came back.
And he said to me, Cass. Of course, they're very good at using your first name.
He said, Cass, I talked to my boss. It's Saturday. We're not going to sell any cars. Saturday is a very
tough day, so we're going to give you a great deal. Here you go. And I thought, great. He's doing
something nice for me, a big deal, and I'll do something nice for him, say yes. So there's a little
reciprocity there. And then an hour later, when I was driving the car off, I said, thank you so much.
I'm glad to be able to do this on a day when you don't sell any cars. And he forgot what he said to me.
And he looked at me, he said, what are you talking about? Saturday? That's the best day for car sales. This is our big day.
So he lied to me when he said, I'm going to give you a good deal because it's a Saturday. He used reciprocity. And he thought, as he did a deal for me, then I would say yes to him. And he forgot what he had said, which is, we don't sell any cars on Saturday. It was a good line. It made me think I was getting a good deal. But then when I drove off, he said what was truthful, which is that Saturday is our big sales day.
So he was smart. I was manipulated.
Cass, what's the principle of commitment and consistency?
So if you commit, let's say, to a friend who wants you to vote, saying, yes, I'm going to vote,
then the likelihood that you're going to vote jumps.
And AI can certainly invite a commitment.
And then you'll act consistently with your commitment.
So you get people to commit to do something like, I'm going to drink no diet soda for the next week.
I actually did that a few years ago, and I haven't had any diet soda in the years since, because of the initial commitment that I wasn't going to drink it for the next week.
That's often a very effective behavioral strategy to induce a commitment.
How about loss aversion?
How does that influence decision making?
If people are told, if you use energy conservation strategies, you'll save $200 in the next 12 months,
the likelihood they will use energy conservation increases, but not as much as if people are told
if you don't use energy conservation strategies, you'll lose $200 over the next 12 months.
They're identical sentences in terms of their meaning.
One is framed as a loss.
The other is framed as a gain.
People really don't like losses. People tend to dislike a loss twice as much as they like a
corresponding gain. Sometimes it's just semantic. It's just a redescription of the phenomenon.
If something's described as a loss, on average, people are going to be concerned and take action to
prevent it. And finally, what's the decoy effect? Let's suppose you have two choices at a restaurant,
an expensive mid-sized piece of cake and an inexpensive small piece of cake.
Let's suppose that people buy on average the less expensive small piece of cake.
And let's suppose the restaurant thinks we want to make a little more money.
We want people buying the mid-size where we get more profit.
If you have a decoy, that is, a big piece of cake, like a really big piece of cake
that no one's going to want, super expensive, and it's going to do terrible things for your waistline,
and if you introduce that decoy, people flip from the small to the midsize.
So the introduction of a decoy can often flip people who would choose A over B.
Once they see a C, they'll choose B over A.
Cass, what happens when AI can use all these strategies against us?
Well, if agile companies are using AI cleverly, we can be manipulated to lose money and time.
As a legal scholar, what consumer protections
do you believe we should have against manipulation in this new AI era?
The rallying cry is that we need a right not to be manipulated.
We have a right not to be deceived.
We have a right not to be defrauded.
Right now, we need a right not to be manipulated.
Now, specifying that right is a work in progress.
Probably it's best to work from egregious cases of manipulation.
And the most extreme ones are when people are subject to hidden terms or to cognitive tricks.
So they are parting with something that matters to them, their money, their time, without really consenting.
And that means we need to specify what that looks like.
Calling for a right not to be manipulated isn't standard, but we're kind of getting there.
And the U.S. government over recent years has verged on that, saying, for example, that if there's a fee that you haven't gotten clarity on, sometimes described as a junk fee, you don't have to pay it.
It has to be something that you have clarity you're paying.
You mentioned protection against cognitive tricks. Can you give some examples?
One idea would be to say that you are going to automatically pay
monthly fees if you agree to pay a fee now. If the monthly fee is automatic and not really in your
face, you might click on it, even though the consequence for you is one you would not welcome and would
not agree to if you had clarity on it. So what is being done here is using limited attention against
people to default them into an economic arrangement that they would not have accepted if they had
clarity about it.
Here's another one, where you agree to an economic relationship where entry into the relationship
takes just one click, and exit from it means you have to go to a faraway place, stand in a long line,
talk to seven people, then make a phone call, then do 20
push-ups, and then recite the last names of your great-great-great-great-great-grandparents.
That's a mild exaggeration of easy-in, extremely hard extrication, and that works on the fact
that people have an aversion to navigating, let's call it sludge, which is administrative
burdens, and on the fact that people discount the future. So the future horror of extrication isn't something
that people attend to a whole lot. And our government at times has said things should be as easy
to extricate yourself from as they are to enter into. Now, there are things for which that
wouldn't be sensible, but for economic transactions with, let's say, magazines or banks,
that's a pretty good start. And Cass, I should say thank you for your work in government to
reduce sludge. Thank you for that. Before I ask for the three takeaways
on manipulation that you would like to leave the audience with today,
is there anything else you'd like to mention that you have not already talked about?
I'd emphasize that one form of manipulation is sometimes described as a product trap
where people enter into a relationship, let's say, with a company,
because they think other people are doing it too, and then they are fearful of missing out
and they'll stay in, not because they like it,
but because they think they'll be excluded from something.
For young people and not so young people,
social media platforms are often a product trap
where they're on TikTok or Instagram, like, a lot,
because they think other people are too.
And that's a form of manipulation by Instagram and TikTok.
And there's a lot of work being done now
to try to find ways to spring the trap
by enabling people to work collectively to say, we're all going to be off, at least we're all
going to be off between 9 p.m. and 8 a.m. And that is a new frontier of manipulation.
Cass, what are the three takeaways you'd like to leave the audience with today?
Takeaway number one is that manipulation is bad because it is an insult to people's autonomy or freedom,
because it is like deception and lying.
It prevents people from making reflective choices.
The second takeaway is about what manipulation consists of.
It should be defined as a form of trickery
that compromises and fails to respect
people's capacity for deliberative choice.
Now, if we understand it that way,
then we can spot manipulation in the family,
at work, and online. If you think that that form of trickery is always bad, you probably
lack a sense of humor. It's sometimes a very fun thing, but in egregious cases where it's
harmful and it takes things from people without their consent, then it's bad.
The last of the three takeaways is that it's time today to start to create a right not to be
manipulated. Cass, thank you. It is always a pleasure to be with you. I've very much enjoyed your
book, Manipulation. Thank you. Great pleasure for me. If you're enjoying the podcast, and I really hope
you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts. It really helps
get the word out. If you're interested, you can also sign up for the Three Takeaways newsletter at
ThreeTakeaways.com, where you can also listen to previous episodes. You can also follow us on
LinkedIn, X, Instagram, and Facebook. I'm Lynn Toman, and this is Three Takeaways. Thanks for listening.
